gr-qc/0508040
\section{\label{sec:Int}Introduction} General relativity has many successful predictions, some of which have passed experimental tests with high precision, for example the precession of Mercury's perihelion and the gravitational redshift. Its most spectacular and promising prediction, however, is the existence of black holes. For this reason the theoretical area of black hole physics became an attractive and productive field of research in the past century. The classical and quantum properties of black holes are well understood within the framework of general relativity. From a classical point of view, nothing can escape from a black hole; from a quantum point of view, however, black holes are not completely black: they emit radiation with a temperature given by $\hbar c^{3}/8\pi GMk_{B}$ \cite{Hawking:1974sw}. This radiation is essentially thermal, and black holes slowly evaporate by emitting quanta. Another property that plays an important role in black hole physics is the set of quasinormal modes (QNMs). They determine the late-time evolution of fields in the exterior of the black hole, and numerical simulations of stellar collapse and black hole collisions have shown that in the final stage of such processes the quasinormal modes eventually dominate the black hole response to any kind of perturbation. QNMs have received great attention in the last few years, since it is believed that these modes shed light on fundamental problems in loop quantum gravity. In particular, following Hod's proposal \cite{Hod:1998vk}, modes with high damping have received great attention, especially for their relevance to the quantization of the black hole area \cite{Setare:2004uu,Choudhury:2003wd, Padmanabhan:2003fx} and the possibility of fixing the Immirzi parameter in loop quantum gravity \cite{Dreyer:2002vy, Kunstatter:2002pj, Natario:2004jd}. Obviously, direct experimental tests are impossible in terrestrial laboratories.
This motivates the interest in analog models that mimic the properties of black hole physics. The field of analog gravity allows, in principle, most processes of black hole physics to be studied in a laboratory. In particular, the use of supersonic acoustic flows as an analogy to gravitating systems was first proposed by Unruh \cite{Unruh:81} and, with the works of Visser \cite{Barcelo:2005fc, Novello:2002qg, Visser:1998qn, Visser:1997ux, Visser:1993ub}, has received rapidly growing attention. The basis of the analogy between gravitational black holes and sonic black holes comes from considering the propagation of acoustic disturbances on a barotropic, inviscid, inhomogeneous and (at least locally) irrotational fluid flow. It is well known that the equation of motion for such an acoustic disturbance (described by its velocity potential) is identical to the Klein-Gordon equation for a massless scalar field minimally coupled to gravity in a curved spacetime \cite{Visser:1998qn, Visser:1997ux, Visser:1993ub}. QNMs for acoustic black holes in 2+1 dimensions were computed in Refs. \cite{Cardoso:2005ij, Cardoso:2004fi, Lepe:2004kv,Berti:2004ju}, and for analogous black holes in Bose-Einstein condensates in Ref. \cite{Nakano:2004ha}. In this work we analytically compute the QNMs of acoustic disturbances in Unruh's 3+1 dimensional sonic black hole (Laval nozzle fluid flow), and we also obtain their large damping limit. The organization of the paper is as follows: in Sec. II we specify Unruh's sonic black hole; in Sec. III we determine the QNMs and their large damping limit; finally, we conclude in Sec. IV. \section{\label{sec:Sonicbh}Unruh's Sonic Black Hole} The idea of using supersonic acoustic flows as analog systems to mimic some properties of black hole physics was first proposed by Unruh \cite{Unruh:81}.
Essentially, he showed the possibility of using sonic flows to explore properties such as the Hawking temperature near the sonic horizon and, in principle, to develop an experimental setting to study the fundamental problem of evaporation of real general-relativistic black holes. As mentioned in the introduction, the basis of the analogy between gravitational black holes and sonic black holes comes from considering the propagation of acoustic disturbances on a barotropic, inviscid, inhomogeneous and (at least locally) irrotational fluid flow. It is well known that the equation of motion for this acoustic disturbance (described by its velocity potential $\psi $) is identical to the Klein-Gordon equation for a massless scalar field minimally coupled to gravity in a curved space \cite{Visser:1998qn, Visser:1997ux, Visser:1993ub}. In Unruh's work, the acoustic geometry is described by the following sonic line element \begin{equation} ds^{2}=\frac{\rho _{0}}{\tilde{c}}\left( -\left( \tilde{c}^{2}-v_{0}^{r2}\right) d\tau ^{2}+\frac{\tilde{c}\,dr^{2}}{\tilde{c}^{2}-v_{0}^{r2}}+r^{2}d\Omega ^{2}\right) , \label{eq1} \end{equation} where $\rho _{0}$ is the density of the fluid, $\tilde{c}$ is the velocity of sound in the fluid (for simplicity we will assume these quantities constant) and $v_{0}^{r}$ represents the radial component of the flow velocity. If we now assume that at some value $r=r_{+}$ the background fluid smoothly exceeds the velocity of sound, \begin{equation} v_{0}^{r}=-\tilde{c}+\tilde{a}(r-r_{+})+\mathcal{O}\left((r-r_{+})^{2}\right), \label{eq2} \end{equation} the above metric assumes precisely the form of the Schwarzschild metric near the horizon.
In this limit our metric (\ref{eq1}) reads \begin{equation} ds^{2}=\frac{\rho _{0}}{\tilde{c}}\left( -2\tilde{a}\tilde{c}(r-r_{+})d\tau ^{2}+\frac{\tilde{c}\,dr^{2}}{2\tilde{a}\tilde{c}(r-r_{+})}+r^{2}d\Omega ^{2}\right) , \label{eq3} \end{equation} where $\tilde{a}$ is a parameter associated with the velocity of the fluid, defined as $(\nabla\cdot\overrightarrow{v})|_{r=r_{+}}$ \cite{Kim:2004sf}. Note that this geometry was studied in Ref. \cite{Kim:2004sf}, where the author analyzed the low energy dynamics and obtained the greybody factors for the sonic horizon from the absorption and reflection coefficients. In the quantum version of this system we may expect that the acoustic black hole emits ``acoustic Hawking radiation''. This effect coming from the event horizon is a purely kinematical effect that occurs in any Lorentzian geometry, independent of its dynamical content \cite{Visser:1997ux}. It is well known that the acoustic metric does not satisfy the Einstein equations, since the background fluid motion is governed by the continuity and Euler equations. As a consequence, one might expect the thermodynamic description of the acoustic black hole to be ill defined. However, this powerful analogy between black hole physics and acoustic geometry allows us to extend the study of many physical quantities associated to black holes, such as the quasinormal modes, which we consider in the next section. \section{\label{sec:Sonicbh1}Quasinormal modes of Unruh's Sonic Black Hole} The basis of the analogy between Einstein black holes and sonic black holes comes from considering the propagation of acoustic disturbances on a barotropic, inviscid, inhomogeneous and irrotational fluid flow.
It is well known that the equation of motion for these acoustic disturbances (described by the velocity potential $\Phi $) is identical to the Klein-Gordon (KG) equation for a massless scalar field minimally coupled to gravity in a curved spacetime \cite{Visser:1998qn, Visser:1997ux, Visser:1993ub}, \begin{equation} \frac{1}{\sqrt{g}}\partial _{\mu }\left( \sqrt{g}g^{\mu \nu }\partial _{\nu }\Phi \right) =0. \label{kg1} \end{equation} In order to compute the QNMs we apply the standard procedure described in Refs. \cite{Kim:2004sf,Birmingham:2001hc,Fernando:2003ai}, and begin by rewriting the metric (\ref{eq1}) in the form \begin{equation} ds^{2}=-f(r)d\tau ^{2}+\frac{\tilde{c}\,dr^{2}}{f(r)}+r^{2}d\Omega ^{2}, \label{eq4} \end{equation} where $f(r)=2\tilde{a}\tilde{c}(r-r_{+})$. We need to solve the Klein-Gordon equation (\ref{kg1}) in the curved space described by the metric (\ref{eq4}) and, by virtue of the symmetries of the metric, we use the following ansatz for the scalar field \[ \Phi =e^{-i\omega t}Y_{l}^{m}(\theta ,\varphi )R(r). \] With this ansatz the KG equation separates as \begin{eqnarray} \frac{1}{\sin \theta }\left( \partial _{\theta }\left( \sin \theta \,\partial _{\theta }Y_{l}^{m}(\theta ,\varphi )\right) +\frac{1}{\sin \theta }\partial _{\varphi }^{2}Y_{l}^{m}(\theta ,\varphi )\right) &=&-l(l+1)Y_{l}^{m}(\theta ,\varphi ), \label{eq5} \\ \frac{1}{\tilde{c}^{2}r^{2}}\frac{d}{dr}\left( r^{2}f(r)\frac{d}{dr}R(r)\right) +\left( \frac{\omega ^{2}}{f(r)}-\frac{l(l+1)}{r^{2}}\right) R(r) &=&0.
\nonumber \end{eqnarray} Now we consider the change of variables $z=1-r_{+}/r$, which transforms the radial equation into \begin{equation} z(1-z)\frac{d^{2}R}{dz^{2}}+\frac{dR}{dz}+P(z)R=0, \label{eq6} \end{equation} where \begin{equation} P(z)=\frac{B}{z(1-z)}-A, \label{p} \end{equation} with \begin{eqnarray} A &=&\frac{l(l+1)\tilde{c}}{2\tilde{a}r_{+}}, \label{eq7} \\ B &=&\left( \frac{\omega }{2\tilde{a}}\right) ^{2}. \nonumber \end{eqnarray} Note that in the new coordinate system $z=0$ corresponds to the horizon and $z=1$ corresponds to infinity. If we now write \begin{equation} R=z^{\alpha }(1-z)^{\beta }F(z), \label{eq8} \end{equation} the radial equation takes the standard hypergeometric form \cite{abramowitz} \begin{equation} z(1-z)\frac{d^{2}F}{dz^{2}}+(c-(1+a+b)z)\frac{dF}{dz}-abF=0, \label{eq9} \end{equation} where \begin{eqnarray*} c &=&2\alpha +1, \\ ab &=&A+(\alpha +\beta )(\alpha +\beta -1), \\ a+b &=&2(\alpha +\beta )-1, \end{eqnarray*} and \begin{eqnarray} \alpha ^{2} &=&-B, \label{eq10} \\ \beta &=&1\pm \sqrt{1-B}. \nonumber \end{eqnarray} Without loss of generality we take $\alpha =-i\sqrt{B}$ and $\beta =1-\sqrt{1-B}$. It is well known that the hypergeometric equation has three regular singular points, at $z=0$, $z=1$ and $z=\infty $, and that it has two independent solutions in the neighborhood of each point \cite{abramowitz}. The solutions of the radial equation read \begin{equation} R(z)=C_{1}z^{\alpha }(1-z)^{\beta }F(a,b,c,z)+C_{2}z^{-\alpha }(1-z)^{\beta }F(a-c+1,b-c+1,2-c,z). \label{eq11} \end{equation} In the gravitational case, quasinormal modes of a classical scalar perturbation of a black hole are defined as the solutions of the Klein-Gordon equation characterized by purely ingoing waves at the horizon, $\Phi \sim e^{-i\omega (t+r)}$, since at least classically no outgoing flux is allowed at the horizon.
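As a cross-check, the reduction of the radial equation to the form (\ref{eq6}) under $z=1-r_{+}/r$ can be verified symbolically. The following sketch is our own (the names \texttt{rp}, \texttt{at}, \texttt{ct} stand for $r_{+}$, $\tilde a$, $\tilde c$), and it assumes the standard $-l(l+1)/r^{2}$ sign for the centrifugal term:

```python
# Symbolic cross-check (our sketch, not part of the paper) that z = 1 - r_+/r
# maps the radial equation onto z(1-z)R'' + R' + [B/(z(1-z)) - A]R = 0
# with A = l(l+1)c/(2 a r_+) and B = (omega/(2a))^2.
import sympy as sp

z, rp, at, ct, w, l = sp.symbols('z r_p a_t c_t omega ell', positive=True)
R = sp.Function('R')

r = rp / (1 - z)                 # inverse of z = 1 - r_p/r
f = 2 * at * ct * (r - rp)       # near-horizon f(r) = 2*a*c*(r - r_+)
dzdr = (1 - z)**2 / rp           # dz/dr = r_p / r**2

def d_dr(expr):
    """Radial derivative of a function written in the variable z."""
    return sp.diff(expr, z) * dzdr

# radial part of the Klein-Gordon equation in the metric (eq4)
radial = d_dr(r**2 * f * d_dr(R(z))) / (ct**2 * r**2) \
         + (w**2 / f - l * (l + 1) / r**2) * R(z)

# normalize so that the second-derivative coefficient becomes z(1-z)
radial = radial * ct * rp / (2 * at * (1 - z)**2)

A = l * (l + 1) * ct / (2 * at * rp)
B = (w / (2 * at))**2
target = (z * (1 - z) * sp.diff(R(z), z, 2) + sp.diff(R(z), z)
          + (B / (z * (1 - z)) - A) * R(z))

assert sp.simplify(sp.expand(radial - target)) == 0
print("radial equation reduces to the stated hypergeometric-type form")
```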
In addition, one has to impose boundary conditions on the solutions in the asymptotic region (infinity). Here it is crucial to use the asymptotic geometry of the spacetime under study. In the case of an asymptotically flat spacetime, the condition imposed on the wave functions is that of purely outgoing waves at infinity, $\Phi \sim e^{-i\omega (t-r)}$ \cite{Horowitz:1999jd}. For non-asymptotically flat spacetimes (AdS spacetimes, for example), several boundary conditions have been discussed in the literature. In the 2+1 dimensional case of BTZ black holes, the quasinormal modes for scalar perturbations were found in Refs. \cite{Birmingham:2001hc,Cardoso:2001hn} by imposing a vanishing Dirichlet condition at infinity; in Ref. \cite{Birmingham:2001pj} these modes were computed with the condition of a vanishing energy-momentum flux density at infinity. In this paper we consider the vanishing flux condition. In the neighborhood of the horizon $(z=0)$, using the property $F(a,b,c,0)=1$, the radial solution is given by \begin{equation} R(z)=C_{1}\exp \left(-i\frac{\omega }{2\tilde{a}}\ln \left(1-\frac{r_{+}}{r}\right)\right)+C_{2}\exp \left(i\frac{\omega }{2\tilde{a}}\ln \left(1-\frac{r_{+}}{r}\right)\right), \label{eq12} \end{equation} and the boundary condition of purely ingoing waves at the horizon demands $C_{2}=0$. In order to implement the boundary condition at infinity $(z=1)$, we use the linear transformation formula $z\rightarrow 1-z$ for the hypergeometric function and obtain \begin{eqnarray} R &=&C_{1}z^{\alpha }(1-z)^{\beta }\frac{\Gamma (c)\Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}F(a,b,a+b-c+1,1-z)+ \label{eq13} \\ &&+C_{1}z^{\alpha }(1-z)^{c-a-b+\beta }\frac{\Gamma (c)\Gamma (a+b-c)}{\Gamma (a)\Gamma (b)}F(c-a,c-b,c-a-b+1,1-z). \nonumber \end{eqnarray} We then demand that the flux, given by \begin{equation} \mathcal{F}=\frac{\sqrt{g}}{2i}(R^{\ast }\partial _{\mu }R-R\partial _{\mu }R^{\ast }), \label{eq14} \end{equation} vanishes at infinity.
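The $z\rightarrow 1-z$ connection formula used in Eq. (\ref{eq13}) is Eq. 15.3.6 of Ref. \cite{abramowitz}; it can be spot-checked numerically, for instance with the following short script of ours (the parameter values are arbitrary, chosen so that $c-a-b$ is not an integer):

```python
# Numerical spot-check (ours) of the z -> 1-z linear transformation formula
# for the hypergeometric function (Abramowitz & Stegun 15.3.6).
import mpmath as mp

a, b, c, z = mp.mpf('0.3'), mp.mpf('1.7'), mp.mpf('0.9'), mp.mpf('0.4')
g = mp.gamma

lhs = mp.hyp2f1(a, b, c, z)
rhs = (g(c) * g(c - a - b) / (g(c - a) * g(c - b))
       * mp.hyp2f1(a, b, a + b - c + 1, 1 - z)
       + (1 - z)**(c - a - b) * g(c) * g(a + b - c) / (g(a) * g(b))
       * mp.hyp2f1(c - a, c - b, c - a - b + 1, 1 - z))

assert mp.almosteq(lhs, rhs)   # both sides agree to working precision
print(lhs, rhs)
```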
Since $B>0$, the asymptotic flux has a set of divergent terms, with the leading term of order $(1-z)^{1-2\beta }$. According to Eq. (\ref{eq13}), each of these terms is proportional to \begin{equation} \left\vert \frac{\Gamma (c)\Gamma (a+b-c)}{\Gamma (a)\Gamma (b)}\right\vert ^{2}, \label{eq15} \end{equation} and hence for a vanishing flux at $z=1$ the following restriction has to be imposed: \begin{equation} a=-n\text{ or }b=-n, \label{eq17} \end{equation} where $n=0,1,2,\ldots$. These conditions lead directly to an exact determination of the quasinormal modes, \begin{equation} \omega =-\frac{i}{2}\frac{(n-1)(n+3)\tilde{a}}{n+1}. \label{eq18} \end{equation} Note that the quasinormal modes are purely imaginary and independent of the radius of the horizon of the sonic black hole. Moreover, this spectrum reveals an instability of this kind of analog black hole under sonic perturbations: the purely imaginary frequency of the zero mode has the wrong (positive) sign, which means an exponentially growing mode. In the large damping limit (\textit{i.e.} the $n\rightarrow \infty $ limit), \[ \omega _{\infty }=-\frac{i}{2}(n+1)\tilde{a}. \] \section{\label{sec:Sonicbh4}Conclusions and Remarks} In this paper we have computed the exact values of the quasinormal modes of Unruh's sonic black hole; according to Ref. \cite{Unruh:81}, these QNMs should behave similarly to the QNMs of near-horizon Schwarzschild black holes. The QNMs are purely imaginary; this kind of QNM was reported in Refs. \cite{Lopez-Ortega:2005ep,Berti:2003ud,Fernando:2003ai}. Therefore, for black holes that show this kind of QNMs it is impossible to compute the area spectrum from the Kunstatter adiabatic invariant.
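The qualitative properties of the spectrum (\ref{eq18}) are easy to verify numerically; the following short script is our own illustration, with the control parameter set to $\tilde a = 1$:

```python
# Quick numerical illustration (ours) of the QNM spectrum
# omega_n = -(i/2)(n-1)(n+3) a/(n+1): the n = 0 mode grows exponentially
# (Im omega > 0), the n = 1 mode is marginal, higher overtones are damped,
# and for large n the frequencies approach -(i/2)(n+1) a.
a_tilde = 1.0

def omega(n):
    """QNM frequency for overtone number n, control parameter a_tilde."""
    return -0.5j * (n - 1) * (n + 3) * a_tilde / (n + 1)

assert omega(0) == 1.5j                               # unstable zero mode
assert omega(1) == 0.0                                # marginal mode
assert all(omega(n).imag < 0 for n in range(2, 50))   # damped overtones
n = 10**6                                             # large damping limit
assert abs(omega(n) - (-0.5j * (n + 1) * a_tilde)) < 1e-5
print([omega(n) for n in range(5)])
```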
On the other hand, the QNM formula shows that the zero mode exhibits exponential growth. Therefore Unruh's black hole, in the limit where the background fluid smoothly exceeds the velocity of sound and to first order in the expansion (\ref{eq2}), is an unstable geometry under sonic perturbations that excite the zero mode. Conversely, this result shows the stability of the dumb hole under perturbations that excite the overtones with $n>1$, and under all overtones in the high damping limit. We also note that the frequencies of the QNMs depend neither on the radius of the horizon nor on the angular momentum of the perturbation. They are proportional only to the control parameter $\tilde{a}$, which describes the velocity of the sonic flow, in agreement with Ref. \cite{Kim:2004sf}, where the decay rate and the thermal emission for this geometry were found to be proportional only to the control parameter. \begin{acknowledgments} The author is grateful to C. Campuzano, S. Lepe, A. Lopez-Ortega, R. Troncoso and E. Vagenas for many useful and enlightening discussions. This work was supported by COMISION NACIONAL DE CIENCIAS Y TECNOLOGIA through FONDECYT Postdoctoral Grant 3030025, and was also partially supported by PUCV Grant No. 123.778/05. The author wishes to thank the Centro de Estudios Cient\'{\i}ficos (CECS) for its kind hospitality and U. Raff for a careful reading of the manuscript. \end{acknowledgments}
astro-ph/0508515
\section{Introduction} On February 7, 2004, during a routine Galactic plane scan, the \textit{INTEGRAL} observatory detected a source which was not in the \textit{INTEGRAL}\ reference catalog. A search in the archive led to the identification of several \textit{ROSAT}\ sources in the \textit{INTEGRAL}\ error box. Among them, 2RXP~J130159.6-635806~ is the closest one to the best estimate of the source position obtained with \textit{INTEGRAL}\ \citep{chern-at}. The only mention of this source in the literature before the observations reported here (besides the \textit{ROSAT}\ catalog) can be found in \citet{kaspi95}. In that paper the authors report results of \textit{ASCA}\ observations of the famous binary system \psrb, and mention the presence of another source located only 10 arcminutes away to the north-west. \citet{kaspi95} note, in particular, that the absorption column $N_{\mathrm{H}}$ in the direction of this source is higher than that of \psrb, and that during their observations the brightness of 2RXP~J130159.6-635806~ was smaller than, but comparable to, that of \psrb. We started to follow the source after its detection with \textit{INTEGRAL} and our first \textit{XMM-Newton} observation, part of a set of observations organized to monitor \psrb~ during its 2004 periastron passage (\psrb\ has a very long, 3.4 year, orbital period). Analyzing these observations we noticed that 2RXP~J130159.6-635806, which was also in the field of view, was significantly brighter than it had been during the \textit{ASCA} observations \citep{chern-at}\footnote{In the telegram the 1RXP catalog was mentioned instead of the 2RXP one, due to a misprint.}. On January 24, 2004 the 1-10 keV intensity was found to be approximately an order of magnitude higher than during the \textit{ASCA} observation performed on August 13, 1995. In 2001 -- 2004 \psrb\ was regularly monitored by \textit{XMM-Newton}, and 2RXP~J130159.6-635806\ was always in the \textit{XMM-Newton}\ field of view.
In this paper we present the analysis of all available X-ray data from \textit{ASCA}, \textit{Beppo}SAX, \textit{INTEGRAL}, and \textit{XMM-Newton}\ in order to understand the nature of this variable source and investigate its properties. In particular, using the \textit{XMM-Newton}\ data we refine and improve the X-ray position and discover X-ray pulsations. We study the long-term spectral evolution and, through a simultaneous fit to the \textit{XMM-Newton}\ and \textit{INTEGRAL}\ data, show that the hard X-ray source seen by \textit{INTEGRAL}\ and 2RXP~J130159.6-635806\ are very likely the same object.\\ \indent The paper is organized as follows: in Section 2 we present the sequences of observations and the methods used for data reduction and analysis. In Section 3 we present the results obtained, and we discuss them in Section 4. We then give a summary of our analysis in the last part of the paper. \section[]{Observations and Data Analysis} \subsection{\textit{INTEGRAL}\ observations} Since the launch of \textit{INTEGRAL}\ \citep{winkler} on October 17, 2002, 2RXP~J130159.6-635806\ has been several times in the field of view of the main instruments during routine Galactic plane scans and pointed observations (see Table \ref{intdata} for details). Most of the time the distance of the source from the center of the field of view was too large, and the exposure too short, to use either the X-ray monitor JEM-X or the spectrometer SPI. Therefore IBIS/ISGRI \citep{lebrun} is the only instrument we can use in our analysis of this source. In this analysis we have used version 4.2 of the Offline Science Analysis (OSA) software distributed by ISDC \citep{courvoisier}. In 2003 the source was only marginally detected with IBIS/ISGRI, while in the beginning of 2004 it was clearly seen in the $20-60$ keV energy range. To obtain better results for the spectral analysis we have combined the data obtained on January 26 and February 7, when the source was brightest.
\begin{table} \caption{Journal of the \textit{INTEGRAL}\ observations of 2RXP~J130159.6-635806. \label{intdata}} \begin{tabular}{c|c|c|c|c} \hline Data & Date & MJD& Exposure &20-60 keV Flux \\ Set & & & (ks) &$10^{-11}$ ergs s$^{-1}$ cm$^{-2}$\\ \hline I1& 2003-05-29 -- & 52788 -- &260& 1.4 $\pm$ 0.4\\ & 2003-07-18 & 52838 & & \\ I2& 2004-01-17&53021&5.3&3.83 $\pm$ 2.39\\ I3& 2004-01-26&53030&6.1&12.86 $\pm$ 2.83\\ I4& 2004-02-07&53042&4.2&15.04 $\pm$ 2.83\\ I5& 2004-02-19&53054&4.0&6.42 $\pm$ 3.10\\ \hline \end{tabular} \end{table} \subsection{\textit{XMM-Newton}\ observations \label{xmmobs}} In the course of the monitoring of \psrb, \textit{XMM-Newton}\ observed 2RXP~J130159.6-635806~ 10 times during 2001 -- 2004; see Table \ref{data} for the journal of the observations. These data are a combination of public and private observations. The Observation Data Files (ODFs) were obtained from the online Science Archive\footnote{http://xmm.vilspa.esa.es/external/xmm\_data\_acc/xsa/index.shtml}, and were then processed and filtered using {\sc xmmselect} within the Science Analysis Software ({\sc sas}) v6.0.1. In all the observations the source was observed with the MOS1 and MOS2 detectors only. The 2001 -- 2003 observations (X1 -- X5) were done in the Full Frame Mode, while the 2004 observations were performed in the Small Window Mode, in order to minimize pile-up problems for the primary target \psrb. In all observations the medium filter was used. \begin{table} \caption{Journal of \textit{ASCA}, \textit{Beppo}SAX, and \textit{XMM-Newton}\ observations of 2RXP~J130159.6-635806.
\label{data}} \begin{center} \begin{tabular}{c|c|c|c} \hline Data& Date & MJD& Exposure \\ Set& & & (ks) \\ \hline \multicolumn{4}{c}{\textit{ASCA}~}\\ \hline A1& 1993-12-28 & 49349.28 & 20.7 \\ A2& 1994-01-26 & 49378.79 & 18.2 \\ A3& 1994-02-28 & 49411.61 & 8.3 \\ A4& 1995-02-07 & 49756.08 & 17.5 \\ A5& 1995-08-13 & 49942.98 & 19.6 \\ \hline \multicolumn{4}{c}{\textit{Beppo}SAX}\\ \hline S1& 1997-09-08 &50699.44 & 84 \\ \hline \multicolumn{4}{c}{\textit{XMM-Newton}~}\\ \hline X1& 2001-01-12 & 51921.73 & 11.3 \\ X2& 2001-07-11 & 52101.31 & 11.6 \\ X3& 2002-07-11 & 52467.24 & 41.0 \\ X4& 2003-01-29 & 52668.27 & 11.0 \\ X5& 2003-07-17 & 52837.53 & 11.0 \\ X6& 2004-01-24 & 53028.79 & 9.7 \\ X7& 2004-02-10 & 53045.43 & 5.2 \\ X8& 2004-02-16 & 53051.39 & 7.7 \\ X9& 2004-02-18 & 53052.02 & 5.2 \\ X10& 2004-02-20 & 53055.82 & 6.9 \\ \hline \end{tabular} \end{center} \end{table} The spectra and light curves were extracted from a $35''$ radius circle around the source position for the weak state of the source (i.e. obs. X1 -- X5, X9, X10), and from a $50''$ radius circle for the outburst phase (obs. X6 -- X8). As 2RXP~J130159.6-635806\ was not a primary goal of the \textit{XMM-Newton}\ observations, its position is close to the edge of the field of view, and the source shape is slightly elongated. Therefore, in order to avoid mixing of source and background photons in the weak states of the source, we collected background light curves and spectra from a $35''$ radius circle located close to the source. For the bright state of the source we used a circle of larger radius, and collected background light curves and spectra from an annulus with $100''$ outer radius centered on the source position. Obs. X2, X4, X6 and X9 were partially affected by soft proton flares.
Since proton flares originate from the interaction of soft protons in the Earth's magnetosphere with the telescope, their timing behaviour is expected to have no periodic structure. Therefore no filtering of the data was applied for the timing analysis, as was done for another new X-ray pulsar, IGR/AX J16320-4752 \citep{lut05}. We have nevertheless eliminated obs. X4 and X9 from our study, as in these data sets the influence of the soft proton flares was especially strong. Arrival times of the photons were corrected to the Solar System barycenter. The pulse period was searched for with the epoch folding technique \citep{leahy83}: we produced periodograms and derived the best-fit period for each data set. Ten phase bins were used for each trial period. For the determination of the uncertainty of the source period we used the bootstrap method: we simulated a number of ``fictional'' source lightcurves, randomly generating (in accordance with the Poissonian statistics of the counts) the flux in each lightcurve bin. These lightcurves provided us with the range of ``best-fit'' periods of the source pulsations, giving us an estimate of the period uncertainty. Errors given in the paper represent a $1\sigma$ confidence level. For the spectral analysis the periods of soft proton flares need to be filtered out. To exclude them we extracted light curves above 10 keV with a hundred-second binning and excluded all time bins in which the count rate was higher than 1.5 cnt/s. Data from the MOS1 and MOS2 detectors were combined in both the timing and spectral analysis in order to achieve better statistics. \subsection{\textit{ASCA}\ observations} 2RXP~J130159.6-635806\ was in the \textit{ASCA}\ field of view during the dates listed in Table \ref{data}. In our subsequent analysis we have used the data of both Gas Imaging Spectrometers (GIS 2 and 3).
The data were analyzed with the standard tasks of the LHEASOFT/FTOOLS 5.2 package, in accordance with the recommendations of the \textit{ASCA}\ Guest Observer Facility. \subsection{\textit{Beppo}SAX\ observations} During 1997 2RXP~J130159.6-635806\ was several times within the field of view of the instruments of the \textit{Beppo}SAX\ observatory. Unfortunately, the flux of the source detected by the MECS telescopes was strongly contaminated by instrumental features (e.g. the ``strongback'', see \citealt{mecs}), and therefore a detailed analysis of the source spectrum is not possible. However, the data obtained can still be used for timing analysis. For the data reduction we used the standard tasks of the LHEASOFT/FTOOLS 5.2 package. We only present here the results of an observation performed on September 8, 1997, when the statistics were good enough to perform a pulse search. \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{psrb_field1.ps} \end{center} \caption{20--60 keV significance mosaic of the I1, I2, I3, I4, and I5 observations. Axes are in Equatorial J2000 coordinates (degrees).} \label{intgr} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{ima300dpi_10contours.ps} \end{center} \caption{Contour plot of the \textit{XMM-Newton}\ field of view for the X6 observation. A total of 10 contours were used with a linear scale. The external contour corresponds to 5 counts per pixel, and the innermost one to 50 counts per pixel. In this observation 2RXP~J130159.6-635806\ was forty times brighter than \psrb.} \label{xmm_ima} \end{figure} \section[]{Results} \subsection{Imaging Analysis} In Figure \ref{intgr} a zoom of the mosaic of all the \textit{INTEGRAL}\ observations listed in Table \ref{intdata} is given. 2RXP~J130159.6-635806\ is clearly seen in the image, along with a new source, IGR J12349-6434, which we found during our analysis (Chernyakova et al. 2005a).
All sources shown in the image were taken into account for a proper analysis of the \textit{INTEGRAL}\ data (Goldwurm et al. 2003). During the \textit{XMM-Newton}\ monitoring programme of \psrb, two sources were clearly detected (e.g. Fig. \ref{xmm_ima} represents the contour plot of obs. X6): besides \psrb\ itself, a second source can clearly be seen. The best coordinates we derive are RA$_{J2000}$=13$^h$01$^m$58$^s$.8, DEC$_{J2000}$=-63$^\circ$58$^\prime$10$''$ (the conservative error estimate is $3''$). This position is about $6''$ from the best \textit{ROSAT}\ position of 2RXP~J130159.6-635806. The uncertainty of the localisation of 2RXP~J130159.6-635806~ with \textit{ROSAT}\ is $5''$ (ROSPSPC catalog\footnote{ftp://ftp.xray.mpe.mpg.de/rosat/catalogues/2rxp/pub}); we therefore conclude that the \textit{XMM-Newton}\ source and the \textit{ROSAT}\ one are most likely the same. \subsection{Spectral Analysis} \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{spcpar_rxp_4.ps} \end{center} \caption{Time evolution of the spectral parameters of 2RXP~J130159.6-635806\ and of the $2-10$ keV pulsed fraction (in \%). Flux is given in units of $10^{-11}$ erg/s/cm$^2$.} \label{spcpar} \end{figure} The 1993--2004 time history of the $2-10$ keV flux from 2RXP~J130159.6-635806\ as observed by \textit{ASCA}\ and \textit{XMM-Newton}\ is shown in the upper panel of Fig. \ref{spcpar}. While during the \textit{ASCA}\ and the first half of the \textit{XMM-Newton}\ observations (X1 -- X5) the flux of the source was practically constant at a value of $\sim 2.5\times10^{-11}$ ergs cm$^{-2}$ s$^{-1}$, an outburst can be seen between the end of January and the beginning of February 2004 (obs. X5 -- X10). During this period the source flux increased by a factor of more than 5. Due to the strategy chosen for the \psrb\ monitoring campaign, the outburst was not entirely covered.
During the flare 2RXP~J130159.6-635806\ was observed only twice (24 Jan and 10 Feb), at approximately the same flux level. During the following 10 days its flux dropped to the 2001--2003 level with a characteristic decay time of $\sim7.5$ days (Fig.~\ref{spcpar}). As can be seen from Table \ref{intdata}, this outburst was also detected by \textit{INTEGRAL}\ in the $20-60$ keV energy range. While in 2003, out of a $\sim260$ ks observation, the source was only marginally detected at the $\sim3\sigma$ level, it was clearly seen during a 6.1 ks observation on January 26, 2004 (I3), and during a 4.2 ks observation on February 7, 2004 (I4). At those times it was ten times brighter than its average level over 2003. On February 19, 2004 (I5), and during a ToO observation of \psrb\ performed in March 2004 \citep{shaw}, the source was again only marginally detected or not detected at all. The spectral analysis was done with the XSPEC software package. The spectrum of 2RXP~J130159.6-635806~ during the lowest state as observed with \textit{ASCA}\ in 1994 (obs. A4), a typical \textit{XMM-Newton}\ spectrum of the source in 2002 -- 2003 (obs. X3), the \textit{XMM-Newton}\ and \textit{INTEGRAL}\ spectra during the outburst (obs. X7, obs. I3+I4) and just after it (obs. X8), are shown in Fig.~\ref{spectry}. \begin{figure} \begin{center} \includegraphics[width=8cm,bb=0 100 666 666]{spec.eps} \end{center} \caption{Spectral evolution of 2RXP~J130159.6-635806, as observed with \textit{XMM-Newton}, \textit{ASCA}\ and \textit{INTEGRAL}. To better show the spectral evolution of the \textit{XMM-Newton}\ observations, the spectra from obs. X3 and obs. X8 were multiplied by 0.5 and 0.7 respectively.
The combined \textit{XMM-Newton}\ and \textit{INTEGRAL}\ spectrum is fitted with an absorbed power law model with a high energy cutoff.} \label{spectry} \end{figure} The \textit{XMM-Newton}\ and \textit{ASCA}\ data show that the spectrum of the source in the soft $2-10$ keV energy range is well described by a simple power law modified by absorption. In Table \ref{fitpar} we present the results of three-parameter fits. The uncertainties are given at the $1\sigma$ statistical level and do not include systematic uncertainties. A graphical representation of the evolution of the spectral parameters is shown in Fig.~\ref{spcpar}. For all observations the value of the photo-absorption is practically constant, with an average value of $N_{\mathrm{H}}=(2.48\pm0.07) \times 10^{22}$cm$^{-2}$. This value is about five times higher than the value we found for \psrb\ ($(0.48\pm0.03) \times 10^{22}$cm$^{-2}$), which is located only 10 arcminutes away (Chernyakova et al., 2005b). Measurements of the interstellar hydrogen in the Galaxy by Dickey \& Lockman (1990) give $N_{\mathrm{H}}$ values in the range $(1.7-1.9)\times 10^{22}$cm$^{-2}$, smaller than the value we deduced from the X-ray spectral fits. This indicates that part of the absorption might be intrinsic to the source. While the \textit{ASCA}\ and \textit{XMM-Newton}\ data are well fitted with a simple power law modified by photo-absorption (see Table \ref{fitpar}), the \textit{INTEGRAL}\ data show the presence of a high-energy cut-off at about $\sim25$ keV, which is typical for accreting X-ray pulsars \citep{white83}. We fitted the joint spectrum obtained with \textit{XMM-Newton}\ (X7) and \textit{INTEGRAL}\ (I3+I4) with an absorbed cut-off power law. The best-fit parameters obtained are: $N_{\mathrm{H}}=(2.55\pm0.13) \times 10^{22}$cm$^{-2}$, $\Gamma=0.69\pm0.05$, $E_{cut}=24.3\pm3.4$ keV, $E_f=8.5\pm3.3$ keV. The normalization of the INTEGRAL/IBIS spectrum was left arbitrary.
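For illustration, the shape of such a cut-off power law can be sketched as follows. This is our own code, following the standard XSPEC \texttt{highecut} convention rather than the authors' actual fitting session, with photo-absorption omitted for brevity; the parameter values are the best-fit numbers quoted above.

```python
# Sketch (not the authors' XSPEC session) of a cut-off power-law spectrum
# in the usual XSPEC `highecut * powerlaw` convention:
#   E^-Gamma            for E <= E_cut,
#   E^-Gamma * exp((E_cut - E)/E_fold)  for E > E_cut.
import numpy as np

GAMMA, E_CUT, E_FOLD = 0.69, 24.3, 8.5   # photon index; cut-off and folding energies (keV)

def cutoff_powerlaw(energy_kev):
    """Photon spectrum ~ E^-Gamma, exponentially folded above E_cut."""
    e = np.asarray(energy_kev, dtype=float)
    fold = np.where(e > E_CUT, np.exp((E_CUT - e) / E_FOLD), 1.0)
    return e ** (-GAMMA) * fold

# below the cut-off the spectrum is a pure power law
assert cutoff_powerlaw(10.0) == 10.0 ** (-GAMMA)
# above it, the flux is suppressed by exp((E_cut - E)/E_fold)
assert np.isclose(cutoff_powerlaw(50.0),
                  50.0 ** (-GAMMA) * np.exp((E_CUT - 50.0) / E_FOLD))
print(cutoff_powerlaw(np.array([5.0, 25.0, 50.0])))
```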
\begin{table} \caption{Models Parameters for \textit {ASCA}\ and \textit {XMM-Newton}\ observations of 2RXP~J130159.6-635806.}\label{fitpar} \begin{tabular}{c|c|c|c|c} \hline Data& N$_{\mathrm{H}}$& Photon & Flux$_{2-10}^*$ &$\chi^2$ (dof) \\ Set& 10$^{22}$ cm$^{-2}$& Index & & \\ \hline A1& 2.16 $\pm$ 0.51& 1.11 $\pm$ 0.22& 1.30 $\pm$ 0.40 & 0.86 (187) \\ A2& 2.38 $\pm$ 0.35& 1.28 $\pm$ 0.15& 2.51 $\pm$ 0.62 & 0.97 (187) \\ A3& 2.08 $\pm$ 0.54& 1.26 $\pm$ 0.25& 2.29 $\pm$ 0.76 & 0.66 (187) \\ A4& 3.78 $^{+1.59}_{-1.27}$& 1.78 $\pm$ 0.50& 4.43 $^{+0.67}_{-0.56}$ & 0.95 (198) \\ A5& 2.89 $\pm$ 0.35& 1.45 $\pm$ 0.15& 1.08 $\pm$ 0.24 & 1.00 (760) \\ X1& 2.62 $\pm$ 0.13& 1.00 $\pm$ 0.06& 2.77 $\pm$ 0.27 & 1.00 (255) \\ X2& 2.43 $\pm$ 0.12& 0.96 $\pm$ 0.06& 2.50 $\pm$ 0.25 & 1.03 (245) \\ X3& 2.51 $\pm$ 0.06& 1.01 $\pm$ 0.03& 2.51 $\pm$ 0.12 & 1.15 (722) \\ X4& 2.51 $\pm$ 0.15& 0.98 $\pm$ 0.07& 2.40 $\pm$ 0.26 & 0.94 (230) \\ X5& 2.43 $\pm$ 0.14& 0.96 $\pm$ 0.07& 2.34 $\pm$ 0.24 & 1.15 (241) \\ X6& 2.42 $\pm$ 0.11& 0.63 $\pm$ 0.05& 9.58 $\pm$ 0.76 & 1.19 (435) \\ X7& 2.25 $\pm$ 0.13& 0.55 $\pm$ 0.06& 9.20 $\pm$ 0.84 & 1.12 (369) \\ X8& 2.58 $\pm$ 0.15& 0.77 $\pm$ 0.06& 4.47 $\pm$ 0.58 & 1.06 (316) \\ X9& 2.68 $\pm$ 0.23& 0.90 $\pm$ 0.10& 3.49 $\pm$ 0.60 & 1.06 (173) \\ X10& 2.51 $\pm$ 0.19& 0.86 $\pm$ 0.08& 2.35 $\pm$ 0.34 & 1.00 (169) \\ \hline \end{tabular} $^*$ in $10^{-11}$erg cm$^{-2}$ s$^{-1}$ units. \end{table} \subsection{Timing Analysis} \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{lcur_efold.ps} \end{center} \caption{MOS1 $2-10$ keV light curve of 2RXP~J130159.6-635806\ during the flare (obs. 
X6) (top panel), and $\chi^2$ distribution versus trial period for the brightest (X6) and the longest (X3) observations (middle and bottom panels respectively).} \label{lcur} \end{figure} Analyzing the light curve of 2RXP~J130159.6-635806\ obtained with \textit {XMM-Newton}\ in the bright state, we find that it shows strong, nearly coherent variations with a characteristic time of about 700 s. Fig.~\ref{lcur} (upper panel) shows an example: a 48 s binned, background-subtracted $2-10$ keV MOS1 light curve of 2RXP~J130159.6-635806\ during the flare (obs. X6). The periodograms ($\chi^2$ distribution versus trial period for observations X3 and X6) are also shown in the same figure. Periodic variations of the source flux are obvious. Further analysis showed that such variations are also observed in the light curve of the source in the low state. \begin{figure} \begin{center} \includegraphics[width=7.5cm,bb=40 180 565 720,angle=0]{histper.ps} \end{center} \caption{Time evolution of 2RXP~J130159.6-635806\ pulse period. } \label{histper} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8cm,bb=80 180 460 720,angle=0]{pprof.ps} \end{center} \caption{2RXP~J130159.6-635806\ pulse profile variations in the $2-10$ keV energy range. Pulse profiles have been aligned using the minimum phase bin.} \label{pprof} \end{figure} Subsequent analysis of the \textit {ASCA}\ and \textit{Beppo}SAX\ light curves of the source also showed pulsations, although not in all datasets, owing to the much poorer statistics of these data. With the \textit {INTEGRAL}\ data we can only set an upper limit on the pulse fraction. For the brightest I3 and I4 observations the 3$\sigma$ upper limit is 70\%, which is consistent with the almost simultaneous \textit {XMM-Newton}\ observations X6 and X7. The values of the pulse period obtained between 1994 and 2004 are given in Table \ref{pulse} and Fig.~\ref{histper}.
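The $\chi^2$ periodograms are built by epoch folding: the light curve is folded at a grid of trial periods and the folded profile is tested against a constant rate. A minimal sketch on simulated data (not the actual MOS1 light curve; the statistic and binning are simplified):

```python
import numpy as np

# Epoch-folding sketch: chi^2 of the folded profile against the mean rate.
# A coherent signal produces a peak at the true period.
def epoch_fold_chi2(t, rate, err, period, nbins=16):
    """chi^2 of the folded profile with respect to the mean count rate."""
    idx = np.minimum((((t / period) % 1.0) * nbins).astype(int), nbins - 1)
    chi2 = 0.0
    for k in range(nbins):
        sel = idx == k
        if not sel.any():
            continue
        mean_k = rate[sel].mean()
        err_k = err[sel].mean() / np.sqrt(sel.sum())   # error of the bin mean
        chi2 += (mean_k - rate.mean()) ** 2 / err_k ** 2
    return chi2

# Simulated light curve: 700 s sinusoidal pulsations sampled in 48 s bins.
rng = np.random.default_rng(0)
t = np.arange(0.0, 40000.0, 48.0)
rate = 1.0 + 0.5 * np.sin(2 * np.pi * t / 700.0) + rng.normal(0.0, 0.05, t.size)
err = np.full(t.size, 0.05)

trials = np.arange(600.0, 800.0, 2.0)
chi2 = np.array([epoch_fold_chi2(t, rate, err, p) for p in trials])
best_period = trials[np.argmax(chi2)]                  # peaks near 700 s
```

The width of the peak is set by the period resolution $\sim P^2/T_{\rm obs}$, which is why the long X3 observation constrains the period best.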
The average spin-up rate changed from $\dot P\simeq -6 \times 10^{-8}$ s s$^{-1}$ in 1994 -- 2001 to approximately $\dot P\simeq -2 \times 10^{-7}$ s s$^{-1}$ in 2001 -- 2004, which corresponds to $\dot \nu\simeq 10^{-13}$ Hz s$^{-1}$ and $\dot \nu\simeq 4 \times 10^{-13}$ Hz s$^{-1}$, respectively. \begin{table} \caption{2RXP~J130159.6-635806\ pulse period, as observed with \textit {ASCA}, \textit{Beppo}SAX, and \textit {XMM-Newton}. Observations X4 and X9 have been excluded, as they were strongly affected by soft proton flares. \label{pulse}} \begin{center} \begin{tabular}{c|c|c|c} \hline Data&Date,&Pulse & Pulse \\ Set&MJD&Period, s&Fraction, \% \\ \hline A2 &49378.79 &735 $\pm$ 2.7&35 $\pm$ 6 \\ S1 &50699.5 &730 $\pm$ 1.5&26 $\pm$ 5 \\ X1 &51921.73 & 721.9$\pm$ 2.7&23.4$\pm$ 2.6\\ X2 &52101.31 & 716.8$\pm$ 1.4&15.9$\pm$ 2.4\\ X3 &52467.24 & 714.4$\pm$ 0.5&10.1$\pm$ 1.3\\ X5 &52837.53 & 704.0$\pm$ 1.7&22.1$\pm$ 2.5\\ X6 &53028.79 & 703.8$\pm$ 0.4&60.2$\pm$ 1.2\\ X7 &53045.43 & 703.7$\pm$ 0.9&62.3$\pm$ 1.9\\ X8 &53051.39 & 703.9$\pm$ 1.1&49.8$\pm$ 2.2\\ X10&53055.82 & 704.2$\pm$ 1.1&44.0$\pm$ 2.9\\ \hline \end{tabular} \end{center} \end{table} The 2--10 keV pulse profiles of 2RXP~J130159.6-635806\ obtained in each data set by folding the \textit {XMM-Newton}\ light curves at the best-fit period are shown in Fig.~\ref{pprof}. In general the source pulse profile consists of one broad peak, but in several observations (the low-intensity ones) some additional features (such as a second peak) are visible. We have calculated the $2-10$ keV pulse fraction $P=(I_{max}-I_{min})/(I_{max}+I_{min})$ (where $I_{max}$ and $I_{min}$ are the intensities at the maximum and minimum of the pulse profile) for all the \textit {XMM-Newton}\ observations. These values are plotted in Fig.~\ref{spcpar} (second panel from the top). It is interesting to note that the pulse fraction is not constant and varies with time from $\sim$10 -- 25\% to $\sim60\%$ during the outburst.
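The pulse-fraction definition above is straightforward to evaluate on a folded profile; a small illustration (the profile here is synthetic, not one of the observed ones):

```python
import numpy as np

# P = (Imax - Imin) / (Imax + Imin), evaluated on a folded pulse profile.
def pulse_fraction(profile):
    """Pulse fraction of a folded profile (array of phase-binned intensities)."""
    imax, imin = np.max(profile), np.min(profile)
    return (imax - imin) / (imax + imin)

# A sinusoidal profile with mean 1 and semi-amplitude 0.43 has P = 43%.
phase = np.linspace(0.0, 1.0, 32, endpoint=False)
profile = 1.0 + 0.43 * np.sin(2 * np.pi * phase)
```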
\begin{figure} \begin{center} \includegraphics[width=8cm,bb=20 280 550 720,angle=0]{pp_x6_2_6_10_new.ps} \end{center} \caption{$2-6$ keV and $6-10$ keV pulse profiles of 2RXP~J130159.6-635806\ during the brightest observation (X6) along with the hardness ratio. } \label{pprof2610} \end{figure} Fig.~\ref{pprof2610} shows the $2-6$ keV and $6-10$ keV pulse profiles of 2RXP~J130159.6-635806\ during the brightest observation (obs. X6) along with the hardness ratio. We can see that the hardness remains practically constant during the pulse, except just before phase 0.5, where it suddenly drops by $\sim$20\%, and near the pulse minimum (around phase 1), where it increases by $\sim$20\%. In order to study the reasons for the variations in the shape of the pulse profile, we extracted spectra of obs. X6 separately for the low phase and for the high phase. The background spectra were extracted from the same GTIs, and response files were produced as explained in Section \ref{xmmobs}. We then fitted the resulting spectra in XSPEC with a simple model of an absorbed power law. The best-fit parameters are $N_{\mathrm{H}}=(2.2\pm0.3)\times 10^{22}$ cm$^{-2}$ and $\Gamma=0.38\pm0.14$, with a 2-10 keV unabsorbed flux of $6.9 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, for the low phase, and $N_{\mathrm{H}}=(2.19\pm0.12)\times 10^{22}$ cm$^{-2}$ and $\Gamma=0.56\pm0.05$, with a 2-10 keV unabsorbed flux of $1.5 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, for the high phase. In both cases the reduced $\chi^2$ is close to 1. The $\Gamma$-$N_{\mathrm{H}}$ contour plots for both phases are shown in Fig.~\ref{cntr}. It is clear that the variations between the two phases are not due to variations of the absorbing column density. Rather, they seem to reflect changes in the spectral properties of the emitting medium, since there is some increase in the photon index. However, at the 3$\sigma$ level both values are still compatible.
\begin{figure} \begin{center} \includegraphics[width=8cm,bb=20 50 500 350,angle=0]{ContourX6.eps} \end{center} \caption{ Confidence contour plots of the column density $N_H$ vs. photon index $\Gamma$ for a power-law fit to high and low phases of obs. X6. The contours are the 68\%, 90\%, and 99\% confidence levels.} \label{cntr} \end{figure} We investigated the energy dependence of the pulse fraction for the bright state of the source (obs. X6) and found that it is more or less stable around $53$--$55$\% in the 4-10 keV energy band and increases to $\sim63$\% in the soft 2-4 keV band. \section{Optical counterpart and the source distance} Accretion-powered X-ray pulsars are usually found in high-mass X-ray binaries (HMXBs). HMXBs may be divided mainly into those with main-sequence Be star companions and those with evolved OB supergiant companions. In the case of Be/X-ray binaries the hard X-ray emission is caused by accretion of circumstellar material onto the neutron star. The accreting material is thought to be concentrated towards the equatorial plane of the rapidly rotating Be star. Most Be/X-ray binaries are transients, displaying X-ray outbursts and long periods of quiescence, during which no X-ray flux is detected. A smaller group of Be/X-ray binaries are persistent sources with rather low X-ray luminosity ($<10^{35}$erg/s), relatively long ($>200$s) pulse periods and a very weak iron line at 6.4 keV (Reig \& Roche 1999; Negueruela 2004; Haberl \& Pietsch 2004). The supergiant binaries may be further subdivided into two classes, depending on whether the mass transfer is due to Roche lobe overflow or to capture from the stellar wind. As the typical spin period of pulsars whose companions fill their Roche lobes is less than 20 seconds (e.g. Corbet 1986), such a companion seems unlikely for 2RXP~J130159.6-635806.
The wind-fed supergiant binaries have long (several hundred seconds) spin periods and are persistent sources with short, irregular outbursts (e.g. Corbet 1986; Bildsten et al. 1997; Negueruela 2004). All the known systems display approximately the same X-ray luminosity, $\sim 10^{36}$ erg/s. The variable X-ray activity of 2RXP~J130159.6-635806\ indicates that this binary system is unlikely to contain an OB supergiant. In any of the cases mentioned above, the optical companion of the X-ray source is expected to be bright in the optical and infrared spectral bands. In order to check this we used the results of the DSS and 2MASS surveys. In both catalogs a relatively bright star with magnitudes $B=17.2$, $R=13.9$, $J=8.87$, $H=7.53$, $K=7.01$ is visible in the vicinity of the X-ray source, but its position is just outside the \textit {XMM-Newton}\ error box (the offset between the positions is $\sim4.4^{\prime\prime}$). Besides this bright star, another possible counterpart candidate is found in 2MASS with coordinates (equinox 2000) RA=13$^h$01$^m$58$^s$.7, DEC=-63$^\circ$ 58$^\prime$09$^{\prime\prime}$ (at $\sim 1.1^{\prime\prime}$ from the best \textit {XMM-Newton}\ position) and magnitudes $J=12.96\pm 1.33$, $H=12.05\pm0.03$, $K_s=11.35\pm0.09$. The good agreement between the two positions suggests that this second source is the likely counterpart of 2RXP~J130159.6-635806. To estimate the de-reddened magnitudes we assume that this counterpart is affected only by the Galactic absorption along the line of sight. Using the value of $N_{\mathrm{H}} =1.7 \times10^{22}$ cm$^{-2}$ we estimate the de-reddened magnitudes $J_{der}=10.73\pm1.33$, $H_{der}=10.72\pm0.03$, ${K_s}_{der}=10.51\pm0.09$ (only statistical uncertainties are quoted).
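As an illustration of the de-reddening step, the sketch below assumes the $N_{\mathrm{H}}$-to-$A_V$ relation of Predehl \& Schmitt (1995) and the infrared extinction ratios of Rieke \& Lebofsky (1985); the coefficients actually used in the paper may differ, so the numbers it produces are indicative only and do not exactly reproduce the quoted magnitudes:

```python
# De-reddening sketch (assumed conversions, not necessarily the paper's):
#   N_H ~ 1.79e21 * A_V        (Predehl & Schmitt 1995)
#   A_J/A_V ~ 0.282, A_H/A_V ~ 0.175, A_Ks/A_V ~ 0.112  (Rieke & Lebofsky 1985)
NH = 1.7e22                                     # cm^-2, Galactic line-of-sight value
AV = NH / 1.79e21                               # visual extinction, magnitudes
ratio = {"J": 0.282, "H": 0.175, "Ks": 0.112}   # A_band / A_V
observed = {"J": 12.96, "H": 12.05, "Ks": 11.35}
dereddened = {b: observed[b] - ratio[b] * AV for b in observed}
```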
If the companion is a Be main-sequence star with a surface temperature around 10000 K and a radius of 6-10 $R_{\odot}$, we expect its infrared brightness to be $J, H, K\sim10-11$ if the binary system is at a distance of $\sim 4-7$ kpc. An additional tentative argument in favour of such a source distance is the source location in the direction of the Crux spiral arm tangent, as HMXBs are concentrated towards galactic spiral arms \citep{grimm02,lut05:0}. At such a distance the unabsorbed intrinsic luminosity of 2RXP~J130159.6-635806\ is about $\sim 5\times 10^{34}$ -- $10^{35}$ erg/s, \textit{i.e.} compatible with the typical luminosities of persistent Be/X-ray binaries. \section{Conclusions} We report the identification by \textit {XMM-Newton}\ of a new X-ray pulsar with a spin period of $\sim700$ s in the region of the Crux spiral arm. The source was observed several times in 1993-2004 with \textit {ASCA}, \textit{Beppo}SAX\ and \textit {XMM-Newton}\ during the monitoring campaigns of the pulsar \psrb. The typical flux measured from the source in the $2-10$ keV energy band is about $(2-3)\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$, but in Jan-Feb 2004 an outburst with a more than fivefold increase in intensity was observed from the source. During this outburst the source was also detected in hard X-rays with the \textit {INTEGRAL}\ observatory. Strong pulsations of the X-ray flux with a period of $\sim700$ s were detected. The study of the set of observations has shown that the pulse period changed from $\sim735$ s in 1994 to $\sim 704$ s in 2004. The average value of the spin-up rate is $\dot \nu\simeq 2\times 10^{-13}$ Hz s$^{-1}$, which is typical for accretion-powered X-ray pulsars (see e.g. Bildsten et al. 1997). The long pulsation period indicates that the pulsar likely resides in a binary system with a massive companion. The proposed infrared counterpart to the X-ray source does not contradict this hypothesis.
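The luminosity range quoted above follows directly from the typical flux and the estimated distance via $L = 4\pi d^2 F$; a quick check:

```python
import math

# L = 4 pi d^2 F, with the typical unabsorbed 2-10 keV flux
# F ~ 2.5e-11 erg/cm^2/s and the tentative distance range d = 4-7 kpc.
KPC_CM = 3.086e21                 # centimetres per kiloparsec
F = 2.5e-11                       # erg cm^-2 s^-1
L = {d: 4.0 * math.pi * (d * KPC_CM) ** 2 * F for d in (4.0, 7.0)}
# L[4.0] is ~5e34 erg/s and L[7.0] is ~1.5e35 erg/s, matching the quoted range.
```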
From the measured brightness of the infrared counterpart, a tentative estimate of the distance to the binary system is 4-7 kpc, which suggests that the HMXB is located close to the Crux spiral arm tangent. \section{Acknowledgements} The authors acknowledge useful discussions with T.J.-L. Courvoisier, P. Hakala, A. Paizis, I. Kreykenbohm, and S.E. Shaw, and thank L. Sidoli for valuable comments on the details of \textit{Beppo}SAX\ data reduction. The authors are grateful to L. Foschini for helpful advice on the \textit {XMM-Newton}\ data analysis and thank S. Molkov for his support of this work. The authors are grateful to the anonymous referee for helpful comments. This work was partially done during AL's visits to the INTEGRAL Science Data Centre. AL thanks the ISDC staff for its hospitality and computer resources; these visits were supported by ESA. AL also acknowledges the support of RFFI grant 04$-$02$-$17276.
gr-qc/0508086
\section{Introduction} One of the motivations for the quantization of cosmological models was that of avoiding the initial {\it Big Bang} singularity. Since the pioneering work in quantum cosmology due to DeWitt \cite{dewitt}, workers in this field have been attempting to prove that quantum cosmological models entail only regular space-times. An important contribution to this issue was given by Hartle and Hawking \cite{hawking}, who proposed the {\it no-boundary} boundary condition, which selects only regular space-times to contribute to the wave-function of the Universe, derived in the path-integral formalism. Therefore, by construction, the {\it no-boundary} wave-functions are everywhere regular and predict a non-singular initial state for the Universe. Using that boundary condition, in certain particular cases the {\it no-boundary} wave-function can be explicitly computed \cite{hawking, gil, fujiwara}. Another way by which one may compute the wave-function of the Universe is by directly solving the Wheeler-DeWitt equation \cite{dewitt}. The wave-function of the Universe has been computed for some important models using this approach \cite{moss, lemos}. We should mention another boundary condition, the {\it tunnelling} boundary condition, proposed by Vilenkin \cite{vilenkin}, to be imposed on solutions to the Wheeler-DeWitt equations. The {\it tunnelling} wave-function was also shown to give rise to models free from the initial {\it Big Bang} singularity \cite{vilenkin}. More recently, the absence of singularities in quantum cosmological models has been investigated by using the de Broglie-Bohm interpretation of quantum mechanics \cite{bohm}. In this interpretation, one may compute the dynamical trajectories for the quantum variables of the system. In particular, since most quantum cosmological models use minisuperspaces \cite{misner}, one computes, through the de Broglie-Bohm interpretation, the dynamical trajectories for the scale factor.
Then, for the great majority of cases studied so far, the scale factor never vanishes, which implies that the model has no initial singularities \cite{germano, germano1}. Although we shall restrict our attention to a model in quantum general relativity, it is important to mention that several works in loop quantum cosmology have also shown that the wave-function of the Universe is free from initial singularities \cite{bojowald, bojowald1}. Several important theoretical results and predictions in quantum cosmology have been obtained with a negative cosmological constant. Considering a subset of all four-dimensional spacetimes with constant negative curvature and compact space-like hypersurfaces, Carlip and coworkers showed how to compute the sum over topologies leading to the {\it no-boundary} wave-function \cite{carlip, carlip1}. These spacetimes are curved only due to the presence of a negative cosmological constant. In Ref. \cite{carlip} it was shown how to obtain a vanishing cosmological constant as a prediction from the {\it no-boundary} wave-function, and in Ref. \cite{carlip1} it was shown how to obtain predictions about the topology of the Universe from the {\it no-boundary} wave-function. We may also mention the result in Ref. \cite{gil}, where the WKB {\it no-boundary} wave-function of a homogeneous and isotropic Universe with a negative cosmological constant was computed. Due to the regularity condition imposed upon the space-times contributing to the {\it no-boundary} wave-function, it was shown that only a well-defined, discrete spectrum for the cosmological constant is possible. It was also found that among the space-times contributing to the wave-function, there were two complex conjugate ones that showed a new type of signature change. It is important to mention that although recent observations point toward a positive cosmological constant, it is still possible that in the very early Universe the cosmological constant was negative.
Besides that, we think it is important to understand more about such models, which represent bound Universes (analogous to one-dimensional atoms, in the present situation). In the present paper, we use the formalism of quantum cosmology in order to quantize three Friedmann-Robertson-Walker models in the presence of a negative cosmological constant and radiation. The radiation is treated by means of the variational formalism developed by Schutz \cite{schutz}. The models differ from each other by the constant curvature of their spatial sections, which may be positive, negative or zero. They give rise to Wheeler-DeWitt equations for the scale factor, which have the form of the Schr\"{o}dinger equation for the quartic anharmonic oscillator. We find the eigenvalues and eigenfunctions of those equations by using a method first developed by Chhajlany and Malnev \cite{chhajlany}. Then we use the eigenfunctions in order to construct wave packets for each case and evaluate the expectation value of the scale factor as a function of time. In Sec. \ref{sec:classical} we introduce the classical models and solve them analytically, briefly commenting on the general behavior of the classical solutions. In Sec. \ref{sec:quantization} we quantize the model by solving the corresponding Wheeler-DeWitt equation. The wave-functional depends on the scale factor $a$ and on the canonical variable associated with the fluid, which in the Schutz variational formalism plays the role of time $T$. We separate the wave-functional in two parts, one depending solely on the scale factor and the other depending only on time. The solution in the time sector of the Wheeler-DeWitt equation is trivial, leading to imaginary exponentials of the type $e^{-iE\tau}$, where $E$ is the system energy and $\tau =- T$. The scale factor sector of the Wheeler-DeWitt equation gives rise to the eigenvalue equation for the quartic anharmonic oscillator.
We find semi-analytic solutions formed by the product of a decaying exponential term with a polynomial of fixed degree \cite{chhajlany}. In Sec. \ref{sec:results} we construct wave packets from the eigenfunctions, for each case, and compute the time-dependent expectation values of the scale factors. We find in all cases that the expectation values of the scale factors show bounded oscillations. Since the expectation values of the scale factors never vanish, we conclude that these models do not have singularities. We also observe that the energy levels depend on the value of the curvature constant $k$ of the spatial sections. The model with $k<0$ has the most bounded energy levels, followed by the one with $k=0$; the model with $k>0$ has the least bounded energy levels. Finally, in Sec. \ref{sec:conclusions} we summarize the main points and results of our paper. \section{The Classical Models} \label{sec:classical} Friedmann-Robertson-Walker cosmological models are characterized by the scale factor $a(t)$ and have the following line element, \begin{equation} \label{1} ds^2 = - N(t)^2 dt^2 + a(t)^2\left( \frac{dr^2}{1 - kr^2} + r^2 d\Omega^2 \right)\, , \end{equation} where $d\Omega^2$ is the line element of the two-dimensional sphere with unit radius, $N(t)$ is the lapse function and $k$ gives the type of constant curvature of the spatial sections. The curvature is positive for $k=1$, negative for $k=-1$ and zero for $k=0$. Here, we are using the natural unit system, where $\hbar=c=G=1$. The matter content of the model is represented by a perfect fluid with four-velocity $U^\mu = \delta^{\mu}_0$ in the comoving coordinate system used, plus a negative cosmological constant. The total energy-momentum tensor is given by, \begin{equation} T_{\mu\nu} = (\rho+p)U_{\mu}U_{\nu} - p\, g_{\mu\nu} - \Lambda g_{\mu\nu}\, , \label{2} \end{equation} where $\rho$ and $p$ are the energy density and pressure of the fluid, respectively.
Here, we assume that $p = \rho/3$, which is the equation of state for radiation. This is justified because the early Universe is radiation-dominated. Einstein's equations for the metric (\ref{1}) and the energy-momentum tensor (\ref{2}) are equivalent to the Hamilton equations generated by the super-hamiltonian constraint \begin{equation} {\cal{H}}= -\frac{p_{a}^2}{12} - 3ka^2 +\Lambda a^{4} + p_{T}, \label{3} \end{equation} where $p_{a}$ and $p_{T}$ are the momenta canonically conjugate to $a$ and $T$, the latter being the canonical variable associated with the fluid \cite{germano1}. The classical dynamics is governed by the Hamilton equations derived from Eq. (\ref{3}), namely \begin{equation} \left\{ \begin{array}{ll} \dot{a} &=\ \frac{\partial (N{\cal H})}{\partial p_{a}}=-\frac{Np_{a}}{6}\, ,\\[8pt] \dot{p}_{a} &=\ -\frac{\partial (N{\cal H})}{\partial a}=6kaN-4\Lambda a^3N \, ,\\[8pt] \dot{T} &=\ \frac{\partial (N{\cal H})}{\partial p_{T}}=N\, ,\\[8pt] \dot{p}_{T} &=\ -\frac{\partial (N{\cal H})}{\partial T}=0\, . \end{array} \right. \label{4} \end{equation} We also have the constraint equation ${\cal H} = 0$. Choosing the gauge $N=1$, we have the following solutions for the system (\ref{4}): \begin{equation} T (\tau) = \tau + c_{1}\, ,\qquad a(\tau) = \frac{\sqrt{6\beta}}{\sqrt{3k+\sqrt{9k^2-12\Lambda\beta}}}\; \mathrm{sn}\!\left(\frac{\sqrt{18k+6\sqrt{9k^2-12\Lambda\beta}}\,\left( \tau-\tau_0\right)}{6},\,\sigma\right), \label{5} \end{equation} where $c_{1}$, $\beta$ and $\tau_{0}$ are integration constants and $\mathrm{sn}$ is the Jacobi elliptic sine \cite{abramowitz} of modulus $\sigma$, given by \begin{equation} \sigma =\frac{\sqrt{2}}{2}\sqrt{\frac{-2\beta\Lambda+3k^2-k\sqrt{9k^2-12\Lambda\beta}}{\Lambda\beta}}.
\label{6} \end{equation} In the case of the models studied here, for which $\Lambda<0$, Eqs. (\ref{5}) and (\ref{6}) imply that the scale factor performs bounded oscillations, for all values of $k$. When the scale factor vanishes we have the formation of a singularity, which may be either a {\it Big Bang} or a {\it Big Crunch}. For the sake of completeness, we mention that for $\Lambda=0$ the case studied in Ref. \cite{lemos} is recovered. \section{The Quantization of the Models} \label{sec:quantization} We wish to quantize the models following the Dirac formalism for quantizing constrained systems \cite{dirac}. First we introduce a wave-function which is a function of the canonical variables $\hat{a}$ and $\hat{T}$, \begin{equation} \label{7} \Psi\, =\, \Psi(\hat{a} ,\hat{T} )\, . \end{equation} Then, we impose the appropriate commutators between the operators $\hat{a}$ and $\hat{T}$ and their conjugate momenta $\hat{P}_a$ and $\hat{P}_T$. Working in the Schr\"{o}dinger picture, the operators $\hat{a}$ and $\hat{T}$ are simply multiplication operators, while their conjugate momenta are represented by the differential operators \begin{equation} p_{a}\rightarrow -i\frac{\partial}{\partial a}\hspace{0.2cm},\hspace{0.2cm} \hspace{0.2cm}p_{T}\rightarrow -i\frac{\partial}{\partial T}\hspace{0.2cm}. \label{8} \end{equation} Finally, we demand that ${\cal H}$, the super-hamiltonian operator corresponding to (\ref{3}), annihilate the wave-function $\Psi$, which leads to the Wheeler-DeWitt equation \begin{equation} \bigg(\frac{1}{12}\frac{{\partial}^2}{\partial a^2} - 3ka^2 + \Lambda a^4\bigg)\Psi(a,\tau) = -i \, \frac{\partial}{\partial \tau}\Psi(a,\tau), \label{9} \end{equation} where the new variable $\tau= -T$ has been introduced.
In order to avoid possible contributions from boundary terms at spatial infinity, we shall consider compact three-dimensional spatial sections in the cases $k=0$ and $k=-1.$ The operator $\hat{{\cal H}}$ is self-adjoint \cite{lemos} with respect to the inner product, \begin{equation} (\Psi ,\Phi ) = \int_0^{\infty} da\, \,\Psi(a,\tau)^*\, \Phi (a,\tau)\, , \label{10} \end{equation} if the wave-functions are restricted to the set of those satisfying either $\Psi (0,\tau )=0$ or $\Psi^{\prime}(0, \tau)=0$, where the prime $\prime$ denotes the partial derivative with respect to $a$. The Wheeler-DeWitt equation (\ref{9}) is the Schr\"{o}dinger equation for the quartic anharmonic oscillator and may be solved by writing $\Psi(a, \tau)$ as \begin{equation} \Psi (a,\tau) = e^{-iE\tau}\eta(a)\, \label{11} \end{equation} where $\eta(a)$ depends solely on $a$. Then $\eta(a)$ satisfies the eigenvalue equation \begin{equation} -\frac{d^2{\eta(a)}}{da^2} + V_{e} (a)\eta(a)= 12E\eta(a)\, , \label{12} \end{equation} where the effective potential $V_{e}(a)$ is given by \begin{equation} V_{e}(a) = 36ka^2-12\Lambda a^4\, . \label{13} \end{equation} \subsection{The Method of Chhajlany and Malnev} The method of Chhajlany and Malnev \cite{chhajlany} starts with the addition of an extra term to the original anharmonic oscillator potential, so that the modified Hamiltonian admits a subset of manifestly normalizable solutions. In the case we are considering, the extra term to be added to the effective potential (\ref{13}) is proportional to $a^6$. In terms of this new enlarged potential, the eigenvalue equation (\ref{12}) may be re-written as \begin{equation} \label{14} \eta''(a)\, +\, ( \varepsilon - \alpha a^2 - b a^4 - c^2 a^6 ) \eta(a)\, =\, 0\, \end{equation} where $\varepsilon = 12E$, $\alpha = 36k$, $b = -12\Lambda$, and $c$ is a parameter to be determined by the method. The {\it Ansatz} for the solution of Eq.
(\ref{14}) takes the form \begin{equation} \label{15} \eta(a)\, = N \, \exp{\left(-\frac{c}{4} a^4 - \frac{\gamma}{2} a^2\right)}\, v(a)\, , \end{equation} and has finite norm for $c>0$. Here, $v(a)$ is a polynomial of a certain degree, yet to be chosen; the parameter $\gamma$ is to be chosen according to our convenience, as we shall see; $N$ is a normalization factor. The method is based on the fact, shown in Ref. \cite{chhajlany}, that the larger the degree of the polynomial $v(a)$, the smaller $c$ is. Therefore, if one increases the order of $v(a)$, the energy eigenvalues predicted by the present method tend monotonically, from above, to the energy eigenvalues of the original problem. One important property of the method is that the convergence is very fast. This means that one does not need to use a polynomial of very large order to obtain good agreement with the energies of anharmonic oscillators already computed in the literature by other methods. The next step is the substitution of the {\it Ansatz} (\ref{15}) into the differential equation (\ref{14}), which gives rise to the following equation for the polynomial $v(a)$: \begin{equation} \label{16} v''(a)\, -\, 2( ca^3 + \gamma a ) v'(a)\, +\, [ \varepsilon - \gamma + (\gamma^2 - \alpha - 3c)a^2 ] v(a)\, =\, 0\, . \end{equation} Next, writing $v(a) = \sum_n \beta_n a^n$ and inserting it in Eq. (\ref{16}) along with the condition \begin{equation} \label{17} 2 c \gamma\, =\, b\, \end{equation} on $c$ and $\gamma$ \cite{chhajlany}, we find: \begin{equation} \left( \epsilon-{\it \gamma} \right) \beta_{{0}} + 2\,\beta_{{2}} =0, \qquad \left( \epsilon-3\,{\it \gamma} \right) \beta_{{1}} + 6\,\beta_{{3}} =0, \label{18a} \end{equation} and the general recurrence relation for the polynomial coefficients $\beta_n$, \begin{equation} \label{18} (n+4)(n+3)\beta_{n+4}\, +\, [\varepsilon - \gamma (2n+5)]\beta_{n+2}\, +\, [\gamma^2 -\alpha - c(2n + 3)]\beta_n\, =\, 0\, , \end{equation} for $n\geq 0$.
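As an independent cross-check of the eigenvalues this construction is designed to approximate, the eigenvalue problem (12) can also be diagonalized by brute force on a finite grid. The sketch below does this for the $k=1$, $\Lambda=-0.1$ model treated later; this finite-difference route is not the method of the paper, only a sanity check:

```python
import numpy as np

# Finite-difference diagonalization of Eq. (12):
#   -eta'' + (36 k a^2 - 12 Lambda a^4) eta = 12 E eta,  eta(0) = 0,
# on a uniform grid with Dirichlet boundaries (a box large enough that the
# low-lying eigenfunctions have decayed).
k, Lam = 1, -0.1
Npts, box = 800, 6.0
a = np.linspace(0.0, box, Npts + 2)[1:-1]      # interior points; eta(0)=eta(box)=0
h = a[1] - a[0]
V = 36.0 * k * a**2 - 12.0 * Lam * a**4
H = (np.diag(2.0 / h**2 + V)
     - np.diag(np.ones(Npts - 1) / h**2, 1)
     - np.diag(np.ones(Npts - 1) / h**2, -1))
E = np.sort(np.linalg.eigvalsh(H)) / 12.0      # eigenvalues of (12) are 12 E
# The lowest level sits close to the pure-harmonic value 18/12 = 1.5,
# pushed up slightly by the quartic term.
```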
The degree of the polynomial $v(a)$ is fixed to be, say, $K$ by imposing the following conditions in (\ref{18}), \begin{equation} \label{19} \beta_K \neq 0, \qquad \beta_{K+2}\, =\, \beta_{K+4}\, =\, 0\, . \end{equation} Due to the nature of the recurrence relation (\ref{18}), it is clear that by fixing $K$ to be even (odd) the resulting polynomial $v(a)$ will be even (odd). Then, the coefficients $\beta_n$, $n=2,4,...,K$ ($n=3,5,...,K$), will be determined in terms of $\beta_0$ ($\beta_1$), which is fixed by the normalization condition. In the present situation, we restrict our attention to the case of an odd polynomial, that is, $K=2m+1$ for $m=0, 1, 2,...$. This condition is imposed so that our wave-function vanishes at $a=0$. Eqs. (\ref{18}) and (\ref{19}) require that the coefficient multiplying $\beta_K$ in the recurrence relation vanish; then \begin{equation} \label{20} \gamma^2 =\, \alpha\, +\, c(2K + 3)\, . \end{equation} Combining this with (\ref{17}), we obtain a cubic algebraic equation in the parameter $c$, \begin{equation} \label{21} 4 c^3 (2K + 3)\, +\, 4 \alpha c^2\, -\, b^2\, =\, 0\, . \end{equation} The solutions of this equation depend on the known parameters $\alpha$, $b$ and $K$. We must find the real, positive root of this equation so that the {\it Ansatz} (\ref{15}) is normalizable. That real positive root, as proved in Ref. \cite{chhajlany}, is a monotonically decreasing function of $K$. Therefore, the greater the polynomial degree, the better the agreement between the energy eigenvalues obtained by this method and the actual energy eigenvalues. Now, by setting the condition $\beta_{K+2}=0$ in Eq. (\ref{18}) we may determine the corresponding energy eigenvalues $\varepsilon$ and the polynomial coefficients $\beta_n$.
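The root-finding step can be sketched numerically. For the $k=+1$ model discussed later ($\alpha=36$, $b=1.2$, $K=45$), the positive real root of Eq. (21) and the resulting $\gamma$ from Eq. (17) reproduce the exponential coefficients $c/4$ and $\gamma/2$ quoted in Eq. (25):

```python
import numpy as np

# Positive real root of the cubic (21), 4 c^3 (2K+3) + 4 alpha c^2 - b^2 = 0,
# followed by gamma from the constraint (17), 2 c gamma = b.
K, alpha, b = 45, 36.0, 1.2
roots = np.roots([4.0 * (2 * K + 3), 4.0 * alpha, 0.0, -b**2])
c = float([r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0][0])
gamma = b / (2.0 * c)
# c/4 and gamma/2 are the coefficients in the exponential of the Ansatz (15).
```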
The $(m+1)$ allowed energy levels of the anharmonic oscillator are obtained as the roots of the equation $D = 0$, where $D$ is the following $(m+1) \times (m+1)$ determinant: \begin{equation} \label{22} \left|\begin{array}{ccccccccc} (\varepsilon - 3\gamma) & 6 & 0 & 0 & \cdots & & \cdots & & 0 \\ \gamma^2 - \alpha - 5c & (\varepsilon - 7\gamma) & 20 & 0 & \cdots & & \cdots & & 0 \\ 0 & \gamma^2 - \alpha - 9c & (\varepsilon - 11\gamma) & 42 & \cdots & & \cdots & & 0 \\ \cdots & \cdots & \cdots & \cdots & (\gamma^2 - \alpha - (2K+3)c) & & \varepsilon - (2K+5)\gamma & & (K+4)(K+3) \end{array} \right|. \end{equation} The lowest real, positive root corresponds to the ground-state energy level, and the excited levels are given by the sequence of higher real, positive roots. Next, we substitute these values into the set of Eqs. (\ref{18}) in order to evaluate the coefficients $\beta_n$ and obtain the appropriate polynomial $v_{l}(a)$; the index $l=0,1,2,...,m$ labels the energy level, for each of which we shall have an eigenfunction $\eta_{l} (a)$ and a wave-function $\Psi_{l} (a, \tau) = \exp(-iE_l\tau)\, \eta_{l}(a)$, according to Eqs. (\ref{11}) and (\ref{15}). We construct a general solution to the Wheeler-DeWitt equation (\ref{9}) by taking linear combinations of the $\Psi_{l} (a, \tau)$'s, \begin{equation} \Theta (a,\tau) = \sum_{l=0}^{m} A_{l}(E)\eta_l(a) e^{-iE_{l}\tau}, \label{23} \end{equation} where the coefficients $A_{l}(E)$ will be fixed later. With those combinations we compute the expected value of the scale factor $a$, following the {\it many worlds interpretation} of quantum mechanics \cite{everett}. In the present situation, the expected value of the scale factor $a$ may be written as \begin{equation} \left<a\right>(\tau) = \frac{\int_{0}^{\infty}a\,|\Theta (a,\tau)|^2 da} {\int_{0}^{\infty}|\Theta (a,\tau)|^2 da}.
\label{24} \end{equation} \section{Results} \label{sec:results} We shall now treat each model separately, depending on the constant curvature of the spatial sections. The difference from one model to the other will appear in the value of the parameter $\alpha$ in Eq. (\ref{14}). For all models we shall use the value $\Lambda = -0.1$; therefore one has $b = 1.2$ in Eq. (\ref{14}). Also, we shall fix the polynomial degree to be $K = 45$ for all models. That is, we shall have $23$ energy levels and $23$ eigenfunctions $\eta_l(a)$. A precision of at least 15 significant digits had to be used, in order to guarantee the orthogonality of the set of functions $\eta_l (a)$, Eq. (\ref{15}). The symbolic system Maple has been used, and the precision of the calculations was chosen so that the largest number of energy levels was achieved and the corresponding (approximate) eigenfunctions were sufficiently orthogonal. \subsection{The model with $k=1$.} \label{subsec:k=1} In this model $\alpha = 36$ and the spatial sections are $S^3$'s. Using the values of $K$ and $b$, we solved Eq. (\ref{21}) to find $c=0.090068960669615962974$. Computations have been performed with 20 significant digits. Now, introducing all these quantities in the determinant $D$, Eq. (\ref{22}), we obtain the first $23$ energy levels; they are listed in Table \ref{tableenergy}. The lowest energy levels are in agreement with the ones computed, perturbatively, by Landau for the quartic anharmonic potential, equivalent to the present case \cite{landau}. After that, we substitute these values in the set of Eqs. (\ref{18}) and compute the coefficients $\beta_n$. With these $\beta_n$, we write the following $\eta_l(a)$, according to Eq. (\ref{15}), \begin{equation} \eta_l(a)= {N}_{l} \ {e^{- 0.022517240167403990744\,{a}^{4}- 3.3307811899866029985\,{a}^{2}}}\ v_l(a), \label{25} \end{equation} where \begin{equation} \label{26} v_l(a)=\sum_{i= 0}^{22} A_{l,2i+1} \ a^{2i+1}.
\end{equation} The coefficients ${N}_{l}$ are normalization coefficients and $i=0, 1,..., m$. The $N_{l}$'s and the $A_{l,2i+1}$'s for the present model are listed in appendix \ref{A1}, Tables \ref{tabela2} to \ref{tabela7}. Next, we construct the wave-packet $\Theta (a, \tau )$ with the aid of the $\eta_l (a)$, according to Eqs. (\ref{23}) and (\ref{25}), and the energy levels in Table \ref{tableenergy}. Finally, using the wave-packet $\Theta (a, \tau )$ we compute the expected value of the scale factor $a$, Eq. (\ref{24}). The result is shown in Fig. \ref{f1}; it can be seen that $\left<a\right>$ does not vanish, therefore we may say that the quantization of this model removed the singularities it had at the classical level. It is also clear from Fig. \ref{f1} that $\left<a\right>$ performs bounded oscillations. That means that the spatial sections $S^3$'s oscillate between finite maximum and minimum radii. \subsection{The model with $k=0$} \label{subsec:k=0} In this model $\alpha = 0$ and the spatial sections are some closed three-dimensional solid with zero curvature, locally isometric to $R^3$ \cite{wolf}. Here, like in the previous case, we have used 20 significant digits. Introducing the values of $K$ and $b$ in Eq. (\ref{21}) we obtain $c=0.15701453260387612225$. Now, using all these quantities in the determinant $D$, Eq. (\ref{22}), we obtain the first $23$ energy levels. They are shown in Table \ref{tableenergy}. The lowest energy levels are in agreement with the ones computed, perturbatively, by Landau for the quartic anharmonic potential, equivalent to the present case \cite{landau}. After that, we substitute these values in the set of Eqs. (\ref{18}) and compute the coefficients $\beta_n$. With these $\beta_n$, we write the following $\eta_l(a)$, according to Eq.
(\ref{15}), \begin{equation} \eta_l(a)=N_{l} \ {e^{- 0.039253633150969030562\,{a}^{4}- 1.9106511672830600300\,{a}^{2}}}\ v_l(a), \label{27} \end{equation} where the $v_{l}(a)$ have the general expression given in Eq. (\ref{26}) and the coefficients $N_{l}$ are normalization coefficients. The $N_{l}$'s and the $A_{l,2i+1}$'s for the present model are listed in appendix \ref{A1}, Tables \ref{tabela8} to \ref{tabela13}. Next, we construct the wave-packet $\Theta (a, \tau )$, with the aid of the $\eta_l (a)$ and the energy levels, according to Eqs. (\ref{23}) and (\ref{27}), as well as Table \ref{tableenergy}. Using the wave-packet $\Theta (a, \tau )$ we compute the expected value for the scale factor $a$, as in Eq. (\ref{24}). The result is shown in Fig. \ref{f2}. It can be seen that $\left<a\right>$ does not assume the value zero; therefore we may say that the quantization of this model removed the singularities it had at the classical level. It is clear, also, from Fig. \ref{f2}, that $\left<a\right>$ has bounded oscillations, that is, oscillates between finite maximum and minimum values. \subsection{The model with $k=-1$} \label{subsec:k=-1} In this model $\alpha = - 36$ and the spatial sections are some closed three-dimensional solid with negative constant curvature, locally isometric to $H^3$ \cite{thurston}. Here, we have used 15 significant digits. Introducing the values of $K$ and $b$, we solved Eq. (\ref{21}) to find $c = 0.410111969406177$. Now, using all these quantities in the determinant $D$ Eq. (\ref{22}), we obtain the first $23$ energy levels; they are listed in Table \ref{tableenergy}. After that, we substitute these values in the set of Eqs. (\ref{18}) and compute the coefficients $\beta_n$, used in \begin{equation} \eta_l(a)=N_{l} \ {e^{- 0.102527992350000\,{a}^{4}- 0.731507545000000\,{a}^{2}}}\ v_l(a), \label{29} \end{equation} where the $v_{l}(a)$ have the general expression given in Eq. (\ref{26}) and the $N_{l}$'s are normalization coefficients. 
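The numerical coefficients in the exponents of Eqs. (\ref{25}), (\ref{27}) and (\ref{29}) coincide with $c/4$ and $\gamma/2$, with $c$ the real, positive root of Eq. (\ref{21}) and $\gamma$ given by Eq. (\ref{20}). The following short sketch (an independent numerical cross-check, not the original Maple computation) reproduces all three pairs of exponents:

```python
import math

def positive_root_c(K, b, alpha):
    # real, positive root of 4 c^3 (2K+3) + 4 alpha c^2 - b^2 = 0 (Eq. 21)
    f = lambda c: 4.0 * c**3 * (2 * K + 3) + 4.0 * alpha * c**2 - b**2
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

K, b = 45, 1.2
for alpha in (36.0, 0.0, -36.0):                 # k = +1, 0, -1
    c = positive_root_c(K, b, alpha)
    gamma = math.sqrt(alpha + c * (2 * K + 3))   # Eq. (20)
    # coefficients of a^4 and a^2 in the exponent of eta_l(a)
    print(alpha, c / 4.0, gamma / 2.0)
```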
The $N_{l}$'s and the $A_{l,2i+1}$'s for the present model are listed in the appendix \ref{A1}, Tables \ref{tabela14} to \ref{tabela18}. Due to numerical inconsistencies, we have considered, in the present case, the $18$ $v_{l}(a)$'s corresponding to the first $18$ energy levels. Next, we construct the wave-packet $\Theta (a, \tau )$ with the aid of the $\eta_l (a)$, according to Eqs. (\ref{23}) and (\ref{29}) and the energy levels in Table \ref{tableenergy}. Finally, using the wave-packet $\Theta (a, \tau )$ we compute the expected value of the scale factor $a$, according to (\ref{24}). The result is shown in Fig. \ref{f3}. It can be seen that $\left<a\right>$ does not assume the value zero. Therefore, we may say that the quantization of this model removed the singularities it had at the classical level. It is also clear that $\left<a\right>$ performs bounded oscillations. From Table \ref{tableenergy}, we observe that the energy levels depend on the value of the curvature constant of the spatial sections. The model with negative constant curvature has the lowest energy levels, followed by the model with zero curvature; the model with positive constant curvature has the highest. Observing Eqs. (\ref{25}), (\ref{27}) and (\ref{29}), we see that the wave-functions $\Psi (a, \tau)$ for all three cases are exponentially damped as $a\rightarrow\infty$ and behave as powers of $a$ in the limit $a\rightarrow 0$. Following Hawking and Page \cite{hawking1}, we may say that this behavior of the $\Psi (a, \tau)$'s makes them {\it excited states} of wormholes. \begin{table}[h!]
{\scriptsize\begin{tabular}{|c|c|c|c|} \hline Level & $k=1$ & $k=0$ & $k=-1$\\ \hline $E_{1}$ & $1.5103016760578712464$ & $0.34040279742876372340$ & $-10.5531383749465$ \\ \hline $E_{2}$ & $3.5509871014722954423$ & $1.0496018816133576012$ & $-8.85096087898748$ \\ \hline $E_{3}$ & $5.6230931685080087038$ & $1.9248782225962522139$ & $-7.22478356024350$ \\ \hline $E_{4}$ & $7.7256531719671366439$ & $2.9232145294773461677$ & $-5.68029499265656$ \\ \hline $E_{5}$ & $9.8577817762723638293$ & $4.0227383298683634726$ & $-4.22519889374100$ \\ \hline $E_{6}$ & $12.018664367453930157$ & $5.2097960735548422725$ & $-2.87075159719689$ \\ \hline $E_{7}$ & $14.207548216681935132$ & $6.4748734067709572007$ & $-1.63482529734928$ \\ \hline $E_{8}$ & $16.423735430300010131$ & $7.8108774877366064367$ & $-0.536430728059501$ \\ \hline $E_{9}$ & $18.666574187417793427$ & $9.2122719218977134548$ & $0.481759230982801$ \\ \hline $E_{10}$ & $20.935469528987599984$ & $10.674588231181872438$ & $1.58319085808971$ \\ \hline $E_{11}$ & $23.229800589269486517$ & $12.194126672497696030$ & $2.83303176055370$ \\ \hline $E_{12}$ & $25.549220854858484858$ & $13.767762525648273494$ & $4.21398303307806$ \\ \hline $E_{13}$ & $27.892697492531834013$ & $15.392811058629387358$ & $5.70783697327336$ \\ \hline $E_{14}$ & $30.260978882469751228$ & $17.066942189323770737$ & $7.30286131347368$ \\ \hline $E_{15}$ & $32.651319599290796010$ & $18.788091721482546958$ & $8.99091730871327$ \\ \hline $E_{16}$ & $35.066840708164574204$ & $20.554451348143454694$ & $10.7657478496350$ \\ \hline $E_{17}$ & $37.502643477205645295$ & $22.364363322148552407$ & $12.6225505157144$ \\ \hline $E_{18}$ & $39.963027226267456788$ & $24.216376790909429384$ & $14.5570952852202$ \\ \hline $E_{19}$ & $42.443644402959303694$ & $26.109132260627772146$ & $16.5661310939634$ \\ \hline $E_{20}$ & $44.946892054208187300$ & $28.041419503470901850$ & $18.6464752057244$ \\ \hline $E_{21}$ & $47.470929938318337116$ & $30.012110489502592151$ & 
$20.7956557497686$ \\ \hline $E_{22}$ & $50.01608754545613080$ & $32.020172678051097447$ & $23.0112449804346$ \\ \hline $E_{23}$ & $52.581852999800707709$ & $34.064648529603099090$ & $25.2911792258758$ \\ \hline \end{tabular}} \mycaption{The lowest calculated energy levels for the cases $k=0$, $k=1$, and $k=-1$ (in all cases, $\Lambda=-0.1$).} \label{tableenergy} \end{table} \section{Conclusions.} \label{sec:conclusions} In the present paper, the formalism of quantum cosmology was employed to quantize three Friedmann-Robertson-Walker models in the presence of a negative cosmological constant and radiation. The variational formalism of Schutz \cite{schutz} allowed us to ascribe dynamical degrees of freedom to the radiation fluid. The models differ from each other by the constant curvature of the spatial sections, which may be positive, negative or zero. The quantization of the models gave rise to Wheeler-DeWitt equations, for the scale factor, which had the form of the Schr\"{o}dinger equation for the quartic anharmonic oscillator. We found the approximate eigenvalues and eigenfunctions of those equations by using a method first developed by Chhajlany and Malnev \cite{chhajlany}. After that, we used the eigenfunctions in order to construct wave-packets for each case and evaluate the time-dependent expected value of the scale factor. We found, for all of them, that the expected value of the scale factor evolves with bounded oscillations. Since the expected values of the scale factors never vanish, we concluded that these models do not have singularities. We also observed that the energy levels depend on the value of the curvature constant of the spatial sections. The model with negative constant curvature has the lowest energy levels, whereas the model with positive constant curvature has the highest. \begin{acknowledgements} E. V. Corr\^{e}a Silva thanks CNPq for partial financial support. \end{acknowledgements}
\section{Introduction} \label{section1} Consider a discrete nonlinear system \begin{equation} y_{k} = f_0(x_k,w_k),\label{sistema} \end{equation} where $f_0(\cdot,\cdot)$ is not known, $k$ is the discrete time instant, $y_k\in {\mathcal{Y}} \subseteq \R$ is the output of the system, and $w_k$ accounts for parametric uncertainty, noise, disturbances, etc. Also, the vector $x_k \in X \subseteq \R^{n_x}$ represents the past inputs and outputs of the system, i.e., $x_k= [y_{k-1},y_{k-2},...,y_{k-n_y},u_{k},u_{k-1},...,u_{k-n_u}]$ and $n_x = n_y+n_u+1$. Note that nonlinear terms of past system inputs-outputs could be incorporated into the vector $x_k$. In this paper we focus on interval predictions. That is, given the regressor $x_k$, the objective is to compute an interval $I(x_k)=[y_k^-,y_k^+]$ such that we maximize the probability that $y_k$ belongs to $I(x_k)$ while minimizing the interval width $(y_k^+-y_k^-)$. These two conflicting objectives can be reconciled if one minimizes the interval width under the constraint that $I(x_k)$ contains $y_k$ with a pre-specified probability. Interval predictions play a relevant role in the control of uncertain systems. Zonotopes and DC programming are used to obtain interval state estimators in \cite{AlamoAUT05} and \cite{AlaBravRedCama08}, respectively. Interval observers for linear time-varying systems have been proposed in \cite{Thabet20142677} and \cite{Chebotarev201582}. Fault detection methods based on zonotopic bounds can be found in \cite{Raka2013119}. In \cite{Xu2014947}, set theoretic approaches are also used in the context of fault detection. Set membership methods \cite{milanese2004set,milanese2011unified} can also be used to obtain interval predictions. A mixed Bayesian/set-membership approach is proposed in \cite{FernandezCanti201559}. There exist different methods in the literature that address the problem of obtaining interval predictions for system (\ref{sistema}).
For example, if the uncertain vector $w_k$ is bounded and $f_0(\cdot,\cdot)$ satisfies some Lipschitz assumptions, one can resort to bounded error methods \cite{MilaNortPieWal96} that guarantee that $y_k$ is always contained in $I(x_k)$. See, for example, \cite{MilaNovAUT05} and \cite{manzano2020robust}. Other bounded error strategies have been proposed in \cite{BaiTempo:99}, \cite{Jaulin00}, \cite{Bravo:2016:BoundingTechniques}. The statistical characterization of noise and disturbances can be used to enhance the performance of interval estimation methods. See \cite{RollNazinLjung:05}, \cite{BravoAlamo:15}, \cite{Combastel15Aut} and references therein. Also, probabilistic validation methods can be used to assess the performance of the interval predictors \cite{Efron:86bootstrap}, \cite{alamo2015randomized}, \cite{alamo2018robust}, \cite{mirasierra2021prediction}. Denote by $F({\bar{y}} |x_k)$ the cumulative distribution function of the associated output $y$ conditioned to the regressor $x=x_k$. That is, $$ F({\bar{y}}|x_k) = {\rm{Prob}} \set{ y \leq {\bar{y}}}{ x=x_k}. $$ Related to this probability is the notion of quantile \cite{Murphy:12}, \cite{Koenker:1978:RegressionQuantiles}. Given $x_{k}$, we say that ${\bar{y}}_{\qp}$ is the conditioned $\qp$-quantile if $$ F({\bar{y}}_{\qp}|x_k) = {\rm{Prob}} \set{ y \leq {\bar{y}}_{\qp}}{ x=x_k} = \qp. $$ The notion of quantile is closely related to that of confidence intervals. The estimation of conditioned quantiles is relevant in multiple applications (see \cite{Davino:14} and \cite{bassett2002portfolio}) and can be addressed using different methodologies. The most classical approach relies on the assumption that $y_k$ and $x_k$ are jointly normal. That is, the assumption that the (joint) probability density function of the (random) variables $y$ and $x$ is a multivariable normal probability density function. Under this assumption, the conditioned p.d.f. is a monovariable normal p.d.f.
and the quantiles can be obtained in a simple and direct way \cite{Papoulis:02}. Unfortunately, the methods based on normal distributions are very sensitive to outlier contamination. Moreover, in many long-tailed situations, the normal assumption is not well suited to characterize confidence intervals and one has to resort to non-Gaussian distributions. In these cases, generalizations of the Chebyshev inequality can be used to obtain probabilistic bounds \cite{navarro2016very}, \cite{stellato2017multivariate}. The computation of the conditioned quantiles can also be addressed by means of parametric regression techniques \cite{Koenker:1978:RegressionQuantiles}, \cite{Davino:14}. If one assumes that there exists $\theta$ for which $y_k \approx \theta\T x_k$, then the parameter vector $\theta$ can be chosen as the one that minimizes a cost function of the error $\theta\T x_k-y_k$. If one chooses a cost function that penalizes positive and negative errors in an asymmetric way, then a quantile regressor is obtained. Given the training pairs $(y_j,x_j)$, $j=1,\ldots,N$ and $\qp\in (0,1)$, the quantile regressor is defined in terms of the following optimization problem $$ \min\limits_{\theta} \Sum{j=1}{N} (1-\qp) \max\{0,\theta\T x_j-y_j\} + \qp\max\{0,y_j-\theta\T x_j\}.$$ This linear optimization problem penalizes the (training) errors $e_j=\theta\T x_j -y_j$, $j=1,\ldots,N$ in an asymmetric way. The positive errors are weighted with coefficient $(1-\qp)$ and the negative ones with coefficient $\qp$. If $\qp\in(0,1)$ is close to zero, then the positive errors will be highly penalized (in comparison with the negative ones). This means that every optimal solution $\theta_{\qp}$ to the linear optimization problem will tend to make most of the errors negative. This implies that $\theta_{\qp}\T x_k$ could be used as a probabilistic lower bound for $y_k$. In a similar way, a probabilistic upper bound could be obtained taking $\qp\in(0,1)$ close to 1.
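The asymmetric penalty can be made concrete in its simplest instance: for an intercept-only regressor, a minimizer of the pinball cost above is an empirical $\qp$-quantile of the training outputs. The sketch below (a toy illustration with made-up data, not the estimator used later in the paper) verifies this for $\qp=0.25$:

```python
def pinball_cost(theta, ys, p):
    # positive errors theta - y are weighted by (1 - p), negative ones by p
    return sum((1.0 - p) * max(0.0, theta - y) + p * max(0.0, y - theta)
               for y in ys)

ys = list(range(1, 101))    # made-up sample: 1, 2, ..., 100
p = 0.25
best = min(ys, key=lambda t: pinball_cost(t, ys, p))
print(best)                 # a minimizer lies at the empirical 25% quantile
```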
Under rather mild assumptions, any minimizer $\theta_{\qp}$ of the proposed optimization problem can be used to obtain an estimation of the $\qp$ quantile. That is, $\theta_{\qp}\T x_k$ serves as an estimation of the $\qp$ quantile associated with $y_k$. See \cite{Koenker:1978:RegressionQuantiles}, \cite{portnoy1997gaussian} and \cite{Davino:14} for further details. One of the main limitations of quantile regression is that a large number of training samples $N$ is required if one desires to obtain probabilistic guarantees of the method when $\qp$ is chosen close to the extremes of the interval $(0,1)$. This is due to the fact that estimating the probability of rare events requires a large number of samples. For example, the number of independent identically distributed samples required to obtain the $1-\epsilon$ quantile of a monovariable random variable grows with $\frac{1}{\epsilon}$ (see \cite{TeBaDa:97}, \cite{alamo2015randomized} and \cite{alamo2018robust}). This paper presents a new methodology for the computation of interval predictions of a dynamical system. Dissimilarity functions are used to estimate the conditional probability density function of the outputs. The estimated probability density function is used to derive the interval prediction. It is shown that the standard linear regression is a particular case of the proposed methodology. The paper is organized as follows. In Section \ref{sec:DissimilarityFunctions} a family of dissimilarity functions is proposed. In Section \ref{sec:regression} the role of dissimilarity functions in linear regression is analyzed. The probabilistic interval predictors are presented in Section \ref{sec:IntervalEstimation}. The methodology is applied to some forecasting problems in Section \ref{sec:example}. The paper ends with a section of conclusions. 
\section{Dissimilarity functions}\label{sec:DissimilarityFunctions} Given a data set $$ {\mathcal{D}}=\set{z_i}{i=1,\ldots,N} \subset \R^n,$$ we are interested in determining if a given vector $z$ can be considered to be similar to the other vectors of the data set ${\mathcal{D}}$. In a more precise way, we are looking for a function $$J_d(\cdot,\cdot):\R^n\times {\mathcal{D}}\to [0,\infty]$$ that measures the dissimilarity between a given point $z$ and the data set ${\mathcal{D}}$. Large values of $J_d(z,{\mathcal{D}})$ represent a high degree of dissimilarity, while small values correspond to a high degree of similarity (i.e., a small degree of dissimilarity). Clearly, from a dissimilarity function $J_d(z,{\mathcal{D}})$ one can obtain a similarity function $J_s(z,{\mathcal{D}})$. For example, given $\sigma>0$, $J_s(z,{\mathcal{D}})=\mathrm{e}^{-\sigma J_d(z,{\mathcal{D}})}$ is small when $z$ is not similar to the points in ${\mathcal{D}}$ and close to $1$ when $z$ is very similar to the elements of ${\mathcal{D}}$. Another possibility would be $J_s(z,{\mathcal{D}})=(1+\sigma J_d(z,{\mathcal{D}}))^{-1}$, where $\sigma>0$. There exists a wide class of operators that can serve as dissimilarity functions for the particular case in which ${\mathcal{D}}$ is a singleton (${\mathcal{D}}=\{z_{\mathcal{D}}\}$). For singleton ${\mathcal{D}}$, one popular choice is $$ J_d(z,z_{\mathcal{D}}) = \|z-z_{\mathcal{D}}\|,$$ where $\|\cdot\|$ is a given norm. One could also use the minimum distance to the set ${\mathcal{D}}$. That is, \begin{equation}\label{equ:min:distance} J_d(z,{\mathcal{D}}) = \min\limits_{\hat{z}\in {\mathcal{D}}} \, \|z-\hat{z}\|.\end{equation} Another possibility could be to consider as a dissimilarity function the mean value of the distances of $z$ to each member of the set ${\mathcal{D}}$.
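As a toy illustration of the constructions just described (assuming the Euclidean norm and a made-up data set), the minimum-distance dissimilarity of Eq. (\ref{equ:min:distance}) and its exponential similarity counterpart can be sketched as:

```python
import math

def j_d(z, D):
    # minimum-distance dissimilarity of Eq. (min:distance), Euclidean norm
    return min(math.dist(z, zhat) for zhat in D)

def j_s(z, D, sigma=1.0):
    # similarity J_s = exp(-sigma J_d): close to 1 near the data, near 0 far away
    return math.exp(-sigma * j_d(z, D))

D = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(j_s((0.0, 0.0), D), j_s((10.0, 10.0), D))
```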
See chapter 2 of \cite{Goshtasby:12} and chapter 2 of \cite{wierzchon2018modern} for a review of similarity and dissimilarity functions applied in the field of image registration and in the context of cluster analysis, respectively. Dissimilarity and similarity functions can be used in the context of regression. Suppose that we have the pairs $\{x_i,y_i\}$, $i=1,\ldots,N$ and that we would like to estimate, given $x$, its corresponding output $y$. Given the similarity function $J_s(\cdot,\cdot)$, one possibility for the estimation $\hat{y}$ of $y$ is $$ \hat{y} = \Sum{i=1}{N} \lambda_i y_i, $$ where the scalars $\lambda_i$ are chosen in such a way that $\lambda_i$ is small when the similarity function $J_s(x,x_i)$ is small. It is also reasonable to normalize the sum of the scalars $\lambda_i$ to unity. That is, $\Sum{i=1}{N} \lambda_i=1$. For example, one could choose $$ \lambda_i = \frac{J_s(x,x_i)}{\Sum{j=1}{N} J_s(x,x_j)}, \;\; i=1,\ldots,N.$$ Although this approach could be valid for some applications, more sophisticated approaches are required in many situations, as it is just a weighted average. We propose in this paper a convex optimization problem to obtain a measure of dissimilarity between a point $z$ and a set ${\mathcal{D}}$. This is formally stated in the following definition. \begin{definition}\label{def1} Given $z\in \R^n$, a set of measurements ${\mathcal{D}}=\{z_1,\ldots,z_N\}\subset \R^n$ and the scalar $\gamma\geq 0$, the dissimilarity function $J_{\gamma}(z,{\mathcal{D}})$ is defined as \begin{eqnarray} J_{\gamma}(z,{\mathcal{D}})&=&\min\limits_{\lambda_1,\ldots,\lambda_N} \Sum{i=1}{N} \lambda_i^2 + \gamma \Sum{i=1}{N}|\lambda_i| \nonumber \\ s.t. && z=\Sum{i=1}{N}\lambda_i z_i \nonumber\\ && 1 = \Sum{i=1}{N}\lambda_i \label{problem:general}. \end{eqnarray} \end{definition} \begin{remark} Note that non-negative constant weights could be included in the cost function.
That is, one could consider the cost function $$ \Sum{i=1}{N} w_i\lambda_i^2 + \gamma \Sum{i=1}{N}|\lambda_i|,$$ where the scalars $w_i$, $i=1,\ldots,N$ are used to weight the different elements in ${\mathcal{D}}$. These weights could be computed using a distance function between $z$ and the singleton $\{z_i\}$ (for example, $w_i = \|z-z_i\|$) or any dissimilarity function. This would be a way to incorporate local information into the analysis. This strategy could be useful when the considered system is non-linear. Although the results of the paper are stated for the particular case in which $w_i=1$, $i=1,\ldots,N$, the extension to the weighted case is not difficult. \end{remark} \begin{remark} We notice that the optimization problem (\ref{problem:general}) could be infeasible. In order to rule out this possibility, we assume that the vectors that compose the set ${\mathcal{D}}$ span the whole space. \end{remark} \begin{remark} Optimization problem (\ref{problem:general}) is similar to the one appearing in the context of direct weight optimization and kriging, where central predictions of a certain variable are obtained by means of the solution of an optimization problem \cite{RollNazinLjung:05}, \cite{Bravo:2016:BoundingTechniques}, \cite{salvador2019offset}, \cite{cressie1986kriging}, \cite{salvador2018data}. \end{remark} It is important to remark that the proposed dissimilarity measure is invariant with respect to affine transformations. This is formally stated in the following property. \begin{property} Consider $z_{T,v}$ and $\mathcal{D}_{T,v}$ obtained from $z$ and $\mathcal{D}$ through the following affine transformation: \begin{eqnarray*} z_{T,v} &=& Tz + v \\ \mathcal{D}_{T,v} &=& \set{Tz+v}{z\in \mathcal{D}}, \end{eqnarray*} where $T$ is any non-singular matrix and $v$ is any vector of adequate dimensions.
Then $$ J_{\gamma}(z,\mathcal{D}) = J_{\gamma}(z_{T,v},\mathcal{D}_{T,v}).$$ \end{property} {\sc Proof } We first show that any feasible solution $\lambda_i$, $i=1,\ldots,N$ to the problem of computing $J_{\gamma}(z,\mathcal{D})$ is also a feasible solution for the computation of $J_{\gamma}(z_{T,v},\mathcal{D}_{T,v})$. Suppose that $z=\Sum{i=1}{N} \lambda_i z_i$ and $\Sum{i=1}{N} \lambda_i=1$. Then \begin{eqnarray*} z_{T,v} & = & Tz+v \\ & = & T\left( \Sum{i=1}{N} \lambda_i z_i\right) + \left( \Sum{i=1}{N} \lambda_i \right)v \\ & = & \Sum{i=1}{N} \lambda_i (Tz_i + v). \end{eqnarray*} We notice that $Tz_i+v$, $i=1,\ldots,N$, are the elements of $\mathcal{D}_{T,v}$. Therefore $\lambda_i$, $i=1,\ldots,N$, is also a feasible solution for the problem that defines $J_{\gamma}(z_{T,v},\mathcal{D}_{T,v})$. From this we infer that $J_{\gamma}(z_{T,v},\mathcal{D}_{T,v})\leq J_{\gamma}(z,\mathcal{D})$. On the other hand, since $T$ is non-singular we can follow a similar reasoning to show that any feasible solution for $J_{\gamma}(z_{T,v},\mathcal{D}_{T,v})$ is a feasible solution for $J_{\gamma}(z,\mathcal{D})$. In this way we also prove that $J_{\gamma}(z,\mathcal{D})\leq J_{\gamma}(z_{T,v},\mathcal{D}_{T,v})$. Both inequalities prove the claimed equality. \hfill $\blacksquare$

This invariance property is very important because it guarantees that the analysis based on the proposed dissimilarity function is not affected by the choice of the coordinate system. We notice that many of the dissimilarity functions that can be found in the literature are not invariant. For example, any dissimilarity function based on the distance of $z$ to the elements of $\mathcal{D}$, such as that of equation (\ref{equ:min:distance}), will be dependent on the particular choice of coordinate system. The proposed optimization problem (\ref{problem:general}) is a strictly convex optimization problem subject to convex constraints. This means that it has a unique solution \cite{Boyd04}.
From an optimization point of view, we notice that the numerical resolution can be addressed using a dual formulation. In the dual formulation for this particular optimization problem, the number of dual decision variables is equal to the number of equality constraints ($n+1$) which is in many situations much smaller than the number of primal variables ($N$). On the other hand, the gradient of the objective function in the dual formulation can be obtained in a direct way because once the dual variables are fixed, the optimal values for the primal variables are obtained solving a separable optimization problem (which has an explicit solution). The numerical examples of this paper have been obtained using an accelerated gradient method in the dual variables. See \cite{beck2017first}, \cite{Beck09} and \cite{nesterov2018lectures}. The alternating direction method of multipliers can also be used in this context \cite{Boyd10}. As it is formally stated in the following property, the optimization problem has an explicit solution for the particular case $\gamma=0$ (see Appendix A for proof). \begin{property}\label{property:explicit:gamma:zero} Suppose that $\mathcal{D}=\{z_1,z_2,\ldots,z_N\}$, then $J_{0}(z,\mathcal{D})$ has the following explicit expression $$J_{0}(z,\mathcal{D}) = N^{-1} + (z-\bar{z})\T(ZZ\T-N\bar{z}\bar{z}\T)^{-1}(z-\bar{z}),$$ where $Z = [ z_1\;z_2\;...\;z_N ]$, $\bar{z} = N^{-1}Zu$ and $u \in \R^N$ is a vector with all its $N$ components equal to 1. \end{property} The previous result shows that the dissimilarity function is a quadratic function on the argument $z$ for the particular case $\gamma=0$. For the more general case in which $\gamma>0$ we can infer from the Karush-Kuhn-Tucker optimality conditions \cite{Boyd04} that the dissimilarity function $J_{\gamma}(z,\mathcal{D})$ is a piecewise convex quadratic function with respect to $z$. 
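Property \ref{property:explicit:gamma:zero} and the affine-invariance property can be checked numerically. The sketch below (one-dimensional, with made-up data) specializes the explicit $\gamma=0$ expression to $n=1$, where it reduces to $(S_2 + N z^2 - 2 S_1 z)/(N S_2 - S_1^2)$ with $S_1=\sum_i z_i$ and $S_2=\sum_i z_i^2$, and compares it with the least-norm value of the equality-constrained problem computed through its dual:

```python
def j0_explicit(z, data):
    # explicit formula of Property 2, specialized to n = 1
    N = len(data)
    zbar = sum(data) / N
    s2 = sum(zi * zi for zi in data)
    return 1.0 / N + (z - zbar) ** 2 / (s2 - N * zbar ** 2)

def j0_least_norm(z, data):
    # gamma = 0: min sum(l_i^2) s.t. sum(l_i z_i) = z, sum(l_i) = 1;
    # least-norm value J_0 = b^T (A A^T)^{-1} b with A = [z_i; 1], b = [z; 1]
    N = len(data)
    s1 = sum(data)
    s2 = sum(zi * zi for zi in data)
    return (N * z * z - 2.0 * s1 * z + s2) / (N * s2 - s1 * s1)

data = [0.3, 1.7, 2.2, 4.1, 5.0]        # made-up one-dimensional data set
T, v = 2.5, -0.7                        # an affine change of coordinates
for z in (-1.0, 0.5, 3.3):
    assert abs(j0_explicit(z, data) - j0_least_norm(z, data)) < 1e-12
    mapped = [T * zi + v for zi in data]
    assert abs(j0_explicit(T * z + v, mapped) - j0_explicit(z, data)) < 1e-12
print("ok")
```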
\section{Dissimilarity functions and regression}\label{sec:regression} We show in this section how dissimilarity functions can be used in the context of regression. Imagine that the data set $$ \mathcal{D} = \set{z_i = \bmat{c} y_i \\ x_i \emat}{ i=1,\ldots,N}\subset {\mathcal{Y}}\times {\mathcal{X}},$$ is available. Given $x_k$ and $\gamma\geq 0$, one could obtain an estimation $\hat{y}_k$ for $y_k$ by minimizing the dissimilarity function of the vector $\bmat{c} y \\ x_k\emat$ with respect to the data set ${\mathcal{D}}$. That is, $$ \hat{y}_k = \arg\,\min\limits_y J_{\gamma}(\bmat{c} y\\ x_k\emat,\mathcal{D}). $$ Therefore, given $x_k$ and $\gamma\geq 0$, the estimation $\hat{y}_k$ could be obtained from the optimization problem \begin{eqnarray} \min\limits_{y,\lambda_1,\ldots,\lambda_N} && \Sum{i=1}{N} \lambda_i^2 + \gamma \Sum{i=1}{N}|\lambda_i| \nonumber \\ s.t. && \bmat{c} y \\ x_k \emat =\Sum{i=1}{N}\lambda_i \bmat{c} y_i \\ x_i\emat \label{equ:y:lambda} \\ && 1 = \Sum{i=1}{N}\lambda_i. \nonumber \end{eqnarray} Since the decision variable $y$ appears only in the equality constraint (\ref{equ:y:lambda}), one could solve the optimization problem ignoring the equality constraint $$ y = \Sum{i=1}{N} \lambda_i y_i, $$ and making the optimal value of $y$ equal to \begin{equation} \label{equ:optimal:y} \hat{y}_k = y^* = \Sum{i=1}{N} \lambda_i^* y_i, \end{equation} where $\lambda_i^*$, $i=1,\ldots,N$ are the optimal values of the optimization problem \begin{eqnarray*} \min\limits_{\lambda_1,\ldots,\lambda_N} && \Sum{i=1}{N} \lambda_i^2 + \gamma \Sum{i=1}{N}|\lambda_i| \\ s.t. && x_k =\Sum{i=1}{N}\lambda_i x_i \\ && 1 = \Sum{i=1}{N}\lambda_i. \end{eqnarray*} Therefore, the estimation provided by the method is a linear combination of the outputs $y_i$.
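As a sanity check of the $\gamma=0$ case, consider a scalar regressor with made-up data (an illustrative sketch only). The least-norm weights $\lambda_i^*$ can be computed from the dual $2\times 2$ system, and the resulting estimate $\hat{y}_k=\sum_i \lambda_i^* y_i$ coincides with the prediction of a least-squares fit that includes an intercept, the constant regressor being supplied by the constraint $\sum_i \lambda_i = 1$:

```python
def lambdas_gamma0(xk, xs):
    # least-norm weights: lambda = A^T (A A^T)^{-1} b, A = [x_i; 1], b = [x_k; 1]
    N = len(xs)
    s1 = sum(xs)
    s2 = sum(x * x for x in xs)
    det = N * s2 - s1 * s1
    mu1 = (N * xk - s1) / det       # dual variables of the 2 x 2 system
    mu2 = (s2 - s1 * xk) / det
    return [mu1 * x + mu2 for x in xs]

def affine_ls(xs, ys):
    # ordinary least squares fit y ~ a x + d (intercept included)
    N = len(xs)
    s1, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = N * sxx - s1 * s1
    return (N * sxy - s1 * sy) / det, (sxx * sy - s1 * sxy) / det

xs = [0.0, 1.0, 2.0, 3.0, 5.0]      # made-up training data
ys = [0.1, 0.9, 2.3, 2.8, 5.2]
a, d = affine_ls(xs, ys)
for xk in (0.5, 2.5, 4.0):
    lam = lambdas_gamma0(xk, xs)
    yhat = sum(l * y for l, y in zip(lam, ys))
    assert abs(yhat - (a * xk + d)) < 1e-12
    assert abs(sum(lam) - 1.0) < 1e-12
print("ok")
```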
This is consistent with different results from the specialized literature in which it is shown that, under certain assumptions, the optimal solution to an estimation problem is given by a linear combination of the observed outputs (see \cite{RollNazinLjung:05} and \cite{MilaBel82}). The central estimation provided by equation (\ref{equ:optimal:y}) is similar to other weighted methods like \cite{Bravo:2016:BoundingTechniques}, \cite{RollNazinLjung:05} and \cite{salvador2019offset}. As stated in the following property, the estimation obtained for the particular case $\gamma=0$ matches the one given by the linear least squares method. \begin{property} Given a point $x_k$, the estimation $$\hat{y}_k = \arg\,\min_{y}\; J_{0}(\bmat{c} y \\ x_k \emat,\mathcal{D}),$$ matches the estimation obtained by linear least squares using $\mathcal{D} = \set{z_i =\bmat{c} y_i \\ x_i \emat}{ i=1,\ldots,N}$ as data set. The proof is provided in Appendix B. \end{property} The previous property shows that the estimation method proposed in this paper encompasses the least squares method for the particular case $\gamma=0$. A family of optimal estimators is obtained if one considers $\gamma$ as a tuning parameter. In the following sections, we show not only how to tune the value of $\gamma$, but also how to use this methodology to obtain probabilistic interval estimations. \subsection{Empirical Probability Density Function} The dissimilarity of a given vector $z\in \mathcal{Z} \subseteq \R^{n}$ with respect to the elements of $\mathcal{D}$ can be used to define an empirical probability density function. The next definition introduces the notion of empirical p.d.f. \begin{definition}[Empirical p.d.f.]
Given a set of measurements $\mathcal{D}=\{z_1,\ldots,z_N\}\subset\mathcal{Z}$, $\gamma\geq 0$ and $c\geq 0$, the empirical probability density function ${\rm{ep}}_{\gamma,c}(z,\mathcal{D})$ is defined for every $z\in \cal{Z}$ as \begin{equation}\label{equ:epdf}{\rm{ep}}_{\gamma,c}(z,\mathcal{D})= \frac{\exp\left(-c J_{\gamma}(z,\mathcal{D})\right)} {\int\limits_{\mathcal{Z}}\exp\left(-c J_{\gamma}(\hat{z},\mathcal{D})\right)d\hat{z}}. \end{equation} \end{definition} Note that expression (\ref{equ:epdf}) provides a family of probability density functions that are parameterized by constants $\gamma$ and $c$. We notice that by construction, $$ \int\limits_{\mathcal{Z}}{\rm{ep}}_{\gamma,c}(\hat{z},\mathcal{D})d\hat{z} = 1.$$ We also notice that if $\mathcal{Z}$ is a compact set, the choice $c=0$ provides a uniform p.d.f. in $\mathcal{Z}$. \begin{remark} Recall that from property \ref{property:explicit:gamma:zero} we have \begin{eqnarray*} J_{0}(z,\mathcal{D}) &=& N^{-1} + (z-\bar{z})\T(ZZ\T-N\bar{z}\bar{z}\T)^{-1}(z-\bar{z}) \\ &=& \frac{1}{N} + \frac{1}{N}(z-\bar{z})\T(\frac{1}{N} ZZ\T-\bar{z}\bar{z}\T)^{-1}(z-\bar{z}), \end{eqnarray*} where $Z = [ z_1\;z_2\;\ldots\;z_N ]$, $\bar{z} = N^{-1}Zu$ and $u$ is a vector with all its components equal to 1. This means that if $c = \frac{N}{2}$ and $\gamma=0$, then ${\rm{ep}}_{\gamma,c}(z,\mathcal{D})$ is a multivariable normal distribution with mean $\bar{z}$ and covariance matrix $ \frac{1}{N} ZZ\T-\bar{z}\bar{z}\T$, which corresponds to the empirical covariance matrix of the data set $\mathcal{D}$. \end{remark} As already commented, the proposed method provides a way to obtain a family of empirical probability density functions that encompasses the normal distributions and the uniform one. In order to obtain the parameters $c$ and $\gamma$ for a given data set $\mathcal{D}$ generated by other distributions, one could use the maximum likelihood methodology. See, for example, \cite{Murphy:12}.
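A discretized sketch of this empirical p.d.f. for scalar data, using the closed form of $J_0$ recalled in the remark above with $c=N/2$ (all numerical values are illustrative):

```python
import math

# Sketch of the empirical p.d.f. of the definition above (scalar case):
# ep_{gamma,c}(z) is proportional to exp(-c * J_gamma(z, D)).  For
# gamma = 0 we use the closed form J_0 of the remark; with c = N/2 the
# result is a normal density.  Data and grid are illustrative.

def J0(z, data):
    N = len(data)
    zbar = sum(data) / N
    S = sum(v * v for v in data) / N - zbar ** 2   # empirical variance
    return 1.0 / N + (z - zbar) ** 2 / (N * S)

data = [0.1, 0.4, 0.5, 0.9, 1.1]
c = len(data) / 2.0                                 # c = N/2
h = 0.01
grid = [-5.0 + h * i for i in range(1001)]          # discretizes [-5, 5]
weights = [math.exp(-c * J0(g, data)) for g in grid]
norm = sum(weights) * h                             # Riemann-sum normalizer
ep = [w / norm for w in weights]                    # discretized p.d.f.

total = sum(p * h for p in ep)
print(round(total, 9))          # integrates to 1 by construction
```

The peak of the discretized density sits at the sample mean, with height close to that of the corresponding normal density, as the remark predicts.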
We show in the following example how the proposed methodology can be used to estimate the probability density function that characterizes a given data set $\mathcal{D}$. \subsection{Clarifying example} \label{sec:clarifying} A sample of 600 points in $\R$ is obtained from a uniform probability distribution with support $[0,1]$. One half of the available points is used as the set $\mathcal{D}$. The other half is used as a test set. Then, for every point in the test set, equation (\ref{equ:epdf}) is used to estimate the empirical probability density function associated with each considered pair $(c,\gamma)$. Figure \ref{DistRef2} shows the empirical probability density functions estimated using different values of the parameters $c$ and $\gamma$. In this case, $c=1.5$ and $\gamma=5$ is the pair that achieves the best fit for the distribution proposed in this example. \begin{figure} \centering \includegraphics [width=85mm,height=60mm]{clarifying_example.eps} \caption{Estimated probability distribution functions.}\label{DistRef2} \end{figure} \section{Interval estimation}\label{sec:IntervalEstimation} This section presents the methodology to obtain, given $x_k \in \mathcal{X}$, an interval estimation of the corresponding output $y_k$. Given $x_k\in \mathcal{X}$, the data set $$ \mathcal{D} = \set{\bmat{c} y_i \\ x_i \emat}{ i=1,\ldots,N} \subset \mathcal{Y}\times\mathcal{X},$$ and the nonnegative scalars $c$ and $\gamma$, the empirical {\bf conditional} p.d.f. in $\mathcal{Y} \times \mathcal{X}$ is defined as $$ {\rm{ecp}}_{\gamma, c}(y,x_k,\mathcal{D}) = \frac{\exp\left(-c J_{\gamma}(\bmat{c} y \\ x_k \emat, \mathcal{D} )\right)}{\int\limits_{\mathcal{Y}} \exp\left(-c J_\gamma(\bmat{c} \hat{y} \\ x_k \emat, \mathcal{D})\right)d\hat{y}}, \; \forall y\in \mathcal{Y}. $$ The empirical conditional p.d.f. serves to model the probability of $y$ given the occurrence of $x_k$. We now show how to use this notion to compute, given $x_k$, an interval estimation of $y_k$.
First, in order to simplify the numerical integration required to compute the interval estimations, we approximate the set ${\mathcal{Y}}$ with a set $\bar{{\mathcal{Y}}}$ of finite cardinality. That is, we consider the set $$ \bar{{\mathcal{Y}}}=\{{\bar{y}}_1, \ldots, {\bar{y}}_M \},$$ where ${\bar{y}}_j< {\bar{y}}_{j+1}$, $j=1,\ldots,M-1$, and the extreme values of $\bar{{\mathcal{Y}}}$ are chosen to guarantee that $y_k$ belongs to $[{\bar{y}}_1,{\bar{y}}_M]$ with high probability. A reasonable procedure to construct the set $\bar{{\mathcal{Y}}}$ is to define ${\bar{y}}_1,\ldots,{\bar{y}}_M$ as follows \begin{eqnarray*} {\bar{y}}_1&=&\min\limits_{i=1,\ldots,N} y_i\\ {\bar{y}}_M&=&\max\limits_{i=1,\ldots,N} y_i\\ {\bar{y}}_j&=& {\bar{y}}_1+ \left(\frac{{\bar{y}}_M-{\bar{y}}_1}{M-1}\right)(j-1), \; j=2,\ldots,M-1. \end{eqnarray*} We notice that larger values of $M$ provide better approximations of ${\mathcal{Y}}$ at the expense of a larger computational burden. Given $x_k\in {\mathcal{X}}$, we define the \textit{discrete} empirical conditional distribution, at each point of $\bar{{\mathcal{Y}}}$, as \begin{equation} \overline{{\rm{ecp}}}_{\gamma, c}(y,x_k,\mathcal{D}) = \frac{\exp\left(-c J_{\gamma}(\bmat{c} y \\ x_k \emat, \mathcal{D} )\right)}{\Sum{j=1}{M} \exp\left(-c J_\gamma(\bmat{c} {\bar{y}}_j \\ x_k \emat, \mathcal{D})\right)}. \label{eq:decp} \end{equation} By construction, $$\Sum{\ell=1}{M} \overline{{\rm{ecp}}}_{\gamma, c}({\bar{y}}_\ell,x_k,\mathcal{D})=1.$$ This discrete empirical conditional p.d.f. defines a conditioned probability distribution on $\bar{{\mathcal{Y}}}$ (given $x_k$), which we denote ${\rm{Prob}}_{\bar{{\mathcal{Y}}}|x_k}$. According to this discrete distribution, we have that ${\bar{y}}_\ell$ (i.e.
the $\ell$-th element of $\bar{{\mathcal{Y}}}$) satisfies \begin{eqnarray} {\rm{Prob}}_{\bar{{\mathcal{Y}}}|x_k} \, \{y \leq \bar{y}_\ell\} &=& \Sum{j=1}{\ell} \overline{{\rm{ecp}}}_{\gamma, c}({\bar{y}}_j,x_k,\mathcal{D}), \label{equ:prob:lower} \\ {\rm{Prob}}_{\bar{{\mathcal{Y}}}|x_k} \, \{y \geq \bar{y}_\ell\} &=& \Sum{j=\ell}{M}\overline{{\rm{ecp}}}_{\gamma, c}({\bar{y}}_j,x_k,\mathcal{D}). \label{equ:prob:upper} \end{eqnarray} Given $x_k$, $\gamma\geq 0$, $c\geq 0$ and $\qp\in(0,1)$, we define the empirical upper conditioned $\qp$-quantile, denoted by $y_{\qp}^+$, as the \textit{smallest} element of $\bar{{\mathcal{Y}}}$ that satisfies $$ {\rm{Prob}}_{\bar{{\mathcal{Y}}}|x_k} \,\{ y \leq y_{\qp}^+\} \geq 1-\qp. $$ In a similar way, the empirical lower conditioned $\qp$-quantile, denoted by $y_\qp^-$, is defined as the \textit{largest} element of $\bar{{\mathcal{Y}}}$ that satisfies $$ {\rm{Prob}}_{\bar{{\mathcal{Y}}}|x_k} \,\{ y \geq y_{\qp}^-\} \geq 1-\qp. $$ Given $x_k$ and $\qp\in(0,1)$, the interval prediction for $y_k$ is $[y_\qp^-,y_\qp^+]$. According to the discrete distribution ${\rm{Prob}}_{\bar{{\mathcal{Y}}}|x_k}$, and the definitions of $y_\qp^-$ and $y_\qp^+$, we have $$ {\rm{Prob}}_{\bar{{\mathcal{Y}}}|x_k}\{y\in [y^-_\qp,y^+_\qp] \} \geq 1 - 2\qp.$$ The preceding discussion shows how to compute the interval prediction $[y_\qp^-,y_\qp^+]$ for a given $x_k$, $\qp$ and pair ($\gamma,c$). See Algorithm \ref{alg:ComputationInterval} for a detailed description of the procedure. \begin{remark}[Conditioned median]\label{remark:median} Given the occurrence of $x_k$, a sensible estimation for $y_k$ is the conditioned median $y_m(x_k,\gamma,c)$, which can be approximated by the center of the interval $[y^-_{0.5},y^+_{0.5}]$. \end{remark} \begin{algorithm}[h] \caption{Interval estimation $[y_\qp^-(x,\gamma,c), y_\qp^+(x,\gamma,c) ]$.
\label{alg:ComputationInterval}} \begin{algorithmic}[1] \REQUIRE $x$, $\qp\in(0,1)$, $\gamma\geq 0$, $c\geq 0$, ${\mathcal{D}}$, $\bar{{\mathcal{Y}}}=\{{\bar{y}}_1,\ldots, {\bar{y}}_M\}$. \ENSURE $y_\qp^-$, $ y_\qp^+$. \STATE Obtain the dissimilarity function (see Definition \ref{def1}) for each element of $\bar{{\mathcal{Y}}}$: $$ d_j = J_{\gamma}\left(\bmat{c} {\bar{y}}_j \\ x \emat , \mathcal{D} \right), \; j=1,\ldots,M.$$ \STATE Compute the conditioned probabilities (see equation (\ref{eq:decp})): $$ p_j = \overline{{\rm{ecp}}}_{\gamma,c}({\bar{y}}_j,x,{\mathcal{D}})= \frac{\exp\left( -cd_j\right)}{\Sum{\ell=1}{M} \exp\left( -cd_\ell\right) }, \; j=1,\ldots,M.$$ \STATE Compute the indices $\ell_\qp^+$ and $\ell_\qp^-$ corresponding to the lower and upper conditioned $\qp$-quantiles (see (\ref{equ:prob:lower}) and (\ref{equ:prob:upper})): \begin{eqnarray*} \ell_\qp^+&=&\mbox{smallest integer $\ell$ satisfying } \Sum{ j=1 }{\ell} p_j \geq 1-\qp,\\ \ell_\qp^-&=&\mbox{largest integer $\ell$ satisfying } \Sum{j=\ell}{M} p_j \geq 1-\qp.\\ \end{eqnarray*} \STATE $y^-_\qp={\bar{y}}_{\ell_\qp^-}$ and $y^+_\qp={\bar{y}}_{\ell_\qp^+}$. \end{algorithmic} \end{algorithm} The properties of the prediction intervals obtained using the procedure detailed above rely on the specific choice of $\gamma$ and $c$, since they determine the underlying empirical distribution (see the example of section \ref{sec:clarifying}). Given $\qp\in(0,1)$, we now detail how to obtain a pair $(\gamma^*_\qp,c^*_\qp)$ such that sharp interval estimations are obtained, while meeting the probabilistic specifications (determined by $\qp$) in a validation set $$ {\cal{V}} = \set{\bmat{c} {\tilde{y}}_s \\ {\tilde{x}}_s \emat}{ s=1,\ldots,N_{{\cal{V}}}} \subset {\mathcal{Y}}\times{\mathcal{X}}.$$ Let us now analyze the role of parameter $c\geq 0$ in the discrete empirical conditioned distribution given in equation (\ref{eq:decp}).
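The steps of Algorithm \ref{alg:ComputationInterval} can be sketched as follows (0-indexed, unlike the 1-indexed notation of the algorithm; the dissimilarity values $d_j$ are placeholders standing in for actual evaluations of $J_\gamma$):

```python
import math

# Sketch of Algorithm 1: build the grid Y-bar from the observed outputs,
# turn dissimilarity values d_j into the discrete conditional
# probabilities of equation (eq:decp), and locate the lower/upper
# conditioned quantiles.  The d_j values are placeholders.

def build_grid(ys, M):
    lo, hi = min(ys), max(ys)
    return [lo + (hi - lo) * j / (M - 1) for j in range(M)]

def discrete_ecp(d, c):
    w = [math.exp(-c * dj) for dj in d]
    s = sum(w)
    return [wj / s for wj in w]

def quantile_indices(p, alpha):
    M, target = len(p), 1.0 - alpha
    acc, l_plus = 0.0, M - 1
    for l in range(M):                  # smallest l with cumsum >= 1-alpha
        acc += p[l]
        if acc >= target:
            l_plus = l
            break
    acc, l_minus = 0.0, 0
    for l in range(M - 1, -1, -1):      # largest l with tail sum >= 1-alpha
        acc += p[l]
        if acc >= target:
            l_minus = l
            break
    return l_minus, l_plus

grid = build_grid([0.2, 1.0, 0.5, 0.8], M=5)
d = [0.9, 0.4, 0.2, 0.4, 0.9]           # placeholder dissimilarities
p = discrete_ecp(d, c=2.0)
lo, hi = quantile_indices(p, alpha=0.1)
print(round(grid[lo], 6), round(grid[hi], 6))   # the interval [y^-, y^+]
```

For this toy distribution the returned interval contains, by construction, at least a $1-2\qp$ fraction of the discrete probability mass.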
On the one hand, the choice $c=0$ provides a flat distribution in which each element of $\bar{{\mathcal{Y}}}$ has a conditioned probability equal to $\frac{1}{M}$. On the other hand, large values of $c$ provide narrow distributions centered around the point in $\bar{{\mathcal{Y}}}$ that minimizes, given $x_k$, the dissimilarity function $J_\gamma(\cdot,{\mathcal{D}})$. Consequently, for a fixed value of $\gamma$, larger values of $c$ reduce the size of the obtained interval at the expense of increasing the fraction of outputs that are not contained in the interval estimations. Therefore, given $\gamma$, the corresponding value for $c$ should be chosen as the largest value of $c$ that guarantees in the validation set that the obtained intervals contain the outputs with the desired probability. \begin{algorithm}[h] \caption{Optimal value of $c\geq 0$, for given $\gamma\geq 0$ and $\qp\in(0,1)$ \label{alg:computation:c}} \begin{algorithmic}[1] \REQUIRE $\qp\in(0,1)$, $\gamma\geq 0$, $c_{\max}>0$ and $\epsilon>0$, ${\mathcal{D}}$, $\bar{{\mathcal{Y}}}$ and the validation data set ${\cal{V}} = \set{\bmat{c} {\tilde{y}}_s \\ {\tilde{x}}_s \emat}{ s=1,\ldots,N_{{\cal{V}}}} \subset {\mathcal{Y}}\times{\mathcal{X}}$. \ENSURE $c_\gamma$. \STATE $c_{\min}=0$. \WHILE { $c_{\max}-c_{\min} \geq \epsilon$} \STATE $c=\frac{1}{2}(c_{\max}+c_{\min})$. \STATE Compute, using Algorithm \ref{alg:ComputationInterval}, the $N_{\cal{V}}$ intervals $$ I_s=[y_\qp^-({\tilde{x}}_s,\gamma,c), y_\qp^+({\tilde{x}}_s,\gamma,c)], \; s=1,\ldots,N_{\cal{V}}.
$$ \STATE Make $n^+_{\rm{viol}}$ equal to the number of violations of the upper constraints $$ {\tilde{y}}_s \leq y_\qp^+({\tilde{x}}_s,\gamma,c), \; s=1,\ldots,N_{\cal{V}},$$ and $n^-_{\rm{viol}}$ equal to the number of violations of the lower constraints $${\tilde{y}}_s \geq y_\qp^-({\tilde{x}}_s,\gamma,c), \; s=1,\ldots,N_{\cal{V}}.$$ \IF{ $ \fracg{\max\{n^+_{\rm{viol}},n^-_{\rm{viol}}\}}{N_{\cal{V}}}< \qp $} \STATE $c_{\min}=c$, \ELSE{} \STATE $c_{\max}=c$. \ENDIF \ENDWHILE \STATE $c_\gamma=c_{\min}$. \end{algorithmic} \end{algorithm} From the discussion above, we have that the parameter $c$ corresponding to a particular choice of $\gamma>0$ (denoted $c_\gamma$) is determined by $\qp$. As detailed in Algorithm \ref{alg:computation:c}, $c_\gamma$ is chosen as the largest value of $c$ (up to a given accuracy $\epsilon>0$) that guarantees in the validation set that the obtained confidence intervals contain the outputs with the desired probability. That is, no smaller than $1-2\qp$. Parameter $\gamma>0$ can be obtained by maximizing the likelihood ratio which, for a specific $\gamma$ and corresponding $c_\gamma$, is defined as $$ L_{\gamma} = \sum_{s=1}^{N_{\cal{V}}} \textrm{log}\left( \textrm{ecp}_{\gamma,c_\gamma} ({\tilde{y}}_{s},{\tilde{x}}_{s},\mathcal{D}) \right). $$ Using $\bar{{\mathcal{Y}}}=\{{\bar{y}}_1,\ldots,{\bar{y}}_M\}$, a numerical approximation to the optimal value of $\gamma$ is given by \begin{equation} \gamma_\qp^* \approx \arg\max_{\gamma\in \Gamma }\sum_{s=1}^{N_{\cal{V}}} \textrm{log}\left( \frac{\exp\left(-c_\gamma J_{\gamma}(\bmat{c} {\tilde{y}}_s \\ {\tilde{x}}_s \emat, \mathcal{D} )\right)}{\Sum{j=1}{M} \exp\left(-c_\gamma J_\gamma(\bmat{c} {\bar{y}}_j \\ {\tilde{x}}_s \emat, \mathcal{D})\right)} \right), \label{eq:max_lik} \end{equation} where $\Gamma$ is a set containing all the possible values considered for $\gamma$. \begin{remark} Other criteria can be used to compute $\gamma^*_\qp$.
For example, $\gamma^*_\qp$ could be obtained by minimizing a cost function $Q_{\gamma}$ that penalizes the average length of the intervals and/or the average prediction error with respect to the conditioned median introduced in Remark \ref{remark:median}. We notice, however, that explicitly minimizing the size of the intervals may translate into an increased violation rate when the validation set does not contain a sufficiently large number of samples. \end{remark} \section{Example} \label{sec:example} \begin{table*}[htbp] \caption{Results for the Lorenz Attractor, interval $[o_{5\%},o_{95\%}]$.} \label{tab:tabLorenz90} \centering \begin{tabular}{c|cc|cc|cc} \hline & \multicolumn{2}{|c|}{\textbf{Proposed approach}} & \multicolumn{2}{|c}{\textbf{Quantile Regression}} & \multicolumn{2}{|c}{\textbf{Set Membership}} \\ \hline Data set length & \textbf{Empirical Probability} & \textbf{Interval Width} & \textbf{Empirical Probability} & \textbf{Interval Width} & \textbf{Empirical Probability} & \textbf{Interval Width} \\ \hline 200 & 0.9140 & 2.0578 & 0.8290 & 3.0965 & 0.8960 & 2.9378 \\ 350 & 0.8990 & 1.9352 & 0.8260 & 3.0550 & 0.9100 & 2.4773 \\ 500 & 0.9070 & 2.0223 & 0.8410 & 3.2450 & 0.9120 & 2.5671\\ \hline \end{tabular} \end{table*} \begin{table*}[htbp] \caption{Results for the Lorenz Attractor, interval $[o_{10\%},o_{90\%}]$.} \label{tab:tabLorenz80} \centering \begin{tabular}{c|cc|cc|cc} \hline & \multicolumn{2}{|c|}{\textbf{Proposed approach}} & \multicolumn{2}{|c}{\textbf{Quantile Regression}} & \multicolumn{2}{|c}{\textbf{Set Membership}} \\ \hline Data set length & \textbf{Empirical Probability} & \textbf{Interval Width} & \textbf{Empirical Probability} & \textbf{Interval Width} & \textbf{Empirical Probability} & \textbf{Interval Width} \\ \hline 200 & 0.8060 & 1.6053 & 0.7450 & 2.2776 & 0.8160 & 2.4248 \\ 350 & 0.8060 & 1.6164 & 0.7270 & 2.0607 & 0.7900 & 1.9797 \\ 500 & 0.8100 & 1.6195 & 0.7630 & 2.4371 & 0.8100 & 2.0021\\ \hline \end{tabular} \end{table*} The
Lorenz attractor is a system of ODEs known for having chaotic solutions for certain values of its parameters. The equations that define the system are the following \begin{align} &\frac{\textrm{d}o}{\textrm{d}t} \, = \, \sigma (p-o) \nonumber \\ &\frac{\textrm{d}p}{\textrm{d}t} \,=\, o (\rho-q) - p \\ &\frac{\textrm{d}q}{\textrm{d}t} \,=\, op - \beta q\,, \nonumber \end{align} where $\sigma$, $\rho$ and $\beta$ are real scalar parameters. In this example, these parameters take the values $\sigma=10$, $\rho=28$ and $\beta=8/3$. Furthermore, in order to obtain the necessary data, the ODEs have been integrated numerically with a fixed time step of $T_s = 0.1s$ and initial conditions $o(0)=1$, $p(0)=1$ and $q(0)=1$. Here, we consider the task of forecasting the one-step-ahead value of $o$, i.e., $y_{k} = o_{k}$, using the two previous values of $o$; that is, the regressor vector is $x_k = [y_{k-1}, \,y_{k-2}]\T$. To start with, $2500$ data points are considered, normalized in the $[0,1]$ range. Different sizes for the data set $\mathcal{D}$ are considered in this example ($200$, $350$ and $500$ points). The validation set ${\cal{V}}$ consists of $1000$ data points and another $1000$ data points are used as a test set, denoted by $\mathcal{S}$ (note that $\mathcal{D}$, ${\cal{V}}$ and $\mathcal{S}$ are mutually disjoint sets). The set $\Gamma$ is taken from $[0,3]$ using a $0.1$ sampling step. On the other hand, $\mathcal{\bar{Y}}$ is obtained from a grid of equally spaced points in the interval $[-0.1893,1.2298]$ sampled with a $1.4191\times 10^{-4}$ step. Two different techniques will be considered as benchmarks. The first one is quantile regression \cite{Koenker:1978:RegressionQuantiles}, \cite{Davino:14}, a classical method for the estimation of conditioned quantiles. The second one is the set-membership method described in \cite{milanese2004set,milanese2011unified}.
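The data-generation step described above can be sketched as follows; the use of a fourth-order Runge-Kutta integrator with internal substeps is an assumption, since the text only specifies a fixed sampling step of $T_s=0.1$ s:

```python
# Sketch of the data generation of the example: integrate the Lorenz
# system (sigma=10, rho=28, beta=8/3) from (1, 1, 1) and build the
# one-step-ahead regression pairs y_k = o_k, x_k = [y_{k-1}, y_{k-2}].
# RK4 with internal substeps h=0.01 is an assumption of this sketch.

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    o, p, q = s
    return (sigma * (p - o), o * (rho - q) - p, o * p - beta * q)

def rk4_step(s, h):
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = lorenz_rhs(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = lorenz_rhs(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def sample_o(n, Ts=0.1, h=0.01):
    s, out = (1.0, 1.0, 1.0), []
    steps = int(round(Ts / h))
    for _ in range(n):
        for _ in range(steps):
            s = rk4_step(s, h)
        out.append(s[0])          # keep the o-coordinate only
    return out

o = sample_o(2500)
pairs = [([o[k - 1], o[k - 2]], o[k]) for k in range(2, len(o))]
print(len(pairs))                 # 2498 regressor/output pairs
```

The resulting pairs would then be split into the disjoint sets $\mathcal{D}$, ${\cal{V}}$ and $\mathcal{S}$ and normalized, as described in the text.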
The set-membership approach is a well-known method to generate interval bounds for a time series (usually produced from a dynamical system). For the sake of comparison, to guarantee that these bounds contain the output within a prescribed probability, we choose the parameters $\epsilon, \gamma$ of \cite{milanese2004set} such that the resulting empirical probability of containing a sample within the validation set ${\cal{V}}$ is no smaller than $1-2\tau$. The numerical results of the proposed approach and the two benchmark techniques are shown in table \ref{tab:tabLorenz90} for a $[o_{5\%},o_{95\%}]$ interval, that is, $\qp=0.05$, and in table \ref{tab:tabLorenz80} for $[o_{10\%},o_{90\%}]$ ($\qp=0.1$). The output of the test data should be contained in the first interval with a probability of $0.9$ ($0.8$ for the second interval). The optimal value for $\gamma$ has been chosen maximizing the likelihood function $L_\gamma$ (see equation (\ref{eq:max_lik}) and figure \ref{fig:max_lik}). The empirical probability in the case of the quantile regression clearly does not meet the probabilistic specifications. In the case of the proposed approach and the set-membership method, the observed fraction of outputs that fall into the predicted intervals is much closer to the desired one. Note that, for all techniques, the obtained empirical probability can be below the desired probability. This could be addressed by relying on a probabilistic scaling scheme \cite{mirasierra2021prediction} or on probabilistic validation schemes \cite{carnerero2021probabilistically}, \cite{karg2019probabilistic}. Regarding the interval width, the proposed approach clearly manages to obtain the smallest intervals for each data set. For the $[o_{5\%},o_{95\%}]$ interval, the interval width is on average $24.35\%$ smaller than those provided by the set-membership method and $35.96\%$ smaller than those provided by quantile regression.
On the other hand, for the $[o_{10\%},o_{90\%}]$ interval, the intervals with the proposed technique are $23.75\%$ smaller than those with set-membership and $28.21\%$ smaller than those with quantile regression. Taking into account the empirical probability values and the interval widths, we can conclude that the proposed approach obtains the best results. Finally, in figure \ref{fig:lorenz_90}, we show the test set $\mathcal{S}$ along with the computed intervals $[o_{5\%},o_{95\%}]$ of the proposed approach. Note that the intervals are wider when there are trend changes in the output. Furthermore, figure \ref{fig:max_lik} shows an example of the value of the maximum likelihood ratio $L_\gamma$ as a function of $\gamma$ (in this case for the data set of $200$ points and interval $[o_{5\%},o_{95\%}]$). \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{lorenz_90_v2.eps} \caption{Test set and computed intervals for Lorenz Attractor (interval $[o_{5\%},o_{95\%}]$).} \label{fig:lorenz_90} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{Likelihood_2.eps} \caption{Maximum likelihood ratio as a function of $\gamma$.} \label{fig:max_lik} \end{figure} \section{Conclusions} \label{section6} This work presents a new approach to obtain an interval predictor to be used in nonlinear systems. The methodology relies on a parameterized family of dissimilarity functions, which are used to estimate the probability density function of the system output, conditioned on past inputs and outputs. A family of empirical probability density functions, parameterized by means of two parameters, is proposed. It is shown that the proposed family encompasses the multivariable normal probability density function as a particular case. The methodology allows us to provide probabilistic interval predictions of the output of the system.
For a particular choice of the tuning parameters, the conditional probability density function of the output attains a maximum at the output estimated by least-squares regression. This shows that the proposed method constitutes a generalization of classical estimation methods. A validation scheme is used to tune the two parameters on which the methodology relies ($c$ and $\gamma$). The method has been applied to generate interval predictions, which compare favourably with the ones obtained by means of quantile regression and set-membership methods. \section*{Appendix A} \textbf{Proof of Property 2}: Denote $\lambda = [\lambda_1 \; \lambda_2 \; ... \; \lambda_N]\T$. We solve the optimization problem using a dual formulation where $\mu\in \R^{n+1}$ denotes the multipliers associated with the equality constraint $$ z=\Sum{i=1}{N}\lambda_i z_i = Z\lambda,$$ and $\nu$ is the multiplier corresponding to the equality $$ 1 = \Sum{i=1}{N} \lambda_i = u\T \lambda.$$ The Lagrange function is $$\mathcal{L}(\lambda,\mu,\nu) = \lambda\T\lambda + \mu\T(Z\lambda-z) +\nu(u\T\lambda-1).$$ Denote by $\lambda^*$, $\mu^*$ and $\nu^*$ the optimal values for the primal and dual variables. From $\frac{\partial \mathcal{L}(\lambda^*,\mu^*,\nu^*)}{\partial \lambda}=0$ we obtain that the optimal vector $\lambda^*$ is given by \begin{equation}\label{ec:lambda}\lambda^* = -\frac{1}{2}(Z\T\mu^*+u\nu^*).\end{equation} Since $u\T \lambda^*=1$, $Zu=N\bar{z}$ and $u\T u=N$ we can premultiply both sides of the last equality by $u\T$ to obtain \begin{eqnarray*} 1 &=& -\frac{1}{2}(u\T Z\T \mu^*+ N\nu^*)\\ & = & -\frac{N}{2}(\bar{z}\T\mu^* + \nu^*).
\end{eqnarray*} Therefore, $$\nu^* = -\frac{2}{N}-\bar{z}\T\mu^*.$$ Substituting the expression for $\nu^*$ in (\ref{ec:lambda}) yields \begin{eqnarray} \lambda^* &=& -\frac{1}{2}\left(Z\T\mu^*-u(\frac{2}{N}+\bar{z}\T\mu^*)\right) \nonumber\\ &=&\frac{u}{N} - \frac{1}{2}(Z\T- u\bar{z}\T )\mu^*.\label{ec:lambda2} \end{eqnarray} Premultiplying by $Z$ we obtain \begin{equation}\label{ec:Zlambda} Z\lambda^* = \bar{z} - \frac{1}{2}(ZZ\T-N\bar{z}\bar{z}\T)\mu^*.\end{equation} From the equality constraint $Z\lambda^*=z$ and (\ref{ec:Zlambda}) we have $$\mu^* = -2(ZZ\T-N\bar{z}\bar{z}\T)^{-1}(z-\bar{z}).$$ Substituting $\mu^*$ in equation (\ref{ec:lambda2}) we infer $$\lambda^* = \frac{u}{N} + ( Z\T-u\bar{z}\T)(ZZ\T-N\bar{z}\bar{z}\T)^{-1}(z-\bar{z}).$$ Finally, taking into account that $$u\T(Z\T-u\bar{z}\T ) = (N\bar{z}\T-N\bar{z}\T)=0$$ we obtain $$(Z\T -u\bar{z}\T )\T(Z\T -u\bar{z}\T) = ZZ\T-N\bar{z}\bar{z}\T.$$ From the last equality and the expression for $\lambda^*$ we conclude \begin{eqnarray*} J_{0}(z,\mathcal{D}) &=& (\lambda^*)\T\lambda^* \\ & =& \frac{1}{N}+(z-\bar{z})\T(ZZ\T-N\bar{z}\bar{z}\T)^{-1}(z-\bar{z}). \;\quad \blacksquare \end{eqnarray*} \section*{Appendix B} \textbf{Proof of Property 3}: From equation (\ref{equ:optimal:y}) we have that the optimal value for the estimation is $\hat{y}_k = y^* = \Sum{i=1}{N} \lambda_i^* y_i$, where for the particular case $\gamma=0$, $\lambda_i^*$, $i=1,\ldots,N$, are the optimal values of the optimization problem \begin{eqnarray*} & \min\limits_{\lambda_1,\ldots,\lambda_N} & \Sum{i=1}{N} \lambda_i^2 \\ & s.t. & x_k =\Sum{i=1}{N}\lambda_i x_i \\ && 1 = \Sum{i=1}{N}\lambda_i. \end{eqnarray*} Defining $$ R = \bmat{cccc} x_1 & x_2 & \ldots & x_N \\ 1 & 1 &\ldots & 1\emat,\;\; r_k = \bmat{c} x_k \\ 1 \emat,$$ we have that the equality constraints can be rewritten as \begin{equation}\label{equ:V:equality} R\lambda=r_k, \end{equation} where $\lambda=\bmat{cccc} \lambda_1 & \lambda_2 & \ldots & \lambda_N \emat\T$.
From the Karush-Kuhn-Tucker optimality conditions we infer that the optimal solution is given by (see subsection 10.1.1 in \cite{Boyd04}) $$ \bmat{cc} \rm{I} & R\T \\ R & 0 \emat \bmat{c} \lambda^* \\ \varphi^* \emat = \bmat{c} 0 \\ r_k\emat, $$ where $\varphi^*$ denotes the optimal dual decision variables associated with the equality constraint (\ref{equ:V:equality}) (see \cite{Boyd04}). The previous equation can be rewritten as \begin{eqnarray*} \lambda^* &=& - R\T \varphi^* \\ R \lambda^* & = &r_k. \end{eqnarray*} From here we obtain $-RR\T \varphi^* =r_k$ which implies $\varphi^*=-(RR\T)^{-1} r_k$. We finally obtain $$ \lambda^* =R\T(RR\T)^{-1}r_k.$$ Therefore, \begin{eqnarray*} \hat{y}_k &=& Y\T \lambda^* \\ & = & Y\T R\T (R R\T)^{-1} r_k\\ & = & r_k\T (RR\T)^{-1}R Y, \end{eqnarray*} where $Y=\bmat{cccc} y_1 & y_2 & \ldots & y_N\emat\T$. We notice that this corresponds to the least squares estimation obtained when we consider as regressors the vectors $\bmat{cc} x_j\T & 1 \emat\T$, $j=1,\ldots,N$ (see \cite{Ljung99}, \cite{Murphy:12}). \hfill $\blacksquare$
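The closed-form multipliers of Appendix A can be checked numerically in the scalar case; the data values below are illustrative:

```python
# Numerical sanity check (scalar data) of the closed forms of Appendix A:
#   lambda* = u/N + (Z^T - u zbar)(Z Z^T - N zbar^2)^{-1}(z - zbar),
#   J_0(z, D) = (lambda*)^T lambda* = 1/N + (z - zbar)^2 / (Z Z^T - N zbar^2).

data = [1.0, 2.0, 4.0, 5.0]     # illustrative scalar measurements
z = 3.5                          # query point
N = len(data)
zbar = sum(data) / N
denom = sum(v * v for v in data) - N * zbar ** 2
lam = [1.0 / N + (v - zbar) * (z - zbar) / denom for v in data]

# lambda* satisfies both equality constraints ...
assert abs(sum(lam) - 1.0) < 1e-9
assert abs(sum(l * v for l, v in zip(lam, data)) - z) < 1e-9
# ... and its squared norm equals the closed-form J_0.
J0 = 1.0 / N + (z - zbar) ** 2 / denom
assert abs(sum(l * l for l in lam) - J0) < 1e-9
print("closed-form J_0 verified:", round(J0, 6))
```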
\section{Half-quantized Hall conductance of the 3D TI with antiparallel magnetization alignment surfaces in Fig.~1} \begin{figure}[htbp] \includegraphics[scale=0.3]{half_hall_cond.eps} \caption{Numerical simulations of the Hall conductance of the top surface. The red curve is obtained from a 2D massive Dirac Hamiltonian. Here, the system size is $L_x\times L_y \times L_z = 30\times30\times8$, $E_F$ is the Fermi energy and $M_z$ the magnetization strength. Other parameters are $A_{1}=0.8$, $A_{2}=0.5$, $B_{1}=0.25$, $B_{2}=0.8$, $M_0=0.5$, and $M_{z}=0.15$. } \label{half_hall_cond} \end{figure} Here, we would like to demonstrate that the 3D TI with antiparallel magnetization alignment surfaces shown in Fig.~1(a) of the main text describes an axion insulator. To justify the existence of an axion insulator phase, which is featured by the half-quantized surface Hall conductance, we discretize the Hamiltonian $H$ on square lattices and calculate the Hall conductance of the surface states numerically. For a clean system, the numerical result in Fig.~\ref{half_hall_cond}, obtained through Eq.~(2) in the main text, shows that the Hall conductance of the top surface in our model is $e^2/(2h)$ when the Fermi energy $E_F$ lies in the surface gap, and approaches zero as the Fermi energy $E_F$ moves away from the gap. These results are captured by the analytic curve, which is obtained from the Hall conductance of the 2D massive surface Dirac Hamiltonian $H_{surf}^t=A_1(\sigma_x k_x + \sigma_y k_y) + M_z\sigma_z$, i.e. \begin{equation} \sigma_{xy}= \frac{M_{z}}{2|M_{z}|}\Theta(|M_{z}|-|E_{F}|)+\frac{M_{z}}{2|E_{F}|}\Theta(|E_{F}|-|M_{z}|)\nonumber \end{equation} with the Heaviside function $\Theta(\cdot)$. Consequently, such a model $H$ with an outward-pointing magnetization describes an axion insulator.
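The analytic surface Hall conductance quoted above (in units of $e^2/h$) can be evaluated with a short script; the function and variable names are illustrative:

```python
# Sketch: surface Hall conductance of the 2D massive Dirac Hamiltonian,
# in units of e^2/h,
#   sigma_xy = (M_z/2|M_z|) Theta(|M_z|-|E_F|) + (M_z/2|E_F|) Theta(|E_F|-|M_z|).

def sigma_xy(E_F, M_z):
    if abs(E_F) < abs(M_z):            # Fermi energy inside the surface gap
        return M_z / (2.0 * abs(M_z))  # half-quantized: sign(M_z)/2
    return M_z / (2.0 * abs(E_F))      # decays as the Fermi energy leaves the gap

print(sigma_xy(0.0, 0.15))   # 0.5  (half-quantized plateau)
print(sigma_xy(0.3, 0.15))   # 0.25 (M_z / (2 E_F) tail)
```

Reversing the sign of $M_z$ flips the sign of the half-quantized plateau, consistent with the antiparallel-surface configuration.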
\section{Anderson transition in a normal insulator} \begin{figure}[htbp] \includegraphics[width=\columnwidth]{normal_insulator.eps} \caption{(color online) (a) Schematic plot of a normal insulator with antiparallel magnetization alignment surfaces. (b) Hall conductance as a function of the layer index $z$, with $z=1$ for the bottom layer and $z=8$ for the top layer. Here, the disorder strength $W=0$ and the Fermi energy $E_F/M_z\approx0.083$. (c) Renormalized localization length $\Lambda=\lambda(L)/L$ against the Fermi energy $E_{F}/M_{z}$ with the disorder strength $W=1.5$. The curves correspond to different sample widths $L$. Here, the $x$ direction is periodic and the $z$ direction is open. A new set of parameters to describe a normal insulator is $A_{1}=A_{2}=0.55$, $B_{1}=B_{2}=0.25$, $M_0=-0.3$, and $M_{z}=0.12$. } \label{Figs1} \end{figure} As a comparison, we discuss the Anderson transition in a normal insulator. With the effective Hamiltonian in Eq.~(1) of the main text, we use a new set of parameters to describe a normal insulator. The geometry of the sample shown in \cref{Figs1}(a) is the same as Fig.~1(a) in the main text. Then, we investigate the Hall conductance of the sample to further verify that it is a normal insulator. Compared with the axion insulator in Fig.~3, the normal insulator manifests itself in the zero Hall conductance contributed from each layer in \cref{Figs1}(b). As we have done in the main text, we consider a 3D long bar sample of length $L_y$ and widths $L_{x}=L_{z}=L$ with periodic boundary condition in the $x$ direction and open boundary condition in the $z$ direction to calculate the localization length. The result in \cref{Figs1}(c) shows only a single critical point, which indicates a 3D Anderson metal-insulator transition, and the 2D critical phase of the axion insulator disappears here.
Therefore, we propose to probe the axion insulator state by investigating the universal signature of 2D quantum-Hall-type critical behaviors in the 3D magnetic TI. \section{Discussions in the antiferromagnetic topological insulator $\mathbf{MnBi_2Te_4}$} \begin{figure}[htbp] \includegraphics[scale=0.3]{mnbite1.eps} \caption{(color online) Lattice structure of the antiferromagnetic topological insulator $\mathrm{MnBi_2Te_4}$ between two adjacent layers. } \label{mnbite_device} \end{figure} In the main text, we have investigated the disorder-induced Anderson transition of an axion insulator consisting of a 3D time-reversal invariant topological insulator with antiparallel magnetization alignment surfaces, and found a 2D phase transition between the axion insulating phase and the Anderson insulating phase, which does not occur in normal insulators. To confirm that this property is model independent, it is necessary to repeat the finite-size scaling analysis in the antiferromagnetic (AFM) topological insulator $\mathrm{MnBi_2Te_4}$. The AFM $\mathrm{MnBi_2Te_4}$ is fabricated by stacking septuple layers with opposite magnetization between the neighbouring septuple layers [see \cref{mnbite_device}]. Besides, it forms an axion insulator (quantum anomalous Hall insulator) for even (odd) septuple layers~\cite{MnBiTe_FPC_wangjing}. The effective low-energy model for the AFM $\mathrm{MnBi_2Te_4}$ reads~\cite{MnBiTe_FPC_wangjing,dassarma}, \begin{equation} H=\sum_{i=1}^{4} d_{i}(\mathbf{k}) \Gamma_{i}+ M(z) \cdot s_z\otimes \sigma_0\label{mnbite_H} \end{equation} where $d_{1}=A_{1}k_{x}$, $d_{2}=A_{1}k_{y}$, $d_{3}=A_{2}k_{z}$, and $d_{4}=M_0-B_{1} k_{z}^{2}-B_{2}\left(k_{x}^{2}+k_{y}^{2}\right)$. $\Gamma_{i}=s_{i} \otimes \sigma_{1}$ for $i=1,2,3$, and $\Gamma_{4}=s_{0} \otimes \sigma_{3}$. $s_{i}$ and $\sigma_{i}$ are the Pauli matrices for the spin and orbital degrees of freedom.
The last term of \cref{mnbite_H} characterizes the magnetization of the $z$-th layer by $M(z)=\left[-2\cdot \mathrm{mod}(z,2)+1\right]\cdot M_0$, where $\mathrm{mod}(\cdot)$ returns the remainder of a division. Based on even-layer $\mathrm{MnBi_2Te_4}$, we calculate the renormalized localization length as we have done in the main text, and show it in \cref{mnbite_lambda}. Compared with Fig.~2(a) in the main text, \cref{mnbite_lambda} shows that the axion insulator $\mathrm{MnBi_2Te_4}$ similarly undergoes multiple phase transitions with increasing Fermi energy $E_F$. To be specific, for $W=1.5$ in \cref{mnbite_lambda}, one can identify an axion insulator phase with $d\Lambda/dL<0$ when $|E_F/M_z|\lesssim 1$, where the Fermi energy $E_F$ is within the surface Dirac gap $M_z$. With increasing Fermi energy, we also find that the system goes through a critical phase where $d\Lambda/dL=0$ and then arrives at a normal insulator phase, just like the model we use in the main text. Therefore, we conclude that the 2D phase transition of an axion insulator is model-independent. \begin{figure}[htbp] \includegraphics[scale=0.3]{mnbite3.eps} \caption{(color online) Renormalized localization length $\Lambda=\lambda(L)/L$ against the Fermi energy $E_{F}/M_{z}$ with the disorder strength $W=1.5$. The curves correspond to different sample widths $L$. Here, the $x$ direction is periodic and the $z$ direction is open. Parameters of the Hamiltonian are $A_{1}=A_{2}=0.55$, $B_{1}=B_{2}=0.25$, $M_0=0.3$, and $M_{z}=0.12$. } \label{mnbite_lambda} \end{figure}
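The staggered layer magnetization $M(z)$ used above alternates sign between neighbouring septuple layers, as a short check illustrates (the function name and amplitude value are illustrative):

```python
# Sketch: the staggered layer magnetization of the AFM stacking,
#   M(z) = [-2 mod(z, 2) + 1] * amplitude,
# which alternates sign between neighbouring septuple layers.

def M(z, amplitude):
    return (-2 * (z % 2) + 1) * amplitude

signs = [M(z, 1.0) for z in range(1, 7)]
print(signs)   # [-1.0, 1.0, -1.0, 1.0, -1.0, 1.0]
```

For an even number of layers the magnetizations cancel pairwise, consistent with the axion-insulator (rather than quantum anomalous Hall) phase of even-layer stacks.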
\section{Introduction} In dynamics we are interested in the orbits under some law of motion, but it is often more fruitful to take a global view of the dynamics and understand it from a probabilistic point of view. That is, instead of following the orbit of each individual point, we want to extract meaningful information about most orbits, or at least about those which are (for some reason) particularly important. This is where ergodic theory comes into play, as it deals with dynamical systems from a probabilistic point of view.\\ From this point of view, invariant measures have played an important role in the theory. In the 1970s, Sinai, Ruelle and Bowen (\cite{Barreira} and references therein) started to study equilibrium states in dynamical systems, inspired by techniques and results from statistical mechanics.\\ Given a diffeomorphism $f$ of a compact manifold $M$, an \textit{equilibrium state} for a continuous \textit{potential} $\phi:M\to \mathbb{R}$ is an $f$-invariant Borel probability measure $\mu$ that maximizes the quantity $h_{\mu}(f)+\int\phi \,d\mu$ among all $f$-invariant measures. When $\phi\equiv 0,$ the measure $\mu$ is called a \textit{measure of maximal entropy}. An interesting problem of ergodic theory is to determine the existence and uniqueness of equilibrium states with respect to some class of potentials. In the setting of uniformly hyperbolic diffeomorphisms, Bowen \cite{Bowen1} solved this problem for Hölder continuous potentials, and in other contexts there exist many approaches to this problem. \\ Since the pioneering works of Sinai, Ruelle and Bowen on thermodynamical formalism many results have been obtained, in particular in the context of non-uniformly hyperbolic maps and partially hyperbolic diffeomorphisms. Climenhaga and Thompson extended Bowen's techniques to a nonuniform setting \cite{ClimenhagaFisherThompson,ClimenhagaFisherThompson1}. Using those general techniques, Climenhaga \emph{et al.} \cite{ClimenhagaFisherThompson1, ClimenhagaFisherThompson} proved that the robustly transitive diffeomorphisms introduced by Mañé and Bonatti-Viana have a unique equilibrium state for natural classes of potentials. For potentials with a small variation condition, Rios and Siqueira \cite{RiosSiqueira} obtained uniqueness of equilibrium states for partially hyperbolic horseshoes. For the same class of potentials that we study in this work, Carvalho and Pérez \cite{CarvalhoPerez} obtained similar results about equilibrium states for a class of skew products. Recently, Climenhaga \emph{et al.} \cite{ClimenhagaPesinZelerowicz} studied uniqueness of equilibrium states for certain transitive partially hyperbolic diffeomorphisms.\\ In this work we are concerned with thermodynamical formalism in the less explored context of partially hyperbolic diffeomorphisms with higher dimensional center foliation (i.e., two dimensional or higher) which are isotopic to Anosov diffeomorphisms. Using disintegration techniques of measures along the center foliation, under some extra assumptions, we obtain results similar to those of Crisostomo and Tahzibi \cite{CrisostomoTahzibi}. We give conditions on $f$ and its center foliation under which one can control the geometry of the preimages of the semiconjugacy. Roughly speaking, our conditions are \begin{itemize} \item that $f$ is dynamically coherent and the center foliation splits into line-bundles, and \item that the preimage of a point $x$ under the semiconjugacy is contained in a unique center leaf and has a ``controlled geometry'', namely a rectangle-like structure. \end{itemize} In particular these assumptions are satisfied by the maps considered by Buzzi \emph{et al.} \cite{BuzziFisherSambarinoVasquez} and Carrasco \emph{et al.} \cite{CarrascoLizanaPujalsVasquez}; such maps are isotopic to linear Anosov automorphisms. \\ Under these hypotheses we address the problem of existence and uniqueness (or finiteness) of the equilibrium states.
In what follows, $C$ denotes the set where the semiconjugacy between $f$ and its linearization is not injective. We now present our results: \begin{thmx}\label{th.Equilibrium} Let $f:\mathbb{T}^4\to \mathbb{T}^4$ be a partially hyperbolic diffeomorphism isotopic to Anosov. Under the above assumptions, if the measure of $C$ is zero we have uniqueness of the equilibrium state. Otherwise, the disintegration of the measure on the center foliation is atomic and, under some additional assumptions, the equilibrium state is ``virtually hyperbolic'' and not unique. \end{thmx} By virtually hyperbolic we mean that there exists a full measure invariant subset which intersects each center leaf in at most one point. This result actually follows from a more general one regarding ergodic measures: \begin{thmx}\label{th.Ergodic} Let $f:\mathbb{T}^4\to \mathbb{T}^4$ be a partially hyperbolic diffeomorphism isotopic to Anosov. Under the above assumptions, if the measure of $C$ is zero the system is almost conjugate to an Anosov system. Otherwise, the disintegration of the measure is atomic and, under some additional assumptions, the measure is ``virtually hyperbolic''. \end{thmx} Precise statements of the assumptions and the main results are given in Section \ref{sec.Main}, in Theorem \ref{main:equilibrium} and Theorem \ref{main:ergodic} respectively. The novelty with respect to previous works is the study of partially hyperbolic diffeomorphisms with 2-dimensional (or higher) center foliations. This is done by a careful study of the disintegration of the measures along the line-bundles of the center foliation. \\ \begin{remark} We remark that the results of Theorems \ref{th.Equilibrium} and \ref{th.Ergodic} remain valid for partially hyperbolic diffeomorphisms $f : \mathbb{T}^n \to \mathbb{T}^n$ isotopic to Anosov with $k$-dimensional center bundle, $1\leq k <n$, provided that they satisfy the assumptions above.
The proof for the higher dimensional case follows in a similar way as in the 2-dimensional case. A further discussion about it will be presented in Section \ref{sec.ProofMain}. \end{remark} The remainder of the article is organized as follows. In the next section we discuss some necessary preliminaries on equilibrium states, partially hyperbolic dynamics and disintegration of measures. In Section \ref{sec.Main} we give precise statements of our main results, while their proofs are presented in Sections \ref{sec.ProofMain}, \ref{sec.ProofCor} and \ref{sec.ProofEq}. \section{Preliminaries} \subsection{Entropy and equilibrium states} Let $(M,d)$ be a compact metric space and $f:M\to M$ a continuous map. For $\delta\in (0,1)$, $n\in \mathbb{N}$ and $\epsilon>0$, a finite set $E\subset M$ is called an $(n,\epsilon,\delta)$-\textit{covering} if the union of all dynamical balls $B_{n}(x,\epsilon)=\{y\in M: d(f^{i}(x),f^{i}(y))<\epsilon, \ 0\leq i<n\}$ centered at points $x\in E$ has $\mu$-measure greater than $\delta$. The \textit{metric entropy} is defined by $$h_{\mu}(f)=\displaystyle\lim_{\epsilon \rightarrow 0}\displaystyle\limsup_{n\rightarrow \infty}\frac{1}{n}\log \min\{\# E: E\subseteq M \ {\rm is \ an \ }(n,\epsilon,\delta)\mbox{\rm{-covering}\ set} \}.$$ A set $E\subseteq M$ is said to be $(n,\epsilon)$-\textit{separated} if for every $x, y \in E$, $x \neq y$, there exists $i\in\{0,\ldots, n-1\}$ such that $d(f^{i}x, f^{i}y)\geq \epsilon$. The \textit{topological entropy} on a non-empty compact set $K\subset M$ is defined by $$h(f,K)=\displaystyle\lim_{\epsilon \rightarrow 0}\displaystyle\limsup_{n\rightarrow \infty}\frac{1}{n}\log \sup\{\# E: E\subseteq K \ \mbox{\rm{is}} \ (n,\epsilon)\mbox{\rm{-separated}} \}.$$ We denote $h_{top}(f):=h(f,M).$ \begin{definition} Let $f:M\rightarrow M$ be a continuous map on a compact manifold $M$.
An $f$-invariant Borel probability measure $\mu$ is an \textbf{equilibrium state} for $f$ with respect to a potential $\phi\in C^{0}(M,\mathbb{R})$ if it satisfies $$h_{\mu}(f)+\displaystyle\int \phi d\mu= \sup\{h_{\nu}(f)+\displaystyle\int \phi d\nu:\nu\in \mathcal{M}(f)\},$$ where $h_{\mu}(f)$ is the metric entropy of $f$ with respect to $\mu$. If $\phi\equiv 0$, $\mu$ is called a \textbf{measure of maximal entropy}. \end{definition} \subsection{Partial hyperbolicity} A diffeomorphism $f:M\to M$ of a compact manifold $M$ is said to be \textit{partially hyperbolic} if: \begin{enumerate} \item There exists a non-trivial splitting of the tangent bundle $TM=E^s\oplus E^c\oplus E^u$ invariant under the derivative $Df$; \item There exists a Riemannian metric $\|\cdot\|$ on $M$ and positive continuous functions $\nu$, $\hat{\nu}$, $\gamma$, $\hat{\gamma}$ with $\nu$, $\hat{\nu}<1$ and $\nu<\gamma<{\hat\gamma}^{-1}<{\hat{\nu}}^{-1}$ such that, for any unit vector $v\in T_xM$, \begin{alignat*}{2} & \|Df(x)v \| < \nu(x) & \quad & \text{if } v\in E^s(x), \\ \gamma(x) < & \|Df(x)v \| < {\hat{\gamma}(x)}^{-1} & & \text{if } v\in E^c(x), \\ {\hat{\nu}(x)}^{-1} < & \|Df(x)v\| & & \text{if } v\in E^u(x). \end{alignat*} \end{enumerate} The bundles $E^s$, $E^u$ and $E^c$ are called the stable, unstable and center bundles, respectively. It is well-known that the stable and unstable bundles integrate to $f$-invariant foliations $\mathcal{F}^s$ and $\mathcal{F}^u$ \cite{HirschPughShub}. The leaf of $\mathcal{F}^{\sigma}$ containing $x$ will be denoted $W^{\sigma}(x)$, for $\sigma= s,u$. Such foliations are $f$-invariant, meaning that $f$ sends leaves to leaves.
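A concrete linear instance of this definition, close in spirit to the $\mathbb{T}^4$ maps studied later, may help fix the rate inequalities: take the block-diagonal automorphism of $\mathbb{T}^4$ whose first block is the hyperbolic ``cat map'' matrix and whose second block is parabolic. The cat block provides $E^s$ and $E^u$, while the parabolic block spans a two-dimensional center whose growth rates sit strictly between the hyperbolic rates. The sketch below (an illustration chosen by us, not an example taken from the paper) verifies $\nu<\gamma<\hat{\gamma}^{-1}<\hat{\nu}^{-1}$ numerically.

```python
import numpy as np

# Block-diagonal automorphism of T^4 = R^4 / Z^4 (illustrative):
# a hyperbolic "cat map" block and a parabolic block acting on the 2D center.
cat = np.array([[2.0, 1.0], [1.0, 1.0]])
par = np.array([[1.0, 1.0], [0.0, 1.0]])
A = np.block([[cat, np.zeros((2, 2))], [np.zeros((2, 2)), par]])

# Stable/unstable directions: eigenvectors of the cat block, padded to R^4.
lams, vecs = np.linalg.eigh(cat)          # symmetric block, eigenvalues ascending
nu, nu_hat_inv = lams[0], lams[1]         # contraction and expansion rates
v_s = np.r_[vecs[:, 0], 0, 0]
v_u = np.r_[vecs[:, 1], 0, 0]

# Center rates: extreme singular values of the parabolic block.
gamma, gamma_hat_inv = np.linalg.svd(par)[1][::-1]

assert np.isclose(np.linalg.norm(A @ v_s), nu)          # ||Df v|| = nu on E^s
assert np.isclose(np.linalg.norm(A @ v_u), nu_hat_inv)  # ||Df v|| = nu_hat^-1 on E^u
# Rate chain nu < gamma < gamma_hat^-1 < nu_hat^-1: 0.382 < 0.618 < 1.618 < 2.618
assert nu < gamma < gamma_hat_inv < nu_hat_inv
```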
\begin{remark} The center bundle $E^{c}$ need not be tangent to an invariant foliation, but whenever such a foliation exists, it is denoted by $\mathcal{F}^{c}.$ \end{remark} \begin{definition} A partially hyperbolic diffeomorphism $f:M\rightarrow M$ is called dynamically coherent if there exist invariant foliations $\mathcal{F}^{c \sigma}$ tangent to $E^{c \sigma}=E^{c}\oplus E^{\sigma}$ for $\sigma=s,u.$ \end{definition} \begin{remark} In this article we assume that $f$ preserves the orientation of the leaves of $\mathcal{F}^i$. \end{remark} \subsection{Derived from Anosov} \begin{definition} A $C^1$-diffeomorphism $f:\mathbb{T}^d\to \mathbb{T}^d$ is called Derived from Anosov (DA) if it is isotopic to the hyperbolic linear automorphism $A$ induced by its action on the homology $H_1(\mathbb{T}^d)$. We call $A$ the linear part of $f$. \end{definition} By a well-known result of Franks \cite{Franks} there exists a semiconjugacy $H:\mathbb{T}^d \to \mathbb{T}^d$ between $f$ and $A$, that is, $H\circ f=A\circ H.$ Moreover, its lift $\tilde{H}$ to $\mathbb{R}^d$ semiconjugates $\tilde{f}$ to $\tilde{A}$, and for some constant $K$ we have \[ \|\tilde{H}-\operatorname{id}\|_{C^0}\leq K. \] \vspace{0.1cm} In particular, $\tilde{H}$ is proper. The constant $K$ depends continuously on $f$, and tends to zero as $f$ tends to $A$ in the $C^1$ norm. For every $\tilde{x}\in\mathbb{R}^d$, the set $\tilde{H}^{-1}(\tilde{x})$ is compact, with diameter uniformly bounded from above: $\operatorname{diam} (\tilde{H}^{-1}(\tilde{x}))\leq 2K$.\\ \begin{remark} Take $\mu$ any $f$-invariant measure and let $\nu=H_{\ast}\mu$. It is well-known that $h_{\mu}(f)\geq h_{\nu}(A)$. Furthermore, the Ledrappier-Walters variational principle \cite{LedrappierWalters} says that \begin{equation}\label{eqn:LedWalt} \sup_{\mu:H_{\ast}\mu=\nu}h_{\mu}(f)=h_{\nu}(A)+\int_{\mathbb{T}^d}h(f,H^{-1}(x))d\nu(x).
\end{equation} Hence, when $h(f,H^{-1}(x))=0$ for every $x\in\mathbb{T}^d$, we have that for any $f$-invariant measure $\mu$ \begin{equation}\label{eqn:EqualityMetricEntropy} h_{\mu}(f)= h_{\nu}(A). \end{equation} \end{remark} Consider a potential $\phi:\mathbb{T}^d\to\mathbb{R}$ for $A$ and define $\varphi=\phi\circ H$. A well-known result due to Bowen \cite{Bowen} states that if $\mu$ is an equilibrium state of $(f,\varphi)$ then $\nu=H_{\ast}\mu$ is an equilibrium state of $(A,\phi)$. Furthermore, the converse also holds under certain conditions:\\ \begin{lemma}\label{lemma:ExistenceMu} Let $f:\mathbb{T}^d \to \mathbb{T}^d$ be a DA partially hyperbolic diffeomorphism. Let $\phi:\mathbb{T}^d\to\mathbb{R}$ be a continuous potential and define $\varphi=\phi\circ H$. Assume that $h(f,H^{-1}(x))=0$ for every $x\in \mathbb{T}^{d}$. If $\nu$ is an equilibrium state for $(A,\phi)$, then every $\mu\in \mathcal{M}(f)$ such that $H_{\ast}\mu=\nu$ is an equilibrium state for $(f,\varphi)$. \end{lemma} \begin{proof} Let $\nu$ be an equilibrium state for $(A,\phi)$. By the Riesz representation theorem and the compactness of the set of Borel probability measures on $\mathbb{T}^d$ we can guarantee the existence of an $f$-invariant measure $\mu$ such that $\nu=H_{\ast}\mu$ (see for example \cite[Lemma~4.3]{Bowen} for a similar construction). Moreover, by (\ref{eqn:EqualityMetricEntropy}) we have that \begin{align*} \sup \left\lbrace h_{\eta}(f)+\int \varphi d\eta: \eta\in \mathcal{M}(f) \right\rbrace &=\sup \left\lbrace h_{H_{\ast}\eta}(A)+\int \phi d H_{\ast}\eta: \eta\in \mathcal{M}(f) \right\rbrace\\ &\leq \sup \left\lbrace h_{\hat{\nu}}(A)+\int \phi d\hat{\nu} : \hat{\nu}\in \mathcal{M}(A) \right\rbrace\\ &= h_{\nu}(A)+\int \phi d\nu, \end{align*} where the last equality holds because $\nu$ is an equilibrium state for $(A,\phi)$. Therefore, any $f$-invariant measure $\mu$ satisfying $\nu=H_{\ast}\mu$ is an equilibrium state for $(f,\varphi)$.
\end{proof} \subsection{Rectangle structure}\label{sec.Rect} We say that a DA partially hyperbolic diffeomorphism $f:\mathbb{T}^d\to\mathbb{T}^d$ has \textbf{rectangle structure} in the center bundle if: \begin{enumerate}[label=\Alph*.] \item \label{Hyp.R1} There exists a splitting $E^c=E^1\oplus E^2$, where each $E^i$ is a line-bundle and integrates to an $f$-invariant foliation $\mathcal{F}^i$, for $i=1,2$. \item \label{Hyp.R2} For every $x\in\mathbb{T}^d$, if $z,z'\in H^{-1}(x)$ and $z'\in \mathcal{F}^i(z)$ for some $1\leq i\leq 2$, then \[ [z,z']_i\subset H^{-1}(x), \] where $[z,z']_i$ is the closed interval inside $\mathcal{F}^i(z)$ with end points $z$ and $z'$. \item \label{Hyp.R3} For each $x\in \mathbb{T}^d,$ $H^{-1}(x)$ is a finite union of rectangles contained in a unique center leaf of $\mathcal{F}^c$. \end{enumerate} The \emph{rectangles} mentioned above are compact sets obtained by the following inductive procedure. Let $z_0,...,z_k$, with $1\leq k\leq \ell$, where $\ell$ is the number of line-bundles in the splitting of $E^c$ (here $\ell=2$), be points in $H^{-1}(x)$ such that $z_j\in \mathcal{F}^{i_j}(z_0)$. We construct the rectangle (of dimension $k$ and corner $z_0$) by starting with $R_1=[z_0,z_1]_{i_1}\subset \mathcal{F}^{i_1}(z_0)$. Taking $i_2\neq i_1$ we define $R_2$ as the trace inside $\mathcal{F}^c(z_0)$ of the set obtained by sliding $R_1$ along $[z_0, z_2]_{i_2}\subset \mathcal{F}^{i_2}(z_0)$, that is, \[ R_2=\bigcup_{w\in[z_0, z_2]_{i_2}}[w,y(w)]_{i_1}, \] where $[w,y(w)]_{i_1}$ is the image of $[z_0,z_1]_{i_1}$ by the $\mathcal{F}^{i_2}$-holonomy. Continuing this way, we define $R_k$ as \[ R_k=\bigcup_{w\in[z_0, z_{k}]_{i_k}}R^{k-1}(w), \] where $R^{k-1}(w)$ is a rectangle of dimension $k-1$ and corners $z_0,...,z_{k-1}$, obtained as the image of $R_{k-1}$ in the corresponding center manifold by the $\mathcal{F}^{i_k}$-holonomy sending $z_0$ to $w$.\\ In a recent work, Carrasco \emph{et al.} \cite{CarrascoLizanaPujalsVasquez} proved that, if the center bundle of $f$ is strongly simple (i.e., it decomposes into one dimensional sub-bundles with global product structure), then $f$ has rectangle structure in the center bundle. Moreover, they also proved that for every $x\in\mathbb{T}^d$: \[ h(f,H^{-1}(x))=0. \] \subsection{Disintegration of measures} Let $(M,\mathcal{B}, \mu)$ be a probability space, where $M$ is a compact metric space, $\mathcal{B}$ is the Borel $\sigma$-algebra and $\mu$ is a probability measure. Let $\mathcal{P}$ be a partition of $M$ and let $\hat{\mathcal{B}}=\pi_{\ast}\mathcal{B}$, $\hat{\mu}=\pi_{\ast}\mu$, where $\pi:M\rightarrow M/\mathcal{P}$ is the canonical projection that assigns to each point $x\in M$ the element $\mathcal{P}(x)$ of the partition that contains it; then $(\tilde{M}:=M/\mathcal{P},\hat{\mathcal{B}},\hat{\mu})$ is a probability space. \begin{definition} A disintegration of $\mu$ with respect to $\mathcal{P}$ is a family $\{\mu_{P}\}_{P\in \mathcal{P}}$ of conditional probability measures on $M$ such that: \begin{enumerate} \item given $\phi\in C^{0}(M),$ the map $P\mapsto \int \phi d\mu_{P}$ is measurable; \item $\mu_{P}(P)=1$ for $\hat{\mu}$-a.e. $P$; \item $\mu=\int_{\tilde{M}}\mu_{P}d\hat{\mu}$, i.e., if $\phi\in C^{0}(M),$ then $\int\phi \,d\mu=\int_{\tilde{M}}\left(\int_{P}\phi \,d\mu_{P}\right)d\hat{\mu}.$ \end{enumerate}\vspace{0.1cm} When it is clear which partition we are referring to, we say that the family $\{\mu_{P}\}$ disintegrates the measure $\mu$.
There exists an equivalent way of writing the disintegration formula above: $$\mu=\int_{M}\mu_{x}d\mu(x),$$ obtained by considering the conditional measures $\mu_{x}$, $x\in M$, where $\mu_{x}=\mu_{y}$ whenever $y\in \mathcal{P}(x).$ \end{definition} \begin{proposition}[\cite{OliveiraVianaErgodic}] If $\{\mu_{P}\}_{P\in \mathcal{P}}$ and $\{\tilde{\mu}_{P}\}_{P\in \mathcal{P}}$ are disintegrations of $\mu$ with respect to $\mathcal{P}$, then $\mu_{P}=\tilde{\mu}_{P}$ for $\hat{\mu}$-almost every $P\in\mathcal{P}$. \end{proposition} The previous proposition asserts that disintegrations are essentially unique, when they exist. Consequently, for an invariant measure it follows that: \begin{corollary} If $f:M\to M$ preserves a probability measure $\mu$ and the partition $\mathcal{P}$, then $f_{\ast}\mu_{P}=\mu_{f(P)}$ $\hat{\mu}$-a.e. \end{corollary} \begin{definition} We say that a partition $\mathcal{P}$ of $M$ is measurable with respect to a probability measure $\mu$ if there exist a measurable family $\{A_{i}\}_{i\in \mathbb{N}}$ and a measurable set $C$ of full measure such that if $B\in \mathcal{P}$, then there exists a sequence $\{B_{i}\}_{i\in \mathbb{N}}$, with $B_{i}\in \{A_{i}, A_{i}^{c}\}$, such that $B\cap C=\bigcap_{i\in \mathbb{N}}B_{i}\cap C.$ \end{definition} The next theorem guarantees the existence of disintegrations with respect to a measurable partition. \begin{theorem}[Rokhlin's Disintegration \cite{Rokhlin}]\label{tro} Let $\mathcal{P}$ be a measurable partition of a compact metric space $M$ and $\mu$ a Borel probability measure. Then $\mu$ admits a disintegration with respect to $\mathcal{P}$. \end{theorem} The partition by the leaves of a foliation may be non-measurable in general. For instance, this is the case for the stable and unstable foliations of Anosov diffeomorphisms with respect to measures of non-vanishing metric entropy. Instead, one must consider disintegrations on compact foliated boxes.
These conditional measures depend on the foliated box; however, in \cite[Lemma~3.2]{AvilaVianaWilkinson} it is proved that they are well defined up to scaling. That is, they are equivalence classes in which one identifies any two (possibly infinite) measures that differ only by a constant factor. \begin{definition} We say that a foliation $\mathcal{F}$ has atomic disintegration with respect to a measure $\mu$ if the conditional measures on any foliated box are sums of Dirac measures. \end{definition} Another way to define atomic disintegration is the following: there exists a full measure subset $Z$ which intersects every leaf in at most a countable set.\\ Even though the disintegration of a measure along a general foliation is defined on compact foliated boxes, it makes sense to say that a foliation $\mathcal{F}$ has a quantity $k\in\mathbb{N}$ of atoms per leaf. The meaning of ``per leaf'' should always be understood as referring to a generic leaf, i.e. almost every leaf. This means that there is a set $A$ of $\mu$-full measure which intersects a generic leaf in exactly $k$ points. \begin{definition} We say that a measure is virtually hyperbolic if there is a full measure set which intersects each center leaf in at most one point. \end{definition} Let us finish this section by stating a well-known result known as the Measurable Choice Theorem. This result appeared in the context of Decision Theory in Economics and was proved by R. J. Aumann \cite{Aumann}: \begin{theorem}[Measurable Choice Theorem]\label{th.Measurable} Let $(T,\mu)$ be a $\sigma$-finite measure space, let $S$ be a Lebesgue space, and let $G$ be a measurable subset of $T\times S$ whose projection on $T$ is all of $T$. Then there is a measurable function $g:T\to S$ such that $(t, g(t))\in G$ for almost all $t\in T$. \end{theorem} \section{Main results}\label{sec.Main} From now on we focus on DA partially hyperbolic diffeomorphisms of $\mathbb{T}^4$ with 2-dimensional center bundle.
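Before analysing the set where the semiconjugacy fails to be injective, the disintegration notions above can be illustrated on the unit square, with the partition into vertical fibers $\{x\}\times[0,1]$ playing the role of the plaques. For Lebesgue measure the conditionals are Lebesgue on each fiber and the disintegration formula is just Fubini's theorem; for a measure supported on a graph the conditionals are single Dirac masses, i.e. an atomic disintegration with one atom per fiber, which is the picture behind virtual hyperbolicity. The following sketch (a toy computation of ours, independent of the paper's objects) checks both cases by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = lambda x, y: np.cos(2 * np.pi * x) * y ** 2   # continuous test observable

# Case 1: mu = Lebesgue on [0,1]^2, partitioned into vertical fibers {x} x [0,1].
# The conditionals mu_x are Lebesgue on each fiber, the quotient measure is
# Lebesgue in x, and the disintegration formula reduces to Fubini's theorem.
xs, ys = rng.random(200_000), rng.random(200_000)
lhs = phi(xs, ys).mean()                                         # int phi dmu
x_grid = (np.arange(2_000) + 0.5) / 2_000
rhs = np.mean([phi(x, rng.random(500)).mean() for x in x_grid])  # int (int phi dmu_x) dmu_hat
assert abs(lhs - rhs) < 2e-2

# Case 2: mu supported on the graph {(x, x^2)}. The conditional on the fiber
# over x is the Dirac mass at x^2: an atomic disintegration with exactly one
# atom per fiber.
lhs = phi(xs, xs ** 2).mean()                                    # Monte Carlo over mu
t = (np.arange(100_000) + 0.5) / 100_000
rhs = phi(t, t ** 2).mean()                                      # int phi(x, x^2) dmu_hat(x)
assert abs(lhs - rhs) < 5e-3
```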
Consider the set \begin{equation}\label{eqn:DefC} C:=\{x\in \mathbb{T}^4:\#H^{-1}H(x)>1\}. \end{equation} We claim that $C$ is a measurable set. One may check this by reproducing \textit{ipsis litteris} the proof of \cite[Lemma 3.2]{PonceTahzibiVaraoBernoulli}, only changing $\mathbb{T}^3$ to $\mathbb{T}^4$ and $\mathbb{R}^3$ to $\mathbb{R}^4$, to obtain that the set $H(C)$ is measurable. Hence $C=H^{-1} H(C)$ is a measurable set. Moreover, notice that $C$ is $f$-invariant. \\ We are now able to properly state Theorem \ref{th.Ergodic}: \begin{theorem}\label{main:ergodic} Let $f:\mathbb{T}^{4}\rightarrow \mathbb{T}^{4}$ be a DA partially hyperbolic diffeomorphism which is dynamically coherent and has rectangle structure in the center bundle. Assume that $f$ preserves the orientation of $\mathcal{F}^{i}$, for $i=1,2$, and let $\mu$ be an ergodic probability measure for $f$. Then: \begin{enumerate} \item If $\mu(C)=0$, $(f,\mu)$ is almost conjugate to an Anosov system. \item If $\mu(C)=1$, $C$ defines a partition such that $\mu$ has atomic disintegration with a finite number of atoms. \end{enumerate} \end{theorem} The mixed derived from Anosov examples $g:\mathbb{T}^d\to \mathbb{T}^d$ introduced by Buzzi \emph{et al.} \cite[Section~5]{BuzziFisherSambarinoVasquez} satisfy the dynamical coherence and rectangle structure hypotheses. In particular, the center foliation $\mathcal{F}^c$ admits two invariant 1-dimensional sub-foliations $\mathcal{F}^{cu}, \mathcal{F}^{cs}$ such that $H^{-1}(x)\cap \mathcal{F}^{cu}_{loc}(x)$ and $H^{-1}(x)\cap \mathcal{F}^{cs}_{loc}(x)$ are segments in the center foliation. Another class of derived from Anosov diffeomorphisms $f:\mathbb{T}^4\to \mathbb{T}^{4}$ that satisfy these assumptions was studied by Carrasco \emph{et al.} \cite[Theorem~A and Section~3]{CarrascoLizanaPujalsVasquez}. The virtual hyperbolicity result mentioned in Theorem \ref{th.Ergodic} follows from the next corollary.
\begin{corollary}\label{cor.main} Under the assumptions of Theorem \ref{main:ergodic}, assume moreover that $\nu:=H_{\ast}\mu$ has full support and that the semiconjugacy $H$ sends center leaves of $f$ to center leaves of $A$. If one of the following conditions is satisfied: \begin{enumerate} \item \label{it.2} the center direction of $A$ is expansive or contractive; \item \label{it.3} $H(\mathcal{F}^i)$ is some invariant foliation of $A$, for each $i=1,2$; \end{enumerate} then $\mu$ is virtually hyperbolic. \end{corollary} Now, we consider the following property: \begin{enumerate}[label=(H)] \item \label{Hyp.5} $h(f, H^{-1}(x))=0$, for all $x\in \mathbb{T}^d$. \end{enumerate} As mentioned before in Section \ref{sec.Rect}, \cite[Theorem~4.1]{CarrascoLizanaPujalsVasquez} guarantees that DA partially hyperbolic diffeomorphisms with strongly simple central bundle $E^{c}$ (see \cite[Definition~1.4]{CarrascoLizanaPujalsVasquez}) satisfy assumption \ref{Hyp.5}. Furthermore, \cite[Corollary~5.2]{BuzziFisherSambarinoVasquez} guarantees that the mixed derived from Anosov examples satisfy assumption \mbox{\rm\ref{Hyp.5}}.\\ Let us recall that if $f:\mathbb{T}^4\to \mathbb{T}^4$ is a DA partially hyperbolic diffeomorphism with a dominated splitting, then the existence of equilibrium states associated to any continuous potential is guaranteed as a consequence of the work of Díaz \emph{et al.} \cite[Corollary~1.3]{DiazFisherPacificoVieitez}. Assumption \ref{Hyp.5} will guarantee the existence of the equilibrium state as a consequence of Lemma \ref{lemma:ExistenceMu}. Moreover, Theorem \ref{th.Equilibrium} also gives a partial answer to the uniqueness problem for equilibrium states.
We now proceed to formally state Theorem \ref{th.Equilibrium}: \begin{theorem}\label{main:equilibrium} Let $f:\mathbb{T}^{4}\rightarrow \mathbb{T}^{4}$ be a DA partially hyperbolic diffeomorphism which is dynamically coherent, has rectangle structure in the center bundle and satisfies \ref{Hyp.5}. Let $\phi$ be a continuous potential such that $(A,\phi)$ has a unique equilibrium state with full support. Define the potential $\varphi=\phi\circ H$ and let $\mu$ be any ergodic equilibrium state of $f$ with respect to $\varphi$. If $f$ preserves the orientation of $\mathcal{F}^{i}, \ i=1,2$, then: \begin{enumerate} \item if $\mu(C)=0$, then $\mu$ is the unique equilibrium state; \item if $\mu(C)=1$, then $C$ defines a partition such that $\mu$ has atomic disintegration with a finite number of atoms. Moreover, if the semiconjugacy $H$ sends center leaves of $f$ to center leaves of $A$ and one of the following conditions is satisfied: \begin{enumerate} \item \label{it.Eq2} the center direction of $A$ is expansive or contractive; \item \label{it.Eq3} $H(\mathcal{F}^i)$ is some invariant foliation of $A$, for each $i=1,2$; \end{enumerate} then $\mu$ is virtually hyperbolic and it is not the unique equilibrium state. \end{enumerate} \end{theorem} \section{Proof of Theorem \ref{th.Ergodic}}\label{sec.ProofMain} From now on we assume $\mu(C)=1$, and we prove that the partition determined by $C$ has atomic disintegration. That is, consider the partition: \[ \mathcal{P}:=\{\mathcal{P}(x):=H^{-1}H(x) \mid x\in C\}. \] Let us prove that $\mathcal{P}$ is a measurable partition with respect to any measure considered. Let $\{A_i\}_{i \in \mathbb{N}}$ be a countable basis for the topology of $\mathbb{T}^4$. Now for any point $x \in \mathbb{T}^4$ we have sets $B_i \equiv B_i(x) \in \{A_i, A^c_i\}$ such that $\{ x \} = \cap_{i \in \mathbb{N}} B_i$.
Since each $H^{-1}(A_i)$ is a measurable set (because $A_i$ is an open set and $H$ is continuous), notice that \[ H^{-1}(x) = \bigcap_{i \in \mathbb{N}}H^{-1}(B_i), \] thus proving that $\mathcal{P}$ is a measurable partition. Moreover, it is easy to see that $\mathcal{P}$ is left invariant by $f$, that is, $f(\mathcal{P}(x))=\mathcal{P}(f(x))$.\\ Assume, without loss of generality, that $\mathcal{F}^1$ is oriented and $f$ preserves its orientation. We define another partition $\mathcal{Q}$ as the one whose elements are the connected components of the intersections of elements of $\mathcal{P}$ with the leaves of $\mathcal{F}^1$. That is, \[ \mathcal{Q}:=\{Q(x)=\mathcal{F}^1(x)\cap \mathcal{P}(x) \mid x\in C\}. \] Recall that, by the rectangle structure, $H^{-1}(z)$ is a finite union of rectangles in $\mathcal{F}^c$, so for each $x\in C$ we can write \begin{equation}\label{eqn.UnionRectangles} \mathcal{P}(x)=\bigcup_{j=1}^{n_x}R_j(x), \end{equation} where $n_x$ represents the number of rectangles in the class and $R_j(x)$ denotes a rectangle of dimension $1\leq k_j=k_j(x)\leq 2$ with corners $z_0,...,z_{k_j}$. Moreover, assumption \ref{Hyp.R2} guarantees that $Q(x)$ has only one connected component, which is an interval or a point. Therefore, the foliation of each element of $\mathcal{P}$ by $\mathcal{F}^1$ is a foliation by compact leaves. Thus, we can consider $\mathcal{Q}$ as a measurable partition. Indeed, any foliation with compact leaves can be considered as a measurable partition, see \cite[Proposition~3.7]{AvilaVianaWilkinson2}. Let us denote the conditional measures on $\mathcal{Q}$ by $\mu_x$.\\ \begin{figure}[t!]
\centering \captionsetup{justification=centering} \begin{minipage}[c]{.7\linewidth} \centering \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.5cm,yunit=0.5cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-3.5,-8)(19.56,7.23) \parametricplot[linewidth=1.2pt,linecolor=ccqqqq]{-0.2406942442470834}{0.24560912946714195}{1*16.73*cos(t)+0*16.73*sin(t)+-11.22|0*16.73*cos(t)+1*16.73*sin(t)+0.4} \parametricplot[linestyle=dotted,linecolor=ccqqqq]{-0.38249366103447713}{0.3774431667194532}{1*16.57*cos(t)+0*16.57*sin(t)+-11.06|0*16.57*cos(t)+1*16.57*sin(t)+0.36} \parametricplot[linestyle=dotted]{1.4906069730008062}{1.6614294498905402}{1*47.58*cos(t)+0*47.58*sin(t)+5.78|0*47.58*cos(t)+1*47.58*sin(t)+-47.57} \rput[tl](5.05,6.44){\ccqqqq{$ \mathcal{F}^1(z) $}} \rput[tl](1.53,-0.6){$ \mathcal{F}^2(z) $} \psbrace(6.03,-3.6)(6.01,4.5){$\mathcal{Q}(z)$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=ccqqqq](5.03,-3.59) \rput[bl](4.1,-3.6){\ccqqqq{$z_0$}} \psdots[dotstyle=*,linecolor=ccqqqq](5.51,0.02) \rput[bl](4.8,0.3){\ccqqqq{$z$}} \psdots[dotstyle=*,linecolor=ccqqqq](5.01,4.47) \rput[bl](4.22,4.3){\ccqqqq{$z_1$}} \end{scriptsize} \end{pspicture*} \vspace{-1cm} \subcaption{$R_j$ is a rectangle of dimension 1 contained in a $\mathcal{F}^1$-leaf}\label{fig.RecDim1F1} \end{minipage} \vspace{-2cm} \begin{minipage}[c]{.7\linewidth} \centering \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.5cm,yunit=0.5cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-3.5,-8)(21.85,11.37) \parametricplot[linewidth=1.2pt,linestyle=dotted,linecolor=qqttzz]{-0.2406942442470834}{0.24560912946714195}{1*16.73*cos(t)+0*16.73*sin(t)+-11.22|0*16.73*cos(t)+1*16.73*sin(t)+0.4} \parametricplot[linestyle=dotted]{1.4592798033166172}{1.7051363958390253}{1*60.59*cos(t)+0*60.59*sin(t)+6.57|0*60.59*cos(t)+1*60.59*sin(t)+-60.06} 
\parametricplot[linewidth=1.2pt]{1.4758004302769274}{1.6799377399158195}{1*62.95*cos(t)+0*62.95*sin(t)+6.48|0*62.95*cos(t)+1*62.95*sin(t)+-62.41} \psline[linewidth=0.4pt,linecolor=ccqqqq]{->}(5.51,0.53)(6.43,1.43) \rput[tl](5.62,5.19){\qqttzz{$ \mathcal{F}^1(z) $}} \rput[tl](1.53,0.25){$ \mathcal{F}^2(z) $} \begin{scriptsize} \psdots[dotstyle=*,linecolor=ccqqqq](5.51,0.53) \rput[bl](4.8,1){\ccqqqq{$z$}} \rput[bl](6.43,1.37){\ccqqqq{$\mathcal{Q}(z)$}} \psdots[dotstyle=*](-0.38,0.16) \rput[bl](-0.69,-0.64){$z_0$} \psdots[dotstyle=*](12.45,0.25) \rput[bl](12.3,-0.67){$z_1$} \end{scriptsize} \end{pspicture*} \vspace{-1.5cm} \subcaption{$R_j$ is a rectangle of dimension 1 contained in a $\mathcal{F}^2$-leaf}\label{fig.RecDim1F2} \end{minipage} \vspace{0.5cm} \begin{minipage}[c]{.7\linewidth} \centering \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.55cm,yunit=0.55cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-3.5,-8)(19.56,7.23) \parametricplot[linewidth=1.2pt]{1.4658919836717454}{1.675700669918048}{1*76.4*cos(t)+0*76.4*sin(t)+6|0*76.4*cos(t)+1*76.4*sin(t)+-79.98} \parametricplot[linewidth=1.2pt,linecolor=qqttzz]{-0.23260396708662157}{0.23260396708662137}{1*17.35*cos(t)+0*17.35*sin(t)+-18.89|0*17.35*cos(t)+1*17.35*sin(t)+0} \parametricplot[linewidth=1.2pt,linecolor=qqttzz]{-0.24231811194623631}{0.24231811194623612}{1*16.67*cos(t)+0*16.67*sin(t)+-2.18|0*16.67*cos(t)+1*16.67*sin(t)+0} \parametricplot[linewidth=1.2pt]{1.4517784610528233}{1.68981419253697}{1*67.38*cos(t)+0*67.38*sin(t)+6|0*67.38*cos(t)+1*67.38*sin(t)+-62.9} \parametricplot[linewidth=0.4pt,linestyle=dotted]{1.438933038681401}{1.7070124324631315}{1*59.92*cos(t)+0*59.92*sin(t)+6.34|0*59.92*cos(t)+1*59.92*sin(t)+-62.39} \parametricplot[linewidth=0.4pt,linestyle=dotted]{1.4366131522745456}{1.7048701232109778}{1*59.88*cos(t)+0*59.88*sin(t)+6.35|0*59.88*cos(t)+1*59.88*sin(t)+-61.35} 
\parametricplot[linewidth=0.4pt,linestyle=dotted]{1.4545880574749945}{1.6891027161937202}{1*68.47*cos(t)+0*68.47*sin(t)+6.52|0*68.47*cos(t)+1*68.47*sin(t)+-69} \parametricplot[linewidth=0.4pt,linestyle=dotted]{1.4362722835818533}{1.7053205450573163}{1*59.72*cos(t)+0*59.72*sin(t)+6.48|0*59.72*cos(t)+1*59.72*sin(t)+-59.19} \parametricplot[linewidth=0.4pt,linestyle=dotted]{1.4398908084292725}{1.7045415640600527}{1*60.7*cos(t)+0*60.7*sin(t)+6.53|0*60.7*cos(t)+1*60.7*sin(t)+-59.19} \parametricplot[linewidth=0.4pt,linestyle=dotted]{1.4369775988146314}{1.7047383570510684}{1*59.99*cos(t)+0*59.99*sin(t)+6.36|0*59.99*cos(t)+1*59.99*sin(t)+-57.46} \parametricplot[linewidth=0.4pt,linestyle=dotted]{1.4275639066583594}{1.7112163309832855}{1*56.64*cos(t)+0*56.64*sin(t)+6.13|0*56.64*cos(t)+1*56.64*sin(t)+-53.06} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.2589490416790463}{0.2561277029838513}{1*15.73*cos(t)+0*15.73*sin(t)+-16.22|0*15.73*cos(t)+1*15.73*sin(t)+0.13} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.24847858790406097}{0.2533679762640946}{1*16.16*cos(t)+0*16.16*sin(t)+-15.63|0*16.16*cos(t)+1*16.16*sin(t)+0.16} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.21560038250441327}{0.21120664301181347}{1*18.97*cos(t)+0*18.97*sin(t)+-17.52|0*18.97*cos(t)+1*18.97*sin(t)+0.32} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.22569590480210344}{0.22017112155274524}{1*18.19*cos(t)+0*18.19*sin(t)+-15.73|0*18.19*cos(t)+1*18.19*sin(t)+0.39} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.2299891503973468}{0.23557478716500682}{1*17.44*cos(t)+0*17.44*sin(t)+-13.94|0*17.44*cos(t)+1*17.44*sin(t)+0.34} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.23475855500593656}{0.22522388125080922}{1*17.67*cos(t)+0*17.67*sin(t)+-13.19|0*17.67*cos(t)+1*17.67*sin(t)+0.5} 
\parametricplot[linewidth=1.2pt,linecolor=ccqqqq]{-0.2406942442470834}{0.24560912946714195}{1*16.73*cos(t)+0*16.73*sin(t)+-11.22|0*16.73*cos(t)+1*16.73*sin(t)+0.4} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.2368613212533699}{0.23192095946279598}{1*17.34*cos(t)+0*17.34*sin(t)+-10.85|0*17.34*cos(t)+1*17.34*sin(t)+0.49} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.25504126325191834}{0.244983345635518}{1*16.28*cos(t)+0*16.28*sin(t)+-8.77|0*16.28*cos(t)+1*16.28*sin(t)+0.52} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.2644294392483273}{0.25447439575352954}{1*15.69*cos(t)+0*15.69*sin(t)+-7.15|0*15.69*cos(t)+1*15.69*sin(t)+0.5} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.23990094560670006}{0.24518545590870444}{1*16.76*cos(t)+0*16.76*sin(t)+-7.27|0*16.76*cos(t)+1*16.76*sin(t)+0.34} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.24954157024380397}{0.2550438078858211}{1*16.11*cos(t)+0*16.11*sin(t)+-5.58|0*16.11*cos(t)+1*16.11*sin(t)+0.29} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.27735663855019776}{0.2768195775981685}{1*14.69*cos(t)+0*14.69*sin(t)+-3.14|0*14.69*cos(t)+1*14.69*sin(t)+0.28} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.25942436688335313}{0.2651887834136855}{1*15.48*cos(t)+0*15.48*sin(t)+-2.92|0*15.48*cos(t)+1*15.48*sin(t)+0.15} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=qqttzz]{-0.25925503532936034}{0.25861054882642875}{1*15.65*cos(t)+0*15.65*sin(t)+-2.12|0*15.65*cos(t)+1*15.65*sin(t)+0.11} \parametricplot[linewidth=0.4pt,linestyle=dotted,linecolor=ccqqqq]{-0.38249366103447713}{0.3774431667194532}{1*16.57*cos(t)+0*16.57*sin(t)+-11.06|0*16.57*cos(t)+1*16.57*sin(t)+0.36} \rput[tl](5.06,6.56){\ccqqqq{$ \mathcal{F}^1(z) $}} \psbrace(6.03,-3.4)(6.01,4.3){$\mathcal{Q}(z)$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=qqttzz](-2,-4) \rput[bl](-3.26,-5.11){\qqttzz{$z_0$}} 
\psdots[dotstyle=*,linecolor=blue](14,-4) \rput[bl](14.77,-5.04){\blue{$z_1$}} \rput[bl](6.3,-5.2){$\mathcal{F}^2$} \psdots[dotstyle=*,linecolor=qqttzz](-2,4) \rput[bl](-3.29,4.11){\qqttzz{$z_2$}} \rput[bl](-3.48,0.14){\qqttzz{$\mathcal{F}^1$}} \psdots[dotstyle=*,linecolor=ccqqqq](5.51,0.02) \rput[bl](5.0,-0.2){\ccqqqq{$z$}} \end{scriptsize} \end{pspicture*} \vspace{-0.5cm} \subcaption{$R_j$ is a rectangle of dimension 2}\label{fig.RecDim2} \end{minipage} \caption{Partition $\mathcal{Q}$.}\label{fig.PartQx} \end{figure} It is easy to see that the partition $\mathcal{Q}$ is $f$-invariant and, therefore, $f_*\mu_x=\mu_{f(x)}$. Consider $\pi:C\rightarrow \widehat{C}:=C/\mathcal{Q}$ the canonical projection that assigns to each point $x\in C$ the element $\mathcal{Q}(x)$ of the partition that contains it. Denote the quotient measure by $\hat{\mu}=\pi_{\ast}\mu$. \\ \begin{lemma}\label{lem.AtomicDis} The measure $\mu$ has atomic disintegration with respect to the partition $\mathcal{Q}$. \end{lemma} \begin{proof} We want to show that the conditional measure $\mu_x$ is a countable linear combination of Dirac masses for $\hat{\mu}$-almost every $Q(x)\in \mathcal{Q}$. Arguing by contradiction, assume there exists a set $\hat{\Lambda}\subset \mathcal{Q}$ with positive $\hat{\mu}$-measure such that for every $Q(x)\in\hat{\Lambda}$ the measure $\mu_x$ is not atomic. Moreover, by the invariance of the disintegration, $\hat{\Lambda}$ can be assumed to be invariant and, by ergodicity, of full measure.\\ Let $Q=Q(x_0)\in\hat{\Lambda}\cap\operatorname{supp}(\hat{\mu})$ and let $\mathcal{B}$ be a foliated (by $\mathcal{F}^1$) box around $Q$; that is, $\mathcal{B}$ is the image of a topological embedding \[ \phi: D^{3}\times D^1\to \mathbb{T}^4, \] where $D^k$ is the closed unit disk in $\mathbb{R}^k$, such that every plaque $P_x=\phi(\{x\}\times D^1)$ is contained in a leaf of $\mathcal{F}^1$.
Let us identify $\mathcal{B}$ with the product $D^{3}\times D^1$ through the corresponding homeomorphism. Let $\hat{V}$ be an open neighborhood of $Q$ small enough so that it is contained in $\mathcal{B}$. Moreover, since $\tilde{H}^{-1}(\tilde{x})$ is uniformly bounded, we can assume that $\mathcal{B}$ contains $\mathcal{P}(x)$ for every $x\in D^3$. \\ Consider the following map \begin{eqnarray*} \psi: D^3 \times [0,1] & \rightarrow & \mathcal{B}\\ (x,t) &\mapsto & (x,\theta_x(t)) \end{eqnarray*} where $(x,\theta_x(t))$ is defined as the highest point in the local leaf $Q(x)\subset \mathcal{B}$ such that $\mu_x([x,\theta_x(t)]_1)=t$.\\ Notice that $\psi$ is an invertible map when restricted to its image. Moreover, since we are assuming a non-atomic disintegration, $\psi^{-1}$ is continuous in the second coordinate and measurable in the first coordinate. Maps of this type are known as Carath\'eodory functions, and such maps are measurable (\cite[Lemma 4.51]{Aliprantis}).\\ Consider the set $H_t^0:=\psi\left(\Sigma \times [0,t]\right)$, which is measurable since $\psi^{-1}$ is Carath\'eodory. Thus, the set $H_t=\cup_{n\in\mathbb{Z}}f^n(H_t^0)$ is an invariant measurable set. Notice that, by the definition of $\psi$, if $0<t<1$ then \begin{align*} \mu(H_t)&=\int \mu_x(H_t\cap Q(x))d\hat{\mu}(Q(x)),\\ &\geq \int_{\hat{R}}\mu_x(H_t^0\cap Q(x))d\hat{\mu}(Q(x)),\\ &= \int_{\hat{R}}\mu_x([x,\theta_x(t)]_1)d\hat{\mu}(Q(x)),\\ &=\hat{\mu}(\hat{R})t>0. \end{align*} On the other hand, define $G_t^0=\left(H_t^0\right)^c$ and the $f$-invariant set $G_t=\cup_{n\in\mathbb{Z}}f^n(G_t^0)$. In a similar manner as before we have \[ \mu(G_t)\geq \hat{\mu}(\hat{R})(1-t)>0. \] Therefore, by ergodicity, both sets must have full measure. However, this would imply that their intersection also has full measure, which we claim is not the case.
In fact, if it were true, for $\hat{\mu}$-almost every $Q(x)$ \[ \mu_x(H_t\cap G_t\cap Q(x))=1. \] But if $\omega$ belongs to $H_t\cap G_t\cap Q(x)$, without loss of generality, we may assume that for some $n\in\mathbb{N}$, $\omega\in f^{-n}(H_t^0)\cap G_t^0$. Hence, since $f$ preserves orientation, it is easy to see that \[ t<\mu_{\omega}([0_{\omega},\omega]_1)=(f^n)_{\ast}\mu_{\omega}\left(f^n([0_{\omega},\omega]_1)\right)\leq \mu_{f^n(\omega)}\left(\left[0_{f^n(\omega)},f^n(\omega)\right]_1\right)\leq t. \] This is absurd, and it implies that the disintegration of $\mu$ is atomic for $\hat{\mu}$-almost every point. \end{proof} We have proved that $\mu$ has atomic disintegration with respect to the partition $\mathcal{Q}$. We now want to see that there is a finite number of atoms in the disintegration considered. In order to do that, we first need to prove the measurability of certain sets.\\ Consider $\mathcal{B}$ a foliated (by $\mathcal{F}^1$) box, as before, and identify $\mathcal{B}$ with the product $D^{3}\times D^1$ through the corresponding homeomorphism.\\ Fix $\delta>0$, and consider the set \[ H_\delta = \{ x \in \mathcal{B}| \mu_x(\{x\}) \geq \delta \}. \] Let us see that this is a measurable set. To do so, consider a countable basis $\mathcal{V}$ of the topology of $\mathbb{T}^4$. From Rokhlin's theorem we know that the map $x\mapsto \mu_x(V)$ is measurable (up to measure zero) for any measurable set $V$. Therefore, by Lusin's theorem, given any $\varepsilon>0$ there exists a compact set $K_{\varepsilon}\subset D^3$ such that $\hat{\mu}(K_{\varepsilon})>1-\varepsilon$ and $x\mapsto\mu_x(V)$ is continuous on $K_{\varepsilon}$, for every $V\in\mathcal{V}$. In particular, $x\mapsto\mu_x$ is continuous with respect to the weak* topology for any $x\in K_{\varepsilon}$.\\ Let $\varepsilon>0$ be fixed. For each $x\in C$, let $A(x)$ be the set of atoms of $\mu_x$.
It is clear that the set \begin{equation}\label{eqn.DefGamma} \Gamma_{\delta}(x):=\{a\in A(x):\mu_x(\{a\})\geq \delta\}, \end{equation} is finite, and hence compact. Furthermore, the definition of $K_{\varepsilon}$ ensures that the function $x\mapsto\Gamma_{\delta}(x)$ is upper semi-continuous on $K_{\varepsilon}$. Therefore, \[ \Gamma(\varepsilon,\delta):=\{(x,a):x\in K_{\varepsilon}\text{ and }a\in\Gamma_{\delta}(x)\}, \] is a closed set. Then, $\cup_{n}\Gamma(1/n,\delta)$ is a (measurable) full measure subset of $H_\delta$. Thus, $H_\delta$ is a measurable set (up to measure zero).\\ Consider $\{\mathcal{B}_k:k\in\mathbb{N}\}$ a countable cover of $\mathbb{T}^4$ by foliated boxes. Proceeding as before, we obtain the measurable sets $H_\delta^k$ of atoms of mass greater than or equal to $\delta$ in each foliated box $\mathcal{B}_k$. Therefore, \[ H_{\delta}^+:=\bigcup_{k\in\mathbb{N}}H_{\delta}^k=\{x\in C:\mu_x(\{x\})\geq\delta\} \] is also measurable.\\ \begin{lemma}\label{lem.OneAtomQ} $\hat{\mu}$-almost every $Q(x)$ contains only one atom. \end{lemma} \begin{proof} Let $x\in C$ and $\delta\geq 0$. Consider the set $H_{\delta}^+$ as before and notice that \[ \delta\leq\mu_x(\{x\})\leq f_{\ast}\mu_x(\{f(x)\})=\mu_{f(x)}(\{f(x)\}). \] Therefore, $H_{\delta}^+$ is invariant and, by ergodicity, it has measure zero or one. We know that $\mu(H_1^+)=0$ and $\mu(H_0^+)=1$. Let $\delta_0$ be the discontinuity point of the function $\delta \mapsto \mu(H_\delta^+)$, for $\delta \in [0,1]$. Hence $\mu(H_{\delta_0}^+)=1$, which means that the weights of the atoms are all equal to $\delta_0$. Therefore there are $n=1/\delta_0$ atoms on each element of the partition $\mathcal{Q}$.\\ Let us see that the disintegration of $\mu$ on $\mathcal{Q}$ has one atom per local leaf. Assume by contradiction that $n=2$, as the general finite case is similar. Let $a(x)$ and $b(x)$ be the two atoms of $\mu_x$.
Without loss of generality, let us assume that $a(x)<b(x)$, where ``$<$'' is the fixed order in $\mathcal{F}^1$. Consider \[ A:=\{a(x): x\in C\}\quad\text{and}\quad B:=\{b(x): x\in C\}, \] the sets of first and second atoms, respectively. Since $f$ preserves the orientation in $\mathcal{F}^1$, it is easy to see that $A$ and $B$ are invariant sets.\\ Let $Q=Q(x_0)\in\operatorname{supp}\hat{\mu}$ and let $\hat{V}$ be an open neighborhood of $Q$. Consider the disjoint sets \[ B(a):=\bigcup_{Q(x)\in \hat{V}}\{x\}\times B(a(x))\quad\text{and}\quad B(b):=\bigcup_{Q(x)\in \hat{V}}\{x\}\times B(b(x)), \] where $B(a(x))$ and $B(b(x))$ are two disjoint closed balls in $Q(x)$ around $a(x)$ and $b(x)$ respectively. Notice that, following the proof of the measurability of $H_{\delta}^+$ and substituting $B(a(x))$ for the set $\Gamma_{\delta}(x)$ in \eqref{eqn.DefGamma}, we can prove that $B(a)$ and $B(b)$ are both measurable sets. By the definition of $B(a)$ and $B(b)$, their saturations by $\mathcal{Q}$ coincide, that is, $\pi(B(a)) = \pi(B(b))$. Therefore, $B(a)$ and $B(b)$ have positive $\mu$-measure.\\ Let us define the $f$-invariant sets \[ H(a):=\bigcup_{n\in\mathbb{Z}}f^n(B(a))\quad\text{and}\quad H(b):=\bigcup_{n\in\mathbb{Z}}f^n(B(b)). \] We claim that $\mu(H(a)\cap H(b))=0$. In fact, if this were not the case, we would have \[ 0<\mu(H(a)\cap H(b))=\int\mu_x(H(a)\cap H(b)\cap Q(x))d\hat{\mu}. \] Therefore, there must exist $\hat{\Lambda}\subset \hat{C}$ of positive $\hat{\mu}$-measure such that for every $Q(x)\in\hat{\Lambda}$ \[ \mu_x(H(a)\cap H(b)\cap Q(x))>0. \] Hence, $a(x)$ or $b(x)$ must belong to the intersection $H(a)\cap H(b)$. Without loss of generality, let us assume that there exists $n\in\mathbb{Z}$ such that $a(x)\in f^n(B(b))\cap B(a)$. Therefore, we have that $f^{-n}(a(x))=b(y)$ for some $Q(y)\in\hat{V}$.
However, this contradicts the invariance of $A$, which proves the claim.\\ Now, by ergodicity of $\mu$, the sets $H(a)$ and $H(b)$ should have full measure while having intersection of zero measure, which is absurd. Therefore there is only one atom in $\mathcal{Q}(x)$, which proves the lemma. \\ \end{proof} Let us denote the atom found in Lemma \ref{lem.OneAtomQ} by $a(x)$, that is, \begin{equation}\label{eq.Etaxz} \mu_x=\delta_{a(x)}. \end{equation} We now want to see that the disintegration of $\mu$ on $\mathcal{P}$ has only one atom in each connected component of every element of the partition.\\ Recall that $\widehat{C}:=C/\mathcal{Q}$. Define $\hat{f}:\widehat{C}\to\widehat{C}$ by $\hat{f}(\hat{z}):=\widehat{f(z)}$, which satisfies \[ \pi\circ f=\hat{f}\circ\pi. \] Notice that by \eqref{eqn.UnionRectangles} we can identify $\widehat{\mathcal{P}(x)}$ with $n_x$ connected components in the $\mathcal{F}^2$ foliation. That means the space $\widehat{C}$ now carries a one-dimensional foliation coming from this quotient. Consider the partition $\widehat{\mathcal{Q}}$ given by \[ \widehat{\mathcal{Q}}:=\{\widehat{R(x)}: x\in C\}, \] where $R(x)$ is the rectangle in $\mathcal{P}(x)$ containing $x$. Moreover, notice that $\widehat{R(x)}$ can be identified with the interval $[c_0(x),c_1(x)]_2$, where $c_0(x)$ and $c_1(x)$ are the corners of $R(x)$ in the same $\mathcal{F}^2$-leaf. Consequently, proceeding as before, the conditional measures $\hat{\eta}_x$ defined by the partition $\widehat{\mathcal{Q}}$ for the measure $\hat{\mu}$ have at most one atom in each $\widehat{R(x)}$, which we denote by $\hat{a}(x)$. Thus, \begin{equation}\label{eqn.Etax} \hat{\eta}_x=\delta_{\hat{a}(x)}. \end{equation} Combining this with \eqref{eq.Etaxz}, we have that $a_j(x)\in\pi^{-1}\left(\hat{a}(x)\right)\cap R_j(x)$ is the only atom in each rectangle $R_j(x)$. This concludes the proof of our result.\\ \begin{remark} For the higher dimensional case the proof follows in a similar manner.
More precisely, if $\dim E^c=k$, after proving the atomic disintegration of the conditional measures of $\mu$ defined by the partition $\mathcal{Q}$, we proceed by reducing the dimension to $k-1$ using a quotient procedure to define the conditional measures \eqref{eqn.Etax}. Continuing in this manner, we keep reducing the dimension until we reach dimension 1, which concludes the proof. \end{remark} \section{Proof of Corollary \ref{cor.main}}\label{sec.ProofCor} We are left with the task of proving that if $H$ sends center leaves of $f$ to center leaves of $A$ and one of the conditions \ref{it.2} or \ref{it.3} is satisfied, then $\mu$ is virtually hyperbolic. First, let us assume \ref{it.2} holds. Moreover, we assume that the center direction of $A$ is expanding; otherwise we work with $f^{-1}$.\\ By the proof of Theorem \ref{main:ergodic}, there are at most countably many elements in $\mathcal{P}$ with positive measure, so we get a full measure subset $\mathcal{M}\subset\mathbb{T}^4$ which intersects each center leaf in at most countably many points. Furthermore, we claim that there are finitely many atoms of $\mu$ per (global) center leaf. In fact, this was proved in \cite[Proposition~3.2]{CrisostomoTahzibi}. Although they assume a one-dimensional center foliation, their proof applies under our assumptions. Let us recall the main steps. \\ Assume by contradiction that every full measure subset of $\mathcal{M}$ intersects any typical center leaf in infinitely many points. Define $\nu=H_{\ast}\mu$, which is an invariant measure for the linear hyperbolic automorphism. Let $\{R_i\}$ be the Markov partition for $A$ and consider the partition $\mathcal{Q}:=\{\mathcal{F}^c_R(x): x\in R_i \text{ for some }i\}$, where $\mathcal{F}^c_R(x)$ denotes the connected component of $\mathcal{F}^c(x)\cap R_i$ containing $x$.
The partition $\mathcal{Q}$ is measurable and we denote by $\nu_x$ the disintegration of $\nu$ along the elements of $\mathcal{Q}$. The assumption of full support of $\nu$ guarantees that it gives zero mass to the boundary of the Markov partition.\\ As $H(\mathcal{M})$ intersects typical leaves in a countable number of points, $\nu_x$ must be atomic. Moreover, there exists a natural number $\alpha_0\in\mathbb{N}$ such that $\nu_x$ contains exactly $\alpha_0$ atoms for $\nu$-almost every $x$ (see \cite[Lemma~3.3]{CrisostomoTahzibi}). Hence, given a fixed $L\in\mathbb{R}_+$, there exists $N\in\mathbb{N}$ such that the number of atoms in any typical center plaque of diameter $L$ is at most $N$. We are assuming that $H(\mathcal{M})$ intersects center leaves in infinitely many points (or in a non-uniformly finite number). Take $D\subset\mathcal{F}^c(x)$ with more than $N$ atoms. By backward contraction along central leaves by $A$, there exists $n>0$ such that the diameter of $A^{-n}(D)$ is less than $L$. By the invariance of $\nu$ and the uniqueness of the disintegration, we get a center plaque with diameter less than $L$ containing more than $N$ atoms, which is absurd and establishes our claim.\\ We have proved that the number of atoms is finite and, by ergodicity, constant on almost every center leaf. The task is now to conclude that, since $f$ preserves orientation, the number of atoms is one. In order to do this, first consider the set of atoms in each $\mathcal{F}^1$-leaf. Proceeding as in the proof of Lemma \ref{lem.OneAtomQ}, we can prove that there must be only one atom per $\mathcal{F}^1$-leaf. Now consider the space $\widetilde{C}:= C/\sim$, where $ x \sim y$ if and only if $y \in \mathcal{F}^1(x)$. We should think of $\widetilde{C}$ as turning the center foliation (whose leaves are planes) into a one-dimensional segment. Let us denote this new foliation by $\tilde{\mathcal{Q}}$.
Notice that the disintegration of $\mu$ in the partition given by $\tilde{\mathcal{Q}}$ is exactly the quotient measure $\hat{\eta}_x$.\\ Since $\mathcal{F}^1$ has an orientation, we may define a transversal orientation in the following way: a vector $v \in T_x \mathcal{F}_{loc}^c(x)$ points in the positive direction if for any positive vector $w \in T_x \mathcal{F}_{loc}^c(x)$ we have $\omega_x(v,w) >0$, where $\omega_x$ is the restriction of the volume form to $\mathcal{F}_{loc}^c(x)$. \\ Now consider the extremal atoms per central leaf. By the left extremal atom we mean the atom whose projection by the map $\pi:C\to\widetilde{C}$ is the left extreme one (see Figure \ref{fig.GlobalAtom}). Since $f$ preserves the orientation in $\mathcal{F}^1$, $f$ preserves the transversal orientation. Once again, proceeding as in the proof of Lemma \ref{lem.OneAtomQ}, we conclude that there is only one atom per global center leaf. Therefore, $\mu$ is virtually hyperbolic.\\ On the other hand, if \ref{it.3} holds, then $H(\mathcal{F}^1)$ must coincide with $E^s_A$ or $E_A^u$. Without loss of generality, let us assume $H(\mathcal{F}^1)$ coincides with $E_A^u$. By the proof of Theorem \ref{main:ergodic}, the set $\mathcal{M}$ intersects each $\mathcal{F}^1$-leaf in at most countably many points. Proceeding as before, using the $\mathcal{F}^1$ foliation instead of the center one, one can also conclude that $\mu$ is virtually hyperbolic.
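The counting mechanism used in Lemma \ref{lem.OneAtomQ} can be illustrated by a toy computation (the function and variable names below are ours, and the example is only a finite sketch, not the ergodic argument itself): if every conditional measure is purely atomic with atoms of equal weight $\delta_0$, then the profile $\delta\mapsto\mu(H_\delta^+)$ jumps from $1$ to $0$ exactly at $\delta_0$, and the number of atoms per leaf is $1/\delta_0$.

```python
from fractions import Fraction

def atom_mass_profile(weights, deltas):
    """Toy model: every conditional measure mu_x is atomic with the given
    atom weights.  Return delta -> mu(H_delta^+), i.e. the total mass
    carried by atoms of weight >= delta."""
    assert sum(weights) == 1  # each mu_x is a probability measure
    return {d: sum(w for w in weights if w >= d) for d in deltas}

# Four atoms of equal weight delta_0 = 1/4 on each leaf.
weights = [Fraction(1, 4)] * 4
profile = atom_mass_profile(weights, [Fraction(k, 8) for k in range(1, 9)])

# The profile equals 1 up to delta_0 and 0 afterwards; the jump locates
# delta_0, and the atom count per leaf is n = 1/delta_0.
delta_0 = max(d for d, mass in profile.items() if mass == 1)
print(delta_0, 1 / delta_0)  # -> 1/4 4
```

This matches the step in the proof where $\mu(H_{\delta_0}^+)=1$ forces all atoms to have weight $\delta_0$, hence $n=1/\delta_0$ atoms per element of $\mathcal{Q}$.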
\begin{figure} \centering \captionsetup{justification=centering} \newrgbcolor{ccqqqq}{0.8 0 0} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ttwwqq}{0.2 0.4 0} \psset{xunit=0.6cm,yunit=0.6cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-6.5,-8)(19.56,7.23) \parametricplot[linestyle=dotted]{1.438933038681401}{1.7070124324631315}{1*59.92*cos(t)+0*59.92*sin(t)+6.34|0*59.92*cos(t)+1*59.92*sin(t)+-62.39} \parametricplot[linestyle=dotted]{1.4366131522745456}{1.7048701232109778}{1*59.88*cos(t)+0*59.88*sin(t)+6.35|0*59.88*cos(t)+1*59.88*sin(t)+-61.35} \parametricplot[linestyle=dotted]{1.4545880574749945}{1.6891027161937202}{1*68.47*cos(t)+0*68.47*sin(t)+6.52|0*68.47*cos(t)+1*68.47*sin(t)+-69} \parametricplot[linewidth=1.6pt,linecolor=ccqqqq]{1.4362722835818533}{1.7053205450573163}{1*59.72*cos(t)+0*59.72*sin(t)+6.48|0*59.72*cos(t)+1*59.72*sin(t)+-59.19} \parametricplot[linestyle=dotted]{1.4398908084292725}{1.7045415640600527}{1*60.7*cos(t)+0*60.7*sin(t)+6.53|0*60.7*cos(t)+1*60.7*sin(t)+-59.19} \parametricplot[linestyle=dotted]{1.4369775988146314}{1.7047383570510684}{1*59.99*cos(t)+0*59.99*sin(t)+6.36|0*59.99*cos(t)+1*59.99*sin(t)+-57.46} \parametricplot[linestyle=dotted]{1.4275639066583594}{1.7112163309832855}{1*56.64*cos(t)+0*56.64*sin(t)+6.13|0*56.64*cos(t)+1*56.64*sin(t)+-53.06} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.2589490416790463}{0.2561277029838513}{1*15.73*cos(t)+0*15.73*sin(t)+-16.22|0*15.73*cos(t)+1*15.73*sin(t)+0.13} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.24847858790406097}{0.2533679762640946}{1*16.16*cos(t)+0*16.16*sin(t)+-15.63|0*16.16*cos(t)+1*16.16*sin(t)+0.16} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.21560038250441327}{0.21120664301181347}{1*18.97*cos(t)+0*18.97*sin(t)+-17.52|0*18.97*cos(t)+1*18.97*sin(t)+0.32}
\parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.22569590480210344}{0.22017112155274524}{1*18.19*cos(t)+0*18.19*sin(t)+-15.73|0*18.19*cos(t)+1*18.19*sin(t)+0.39} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.2299891503973468}{0.23557478716500682}{1*17.44*cos(t)+0*17.44*sin(t)+-13.94|0*17.44*cos(t)+1*17.44*sin(t)+0.34} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.23475855500593656}{0.22522388125080922}{1*17.67*cos(t)+0*17.67*sin(t)+-13.19|0*17.67*cos(t)+1*17.67*sin(t)+0.5} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.2406942442470834}{0.24560912946714195}{1*16.73*cos(t)+0*16.73*sin(t)+-11.22|0*16.73*cos(t)+1*16.73*sin(t)+0.4} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.2368613212533699}{0.23192095946279598}{1*17.34*cos(t)+0*17.34*sin(t)+-10.85|0*17.34*cos(t)+1*17.34*sin(t)+0.49} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.25504126325191834}{0.244983345635518}{1*16.28*cos(t)+0*16.28*sin(t)+-8.77|0*16.28*cos(t)+1*16.28*sin(t)+0.52} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.2644294392483273}{0.25447439575352954}{1*15.69*cos(t)+0*15.69*sin(t)+-7.15|0*15.69*cos(t)+1*15.69*sin(t)+0.5} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.23990094560670006}{0.24518545590870444}{1*16.76*cos(t)+0*16.76*sin(t)+-7.27|0*16.76*cos(t)+1*16.76*sin(t)+0.34} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.24954157024380397}{0.2550438078858211}{1*16.11*cos(t)+0*16.11*sin(t)+-5.58|0*16.11*cos(t)+1*16.11*sin(t)+0.29} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.27735663855019776}{0.2768195775981685}{1*14.69*cos(t)+0*14.69*sin(t)+-3.14|0*14.69*cos(t)+1*14.69*sin(t)+0.28} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.25942436688335313}{0.2651887834136855}{1*15.48*cos(t)+0*15.48*sin(t)+-2.92|0*15.48*cos(t)+1*15.48*sin(t)+0.15} \parametricplot[linestyle=dotted,linecolor=qqttzz]{-0.25925503532936034}{0.25861054882642875}{1*15.65*cos(t)+0*15.65*sin(t)+-2.12|0*15.65*cos(t)+1*15.65*sin(t)+0.11} 
\parametricplot[linewidth=1.2pt,linecolor=ttwwqq]{-0.1197877952429538}{0.0049240162956248066}{1*16.16*cos(t)+0*16.16*sin(t)+-15.63|0*16.16*cos(t)+1*16.16*sin(t)+0.16} \parametricplot[linewidth=1.2pt,linecolor=ttwwqq]{8.409078352180899E-4}{0.16892944583905878}{1*18.19*cos(t)+0*18.19*sin(t)+-15.73|0*18.19*cos(t)+1*18.19*sin(t)+0.39} \parametricplot[linewidth=1.2pt,linecolor=ttwwqq]{2.1025287395711277E-5}{0.11292633657533116}{1*17.67*cos(t)+0*17.67*sin(t)+-13.19|0*17.67*cos(t)+1*17.67*sin(t)+0.5} \parametricplot[linewidth=1.2pt,linecolor=ttwwqq]{-0.1921390198810604}{3.514068882600108E-4}{1*15.69*cos(t)+0*15.69*sin(t)+-7.15|0*15.69*cos(t)+1*15.69*sin(t)+0.5} \parametricplot[linewidth=1.2pt,linecolor=ttwwqq]{0.007043515370644108}{0.06566601556149111}{1*16.76*cos(t)+0*16.76*sin(t)+-7.27|0*16.76*cos(t)+1*16.76*sin(t)+0.34} \parametricplot[linewidth=1.2pt,linecolor=ttwwqq]{0.0048587683820751305}{0.1341039757655995}{1*15.48*cos(t)+0*15.48*sin(t)+-2.92|0*15.48*cos(t)+1*15.48*sin(t)+0.15} \psline[linewidth=1.8pt,linecolor=ccqqqq]{->}(5.51,0.53)(6.49,0.54) \psline[linewidth=1.2pt,linecolor=ttwwqq]{->}(0.53,-0.24)(0.53,0.24) \psline[linewidth=1.2pt,linecolor=ttwwqq]{->}(2.46,0.9)(2.47,0.4) \psline[linewidth=1.2pt,linecolor=ttwwqq]{->}(4.47,0.95)(4.48,0.51) \psline[linewidth=1.2pt,linecolor=ttwwqq]{->}(8.54,0.08)(8.55,0.5) \psline[linewidth=1.2pt,linecolor=ttwwqq]{->}(9.48,0.95)(9.49,0.46) \psline[linewidth=1.2pt,linecolor=ttwwqq]{->}(12.55,0.72)(12.56,0.22) \psline{->}(-3.22,1.84)(0.53,0.24) \rput[tl](6.58,-4){\qqttzz{$ \mathcal{F}^1 $}} \rput[tl](-3,-0.9){$ \mathcal{F}^2 $} \rput[lt](-5.38,2.33){\parbox{2.98 cm}{Left extremal \\ atom}} \begin{scriptsize} \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](0.42,-1.77) \rput[bl](0.8,-1.64){\ttwwqq{$a_1$}} \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](2.21,3.44) \rput[bl](2.31,3.58){\ttwwqq{$a_2$}} \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](4.37,2.49) \rput[bl](4.47,2.63){\ttwwqq{$a_3$}} \psdots[dotsize=4pt 
0,dotstyle=*,linecolor=ttwwqq](8.25,-2.5) \rput[bl](8.6,-2.35){\ttwwqq{$a_4$}} \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](9.46,1.44) \rput[bl](9.57,1.6){\ttwwqq{$a_5$}} \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](12.42,2.22) \rput[bl](12.51,2.36){\ttwwqq{$a_{n_x}$}} \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](0.53,0.24) \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](2.47,0.4) \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](4.48,0.5) \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](8.54,0.5) \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](9.49,0.46) \psdots[dotsize=4pt 0,dotstyle=*,linecolor=ttwwqq](12.56,0.23) \end{scriptsize} \end{pspicture*} \vspace{-2cm} \caption{Global Atom in a center leaf.}\label{fig.GlobalAtom} \end{figure} \section{Proof of Theorem \ref{th.Equilibrium}}\label{sec.ProofEq} \textbf{Case 1:} $\mu(C)=0$\\ This case follows as in the proof of \cite[Theorem~A]{CrisostomoTahzibi}. We present it here for completeness.\\ Let $\nu$ be the unique equilibrium state of $(A,\phi)$. Assume by contradiction that there exists another equilibrium state $\eta$ for $(f,\varphi=\phi\circ H)$. By the uniqueness of $\nu$ we have that $H_{\ast}\mu=H_{\ast} \eta$. Let $\psi:\mathbb{T}^4\to \mathbb{R}$ be any continuous map. Since $H^{-1}H(C)=C$, we have $\eta(C)=0$. Therefore, \begin{align*} \int \psi d\mu &= \int_{\mathbb{T}^4\backslash C}\psi d\mu, \\ &=\int_{\mathbb{T}^4\backslash C}\psi\circ H^{-1} dH_{\ast}\mu,\\ &=\int_{\mathbb{T}^4\backslash C}\psi\circ H^{-1} dH_{\ast}\eta,\\ &=\int_{\mathbb{T}^4\backslash C}\psi d\eta,\\ &=\int \psi d\eta. \end{align*} Since $\psi$ is arbitrary, this implies that $\mu=\eta$.\\ \textbf{Case 2:} $\mu(C)=1$\\ Consider the partition: \[ \mathcal{P}:=\{\mathcal{P}(x):=H^{-1}H(x)|x\in C\}, \] and denote by $\mu_x$ the conditional measure of $\mu$ supported on $\mathcal{P}(x)$. We proceed as in the proof of Theorem \ref{main:ergodic}.
Hence, we have that \[ \mu_x=\sum_{j=1}^{n_x}p_j(x)\delta_{a_j(x)}, \] for some $a_j(x)\in R_j(x)$. Moreover, if conditions \ref{it.2} and \ref{it.3} are satisfied, then $\mu$ is virtually hyperbolic. The only thing left to prove is the existence of another equilibrium state.\\ \begin{lemma}\label{lem.MeasurableExtremal} If $H$ sends center leaves of $f$ to center leaves of $A$ and \ref{it.Eq3} is satisfied, then the set of extremal points of the intervals $Q(x)=\mathcal{P}(x)\cap\mathcal{F}^1(x)$ forms a measurable set. \end{lemma} \begin{proof} Let us denote by $\mathcal{F}^1_A$ the foliation of the center direction of $A$ induced by the image of $\mathcal{F}^1$ under the semiconjugacy $H$. We will prove the measurability of the lower extremal points of $Q(x)$; the case of the upper extremal points is similar.\\ Consider the flow $\varphi$ on $\mathbb{T}^4$ having constant speed one along $\mathcal{F}^1_A$. More precisely, we know that the leaves of $\mathcal{F}^1_A$ in the center foliation of $A$ are straight lines, orientable by assumption. Define $\varphi(t,x)$ as the unique point in $\mathcal{F}^1_A(x)$ at distance $t$ from $x$ inside this $\mathcal{F}^1_A$-leaf, in the positive direction.\\ Following the proof of \cite[Lemma~3.2]{PonceTahzibiVaraoBernoulli}, we have that $H(C)$ is a measurable set. Therefore, $\varphi(-1/n,H(C))$ is a measurable set. Furthermore, since $H$ is continuous, the set $H^{-1}\left(\varphi(-1/n,H(C))\right)$ is also measurable.\\ Consider $\hat{C}=C/\mathcal{Q}$ where $\mathcal{Q}:=\{Q(x):=\mathcal{P}(x)\cap\mathcal{F}^1(x):x\in C\}$. Let \[ \phi_n:\hat{C}\to H^{-1}\left(\varphi(-1/n,H(C))\right), \] be the function given by the Measurable Choice Theorem \ref{th.Measurable} applied to the product $\hat{C}\times \mathbb{T}^4$ and the measurable set $G=H^{-1}\left(\varphi(-1/n,H(C))\right)$.\\ Notice that, fixing $Q(x)\in\hat{C}$, the sequence $\phi_n(Q(x))$ is increasing.
Therefore, we can define the function \begin{align*} \phi:&\hat{C}\to \mathbb{T}^4\\ &Q(x)\mapsto\lim_{n\to\infty}\phi_n(Q(x)), \end{align*} and by its construction $\phi(Q(x))$ is the lower extreme of $Q(x)$. Notice that $\phi$ is a measurable function because it is the limit of measurable functions. Let $\pi$ be the canonical projection and let $\hat{\mu}=\pi_{\ast}\mu$ be the measure in the quotient space. By Lusin's theorem, for any $n\in\mathbb{N}$ there exists a compact set $\hat{K}_n\subset\hat{C}$ such that $\hat{\mu}(\hat{K}_n^c)<1/n$ and $\phi$ is a continuous function when restricted to $\hat{K}_n$. Therefore, $\phi(\hat{K}_n)$ is a compact set. Without loss of generality we may consider $\hat{C}=\cup_{n\in\mathbb{N}}\hat{K}_n$. Therefore, \[ \phi(\hat{C})=\bigcup_{n\in\mathbb{N}}\phi(\hat{K}_n), \] is a measurable set. Thus, we have proved that the set of lower extremes of the intervals of $\mathcal{Q}$ is measurable. \end{proof} We have seen that the center foliation is measure theoretically equivalent to the partition of $\mathbb{T}^4$ into points, hence measurable. Let us denote by $(\hat{M},\hat{\mu})$ the quotient space $\hat{M}:=\mathbb{T}^4/\mathcal{F}^c$ equipped with the quotient measure. We denote by $\hat{f}:\hat{M}\to\hat{M}$ the induced map on the quotient space. Therefore, since $\mu$ is $f$-invariant, $\hat{\mu}$ is $\hat{f}$-invariant.\\ Notice that, by the virtual hyperbolicity proved above, every element $\hat{x}\in\hat{M}$ can be identified with the unique $\mathcal{Q}_x(z)\subset\mathcal{F}^c(x)$ containing its atom. When $\mathcal{Q}_x(z)$ is a collapse interval inside an $\mathcal{F}^1$-leaf, we define $\mathcal{Q}(\hat{x}):=\mathcal{Q}_x(z)$. On the other hand, if $\mathcal{Q}_x(z)$ is a point, this means that the rectangle $R_j$ containing the atom is one dimensional and contained in an $\mathcal{F}^2$-leaf.
In this case, we define $\mathcal{Q}(\hat{x}):=R_j$ (see Figure \ref{fig.RecDim1F2}).\\ Thus, we can write \[ \mu=\int\delta_{a(\hat{x})}d\hat{\mu}, \] where $a(\hat{x})$ is the atom inside the collapse interval $\mathcal{Q}(\hat{x})$. Choose $b(\hat{x})\neq a(\hat{x})$ to be the left (or right) extreme point of $\mathcal{Q}(\hat{x})$. Let us define \[ \eta=\int\delta_{b(\hat{x})}d\hat{\mu}, \] which is well-defined because $\{b(\hat{x}):\hat{x}\in\hat{M}\}$ is measurable by Lemma \ref{lem.MeasurableExtremal}. We claim that this is an $f$-invariant ergodic measure satisfying $H_{\ast}\eta=H_{\ast}\mu$. In order to see this, consider any continuous map $\psi$ and notice that \begin{align*} \int\psi\circ fd\eta &=\int\int\psi\circ f d\delta_{b(\hat{x})}d\hat{\mu}\\ &=\int\psi(f(b(\hat{x})))d\hat{\mu}\\ &=\int\psi(b(\hat{f}(\hat{x})))d\hat{\mu}\\ &=\int\psi(b(\hat{x}))d\hat{\mu}\\ &=\int\psi d\eta,\\ \end{align*} where the third equality comes from the invariance of the collapse intervals and the fact that $f$ preserves the orientation of the $\mathcal{F}^i$-foliations, $i=1,2$, and the fourth equality is due to the $\hat{f}$-invariance of $\hat{\mu}$.\\ To see the ergodicity of $\eta$, consider any invariant subset $D$ with positive $\eta$-measure. Since $\hat{\mu}$ is ergodic and $f(b(\hat{x}))=b(\hat{f}(\hat{x}))$, the set $\{\hat{x}:\mathbbm{1}_D(b(\hat{x}))=1\}$ is $\hat{f}$-invariant; by the ergodicity of $\hat{\mu}$ it has measure zero or one, and since it has positive measure it must have full measure, which implies $\eta(D)=1$.\\ Notice that, if $\varphi=\phi\circ H$, since $H(a(\hat{x}))=H(b(\hat{x}))$, then \[ \int\varphi d\eta=\int\varphi(b(\hat{x})) d\hat{\mu}=\int\varphi(a(\hat{x})) d\hat{\mu}=\int\varphi d\mu. \] However, by the essential uniqueness of the disintegration, we have that $\eta\neq\mu$.\\ We are left with the task of showing that $h_{\eta}(f)=h_{\mu}(f)$.
But this is a direct consequence of the fact that $(f,\mu)$ and $(f,\eta)$ are measure theoretically isomorphic via the map that sends $a(\hat{x})$ to $b(\hat{x})$. Thus, $\eta$ is also an equilibrium state for $(f,\varphi)$.\\ \section*{Acknowledgments} The authors thank Cristina Lizana for communicating her work and for clarifying some questions. C.F.A. was partially funded by CAPES-Brazil (grant \#2019/88882.329056-01). A.S. thanks the Math Department of ICMC (São Carlos), where most of the work was developed. This work was partially supported by FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo), grant \#2018/18990-0, and by Universidad de Costa Rica for A.S.; R.V. was partially supported by the National Council for Scientific and Technological Development (CNPq), Brazil, and by FAPESP (grants \#17/06463-3 and \#16/22475-9).
\section{Introduction} Quantum knot invariants arise in theoretical quantum physics, where a knot can be regarded as the spacetime orbit of a charged particle. A typical example of such an invariant is the $n$-colored Jones polynomial of the knot $K=4_1$ (the figure-eight knot), which is given by\footnote{Throughout the paper, empty sums equal $0$, and empty products equal $1$.} \[ J_{4_1,n} (q) = \sum_{N=0}^\infty q^{-nN} \prod_{j=1}^N (1 - q^{n-j}) (1 - q^{n+j}), \qquad n \ge 2, \] defined for roots of unity $q$. For a fixed $q$, the mapping $n \mapsto J_{4_1,n}(q)$ is periodic in $n$, and so the definition can be extrapolated backwards in $n$ to give \begin{equation}\label{Fq} J_{4_1,0}(q)=\sum_{N=0}^{\infty} |(1-q)(1-q^2) \cdots (1-q^N)|^2 \end{equation} for a root of unity $q$. Note that both sums actually have only finitely many terms for any root of unity $q$. The figure-eight knot is the simplest hyperbolic knot; for other hyperbolic knots $K$ one obtains formulas for $J_{K,0}$ which are of a somewhat similar but more complicated nature. The so-called Kashaev invariant $\langle K \rangle_n = J_{K,n} (e(1/n))$, $n=1,2,\dots$ is another quantum invariant of the knot $K$; here and for the rest of the paper $e(x)=e^{2 \pi i x}$. The Kashaev invariant plays a key role in the volume conjecture, an open problem in knot theory which relates quantum invariants of knots with the hyperbolic geometry of knot complements. For more general background information, see \cite{MY}; in the context of our present paper we refer to \cite{BD} and the references therein. The functions $J_{K,0}$ also have an interpretation as quantum modular forms as introduced by Zagier \cite{ZA}, and are predicted by Zagier's modularity conjecture to satisfy an approximate modularity property. 
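For concreteness, the extrapolated invariant \eqref{Fq} is a finite sum at any root of unity and is straightforward to evaluate. The following sketch (our own illustration, not code from the literature) computes $J_{4_1,0}(e(a/b))$ directly from \eqref{Fq}:

```python
import cmath

def J0(a, b):
    """J_{4_1,0}(e(a/b)) = sum_{N>=0} |(1-q)(1-q^2)...(1-q^N)|^2 with q = e(a/b).

    Terms with N >= b vanish, since the product then contains the factor
    1 - q^b = 0, so it suffices to sum over 0 <= N < b.
    """
    q = cmath.exp(2j * cmath.pi * a / b)
    total, prod = 0.0, 1.0 + 0j
    for N in range(b):
        total += abs(prod) ** 2      # N-th term; N = 0 gives the empty product 1
        prod *= 1 - q ** (N + 1)
    return total
```

For example, $J_{4_1,0}(e(1/2)) = 1 + |1-(-1)|^2 = 5$.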
For the figure-eight knot the modularity conjecture has been established \cite{BD,GZ}; that is, the function $J_{4_1,0}(q)$ satisfies a remarkable modularity relation of the form $J_{4_1,0}(e(\gamma r))/J_{4_1,0}(e(r)) \sim \varphi_{\gamma} (r)$, where $\gamma \in \mathrm{SL}_2(\mathbb{Z})$ acts on rational numbers $r$ in a natural way, and the asymptotics holds as $r \to \infty$ along rational numbers with bounded denominators. The ratio $J_{4_1,0}(e(\gamma r))/J_{4_1,0}(e(r))$ as a function of $r$ in general has a jump at every rational point, and consequently the asymptotics of $J_{4_1,0}(e(a/b))$ along rationals $a/b$ with $b \to \infty$ is quite involved. It is known \cite{AH} that \[ J_{4_1,0}(e(1/n)) \sim \frac{n^{3/2}}{\sqrt[4]{3}} \exp \left( \frac{\textup{Vol}(4_1)}{2 \pi} n \right) \qquad \textrm{as } n \to \infty , \] where \[ \textup{Vol}(4_1) = 4 \pi \int_0^{5/6} \log \left( 2 \sin (\pi x) \right) \, \mathrm{d}x \approx 2.02988 \] is the hyperbolic volume of the complement of the figure-eight knot; this follows from the fact that the volume conjecture, as well as its stronger form, the arithmeticity conjecture have been verified for $4_1$. Bettin and Drappeau \cite[Theorem 3]{BD} found the asymptotics of $J_{4_1,0}(e(a/b))$ for more general rationals $a/b$ in terms of their continued fraction expansions: if $a/b=[a_0;a_1, \dots, a_k]$, $a_k>1$, is a sequence of rational numbers such that $(a_1+\cdots +a_k)/k \to \infty$, then \begin{equation}\label{BettinDrappeau} \log J_{4_1,0}(e(a/b)) \sim \frac{\textup{Vol}(4_1)}{2 \pi} \left( a_1 + \cdots + a_k \right) . \end{equation} The result applies to a large class of rationals, including $1/n=[0;n]$, as well as to almost all reduced fractions with denominator at most $n$, as $n \to \infty$. Verifying a conjecture made by Bettin and Drappeau, in this paper we will show that \eqref{BettinDrappeau} in general fails to be true without the assumption $(a_1+\cdots +a_k)/k \to \infty$. 
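The numerical value of the volume can be recovered directly from the integral formula above; the following is a minimal quadrature sketch (our own illustration), where the substitution $x=t^2$ tames the logarithmic singularity of the integrand at $x=0$:

```python
import math

# Vol(4_1) = 4*pi * Int_0^{5/6} log(2 sin(pi x)) dx; substitute x = t^2, dx = 2t dt
def g(t):
    return 0.0 if t == 0.0 else 2.0 * t * math.log(2.0 * math.sin(math.pi * t * t))

n = 20_000                              # even number of Simpson subintervals
b = math.sqrt(5.0 / 6.0)
h = b / n
s = g(0.0) + g(b) + sum((4 if i % 2 else 2) * g(i * h) for i in range(1, n))
vol = 4.0 * math.pi * s * h / 3.0
# vol is close to 2.02988, the hyperbolic volume of the figure-eight complement
```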
The individual terms in \eqref{Fq} can be expressed in terms of the so-called Sudler products, which are defined as \begin{equation} \label{sudler_def} P_N(\alpha):=\prod_{n=1}^N |2\sin (\pi n \alpha)|, \qquad \alpha \in \mathbb{R}. \end{equation} This could also be written using $q$-Pochhammer symbols as \[ P_N(\alpha) =| (q;q)_N| = |(1-q)(1-q^2) \cdots (1-q^N)| \qquad \text{with } q = e(\alpha) , \] but \eqref{sudler_def} seems to be the more common notation. The history of such products goes back at least to work of Erd\H os and Szekeres \cite{ESZ} and Sudler \cite{sudler} around 1960, and they seem to arise in many different contexts (see \cite{LU} for references). Bounds for such products also play a role in the solution of the Ten Martini Problem by Avila and Jitomirskaya \cite{AJ}. It is somewhat surprising that, despite the obvious connection between \eqref{Fq} and \eqref{sudler_def}, we have not found a reference where both objects appear together. A possible explanation is that \eqref{Fq} is only well-defined when $q = e(a/b)$ with $a/b$ being a rational, while the asymptotic order of \eqref{sudler_def} as $N \to \infty$ is only interesting when $\alpha$ is irrational. We will come back to this issue in Proposition~\ref{transferprinciple} below. Note that for any rational number $a/b$ we have $P_N (a/b)=0$ whenever $N \ge b$; in particular, this means that $J_{4_1,0}(e(a/b)) = \sum_{N=0}^{b-1} P_N (a/b)^2$. The asymptotics \eqref{BettinDrappeau} a fortiori holds for more general functionals of the sequence $(P_N(a/b))_{0 \le N <b}$. 
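The equality $|1-e(n\alpha)| = 2|\sin(\pi n \alpha)|$ behind this connection is easy to confirm numerically; a self-contained sketch (the fraction $2/7$ is an arbitrary choice):

```python
import cmath
import math

def sudler(N, alpha):
    """P_N(alpha) = prod_{n=1}^N |2 sin(pi n alpha)|; the empty product (N = 0) is 1."""
    p = 1.0
    for n in range(1, N + 1):
        p *= abs(2.0 * math.sin(math.pi * n * alpha))
    return p

def J0(a, b):
    """J_{4_1,0}(e(a/b)) computed from the q-Pochhammer form |(q;q)_N|^2."""
    q = cmath.exp(2j * cmath.pi * a / b)
    total, prod = 0.0, 1.0 + 0j
    for N in range(b):
        total += abs(prod) ** 2
        prod *= 1 - q ** (N + 1)
    return total

a, b = 2, 7
# J_{4_1,0}(e(a/b)) = sum_{N=0}^{b-1} P_N(a/b)^2, since |1 - e(n a/b)| = 2|sin(pi n a/b)|
lhs = J0(a, b)
rhs = sum(sudler(N, a / b) ** 2 for N in range(b))
```

Note also that `sudler(b, a/b)` vanishes, matching the remark that $P_N(a/b)=0$ for $N \ge b$.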
For instance, it is not difficult to see that under the same conditions as those of \eqref{BettinDrappeau} for any real $c>0$, \begin{equation}\label{BettinDrappeauc} \log \left( \sum_{N=0}^{b-1} P_N (a/b)^c \right)^{1/c} \sim \frac{\textup{Vol}(4_1)}{4 \pi} \left( a_1 + \cdots + a_k \right) , \end{equation} and also \begin{equation}\label{BettinDrappeaumax} \log \max_{0 \le N <b} P_N (a/b) \sim \frac{\textup{Vol}(4_1)}{4 \pi} \left( a_1 + \cdots + a_k \right) . \end{equation} In particular, the maximal term in $J_{4_1,0}(e(a/b)) = \sum_{N=0}^{b-1} P_N(a/b)^2$ is almost as large as the sum itself. For the sake of completeness, we formally deduce \eqref{BettinDrappeauc} and \eqref{BettinDrappeaumax} from \eqref{BettinDrappeau} in Section \ref{generalsection}. It is natural to consider the same quantities along the sequence of convergents $p_k/q_k=[a_0;a_1, \dots, a_k]$ to a given irrational number $\alpha=[a_0;a_1,a_2, \dots]$. Formulas \eqref{BettinDrappeau}, \eqref{BettinDrappeauc} and \eqref{BettinDrappeaumax} give precise results for a large class of irrationals, but not when the sequence $a_k$ is bounded; our main result concerns this case. Recall that $\alpha$ is \textit{badly approximable} if and only if $a_k$ is bounded, and that $\alpha$ is a \textit{quadratic irrational} if and only if $a_k$ is eventually periodic; in particular, quadratic irrationals are badly approximable. \begin{thm}\label{quadraticasymptotics} Let $\alpha$ be a quadratic irrational. For any real $c>0$ and any $k \ge 1$, \[ \log \left( \sum_{N=0}^{q_k-1} P_N (p_k/q_k)^c \right)^{1/c} = K_c (\alpha ) k +O \left( \max \{ 1, 1/c \} \right) \] and \[ \log \max_{0 \le N < q_k} P_N (p_k/q_k) = K_{\infty} (\alpha ) k +O(1) \] with some constants $K_c (\alpha), K_{\infty}(\alpha) >0$. The implied constants depend only on $\alpha$. \end{thm} \noindent In particular, we have $$ \log J_{4_1,0} (e(p_k/q_k)) = 2K_2 (\alpha ) k +O(1) .
$$ The proof of Theorem \ref{quadraticasymptotics} is based on the self-similar structure of quadratic irrationals; that is, on the periodicity of the continued fraction. In fact, it is not difficult to construct a badly approximable $\alpha$ for which the result is not true. Quadratic irrationals exhibit a remarkable deviation from the universal behavior of irrationals with unbounded partial quotients. In contrast to \eqref{BettinDrappeau}, \eqref{BettinDrappeauc} and \eqref{BettinDrappeaumax}, the constants $K_c(\alpha)$ and $K_{\infty}(\alpha)$ in Theorem \ref{quadraticasymptotics} are in general not equal to each other, or to $\textup{Vol}(4_1) \overline{a}(\alpha)/(4 \pi)$; here $\overline{a}(\alpha) = \lim_{k \to \infty} (a_1+\cdots +a_k)/k$ denotes the average partial quotient, which is of course simply the average over a period in the continued fraction expansion. We have not been able to calculate the precise value of $K_c (\alpha)$ for any specific quadratic irrational and for any $c$; as far as we can say, this seems to be a difficult problem. The precise value of $K_\infty (\alpha)$ is known for some quadratic irrationals with very small partial quotients, as a consequence of results in \cite{ATZ}; for $\alpha$ with larger partial quotients, calculating $K_\infty (\alpha)$ precisely also seems to be difficult. However, it is possible to give fairly good general upper and lower bounds for $K_c(\alpha)$ and $K_\infty (\alpha)$ in terms of the partial quotients. Recall that for any quadratic irrational $\alpha$, we have $\log q_k = \lambda (\alpha) k+O(1)$ with some constant $\lambda (\alpha )>0$; see Section \ref{quadraticsection} for a simple way of computing $\lambda (\alpha )$ from the continued fraction. We start with three simple observations. 
First, for any rational number $a/b$ with $(a,b)=1$, the identity\footnote{The last step follows e.g.\ from taking the limit as $x \to 1$ in the factorization $(x^b-1)/(x-1)=\prod_{j=1}^{b-1} (x-e(j/b))$.} \begin{equation}\label{lastterm} P_{b-1}(a/b) = \prod_{n=1}^{b-1} |1-e(na/b)| = \prod_{j=1}^{b-1} |1-e(j/b)| =b \end{equation} provides the trivial lower bound $\max_{0 \le N <b} P_N(a/b) \ge b$. This immediately yields \begin{equation}\label{triviallowerbound} K_{\infty} (\alpha ) \ge \lambda (\alpha ) . \end{equation} Second, for any rational number $a/b$ with $(a,b)=1$ and any real $c>0$, we also have \begin{equation}\label{trivialbounds} \left( \frac{1}{b} \sum_{N=0}^{b-1} P_N (a/b)^c \right)^{1/c} \le \max_{0 \le N <b} P_N (a/b) \le \left( \sum_{N=0}^{b-1} P_N (a/b)^c \right)^{1/c}, \end{equation} which in turn shows that \begin{equation}\label{KcKinftybounds} K_c(\alpha) - \frac{\lambda(\alpha)}{c} \leq K_\infty(\alpha) \leq K_c(\alpha) . \end{equation} Finally, we establish an antisymmetry of the sequence $(P_N(a/b))_{0 \le N <b}$; we call it the ``reflection principle''. It is based on an observation which was already made in \cite{ATZ}. \begin{prop}\label{dualityprinciple} For any rational number $a/b$ with $(a,b)=1$, and any integer $0 \leq N < b$, $$ \log P_N(a/b) + \log P_{b-N-1}(a/b) = \log b. $$ \end{prop} \noindent Note that Proposition \ref{dualityprinciple} immediately implies that \[ \log \max_{0 \le N<b} P_N (a/b) + \log \min_{0 \le N<b} P_N (a/b) = \log b, \] thus relating the largest with the smallest value of $P_N(a/b)$. In particular, all results for $\max_{0 \le N <b} P_N(a/b)$ have straightforward analogues for the minimum. As a nice further application of Proposition \ref{dualityprinciple}, we deduce the average value of $\log P_N (a/b)$ as \begin{equation}\label{averagelogPN} \frac{1}{b} \sum_{N=0}^{b-1} \log P_N (a/b) = \frac{\log b}{2} . 
\end{equation} From \eqref{triviallowerbound} it follows that $K_c(\alpha)$ and $K_{\infty} (\alpha)$ exceed $\textup{Vol}(4_1) \overline{a}(\alpha )/(4 \pi)$ whenever the quadratic irrational $\alpha$ has relatively small partial quotients. For instance, $\frac{1+\sqrt{5}}{2}=[1;1,1,1,\dots]$, $\sqrt{2}=[1;2,2,2,\dots]$, $\frac{1+\sqrt{13}}{2}=[2;3,3,3,\dots]$, $\sqrt{5}=[2;4,4,4,\dots]$, $\frac{1+\sqrt{29}}{2}=[3;5,5,5,\dots]$ and $\sqrt{10}=[3;6,6,6,\dots]$ satisfy \begin{equation}\label{Kinftylowerbound} \begin{split} K_{\infty} \bigg( \frac{1+\sqrt{5}}{2} \bigg) &\ge \log \frac{1+\sqrt{5}}{2} \approx 0.4812, \qquad K_{\infty} (\sqrt{2}) \ge \log (1+\sqrt{2}) \approx 0.8814, \\ K_{\infty} \bigg( \frac{1+\sqrt{13}}{2} \bigg) &\ge \log \frac{3+\sqrt{13}}{2} \approx 1.1948, \qquad K_{\infty} (\sqrt{5}) \ge \log (2+\sqrt{5}) \approx 1.4436, \\ K_{\infty} \bigg( \frac{1+\sqrt{29}}{2}\bigg) &\ge \log \frac{5+\sqrt{29}}{2} \approx 1.6472, \qquad K_{\infty} (\sqrt{10}) \ge \log (3+\sqrt{10}) \approx 1.8184, \end{split} \end{equation} whereas \[ \begin{split} \frac{\textup{Vol}(4_1)}{4 \pi} \overline{a} \bigg( \frac{1+\sqrt{5}}{2} \bigg) &\approx 0.1615, \qquad \frac{\textup{Vol}(4_1)}{4 \pi} \overline{a} (\sqrt{2}) \approx 0.3231, \\ \frac{\textup{Vol}(4_1)}{4 \pi} \overline{a} \bigg( \frac{1+\sqrt{13}}{2} \bigg) &\approx 0.4846, \qquad \frac{\textup{Vol}(4_1)}{4 \pi} \overline{a} (\sqrt{5}) \approx 0.6461, \\ \frac{\textup{Vol}(4_1)}{4 \pi} \overline{a} \bigg( \frac{1+\sqrt{29}}{2} \bigg) &\approx 0.8077, \qquad \frac{\textup{Vol}(4_1)}{4 \pi} \overline{a} (\sqrt{10}) \approx 0.9692 . \end{split} \] In particular, the sequence of convergents to these quadratic irrationals violate \eqref{BettinDrappeau}, \eqref{BettinDrappeauc} and \eqref{BettinDrappeaumax}, demonstrating that the condition $(a_1+\cdots +a_k)/k \to \infty$ cannot be removed. 
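Both the identity \eqref{lastterm} and the reflection principle of Proposition \ref{dualityprinciple} can be checked numerically; a quick sketch, with an arbitrarily chosen reduced fraction:

```python
import math

def sudler(N, alpha):
    # P_N(alpha) = prod_{n=1}^N |2 sin(pi n alpha)|
    p = 1.0
    for n in range(1, N + 1):
        p *= abs(2.0 * math.sin(math.pi * n * alpha))
    return p

a, b = 3, 11                                   # any reduced fraction works
assert abs(sudler(b - 1, a / b) - b) < 1e-6    # P_{b-1}(a/b) = b
for N in range(b):                             # log P_N + log P_{b-N-1} = log b
    s = math.log(sudler(N, a / b)) + math.log(sudler(b - 1 - N, a / b))
    assert abs(s - math.log(b)) < 1e-6
```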
For a quadratic irrational $\alpha$ with large partial quotients, the constants $K_c (\alpha)$ and $K_{\infty} (\alpha )$ are nevertheless close to $\textup{Vol}(4_1) \overline{a}(\alpha) /(4 \pi )$. Indeed, from results of Bettin and Drappeau \cite[Theorem 2 and Lemma 15]{BD} it follows that for any rational $a/b=[a_0;a_1, \dots, a_k]$, $a_k>1$, \begin{equation}\label{BettinDrappeaugeneral} \log J_{4_1,0}(e(a/b)) = \frac{\textup{Vol}(4_1)}{2 \pi} (a_1+\cdots +a_k) +O(A+k \log A) \end{equation} with $A=1+\max_{1 \le \ell \le k} a_{\ell}$ and a universal implied constant. A fortiori, for any real $c>0$, \begin{equation}\label{BettinDrappeaugeneralc} \log \left( \sum_{N=0}^{b-1} P_N(a/b)^c \right)^{1/c} = \frac{\textup{Vol}(4_1)}{4 \pi} (a_1+\cdots +a_k) +O(A+k \max \{ 1, 1/c \} \log A) \end{equation} and \begin{equation}\label{BettinDrappeaugeneralmax} \log \max_{0 \le N <b} P_N (a/b) = \frac{\textup{Vol}(4_1)}{4 \pi} (a_1+\cdots +a_k) +O(A+k \log A) . \end{equation} The last two relations immediately show that for any quadratic irrational $\alpha$ and any $0<c \le \infty$, \begin{equation}\label{Kcestimate} K_c (\alpha ) = \frac{\textup{Vol}(4_1)}{4 \pi} \overline{a}(\alpha ) +O( \max \{ 1, 1/c \} \log A(\alpha) ) \end{equation} with $A(\alpha ) = 1+ \max_{k \ge 1} a_k$ and a universal implied constant. In principle it might be the case that $K_\infty (\alpha )= K_c (\alpha )$ holds for $c$ beyond some threshold; note that this would be in accordance with the asymptotics \eqref{BettinDrappeauc} and \eqref{BettinDrappeaumax} for the case $(a_1 + \dots + a_k)/k \to \infty$. However, we rather believe that $K_\infty (\alpha)<K_c (\alpha )$ for all quadratic irrationals and all $c$. 
In this direction, from \eqref{averagelogPN} and the Jensen inequality applied with the convex function $e^{cx}$ we deduce that for any real $c>0$, \[ K_c (\alpha ) \ge \left( \frac{1}{c} + \frac{1}{2} \right) \lambda (\alpha) ; \] in light of \eqref{triviallowerbound} and \eqref{KcKinftybounds}, this is nontrivial for $0<c<2$. In particular, $K_{\infty} (\alpha) < K_c (\alpha )$ for all small enough $c$, and the set $\{ K_c (\alpha ) \, : \, c>0 \}$ is infinite for all quadratic irrationals. As we mentioned earlier, previous results on the Sudler product concerned the asymptotics of $P_N(\alpha)$ as $N \to \infty$ with a given irrational $\alpha$, whereas $J_{4_1,0}(e(a/b)) = \sum_{N=0}^{b-1} P_N(a/b)^2$ has been studied for rational $a/b$. To make these two types of results easier to compare, we prove the following simple ``transfer principle''. \begin{prop}\label{transferprinciple} Let $\alpha=[a_0;a_1,a_2,\dots]$ be an irrational number with convergents $p_k/q_k=[a_0;a_1,\dots, a_k]$. For any integers $k \ge 1$ and $0 \le N < q_k$, \[ \left| \log P_N (\alpha ) - \log P_N (p_k/q_k) \right| \ll \frac{\log A_k}{a_{k+1}} \] with $A_k=1+\max_{1 \le \ell \le k} a_{\ell}$ and a universal implied constant. \end{prop} \noindent In particular, for any real $c>0$, \[ \left| \log \left( \sum_{N=0}^{q_k-1} P_N(\alpha )^c \right)^{1/c} - \log \left( \sum_{N=0}^{q_k-1} P_N(p_k/q_k)^c \right)^{1/c} \right| \ll \frac{\log A_k}{a_{k+1}} \] and \[ \left| \log \max_{0 \le N <q_k} P_N (\alpha ) - \log \max_{0 \le N <q_k} P_N (p_k/q_k) \right| \ll \frac{\log A_k}{a_{k+1}} . 
\] Applying the transfer principle to a quadratic irrational $\alpha$, Theorem \ref{quadraticasymptotics} can thus be restated in the irrational setting as \begin{equation}\label{irrationalsetting} \begin{split} \log \left( \sum_{N=0}^M P_N(\alpha )^c \right)^{1/c} &= \frac{K_c (\alpha )}{\lambda (\alpha)} \log M +O \left( \max \{ 1, 1/c \} \right) , \\ \log \max_{0 \le N \le M} P_N(\alpha ) &= \frac{K_{\infty}(\alpha )}{\lambda (\alpha)} \log M +O(1) , \end{split} \end{equation} where $\lambda (\alpha) >0$ is defined by $\log q_k = \lambda (\alpha)k+O(1)$, as before. For an arbitrary irrational $\alpha$ the reflection principle becomes, with the notation of Proposition \ref{transferprinciple}, \[ \log P_N (\alpha) + \log P_{q_k-N-1} (\alpha) = \log q_k + O \left( \frac{\log A_k}{a_{k+1}} \right) ; \] in particular, \begin{equation}\label{dualityirrational} \log \max_{0 \le N <q_k} P_N (\alpha) + \log \min_{0 \le N <q_k} P_N (\alpha) = \log q_k +O \left( \frac{\log A_k}{a_{k+1}} \right) . \end{equation} For the sake of simplicity, we state all remaining results in the irrational setting only. Erd\H{o}s and Szekeres \cite{ESZ} proved that $\liminf P_N (\alpha ) =0$ for almost all $\alpha$ in the sense of the Lebesgue measure, and asked whether the relation is actually true for all irrational $\alpha$. Lubinsky \cite{LU} gave more quantitative results on $P_N (\alpha)$ in terms of the continued fraction expansion of $\alpha$. The metric result of Erd\H{o}s and Szekeres was extended to a convergence/divergence type criterion, and it was also shown that $\liminf P_N(\alpha) =0$ whenever $\alpha$ has unbounded partial quotients. In addition, the results in the same paper imply \[ \liminf_{N \to \infty} P_N (\alpha ) < \infty \qquad \textrm{and} \qquad \limsup_{N \to \infty} \frac{P_N (\alpha )}{N} >0 \] for all irrational $\alpha$. 
Note that the relations in the previous line also follow from the identity \eqref{lastterm} and the transfer and reflection principles; in fact, the limsup relation is a far-reaching generalization of our trivial lower bound \eqref{triviallowerbound}. More recently, Grepstad, Kaltenb\"ock and Neum\"uller \cite{GKN} established the remarkable relation $\liminf P_N (\frac{1+\sqrt{5}}{2}) >0$, thus answering the question of Erd\H{o}s and Szekeres in the negative. On the other hand, however, in the paper \cite{GKN2} by the same authors it was shown that for the special irrational $\alpha = [0;a,a,a,\dots]=(\sqrt{a^2+4}-a)/2$ one has $\liminf P_N (\alpha)=0$ whenever $a$ is sufficiently large. Thus, rather remarkably, the question whether $\liminf P_N (\alpha)>0$ or $=0$ depends on the actual size of the partial quotients of $\alpha$ in a very sensitive way. Numerical experiments suggested a similar change of behavior for large values of $P_N$: for $\alpha= [0;a,a,a,\dots]$ it was conjectured that $\limsup P_N(\alpha)/N < \infty$ or $=\infty$, depending on the size of $a$. This problem was settled in \cite{ATZ}, where it was proved that for $\alpha= [0;a,a,a,\dots]$, \begin{equation}\label{ale5} \liminf_{N \to \infty} P_N (\alpha ) >0 \qquad \textrm{and} \qquad \limsup_{N \to \infty} \frac{P_N(\alpha )}{N} < \infty \qquad \textrm{if } a \le 5 \end{equation} and \begin{equation}\label{age6} \liminf_{N \to \infty} P_N (\alpha ) =0 \qquad \textrm{and} \qquad \limsup_{N \to \infty} \frac{P_N(\alpha )}{N} = \infty \qquad \textrm{if } a \ge 6. \end{equation} Regarding general badly approximable irrationals $\alpha$, Lubinsky \cite{LU} proved that $|\log P_N (\alpha)| \ll \log N$; equivalently, \begin{equation}\label{c1c2} N^{-c_1} \ll P_N (\alpha ) \ll N^{c_2} \end{equation} with some constants $c_1, c_2$ and implied constants depending on $\alpha$. 
Let $c_1(\alpha)$ resp.\ $c_2(\alpha)$ denote the infimum of all $c_1$ resp.\ $c_2$ for which \eqref{c1c2} holds; Lubinsky remarked that it is an interesting problem to determine these constants. The reflection principle \eqref{dualityirrational} immediately shows that given a badly approximable $\alpha$ and a real constant $c$, we have $P_N (\alpha) \gg N^{-c} \Leftrightarrow P_N (\alpha ) \ll N^{1+c}$, with implied constants depending on $\alpha$. Therefore $c_2(\alpha)= c_1 (\alpha)+1$, which is a striking general relation that seems not to have been noticed so far. Thus establishing the optimal value of $c_1$ and that of $c_2$ in \eqref{c1c2} are actually one and the same problem. For a quadratic irrational $\alpha$ our main result \eqref{irrationalsetting} shows that $c_2(\alpha ) =c_1(\alpha )+1= K_{\infty} (\alpha )/\lambda(\alpha)$, and in fact \[ 0< \liminf_{N \to \infty} N^{c_1(\alpha)} P_N(\alpha) < \infty \quad \textrm{and} \quad 0< \limsup_{N \to \infty} \frac{P_N(\alpha )}{N^{c_2(\alpha )}} < \infty . \] In particular, \[ \liminf_{N \to \infty} P_N(\alpha )>0 \Longleftrightarrow \limsup_{N \to \infty} \frac{P_N(\alpha)}{N} < \infty \Longleftrightarrow K_{\infty}(\alpha ) = \lambda (\alpha ) , \] where the last condition means that our trivial lower bound \eqref{triviallowerbound} holds with equality. Note that this relation also explains why the behavior of the liminf and the limsup in \eqref{ale5} and \eqref{age6} changes at the same critical value of $a$; this is also reflected in the six examples we gave in \eqref{Kinftylowerbound}, where we actually have equality everywhere except for $\sqrt{10}$, in which case the inequality is strict. It seems to be a difficult problem to give a complete characterization of all quadratic irrationals $\alpha$ such that $K_{\infty}(\alpha) = \lambda (\alpha)$; this is the subject of an upcoming paper of Grepstad, Neum\"uller and Zafeiropoulos \cite{GNZ}. 
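The max--min duality used above (the product of the largest and the smallest value of $P_N(a/b)$ equals $b$ exactly, by the reflection principle) can be observed along golden-mean convergents, which are ratios of consecutive Fibonacci numbers; a small sketch of our own:

```python
import math

def sudler(N, alpha):
    # P_N(alpha) = prod_{n=1}^N |2 sin(pi n alpha)|
    p = 1.0
    for n in range(1, N + 1):
        p *= abs(2.0 * math.sin(math.pi * n * alpha))
    return p

F = [1, 1]                          # convergents of (1+sqrt(5))/2 are ratios F_{k+1}/F_k
while F[-1] < 300:
    F.append(F[-1] + F[-2])
a, b = F[-1], F[-2]                 # consecutive Fibonacci numbers are coprime
vals = [sudler(N, a / b) for N in range(b)]
# by the reflection principle, max * min = b (up to floating-point error)
assert abs(max(vals) * min(vals) - b) < 1e-6 * b
```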
The results in our paper allow us to give a fairly precise estimate for the constants $c_1(\alpha ), c_2(\alpha)$ in the question of Lubinsky. From \eqref{Kcestimate} with $c=\infty$ we obtain that for any quadratic irrational $\alpha$, \[ c_2(\alpha ) = c_1(\alpha ) +1= \frac{\textup{Vol}(4_1)}{4 \pi} \cdot \frac{\overline{a}(\alpha )}{\lambda (\alpha )} + O \left( \frac{\log A (\alpha)}{\lambda (\alpha )} \right) \] with a universal implied constant. In particular, for $\alpha = [0;a,a,a,\dots]$ we have \[ c_2(\alpha ) = c_1(\alpha ) +1 = \frac{\textup{Vol}(4_1)}{4 \pi} \cdot \frac{a}{\log \frac{a+\sqrt{a^2+4}}{2}} +O(1) = \frac{\textup{Vol}(4_1)}{4 \pi} \cdot \frac{a}{\log a} +O(1) . \] The discussion above shows that $K_{\infty} (\alpha) \ge K_{\infty} (\frac{1+\sqrt{5}}{2}) = \log \frac{1+\sqrt{5}}{2}$ for all quadratic irrationals. We do not know whether $K_c (\alpha) \ge K_c (\frac{1+\sqrt{5}}{2})$ for all quadratic irrationals and all $c>0$; in fact, we do not even know the precise value of $K_c (\frac{1+\sqrt{5}}{2})$ for any $c$. We mention, however, that numerical evidence found by Zagier \cite{ZA} and Bettin and Drappeau \cite{BD} suggests that for the golden mean we have $\log J_{4_1,0}(e(p_k/q_k)) \approx 1.1 k$; that is, $K_2 (\frac{1+\sqrt{5}}{2}) \approx 0.55$. In this context, it is an interesting question to characterize those values of $N$ for which particularly large resp.\ small values of $P_N(\alpha)$ occur. It is also interesting to estimate the relative number of indices which generate such values of $P_N (\alpha)$. This would shed some light on the relation between the numbers $K_c(\alpha)$ and $K_\infty(\alpha)$ in Theorem \ref{quadraticasymptotics} and \eqref{irrationalsetting}. 
Essentially, the problem is whether the sum $\sum_{N=0}^M P_N(\alpha)^c$ is dominated by a very small number of indices $N$ which produce particularly large values of $P_N$, or if there are enough such indices so that the full sum is of a significantly different asymptotic order than its maximal term. We plan to come back to all these questions in a future paper. Finally, we mention a further open problem. In \cite{ZA} Zagier introduced the function $h(x) = \log \left(J_{4_1,0}(e(x)) / J_{4_1,0}(e(1/x)) \right)$. A conjecture of Zagier, established by Bettin and Drappeau in \cite{BD}, implies that $h$ has jumps at all rational points. Zagier also suggested that $h(x)$ is continuous at irrational values of $x$ (more precisely, since $h$ is formally only defined for rational $x$, the conjecture is that $h$ can be extended to all reals such that it is continuous at irrational values). Let $\alpha$ be a quadratic irrational whose continued fraction expansion is of the simple form $\alpha = [0;a,a,a,\dots]$, and let $p_k/q_k$ be its convergents. Then it is easily seen that $h(p_k/q_k) = \log \left(J_{4_1,0}(e(p_k/q_k)) \right) - \log \left( J_{4_1,0}(e(p_{k-1}/q_{k-1})) \right)$. Thus, while we cannot prove that $h(p_k/q_k)$ converges as $k \to \infty$, as a consequence of Theorem \ref{quadraticasymptotics} we can at least conclude that $K^{-1} \sum_{k=1}^K h(p_k/q_k)$ converges as $K \to \infty$. Note that if the error term in Theorem \ref{quadraticasymptotics} could be reduced from $O(1)$ to $o(1)$, then we could deduce that actually $h(p_k/q_k)$ itself converges as $k \to \infty$, so such a conclusion is just beyond the reach of our theorem. Finally, note that Theorem \ref{quadraticasymptotics} implies that if $h$ can indeed be continuously extended to $\alpha$, then the only possible value is $h(\alpha) = 2 K_2(\alpha)$ with $K_2 (\alpha)$ from Theorem \ref{quadraticasymptotics}. 
Thus our results can be seen as progress towards Zagier's problem, while it seems that there is still a long way to go for a full solution of the problem. From the discussion above, one might expect that the problem requires different approaches according to whether the partial quotients of $\alpha$ are large (say, as in the case $(a_1 + \cdots + a_k)/k \to \infty$), or small (say, bounded). The most difficult case could be the one when the partial quotients of $\alpha$ are small, but there is no particular structure such as periodicity. \section{General rationals and irrationals}\label{generalsection} Recalling \eqref{trivialbounds}, to deduce \eqref{BettinDrappeauc} and \eqref{BettinDrappeaumax} from \eqref{BettinDrappeau} we only need to show that the condition $(a_1+\cdots +a_k)/k \to \infty$ implies that $\log b /(a_1+\cdots +a_k) \to 0$; indeed, this will show that for any $c>0$, \[ \log \left( \sum_{N=0}^{b-1} P_N(a/b)^c \right)^{1/c} \sim \log \max_{0 \le N <b} P_N (a/b) . \] From the recursion satisfied by the convergents we get $b \le (a_1+1)\cdots (a_k+1)$. Letting $\overline{a}_k=(a_1+\cdots +a_k)/k$, the AM--GM inequality gives \[ \frac{\log b}{a_1+\cdots +a_k} \le \frac{\log ((a_1+1)\cdots (a_k+1))}{a_1+\cdots +a_k} \le \frac{\log (\overline{a}_k+1)}{\overline{a}_k} \to 0 , \] and we are done. To deduce \eqref{BettinDrappeaugeneralc} and \eqref{BettinDrappeaugeneralmax} from \eqref{BettinDrappeaugeneral}, simply note that \[ \log b \le \log (a_1+1) + \cdots +\log (a_k+1) = O(k \log A), \] and hence \eqref{trivialbounds} shows that for any $c>0$, \[ \log \left( \sum_{N=0}^{b-1} P_N(a/b)^c \right)^{1/c} = \log \max_{0 \le N < b} P_N (a/b) + O\left( \frac{k}{c} \log A \right) . \] \subsection{The reflection principle} \begin{proof}[Proof of Proposition \ref{dualityprinciple}] By the definition of Sudler products, for any integer $0 \le N <b$, \begin{equation}\label{factorization} P_N (a/b) \cdot \prod_{n=N+1}^{b-1} |2 \sin (\pi n a/b)| = P_{b-1} (a/b) . 
\end{equation} A simple reindexing shows that here \[ \prod_{n=N+1}^{b-1} |2 \sin (\pi n a/b)| = \prod_{j=1}^{b-N-1} |2 \sin (\pi (b-j) a/b)| = P_{b-N-1} (a/b) . \] As observed in \eqref{lastterm}, we also have $P_{b-1}(a/b)=b$. Hence \eqref{factorization} yields \[ \log P_N (a/b) + \log P_{b-N-1} (a/b) = \log b, \] as claimed. \end{proof} \subsection{The transfer principle} Let $\alpha =[a_0;a_1,a_2,\dots]$ be an arbitrary irrational number with convergents $p_k/q_k=[a_0;a_1, \dots, a_k]$. Let $A_k =1+ \max_{1 \le \ell \le k} a_{\ell}$, and let $\| x \|$ denote the distance from a real number $x$ to the nearest integer. The sequence $q_k$ satisfies the recursion $q_k=a_k q_{k-1} + q_{k-2}$ with initial conditions $q_0=1$, $q_1=a_1$. Recall that for any $k \ge 1$ and any $0<n<q_k$, we have $\| n \alpha \| \ge \| q_{k-1} \alpha \|$. Further, if $k \ge 1$, or $k=0$ and $a_1>1$, then \begin{equation}\label{||qkalpha||} \frac{1}{q_{k+1}+q_k} < \| q_k \alpha \| < \frac{1}{q_{k+1}} . \end{equation} The main tool in the proof of the transfer principle is a bound on a cotangent sum proved by Lubinsky \cite[Theorem 4.1]{LU}, which states that for any $k \ge 1$ and any $0\le N<q_k$, \begin{equation}\label{cotangentsumirrational} \left| \sum_{n=1}^N \cot (\pi n \alpha ) \right| \le \left( 124+ 24 \log A_k \right) q_k . \end{equation} The same bound holds in the rational setting as well, i.e. \begin{equation}\label{cotangentsumrational} \left| \sum_{n=1}^N \cot (\pi n p_k/q_k ) \right| \le \left( 124+ 24 \log A_k \right) q_k . \end{equation} Indeed, we can apply \eqref{cotangentsumirrational} to a sequence of irrational $\alpha$'s converging to $p_k/q_k$, whose continued fraction expansions have initial segments identical to $p_k/q_k=[a_0;a_1, \dots, a_k]$. The same cotangent sum and various generalizations thereof in the rational setting have been studied recently in \cite{BD2}, and used in \cite{BD} to establish \eqref{BettinDrappeau}. 
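The recursion for $q_k$ and the standard inequality \eqref{||qkalpha||} are easy to sanity-check; the sketch below (our own illustration) uses $\alpha=\sqrt{2}=[1;2,2,2,\dots]$ as an example:

```python
import math

alpha = math.sqrt(2.0)             # continued fraction [1; 2, 2, 2, ...]
a = [1] + [2] * 10                 # partial quotients a_0, a_1, ..., a_10
p, q = [a[0], a[1] * a[0] + 1], [1, a[1]]
for k in range(2, 11):             # p_k = a_k p_{k-1} + p_{k-2}, same for q_k
    p.append(a[k] * p[-1] + p[-2])
    q.append(a[k] * q[-1] + q[-2])

def dist(x):                       # || x ||, distance to the nearest integer
    return abs(x - round(x))

for k in range(1, 10):             # 1/(q_{k+1}+q_k) < ||q_k alpha|| < 1/q_{k+1}
    d = dist(q[k] * alpha)
    assert 1.0 / (q[k + 1] + q[k]) < d < 1.0 / q[k + 1]
```

For $\sqrt{2}$ this reproduces the familiar convergents $1, 3/2, 7/5, 17/12, 41/29, \dots$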
Cotangent sums have a long history in analytic number theory; some of them are known to satisfy interesting reciprocity formulas, and they also appear in the Nyman--Beurling--B\'aez-Duarte approach to the Riemann hypothesis. See \cite{BC,BC2} for more details. \begin{proof}[Proof of Proposition \ref{transferprinciple}] We consider the cases $q_k<200$ and $q_k \ge 200$ separately, starting with the former; the value $200$ is of course basically accidental. First, assume that $q_k<200$. If $k=1$ and $a_1=1$, then $N=0$ and we are done; we may therefore assume that either $k \ge 2$, or $k=1$ and $a_1>1$. For any $0<n<q_k$ we thus have \[ 2 \ge |2 \sin (\pi n \alpha )| \ge 4 \| n \alpha \| \ge 4 \| q_{k-1} \alpha \| \ge \frac{4}{q_k + q_{k-1}} > \frac{1}{100}, \] and similarly \begin{equation} \label{sin_dist} 2 \ge |2 \sin (\pi n p_k/q_k)| \ge 4 \| n p_k/q_k \| \ge \frac{4}{q_k} > \frac{1}{50} . \end{equation} Consequently, $|\log P_N (\alpha)| \ll 1$ and $|\log P_N (p_k/q_k)| \ll 1$, and we are done provided $a_{k+1}$ is bounded. For large $a_{k+1}$ note that \[ |\sin (\pi n \alpha ) - \sin (\pi n p_k/q_k)| \le \pi n |\alpha -p_k/q_k| \le \frac{\pi}{a_{k+1}} . \] In particular, by \eqref{sin_dist} we have \[ \left| \frac{\sin (\pi n \alpha )}{\sin (\pi n p_k/q_k)} -1 \right| < \frac{50 \pi}{a_{k+1}} , \] and hence \[ \left( 1-\frac{50 \pi}{a_{k+1}} \right)^{200} \le \frac{P_N (\alpha )}{P_N (p_k/q_k)} \le \left( 1+\frac{50 \pi}{a_{k+1}} \right)^{200} . \] This finishes the proof in the case $q_k<200$. Next, assume that $q_k \ge 200$. Observe that $q_k \ge q_{\ell} \ge a_{\ell}$ for all $1 \le \ell \le k$; in particular, $q_k \ge A_k-1$. From the assumption $q_k \ge 200$ we thus deduce $q_k \ge \sqrt{200(A_k-1)} \ge 10 \sqrt{A_k}$. 
Using a trigonometric identity, we can write \begin{equation}\label{prod1+xn} \frac{P_N (\alpha )}{P_N (p_k/q_k )} = \left| \prod_{n=1}^N \frac{\sin (\pi n \alpha )}{\sin (\pi n p_k/q_k )} \right| = \left| \prod_{n=1}^N (1+x_n+y_n) \right|, \end{equation} where \[ \begin{split} x_n &= \cos (\pi n (\alpha -p_k/q_k )) -1, \\ y_n &= \sin (\pi n (\alpha -p_k/q_k)) \cot (n \pi p_k/q_k ) . \end{split} \] Here \[ \left| \alpha - \frac{p_k}{q_k} \right| < \frac{1}{q_k q_{k+1}} \le \frac{1}{q_k^2} \left( 1-\frac{1}{\frac{q_k}{q_{k-1}}+1} \right) \le \frac{1}{q_k^2} \left( 1-\frac{1}{A_k+1} \right) . \] From the Taylor expansions of sine and cosine, for all $0<n<q_k$, \[ |x_n| \le \frac{\pi^2 n^2}{2} \left| \alpha - \frac{p_k}{q_k} \right|^2 \le \frac{\pi^2}{2 q_k^2} \le \frac{1}{20 A_k} , \] as well as \[ \begin{split} \left| y_n \right| &\le \frac{|\sin (\pi n (\alpha -p_k/q_k ))|}{\sin (\pi /q_k )} \le \frac{\pi n |\alpha -p_k/q_k|}{\pi /q_k - \pi^3 /(6q_k^3)} \le \frac{1}{1-\pi^2 /(6q_k^2)} \left( 1-\frac{1}{A_k+1} \right) \\ &\le \frac{1}{1-\pi^2 /(600 A_k)} \left( 1-\frac{1}{A_k+1} \right) \le 1-\frac{3}{10 A_k} . \end{split} \] The previous two estimates give $|x_n+y_n| \le 1-1/(4 A_k)$; the point is that $x_n+y_n$ is bounded away from $-1$. Observe that for any $|x| \le 1-1/(4A_k)$, \[ e^{x-2x^2 \log (4A_k)} \le 1+x \le e^x . \] Indeed, one readily verifies that the function $e^{-x+2x^2 \log (4A_k)} (1+x)$ attains its minimum on the interval $[-1+1/(4A_k), 1-1/(4A_k)]$ at $x=0$. Applying this estimate with $x=x_n+y_n$ in each factor of \eqref{prod1+xn}, we obtain \begin{equation}\label{pnalpha/pnpkqk} \begin{split} \frac{P_N (\alpha )}{P_N (p_k/q_k )} &= \exp \left( O \left( \left| \sum_{n=1}^N (x_n+y_n) \right| + \sum_{n=1}^N (x_n^2+y_n^2) \log (4A_k) \right) \right) . \end{split} \end{equation} Note that the right-hand side of \eqref{pnalpha/pnpkqk} provides both an upper and a lower bound for the quotient on the left-hand side. 
Since \[ |x_n| \le \frac{\pi^2 n^2}{2} \left| \alpha - \frac{p_k}{q_k} \right|^2 \le \frac{\pi^2}{2 a_{k+1}^2 q_k^2}, \] the contribution of $x_n$ and $x_n^2$ in \eqref{pnalpha/pnpkqk} is negligible: \[ \sum_{n=1}^N |x_n| + \sum_{n=1}^N x_n^2 \log (4A_k) \ll \frac{1}{a_{k+1}^2 q_k} + \frac{\log A_k}{a_{k+1}^4 q_k^3}. \] From Lubinsky's bound on cotangent sums \eqref{cotangentsumrational}, summation by parts and \[ \left| \sin (\pi (n+1) (\alpha -p_k/q_k)) - \sin (\pi n (\alpha -p_k/q_k)) \right| \le \pi |\alpha -p_k/q_k| \ll \frac{1}{a_{k+1}q_k^2}, \] we obtain \[ \left| \sum_{n=1}^N y_n \right| = \left| \sum_{n=1}^N \sin (\pi n (\alpha -p_k/q_k)) \cot (\pi n p_k/q_k) \right| \ll \frac{\log A_k}{a_{k+1}} . \] Finally, \[ \sum_{n=1}^N y_n^2 \log (4A_k) \ll \sum_{n=1}^N \frac{\log A_k}{a_{k+1}^2 q_k^2 \| np_k/q_k \|^2} \le \sum_{j=1}^{q_k-1} \frac{\log A_k}{a_{k+1}^2 q_k^2 \| j/q_k \|^2} \ll \frac{\log A_k}{a_{k+1}^2}, \] since the integers $np_k$, $1 \le n \le N$ attain each nonzero residue class modulo $q_k$ at most once. Hence \eqref{pnalpha/pnpkqk} simplifies as \[ \frac{P_N (\alpha )}{P_N (p_k/q_k )} = \exp \left( O \left( \frac{\log A_k}{a_{k+1}} \right) \right), \] which proves the proposition. \end{proof} \section{Quadratic irrationals}\label{quadraticsection} Fix a quadratic irrational $\alpha$. Throughout this section, constants and implied constants depend only on $\alpha$. The continued fraction expansion is of the form $\alpha =[a_0;a_1,\dots, a_s, \overline{a_{s+1}, \dots, a_{s+p}}]$, where the overline means period. As before, $p_k/q_k=[a_0;a_1,\dots, a_k]$ denotes the $k$-th convergent to $\alpha$; further, let $\delta_k=(-1)^k (q_k \alpha -p_k)$. The sequences $q_k$ and $p_k$ satisfy the same recursion; consequently, $\delta_k=-a_k \delta_{k-1}+\delta_{k-2}$ for all $k \ge 2$. If $k \ge 1$, or $k=0$ and $a_1>1$, then $\delta_k = \| q_k \alpha \|$. 
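For a concrete quadratic irrational, the recursion for $\delta_k$ and the identity $\delta_k=\|q_k\alpha\|$ can be confirmed numerically. The sketch below is our illustration for $\alpha=\sqrt{2}$ (so $a_k=2$ for all $k\ge1$); it is not part of the argument.

```python
from math import sqrt

alpha = sqrt(2)                    # illustrative choice: [1; 2, 2, 2, ...]
a = [1] + [2] * 12
p, q = [1, a[0]], [0, 1]           # (p_{-1}, p_0) and (q_{-1}, q_0)
for ak in a[1:]:
    p.append(ak * p[-1] + p[-2])
    q.append(ak * q[-1] + q[-2])
p, q = p[1:], q[1:]                # p[k], q[k] for k = 0, 1, 2, ...

# delta_k = (-1)^k (q_k alpha - p_k)
delta = [(-1) ** k * (q[k] * alpha - p[k]) for k in range(len(q))]

# recursion delta_k = -a_k delta_{k-1} + delta_{k-2} for k >= 2
recursion_ok = all(
    abs(delta[k] + a[k] * delta[k - 1] - delta[k - 2]) < 1e-9
    for k in range(2, len(delta))
)

# delta_k = || q_k alpha || for k >= 1
dist_ok = True
for k in range(1, len(q)):
    frac = (q[k] * alpha) % 1.0
    if abs(delta[k] - min(frac, 1.0 - frac)) > 1e-9:
        dist_ok = False
```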
For any $1 \le r \le p$, let \[ T_r = \left( \begin{array}{cc} 0 & 1 \\ 1 & a_{s+r+p} \end{array} \right) \cdots \left( \begin{array}{cc} 0 & 1 \\ 1 & a_{s+r+1} \end{array} \right) . \] The recursion $q_k = a_k q_{k-1} +q_{k-2}$ can be written in the form \[ T_r^m \left( \begin{array}{c} q_{s+r-1} \\ q_{s+r} \end{array} \right) = \left( \begin{array}{c} q_{s+mp+r-1} \\ q_{s+mp+r} \end{array} \right), \qquad m=1,2,\dots \] Observe that $\det T_r =(-1)^p$, and that $\mathrm{tr}\, T_r$ does not depend on $r$. Therefore the eigenvalues $\eta$ and $\mu$ of $T_r$ are the same for all $1 \le r \le p$. Since $q_k \to \infty$ exponentially fast, we have, say, $\eta>1$ and $\mu=(-1)^p/\eta$. Consequently, the recursions for $q_k$ and $\delta_k$ have solutions \begin{equation}\label{recursionsolutions} \begin{split} q_{s+mp+r} &= C_r \eta^m + D_r (-1)^{mp} \eta^{-m}, \\ \delta_{s+mp+r} &= E_r \eta^{-m} \end{split} \end{equation} with some constants $C_r, E_r >0$ and $D_r$, $1 \le r \le p$. In particular, $\log q_k = \lambda (\alpha)k +O(1)$ with $\lambda (\alpha) = \log \eta^{1/p}$. \begin{lem}\label{deltaklemma} For any $k \ge 0$, we have $\kappa \delta_k \le \delta_{k+1} \le (1-\kappa) \delta_k$ with some constant $\kappa >0$. \end{lem} \begin{proof} We claim that $\delta_k \le (a_{k+2}+2) \delta_{k+1}$ for all $k \ge 0$. Indeed, if $k=0$, $a_1=1$ this can be verified ``by hand''; else, from \eqref{||qkalpha||} we obtain \[ \delta_k < \frac{1}{q_{k+1}} \le \frac{a_{k+2}+2}{q_{k+2}+q_{k+1}} < (a_{k+2}+2) \delta_{k+1} . \] On the other hand, \[ \delta_{k+1} \le a_{k+2} \delta_{k+1} = \delta_k - \delta_{k+2} \le \delta_k - \frac{1}{a_{k+3}+2} \delta_{k+1}, \] and hence $\delta_{k+1} \le \delta_k (a_{k+3}+2)/(a_{k+3}+3)$. The claim thus follows with $\kappa=1/(\max_{k \ge 1} a_k +3)$. 
\end{proof} \subsection{Perturbed Sudler products}\label{sec_pert} The fundamental objects in the proof of Theorem \ref{quadraticasymptotics} are ``perturbed'' versions of the Sudler product defined as \[ P_{q_k} (\alpha, x) := \prod_{n=1}^{q_k} |2 \sin (\pi (n\alpha +(-1)^k x/q_k))|, \qquad x \in \mathbb{R}. \] Perturbed Sudler products were first introduced by Grepstad, Kaltenb\"ock and Neum\"uller \cite{GKN}, and have since been used in \cite{ATZ} and \cite{GNZ}. The relevance of these functions comes from the Ostrowski expansion of integers, which we now recall. Any integer $N \ge 0$ can be written in a unique way in the form $N=\sum_{k=0}^{\infty} b_k q_k$, where $0 \le b_0 \le a_1-1$ and $0 \le b_k \le a_{k+1}$, $k \ge 1$ are integers satisfying the extra rule that $b_{k-1}=0$ whenever $b_k=a_{k+1}$. Of course, the series only has finitely many nonzero terms; more precisely, if $0 \le N <q_{k+1}$, then $b_{k+1}=b_{k+2}=\cdots =0$. Given an integer $N \ge 0$ with Ostrowski expansion $N=\sum_{k=0}^{\infty} b_k q_k$, let us introduce the notation \[ \varepsilon_k (N) := q_k \sum_{\ell =k+1}^{\infty} (-1)^{k+\ell} b_{\ell} \delta_{\ell}, \] where $\delta_\ell=(-1)^\ell (q_\ell \alpha -p_\ell)$ was already defined at the beginning of this section. Further, we shall write $f(x)=|2 \sin (\pi x)|$. \begin{lem}\label{ostrowskilemma} For any integer $N \ge 0$ with Ostrowski expansion $N=\sum_{k=0}^{\infty} b_k q_k$, \[ P_N (\alpha ) = \prod_{k=0}^{\infty} \prod_{b=0}^{b_k-1} P_{q_k} (\alpha, bq_k \delta_k + \varepsilon_k (N)) . \] \end{lem} \begin{proof} Note that only finitely many factors are different from $1$. Let $N_k=\sum_{\ell=k}^{\infty} b_{\ell} q_{\ell}$. Then $N=N_0 \ge N_1 \ge N_2 \ge \cdots$, and $N_k=0$ for all large enough $k$. 
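Before completing the proof, we note that the claimed factorization can be tested numerically. The sketch below is our illustration: it extracts the Ostrowski digits greedily (by uniqueness, this yields the expansion described above) and compares both sides of the identity in Lemma \ref{ostrowskilemma}; the choices $\alpha=\sqrt{2}$ and $N=1000$ are arbitrary.

```python
from math import sqrt, sin, pi

alpha = sqrt(2)                          # arbitrary sample: [1; 2, 2, 2, ...]
a = [1] + [2] * 12
p, q = [1, a[0]], [0, 1]                 # (p_{-1}, p_0) and (q_{-1}, q_0)
for ak in a[1:]:
    p.append(ak * p[-1] + p[-2])
    q.append(ak * q[-1] + q[-2])
p, q = p[1:], q[1:]                      # p[k], q[k] for k = 0, 1, 2, ...
delta = [(-1) ** k * (q[k] * alpha - p[k]) for k in range(len(q))]

def ostrowski_digits(N):
    """Greedy digits b_k with N = sum_k b_k q_k; by uniqueness this is
    the Ostrowski expansion."""
    b = [0] * len(q)
    for k in range(len(q) - 1, -1, -1):
        b[k], N = divmod(N, q[k])
    return b

def sudler(M, shift=0.0):
    """prod_{n=1}^{M} |2 sin(pi (n alpha + shift))|."""
    out = 1.0
    for n in range(1, M + 1):
        out *= abs(2.0 * sin(pi * (n * alpha + shift)))
    return out

N = 1000
b = ostrowski_digits(N)
lhs = sudler(N)                          # P_N(alpha)
rhs = 1.0
for k in range(len(q)):
    eps = q[k] * sum((-1) ** (k + l) * b[l] * delta[l]
                     for l in range(k + 1, len(q)))
    for j in range(b[k]):                # j plays the role of b in the lemma
        x = j * q[k] * delta[k] + eps
        rhs *= sudler(q[k], (-1) ** k * x / q[k])   # P_{q_k}(alpha, x)
```

Since the factorization is an exact identity, the two sides agree up to floating-point error.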
By the definition of Sudler products, \[ \begin{split} P_N (\alpha ) &= \prod_{k=0}^{\infty} \prod_{n=N_{k+1}+1}^{N_k} f(n \alpha ) \\ &=\prod_{k=0}^{\infty} \prod_{n=1}^{b_k q_k} f(n\alpha +N_{k+1}\alpha ) \\ &= \prod_{k=0}^{\infty} \prod_{b=0}^{b_k-1} \prod_{n=1}^{q_k} f \left( n\alpha +b q_k \alpha + \sum_{\ell =k+1}^{\infty} b_{\ell} q_{\ell} \alpha \right) \\ &= \prod_{k=0}^{\infty} \prod_{b=0}^{b_k-1} \prod_{n=1}^{q_k} f\left( n \alpha + (-1)^k b \delta_k + \sum_{\ell =k+1}^{\infty} (-1)^{\ell} b_{\ell} \delta_{\ell} \right) \\ &= \prod_{k=0}^{\infty} \prod_{b=0}^{b_k-1} P_{q_k} (\alpha, b q_k \delta_k + \varepsilon_k (N)) . \end{split} \] \end{proof} The main message of the next lemma is that for quadratic irrational $\alpha$, the function $P_{q_k}(\alpha, x)$ has a positive lower bound at all points which appear in the claim of Lemma \ref{ostrowskilemma}. From now on let $1 \le [k] \le p$ denote the remainder of $k-s$ modulo $p$, where $s$ is the length of the pre-period in the continued fraction for $\alpha$; that is, if $k=s+mp+r$, then $[k]=r$. \begin{lem}\label{intervallemma} \hspace{1mm} \begin{enumerate} \item[(i)] For any integer $N \ge 0$ with Ostrowski expansion $N=\sum_{k=0}^{\infty} b_k q_k$, any $k \ge 0$ and any $0 \le b<b_k$ we have $P_{q_k}(\alpha , bq_k \delta_k + \varepsilon_k (N)) \gg 1$ uniformly in $N$, $k$ and $b$. \item[(ii)] There exist compact intervals $I_r$, $1 \le r \le p$, and a constant $k_0>s$ with the following properties. First, for any integer $N \ge 0$ with Ostrowski expansion $N=\sum_{k=0}^{\infty} b_k q_k$, any $k \ge k_0$ and any $0 \le b<b_k$ we have $bq_k \delta_k + \varepsilon_k (N) \in I_{[k]}$. Second, $P_{q_k}(\alpha, x) \gg 1$ on $I_{[k]}$ uniformly in $k \ge k_0$. \end{enumerate} \end{lem} \begin{proof} Fix an integer $N \ge 0$ with Ostrowski expansion $\sum_{k=0}^{\infty} b_k q_k$, and integers $k \ge 0$ and $0 \le b <b_k$. We necessarily have $b_k>0$; in particular, $k \ge 1$, or $k=0$ and $a_1>1$. 
Observe that \[ \begin{split} \varepsilon_k (N) = q_k \sum_{\ell =k+1}^{\infty} (-1)^{k+\ell} b_{\ell} \delta_{\ell} &\le q_k \left( a_{k+3} \delta_{k+2}+a_{k+5} \delta_{k+4} +\cdots \right) \\&= q_k \left( (\delta_{k+1}-\delta_{k+3} ) + (\delta_{k+3}-\delta_{k+5})+\cdots \right) \\ &= q_k \delta_{k+1} . \end{split} \] Since $b_k>0$, by the extra rule of the Ostrowski expansions we have $b_{k+1}<a_{k+2}$. Therefore \[ \begin{split} \varepsilon_k (N) = q_k \sum_{\ell =k+1}^{\infty} (-1)^{k+\ell} b_{\ell} \delta_{\ell} &\ge -q_k \left( (a_{k+2}-1) \delta_{k+1} + a_{k+4} \delta_{k+3} + \cdots \right) \\ &= -q_k \left( -\delta_{k+1} + (\delta_k - \delta_{k+2}) + (\delta_{k+2}-\delta_{k+4}) + \cdots \right) \\ &=-q_k (\delta_k - \delta_{k+1}) . \end{split} \] Letting $\kappa >0$ be as in Lemma \ref{deltaklemma}, we thus have $|\varepsilon_k (N)| \le (1-\kappa) q_k \delta_k$. Consider now \[ P_{q_k} (\alpha, bq_k \delta_k +\varepsilon_k (N)) = \prod_{n=1}^{q_k} f((n+bq_k)\alpha + (-1)^k \varepsilon_k (N)/q_k) . \] For each $1 \le n \le q_k$ we have $n+bq_k \le a_{k+1}q_k<q_{k+1}$, and hence by the best approximation property of continued fractions, \[ \left\| (n+bq_k)\alpha +(-1)^k \varepsilon_k (N) /q_k \right\| \ge \| q_k \alpha \| - |\varepsilon_k (N)|/q_k \ge \kappa \delta_k . \] Consequently, $P_{q_k}(\alpha, bq_k \delta_k+\varepsilon_k (N)) \gg 1$ for any \textit{fixed} $k \ge 0$. It will thus be enough to prove Lemma \ref{intervallemma} (ii), and Lemma \ref{intervallemma} (i) will follow. We now prove Lemma \ref{intervallemma} (ii). Observe that \eqref{recursionsolutions} implies $q_{s+mp+r} \delta_{s+mp+r} \to B_r$ as $m \to \infty$, where $B_r=C_r E_r>0$, $1 \le r \le p$, are constants. Define \[ I_r = \left[ -(1-\kappa /2) B_r, (a_{s+r+1}-\kappa /2) B_r \right] \qquad (1 \le r \le p) . \] These intervals, together with some constant $k_0$, to be chosen, satisfy the claim. 
Choosing $k_0$ large enough, for all $k \ge k_0$ and all $0 \le b<a_{k+1}$ we have $bq_k \delta_k +[-(1-\kappa )q_k \delta_k, (1-\kappa )q_k \delta_k] \subseteq I_{[k]}$. In particular, $bq_k \delta_k +\varepsilon_k (N) \in I_{[k]}$ for all $N \ge 0$, all $k \ge k_0$ and all $0 \le b<b_k$. Now let $k \ge k_0$ and $x \in I_{[k]}$ be arbitrary, and let us prove a lower bound for $P_{q_k}(\alpha, x)$. Then $x=b B_{[k]}+y$ for some appropriate integer $0 \le b<a_{k+1}$, and some $|y| \le (1-\kappa /2)B_{[k]}$. Let \[ z= \frac{(-1)^k (y+b(B_{[k]} - q_k \delta_k))}{q_k} , \] and note that \begin{equation}\label{zestimate} |z| \le \frac{(1-\kappa /2)B_{[k]} + (a_{k+1}-1)|q_k \delta_k -B_{[k]}|}{q_k} \le (1-\kappa /4) \delta_k \end{equation} provided $k_0$ was chosen large enough. With this choice of $z$ we have $$ f (n \alpha + (-1)^k x/q_k) = f((n+bq_k)\alpha +z), $$ and so \begin{equation}\label{pqkfactorization} \begin{split} \frac{P_{q_k}(\alpha, x)}{\prod_{n=bq_k+1}^{(b+1)q_k} f(n \alpha )} &= \frac{\prod_{n=1}^{q_k} f((n+bq_k)\alpha +z)}{\prod_{n=1}^{q_k} f((n+bq_k)\alpha )} \\ &= \left| \prod_{n=1}^{q_k} \left( \cos (\pi z) + \sin (\pi z) \cot (\pi (n+bq_k)\alpha ) \right) \right|, \end{split} \end{equation} where the last equation follows from standard trigonometric identities. Using $\| (n+bq_k) \alpha \| \ge \| q_k \alpha \|=\delta_k$ and \eqref{zestimate}, we obtain \[ \begin{split} \left| \cos (\pi z)-1 + \sin (\pi z) \cot (\pi (n+bq_k)\alpha ) \right| &\le |\cos (\pi z)-1| + \frac{|\sin (\pi z)|}{\sin (\pi \delta_k )} \\ &\le \frac{\pi^2}{2} (1-\kappa /4)^2 \delta_k^2 + \frac{\pi (1-\kappa /4)\delta_k}{\pi \delta_k - \pi^3 \delta_k^3/6} \\ &\le 1-\kappa /8 \end{split} \] provided $k_0$ was chosen large enough; the point is that each factor in \eqref{pqkfactorization} is bounded away from $0$. 
Following the steps in the proof of Proposition \ref{transferprinciple} (in particular, recalling the cotangent sum estimate \eqref{cotangentsumirrational}), we thus deduce that \[ \begin{split} &\frac{P_{q_k}(\alpha, x)}{\prod_{n=bq_k+1}^{(b+1)q_k} f(n \alpha )} \\ &= \exp \left( O \left( 1+ \delta_k \left| \sum_{n=1}^{q_k} \cot (\pi (n+bq_k)\alpha ) \right| + \delta_k^2 \sum_{n=1}^{q_k} \cot^2 (\pi (n+bq_k)\alpha ) \right) \right) \\ &= \exp \left( O(1) \right) . \end{split} \] On the other hand, a general result of Lubinsky \cite[Proposition 5.1]{LU} implies that $1 \ll P_N (\alpha) \ll 1$ whenever the Ostrowski expansion of $N$ contains $\ll 1$ nonzero terms. In particular, \[ \prod_{n=bq_k+1}^{(b+1)q_k} f(n \alpha ) = \frac{P_{(b+1)q_k}(\alpha )}{P_{bq_k}(\alpha)} \gg 1, \] and hence $P_{q_k} (\alpha, x) \gg 1$, as claimed. \end{proof} \subsection{The limit functions} The perturbed Sudler products $P_{q_k}(\alpha, x)$ were shown to converge to an explicitly given limit function for the special irrationals $\alpha=[0;a,a,a,\dots]$ in \cite{ATZ}. A generalization to all quadratic irrationals has recently been announced by Grepstad, Technau and Zafeiropoulos \cite{GNZ}; see also \cite{GN} for a version without the perturbation variable $x$. In this paper we prove the locally uniform convergence of $P_{q_k}(\alpha, x)$ with an explicit rate. This explicit (in fact, exponential) rate is needed to derive the $O(\max \{ 1,1/c\})$ resp.\ $O(1)$ error terms in Theorem \ref{quadraticasymptotics}. Let $\alpha=[a_0;a_1,\ldots, a_s, \overline{a_{s+1},\ldots, a_{s+p}}]$ be a quadratic irrational, and let $C_r, E_r$ be as in \eqref{recursionsolutions}. 
For any $1 \le r \le p$ let \begin{equation}\label{Grdef} G_r(\alpha, x)= 2 \pi |x+C_r E_r| \prod_{n=1}^{\infty} \left| \left( 1 - C_r E_r \frac{\{ n \alpha_r \} -\frac{1}{2}}{n} \right)^2 - \frac{\left( x+\frac{C_r E_r}{2} \right)^2}{n^2} \right| , \end{equation} where \[ \alpha_r = [0;\overline{a_{s+r+p}, \ldots, a_{s+r+2},a_{s+r+1}}] . \] \begin{thm}\label{limittheorem} The infinite product in \eqref{Grdef} is locally uniformly convergent on $\mathbb{R}$. The function $G_r(\alpha , \cdot )$ is continuous on $\mathbb{R}$, and continuously differentiable on the open set $\{ x \in \mathbb{R} \, : \, G_r (\alpha, x) \neq 0 \}$. For any compact interval $I \subset \mathbb{R}$ and any integer $k \ge 1$, \[ P_{q_k} (\alpha, x) = \left( 1 + O \left( q_k^{-1/2} \log^{3/4} q_k \right) \right) G_{[k]} (\alpha, x) + O \left( q_k^{-2} \right) , \qquad x \in I \] with implied constants depending only on $I$ and $\alpha$. In particular, $P_{q_{s+mp+r}} (\alpha, \cdot ) \to G_r (\alpha, \cdot)$ locally uniformly on $\mathbb{R}$, as $m \to \infty$. \end{thm} \noindent We postpone the proof to Section \ref{limitfunctionsection}. Note that the periodicity of the continued fraction expansion is crucial for such a limit relation; in particular, Theorem \ref{limittheorem} does not hold for all badly approximable $\alpha$. Lemma \ref{intervallemma} implies that $G_r (\alpha, \cdot )>0$, and consequently that $\log G_r (\alpha, \cdot )$ is Lipschitz on the compact interval $I_r$. The main idea of the proof of Theorem \ref{quadraticasymptotics} is to replace the perturbed Sudler product $P_{q_k} (\alpha, bq_k \delta_k + \varepsilon_k (N))$ by its limit $G_{[k]}(\alpha, bq_k \delta_k + \varepsilon_k (N))$ in the claim of Lemma \ref{ostrowskilemma}. 
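For $\alpha=\sqrt{2}$ one has $s=0$, $p=1$, $\alpha_1=\sqrt{2}-1$ and, by the Pell equation $p_k^2-2q_k^2=\pm 1$, $C_1 E_1=\lim_k q_k\delta_k = 1/(2\sqrt{2})$. With these values the convergence of the infinite product in \eqref{Grdef} can be observed numerically; the sketch below is our illustration (the sample point $x$ is chosen arbitrarily) and checks the Cauchy property of the partial products.

```python
from math import sqrt

# Values specific to alpha = sqrt(2): alpha_1 = sqrt(2) - 1 and
# C_1 E_1 = lim q_k delta_k = 1/(2 sqrt(2)) (from p_k^2 - 2 q_k^2 = +-1).
c = 1.0 / (2.0 * sqrt(2.0))
alpha1 = sqrt(2.0) - 1.0
x = 0.3                                  # arbitrary sample point

def partial_product(N):
    """Product over 1 <= n <= N of the factors in the definition of
    G_1(alpha, x), without the prefactor 2 pi |x + C_1 E_1|."""
    out = 1.0
    for n in range(1, N + 1):
        u = (n * alpha1) % 1.0 - 0.5     # {n alpha_1} - 1/2
        out *= abs((1.0 - c * u / n) ** 2 - (x + c / 2.0) ** 2 / n ** 2)
    return out

p1, p2, p3 = partial_product(1000), partial_product(10000), partial_product(100000)
```

The successive partial products stabilize, in line with the $O(\log N_1/N_1)$ Cauchy estimate proved in Lemma \ref{cauchylemma} below.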
To this end, for any integer $N \ge 0$ with Ostrowski expansion $N=\sum_{k=0}^{\infty} b_k q_k$ let \[ G_N (\alpha ) = \prod_{k=k_0}^{\infty} \prod_{b=0}^{b_k-1} G_{[k]} (\alpha, bq_k \delta_k + \varepsilon_k (N)), \] where $k_0$ is the constant from the conclusion of Lemma \ref{intervallemma} (ii). \begin{lem}\label{pngnlemma} For any real $c>0$ and any integer $\ell \ge 1$, we have \[ \log \left( \sum_{0 \le N < q_{\ell}} P_N(\alpha )^c \right)^{1/c} = \log \left( \sum_{0 \le N<q_{\ell}} G_N (\alpha )^c \right)^{1/c} + O(1), \] as well as \[ \log \max_{0 \le N <q_{\ell}} P_N (\alpha ) = \log \max_{0 \le N <q_{\ell}} G_N (\alpha ) + O(1). \] \end{lem} \begin{proof} By Lemma \ref{ostrowskilemma}, for all $0 \le N <q_{\ell}$ with Ostrowski expansion $N=\sum_{k=0}^{\ell-1} b_k q_k$, \[ \frac{P_N (\alpha )}{G_N (\alpha )} = \prod_{k=0}^{k_0-1} \prod_{b=0}^{b_k-1} P_{q_k} (\alpha, bq_k \delta_k + \varepsilon_k (N)) \cdot \prod_{k=k_0}^{\ell-1} \prod_{b=0}^{b_k-1} \frac{P_{q_k}(\alpha, bq_k \delta_k + \varepsilon_k (N))}{G_{[k]}(\alpha, bq_k \delta_k + \varepsilon_k (N))} . \] Lemma \ref{intervallemma} (i) shows that here $1 \ll \prod_{k=0}^{k_0-1} \prod_{b=0}^{b_k-1} P_{q_k} (\alpha, bq_k \delta_k + \varepsilon_k (N)) \ll 1$. Since by Lemma \ref{intervallemma} (ii) we have $G_{[k]} (\alpha, b q_k \delta_k + \varepsilon_k (N)) \gg 1$, each factor $k_0 \le k \le \ell -1$ also stays between two positive constants. Finally, for a large enough constant $k_1>k_0$ Theorem \ref{limittheorem} gives \[ \begin{split} \prod_{k=k_1}^{\ell-1} \prod_{b=0}^{b_k-1} \frac{P_{q_k}(\alpha, bq_k \delta_k + \varepsilon_k (N))}{G_{[k]}(\alpha, bq_k \delta_k + \varepsilon_k (N))} &= \prod_{k=k_1}^{\ell-1} \prod_{b=0}^{b_k-1} \left( 1 + O \left( q_k^{-1/2} \log^{3/4} q_k \right) \right) \\ &= \prod_{k=k_1}^{\infty} \left( 1 + O \left( q_k^{-1/2} \log^{3/4} q_k \right) \right) \in [1/2,2] . \end{split} \] Hence $1 \ll P_N (\alpha ) / G_N (\alpha ) \ll 1$, and the claims follow. 
\end{proof} \subsection{Approximate additivity} The final step is to prove that our sequences with $P_N(\alpha)$ replaced by $G_N(\alpha)$ are additive up to a small error; the proof of Theorem \ref{quadraticasymptotics} will then be immediate. \begin{lem}\label{additivelemma} For any real $c>0$, the sequences \[ c_m := \log \left( \sum_{0 \le N<q_{k_0+mp}} G_N (\alpha )^c \right)^{1/c} \qquad \textrm{and} \qquad c_m^* := \log \max_{0 \le N<q_{k_0+mp}} G_N (\alpha ) \] satisfy $c_{m+n} = c_m+c_n+O(\max \{ 1,1/c \})$ and $c_{m+n}^* = c_m^*+c_n^*+O(1)$ for all $m,n \ge 1$. \end{lem} \begin{proof} First, we prove that $c_m$ and $c_m^*$ are approximately subadditive. Let $0 \le N < q_{k_0+(m+n)p}$ be an integer with Ostrowski expansion $N=\sum_{k=0}^{k_0+(m+n)p-1} b_k q_k$. Consider the natural factorization \begin{equation}\label{gnfactorization} G_N (\alpha ) = \prod_{k=k_0}^{k_0+mp-1} \prod_{b=0}^{b_k-1} G_{[k]}(\alpha, b q_k \delta_k + \varepsilon_k (N)) \prod_{k=k_0+mp}^{k_0+(m+n)p-1} \prod_{b=0}^{b_k-1} G_{[k]}(\alpha, b q_k \delta_k + \varepsilon_k (N)) . \end{equation} Let us write $N=N_1+N_2$ with $N_1=\sum_{k=0}^{k_0+mp-1} b_k q_k$ and $N_2=\sum_{k=k_0+mp}^{k_0+(m+n)p-1} b_k q_k$. The plan of the proof is to show that, making a small error, we can replace $\varepsilon_k (N)$ in \eqref{gnfactorization} by $\varepsilon_k (N_1)$ and $\varepsilon_k (N_2)$, respectively, so that $G_N \approx G_{N_1}G_{N_2}$. Then we will show that we can replace $N_2$ by a number $N_2^*$ having the ``shifted'' Ostrowski representation $N_2^*=\sum_{k=k_0}^{k_0+np-1} b_{k+mp} q_k$, and obtain $G_N \approx G_{N_1} G_{N_2^*}$. This approximate shift-invariance is crucial for the argument, and comes from the periodicity of the continued fraction expansion of $\alpha$. From $G_N \approx G_{N_1} G_{N_2^*}$ we can deduce that $c_{m+n} \approx c_m + c_n$ and $c_{m+n}^* \approx c_m^* + c_n^*$, which is what we want to prove. Now we make this precise. 
Note that for any $k_0 \le k \le k_0+mp-1$, \[ \begin{split} \left| \varepsilon_k (N) - \varepsilon_k (N_1) \right| &= \left| q_k \sum_{\ell =k+1}^{k_0+(m+n)p-1} (-1)^{k+\ell} b_{\ell} \delta_{\ell} - q_k \sum_{\ell =k+1}^{k_0+mp-1} (-1)^{k+\ell} b_{\ell} \delta_{\ell} \right| \\ &\ll q_k \sum_{\ell =k_0+mp}^{k_0+(m+n)p-1} \delta_{\ell} \\ &\ll q_k \delta_{k_0+mp} . \end{split} \] Since $\log G_r$ is Lipschitz on $I_r$, the previous estimate implies that the first factor in \eqref{gnfactorization} is \begin{equation}\label{gn1} \begin{split} \prod_{k=k_0}^{k_0+mp-1} \prod_{b=0}^{b_k-1} &G_{[k]}(\alpha, b q_k \delta_k + \varepsilon_k (N)) \\ &= \prod_{k=k_0}^{k_0+mp-1} \prod_{b=0}^{b_k-1} G_{[k]}(\alpha, b q_k \delta_k + \varepsilon_k (N_1)) e^{O(q_k \delta_{k_0+mp})} \\ &= e^{O(q_{k_0+mp} \delta_{k_0+mp})} \prod_{k=k_0}^{k_0+mp-1} \prod_{b=0}^{b_k-1} G_{[k]}(\alpha, b q_k \delta_k + \varepsilon_k (N_1)) \\ &= e^{O(1)} G_{N_1}(\alpha ) . \end{split} \end{equation} Now let $N_2^*=\sum_{k=k_0}^{k_0+np-1} b_{k+mp}q_k$; observe that this is a valid Ostrowski expansion of $0 \le N_2^* < q_{k_0+np}$. Using \eqref{recursionsolutions}, for any $k_0+mp \le k \le k_0+(m+n)p-1$ and any $0 \le b < b_k$, \[ \left| b q_k \delta_k - b q_{k-mp} \delta_{k-mp} \right| \ll \eta^{2m-2k/p} . \] Similarly, \[ \begin{split} |\varepsilon_k (N) &- \varepsilon_{k-mp} (N_2^*)| \\ &= \left| q_k \sum_{\ell =k+1}^{k_0+(m+n)p-1} (-1)^{k+\ell} b_{\ell} \delta_{\ell} - q_{k-mp} \sum_{\ell =k-mp+1}^{k_0+np-1} (-1)^{k-mp+\ell} b_{\ell +mp} \delta_{\ell} \right| \\ &=\left| \sum_{\ell =k+1}^{k_0+(m+n)p-1} (-1)^{k+\ell} b_{\ell} \left( q_k \delta_{\ell} - q_{k-mp} \delta_{\ell -mp} \right) \right| \\ &\ll \sum_{\ell =k+1}^{k_0+(m+n)p-1} \eta^{2m-k/p-\ell /p} \\ &\ll \eta^{2m-2k/p} . 
\end{split} \] Therefore the second factor in \eqref{gnfactorization} is \begin{equation}\label{gn2} \begin{split} \prod_{k=k_0+mp}^{k_0+(m+n)p-1} \prod_{b=0}^{b_k-1} &G_{[k]}(\alpha, b q_k \delta_k + \varepsilon_k (N)) \\ &= \prod_{k=k_0+mp}^{k_0+(m+n)p-1} \prod_{b=0}^{b_k-1} G_{[k]}(\alpha, b q_{k-mp} \delta_{k-mp} + \varepsilon_{k-mp} (N_2^*)) e^{O(\eta^{2m-2k/p})} \\ &=e^{O(1)} \prod_{k=k_0}^{k_0+np-1} \prod_{b=0}^{b_{k+mp}-1} G_{[k]} (\alpha, b q_k \delta_k +\varepsilon_k (N_2^*) ) \\ &= e^{O(1)} G_{N_2^*} (\alpha ) . \end{split} \end{equation} From \eqref{gnfactorization}, \eqref{gn1} and \eqref{gn2} we finally obtain the approximate factorization $G_N(\alpha) = G_{N_1}(\alpha) G_{N_2^*} (\alpha) e^{O(1)}$. As $N$ runs in the interval $0 \le N < q_{k_0+(m+n)p}$, we obtain each pair $(N_1, N_2^*) \in [0,q_{k_0+mp}) \times [0,q_{k_0+np})$ at most once by the uniqueness of Ostrowski expansions. Therefore \[ \begin{split} \sum_{0 \le N <q_{k_0+(m+n)p}} G_N (\alpha )^c &\le \left( \sum_{0 \le N<q_{k_0+mp}} G_N (\alpha )^c \right) \left( \sum_{0 \le N<q_{k_0+np}} G_N (\alpha )^c \right) e^{O(c)}, \\ \max_{0 \le N <q_{k_0+(m+n)p}} G_N (\alpha ) &\le \left( \max_{0 \le N<q_{k_0+mp}} G_N (\alpha ) \right) \left( \max_{0 \le N<q_{k_0+np}} G_N (\alpha ) \right) e^{O(1)}, \end{split} \] and the approximate subadditivity $c_{m+n} \le c_m+c_n +O(1)$ and $c_{m+n}^* \le c_m^*+c_n^*+O(1)$ follows. The proof of approximate superadditivity is entirely analogous. Let $0 \le N' < q_{k_0+mp}$ and $0 \le N'' < q_{k_0+np}$ be integers with Ostrowski expansions $N'=\sum_{k=0}^{k_0+mp-1} b_k' q_k$ and $N''=\sum_{k=0}^{k_0+np-1} b_k'' q_k$. Define $N=\sum_{k=0}^{k_0+(m+n+1)p-1} b_k q_k$, where \[ b_k= \left\{ \begin{array}{ll} b_k' & \textrm{if } 0 \le k \le k_0 +mp-1, \\ 0 & \textrm{if } k_0+mp \le k \le k_0 +(m+1)p-1, \\ b_{k-(m+1)p}'' & \textrm{if } k_0+(m+1)p \le k \le k_0+(m+n+1)p-1 . \end{array} \right. 
\] Note that this is a valid Ostrowski expansion of an integer $0 \le N < q_{k_0+(m+n+1)p}$; the block of zeroes in the middle ensures that the extra rule ($b_{k-1}=0$ whenever $b_k=a_{k+1}$) is satisfied. Repeating the arguments from above, we deduce the approximate factorization $G_N(\alpha) =G_{N'}(\alpha) G_{N''}(\alpha)e^{O(1)}$. Observe that as $(N',N'') \in [0,q_{k_0+mp}) \times [0,q_{k_0+np})$, we obtain each integer $N \in [0,q_{k_0+(m+n+1)p})$ at most $O(1)$ times; indeed, from the value of $N$ one can recover all Ostrowski digits of $N'$ and $N''$ except for $b_k''$, $0 \le k \le k_0-1$. Therefore \[ \begin{split} \left( \sum_{0 \le N<q_{k_0+mp}} G_N (\alpha )^c \right) \left( \sum_{0 \le N<q_{k_0+np}} G_N (\alpha )^c \right) &\le \sum_{0 \le N <q_{k_0+(m+n+1)p}} G_N (\alpha )^c e^{O(c)}, \\ \left( \max_{0 \le N<q_{k_0+mp}} G_N (\alpha ) \right) \left( \max_{0 \le N<q_{k_0+np}} G_N (\alpha ) \right) &\le \max_{0 \le N <q_{k_0+(m+n+1)p}} G_N (\alpha ) e^{O(1)}, \end{split} \] and we get $c_m+c_n \le c_{m+n+1}+O(1)$ and $c_m^*+c_n^* \le c_{m+n+1}^*+O(1)$. By the approximate subadditivity proved above, here \[ \begin{split} c_{m+n+1} &\le c_{m+n}+c_1+O(1) = c_{m+n}+O(\max \{ 1,1/c \} ), \\ c_{m+n+1}^* &\le c_{m+n}^*+c_1^*+O(1)=c_{m+n}^*+O(1). \end{split} \] Hence \[ \begin{split} c_m+c_n-O(\max \{ 1,1/c \}) &\le c_{m+n} \le c_m+c_n+O(1), \\ c_m^*+c_n^*-O(1) &\le c_{m+n}^* \le c_m^*+c_n^*+O(1), \end{split} \] as claimed. \end{proof} \begin{proof}[Proof of Theorem \ref{quadraticasymptotics}] According to Lemma \ref{additivelemma}, the sequence $c_m+L$ is subadditive, and the sequence $c_m-L$ is superadditive with some constant $L=O(\max \{ 1,1/c \})$. Fekete's subadditive lemma thus shows that $c_m/m$ converges, and its limit is \[ p K_c(\alpha ) :=\lim_{m \to \infty} \frac{c_m}{m} = \inf_{m \ge 1} \frac{c_m+L}{m} = \sup_{m \ge 1} \frac{c_m-L}{m} . \] The previous relations also yield the rate of convergence $c_m=K_c(\alpha ) mp+O(\max \{ 1 , 1/c\})$. 
Since $K_c (\alpha ) = O (\max \{ 1,1/c \})$ (see \eqref{KcKinftybounds}), we have \[ \log \left( \sum_{0 \le N < q_k} G_N (\alpha )^c \right)^{1/c} = K_c (\alpha ) k + O(\max \{ 1,1/c \} ). \] Lemma \ref{pngnlemma} and Proposition \ref{transferprinciple} show that here $G_N (\alpha )$ can be replaced by $P_N(\alpha )$ and also by $P_N (p_k/q_k )$. Hence \[ \log \left( \sum_{0 \le N < q_k} P_N (p_k/q_k)^c \right)^{1/c} = K_c (\alpha ) k + O(\max \{ 1,1/c \} ) , \] as claimed. An identical proof gives \[ \log \max_{0 \le N < q_k} P_N (p_k/q_k) = K_{\infty} (\alpha ) k + O(1). \] It follows from \eqref{triviallowerbound} and \eqref{KcKinftybounds} that the constants $K_c (\alpha)$ and $K_{\infty} (\alpha)$ are positive. \end{proof} \section{Locally uniform convergence of perturbed Sudler products}\label{limitfunctionsection} \begin{proof}[Proof of Theorem \ref{limittheorem}] Fix a compact interval $I \subseteq \mathbb{R}$. Throughout the proof, constants and implied constants depend only on $I$ and $\alpha$. For the sake of readability, we continue to write $f(x)=|2 \sin (\pi x)|$. We start by peeling off the last factor in $P_{q_k}(\alpha ,x)$ and using $\alpha = p_k/q_k+(-1)^k \delta_k/q_k$ to get \[ \begin{split} P_{q_k}(\alpha, x) &= f (q_k \alpha + (-1)^k x/q_k) \prod_{n=1}^{q_k-1} f \left( \frac{np_k}{q_k} + (-1)^k \frac{n\delta_k +x}{q_k} \right) \\ &= f(\delta_k +x/q_k) \prod_{n=1}^{q_k-1} f \left( \frac{np_k}{q_k} + (-1)^k \left( \left\{ \frac{n}{q_k} \right\} -\frac{1}{2} \right) \delta_k + (-1)^k \frac{2x+q_k\delta_k}{2q_k} \right) . \end{split} \] The factors in the previous line depend only on the remainder of $n$ modulo $q_k$. The general identity $q_k p_{k-1}-p_k q_{k-1}=(-1)^k$ in the theory of continued fractions shows that $q_k$ and $q_{k-1}$ are relatively prime, and $p_k q_{k-1} \equiv (-1)^{k+1} \pmod{q_k}$. 
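Both identities are classical and easy to confirm with exact integer arithmetic; the sketch below is our illustration, again for the convergents of $\sqrt{2}$ (the array names are ours).

```python
from math import gcd

a = [1] + [2] * 15                 # sqrt(2) = [1; 2, 2, 2, ...], our sample
p, q = [1, a[0]], [0, 1]           # (p_{-1}, p_0) and (q_{-1}, q_0)
for ak in a[1:]:
    p.append(ak * p[-1] + p[-2])
    q.append(ak * q[-1] + q[-2])
p, q = p[1:], q[1:]                # p[k], q[k] for k = 0, 1, 2, ...

# q_k p_{k-1} - p_k q_{k-1} = (-1)^k
ok_det = all(q[k] * p[k - 1] - p[k] * q[k - 1] == (-1) ** k
             for k in range(1, len(q)))
# gcd(q_k, q_{k-1}) = 1
ok_gcd = all(gcd(q[k], q[k - 1]) == 1 for k in range(1, len(q)))
# p_k q_{k-1} == (-1)^{k+1}  (mod q_k)
ok_inv = all((p[k] * q[k - 1] - (-1) ** (k + 1)) % q[k] == 0
             for k in range(1, len(q)))
```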
Reordering the product via the bijection $n \mapsto q_{k-1}n$ of the set of nonzero residues modulo $q_k$, and using the identity \eqref{lastterm}, we obtain \[ P_{q_k}(\alpha, x) = f( \delta_k +x/q_k ) q_k \prod_{n=1}^{q_k-1} \frac{f \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k - \frac{2x+q_k \delta_k}{2q_k} \right)}{f(n/q_k)} . \] Let us combine the $n$th and the $(q_k-n)$th factors in the product via the trigonometric identity $f(u-v)f(u+v)=|f^2(u)-f^2(v)|$. If $q_k$ is even, then the $n=q_k/2$ factor is $1+O(q_k^{-2})$, thus \begin{equation}\label{pqkproduct} \begin{split} P_{q_k}(\alpha ,x) = &(1+O(q_k^{-2})) f(\delta_k +x/q_k)q_k \\ &\times \prod_{0<n<q_k/2} \frac{\left| f^2 \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) - f^2 \left( \frac{2x+q_k \delta_k}{2q_k} \right) \right|}{f^2 \left( \frac{n}{q_k} \right)} . \end{split} \end{equation} Let $\log t \ll \psi (t) =o(t)$ be a function, to be chosen. We now show that the factors $\psi(q_k) \le n < q_k/2$ in \eqref{pqkproduct} have negligible contribution. Simple trigonometric identities and estimates give \[ \begin{split} &\frac{\left| f^2 \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) - f^2 \left( \frac{2x+q_k \delta_k}{2q_k} \right) \right|}{f^2 \left( \frac{n}{q_k} \right)} \\ &\hspace{20mm}= \frac{f^2 \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right)}{f^2 \left( \frac{n}{q_k} \right)} + O \left( \frac{1}{n^2} \right) \\ &\hspace{20mm} = 1 - \sin \left( 2 \pi \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) \cot \left( \pi \frac{n}{q_k} \right) + O \left( \frac{1}{n^2} \right) . 
\end{split} \] In particular, each factor $\psi (q_k) \le n < q_k/2$ is $1+O(1/n)$, and thus \[ \begin{split} &\prod_{\psi (q_k) \le n < q_k/2} \frac{\left| f^2 \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) - f^2 \left( \frac{2x+q_k \delta_k}{2q_k} \right) \right|}{f^2 \left( \frac{n}{q_k} \right)} \\ &= \exp \left( O \left( \left| \sum_{\psi (q_k) \le n < q_k/2} \sin \left( 2 \pi \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) \cot \left( \pi \frac{n}{q_k} \right) \right| + \frac{1}{\psi (q_k)} \right) \right) . \end{split} \] Since the partial quotients of $q_{k-1}/q_k=[0;a_k,a_{k-1}, \dots, a_1]$ are bounded by a constant depending only on $\alpha$, by a classical estimate \cite[pp.\ 125]{KN} the discrepancy of the sequence $\{ n q_{k-1}/q_k \}$, $1 \le n \le N$ is $\ll (\log N)/N$ provided that\footnote{Such discrepancy estimates are usually stated for irrational numbers, but the same proof works for rational numbers as well provided that $N$ is less than the denominator.} $N<q_k$. Applying Koksma's inequality \cite[pp.\ 143]{KN} to the mean zero, $1$-periodic function $\sin (2 \pi (\{ t \}-1/2)\delta_k)$ of total variation $\ll q_k^{-1}$, we thus deduce \[ \left| \sum_{n=1}^N \sin \left( 2 \pi \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) \right| \ll \frac{\log N}{q_k} \qquad (1 \le N <q_k/2) . \] Observe also that $\cot (\pi n /q_k)$ is monotone in $0<n<q_k/2$. 
Summation by parts yields \[ \left| \sum_{\psi (q_k) \le n < q_k/2} \sin \left( 2 \pi \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) \cot \left( \pi \frac{n}{q_k} \right) \right| \ll \frac{\log q_k}{\psi (q_k)} , \] hence \begin{equation}\label{psiqk<n<qk/2} \prod_{\psi (q_k) \le n < q_k/2} \frac{\left| f^2 \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) - f^2 \left( \frac{2x+q_k \delta_k}{2q_k} \right) \right|}{f^2 \left( \frac{n}{q_k} \right)} = 1+O \left( \frac{\log q_k}{\psi (q_k)} \right) . \end{equation} Next, fix a large constant $N_0>0$, and consider the factors $N_0<n <\psi(q_k)$ in \eqref{pqkproduct}. Applying the estimate $f^2(t)=4 \pi^2 t^2 (1+O(t^2))$ in all three terms introduces a multiplicative error of $\prod_{N_0<n<\psi (q_k)} \left( 1+O(n^2/q_k^2) \right) = 1+O(\psi(q_k)^3/q_k^2)$, and we get \[ \begin{split} &\prod_{N_0<n<\psi (q_k)} \frac{\left| f^2 \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) - f^2 \left( \frac{2x+q_k \delta_k}{2q_k} \right) \right|}{f^2 \left( \frac{n}{q_k} \right)} \\ &= \left( 1 + O \left( \frac{\psi(q_k)^3}{q_k^2} \right) \right) \prod_{N_0<n<\psi(q_k)} \left| \left( 1-q_k \delta_k \frac{\left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2}}{n} \right)^2 - \frac{\left( x+ \frac{q_k \delta_k}{2} \right)^2}{n^2} \right| . \end{split} \] Choosing $N_0$ large enough, we can ensure that each factor in the previous product stays in, say, $[1/2,2]$. By the explicit formula \eqref{recursionsolutions} for $q_k$ and $\delta_k$, we have $q_k \delta_k=C_{[k]}E_{[k]}+O(q_k^{-2})$. Hence replacing $q_k \delta_k$ by $C_{[k]}E_{[k]}$ in the previous formula introduces a negligible multiplicative error of $\prod_{N_0<n<\psi (q_k)} (1+O(q_k^{-2}n^{-1}))=1+O(q_k^{-2}\log \psi (q_k))$. 
We also wish to replace $q_{k-1}/q_k=[0;a_k,a_{k-1},\dots, a_1]$ by its limit $\alpha_{[k]}$; recall that $\alpha_r=[0;\overline{a_{s+r+p}, \dots, a_{s+r+2},a_{s+r+1}}]$. By the explicit formula \eqref{recursionsolutions}, we have $q_{k-1}/q_k=\alpha_{[k]}+O(q_k^{-2})$ (in particular, $\alpha_r=C_{r-1}/C_r$, $1 \le r \le p$ with the convention $C_0=C_p/\eta$). Observe that the function $(\{ nt \}-1/2)/n$ consists of linear segments of slope 1, with jumps at the points $j/n$, $j \in \mathbb{Z}$. Since $q_{k-1}/q_k$ has bounded partial quotients, for all $N_0<n<\psi (q_k)$ and all $j \in \mathbb{Z}$ we have \[ \left| \frac{q_{k-1}}{q_k} - \frac{j}{n} \right| \ge \frac{\| n q_{k-1}/q_k \|}{n} \gg \frac{1}{n^2} , \] hence there is no jump between $q_{k-1}/q_k$ and $\alpha_{[k]}$. Therefore \[ \left| \frac{\left\{ \frac{n q_{k-1}}{q_k} \right\} - \frac{1}{2}}{n} - \frac{\left\{ n \alpha_{[k]} \right\}\ - \frac{1}{2}}{n} \right| \ll \frac{1}{q_k^2} , \] so replacing $q_{k-1}/q_k$ by $\alpha_{[k]}$ introduces a negligible multiplicative error of $1+O(q_k^{-2} \psi (q_k))$. We have thus deduced \begin{equation}\label{N0<n<psiqk} \begin{split} &\prod_{N_0<n<\psi (q_k)} \frac{\left| f^2 \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) - f^2 \left( \frac{2x+q_k \delta_k}{2q_k} \right) \right|}{f^2 \left( \frac{n}{q_k} \right)} \\ &= \left( 1 + O \left( \frac{\psi(q_k)^3}{q_k^2} \right) \right) \prod_{N_0<n<\psi(q_k)} \left| \left( 1-C_{[k]} E_{[k]} \frac{\{ n \alpha_{[k]} \} - \frac{1}{2}}{n} \right)^2 - \frac{\left( x+ \frac{C_{[k]} E_{[k]}}{2} \right)^2}{n^2} \right| . \end{split} \end{equation} \begin{lem}\label{cauchylemma} For any integers $N_0<N_1 \le N_2$ and any $1 \le r \le p$, \[ \prod_{n=N_1}^{N_2} \left| \left( 1-C_r E_r \frac{\{ n \alpha_r \} - \frac{1}{2}}{n} \right)^2 - \frac{\left( x+ \frac{C_r E_r}{2} \right)^2}{n^2} \right| = 1+O \left( \frac{\log N_1}{N_1} \right) . 
\] \end{lem} \begin{proof} Choosing the constant $N_0$ large enough, each factor stays in, say, $[1/2,2]$. Since the partial quotients of $\alpha_r$ are bounded by a constant depending only on $\alpha$, the same discrepancy estimate and Koksma's inequality yield $\left| \sum_{n=1}^N (\{ n \alpha_r \} -1/2) \right| \ll \log N$ for all $N \ge 1$. Applying summation by parts, we deduce \[ \left| \sum_{n=N_1}^{N_2} \frac{\{ n \alpha_r \} -\frac{1}{2}}{n} \right| \ll \frac{\log N_1}{N_1}, \] and the claim of Lemma \ref{cauchylemma} follows. \end{proof} Lemma \ref{cauchylemma} immediately implies via the Cauchy criterion that the infinite product \begin{equation}\label{infiniteproduct} \prod_{n=N_0+1}^{\infty} \left| \left( 1-C_r E_r \frac{\{ n \alpha_r \} - \frac{1}{2}}{n} \right)^2 - \frac{\left( x+ \frac{C_r E_r}{2} \right)^2}{n^2} \right| \end{equation} is uniformly convergent on $I$, and its limit is positive. Its logarithm is given by a uniformly convergent series; the series of term-by-term derivatives \[ \sum_{n=N_0+1}^{\infty} \left( \left( 1-C_r E_r \frac{\{ n \alpha_r \} - \frac{1}{2}}{n} \right)^2 - \frac{\left( x+ \frac{C_r E_r}{2} \right)^2}{n^2} \right)^{-1} \frac{-2x-C_r E_r}{n^2} \] is also seen to be uniformly convergent on $I$. Therefore the logarithm of the infinite product in \eqref{infiniteproduct} is continuously differentiable on $I$; clearly so is the infinite product itself. Multiplying by the missing factors $1 \le n \le N_0$, it follows that the infinite product \eqref{Grdef} defining $G_r(\alpha, x)$ is uniformly convergent on $I$, $G_r (\alpha,x)$ is continuous on $I$, and continuously differentiable on $I$ except at its (finitely many) zeroes. This proves all claims on $G_r(\alpha,x)$. 
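The proof of Lemma \ref{cauchylemma} rests on the bound $\left| \sum_{n=1}^N (\{ n \alpha_r \} - 1/2) \right| \ll \log N$ for badly approximable $\alpha_r$. As an illustrative numerical sanity check (not part of the proof), the sketch below evaluates these sums for the golden ratio, whose partial quotients are all $1$; the explicit constant $5$ is an arbitrary generous choice.

```python
import math

def sawtooth_sum(alpha, N):
    """Partial sum of the centered fractional parts {n*alpha} - 1/2."""
    return sum((n * alpha) % 1.0 - 0.5 for n in range(1, N + 1))

# Golden ratio: a badly approximable number with all partial quotients 1.
phi = (1.0 + math.sqrt(5.0)) / 2.0

for N in (10**3, 10**4, 10**5):
    S = sawtooth_sum(phi, N)
    # Koksma-type bound |S| << log N, with a generous explicit constant.
    assert abs(S) < 5.0 * math.log(N)
```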
Repeating the arguments above for the factors $1 \le n \le N_0$, we deduce \[ \begin{split} &f(\delta_k +x/q_k) q_k \prod_{n=1}^{N_0} \frac{\left| f^2 \left( \frac{n}{q_k} - \left( \left\{ \frac{nq_{k-1}}{q_k} \right\} - \frac{1}{2} \right) \delta_k \right) - f^2 \left( \frac{2x+q_k \delta_k}{2q_k} \right) \right|}{f^2 \left( \frac{n}{q_k} \right)} \\&= 2 \pi |x+C_{[k]} E_{[k]}| \prod_{n=1}^{N_0} \left| \left( 1-C_{[k]} E_{[k]} \frac{\{ n \alpha_{[k]} \} - \frac{1}{2}}{n} \right)^2 - \frac{\left( x+ \frac{C_{[k]} E_{[k]}}{2} \right)^2}{n^2} \right| +O(q_k^{-2}) \end{split} \] with an additive instead of multiplicative error term, since the factors are not bounded away from zero. Lemma \ref{cauchylemma} also shows that the product on the right hand side of \eqref{N0<n<psiqk} can be extended to all $N_0<n$ up to a negligible error, therefore combining \eqref{pqkproduct}--\eqref{N0<n<psiqk} and the previous formula, we obtain \[ P_{q_k}(\alpha ,x) = \left( 1+O \left( \frac{\log q_k}{\psi (q_k)} + \frac{\psi (q_k)^3}{q_k^2} \right) \right) G_{[k]}(\alpha ,x) + O(q_k^{-2}). \] The optimal choice is $\psi (t)=t^{1/2} \log^{1/4} t$. This finishes the proof of Theorem \ref{limittheorem}. \end{proof} \section*{Acknowledgements} CA is supported by the Austrian Science Fund (FWF), projects F-5512, I-3466, I-4945 and Y-901. BB is supported by FWF project Y-901. We want to thank Agamemnon Zafeiropoulos for drawing our attention to the papers of Bettin and Drappeau, and for keeping us informed about his joint work with Grepstad and Neum\"uller.

\section{Introduction} \label{sec: intro} Measurements of the distance--redshift relation of type Ia supernovae (SNeIa) firmly established contemporary accelerated cosmological expansion \citep{riess98, perlmutter98} and SNIa distances remain one of the most promising probes of dark energy. To determine whether the accelerated cosmological expansion is caused by a ubiquitous dark energy or large-scale deviations from general relativity, it is necessary to measure both the expansion of the universe and the dynamics of structure formation \citep{zhang_etal05,linder05,zhan_knox06,wang_etal07,huterer_linder07, linder_cahn08,zhan_etal08,mortonson_etal08,zhang08,hearin_zentner09,zhao_etal10, kb09}. The SNIa luminosity distance test provides information about the expansion rate of the universe, but does not provide information on structure formation (though SNIa magnifications may achieve this in the future, see Refs.~\citep{metcalf99,dodelson_vallinotto06,zentner_bhattacharya09}). Peculiar velocities are related to densities through a continuity equation, so peculiar velocity statistics provide one avenue to study the growth of cosmic structure (e.g., \citep{fabian07}). The best-explored option for probing the peculiar velocity field is via redshift-space distortions imprinted on the galaxy power spectrum (e.g., \citep{linder08,white08,percival08}). Peculiar velocities may be detectable with future microwave experiments via the kinetic Sunyaev-Zeldovich effect \citep{bk06,bk07,bk08} and from large samples of SNeIa with spectroscopic redshifts \citep{SNvel}. In this paper, we examine the possibility of utilizing the mean pairwise velocity statistic, measured from SNeIa in a large photometric survey, to constrain dark energy. Two well-studied statistics derivable from a sample of line-of-sight peculiar velocities are the {\em velocity correlation function} and the {\em mean pairwise velocity} \citep{ferreira99, sheth01}.
The former is a two-point statistic expressing correlations in the peculiar velocities of objects as a function of their separation. The mean pairwise velocity is a measure of the typical relative velocity of objects at a given separation. Peculiar velocities are sensitive to both the rate of structure growth in the universe and the rate of expansion of the universe. Therefore, peculiar velocity measurements on cosmological scales may constrain the dark energy that drives cosmological acceleration and quenches late-time structure growth. Traditionally, the bulk flow velocity has been measured by coupling measured galaxy redshifts with local distance indicators such as the fundamental plane of early-type galaxies \citep{feldman03,sarkar07}, the Tully-Fisher relation \citep{courteau00,borgani00,borgani00b}, or surface brightness fluctuations \citep{blakeslee00}. More recent studies \citep{watkins07, feldman08, feldman09} have measured significant bulk flows on scales of $100$~Mpc. Radial velocity measurements have been used to reconstruct the velocity and density fields \citep{dekel99}. Reconstruction methods provide a way to test the gravitational instability theory and to measure the bias between the galaxy and mass density fields. Such studies are limited to the relatively local Hubble flow ($z \lesssim 0.1$), primarily because a constant fractional error in distance corresponds to a larger velocity interval at higher redshifts, an error that eventually overcomes the signal. Type Ia supernovae, in contrast, are well-calibrated standard candles, and at cosmological distances SNeIa are more reliable distance indicators than those previously used for measuring peculiar velocities. Indeed, the dipole and quadrupole moments of the local bulk flow velocity have been measured to higher precision with the current data set of a few hundred SNeIa than with reconstructions based upon catalogs of many thousands of galaxies \citep{SNvel}.
SNeIa that are physically near each other exhibit coherent motion as they are influenced by correlated density structures. Therefore, the errors in the luminosity distance measurements of pairs of SNeIa should be correlated at low redshift ($z \lesssim 0.1$). Ignoring this correlation can lead to systematic biases in the determination of dark energy parameters \citep{SNLSvel,hui_greene06,cooray_sn06}. Alternatively, one can treat these correlated shifts in luminosity distance as ``signal,'' because peculiar velocities depend upon cosmological parameters. This signal has led to useful, independent constraints on the low-redshift normalization of the matter power spectrum, $\sigma_8$, and the total matter density, $\Omega_m$ \citep{SNvel107,SNvel07}. Unfortunately, direct measurements of the velocity correlation remain limited to relatively low redshifts. Even in an optimistic scenario of measurements of one million SNeIa, all with full spectroscopic follow-up, the velocity correlation can only be measured to a redshift of $z \simeq 0.5$ \citep{zhang08}. In contrast to peculiar velocity correlation measurements, mean pairwise velocity is a linear statistic, so its errors vary more mildly with redshift. In this study, we show that it will be possible to obtain interesting cosmological information from mean pairwise velocities to a redshift of $z=0.9$ in a large photometric survey of SNeIa, such as that planned for the Large Synoptic Survey Telescope (LSST), which is anticipated to increase our current catalog of SNeIa dramatically, by a factor of nearly 1000. We demonstrate that such a measurement can provide dark energy constraints that complement luminosity distance measurements under optimistic, but reasonable, assumptions.
The constraints from mean pairwise velocities are also useful because they may be estimated with relatively little additional observational effort beyond that already required to use SNeIa to map luminosity distance or to detect cosmic lensing magnification \citep{zentner_bhattacharya09}. We find that combining the mean pairwise velocity measurements with distance measurements of SNeIa will sharpen constraints on the dark energy parameters compared to those inferred from luminosity distances alone. In particular, mean pairwise velocity constraints can improve the dark energy Figure of Merit from SNeIa as defined by the Dark Energy Task Force \citep{detf} (DETF) by a factor of 1.8. We additionally demonstrate that mean pairwise velocities, being a differential statistic, are potentially much less sensitive to systematic errors than other commonly considered observational techniques. Ultimately, this property may make mean pairwise velocities one of the most practically useful probes of dark energy. Following the DETF, we describe the dark energy in terms of three phenomenological parameters: its current energy density $\Omega_\Lambda$ and two parameters describing the redshift evolution of its equation of state, $w_0$ and $w_a$, such that $w(a) = w_0 + (1-a)w_a$. The additional cosmological parameters upon which the velocity field depends are the large-scale normalization of the matter power spectrum $\Delta_{\zeta}$, the power-law index of the primordial power spectrum $n_S$, the Hubble parameter $h$, the curvature of the universe $\Omega_k$, and the present-day matter density $\Omega_m$. In addition, we treat the photometric redshift (photo-z) dispersion, $\sigma_z$, as a free parameter with priors. We label our set of parameters ${\bf p}$. 
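The CPL parametrization above integrates in closed form: $\rho_{\rm DE}(z)/\rho_{{\rm DE},0} = (1+z)^{3(1+w_0+w_a)} \exp\left(-3 w_a z/(1+z)\right)$, so the expansion rate follows algebraically from the parameters. A minimal numerical sketch (function names are ours, not from any library; fiducial values $\Omega_m=0.25$, $\Omega_\Lambda=0.75$):

```python
import math

def w_cpl(a, w0=-1.0, wa=0.0):
    """DETF equation of state, w(a) = w0 + (1 - a) * wa."""
    return w0 + (1.0 - a) * wa

def E_squared(z, Om=0.25, Ok=0.0, OL=0.75, w0=-1.0, wa=0.0):
    """(H(z)/H0)^2 with CPL dark energy; rho_DE follows from the
    closed-form integral of 3*(1 + w(a)) dln(a)."""
    rho_de = (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * z / (1.0 + z))
    return Om * (1.0 + z) ** 3 + Ok * (1.0 + z) ** 2 + OL * rho_de

# Cosmological-constant limit (w0 = -1, wa = 0): rho_DE is constant.
z = 0.9
assert abs(E_squared(z) - (0.25 * (1.0 + z) ** 3 + 0.75)) < 1e-12
```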
We consider a fiducial cosmological model similar to the WMAP 5-year results \citep{WMAP5}: $\Delta_{\zeta}=2.0 \times 10^{-9}$, $n_S=0.95$, $h=0.71$, $\Omega_k=0$, $\Omega_m= 0.25$, $\Omega_\Lambda= 0.75$, $w_0= -1$, and $w_a= 0$. The paper is organized as follows. In Section II, we describe our assumed SNIa survey specifications and review the estimation of supernova line-of-sight peculiar velocities from observed supernova brightnesses and redshifts. Section III describes a halo model calculation of the mean pairwise velocity as a function of cosmological parameters. Sections IV and V quantify various sources of systematic and statistical errors, respectively, that impact SNIa pairwise velocity measurements. We present our results for dark energy parameter constraints in Section VI, using two different sets of prior constraints. We also derive the limits on systematic effects that must be achieved for the resulting parameter bias to be smaller than the calculated statistical errors. In Section VII, we summarize the kinds of observational efforts required to meet the prospects outlined in this paper, along with a brief discussion of the systematic error properties of mean pairwise velocities compared to other dark energy probes. \section{Large-Area Photometric Supernova Surveys} \label{sec: survey_specs} Forthcoming large-scale imaging surveys such as LSST or the Panoramic Survey Telescope and Rapid Response System (PanSTARRS) \citep{panstarrs,LSST,LSSTBook} will discover $10^4$ to $10^6$ SNeIa. These SNeIa may be observed with broadband photometry with exposures spaced several days apart. To infer cosmological parameters from peculiar velocities, reliable distance measurements are needed. These will likely be obtained from a well-characterized subset of the supernovae discovered by any survey. The particular characteristics of this subset depend upon survey strategy and are difficult to anticipate.
For ease of comparison with published studies, we adopt survey specifications similar to what may be achieved with a survey similar to LSST. We assume a total of $3\times 10^5$ SNeIa out to $z=1.2$, collected over a dedicated supernova survey region of 300 square degrees; this corresponds to a SNIa surface density of 1000 deg$^{-2}$. This number density corresponds to the ``d2k'' survey described in \citep{zhan08} and such a dedicated survey may be undertaken as part of the science goals of the LSST \citep{LSSTBook}. We assume redshifts estimated using broadband photometry with a redshift-dependent, normally-distributed error of $\sigma_z=\sigma_{z0}(1+z)$. DETF specifies an error range of $\sigma_{z0}=0.01$ for an optimistic scenario to $\sigma_{z0}=0.05$ for a pessimistic scenario; in our parameter forecasts, we allow $\sigma_{z0}$ to vary along with the cosmological parameters. Following \citet{zhan08}, we model the SNeIa redshift distribution as \begin{equation} \frac{d^3n}{d\Omega\, dz\, dt} \propto \begin{cases} \exp(3.12z^{2.1})-1, & \text{$z\le 0.5$,} \\ \left(\exp(3.12z^{2.1})-1\right)\exp(-12.2(z-0.5)^2), & \text{$z>0.5$.} \end{cases} \label{snrate} \end{equation} To the extent that SNeIa are standardizable candles, photometric observations will yield a distance modulus $\mu$ and a luminosity distance $d_L$ via the usual relation \begin{equation} \mu = 2.17 \ln\left(\frac{d_L}{\rm Mpc}\right) + 25. \label{mudef} \end{equation} The luminosity distance is obtained from the cosmological redshift $z$ via the definition \begin{equation} d_L(z) = (1+z) d_C(z) = (1+z)c\int_0^z \frac{dz'}{H(z')} , \label{dLz} \end{equation} where $d_C(z)$ is the comoving line-of-sight distance to a galaxy at redshift $z$, $H(z)$ is the Hubble parameter as a function of redshift, and a geometrically flat universe has been assumed in the second equality. The evolution of the Hubble parameter, and thus the luminosity distance, depends on the assumed cosmological model. 
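Equations (\ref{mudef}) and (\ref{dLz}) are straightforward to evaluate numerically. The sketch below assumes the flat fiducial model and a simple trapezoid rule for the comoving-distance integral; the function names are illustrative, not from any library.

```python
import math

C_KMS = 2.998e5  # speed of light in km/s

def H(z, h=0.71, Om=0.25, OL=0.75):
    """Hubble parameter in km/s/Mpc for the flat fiducial LambdaCDM model."""
    return 100.0 * h * math.sqrt(Om * (1.0 + z) ** 3 + OL)

def d_L(z, n=2000):
    """Luminosity distance in Mpc: (1+z) c int_0^z dz'/H(z'), trapezoid rule."""
    dz = z / n
    integral = sum(0.5 * dz * (1.0 / H(i * dz) + 1.0 / H((i + 1) * dz))
                   for i in range(n))
    return (1.0 + z) * C_KMS * integral

def mu(z):
    """Distance modulus: 5 log10(d_L/Mpc) + 25, i.e. 2.17 ln(d_L/Mpc) + 25."""
    return 5.0 * math.log10(d_L(z)) + 25.0

# Low-redshift check: d_L -> c z / H0 as z -> 0.
z = 0.01
assert abs(d_L(z) / (C_KMS * z / H(0.0)) - 1.0) < 0.02
```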
For a given supernova, its measured redshift is the difference between its cosmological redshift and the additional Doppler shift due to its line-of-sight velocity, \begin{equation} z_{\rm meas} = z(\mu) - \frac{v_{\rm los}}{c}(1+z(\mu)), \label{zmeas} \end{equation} where its cosmological redshift $z(\mu)$ can be obtained from its observed luminosity by inverting Eqs.~(\ref{mudef}) and (\ref{dLz}). The factor of $(1+z)$ in Eq.~(\ref{zmeas}) accounts for the cosmological redshift between the rest frame and the observation frame. For a given supernova with observed redshift and luminosity, its line-of-sight velocity can be obtained by rearranging Eq.~(\ref{zmeas}) into \begin{equation} v_{\rm los} = \frac{cz(\mu) - cz_{\rm meas}}{1+z_{\rm meas}} , \label{vlos} \end{equation} where we have replaced $z(\mu)$ by $z_{\rm meas}$ in the denominator, which will always be a good approximation for objects at cosmological distances where the first term in Eq.~(\ref{zmeas}) is large compared to the second term. Traditional peculiar velocity estimates using other standard candles at cosmological distances have been hampered by errors in distance estimates, which propagate into errors in $z(\mu)$. For a galaxy with cosmological redshift $z=0.03$, a $10\%$ error in distance corresponds to an error in inferred cosmological redshift equivalent to a peculiar velocity of 1000 km/s, with the size of the error increasing proportional to redshift for $z \leq 1$. Large-area supernova surveys offer two main advantages. First, supernovae are bright enough and good enough standard candles to provide convenient distance estimators out to $z=1$ and beyond. Second, the anticipated large number of supernovae hold the promise of determining average distances far more precisely than individual distances, allowing precise determination of average velocity statistics from large catalogs of supernovae. 
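Equations (\ref{zmeas}) and (\ref{vlos}) are easy to check in code. The round trip below (with hypothetical function names) shows that for this example the error incurred by replacing $z(\mu)$ with $z_{\rm meas}$ in the denominator is well under 1 km/s:

```python
C_KMS = 2.998e5  # speed of light in km/s

def z_measured(z_mu, v):
    """Eq. (zmeas): z_meas = z(mu) - (v/c) * (1 + z(mu))."""
    return z_mu - (v / C_KMS) * (1.0 + z_mu)

def v_los(z_mu, z_meas):
    """Eq. (vlos): v = c * (z(mu) - z_meas) / (1 + z_meas), in km/s."""
    return C_KMS * (z_mu - z_meas) / (1.0 + z_meas)

# Round trip: a 300 km/s peculiar velocity at z = 0.5 is recovered
# up to the z_meas-vs-z(mu) approximation in the denominator.
v_true = 300.0
zm = z_measured(0.5, v_true)
assert abs(v_los(0.5, zm) - v_true) < 1.0
```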
Of course, realizing this promise requires controlling systematic errors in both distance and redshift observations to a high level, so that averages over large ensembles of SNeIa reflect the actual velocity statistic. Both systematic and statistical errors will be considered following the next Section, which outlines the application of the mean pairwise velocity statistic to supernova surveys. \section{Mean Pairwise Peculiar Velocity} \label{sec: theory} The mean pairwise velocity $v(r,a)$ at a comoving separation $r$ and scale factor $a=1/(1+z)$ is the average over all pairs at a fixed comoving separation of the relative peculiar velocity of the two galaxies projected along the line joining them. That is, \begin{equation} v(r,a)= \frac{1}{N(r)}\sum_{i \ne j} ({\bf v}_i- {\bf v}_j) \cdot {\bf \hat r}, \label{eq:vij} \end{equation} where ${\bf v}_i$ is the peculiar velocity of supernova $i$ and $\bf {\hat r}$ is the unit vector in the direction of the separation of the two objects. The sum is over $N(r)$ pairs at a given comoving separation $r$. (Note that the quantity which we write throughout this paper as ``$v(r,a)$'' is commonly written in the literature as ``$v_{ij}(r,a)$'' or ``$v_{12}(r,a)$.'' We use this notation to avoid potential confusion with subscript labels for individual galaxies that we use below.) The mean pairwise velocity for dark matter particles may be derived using the pair conservation equation \cite{davis77}. However for galaxies, the pair conservation equation needs to be modified to account for evolution \cite{sheth01}. 
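The definition in Eq.~(\ref{eq:vij}) can be exercised on a mock catalog. For a pure radial outflow ${\bf v}_i = H_0 {\bf r}_i$ one has $({\bf v}_i - {\bf v}_j)\cdot \hat{\bf r} = H_0 |{\bf r}_i - {\bf r}_j|$, so the statistic must return exactly $H_0 r$ at every separation; a minimal sketch in toy units (all names are ours):

```python
import itertools, math, random

def mean_pairwise_velocity(positions, velocities, r, tol=1e-9):
    """Eq. (vij): average of (v_i - v_j) . rhat over ordered pairs i != j
    whose separation equals r (to within tol)."""
    total, count = 0.0, 0
    for i, j in itertools.permutations(range(len(positions)), 2):
        sep = [positions[i][k] - positions[j][k] for k in range(3)]
        dist = math.sqrt(sum(s * s for s in sep))
        if abs(dist - r) < tol:
            rhat = [s / dist for s in sep]
            dv = [velocities[i][k] - velocities[j][k] for k in range(3)]
            total += sum(d * rh for d, rh in zip(dv, rhat))
            count += 1
    return total / count

# Mock "Hubble flow" catalog, v = H0 * r: the statistic equals H0 * r exactly.
random.seed(1)
H0 = 0.071  # toy units
pts = [(random.uniform(-50, 50), random.uniform(-50, 50), random.uniform(-50, 50))
       for _ in range(6)]
vels = [tuple(H0 * x for x in p) for p in pts]
r01 = math.sqrt(sum((pts[0][k] - pts[1][k]) ** 2 for k in range(3)))
assert abs(mean_pairwise_velocity(pts, vels, r01) - H0 * r01) < 1e-9
```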
The resulting mean pairwise velocity for SNIa host galaxies with a comoving separation $r$ at a mean scale factor $a$ (assuming that the redshift difference between the two galaxies corresponds to a scale factor difference much smaller than $a$) can be written \begin{equation} v(r,a)=-\frac{2}{3}H(a)a\frac{d \ln D_a}{d \ln a}b_{\rm gal}(a)\frac{r\bar{\xi}^{\rm dm}(r,a)}{1+\xi^{\rm gal}(r,a)} , \label{v12} \end{equation} where \begin{equation} \xi^{\rm dm}(r,a)= \frac{D_a^2}{2\pi^2r} \int_0^{\infty} dk\,k \sin(kr) P(k) \end{equation} is the dark matter two-point correlation function, $P(k)$ is the dark matter power spectrum at wavenumber $k$, $H(a)$ is the Hubble parameter at a given redshift, and $D_a$ is the linear growth factor as a function of time, normalized so that $D_{a}=1$ at $z=0$. We also define the dark matter correlation function averaged over separations less than $r$ to be \begin{equation} \bar{\xi}^{\rm dm}(r,a) = \frac{3}{r^3}\int_0^r dr'\,r'^2 \xi^{\rm dm}(r',a). \end{equation} We are interested in the large-scale limit, so we model the correlation function of supernova host galaxies using a deterministic linear bias relative to the dark matter $b_{\rm gal}(z)$, defined by \begin{equation} \xi^{\rm gal}(r,a)=b_{\rm gal}^2(z) \xi^{\rm dm}(r,a). \end{equation} The bias $b_{\rm gal}(z)$ in general varies with the galaxy separation, a variety of galaxy properties, and redshift \citep{galbias}. In the large-scale limit, scale-independent bias is a fairly good assumption. Following Ref.~\citep{zhan08}, we model $b_{\rm gal}(z)$ as $b_{\rm gal}(z)=1.0+0.6z$ to obtain the fiducial value of the galaxy bias as a function of redshift. With future photometric surveys potentially detecting more than a billion galaxies, we can expect that the correlation function of samples of galaxies matching the SN hosts can be measured to percent-level accuracy or better. 
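Equation (\ref{v12}) and the two correlation integrals can be sketched with simple quadrature. The block below substitutes a toy power spectrum with a cutoff and a fixed toy growth rate $f = d\ln D_a/d\ln a = 0.5$ in place of the paper's CDM inputs, so only the sign (infall, $v<0$) and rough shape are meaningful:

```python
import math

def xi_dm(r, pk, kmax=2.0, n=2000):
    """xi(r) = 1/(2 pi^2 r) int_0^kmax dk k sin(kr) P(k), midpoint rule."""
    dk = kmax / n
    s = sum((i + 0.5) * dk * math.sin((i + 0.5) * dk * r) * pk((i + 0.5) * dk)
            for i in range(n))
    return s * dk / (2.0 * math.pi ** 2 * r)

def xi_bar(r, pk, m=100):
    """Volume-averaged correlation, 3/r^3 int_0^r dr' r'^2 xi(r')."""
    dr = r / m
    s = sum(((i + 0.5) * dr) ** 2 * xi_dm((i + 0.5) * dr, pk) for i in range(m))
    return 3.0 / r ** 3 * s * dr

def v_pair(r, pk, H=71.0, a=1.0, f=0.5, b=1.0):
    """Eq. (v12): v = -(2/3) H a f b r xibar / (1 + b^2 xi)."""
    return (-(2.0 / 3.0) * H * a * f * b * r * xi_bar(r, pk)
            / (1.0 + b ** 2 * xi_dm(r, pk)))

# Toy power spectrum with a cutoff near k ~ 0.05 (purely illustrative).
pk = lambda k: k / (1.0 + (k / 0.05) ** 3)

# Gravitational infall: xi_bar > 0 implies pairs approach, so v < 0.
assert xi_bar(30.0, pk) > 0.0
assert v_pair(30.0, pk) < 0.0
```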
Thus the uncertainty in $b_{\rm gal}$ will primarily be due to uncertainty in the cosmological parameters affecting the dark matter correlation function. We express $b_{\rm gal}$ in terms of the galaxy and predicted dark matter correlation functions, and use this bias value in Eq.~(\ref{v12}). Only the line-of-sight component of the velocity can be obtained from observations, while the mean pairwise velocity involves all three directional components of the velocity. We use the estimator for the mean pairwise velocity given a data set of line-of-sight velocities developed in Ref.~\citep{ferreira99}. Consider two galaxies $i$ and $j$ at comoving positions ${\bf r}_i$ and ${\bf r}_j$ moving with peculiar velocities ${\bf v}_i$ and ${\bf v}_j$. The radial component of velocities can be written as ${v}^{r}_i= {\bf \hat r}_i\cdot{\bf v}_i$ and ${v}^{r}_j= {\bf\hat r}_j\cdot{\bf v}_j$. Then an estimate for the pairwise velocity of the two galaxies $v_{ij}^{\rm est}$ is defined by $\langle {v}^{r}_i-{v}^{r}_j\rangle = v_{ij}^{\rm est}{\bf\hat r}\cdot({\bf\hat r}_i+ {\bf\hat r}_j)/2$, where ${\hat r}$ is the unit vector along the line joining the two galaxies. If we now consider a catalog of line-of-sight galaxy velocities, minimizing $\chi^2$ between the actual pairwise velocities and the estimate of the pairwise velocity at a given separation $r$ gives an estimator for the pairwise velocity Eq.~(\ref{v12}) based on the catalog, \begin{equation} v^{\rm est}(r,a)= \frac{\sum_{\rm pairs}(v^{r}_i- v^{r}_j)p_{ij}}{\sum_{\rm pairs} p_{ij}^2} , \label{v_est} \end{equation} where the sums are over all pairs $i\neq j$ of galaxies at comoving separation $r$ and $p_{ij}= {\bf\hat r} \cdot ({\bf\hat r}_i + {\bf\hat r}_j)/2$. Note that this form for the projection tensor $p_{ij}$ is applicable in the flat-sky limit and breaks down for large angular separations; in particular it is zero if the two galaxies are in opposite sky directions. 
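Equation (\ref{v_est}) in code, ignoring the binning in separation $r$ (names are ours). For two galaxies on a common line of sight, $p_{ij}=1$ and the estimator reduces to the radial velocity difference:

```python
import itertools, math

def pairwise_velocity_estimator(positions, v_radial):
    """Eq. (v_est): sum over pairs of (v_i^r - v_j^r) p_ij / sum p_ij^2,
    with p_ij = rhat . (rhat_i + rhat_j) / 2 (flat-sky projection)."""
    num, den = 0.0, 0.0
    for i, j in itertools.combinations(range(len(positions)), 2):
        ri, rj = positions[i], positions[j]
        di = math.sqrt(sum(x * x for x in ri))
        dj = math.sqrt(sum(x * x for x in rj))
        sep = [a - b for a, b in zip(ri, rj)]
        dist = math.sqrt(sum(s * s for s in sep))
        rhat = [s / dist for s in sep]
        p_ij = sum(rh * (a / di + b / dj) / 2.0 for rh, a, b in zip(rhat, ri, rj))
        num += (v_radial[i] - v_radial[j]) * p_ij
        den += p_ij ** 2
    return num / den

# Collinear pair: p_ij = 1, so the estimator is just v_i^r - v_j^r.
pos = [(0.0, 0.0, 100.0), (0.0, 0.0, 50.0)]
vr = [-100.0, 100.0]
assert abs(pairwise_velocity_estimator(pos, vr) - (-200.0)) < 1e-9
```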
In this paper, we consider a model supernova survey of 300 square degrees in a compact sky region, and the $p_{ij}$ expression given here is always valid. To extend the results here to a full-sky survey, or to survey patches which are separated by large angles, a more complicated projection tensor must be used. The derivation is not conceptually difficult, but this will be deferred to future work giving more detailed estimates of signal-to-noise ratios for particular observing strategies. Equation (\ref{v_est}) is a function of the separation $r$ between the two galaxies. To measure this distance, we must use the estimated locations of each galaxy; this is subject to errors which will be quantified in the next Section. The separation that is measured directly is the angle between two galaxies on the sky. This angle can be converted to the transverse component of the distance between the two galaxies using the angular diameter distances corresponding to their redshifts. The expression in Eq.~(\ref{v_est}) is a very simple estimator which weights all pairs of velocities uniformly. A more careful analysis of real data would use, for example, a signal-to-noise weighting in the sum. This is not a major correction to the analysis in this paper, as we limit the sums in Eq.~(\ref{v_est}) to pairs with separations smaller than 100 Mpc; at larger separations the signal becomes small. In principle, a signal-to-noise weighting can squeeze more information out of the data, using pairs with larger separations, but it does not qualitatively change our results. Our estimator is accurate, as we have shown explicitly in Fig.~4 of Ref.~\cite{bk07}, but suboptimal; an optimal estimator will somewhat improve the constraining capability of a velocity survey compared to the analysis here, so our estimator is conservative. 
As we discuss further in Section~\ref{sec:sys_err} and Section~\ref{sec:results}, we can mitigate the influence of systematic redshift errors by considering a related projected statistic, where the mean pairwise velocity is taken as a function of the angular separation of the two galaxies rather than as a function of their three-dimensional separation. This is given by \begin{equation} {\tilde v}(\theta,a) = \int_0^{\pi_{\rm max}} d\pi_t P(\pi_t | \theta, a) v(r, a) , \label{v_projected} \end{equation} where the line-of-sight comoving separation $\pi_t = d_C(a_2) - d_C(a_1)$ and $P(\pi_t | \theta, a)$ is the probability that a pair has line-of-sight separation $\pi_t$ given that it has an angular separation on the sky $\theta$. We can write the three-dimensional separation $r$ in terms of the angular separation $\theta$ and $\pi_t$ as \begin{equation} r=\sqrt{\theta^2d_M(a)^2 + \pi_t^2}, \label{rdef} \end{equation} where \begin{equation} d_M(a) = \begin{cases} cH_0^{-1}\Omega_k^{-1/2} \sinh\left[ \Omega_k^{1/2}d_C(a) / (cH_0^{-1})\right], & \Omega_k > 0\\ d_C(a), & \Omega_k = 0\\ cH_0^{-1}|\Omega_k|^{-1/2} \sin\left[ |\Omega_k|^{1/2}d_C(a) / (cH_0^{-1})\right], & \Omega_k < 0 \end{cases} \label{dMdef} \end{equation} is the transverse comoving distance to scale factor $a$; here $\Omega_k$ is the effective curvature density, $\Omega_k = 1 - \Omega_m - \Omega_\Lambda$ (see Ref.~\cite{hog99} for a lucid discussion of various distance measures in cosmology). If the redshift difference is small compared to unity, $\pi_t H(z_1) \approx c(z_2 - z_1)$, though we compute the separation in full for all pairs. For a spatially flat universe, $d_M(a) = d_C(a)$. In our case, we always consider separations with $r\ll cH_0^{-1}$ since the signal is only significant on these scales. We therefore always have $d_M(a) \approx d_C(a)$ to good accuracy, and for simplicity we make this assumption throughout the rest of the paper and use comoving distances entirely. 
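Equation (\ref{dMdef}) in code (fiducial $h=0.71$; names are ours). Both curved branches reduce to $d_C$ as $\Omega_k \to 0$, since $\sinh x / x \to 1$ and $\sin x / x \to 1$, which the sketch verifies numerically:

```python
import math

C_H0INV = 2.998e5 / 71.0  # Hubble distance c/H0 in Mpc for h = 0.71

def d_M(d_C, Omega_k):
    """Transverse comoving distance, Eq. (dMdef), from the line-of-sight
    comoving distance d_C (in Mpc) and effective curvature density Omega_k."""
    if Omega_k > 0:
        s = math.sqrt(Omega_k)
        return C_H0INV / s * math.sinh(s * d_C / C_H0INV)
    if Omega_k < 0:
        s = math.sqrt(-Omega_k)
        return C_H0INV / s * math.sin(s * d_C / C_H0INV)
    return d_C

# Flat limit: both branches approach d_C as Omega_k -> 0.
dc = 2000.0
assert d_M(dc, 0.0) == dc
assert abs(d_M(dc, 1e-8) - dc) < 1e-3
assert abs(d_M(dc, -1e-8) - dc) < 1e-3
```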
We consider pairs of galaxies with line-of-sight comoving separations up to a maximum value $\pi_{\rm max}$ (in practice, we will measure redshift-space rather than comoving separations; we consider the impact of this in Section~\ref{subsec:photoz_effect}). The probability of a pair having line-of-sight separation $\pi_t$ given that it has an angular separation on the sky $\theta$ is \begin{equation} P(\pi_t | \theta, a) = \frac{1 + \xi^{\rm gal}(r, a)} {\int_0^{\pi_{\rm max}} d\pi_t \left[ 1 + \xi^{\rm gal}(r, a)\right]} \label{Ppitheta} \end{equation} for $\pi_t < \pi_{\rm max}$ and $P(\pi_t |\theta, a) = 0$ for $\pi_t > \pi_{\rm max}$. An estimator for ${\tilde v}(\theta,a)$ from line-of-sight velocity data is easily obtained by substituting $v^{\rm est}(r,a)$ for $v(r,a)$ in Eq.~(\ref{v_projected}). To compare with data, we bin this statistic in angular separation and redshift, putting each pair in the redshift bin corresponding to the mean photometric redshift of the two galaxies in the pair. In this manner all pairs are included regardless of binning; we have verified that our results remain similar when modest changes are made to projection and binning schemes. Note a correction for scatter in measured redshifts must also be included, as discussed below in the following Section. Changing the maximum separation $\pi_{\rm max}$ considered in Eq.~(\ref{v_projected}) will modify the signal-to-noise ratio in measuring the projected pairwise velocity. A larger $\pi_{\rm max}$ increases the total number of pairs considered, but the signal-to-noise ratio for each pair decreases at larger $\pi_t$ (as measurement errors remain approximately unchanged but signal strength decreases), so their contribution is small. For the purposes of this paper, we adopt a cutoff of $\pi_{\rm max}= 100$ Mpc, which captures the great majority of the pairwise velocity signal. 
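Equation (\ref{Ppitheta}) as a sketch, with the mapping $\pi_t \mapsto r$ of Eq.~(\ref{rdef}) left to the caller; in the unclustered limit $\xi^{\rm gal}=0$ the distribution is uniform on $[0,\pi_{\rm max}]$ (names are ours):

```python
def p_pi_given_theta(pi_t, pi_max, xi_of_r, r_of_pi, n=1000):
    """Eq. (Ppitheta): P(pi_t | theta) = (1 + xi(r)) normalized over
    0 <= pi_t <= pi_max; the caller supplies xi and the pi -> r mapping."""
    if pi_t > pi_max:
        return 0.0
    dpi = pi_max / n
    norm = sum((1.0 + xi_of_r(r_of_pi((i + 0.5) * dpi))) * dpi for i in range(n))
    return (1.0 + xi_of_r(r_of_pi(pi_t))) / norm

# Unclustered limit: with xi = 0 the distribution is uniform, 1/pi_max.
P = p_pi_given_theta(30.0, 100.0, lambda r: 0.0, lambda pi: pi)
assert abs(P - 0.01) < 1e-9
```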
As a test of this effect, we find that including pairs out to separations two times larger only changes the signal-to-noise in measuring the projected pairwise velocity by around 10\%. Based on this, we conclude that including data from pairs with separations larger than 100 Mpc should give only minimal improvements in parameter constraints compared to those presented here. We also impose a minimum separation of 20 Mpc on the pairs we consider, to eliminate any systematic errors related to nonlinear effects. The mean pairwise velocity is a declining function as separation increases from 20 Mpc to 100 Mpc, as shown in Fig.~\ref{fig: photoz}; at smaller scales, it turns over and decreases in linear theory. \section{Systematic Errors} \label{sec:sys_err} \subsection{Photometric Redshift Errors} \label{subsec:photoz_effect} Large imaging surveys will detect so many galaxies that it will not be feasible to obtain spectroscopic redshifts for the vast majority. We must settle for photometric redshift estimates determined from the fluxes measured in the various observed bands. These photometric redshifts will be less accurate than spectroscopic redshifts, and may have complex error distributions. Here we consider a measured redshift distribution described by a Gaussian of standard deviation $\sigma_z$ centered at the true redshift of each object. We neglect a possible photometric redshift bias for two reasons: First, in realistic surveys this bias can be calibrated by comparison with a manageable number of spectroscopic SNIa observations \citep{detf,zhan08,zentner_bhattacharya09}. We emphasize that we utilize a normal distribution for definiteness, but a well-calibrated error distribution is what is necessary to proceed; errors need not be Gaussian in practice. Second, the expected level of photometric redshift bias is likely to be a small effect \citep{pinto05,frieman09} compared to the systematic errors in estimating distances that we consider below.
As a result, we do not explicitly carry a bias through in the equations below, but we will present a test of the impact of a bias in photometric redshifts in Sec.~\ref{sec:results}. In contrast, the photo-z dispersion, $\sigma_z$, essentially smooths the estimated velocity distribution of the observed sample and propagates scatter into galaxy pair separations. The latter effect can cause not only a scatter in inferred cosmological parameter values, but also a systematic shift, which we calculate here. The mean pairwise velocity $v(r,a)$, given in Eq.~(\ref{v12}), assumes that the three-dimensional separation $r$ between the SNeIa or their host galaxy pairs are known accurately; however, there will be non-negligible errors in observed redshifts. Our simple normal-error model for the distribution of the photometric redshift $z_p$, given a true redshift $z$, is \begin{equation} P(z_p|z,\sigma_z)= \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp[-(z-z_p)^2/(2\sigma_z^2)]. \label{photoz} \end{equation} We take the photo-z dispersion to be $\sigma_z=\sigma_{z0}(1+z)$ with $\sigma_{z0}$ ranging from 0.01 to 0.05 \citep{pinto05,detf,wang_etal07,zhan08,frieman09}. We explore the sensitivity of our results to prior knowledge of $\sigma_{z0}$ in \S~\ref{sec:results}. Using Eq.~(\ref{photoz}) and the expression $H(z_p)\pi_{\rm t}=c(z_{p2}-z_{p1})$ for the local Hubble expansion about each SNIa, where $z_{p2}-z_{p1}$ is the photometric redshift difference between a pair of supernovae, we write the probability of obtaining the observed line-of-sight separation $\pi_{\rm obs}$ for a given, true comoving line-of-sight separation $\pi_{\rm t}$ as \begin{equation} P(\pi_{\rm obs}|\pi_{\rm t},\sigma_\pi)= \frac{1}{\sqrt{2\pi\sigma_\pi^2}}\exp\left[-(\pi_{\rm obs}-\pi_{\rm t})^2/(2\sigma_\pi^2)\right] , \label{separation_los} \end{equation} where $\sigma_\pi= \sqrt{2}c\sigma_z/H(z)$ \citep{frieman09}. 
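A sketch of Eq.~(\ref{separation_los}) (function names are ours): even the optimistic $\sigma_{z0}=0.01$ corresponds to a line-of-sight smearing $\sigma_\pi \approx 60$ Mpc at low redshift, comparable to the full 20--100 Mpc range of separations used in the analysis.

```python
import math

C_KMS = 2.998e5  # speed of light in km/s

def H(z, h=0.71, Om=0.25, OL=0.75):
    """Hubble parameter in km/s/Mpc for the flat fiducial model."""
    return 100.0 * h * math.sqrt(Om * (1.0 + z) ** 3 + OL)

def sigma_pi(z, sigma_z0=0.01):
    """Line-of-sight separation error: sigma_pi = sqrt(2) c sigma_z / H(z),
    with sigma_z = sigma_z0 * (1 + z)."""
    return math.sqrt(2.0) * C_KMS * sigma_z0 * (1.0 + z) / H(z)

def p_obs(pi_obs, pi_t, sp):
    """Gaussian kernel of Eq. (separation_los) with dispersion sp."""
    return (math.exp(-((pi_obs - pi_t) ** 2) / (2.0 * sp ** 2))
            / math.sqrt(2.0 * math.pi * sp ** 2))

# ~60 Mpc smearing at z = 0 for sigma_z0 = 0.01.
sp0 = sigma_pi(0.0)
assert 55.0 < sp0 < 65.0
# The kernel is normalized (checked on a 1 Mpc grid out to +-5 sigma).
norm = sum(p_obs(float(x), 0.0, sp0) for x in range(-300, 301))
assert abs(norm - 1.0) < 1e-3
```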
We assume that the photometric redshifts $z_p$, although they include the effects of peculiar motions, give a better measurement of the galaxy line-of-sight separation than the cosmological redshifts $z(\mu)$, which must be determined via a distance measurement with uncertainties on the order of 10\%; hence the line-of-sight positions of SNeIa are estimated using $z_p$. The factor $\sqrt{2}$ in relating $\sigma_\pi$ to $\sigma_z$ accounts for uncertainties in the positions of the two galaxies in a pair, which are added in quadrature. Combining Eqs.~(\ref{v12}), (\ref{v_projected}), and (\ref{separation_los}), we get an expression for the projected, mean pairwise velocity accounting for a significant dispersion in photometric redshifts, \begin{equation} {\tilde v}(\theta,a|\sigma_\pi(a))= \int_0^{\pi_{\rm max}} d\pi_t \int_0^\infty d\pi_{\rm obs} P(\pi_t|\theta,a)P(\pi_{\rm obs}|\pi_{\rm t},\sigma_\pi(a))\, v((\theta^2 d_C(a)^2 + \pi_t^2)^{1/2},a). \label{v12perp} \end{equation} We propose using this statistic as a cosmological probe. We consider only positive values of $\pi_t$, so we count each pair only once. This remains true if in some cases (due to errors) $\pi_t$ scatters below zero (in which case the separation is positive when the two members of the pair are exchanged). Both Eq.~(\ref{separation_los}) and the expression for $\sigma_\pi$ are valid only when $|z_{p2}-z_{p1}|\ll 1$; however, we should always be in this limit. The maximum true separation we consider, $\pi_{\rm max}=100$ Mpc, corresponds to a redshift difference ranging from 0.024 to 0.042 as $z$ ranges from 0 to 1; photo-z errors will broaden the distribution of separations via a Gaussian kernel with dispersion $\sigma=\sqrt{2} \sigma_z=\sqrt{2}\sigma_{z0}(1+z)$, which gives $\sigma \approx 0.057$ at $z=1$ for $\sigma_{z0}=0.02$.
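To illustrate the structure of Eq.~(\ref{v12perp}), here is a schematic numerical sketch. The infall curve $v(r)$, the comoving distance $d_C$, and the uniform line-of-sight kernel are all toy placeholders, not the paper's ingredients (the real calculation uses Eq.~(\ref{v12}) and the kernel $P(\pi_t|\theta,a)$ of Eq.~(\ref{Ppitheta})); the inner $\pi_{\rm obs}$ integral is taken as unity, which holds when $\pi_t \gg \sigma_\pi$:

```python
import math

# Toy ingredients, NOT the paper's: a schematic infall curve v(r) and a
# uniform line-of-sight kernel standing in for P(pi_t | theta, a).
def v_of_r(r):
    # negative (infall), peaking in magnitude near r ~ 40 Mpc
    return -300.0 * (r / 20.0) * math.exp(-r / 40.0)

pi_max = 100.0      # Mpc, as in the text
d_C = 1200.0        # toy comoving distance to the redshift bin [Mpc]

def v_projected(theta, n=2000):
    # trapezoid rule for the outer integral of Eq. (v12perp); the pi_obs
    # integral is ~1 for pi_t >> sigma_pi and is omitted here
    h = pi_max / n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        pi_t = i * h
        r = math.hypot(theta * d_C, pi_t)
        total += w * (1.0 / pi_max) * v_of_r(r) * h
    return total

print(v_projected(0.01), v_projected(0.08))
```

As expected, the projected statistic is negative (infall) and its magnitude declines toward larger angular separations.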
To the degree that the assumption of small $|z_{p2}-z_{p1}|$ is violated, the small distance error induced by this approximation remains negligible, as the pairwise velocity does not vary rapidly on any scales of interest. One additional caveat is that these relations hold only for sufficiently large angular separations, corresponding to comoving separations greater than approximately 5 Mpc, so that nonlinear effects due to velocities within gravitationally bound objects (``fingers of god'') are insignificant. \begin{figure*} \begin{center} \begin{tabular}{cc} \resizebox{85mm}{!}{\includegraphics{plots/vijphotoz.eps}} \resizebox{85mm}{!}{\includegraphics{plots/vijp.eps}} \end{tabular} \caption{ Effect of photo-z errors on the mean pairwise velocity as a function of the three-dimensional separation $r$ (left panel) and on the projected mean pairwise velocity as a function of angular separation $\theta$ (right panel). The solid line represents the spectroscopic sample where the positions of the SNIa host galaxies are known accurately. The dashed line corresponds to a scenario where the dispersion in the photo-z distribution about the true redshift is given by $\sigma_z=\sigma_{z0}(1+z)$ with $\sigma_{z0}=0.01$, whereas the dot-dashed line represents the case when $\sigma_{z0}=0.02$. Projected statistics vary less with $\sigma_{z0}$, so they are less sensitive to systematic errors in this quantity.} \label{fig: photoz} \end{center} \end{figure*} The left panel of Figure~\ref{fig: photoz} shows the effect of photo-z errors on mean pairwise velocity measurements as a function of three-dimensional separation $r$. For a photo-z error $\sigma_{z0}$ in the range 0.01 to 0.02, the overall amplitude of the mean pairwise velocity is suppressed by a factor of 3 to 4 at separations $r \le 50\,{\rm Mpc}/h$ compared to the case where all redshifts are known perfectly. As the separation increases, this suppression becomes less prominent.
This is largely because the three-dimensional separations of the SNeIa are uncertain by an amount given by the photo-z error, which may be large compared to the three-dimensional distances when the separation is small. For pairs which are farther apart (and often have distances dominated by their transverse separations), this smearing effect has much less impact. The right panel of Fig.~\ref{fig: photoz} shows the projected mean pairwise velocity as a function of angular separation for different assumed photo-z errors. Note that because of the integration along the line of sight, changing photo-z errors by a factor of two, from $\sigma_{z0}=0.01$ to $\sigma_{z0}=0.02$, causes only a 10\% to 15\% change in the amplitude of the statistic. \subsection{Evolution in the Luminosities of Type Ia Supernovae} \label{subsec:sys_err} Evolution in intrinsic SNIa properties is one of the most important potential sources of systematic error that could bias estimates of cosmological parameters from pairwise velocities. For instance, the mean intrinsic luminosity of SNeIa could vary significantly over time. If this were unaccounted for, the inferred distance moduli for supernovae would have a systematic error whose amplitude is a function of redshift. Following Ref.~\citep{detf}, we model an error in the distance modulus $\mu$ as \begin{equation} \mu= \mu_{\rm true} + \Delta\mu = \mu_{\rm true} + \mu_L z + \mu_Q z^2, \label{eq:musys} \end{equation} where $\mu_L$ and $\mu_Q$ are parameters quantifying the linear and quadratic dependence of the systematic error on redshift. A systematic error in distance modulus propagates into an error in the inferred cosmological redshift $z(\mu)$ in Eq.~(\ref{vlos}), while a systematic error in the photometric redshift directly affects $z_{\rm meas}$. 
Propagating errors through Eqs.~(\ref{mudef}) and (\ref{dLz}) gives \begin{equation} \Delta z = \frac{0.46\Delta\mu(1+z)}{f(z)}, \label{deltaz} \end{equation} where we define the function \begin{equation} f(z) \equiv 1 + \frac{(1+z)^2}{d_L(z)H(z)} . \label{fzdef} \end{equation} The resulting error in the line-of-sight velocity for a given supernova is \begin{equation} \Delta v_{\rm los} = \frac{0.46c\Delta \mu}{f(z_{\rm meas})}. \label{deltav} \end{equation} Using the measured redshift instead of the cosmological redshift in this expression gives an error on the order of a few percent at redshifts of interest. This systematic shift can be applied directly to the estimator Eq.~(\ref{v_est}) to evaluate the impact systematic errors have upon a given supernova velocity catalog. Alternately, we can apply this systematic error to Eq.~(\ref{eq:vij}) to get an estimate of the size of the resulting shift in the pairwise velocity statistic. Consider a pair of supernovae with measured redshifts $z_1$ and $z_2$. Each has its three-dimensional velocity systematically shifted in the line-of-sight direction by an amount $\Delta v_{\rm los}$; the component of this shift along the vector connecting the two galaxies is $\Delta v_{\rm los} \pi_t/r$ where $\pi_t$ is their separation along the line of sight and $r$ is the distance between the galaxies. Their pairwise velocity acquires a systematic shift given by \begin{equation} \Delta v(r,a) = \frac{c\pi_t}{r}\left(\frac{\Delta z_2}{1+z_2} - \frac{\Delta z_1}{1+z_1}\right) = \frac{0.46c\pi_t}{r}\left(\frac{\Delta\mu_2}{f(z_2)} - \frac{\Delta\mu_1}{f(z_1)}\right) \simeq \frac{0.46 H(z_1) \pi_t^2}{f(z_1)r}(\mu_L+2z_1\mu_Q), \label{deltavij} \end{equation} where for the last expression we have used the fact that the difference in the second expression is dominated by the difference in distance modulus, rather than the much smaller difference in $f(z)$.
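The factor $0.46=\ln 10/5$ in Eq.~(\ref{deltaz}) comes from inverting $\mu = 5\log_{10}d_L + {\rm const}$. The following sketch (toy flat $\Lambda$CDM values, not the paper's fiducial cosmology) checks the propagation numerically against a direct derivative of $\mu(z)$:

```python
import math

c = 2.998e5    # km/s
H0 = 70.0      # km/s/Mpc (toy value)
Om = 0.3

def H(z):
    return H0 * math.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def d_C(z, n=4000):
    # comoving distance [Mpc] by trapezoid rule
    h = z / n
    s = 0.5 * (c / H(0.0) + c / H(z))
    for i in range(1, n):
        s += c / H(i * h)
    return s * h

def mu(z):
    # distance modulus; d_L in Mpc gives mu = 5 log10(d_L) + 25
    return 5.0 * math.log10((1 + z) * d_C(z)) + 25.0

z = 0.5
dmu = 0.01     # toy distance-modulus error

# numerical inversion: Delta z = Delta mu / (dmu/dz)
eps = 1e-4
dmudz = (mu(z + eps) - mu(z - eps)) / (2 * eps)
dz_numeric = dmu / dmudz

# Eq. (deltaz), with 0.46 = ln(10)/5 and Eq. (fzdef) with c restored
dL = (1 + z) * d_C(z)
f = 1 + (1 + z) ** 2 * c / (dL * H(z))
dz_formula = (math.log(10) / 5) * dmu * (1 + z) / f

print(dz_numeric, dz_formula)   # agree to numerical precision
```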
In replacing both redshifts by $z_1$ in this expression, we have assumed that the redshift difference for a given pair is small compared to unity, which will be the case for any pair separations for which the mean pairwise velocity is significant. For a given value of $r$ and $a=(1+z_1)^{-1}$, the only quantity which varies between different pairs is the line-of-sight separation term, $\pi_t^2$, whose average over random pairs is $r^2/2$. Averaging the final expression in Eq.~(\ref{deltavij}) over all pairs with a given separation replaces $\pi_t^2/r$ by $r/2$ and gives the systematic error in the mean pairwise velocity for pairs with comoving separation $r$ and mean redshift $z$ as \begin{equation} \Delta v(r,z) = 0.23 r\frac{H(z)(\mu_L +2z\mu_Q)}{f(z)} . \label{deltavpair} \end{equation} For the systematic error in the projected statistic, we substitute Eq.~(\ref{deltavpair}) into Eq.~(\ref{v_projected}), which yields \begin{equation} \Delta {\tilde v}(\theta,z) = 0.23\frac{H(z)}{f(z)}(\mu_L + 2\mu_Q z)\int_0^{\pi_{\rm max}} d\pi_t P(\pi_t | \theta, z) \sqrt{\theta^2 d_C(z)^2 + \pi_t^2} , \label{deltatildevpair} \end{equation} with $P(\pi_t | \theta, z)$ given by Eq.~(\ref{Ppitheta}). This expression is used in Sec.~\ref{sec:results} to estimate how small this systematic error must be so that it does not dominate the statistical errors in mean pairwise velocity measurements of dark energy parameters. \section{Statistical Errors} \label{sec: stat_err} The line-of-sight velocity for a supernova is inferred by combining a redshift measurement and a distance estimate obtained from a brightness measurement. Here we assume Gaussian random errors for both the redshift and brightness measurements, and find the resulting statistical error in the mean pairwise velocity.
We also give an expression for the sample variance (sometimes referred to as cosmic variance) error in this quantity, which results from the fact that its intrinsic value in the limited volume we probe may not match the universal mean. \subsection{Apparent Magnitude and Redshift Errors} \label{subsec: mea_err} For a given supernova, we assume normal errors of $\sigma_\mu$ and $\sigma_z$ on the distance modulus and the measured redshift. Propagating through Eq.~(\ref{vlos}) using Eq.~(\ref{deltaz}) and adding the resulting errors in quadrature gives \begin{equation} \delta v_{\rm los}(z)^2 = \frac{0.21 c^2}{f(z)^2}\sigma_\mu^2 + \frac{c^2}{(1+z)^2}\sigma_z^2. \label{deltavlos} \end{equation} In evaluating the first term, we have assumed that $\sigma_z$ is small compared to $z_{\rm meas}$, which should be a good approximation for photometric redshifts of SNeIa \citep{pinto05,frieman09}. This allows us to neglect the effect of errors in $z_{\rm meas}$ on the value of $f(z)$. Note that in actual measurements the errors in photometric redshifts may be significantly non-Gaussian, requiring a more sophisticated treatment; here we explore Gaussian errors to give an approximate guideline for the relevant levels of uncertainty. Gravitational lensing may increase the dispersion in the measured distance moduli of SNeIa beyond that of intrinsic luminosity scatter and random measurement errors. In the weak lensing limit (convergence $\kappa \ll 1$), the dispersion due to lensing is \citep{bernardeau_etal97,valageas00,dodelson_vallinotto06,zentner_bhattacharya09} \begin{equation} \label{eq:lens} \sigma_{\rm lens}^2(z) \approx 1.69\Omega_m^2H_0^2\int_0^z dz' \frac{W^2(z',z)}{H(z')} \int dk\,k P(k,z'), \end{equation} where $W(z',z)=H_0 d_A(z') d_A(z',z)/d_A(z)$, $d_A(z)$ is the angular diameter distance to redshift $z$, and $d_A(z',z)$ is the angular diameter distance between redshifts $z'$ and $z$.
The quantity $P(k,z)$ is the matter power spectrum; we evaluate it using the numerical fits of \citet{smith_etal03}. We thus have a total standard error on the distance modulus for a single supernova composed of three pieces: \begin{equation} \sigma_\mu^2 = \sigma_{\rm obs}^2 + \sigma_{\rm SN}^2 + \sigma_{\rm lens}^2 , \label{sigmamu_pieces} \end{equation} where $\sigma_{\rm obs}$ is the random scatter due to measurement noise and $\sigma_{\rm SN}$ is the intrinsic scatter in supernova luminosity. Where not otherwise specified, we take $\sigma_{\rm SN} = 0.1$ independent of redshift, following recent estimates \cite{detf}, and assume $\sigma_{\rm obs} \ll \sigma_{\rm SN}$, which should be satisfied for upcoming large surveys like LSST. To obtain the standard error in the mean pairwise velocity, we begin by assuming that each individual line-of-sight velocity has a normally-distributed error with standard deviation $\delta v_{\rm los}$. Then for any data bin, applying standard propagation of errors to Eq.~(\ref{v_est}) gives: \begin{equation} \delta v^{\rm est} = \sqrt{2} \delta v_{\rm los} \biggl(\sum_{\text{pairs}}p^2_{ij}\biggr)^{-1/2}, \label{deltav_est1} \end{equation} assuming that fractional errors in the $p_{ij}$ are modest; we expect this to hold, as these values can be evaluated using redshift distances, rather than the comparatively uncertain distance measurements that drive the uncertainty in individual speeds. For each pair, $p_{ij}^2\simeq \cos^2\varphi$, where $\varphi$ is the angle between the comoving line-of-sight vector and the vector connecting the comoving supernova positions. This angle will be distributed randomly for each pair; the expected mean value of $p_{ij}^2$ over a large number of pairs is $0.5$.
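The error budget above can be sketched end to end. Assuming toy cosmological values (flat $\Lambda$CDM with $h=0.7$, $\Omega_m=0.3$; not the paper's fiducial numbers), the code below evaluates the single-object velocity error of Eq.~(\ref{deltavlos}), verifies by Monte Carlo that $\langle p_{ij}^2\rangle \simeq 0.5$ for a uniformly distributed angle $\varphi$, and recovers the $2\,\delta v_{\rm los}/\sqrt{N}$ scaling of the bin error:

```python
import math, random

c = 2.998e5; H0 = 70.0; Om = 0.3        # toy cosmology [km/s, km/s/Mpc]

def H(z):
    return H0 * math.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def d_C(z, n=2000):                      # comoving distance [Mpc], trapezoid
    h = z / n
    s = 0.5 * (c / H(0.0) + c / H(z))
    for i in range(1, n):
        s += c / H(i * h)
    return s * h

def f(z):                                # Eq. (fzdef), with c restored
    dL = (1 + z) * d_C(z)
    return 1 + (1 + z) ** 2 * c / (dL * H(z))

def dv_los(z, sigma_mu, sigma_z0):       # Eq. (deltavlos); 0.21 = (ln10/5)^2
    sigma_z = sigma_z0 * (1 + z)
    return math.sqrt(0.21 * (c * sigma_mu / f(z)) ** 2
                     + (c * sigma_z / (1 + z)) ** 2)

dv = dv_los(0.5, sigma_mu=0.1, sigma_z0=0.02)   # thousands of km/s per object

# Monte Carlo: <p_ij^2> = <cos^2 phi> = 0.5 for phi uniform, as assumed above
random.seed(3)
N = 100_000
mean_p2 = sum(math.cos(random.uniform(0.0, math.pi)) ** 2
              for _ in range(N)) / N

# Eq. (deltav_est1) with Sum p^2 = N <p^2> reproduces Eq. (deltavpairwise)
err_sum = math.sqrt(2) * dv * (N * mean_p2) ** -0.5
err_simple = 2 * dv / math.sqrt(N)
print(dv, mean_p2, err_sum, err_simple)
```

With these toy numbers the per-object error is several thousand km/s, far above the signal; averaging over $N\sim 10^5$ pairs beats it down to tens of km/s, which is why the statistic is measurable at all.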
Thus the standard error in the mean pairwise velocity in a particular redshift and separation bin is \begin{equation} \delta v^{\rm est}(r,a) = 2\frac{\delta v_{\rm los}(a)}{\sqrt{N(r,a)}}, \label{deltavpairwise} \end{equation} where $N(r,a)$ is the total number of pairs used to estimate the mean pairwise velocity in a given redshift bin with mean scale factor $a$ and separation bin with mean separation $r$. The standard error on the projected statistic can be expressed as a sum over pairs in the same way as $v(r,a)$, except the sum is over $N(\theta,z)$ pairs in a given angular separation bin about $\theta$ instead of a given real-space separation $r$. The same calculation applies, except that now the average value of $p_{ij}^2$ for a bin in $\theta$ will not be 0.5. For a given pair, the projector $p_{ij} = \pi_t/r$, where $\pi_t$ is the comoving radial separation of the pair (as defined in section \ref{sec: theory}). Analogous to Eq.~(\ref{deltav_est1}), the error on the projected statistic in a bin can be written as \begin{equation} \delta v^{\rm est} = \sqrt{2} \delta v_{\rm los} \biggl(\sum_{\text{pairs}}\frac{\pi_t^2}{r^2}\biggr)^{-1/2}. \label{deltavtilde_est1} \end{equation} The sum must be evaluated by integrating over all the pairs in a given angular bin, giving \begin{equation} \delta{\tilde v}(\theta,z) = \delta v_{\rm los}(z)\left[\frac{N(\theta,z)}{2}\int_0^{\pi_{\rm max}} d\pi_t P(\pi_t|\theta, z)\frac{\pi_t^2}{\theta^2 d_C(z)^2 + \pi_t^2}\right]^{-1/2}, \label{deltavpairproj} \end{equation} with $P(\pi_t | \theta, z)$ given by Eq.~(\ref{Ppitheta}). For a bin in angle covering a range from $\theta_{\rm low}$ to $\theta_{\rm high}$ and a mean redshift bin with a range from $z_{\rm low}$ to $z_{\rm high}$, we can derive the number of pairs in this bin from Eq.~(\ref{snrate}). Consider a supernova at redshift $z_1$.
Any second supernova which lies in the angular bin will be contained in a sky region with area $2\pi(\cos\theta_{\rm low} - \cos\theta_{\rm high}) \simeq \pi(\theta_{\rm high}^2 - \theta_{\rm low}^2)$, where the second expression is valid for small angles. The second supernova at redshift $z_2 \geq z_1$ must satisfy $z_{\rm low} \leq (z_1 + z_2)/2 \leq z_{\rm high}$ for the pair to be in the redshift bin, and $c(z_2 - z_1)/H(z_1) < \pi_{\rm max}$ for the comoving line-of-sight separation to be less than $\pi_{\rm max}$. These conditions are equivalent to \begin{equation} z_{\rm low} - \frac{\pi_{\rm max}H(z_{\rm low})}{2c} < z_1 < z_{\rm high} ~~~{\rm and} \label{z1range} \end{equation} \begin{equation} 2z_{\rm low} - z_1 < z_2 < {\rm min}\left[2z_{\rm high} - z_1, \, z_1 + \frac{\pi_{\rm max} H(z_1)}{c}\right]. \label{z2range} \end{equation} Then neglecting the effect of any spatial clustering of supernovae, the total number in the bin is simply \begin{equation} N(\theta_{\rm low},\theta_{\rm high};z_{\rm low},z_{\rm high}) \simeq \pi(\theta_{\rm high}^2 - \theta_{\rm low}^2)\int dz_1\frac{d^2n}{dz\,d\Omega}(z_1) \int dz_2 \frac{d^2n}{dz\,d\Omega}(z_2) , \label{Nbin} \end{equation} where the limits on the $z$ integrals are given in Eqs.~(\ref{z1range}) and (\ref{z2range}); note the $z_2$ integral must be performed first since its limits depend on $z_1$. The function $d^2n/dz d\Omega$ is just Eq.~(\ref{snrate}) normalized to the total number of supernovae assumed per unit solid angle on the sky. \subsection{Sample Variance} \label{subsec:cos_err} In addition to the measurement errors for individual galaxy velocities, there is an additional uncertainty in comparing estimates of the mean pairwise velocity to models, resulting from the fact that we only sample a finite volume in which the realized average pairwise velocity may differ from the mean taken over the entire Universe. 
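Before turning to sample variance, the pair-count integral of Eq.~(\ref{Nbin}) above can be sketched as a nested quadrature. The redshift distribution below is a toy stand-in for Eq.~(\ref{snrate}) (normalized to roughly the LSST-like $3\times10^6$ SNe per steradian), and we enforce $z_2 \geq z_1$ explicitly so each pair is counted once:

```python
import math

c = 2.998e5          # km/s
H0 = 70.0            # km/s/Mpc (toy value)
Om = 0.3
pi_max = 100.0       # Mpc

def H(z):
    return H0 * math.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def dn_dzdOmega(z):
    # toy redshift distribution per steradian, peaking near z ~ 0.5;
    # a schematic stand-in for Eq. (snrate)
    return 1.0e8 * z ** 2 * math.exp(-z / 0.25)

def npairs_per_sr(th_lo, th_hi, z_lo, z_hi, n=300):
    """Pairs per steradian of survey area, following Eq. (Nbin); multiply
    by the survey solid angle for the total count."""
    annulus = math.pi * (th_hi ** 2 - th_lo ** 2)   # small-angle area [sr]
    z1_min = z_lo - pi_max * H(z_lo) / (2 * c)      # Eq. (z1range)
    h1 = (z_hi - z1_min) / n
    total = 0.0
    for i in range(n + 1):
        z1 = z1_min + i * h1
        w1 = 0.5 if i in (0, n) else 1.0
        # Eq. (z2range), additionally enforcing z2 >= z1 (ordered pairs)
        z2_min = max(2 * z_lo - z1, z1)
        z2_max = min(2 * z_hi - z1, z1 + pi_max * H(z1) / c)
        if z2_max <= z2_min:
            continue
        h2 = (z2_max - z2_min) / n
        inner = sum((0.5 if j in (0, n) else 1.0) * dn_dzdOmega(z2_min + j * h2)
                    for j in range(n + 1)) * h2
        total += w1 * dn_dzdOmega(z1) * inner * h1
    return annulus * total

narrow = npairs_per_sr(0.010, 0.015, 0.4, 0.6)
wide = npairs_per_sr(0.015, 0.030, 0.4, 0.6)
print(narrow, wide)   # the wider annulus contains more pairs
```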
Here we give an expression for the covariance between the projected mean pairwise velocity measured in different redshift and angular separation bins resulting from this effect (generally referred to as sample or cosmic variance). \begin{widetext} Consider a mean pairwise velocity statistic binned in pair separation, $r$, and scale factor, $a$. For the three-dimensional mean pairwise velocity, Eq.~(\ref{v12}), the sample covariance between two bins in separation and scale factor $[r,a]_m$ and $[r,a]_n$ for a survey volume $V_\Omega$ can be written as \cite{bk07} \begin{eqnarray} C(r_m,r_n;a_m,a_n)&=&\frac{32\pi H(a_m)a_m H(a_n)a_n}{9V_\Omega (1+\xi^{\rm gal}(r_m,a_m))(1+\xi^{\rm gal}(r_n,a_n))}\nonumber\\ &&\qquad\qquad\qquad\qquad \times\left(\frac{d \ln D_a}{d \ln a}\right)_{a_m}\left(\frac{d \ln D_a}{d \ln a}\right)_{a_n} \int dk k^2 |P(k)|^2j_1(kr_m)j_1(kr_n). \label{C_sample} \end{eqnarray} We now integrate along the line-of-sight accounting for the photo-z errors and obtain an expression for the sample covariance of the projected mean pairwise velocity as a function of perpendicular separation, \begin{eqnarray} C(\theta_m,\theta_n;a_m,a_n)&=& \int_0^\infty d\pi^{(m)}_{\rm obs} \int_0^\infty d\pi^{(m)}_{\rm t} P(\pi^{(m)}_{\rm t}|\theta_m,a_m) P(\pi^{(m)}_{\rm obs}|\pi^{(m)}_{\rm t})\nonumber\\ &&\qquad\qquad\qquad \times\int_0^\infty d\pi^{(n)}_{\rm obs} \int_0^\infty d\pi^{(n)}_{\rm t} P(\pi^{(n)}_{\rm t}|\theta_n,a_n) P(\pi^{(n)}_{\rm obs}|\pi^{(n)}_{\rm t}) C(r_m,r_n;a_m,a_n), \label{C_proj} \end{eqnarray} using Eqs.~(\ref{Ppitheta}) and (\ref{separation_los}). \end{widetext} The total statistical error covariance matrix is the sum of the sample covariance, Eq.~(\ref{C_proj}), and the statistical error, Eq.~(\ref{deltavpairproj}): \begin{equation} C_{\rm total}(\theta_m,\theta_n;a_m,a_n)=C(\theta_m,\theta_n;a_m,a_n) +\delta_{mn} \delta{\tilde v}^2(\theta_m,a_m) . 
\label{C_vpij_t} \end{equation} In the following Section, we use this total covariance matrix to estimate the observability of SNeIa peculiar velocities and their utility to cosmology. \section{Results} \label{sec:results} \subsection{The Signal-To-Noise Ratio of Projected Mean Pairwise Velocity Measurements} \label{subsec:StoN} \begin{figure*} \begin{center} \begin{tabular}{c} \resizebox{85mm}{!}{\includegraphics{plots/StoNsigz0=0.01300sqdeg.eps}} \resizebox{85mm}{!}{\includegraphics{plots/StoNsigz0=0.02300sqdeg.eps}}\\ \resizebox{85mm}{!}{\includegraphics{plots/StoNsigz0=0.03300sqdeg.eps}} \resizebox{85mm}{!}{\includegraphics{plots/StoNspec300sqdeg.eps}} \end{tabular} \caption{ The signal-to-noise per angular bin for the projected mean pairwise velocity for different redshift bins as a function of angular separation $\theta$, for a catalog with $3\times 10^5$ total host galaxies over 300 square degrees of sky, and a distance modulus scatter for each host galaxy of $\sigma_{\rm SN} = 0.1$ plus the scatter due to lensing magnification. The pairs are binned by their photometric redshifts and the central values of the redshift bins are shown; each redshift bin has a width of $\Delta z=0.2$. The maximum angle for each redshift bin corresponds to the angle subtended by 100 Mpc at the bin's mean redshift; the range in angles from 0 to the maximum angle is divided into 10 angular bins. The top panels show the signal-to-noise for a photo-z normal error given by $\sigma_z= \sigma_{z0}(1+z)$, where $\sigma_{z0}= 0.01$ (left) and $\sigma_{z0}= 0.02$ (right). The lower left panel shows the signal-to-noise for $\sigma_{z0}= 0.03$, and the lower right panel shows the ``spectroscopic'' limit with $\sigma_{z0}=0.001$. } \label{fig:StoN} \end{center} \end{figure*} As seen in Fig.~\ref{fig: photoz}, the projected velocity statistic given by Eq.~(\ref{v_projected}) is far less sensitive to photometric redshift errors than the non-projected pairwise velocity.
We will therefore use this statistic both to estimate the signal-to-noise of pairwise velocity measurements and to determine the resulting constraints on cosmological parameters. The simple pairwise velocity should yield comparable or better constraints in the limit of small photometric redshift errors, but the results will be more sensitive to $\sigma_z$. Figure~\ref{fig:StoN} shows the signal-to-noise ratio per angular bin for measurements of the projected mean pairwise velocity as a function of angular separation $\theta$, for our fiducial survey giving $3\times 10^5$ total host galaxies over 300 square degrees of sky, and a distance modulus scatter for each host galaxy of $\sigma_{\rm SN} = 0.1$ plus the scatter due to lensing magnification. Pairs are binned in 6 redshift bins equally spaced between $z=0$ and $z=1.2$. For each redshift bin, 10 bins in angle are used, equally spaced for angles ranging from $\theta=0$ up to the angle subtended by our maximum pair separation of 100 Mpc at the mean redshift for the redshift bin. The maximum angle considered therefore decreases as the redshift increases, causing the curves in Fig.~\ref{fig:StoN} to truncate at differing values of $\theta$. The mean pairwise velocity is detectable at a wide range of angular separations and redshifts. The top left panel of Figure~\ref{fig:StoN} shows that such a measurement with a photometric redshift error of $\sigma_z=0.01(1+z)$ yields a signal-to-noise ratio between 2 and 9 over a range in angular scales for all but the most extreme redshift bins with $z > 0.8$. The redshift distribution of observed SNeIa peaks around $z=0.5$ in our LSST-like model, so the closer we get to that redshift range, the more host galaxy pairs we average over and the better we can measure velocity statistics.
Note also that although the number of pairs increases at larger separation, the amplitude of the mean pairwise velocity decreases, yielding an overall decrease in the signal-to-noise for bins with larger separations. The top right panel of Fig.~\ref{fig:StoN} shows the signal-to-noise ratio for a photometric redshift error of $\sigma_z=0.02(1+z)$. After this doubling of the photometric redshift error, the signal-to-noise decreases by around 30\%. Even for $\sigma_z=0.03(1+z)$ (lower left panel of Fig.~\ref{fig:StoN}), we still reach a signal-to-noise of around 3 for the redshift bins at $z=0.5$ and $z=0.7$. The lower right panel shows a best-case scenario, assuming that spectroscopic redshifts are obtained for each supernova host galaxy; for simplicity, we define a spectroscopic redshift to have $\sigma_{z}=0.001(1+z)$. This redshift error is generally obtainable only from spectroscopy of the hosts (rather than the SNe themselves), primarily because of the large breadth of SNIa spectral features, but also due to the peculiar velocities of SNe with respect to their galaxy's center, which can reach a few hundred km s$^{-1}$. Spectroscopic redshifts for large samples of hosts (though likely not all, since many will be fainter than the SNe) would be quite feasible with a 5000-fiber, large field of view multi-object spectrograph like that currently proposed for the BigBOSS project \citep{schlegel09}. If supernova samples cover 300 square degrees, as assumed above, a minimum of 43 BigBOSS pointings would be required to cover this sky region, yielding more than 200,000 redshifts; larger samples can be obtained by revisiting each pointing with different fiber placements. The proposed BigBOSS survey would use the Kitt Peak 4-meter telescope for only 100 nights per year; such a supernova project would require only a small fraction of the remaining time available.
In this ``spectroscopic limit,'' the signal-to-noise in measuring the mean pairwise velocity generally improves by around a factor of two compared to the $\sigma_z=0.01(1+z)$ case. \subsection{Parameter Space and Formalism} \label{formalism} Now we investigate the constraints on dark energy parameters from a SNIa projected mean pairwise velocity measurement, and assess the complementarity of these constraints to performing the luminosity distance test based on the same data. For the sake of simplicity, we perform a Fisher matrix analysis similar to those in Refs.~\citep{bk07, zentner_bhattacharya09}. In order to compute constraints on $\Omega_\Lambda$, $w_0$ and $w_a$, we marginalize over the remainder of the parameter space, consisting of the parameters $\Delta_\zeta$, $n_S$, and $h$. We also treat $\sigma_{z0}$, describing the photometric redshift dispersion, as a parameter since the binned mean pairwise velocity signal depends on this quantity. In addition to the marginalized constraints on $\Omega_{\Lambda}$, $w_0$, and $w_a$, we quantify the additional constraining power of pairwise velocities by evaluating the quantity $[\sigma(w_p)\sigma(w_a)]^{-1}$ for comparison to the DETF summary tables \citep{detf}. We refer to this as the ``Figure of Merit'' (FoM) for convenience, although in the DETF report this term refers to a slightly different quantity (the inverse area of the $95\%$ confidence limit ellipse in the $w_p-w_a$ plane) which is proportional to $[\sigma(w_p)\sigma(w_a)]^{-1}$. The derived parameter $w_p$ is the equation of state at the ``pivot'' (i.e. best-constrained) redshift, defined as $w_p=w_0+(1-a_p)w_a$ with $a_p= 1+[F^{-1}]_{w_0w_a}/[F^{-1}]_{w_aw_a}$. 
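These definitions can be made concrete with a toy two-parameter Fisher matrix for $(w_0,w_a)$ (illustrative numbers only, not our survey forecasts). The sketch computes marginalized errors as $\sigma(p_\alpha)=([F^{-1}]_{\alpha\alpha})^{1/2}$, evaluates the pivot and the Figure of Merit, and adds a Gaussian prior as $1/\sigma^2$ on the diagonal:

```python
import math

def invert2(F):
    # covariance C = F^{-1} for a 2x2 Fisher matrix
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return [[ F[1][1] / det, -F[0][1] / det],
            [-F[1][0] / det,  F[0][0] / det]]

F = [[40.0, 12.0],          # toy Fisher matrix for (w0, wa)
     [12.0,  5.0]]

C = invert2(F)
sigma_w0 = math.sqrt(C[0][0])            # marginalized standard errors
sigma_wa = math.sqrt(C[1][1])

# pivot: a_p = 1 + [F^-1]_{w0 wa}/[F^-1]_{wa wa}, with w_p = w0 + (1-a_p) wa
a_p = 1.0 + C[0][1] / C[1][1]
var_wp = C[0][0] + (1 - a_p) ** 2 * C[1][1] + 2 * (1 - a_p) * C[0][1]
sigma_wp = math.sqrt(var_wp)             # equals sqrt(C00 - C01^2/C11)
FoM = 1.0 / (sigma_wp * sigma_wa)        # DETF-like figure of merit

# a Gaussian prior of width 0.5 on wa enters as 1/0.5^2 on the diagonal
F[1][1] += 1.0 / 0.5 ** 2
sigma_w0_prior = math.sqrt(invert2(F)[0][0])
print(a_p, sigma_wp, FoM, sigma_w0, sigma_w0_prior)
```

Note that the prior on $w_a$ also tightens the marginalized error on $w_0$, since the two parameters are correlated.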
The Fisher matrix for the projected mean pairwise velocity can be written as \begin{equation} F_{\alpha\beta}= \sum_{m,n}\frac{\partial {\tilde v}(m)}{\partial p_\alpha}C_{\rm total}^{-1}(mn)\frac{\partial {\tilde v}(n)}{\partial p_\beta} , \label{fisher} \end{equation} where we have abbreviated the projected mean pairwise velocity in the $n$th angular separation and redshift bin as ${\tilde v}(n)$, $C_{\rm total}(mn)$ is the total covariance matrix between bins $m$ and $n$ given by Eq.~(\ref{C_vpij_t}), and $p_{\alpha}$ indexes the parameters in the vector ${\bf p}$. The Fisher matrix provides a local estimate of the parameter covariance, so the standard error on parameter $p_\alpha$ marginalized over the other parameters is $\sigma(p_{\alpha}) = ([F^{-1}]_{\alpha \alpha})^{1/2}$ (no summation implied). Prior constraints on any of the parameters ${\bf p}$ which are normally distributed are simple to incorporate. If parameter $p_\alpha$ has a Gaussian prior with standard error $\sigma_\alpha$, we simply add the diagonal matrix $\rm{diag}(1/\sigma_\alpha^2)$ to the Fisher matrix $F_{\alpha\beta}$. Priors with non-normal statistical distributions require a more detailed statistical framework rather than a simple Fisher matrix approximation. \subsection{Statistical Constraints on Dark Energy Parameters} \label{constraints} In computing constraints on the dark energy parameters $\Omega_\Lambda$, $w_0$, and $w_a$, we first assume a reasonable spectroscopic calibration sample of 1500 SNeIa, comprising 250 supernovae in each redshift bin spread uniformly over the 6 redshift bins spanning $0<z<1.2$. The fractional error on the photo-z dispersion, $\delta \sigma_{z0}/\sigma_{z0}$, in this case is around $1/\sqrt{500}$, or approximately $5\%$ (assuming Gaussian errors).
We therefore incorporate a Gaussian prior on $\sigma_{z0}$ centered on the true value and with $\sigma = 0.05 \sigma_{z0}$; however, as we show below in Fig.~\ref{fig: photoz_fisher}, the pairwise velocity statistic is relatively insensitive to the choice of a prior on $\sigma_{z0}$, so this choice should not significantly affect our results, even if the actual error on $\sigma_{z0}$ is much larger. We compute the standard errors obtainable on the dark energy parameters using a range of supernova distance modulus dispersions $\sigma_{\rm SN}$ and photometric redshift dispersions $\sigma_{z0}$. We consider three possible values of the intrinsic supernova absolute magnitude dispersion given by $\sigma_{\rm SN}=0.05$, $\sigma_{\rm SN}=0.1$, and $\sigma_{\rm SN}=0.2$. For each value of $\sigma_{\rm SN}$, we explore four possible values of photometric redshift dispersion, $\sigma_{z0}=0.001$ (the ``spectroscopic limit''), $\sigma_{z0}=0.01$, $\sigma_{z0}=0.02$, and $\sigma_{z0}=0.03$. The optimistic but reasonable supernova luminosity distance test assumed in the DETF report corresponds to $\sigma_{\rm SN}=0.1$ and $\sigma_{z0}=0.01$, so these choices constitute a sensible baseline for comparison to other techniques. The strength of the dark energy constraints obtained is relatively sensitive to the amount of prior information assumed. First, we can make the same assumptions used by the Dark Energy Task Force \citep{detf}. They assume constraints on all parameters (including covariances) at the level expected for measurements of the microwave background power spectrum by the Planck satellite. For this, we employ the Planck Fisher Matrix provided by the DETF. In addition, the DETF assumes an 11\% Gaussian prior on the value of $h$ \cite{hst}. Note that a spatially flat universe is {\it not} assumed.
We also assume no systematic error on either redshift or distance modulus measurements; limits on these systematics required to attain the statistical error levels presented here are discussed below. The results are given in Table~\ref{tab:vijconstraint_detf}. For the nominal DETF survey case, mean pairwise velocities give a standard error on $w_0$ of $\sigma(w_0)=0.45$ and a standard error on $w_a$ of $\sigma(w_a)=0.98$. This constraint on $w_0$ is comparable to the DETF Stage IV constraints from ground-based optical baryon acoustic oscillations or galaxy cluster counts, while not as good as those from Stage IV supernova luminosity distances. For $w_a$, mean pairwise velocity constraints are significantly better than the optical survey-based BAO projection; slightly weaker than the pessimistic BAO projections for space-based or radio observations and for the optimistic galaxy cluster projection; and halfway between the optimistic and pessimistic DETF supernova luminosity distance projections. However, all of these methods trail the Stage IV weak lensing projections in constraining power. Among the dark energy probes resulting from a large ground-based optical survey like LSST, mean pairwise velocities compare well with both the baryon acoustic oscillation and the supernova luminosity distance probes \cite{zentner_bhattacharya09}. To quantify this, we consider the improvement in dark energy parameters obtained by adding the mean pairwise velocity probe to the supernova luminosity distance probe resulting from the same sample. The mean pairwise velocity can be measured using the supernova data from a large survey telescope with little additional cost compared to simply constraining dark energy using the resulting supernova Hubble diagram.
Figure~\ref{fig: planck_dl_vel} shows joint constraints on the dark energy parameters combining projected peculiar velocity measurements and the SNIa luminosity distance test, using the same priors as Table~\ref{tab:vijconstraint_detf}. The left panel shows the $1\sigma$ constraint in the $w_0-\Omega_\Lambda$ plane and the right panel shows the constraint in the $w_a-\Omega_\Lambda$ plane, after marginalizing over the remainder of parameter space. Incorporating peculiar velocity information significantly reduces the size of the ellipses in the dark energy parameter space: the marginalized constraint on $\Omega_\Lambda$ improves by a factor of 1.7, on $w_0$ by a factor of 1.2, and on $w_a$ by a factor of 1.5, giving an overall improvement in the Figure of Merit by a factor of 1.8. (As a point of comparison, corresponding constraints with no priors from other measurements are included in Fig.~\ref{fig: photoz_fisher}.) Note that unlike the case of peculiar velocity measurements, the constraints derived from the SNIa luminosity distance are sensitive to the error in the mean redshift of a bin, and hence depend much more on the amount of prior knowledge of the photo-z distribution \citep{huterer04,zentner_bhattacharya09}, as well as being much more sensitive to intrinsic SNIa luminosity evolution. \begin{figure*} \begin{center} \begin{tabular}{cc} \resizebox{85mm}{!}{\includegraphics{plots/fisherw0odeallomegak.eps}} \resizebox{85mm}{!}{\includegraphics{plots/fisherwaodeallomegak.eps}} \end{tabular} \caption{ Joint dark energy constraints obtainable from an LSST-like future supernova survey, combining constraints from the supernova luminosity distance test, priors on the Hubble parameter \citep{hst}, and constraints from Planck with those obtainable from supernova velocity statistics. A flat universe is not assumed.
A fiducial value for the photometric redshift error of $\sigma_{z0}=0.01$ is assumed, with a Gaussian prior on $\sigma_{z0}$ of 5\%. The Fisher matrices for the Planck priors and the SNIa luminosity distance priors are obtained from the DETF report \citep{detf}. The luminosity distance Fisher matrix represents the DETF LSST supernovae optimistic (LST-o) survey. The blue (dark) shaded region shows the constraint obtainable from the supernova luminosity distance test only. The red (grey) region shows the constraint when priors from a Planck survey are combined with the distance test. The green (innermost, light shaded) region shows the joint constraint combining the distance test, a Planck prior and mean peculiar velocity measurements for the SNIa host galaxies. } \label{fig: planck_dl_vel} \end{center} \end{figure*} We have also considered statistical dark energy constraints from a more constraining, but still realistic, set of priors. In particular, a measurement of the Hubble parameter based on an improved, NGC 4258-calibrated distance ladder with an estimated overall error of 5\% has recently been reported \cite{rie09}. Furthermore, requiring the dark energy probe itself in combination with microwave background data to determine the geometry of the universe is likely overly restrictive. Measurements of the baryon acoustic oscillation scale from the Sloan Digital Sky Survey Data Release 7, combined with WMAP 5-year data, give a constraint on the curvature parameter of $\Omega_k = -0.013\pm 0.007$, even for a very general cosmological model which allows both a nonflat universe and a value of $w_0$ different from -1 \cite{per10}. Additionally, since a flat universe is an unstable fixed point for standard cosmological evolution, we have an overwhelming theoretical prejudice for $\Omega_k=0$ to high precision. Therefore, a prior assumption of a flat universe is both reasonable and strongly suggested by data. 
Table~\ref{tab:vijconstraint} gives the standard errors on the dark energy parameters for a flat universe, with Gaussian priors for $h$ (5\%), $\Delta_\zeta$ (5\%), and $n_S$ (1\%), the latter two being current limits from WMAP 7-year data \cite{kom10}. The assumption of a flat universe and a tighter prior on $h$ lead to much stronger dark energy constraints than do DETF priors. With these priors, a DETF-assumed supernova sample with $\sigma_{\rm SN}=0.1$ and $\sigma_{z0}=0.01$ gives a measurement of $\Omega_\Lambda$, $w_0$ and $w_a$ with standard errors of 0.024, 0.27, and 0.41, respectively, using the mean pairwise velocity alone. For comparison, the constraints on $w_0$ for Stage IV experiments computed in the DETF report (but for the original set of priors) are worse for clusters, comparable for baryon acoustic oscillations, and better for the supernova Hubble diagram. Our constraint on $w_a$, on the other hand, is better than for any of the Stage IV experiments aside from the optimistic weak lensing scenarios. Of course, the constraining power of other probes will also increase with the more restrictive set of priors we assume in Table~\ref{tab:vijconstraint}; a direct comparison with these other methods is therefore beyond the scope of this paper. Our primary point is that under optimistic, but reasonable, assumptions, SNIa peculiar velocities can be useful by themselves and at the very least can serve as a valuable complementary probe and cross-check for systematic errors, while requiring little additional investment. However, note that Table~\ref{tab:vijconstraint} shows that broader photo-z distributions and/or larger intrinsic SNIa dispersions can quickly diminish the returns on SNIa peculiar velocities. This calculation also suggests the potential constraining power of pairwise velocity statistics from future survey observations.
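The Gaussian priors quoted above enter the forecast in a standard way: a prior of width $\sigma_p$ on a single parameter is imposed by adding $1/\sigma_p^2$ to that parameter's diagonal Fisher element before inverting. A minimal sketch with toy numbers (the $2\times 2$ matrix and the prior width are illustrative, not this paper's matrices):

```python
import numpy as np

# Toy 2-parameter Fisher matrix; a Gaussian prior of width sigma_p on
# one parameter is imposed by adding 1/sigma_p**2 to its diagonal entry.
F = np.array([[50., 12.],
              [12.,  4.]])

def errors_with_prior(F, idx, sigma_prior):
    """Marginalized errors after adding a Gaussian prior on parameter idx."""
    Fp = F.copy()
    Fp[idx, idx] += 1.0 / sigma_prior**2
    return np.sqrt(np.diag(np.linalg.inv(Fp)))

free   = np.sqrt(np.diag(np.linalg.inv(F)))    # no prior
primed = errors_with_prior(F, 0, 0.05)         # e.g. a tight prior on h
print(free, primed)
```

Because the parameters are correlated, a prior on one parameter tightens the marginalized errors on all of them, which is why the tighter $h$ prior strengthens the dark energy constraints above.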
If a large photometric supernova survey were combined with follow-up spectroscopic redshifts for supernova host galaxies, the standard error in the redshift could be reduced by a factor of 10 to $\sigma_{z0}=0.001$, corresponding to the first column of Table~\ref{tab:vijconstraint}. In this case, the error on $w_0$ shrinks to $\sigma(w_0) \simeq 0.10$ and the error on $w_a$ to $\sigma(w_a) \simeq 0.16$. Understanding Type-Ia supernovae well enough to push $\sigma_{\rm SN}$ down by a factor of 2 to $\sigma_{\rm SN}=0.05$ would reduce the error on $w_a$ by another factor of two, to 0.08. Few other proposed probes have comparable potential to constrain $w_a$. \begin{table} \begin{center} \begin{tabular}{| c | ccc | ccc | ccc | ccc |} \hline \phantom{a} & \multicolumn{3}{| c |}{$\sigma_{z0}=0.001$} & \multicolumn{3}{| c |}{$\sigma_{z0}=0.01$} & \multicolumn{3}{| c |}{$\sigma_{z0}=0.02$} & \multicolumn{3}{| c |}{$\sigma_{z0}=0.03$}\\ \hline $\sigma_{\rm SN}$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$\\ \hline 0.05 & \ \ 0.016 & \ \ 0.22 & \ \ 0.36 & \ \ 0.032 & \ \ 0.28 & \ \ 0.48 & \ \ 0.05 & \ \ 0.43 & \ \ 0.89 & \ \ 0.059 & \ \ 0.78 & \ \ 1.7 \\ 0.1 & \ \ 0.039 & \ \ 0.30 & \ \ 0.55 & \ \ 0.056 & \ \ 0.45 & \ \ 0.98 & \ \ 0.094 & \ \ 0.62 & \ \ 2.52 & \ \ 0.13 & \ \ 1.46 & \ \ 2.81\\ 0.2 & \ \ 0.051 & \ \ 0.42 & \ \ 0.72 & \ \ 0.074 & \ \ 0.59 & \ \ 1.84 & \ \ 0.18 & \ \ 1.36 & \ \ 4.71 & \ \ 0.24 & \ \ 2.87 & \ \ 6.14\\ \hline \end{tabular} \end{center} \caption{ Dark energy parameter constraints derived from mean pairwise velocity statistics. Photometric redshifts are assumed to be normally distributed about the true $z$, with $\sigma_z=\sigma_{z0}(1+z)$.
We show results for $\sigma_{z0}=0.001$, 0.01, 0.02, and 0.03, and three different values for the uncertainty in supernova distance moduli, $\sigma_{\rm SN}=0.05$, 0.1 and 0.2. The fiducial values of the dark energy parameters are $\Omega_\Lambda=0.75$, $w_0= -1$ and $w_a= 0$. We assume zero systematic errors related to SNIa evolution, i.e.\ $\mu_L=\mu_Q=0$. We assume the same priors used in the DETF report: Planck satellite priors from its projected measurement of the microwave background power spectrum (using the Fisher matrix supplied by the DETF) and a Gaussian prior on $h$ with a standard error of 11\%. This table does not assume a flat spatial geometry for the universe. } \label{tab:vijconstraint_detf} \end{table} \begin{table} \begin{center} \begin{tabular}{| c | ccc | ccc | ccc | ccc |} \hline \phantom{a} & \multicolumn{3}{| c |}{$\sigma_{z0}=0.001$} & \multicolumn{3}{| c |}{$\sigma_{z0}=0.01$} & \multicolumn{3}{| c |}{$\sigma_{z0}=0.02$} & \multicolumn{3}{| c |}{$\sigma_{z0}=0.03$}\\ \hline $\sigma_{\rm SN}$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$\\ \hline 0.05 & \ \ 0.004 & \ \ 0.047 & \ \ 0.08 & \ \ 0.012 & \ \ 0.16 & \ \ 0.23 & \ \ 0.024 & \ \ 0.34 & \ \ 0.49 & \ \ 0.049 & \ \ 0.65 & \ \ 1.03 \\ 0.1 & \ \ 0.009 & \ \ 0.10 & \ \ 0.16 & \ \ 0.024 & \ \ 0.27 & \ \ 0.41 & \ \ 0.046 & \ \ 0.62 & \ \ 0.94 & \ \ 0.1 & \ \ 1.42 & \ \ 1.96\\ 0.2 & \ \ 0.022 & \ \ 0.28 & \ \ 0.41 & \ \ 0.061 & \ \ 0.63 & \ \ 0.89 & \ \ 0.1 & \ \ 1.48 & \ \ 1.93 & \ \ 0.2 & \ \ 2.86 & \ \ 4.18\\ \hline \end{tabular} \end{center} \caption{ Same as Table~\ref{tab:vijconstraint_detf}, but we assume $\Omega_k = 0$ and instead of Planck priors we assume Gaussian priors with standard errors of 5\% on $h$ and $\Delta_\zeta$ and 1\% on $n_S$ 
(comparable to errors from current measurements). } \label{tab:vijconstraint} \end{table} \subsection{Systematic Error in Distance Modulus} \begin{table} \begin{center} \begin{tabular}{| c | ccc |} \hline $\mu_L=\mu_Q$ & \ \ $\Omega_\Lambda$ & \ \ $w_0$ & \ \ $w_a$ \\ \hline 0.01/$\sqrt{2}$ & \ \ 10.2\% & \ \ 4.1\% & \ \ 20.2\%\\ 0.03/$\sqrt{2}$ & \ \ 20.3\% & \ \ 10.0\% & \ \ 41.0\%\\ 0.05/$\sqrt{2}$ & \ \ 40.9\% & \ \ 37.5\% & \ \ 80.1\%\\ \hline \end{tabular} \end{center} \caption{ The ratio of the parameter bias due to systematic error to the statistical uncertainties on these parameters. A photometric redshift distribution with dispersion $\sigma_z=0.01(1+z)$ is assumed. We assume $\mu_L=\mu_Q$ to compute the systematic bias. Then we set $\mu_L=\mu_Q=0$ and compute the statistical uncertainty and report the ratio of systematic bias to statistical errors $\Delta p/\sigma_{p}$, where $p=\Omega_\Lambda$, $w_0$, or $w_a$. We assume $\Omega_k=0$ and assume Gaussian priors with standard error of 5\% on $h$ and $\Delta_\zeta$ and 1\% on $n_s$. } \label{tab:bias} \end{table} The potential statistical sensitivity of any dark energy probe can only be realized if systematic errors can be controlled to a level where their effect on cosmological parameters is small compared to the statistical errors. For the supernova data set considered here, systematic errors may affect both observables: the distance modulus and the photometric redshift. This section considers distance modulus systematics, while the following section analyzes the effect of redshift errors. Section~\ref{subsec:sys_err} gives a simple phenomenological model for the effect of SNIa evolution with redshift, in terms of the parameters $\mu_L$ and $\mu_Q$. The resulting bias in cosmological parameters can be estimated using a Fisher matrix approach.
The bias in parameter $p_\alpha$ can be written as \begin{equation} \delta p_{\alpha}= \sum_{\beta}[F^{-1}]_{\alpha\beta}\sum_{m,n}\Delta{\tilde v}(m) C_{\rm total}^{-1}(mn)\frac{\partial {\tilde v}(n)}{\partial p_\beta} \label{fisherbias} \end{equation} where $\Delta {\tilde v}$, obtained by substituting Eq.~(\ref{deltavpair}) for $v(r,a)$ in Eq.~(\ref{v_projected}), is the systematic shift in the observable $\tilde v$ due to the systematic error characterized by nonzero values of $\mu_L$ and $\mu_Q$. We calculate the bias in each parameter due to SNIa evolution assuming a photometric redshift distribution with spread $\sigma_z=0.01(1+z)$ and the evolution model given by Eq.~(\ref{eq:musys}). We can then compare the systematic bias with the statistical errors on dark energy parameters assuming $\mu_L=\mu_Q=0$, as computed in Table~\ref{tab:vijconstraint}. The ratios of the bias of the dark energy parameters to their statistical errors are reported in Table~III for several representative choices of $\mu_L$ and $\mu_Q$. For reference, the DETF took evolution in SNIa luminosity with $\mu_L=\mu_Q=0.01/\sqrt{2}$ as their optimistic scenario. We find that the maximum bias incurred in $\Omega_{\Lambda}$ and $w_0$ is less than 40\% of the statistical error on these parameters as long as $\mu_L=\mu_Q\le 0.05/\sqrt{2}$ (five times larger than the DETF optimistic systematic error). For $w_a$, the systematic bias is 40\% of the statistical error for $\mu_L=\mu_Q\le 0.03/\sqrt{2}$, and increases to 80\% of the statistical error for $\mu_L=\mu_Q= 0.05/\sqrt{2}$. If the actual unrecognized evolution of SNIa luminosity is similar to that assumed in the DETF report, the resulting systematic bias in dark energy parameters should be insignificant compared to the statistical error. Note that these comparisons are conservative, since they use the statistical errors computed with our more restrictive priors rather than those of the DETF report.
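The bias formula of Eq.~(\ref{fisherbias}) reduces to a few lines of linear algebra once the derivative matrix and covariance are in hand; a sketch with randomly generated toy inputs (all dimensions and values below are illustrative, not this paper's forecast quantities):

```python
import numpy as np

# Sketch of the Fisher bias formula: F is the parameter Fisher matrix,
# C_total the data covariance, dv_dp the derivative matrix, and delta_v
# the systematic shift in the observable.  Toy inputs throughout.
rng = np.random.default_rng(0)
n_data, n_par = 20, 3
dv_dp   = rng.normal(size=(n_data, n_par))   # d v~(n) / d p_beta
Cinv    = np.eye(n_data) / 0.1               # inverse covariance C_total^{-1}
F       = dv_dp.T @ Cinv @ dv_dp             # Fisher matrix
delta_v = 0.01 * rng.normal(size=n_data)     # systematic shift Delta v~

# delta p_alpha = [F^{-1}]_{alpha,beta} sum_{m,n} Dv(m) Cinv(m,n) dv(n)/dp_beta
bias = np.linalg.inv(F) @ (dv_dp.T @ Cinv @ delta_v)
print(bias)
```

The ratios reported in Table~III are obtained by dividing such a bias vector, computed from the actual $\Delta\tilde v$, by the marginalized statistical errors.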
The larger statistical errors with the DETF priors admit substantially larger systematic errors. \subsection{Systematic Errors in Photometric Redshifts} We have also tested how a possible bias $\Delta z_p$ in the photo-z distribution might impact the dark energy constraints obtainable from pairwise velocity statistics. If $\Delta z_p$ is not a strong function of redshift (i.e., it does not vary considerably within one of our redshift bins of width $\delta z \simeq 0.2$), then the bias affects both galaxies in each pair in approximately the same manner. Because the mean pairwise velocity relies on the difference between the two velocities, nearly all of the effects of a photo-z bias cancel. The residual is a small misestimation of the location of the redshift bin, which translates into a small error in cosmological parameters. For example, a bias in photometric redshifts of $\Delta z_p \approx 0.002(1+z)$ degrades the constraints on cosmological parameters by less than 2\% of the statistical errors. This stands in stark contrast to the strong dependence of the luminosity distance test on photometric redshift biases (e.g., \citep{huterer04,zentner_bhattacharya09}) and the similar sensitivity of probes such as weak gravitational lensing to biased photometric redshifts (e.g., \citep{hearin_etal10}). The signal we measure, the redshift-binned projected mean pairwise velocity of Eq.~(\ref{v12perp}), depends on the scatter in photometric redshifts, so we must also estimate the systematic error due to uncertainty in the photometric redshift dispersion. We assume that the difference between photometric and spectroscopic redshifts is normally distributed; in reality this distribution is likely more complex. The results here therefore rest on a simple effective model for the distribution of photometric redshifts.
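This effective photo-z model is straightforward to simulate; a minimal sketch, assuming Gaussian scatter with dispersion $\sigma_z=\sigma_{z0}(1+z)$ as in the text (the redshift range and sample size are illustrative):

```python
import numpy as np

# The effective photo-z model assumed in the text: Gaussian scatter
# about the true redshift with dispersion sigma_z = sigma_z0 * (1 + z).
rng = np.random.default_rng(1)
sigma_z0 = 0.01

z_true = np.linspace(0.1, 1.0, 100_000)
z_phot = z_true + rng.normal(scale=sigma_z0 * (1.0 + z_true))

# The scatter of the normalized residuals recovers sigma_z0.
scatter = np.std((z_phot - z_true) / (1.0 + z_true))
print(scatter)
```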
Figure~\ref{fig: photoz_fisher} and Table~\ref{tab:photoz} show marginalized statistical constraints on dark energy parameters from the mean pairwise velocity only, under three very different assumptions about the photometric redshift error. The blue (gray) and the black shaded regions show the two extreme cases. The blue shaded area shows the $1\sigma$ constraint when we assume no prior knowledge of the uncertainty in the photo-z error and allow $\sigma_{z0}$ to be determined from the same data used to constrain cosmology. The black region indicates the constraints when $\sigma_{z0}$ is known exactly. We emphasize that this does not mean that the photometric redshift equals the true redshift: there is still a non-negligible dispersion in photometric redshifts in this case, but we have assumed that the photometric redshift distribution is well understood, perhaps due to calibration with several thousand spectra \citep{zentner_bhattacharya09}. The red (light shaded) region represents the case when the prior on $\sigma_{z0}$ is a Gaussian centered at the true value with standard deviation equal to its fiducial value, $\sigma_{z0}=0.01$. Constraints on $w_0$, $w_a$ and $\Omega_\Lambda$ change by only about 10\% between the case where $\sigma_{z0}$ is uncertain at the 100\% level and one where we assume a perfectly calibrated photometric redshift distribution. This results from the fact that the mean pairwise velocity is proportional to the redshift difference between galaxies in a pair, while photometric redshift errors do not correlate with the velocity we are trying to measure. Fig.~\ref{fig: photoz_fisher} shows that even weak prior knowledge of the photo-z distribution yields constraints comparable to a scenario where the photo-z error distribution is known exactly.
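In the Fisher formalism, these prior cases correspond to standard operations on the nuisance parameter $\sigma_{z0}$: knowing it exactly amounts to deleting its row and column before inverting, while leaving it free amounts to inverting the full matrix and reading off the cosmological sub-block. A sketch with a toy $3\times 3$ matrix whose last entry plays the role of $\sigma_{z0}$ (all numbers illustrative):

```python
import numpy as np

# Fixing versus marginalizing a nuisance parameter in the Fisher
# formalism (toy 3x3 matrix: two cosmological parameters + sigma_z0).
F = np.array([[40., 10.,  5.],
              [10., 12.,  3.],
              [ 5.,  3.,  2.]])

# sigma_z0 known exactly: delete its row and column, then invert.
fixed = np.sqrt(np.diag(np.linalg.inv(F[:2, :2])))

# sigma_z0 free: invert the full matrix, read off the 2x2 sub-block.
marg = np.sqrt(np.diag(np.linalg.inv(F)))[:2]

# An intermediate Gaussian prior would add 1/sigma_prior**2 to F[2, 2].
print(fixed, marg)
```

Marginalized errors are never smaller than the fixed-parameter ones; the text's finding is that for pairwise velocities the gap between the two is small.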
\begin{figure*} \begin{center} \begin{tabular}{cc} \resizebox{85mm}{!}{\includegraphics{plots/fisherw0ode500sqdegDETF.eps}} \resizebox{85mm}{!}{\includegraphics{plots/fisherwaode500sqdegDETF.eps}} \end{tabular} \caption{ Dark energy parameter constraints from mean pairwise SNIa velocities (in the absence of complementary cosmological probes), for the same survey as in Fig.~\ref{fig:StoN}. The left and the right panels show the 1$\sigma$ contour in the $w_0-\Omega_{\Lambda}$ and the $w_a-\Omega_{\Lambda}$ planes. Photometric redshift errors of $\sigma_z=0.01(1+z)$ are assumed. The black shaded region represents the case when the photo-z distribution is known accurately. The red (light shaded) region applies a prior such that the uncertainty in the photometric redshift error, $\delta \sigma_{z0}$, is equal to the value of $\sigma_{z0}$; this is highly conservative. The blue (gray) shaded region shows the constraint when we have no prior knowledge about the photo-z error distribution (e.g., zero supernovae with spectroscopic redshifts). } \label{fig: photoz_fisher} \end{center} \end{figure*} \begin{table} \begin{center} \begin{tabular}{| c | ccc |} \hline $\sigma(\sigma_{z0})$ & \ \ $\sigma(\Omega_\Lambda)$ & \ \ $\sigma(w_0)$ & \ \ $\sigma(w_a)$ \\ \hline ${\rm no\, prior}$ & \ \ 0.046 & \ \ 0.29 & \ \ 0.62\\ ${\rm prior\, (100\%\, error\, in\, \sigma_{z0})}$ & \ \ 0.023 & \ \ 0.25 & \ \ 0.37\\ ${\rm prior\, (zero\, error\, in\, \sigma_{z0})}$ & \ \ 0.018 & \ \ 0.21 & \ \ 0.31\\ \hline \end{tabular} \end{center} \caption{ Impact of prior information about photo-z distributions on the dark energy parameter constraints derived from mean pairwise velocity statistics (calculated in the absence of complementary cosmological probes). The photometric redshift of an SN is assumed to be normally distributed about its true value with $\sigma_z=\sigma_{z0}(1+z)$; for our standard scenario we take $\sigma_{z0}=0.01$. 
The fiducial cosmology has $\Omega_\Lambda=0.75$, $w_0= -1$ and $w_a= 0$. We assume $\Omega_k=0$ and assume Gaussian priors with standard error of 5\% on $h$ and $\Delta_\zeta$ and 1\% on $n_s$.} \label{tab:photoz} \end{table} \section{Discussion and Future Prospects} \label{sec: discuss} With vastly increased numbers of Type Ia supernova detections on the horizon, a new statistical probe of dark energy using supernova peculiar velocities will be possible. The Dark Energy Task Force, when considering future supernova measurements, made the optimistic but reasonable assumptions that individual supernovae will have a photometric redshift determined with a standard error of $0.01$ at redshift $z=0$, and a distance modulus determined with an error of 0.1. If these levels are attained for the nominal $3\times 10^5$ SNeIa which will be detected in a targeted supernova survey area by the LSST, the resulting dark energy constraints from the mean pairwise velocities of these supernovae are interestingly good, comparable to projections for a variety of Stage IV techniques. In particular, pairwise peculiar velocities alone give a slightly stronger dark energy constraint than the optimistic projection for an optical baryon acoustic oscillation probe, and constraints which are towards the optimistic ends of the galaxy cluster abundance and optical supernova Hubble diagram probes. Having another independent method for constraining dark energy is invaluable, since all of these measurements will likely be limited by systematic error control. Comparison of inferred dark energy parameters from multiple independent experiments is even more important than the combined statistical power of multiple measurements. In addition, the science return from mean pairwise velocity measurements comes essentially ``for free,'' as it uses the same data sets from which supernova luminosity distance measurements will be built.
An extension of these observations which can significantly improve the strength of dark energy constraints is the addition of spectroscopic redshifts. Surveys like Pan-STARRS and LSST will provide only photometric redshifts, and the sheer number of objects they observe makes obtaining spectroscopic redshifts for even small subsets of the total objects a massive challenge. Given the numbers used in this paper, LSST will detect on the order of 100 new SNeIa per night of the survey. The supernovae themselves are transient and widely spread over the survey area, limiting the total number which can be observed simultaneously. Obtaining immediate redshift follow-up for all of these objects would be a large logistical challenge, even if a dedicated telescope were available. An alternative is to obtain redshifts of host galaxies after their SNe have faded; this can be done much more efficiently, as many host galaxies in a particular field of view could be targeted simultaneously with multi-object spectrographs. Baryon acoustic oscillation observations, in particular, are pioneering the development of very large fiber spectrographs which can obtain thousands of redshifts simultaneously. The challenge for this strategy is that many of the host galaxies will be at redshifts above 0.5, and many of the host galaxies themselves are dim compared to their supernovae. Host galaxy followup would require a large telescope and large amounts of observing time. The spectrograph proposed for the BigBOSS survey, which has been designed to obtain high-throughput spectroscopy of 5000 galaxies at a time over a 7 square degree field of view using the 4m Mayall telescope at Kitt Peak and the Blanco telescope at CTIO \citep{schlegel09}, would be well suited for this task. Such a spectroscopic survey of supernova hosts would be a major undertaking, but could lead to highly competitive dark energy constraints and leverages instruments and data already planned for other purposes. 
Another possible avenue for improvement is better standardization of SNIa intrinsic luminosities. Here our baseline assumption, along with the DETF, is that SNIa distance moduli will be known with a standard error of around 0.1. It is an open question whether we eventually will understand the SNIa explosion mechanism in enough detail, and have sufficient observational information, to model some portion of this scatter and reduce the effective random error. Magnification due to gravitational lensing provides an additional source of scatter, which can be partly understood due to its strongly non-Gaussian distribution, but for our nominal model survey we are not limited by lensing scatter. The marginalized constraint on $w_a$, the most challenging dark energy parameter to measure, can be improved by a factor of two if the scatter in the intrinsic supernova distance modulus is halved. A mean pairwise velocity measurement for $3\times 10^5$ supernovae with spectroscopic redshifts and an intrinsic distance modulus scatter of 0.05 would constrain $w_a$ with a standard error of 0.08 using our set of current prior constraints. The statistical power of any given dark energy measurement is only half of the story, as all of these measurements are likely to be heavily dependent on systematic error control. Because of its nature as a differential measurement, the mean pairwise velocity technique offers favorable prospects for controlling systematic errors. Differential measurements have long been exploited in measurements of the cosmic microwave background fluctuations precisely for their systematic error advantages. In particular, we have demonstrated that several obvious systematic error sources are not likely to dominate the dark energy constraints. 
First, uncertainty about the level of scatter in photometric redshifts about their true values has only a weak effect on dark energy constraints, and mild priors obtainable from modest spectroscopic calibration efforts give results that are nearly the same as exact knowledge of the photometric redshift scatter. We have not considered non-Gaussian errors in photometric redshifts, but scatter characterized at the level of the Gaussian errors considered here is unlikely to induce significantly larger systematic errors. Second, a bias in the photometric redshift distribution has very little effect on our constraints, as long as the bias varies slowly with redshift, because a constant redshift bias does not affect the pairwise velocities. This is in marked contrast to both the supernova luminosity distance and weak lensing techniques. Both of these widely discussed routes to dark energy constraints are very sensitive to photometric redshift biases \citep{huterer04,zentner_bhattacharya09}, where a redshift bias can mimic a shift in dark energy parameters. Third, a systematic error in distance modulus due to unrecognized evolution in mean supernova luminosity with redshift will be a small effect provided the magnitude of the error is within a factor of 3 of that considered in the DETF report. This potential source of error can also be addressed by testing the rich information in supernova spectra and time series at different redshifts for any evidence of evolution in intrinsic supernova properties. While detailed modeling of potential systematic errors is required to understand any particular experiment, it is plausible that the systematic errors associated with mean pairwise velocities will be substantially less severe than those of other leading techniques for probing dark energy. We also note that mean pairwise velocities can be used to constrain gravitational explanations for the accelerating expansion of the universe.
This technique has the advantage of probing structure growth over a wide range in redshift, while also being sensitive to the expansion rate; the comparison between these two quantities is the key to constraining alternate gravity models \citep{jai08,hu07,lin09}. Pairwise velocities from a much smaller sample of galaxy clusters, with more precise velocities obtained via the kinematic Sunyaev-Zeldovich effect, have already been shown to offer potentially interesting constraints on modifications of gravity \citep{kb09}. The pairwise velocity statistic offers a particularly simple route to a probe of modified gravity. In linear perturbation theory, the evolution of the growth factor $D(a)$ is given to a very good approximation by $d\ln D/d\ln a = \Omega_m(a)^\gamma$, where $\gamma$ is nearly constant and takes the value $\gamma\approx 0.55$ for general relativity \citep{linder05}; see Ref.~\cite{jus09} for a highly accurate approximation to $D(a)$. Other gravitation theories can have different values of $\gamma$; for example, DGP gravity \citep{dgp00} has $\gamma=0.68$ \citep{linder_cahn08}. Examining Eq.~(\ref{v12}), we see that the mean pairwise velocity (on the left side) depends linearly on $d\ln D/d\ln a$, as well as on $H(a)$, the (linear regime) galaxy bias factor, and correlation function information. Other cosmological tests, such as the supernova Hubble diagram, will directly constrain $H(a)$, while correlation functions will be measurable directly from the data set used. The linear clustering bias of host galaxies can be constrained in a number of ways; e.g., by direct comparison of galaxy correlation functions to the matter power spectrum derived from gravitational lensing; with galaxy three-point correlation functions \citep{mcbride10} or angular bispectra \citep{verde02_1}; or (if a large spectroscopic sample is available) by combining redshift-space distortions \citep{linder08,white08,percival08} with mean pairwise velocity statistics.
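The growth-rate parametrization, and how a fractional error on the measured growth rate propagates to an error on $\gamma$, can be sketched numerically (fiducial densities, scale-factor bins, and the fractional error are illustrative values, not this paper's results):

```python
import numpy as np

# Growth-rate parametrization d ln D / d ln a = Omega_m(a)^gamma, with
# gamma ~ 0.55 for GR and ~ 0.68 for DGP; all numbers illustrative.
def omega_m(a, Om0=0.25, OL0=0.75):
    """Matter density parameter at scale factor a in flat LCDM."""
    return Om0 * a**-3 / (Om0 * a**-3 + OL0)

a = np.array([0.6, 0.7, 0.8, 0.9])        # scale-factor bins
f_gr  = omega_m(a)**0.55                  # general relativity
f_dgp = omega_m(a)**0.68                  # DGP gravity

# A fractional error eps on the growth rate propagates to
# sigma_gamma = eps / |ln Omega_m(a)|.
eps = 0.16
sigma_gamma = eps / np.abs(np.log(omega_m(a)))
print(f_gr, f_dgp, sigma_gamma)
```

Note that $\sigma_\gamma$ grows toward low redshift, where $\Omega_m(a)$ approaches its present value and $|\ln\Omega_m(a)|$ is largest.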
Assuming these other quantities will be measured with errors which are small compared to our velocity errors, a measurement of $v(r,a)$ will provide an estimate of $d\ln D/d\ln a$ in several bins in $a$, which can then be used to constrain $\gamma$. If for each bin in $a$ we have independent measurements of $v(r,a)$ in five radial bins with a signal-to-noise ratio of around 3 in each bin (see Fig.~\ref{fig:StoN}), then the amplitude of the function $v(r,a)$ can be constrained with a fractional error of around $0.33/\sqrt{5}$ or 0.16. Assuming this is the dominant error, the fractional error on $d\ln D/d\ln a$ is also around 0.16. By propagation of errors, the resulting error on $\gamma$ is then $0.16/|\ln\Omega_m(a)|$, and hence ranges from 0.15 to 0.45, depending on the redshift bin. This would give approximately a 25\% to 80\% measurement of $\gamma$ in each redshift bin (assuming $\gamma$ takes its general relativistic value), providing a significant constraint on many theoretical alternatives to general relativity. With spectroscopic redshifts, these constraints would improve by a factor of three due to the increase in signal-to-noise ratio in each angular bin; then the best redshift bin alone might provide a 10\% measurement of $\gamma$, comparable to projected constraints from weak lensing \citep{hearin_zentner09}. Prospects for constraining modified gravity with a large supernova survey will be explored in more detail elsewhere. Dark energy is simultaneously one of the most important problems in physics today, and one of the most elusive to address observationally. Mean pairwise velocities extracted from a large survey of SNeIa can provide an important arrow in the dark energy quiver and should be considered alongside any of the other methods now being actively pursued. If simply piggybacked on existing plans for supernova luminosity distance tests, pairwise velocities offer independent dark energy constraints which are competitive with other methods.
If augmented by spectroscopic redshift followup observations, pairwise velocities alone may provide important constraints on dark energy, with constraints on $w_a$ of 0.1 or better. Perhaps most importantly, this technique provides not only statistical power but potentially strong control of systematic errors. Additionally, it allows tests of the nature of gravity which cannot be obtained using distance measurements alone. We anticipate that Type Ia supernova peculiar velocity statistics will be in the vanguard of dark energy constraints over the coming years. \begin{acknowledgments} The authors would like to thank Michael Wood-Vasey for useful discussions about the potential of imaging surveys to improve cosmological constraints from Type Ia supernovae and Daniel Holz for discussion about the lensing of supernovae. An anonymous referee provided a number of helpful suggestions to clarify various points, and prompted discovery of a factor-of-two mistake in computing statistical errors. SB was partly supported by the Mellon Predoctoral Fellowship at the University of Pittsburgh during this project and the LDRD program of Los Alamos National Lab. ARZ is funded by the University of Pittsburgh, the National Science Foundation through grant AST 0806367, and by the Department of Energy. AK is supported by NSF grant AST 0807790. JAN is funded by the University of Pittsburgh, the National Science Foundation through grant AST 0806732, and by the Department of Energy. \end{acknowledgments}
\section{Introduction} Due to the incomplete Fourier plane sampling of radio synthesis observations, deconvolution is essential for making high fidelity images. The traditional CLEAN \cite{Hogbom} algorithm and its variants are widely used for this deconvolution. Several limitations arise when clean is applied to a typical image. First, the centroids of point sources will not exactly match pixel coordinates on a regular grid. Second, some sources might be extended, thus requiring more than one clean component. Therefore, since its invention, several studies of the performance of clean and its limitations have been conducted. In \cite{Schwarz}, for instance, the convergence and residual errors in terms of a least squares fit to the Fourier plane data are discussed. The work of \cite{Briggs} (ch. 6) focuses mainly on clean components and their analogy to Fourier components of the sky image. The fundamental limitations of image pixelization (or of having a regular grid of clean components) are studied in \cite{PIX}, especially in the case where sources are located off a pixel center. In this paper, we focus on improving the deconvolution of bright, extended sources that are barely resolved. In order to arrive at our results, we use statistical estimation theory to derive some fundamental limits of clean in deconvolving such sources. Compared to the work of \cite{PIX}, which gives numerical bounds, we derive analytical bounds on its performance. Moreover, compared to \cite{Briggs}, which takes a deterministic approach to study clean component placement, we take a statistical approach to derive the Cram\'er-Rao lower bound \cite{Kay,Behery}. The limitations of deconvolving an extended source with a set of clean components have been overcome by using clean components that have different scale sizes. For instance, \cite{MSCLEAN},\cite{Leshem10} give a comprehensive overview of this approach and comparisons with similar existing approaches.
However, using multi-scale components would still be limited by the resolution limit of the interferometer in the case of barely resolved sources. In order to overcome this deficiency, we consider using a two dimensional orthonormal basis instead of a set of clean components. As a real example of this technique, we select one such basis, the shapelet basis \cite{SHP1}, and apply this technique to Westerbork Synthesis Radio Telescope (WSRT) low frequency observations. Shapelets have been extensively used in astronomical image processing applications \cite{SHP1},\cite{SHP4}, including deconvolution of radio interferometric data \cite{SHP3}. In this paper, we use shapelets for high fidelity imaging when clean based algorithms perform poorly. \section{Mathematical preliminaries} In this section we derive some fundamental limitations of clean and explain why an orthonormal basis can improve on clean components in modeling extended structure. For simplicity, we first consider a one dimensional image and its corresponding Fourier plane (axis). The image axis is given by $l$ and the corresponding visibility axis is $u$. The corresponding units are radians and wavelengths, respectively. \subsection{Interferometric imaging} Let us consider observing a point source at the origin, whose brightness is given by $\delta(l)$ (the Dirac delta function). The visibilities correspond to the Fourier transform of $\delta(l)$, which is $1$. We only observe at a set of discrete points on the $u$ axis, which is equivalent to sampling with the weighted sampling function \begin{equation} \Pi(u)=\sum_i w(u_i) \delta(u-u_i) \end{equation} where $w(u_i)$ corresponds to the weight we assign to the $i$-th sampling point, which is at $u_i$ on the $u$ axis. For the remainder, we take all weights to be unity.
The observed image $I(l)$ is then \begin{eqnarray} I(l)&=&\int \Pi(u) \exp(j2\pi l u)\,{\rm d}u\\\nonumber &=& \sum_k \exp(j2\pi l u_k) \end{eqnarray} which is the point spread function (PSF). To denote the nominal resolution limit, we use $b=1/\max(|u|)$. \subsection{Pixelization error} Now, let us consider a point source of brightness $\gamma_0$, displaced by $l_0$ from the origin, which is represented by $\gamma_0 \delta(l-l_0)$. The sampled visibility at the $i$-th point on the $u$ axis is given by \begin{equation} \label{vis} y_i=\gamma_0 \exp(-j2\pi l_0 u_i) + n_i \end{equation} where $n_i$ is the observation noise. We assume the noise to be white, uncorrelated complex (circular) Gaussian with zero mean and variance $\sigma^2$. Due to pixelization, we represent this point source with the pixel at the origin, provided $l_0$ is small enough. We estimate the magnitude $\alpha$ of the clean component at the origin by minimizing the least squared error. The error at the $i$-th sampling point is \begin{equation} \xi_i=\gamma_0 \exp(-j2\pi l_0 u_i) + n_i - \alpha \end{equation} and the total (average) error to be minimized is \begin{equation} \label{E} \xi^2=\frac{1}{N}\sum_i E\{ \xi_i \xi^{\star}_i \}=\frac{1}{N} \sum_i \left(\gamma_0^2+\sigma^2+\alpha^2-2 \alpha\gamma_0 \cos(2\pi l_0 u_i)\right) \end{equation} where $N$ is the total number of sampling points on the $u$ axis. The solution for $\alpha$ is obtained by solving $\frac{\partial \xi^2}{\partial \alpha}=0$, \begin{equation} \label{ahat} \hat{\alpha}=\frac{\gamma_0}{N} \sum_i \cos(2\pi l_0 u_i) \end{equation} and substituting (\ref{ahat}) into (\ref{E}) gives the minimum error; the residual at the $i$-th point is $\hat{\xi}_i = \gamma_0 \exp(-j2\pi l_0 u_i) + n_i - \hat{\alpha}$. This error can be minimized by shifting the pixel grid by $l_0$, as shown in \cite{PIX}. Hence, this does not cause a real problem in deconvolution.
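As a sanity check of (\ref{ahat}), the following sketch (our own toy setup: a uniformly sampled $u$ axis, unit weights, no noise) fits a single clean component fixed at the origin to an off-center source, and compares the least-squares flux with the closed form:

```python
import numpy as np

# Toy u-axis sampling (wavelengths), unit weights, noiseless for clarity.
u = np.linspace(-50.0, 50.0, 101)
N = u.size
gamma0, l0 = 1.0, 0.002            # flux and offset from the pixel center

# Visibilities of the off-center source, eq. (vis) without noise.
y = gamma0 * np.exp(-2j * np.pi * l0 * u)

# Least-squares flux of a single clean component fixed at the origin:
# minimize sum_i |y_i - alpha|^2 over real alpha.
alpha_ls = np.mean(y).real

# Closed-form solution, eq. (ahat).
alpha_hat = gamma0 / N * np.sum(np.cos(2 * np.pi * l0 * u))

assert np.isclose(alpha_ls, alpha_hat)
assert alpha_hat < gamma0          # the fitted flux is biased low for l0 != 0
```

The bias disappears when the pixel grid is shifted by $l_0$, consistent with the discussion above.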
\subsection{Clean component placement at arbitrary locations} We relax the pixelization requirement and assume we can place a clean component at any location. As before, we observe a point source, with magnitude $\gamma_0$, positioned at $l_0$ on the image axis. The noisy observed visibilities are given by (\ref{vis}). Let ${\bf y}\buildrel\triangle\over=[y_1,y_2,\ldots,y_N]^T$ and ${\bf u}\buildrel\triangle\over=[u_1,u_2,\ldots,u_N]^T$ represent the vectorized forms of the observed visibilities and the observation coordinates, respectively. Since there are $N$ observation points, these vectors have dimensions $N\times1$. The likelihood of the observation is given by \begin{eqnarray} \label{probl} \lefteqn{p({\bf y}|\sigma^2,l_0,\gamma_0,{\bf u})}&&\\\nonumber &\mbox{}=&\frac{1}{(\pi \sigma^2)^N} \exp\biggl(\frac{-1}{\sigma^2} \sum_i |y_i-\gamma_0\exp(-j 2 \pi l_0 u_i)|^2\biggr) \end{eqnarray} The variances of estimating $l_0$ and $\gamma_0$ (the Cramer-Rao lower bounds) are given by \begin{equation} \label{Varl} Var\bigl( {\hat l_0} \bigr)\geq \frac{\sigma^2}{8 \gamma_0^2 \pi^2 \sum_i u_i^2},\ \ Var\bigl({\hat \gamma_0}\bigr) \geq \frac{\sigma^2 }{2 N} \end{equation} Note that similar results have been derived for a single interferometer \cite{Behery}, i.e., a single point in ${\bf u}$. We see from (\ref{Varl}) that the error in estimating $l_0$ depends not only on the noise $\sigma^2$, but also on the sampling points on the ${\bf u}$ axis, which set the resolution limit of the interferometer. \subsection{Partially resolved sources} The more challenging case in deconvolution is when the source cannot be represented as a pure point source, or as a single clean component. The simplest example is having two sources, with magnitudes $\gamma_0$ and $\gamma_1$, positioned at $l_0$ and $l_1$, respectively.
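The bounds in (\ref{Varl}) are easy to evaluate numerically. The sketch below (toy sampling and noise values of our choosing) illustrates that the position bound tightens with longer baselines, while the flux bound depends only on the number of samples:

```python
import numpy as np

# Toy sampling and source/noise values (illustrative, not from the paper).
u = np.linspace(-50.0, 50.0, 101)
N = u.size
gamma0, sigma2 = 1.0, 0.2**2

# Cramer-Rao lower bounds of eq. (Varl).
var_l0 = sigma2 / (8 * gamma0**2 * np.pi**2 * np.sum(u**2))
var_g0 = sigma2 / (2 * N)

# Doubling all baseline lengths quarters the position bound, while the
# flux bound is unchanged (it only depends on N).
var_l0_long = sigma2 / (8 * gamma0**2 * np.pi**2 * np.sum((2 * u)**2))
assert np.isclose(var_l0_long, var_l0 / 4)
assert var_l0 < var_g0          # here the position is the better-constrained
```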
The observed (dirty) image is \begin{equation} I(l)=\sum_i \biggl( \gamma_0 \exp(-j2\pi l_0 u_i)+ \gamma_1 \exp(-j2\pi l_1 u_i) \biggr) \exp(j2\pi l u_i) \end{equation} The ability to correctly estimate the magnitudes and positions depends on the sampling on the $u$ axis. In some cases we will be unable to estimate them accurately, as we shall see; this happens when the two sources move closer together. In Figs. \ref{two_resolved} and \ref{two_unresolved}, we present dirty images (and the corresponding visibility amplitudes) for barely resolved and unresolved cases, respectively. \begin{figure}[htbp] \begin{minipage}{0.98\linewidth} \begin{minipage}{0.48\linewidth} \centering \centerline{\epsfig{figure=twocomp_resolved_img_ieee.eps,width=3.7cm}} \vspace{0.1cm} \centerline{(a)}\smallskip \end{minipage} \begin{minipage}{0.48\linewidth} \centering \centerline{\epsfig{figure=twocomp_resolved_vis_ieee.eps,width=3.7cm}} \vspace{0.1cm} \centerline{(b)}\smallskip \end{minipage} \end{minipage} \caption{Two point sources, barely resolved case. The image (a) and the corresponding visibility amplitudes (b) are given. The magnitudes are $\gamma_0=1.0$, $\gamma_1=0.4$. The positions are $l_0=0.1b$, $l_1=0.8b$, where $b$ is the resolution. Peaks are clearly seen at positions close to $l_0$ and $l_1$.\label{two_resolved}} \end{figure} \begin{figure}[htbp] \begin{minipage}{0.98\linewidth} \begin{minipage}{0.48\linewidth} \centering \centerline{\epsfig{figure=twocomp_unresolved_img_ieee.eps,width=3.7cm}} \vspace{0.1cm} \centerline{(a)}\smallskip \end{minipage} \begin{minipage}{0.48\linewidth} \centering \centerline{\epsfig{figure=twocomp_unresolved_vis_ieee.eps,width=3.7cm}} \vspace{0.1cm} \centerline{(b)}\smallskip \end{minipage} \end{minipage} \caption{Two point sources, unresolved case. The image (a) and the corresponding visibility amplitudes (b) are given. The magnitudes are $\gamma_0=1.0$, $\gamma_1=0.4$.
The positions $l_0=0.1b$, $l_1=0.2b$ are too close to be resolved due to the finite resolution $b$.\label{two_unresolved}} \end{figure} As usual, the observed visibilities are given by \begin{equation} \label{vis2} y_i=\gamma_0 \exp(-j2\pi l_0 u_i)+ \gamma_1 \exp(-j2\pi l_1 u_i) + n_i \end{equation} and the likelihood is \begin{eqnarray} \label{probl2} \lefteqn{p({\bf y}|\sigma^2,l_0,l_1,\gamma_0,\gamma_1,{\bf u})}&&\\\nonumber &=&\frac{1}{(\pi \sigma^2)^N} \exp\biggl(\frac{-1}{\sigma^2} \sum_i |y_i-\gamma_0\exp(-j 2 \pi l_0 u_i)\\\nonumber &&- \gamma_1\exp(-j 2 \pi l_1 u_i)|^2\biggr) \end{eqnarray} The ML estimate is obtained by maximizing the log-likelihood term \begin{eqnarray} \label{Jcost2}\nonumber \lefteqn{J}&=&\frac{-1}{\sigma^2}\sum_i |y_i|^2-y_i\biggl(\gamma_0 \exp(j2\pi l_0 u_i)+ \gamma_1 \exp(j2\pi l_1 u_i)\biggr)\\\nonumber &&-y_i^{\star}\biggl(\gamma_0 \exp(-j2\pi l_0 u_i) + \gamma_1 \exp(-j2\pi l_1 u_i)\biggr) \\ &&+\gamma_0^2+\gamma_1^2+2 \gamma_0\gamma_1 \cos\bigl(2\pi (l_0-l_1) u_i \bigr) \end{eqnarray} with respect to $l_0$, $l_1$, $\gamma_0$, and $\gamma_1$. Let ${\bmath \theta}=[l_0,l_1,\gamma_0,\gamma_1]^{T}$ be the parameter vector to be estimated. Then the Fisher information matrix is given by \begin{equation} \label{F2} {\mathcal F}({\bmath \theta}) =-E\{ \frac{\partial}{\partial {\bmath \theta}}\frac{\partial}{\partial {\bmath \theta}^{T}} J \} \end{equation} and the Cramer-Rao bound is given by the diagonal entries of the inverse of ${\mathcal F}({\bmath \theta})$: \begin{eqnarray} \label{crlb2} Var(\hat{l}_0) \ge [{\mathcal F}^{-1}({\bmath \theta})]_{1,1},\ \ Var(\hat{l}_1) \ge [{\mathcal F}^{-1}({\bmath \theta})]_{2,2},\\\nonumber Var(\hat{\gamma}_0) \ge [{\mathcal F}^{-1}({\bmath \theta})]_{3,3},\ \ Var(\hat{\gamma}_1) \ge [{\mathcal F}^{-1}({\bmath \theta})]_{4,4}. \end{eqnarray} In Fig. \ref{two_crlb}, we show the CRLB for our example case.
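A numerical version of (\ref{crlb2}) can be built from the standard Fisher-matrix expression for a circular complex Gaussian model (see e.g. \cite{Kay}); the sampling and source values below are illustrative only:

```python
import numpy as np

def crlb_two_sources(u, l0, l1, g0, g1, sigma2):
    """CRLB diagonal for theta = [l0, l1, g0, g1] under the model of eq. (vis2).

    Uses the standard result for a complex circular Gaussian likelihood:
    F_kl = (2/sigma^2) Re sum_i d(mu_i)*/d(th_k) d(mu_i)/d(th_l).
    """
    e0 = np.exp(-2j * np.pi * l0 * u)
    e1 = np.exp(-2j * np.pi * l1 * u)
    # Partial derivatives of the noiseless visibility mu_i.
    D = np.stack([-2j * np.pi * u * g0 * e0,   # d mu / d l0
                  -2j * np.pi * u * g1 * e1,   # d mu / d l1
                  e0,                           # d mu / d g0
                  e1])                          # d mu / d g1
    F = (2.0 / sigma2) * np.real(D.conj() @ D.T)
    return np.diag(np.linalg.inv(F))

u = np.linspace(-50.0, 50.0, 101)
b = 1.0 / np.max(np.abs(u))                    # nominal resolution
far = crlb_two_sources(u, 0.1 * b, 0.8 * b, 1.0, 0.4, 0.2**2)
near = crlb_two_sources(u, 0.1 * b, 0.2 * b, 1.0, 0.4, 0.2**2)
# Position variances blow up once the separation shrinks well below b.
assert near[0] > far[0] and near[1] > far[1]
```

This reproduces the qualitative behavior of Fig. \ref{two_crlb}: the bounds grow sharply as the separation drops below the resolution limit.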
\begin{figure}[htbp] \begin{minipage}{0.98\linewidth} \begin{minipage}{0.48\linewidth} \centering \centerline{\epsfig{figure=twocomp_crlb_l_ieee.eps,width=3.7cm}} \vspace{0.1cm} \centerline{(a)}\smallskip \end{minipage} \begin{minipage}{0.48\linewidth} \centering \centerline{\epsfig{figure=twocomp_crlb_gamma_ieee.eps,width=3.7cm}} \vspace{0.1cm} \centerline{(b)}\smallskip \end{minipage} \end{minipage} \caption{Variation of the Cramer-Rao lower bound with the spacing between two point sources. The variances in estimating the positions $l_0$, $l_1$ are given in (a) and the variances in estimating the magnitudes $\gamma_0$ and $\gamma_1$ are given in (b). The true magnitudes are $\gamma_0=1.0$, $\gamma_1=0.4$ while the noise variance is $\sigma^2=0.2^2$. It is clearly seen that as the two sources come closer than about $0.4b$, the variance (or the estimation error) increases significantly.\label{two_crlb}} \end{figure} It is straightforward to extend the results derived in (\ref{crlb2}) to a two dimensional visibility plane, where $u$ and $v$ are the visibility axes. We again consider two sources, with magnitudes $\gamma_0$,$\gamma_1$, positioned at $(l_0,m_0)$ and $(l_1,m_1)$ respectively. The sampled visibility at the $i$-th point on the $uv$-plane is given by \begin{equation} \label{vis2d} y_i=\gamma_0 \exp\bigl(-j2\pi (l_0 u_i+m_0 v_i)\bigr)+ \gamma_1 \exp\bigl(-j2\pi (l_1 u_i+m_1 v_i)\bigr) + n_i \end{equation} As shown in \cite{SHPfull}, we can derive bounds for the parameter set ${\bmath \theta}=[l_0,m_0,l_1,m_1,\gamma_0,\gamma_1]^{T}$. We give a numerical example in Fig. \ref{two_crlb_2d} for this case. As seen in Fig. \ref{two_crlb_2d} (b), as the two sources come closer, the variance in estimating their positions increases.
\begin{figure}[htbp] \begin{minipage}{0.98\linewidth} \begin{minipage}{0.48\linewidth} \centering \centerline{\epsfig{figure=twocomp_crlb_2d_uvcov_ieee.eps,width=3.7cm}} \vspace{0.1cm} \centerline{(a)}\smallskip \end{minipage} \begin{minipage}{0.48\linewidth} \centering \centerline{\epsfig{figure=twocomp_crlb_2d_sum_ieee.eps,width=3.7cm}} \vspace{0.1cm} \centerline{(b)}\smallskip \end{minipage} \end{minipage} \caption{Cramer-Rao lower bounds for a two dimensional case. The $uv$ coverage is given in (a). In (b), the total variance in estimating the source positions, $Var(l_0)+Var(m_0)+Var(l_1)+Var(m_1)$, is given. The axes indicate the separation of the two sources, i.e. $|l_0-l_1|/b$ and $|m_0-m_1|/b$. The amplitudes are fixed at $\gamma_0=1.0$, $\gamma_1=0.4$, while the noise is $\sigma=0.2$. The nominal resolution is $b=1/\max(\sqrt{u^2+v^2})$.\label{two_crlb_2d}} \end{figure} As discussed in \cite{Briggs}, any extended source could be represented by clean components equivalent to the Fourier components of the brightness distribution of that source. However, at finite resolution the accuracy of this representation is limited by the lower bounds on estimating the positions and magnitudes of those clean components. Thus, it is futile to make the image grid arbitrarily small in the hope that the accuracy of our modeling of extended sources improves. Any pixel-based deconvolution algorithm runs into this limitation, which forces us to find alternative methods to model such sources. \subsection{Deconvolution using an arbitrary basis} Because of this limitation in modeling an extended source with multiple clean components, we seek other forms of image representation. Let us define an arbitrary basis as \begin{equation} {\mathbb S}=\{s_1(u),s_2(u),\ldots,s_K(u)\} \end{equation} where $s_k(u)$ is the $k$-th basis function at $u$, on the visibility plane (axis).
If we observe an extended source, the $i$-th visibility point can be written as \begin{equation} \label{sumu} y_i=\sum_k \theta_k s_k(u_i)+n_i \end{equation} where ${\bmath \theta}=[\theta_1, \theta_2,\ldots, \theta_K]^T$ are the $K$ parameters we need to estimate. Representing the basis $\mathbb S$ evaluated at $u$ by ${\bf s}(u)=[s_1(u),s_2(u),\ldots,s_K(u)]^T$, we have the vectorized form \begin{equation} y_i={\bf s}^{T}(u_i) {\bmath \theta} + n_i \end{equation} and combining all visibility points in the vector ${\bf y}$ we have \begin{equation} \label{lst} {\bf y}={\bf S} {\bmath \theta} +{\bf n} \end{equation} where ${\bf S}=[{\bf s}(u_1),{\bf s}(u_2),\ldots,{\bf s}(u_N)]^{T}$ and ${\bf n}\buildrel\triangle\over=[n_1,n_2,\ldots,n_N]^T$. This is the well-studied linear statistical model and the likelihood can be expressed as \begin{equation} p({\bf y}|{\bmath \theta},\sigma^2)=\frac{1}{\pi^N \det{(\sigma^2 {\bf I})}} \exp\biggl(\frac{-1}{\sigma^2}({\bf y}-{\bf S}{\bmath \theta})^H({\bf y}-{\bf S}{\bmath \theta})\biggr) \end{equation} The ML estimate is $\widehat{\bmath \theta} ={\bf S}^{\dagger}{\bf y}$ where ${\bf S}^{\dagger}$ is the matrix pseudo-inverse of ${\bf S}$. From \cite{Kay}, we get the Cramer-Rao lower bound as \begin{equation} \label{Varth} Cov({\bmath \theta}) =({\bf S}^H(\sigma^2 {\bf I})^{-1}{\bf S})^{-1} =\sigma^2 ({\bf S}^H{\bf S})^{-1}. \end{equation} Using (\ref{Varth}), the variance of the estimate of the $k$-th parameter $\theta_k$ is the $k$-th diagonal entry of $\sigma^2 ({\bf S}^H{\bf S})^{-1}$. Note that, for basis functions of fixed energy, this is minimized if ${\bf S}^H{\bf S}={\bf I}$. In other words, if we choose the basis such that the columns of ${\bf S}$ are orthonormal, we get the lowest error. This is the primary motivation for using an orthonormal basis instead of clean components. \subsection{Information theoretic bounds} An important question we should answer is the maximum number of basis functions or clean components that can be used to represent any given source.
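The advantage of an orthonormal basis can be checked numerically. In the sketch below (hypothetical design matrices of our choosing, with all columns normalized to unit energy for a fair comparison), an orthonormal ${\bf S}$ attains the minimum variance $\sigma^2$ for every parameter, while correlated columns inflate the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, sigma2 = 64, 8, 0.1

# Orthonormal design: S^H S = I, so Cov(theta) = sigma^2 I, cf. eq. (Varth).
S_orth, _ = np.linalg.qr(rng.standard_normal((N, K)))

# A non-orthogonal design with the same column energies: make two columns
# nearly parallel, then normalize every column to unit norm.
S_corr = rng.standard_normal((N, K))
S_corr[:, 1] = S_corr[:, 0] + 0.1 * rng.standard_normal(N)
S_corr /= np.linalg.norm(S_corr, axis=0)

def param_variances(S, sigma2):
    # Diagonal of the CRLB covariance sigma^2 (S^H S)^{-1}.
    return sigma2 * np.diag(np.linalg.inv(S.conj().T @ S)).real

v_orth = param_variances(S_orth, sigma2)
v_corr = param_variances(S_corr, sigma2)

assert np.allclose(v_orth, sigma2)     # orthonormal basis attains sigma^2
assert v_corr.max() > 5 * sigma2       # correlated columns inflate the bound
```

For unit-norm columns, $[({\bf S}^H{\bf S})^{-1}]_{kk}\geq 1$ with equality only when column $k$ is orthogonal to the others, which is the variance argument made above.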
Following the arguments presented in \cite{Slepian}, we see that most sources have compact support both in the image plane and the Fourier plane. In the latter case, the support is also limited by the distribution of sampling points (baselines). Hence, we can use the Landau-Pollak theorem \cite{Landau} to limit the degrees of freedom of any source seen by an interferometer. If the support area in the image plane is $A_{im}$ and the corresponding support in the Fourier plane is $A_{uv}$, then the number of degrees of freedom is bounded by $A_{im}\times A_{uv}$. This can be used as a criterion to limit the number of clean components (hence the pixel size) as well as the number of basis functions that can be effectively used to model any given source. \section{Results} As an example, we present results of an observation of Cygnus A, using the Westerbork Synthesis Radio Telescope. Around 150 MHz, the source Cygnus A is barely resolved by the WSRT. Hence, traditional clean based algorithms fail to perform satisfactorily. In Fig. \ref{cyga_example} (a), we give the result obtained using clean. In this case, the dynamic range (ratio of the peak flux to the noise in the image) is about 10,000. However, by using shapelet basis functions we can improve the result to reach a dynamic range of well over 500,000, as seen in Fig. \ref{cyga_example} (b). \section{Conclusions} We have presented limitations of pixel based deconvolution of extended sources in radio interferometry. We have shown, both theoretically and using real data, that suitable orthonormal basis functions can overcome these limitations. Although we have chosen shapelets as our example basis functions, future work should focus on finding better basis functions in terms of both performance and computational efficiency.
\begin{figure}[htbp] \begin{minipage}{0.98\linewidth} \begin{minipage}{0.98\linewidth} \centering \centerline{\epsfig{figure=cyga_clean.eps,width=5.0cm}} \vspace{0.1cm} \centerline{(a)}\smallskip \end{minipage} \vspace{0.1cm} \begin{minipage}{0.98\linewidth} \centering \centerline{\epsfig{figure=cyga_my_best,width=5.0cm}} \vspace{0.1cm} \centerline{(b)}\smallskip \end{minipage} \end{minipage} \caption{Deconvolved images of the area surrounding Cygnus A (a) using CLEAN (about 1000 clean components) and (b) using shapelet deconvolution (about 400 modes). The dynamic range of (a) is about 10,000 while in (b) it is about 500,000. Cygnus A (peak flux 10 kJy) is at the center of the image and has been subtracted. The noise in (b) is about 20 mJy. Far fainter sources are visible in (b) than in (a).\label{cyga_example}} \end{figure} \bibliographystyle{IEEEtran}
\section*{Introduction} The radiative transfer of the cosmic microwave background (CMB) is based on the numerical resolution of a hierarchy of equations coupling CMB multipoles, together with Einstein equations for the dynamics of linear metric perturbations. As the CMB is polarized, we have in general a triple hierarchy, with temperature multipoles (related to intensity $I$), and electric and magnetic type multipoles for linear polarization (related to $Q$ and $U$ Stokes parameters). In principle, a fourth hierarchy must be added for circular polarization $V$, but at linear order in perturbation theory it is not generated by Compton collisions. Hence we have in general one hierarchy per generated Stokes parameter, that is, a total of three coupled hierarchies. An optimal hierarchy valid for flat Friedmann-Lema\^itre (FL) cosmologies and with only one set of variables for linear polarization was introduced in \cite{Polnarev1985} and developed further in~\cite{Crittenden:1993ni,Kosowsky:1994cy,Ma1995,Zaldarriaga:1996xe}. It was extended to curved FL cosmologies in~\cite{Tram:2013ima} (TL13 hereafter), leading to a method that was numerically implemented in CLASS\footnote{\url{http://class-code.net}}~\cite{ClassI,ClassII}. The full (i.e., non-optimal) triple hierarchy was developed for the flat case in \cite{TAM1} and for the curved case in~\cite{TAM2} and we name it the {\it Total Angular Momentum} (TAM) hierarchy. Finally, the $1+3$ covariant approach of \cite{Maartens:1994qq,Gebbie:1999jp,Maartens:1998xg,Challinor:1998xk,Challinor:1999xz,Challinor:2000as,Lewis:2002nc}, which is implemented in CAMB~\cite{Lewis:1999bs,CAMB}, can be mapped to the standard cosmological perturbation theory~\cite{Bruni:1991kb,Osano:2006ew}. It was found to be equivalent to the TAM approach written in the (matter comoving) synchronous gauge.
Following \cite{TAM1,TAM2}, we summarise in the next section how the triple hierarchy is obtained by expanding temperature and polarization anisotropies into a complete set of normal modes, valid for any spatial curvature. We then detail in section~\ref{SecOptimal} the key steps needed to reduce it to an optimal double hierarchy, following TL13. This reduction is based on a factorization of normal modes into a common orbital function (a plane wave) and a local angular dependence that varies with the normal mode considered. However, it has recently been shown in \cite{Harmonics} (PP19 hereafter) that for curved cosmologies, and contrary to what is stated in \cite{TAM2}, this factorization is not valid. As the optimal hierarchy derivation relies crucially on this factorization, its implementation in the presence of spatial curvature, as described in TL13, is compromised. Since there are hints of mild positive curvature from CMB data~\cite{Aghanim:2018eyx,NatureK}, it becomes crucial to estimate the errors introduced by the optimal hierarchy in the curved-space cases, and this is performed in section~\ref{SecComparison}. We discuss why in most cases the error is very small. We describe the modifications implemented in CLASS allowing the user to choose either the TAM or the optimal hierarchies when computing angular power spectra. These modifications will be publicly available in a forthcoming CLASS release. \section{Total angular momentum hierarchy} \subsection{Normal modes} Temperature anisotropies depend only on the observer's position in spacetime, that is, on the conformal time $\eta$ and the position in space $\gr{x}$, and on the direction of propagation of the photon $\gr{n}$, which is opposite to the direction of observation. Polarization, which is described by the combinations $Q \pm {\rm i} U$ of Stokes parameters, has the same spacetime dependence.
Temperature and polarization anisotropies are then decomposed along a complete set of normal modes ${}_s M_j^m(\gr{x},\gr{n};\gr{q})$ (with the dependence on the mode $\gr{q}$, the position $\gr{x}$ and the direction of propagation $\gr{n}$ often not written explicitly\footnote{Our normal modes ${}_s M_j^m$ correspond to the ones of TL13, the ${}_s \overline{G}^{(jm)}$ of PP19, and the ${}_s G_j^m$ of \cite{TAM1,TAM2}. The mode vector $\gr{q}$ corresponds to $\gr{\nu} \sqrt{|K|}$ in PP19, and its norm $q$ is related to the $k$ (used to define tensor harmonics) by $q^2 = k^2+(1+|m|)K$.}), which are projections of tensor valued harmonics, as \begin{equation}\label{ThetaG} \Theta = \sum_{jm}\int \frac{{\rm d}^3 \gr{q}}{(2\pi)^3}\,\Theta_j^m(\gr{q},\eta) \,{}_0 M_j^m(\gr{x},\gr{n};\gr{q})\,, \end{equation} and \begin{align}\label{EBG} &Q\pm {\rm i} U = \sum_{jm}\int \frac{{\rm d}^3 \gr{q}}{(2\pi)^3}\,\\ &\qquad \times \left[E_j^m(\gr{q},\eta) \pm {\rm i} B_j^m(\gr{q},\eta) \right]\,{}_{\pm 2} M_j^m(\gr{x},\gr{n};\gr{q})\,.\nonumber \end{align} Here $m\in[-2,2]$ is the mode index, standing respectively for scalars ($m=0$), vectors ($|m|=1$) and tensors ($|m|=2$), while $j\geq0$ is the multipole index. The normal modes depend on curvature $K$ of spatial sections\footnote{Recall that $|K|=\ell_c^{-2}$, where $\ell_c$ is the curvature length of spatial sections.}, and are expressed in terms of radial functions and spin-weighted spherical harmonics. A comprehensive set of their properties is collected in PP19. \subsection{Hierarchy} The evolution of anisotropies is governed by the Boltzmann equation \begin{subeqnarray}\label{BoltzmannTheta} \left(\partial_\eta + \gr{n} \cdot {\bm \nabla} + \tau'\right)\Theta &=& {\cal C}_\Theta + {\cal G}\,,\\ \left(\partial_\eta+ \gr{n} \cdot {\bm \nabla}+ \tau'\right) (Q\pm{\rm i} U) &=& {\cal C}_{Q\pm{\rm i} U}\,, \label{BoltzmannQU} \end{subeqnarray} where $\tau'$ is the Compton scattering rate. 
The function ${\cal G}$ accounts for the gravitational effects due to metric perturbations, and it is decomposed on normal modes similarly to \eqref{ThetaG}, hence defining the multipoles ${\cal G}_j^m$. The only non-vanishing gravitational sources satisfy $j\leq2$ (with $|m|\leq j$), and can be found in e.g.~\cite{TAM2,Tram:2013ima,Harmonics}. The collision terms ${\cal C}_\Theta$ and $ {\cal C}_{Q\pm{\rm i} U}$ are also expanded on normal modes, similarly to \eqref{ThetaG} and \eqref{EBG}, hence defining the multipoles $ {}^\Theta{\cal C}_j^m $, ${}^E{\cal C}_j^m $ and ${}^B{\cal C}_j^m $. The only non-vanishing contributions are also restricted to $j \leq 2$, and can be found in \cite{TAM2}. Using \begin{eqnarray}\label{MasterGstreaming} &&-\gr{n}\cdot {\bm \nabla} \left({}_s M_j^m \right) = \frac{{\rm i} q m s}{j(j+1)} \,{}_s M_j^m \\ &&+\frac{1}{2j+1}\left[-{}_s \kappa_j^m\, {}_s M_{j-1}^m +{}_{s} \kappa_{j+1}^m \,{}_s M_{j+1}^m \right]\nonumber \end{eqnarray} with coupling coefficients \begin{equation}\label{Defkappaslm} {}_s \kappa_j^m \equiv \sqrt{\frac{(j^2-m^2)(j^2-s^2)}{j^2}}\sqrt{q^2-K j^2}\,, \end{equation} we obtain immediately the TAM hierarchy~\cite{TAM2} \begin{align}\label{Hierarchy} &\partial_\eta \Theta_j^{m} = {\cal G}_j^m +{}^\Theta {\cal C}_j^m -\tau' \Theta_j^m \\ &\qquad \qquad +\left[\frac{{}_0 \kappa_j^m}{2j-1} \Theta_{j-1}^m - \frac{{}_0 \kappa_{j+1}^m}{2j+3} \Theta_{j+1}^m \right], \nonumber\\ &\partial_\eta E_j^{m} = {}^E {\cal C}_j^m -\tau' E_j^m\nonumber\\ &+\left[\frac{{}_2 \kappa_j^m}{2j-1} E_{j-1}^m - \frac{{}_2 \kappa_{j+1}^m}{2j+3} E_{j+1}^m -\frac{2m q}{j(j+1)} B_j^m\right],\nonumber\\ &\partial_\eta B_j^{m} = {}^B{\cal C}_j^m -\tau' B_j^m\nonumber\\ &+ \left[\frac{{}_2 \kappa_j^m}{2j-1} B_{j-1}^m - \frac{{}_2 \kappa_{j+1}^m}{2j+3}B_{j+1}^m +\frac{2m q}{j(j+1)} E_j^m\right]. 
\nonumber \end{align} A Boltzmann code must solve this set of equations, along with the evolution of metric perturbations (in a given gauge) which enter the gravitational sources, for various values of the mode magnitude $q$. The temperature and polarization angular spectra are then obtained from convolutions with the initial perturbation power spectra, and take simple forms for statistically isotropic initial conditions, see e.g. section 2.E of \cite{TAM2} or section 3.4 of \cite{Lesgourgues:2013bra}. In these equations, $m\in[-j,j]$ can be positive or negative, but since hierarchies with the same $j$ and opposite $m$ return identical results, calculations can be performed for $m\geq0$ only. \subsection{Integral solutions} Since the Boltzmann hierarchy is infinite in $j$, we must in practice truncate at a $j_{\rm max}$ appreciably larger than the maximum $j$ we are interested in, so as to avoid errors introduced by the truncation. It is in practice much faster to solve only for a limited number of multipoles, that is to truncate at a low $j_{\rm max}$, and to reformulate the solutions of the Boltzmann hierarchy as an integral over sources involving these lowest multipoles. This line-of-sight method was first introduced in \cite{TAM1,Seljak:1996is}. It is indeed found that the solutions of the hierarchy \eqref{Hierarchy} are \begin{align}\label{IntSol} \frac{\Theta_j^m(\gr{q},\eta_0)}{2j+1} &= \int_0^{\eta_0} {\rm d} \eta {\rm e}^{-\tau} \!\!\!\!\! \sum_{j'=m, ..., 2} \!
{}_0 \epsilon_j^{(j'm)}(\chi;q)\\ &\qquad \times \left[{}^\Theta{\cal C}_{j'}^m(\gr{q},\eta) + {\cal G}_{j'}^m(\gr{q},\eta) \right]\,,\nonumber\\ \frac{ E_j^m(\gr{q},\eta_0)}{2j+1} &= \int_0^{\eta_0} {\rm d} \eta {\rm e}^{-\tau} \, {}_2 \epsilon_j^{(2,m)}(\chi;q) \, {}^E{\cal C}_{2}^m(\gr{q},\eta)\,\,,\nonumber\\ \frac{B_j^m(\gr{q},\eta_0)}{2j+1} &= \int_0^{\eta_0} {\rm d} \eta {\rm e}^{-\tau} \,{}_2 \beta_j^{(2,m)}(\chi;q) {}^E{\cal C}_{2}^m(\gr{q},\eta)\,,\nonumber \end{align} where $\chi = \eta_0-\eta$ is the radial distance. The optical depth is $\tau$ (such that ${\rm d} \tau/{\rm d} \eta = -\tau'$ and with $\tau(\eta_0)=0$) and the ${}_s \epsilon_\ell^{(jm)}(\chi;q)$ and ${}_s \beta_\ell^{(jm)}(\chi;q)$ are the electric and magnetic type radial functions (reported in section 4 of PP19), initially introduced in \cite{Tomita1982,Abbott1986,TAM2} for curved spaces\footnote{${}_0 \epsilon_\ell^{(jm)}$ corresponds to $\phi_\ell^{(jm)}$ in \cite{TAM2,Tram:2013ima}, and ${}_2 \epsilon_\ell^{(2,m)}, {}_2 \beta_\ell^{(2,m)}$ to $\epsilon_\ell^{(m)},\beta_\ell^{(m)}$. Note also that ${}_s \alpha_\ell^{(jm)}$, ${}_s \epsilon_\ell^{(jm)}$ and ${}_s \beta_\ell^{(jm)}$ correspond to ${}_s \bar\alpha_\ell^{(jm)}$, ${}_s \bar\epsilon_\ell^{(jm)}$ and ${}_s \bar\beta_\ell^{(jm)}$ of PP19.}. These results follow from the structure of the Boltzmann equation~\eqref{BoltzmannTheta}, once written in an integral form (for instance, for temperature, $ {\rm d}/{\rm d} \tau ({\rm e}^{-\tau}\Theta) = {\rm e}^{-\tau}[ {\cal C}_\Theta + {\cal G}]$), and using the Rayleigh expansion (e.g. Eq.~(7.39) of PP19) to express the normal modes of gravitational and collisional sources in terms of the normal modes evaluated at the observer (that is at $\chi=0$), see section 7.4 of PP19 for more details. 
Finally, the unlensed angular spectra $C_\ell^{X}$ for $X\in[TT,TE,EE,BB]$ are given by the integral over $\gr{q}$ of products of $\Theta_j^m(\gr{q},\eta_0)$, $E_j^m(\gr{q},\eta_0)$, $B_j^m(\gr{q},\eta_0)$ multiplied by the primordial power spectra. \subsection{Hierarchy truncation} The radial functions involved in the integral solutions~\eqref{IntSol} have a variety of recursive properties. In particular, setting $s=\ell$ in Eq. (D.5) of PP19, and applying the interchanges $s\leftrightarrow m$ and $j\leftrightarrow \ell$ by means of Eqs.~(3.26) and (3.27) of the same reference, we can show that (see also section 5.4.5 of \cite{RiazueloPhD}) \begin{eqnarray}\label{MagicalTruncation} &&\left(\frac{{\rm d}}{{\rm d} \chi} +(\ell+1+m) \cot_K(\chi)\right) \,{}_{s}\alpha^{(j=m,\pm m)}_\ell \\ &&-\frac{{}_{s}\kappa_{\ell}^m}{(\ell-m)} \, {}_{s} \alpha^{(j=m,\pm m)}_{\ell-1} \pm {\rm i} \frac{s \nu}{\ell} {}_{s}\alpha^{(j=m,\pm m)}_\ell=0\,,\nonumber \end{eqnarray} where ${}_{\pm s}\alpha^{(j,m)}_\ell = {}_s\epsilon^{(j,m)}_\ell\pm{\rm i}\, {}_s\beta^{(j,m)}_\ell$ and $\cot_K(\chi)$ corresponds to either $\sqrt{|K|}\coth (\chi \sqrt{|K|})$, $\sqrt{K}\cot (\chi \sqrt{K})$, or $1/\chi$ when $K$ is smaller than, greater than or equal to zero, respectively.
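For reference, the coupling coefficients \eqref{Defkappaslm} and the generalized cotangent are simple to implement; the checks below (our own, with illustrative arguments) verify the flat-space limits:

```python
import numpy as np

def cot_K(chi, K):
    """Generalized cotangent entering the truncation relations."""
    if K < 0:
        return np.sqrt(-K) / np.tanh(chi * np.sqrt(-K))
    if K > 0:
        return np.sqrt(K) / np.tan(chi * np.sqrt(K))
    return 1.0 / chi

def kappa(s, j, m, q, K):
    """Coupling coefficient of eq. (Defkappaslm)."""
    return np.sqrt((j**2 - m**2) * (j**2 - s**2)) / j * np.sqrt(q**2 - K * j**2)

# Flat-space checks: cot_K -> 1/chi and the curvature factor -> q.
assert np.isclose(cot_K(2.0, 0.0), 0.5)
assert np.isclose(kappa(0, 2, 0, 3.0, 0.0), 6.0)
# Open universe (K < 0): the coupling is slightly larger than the flat one.
assert kappa(0, 2, 0, 3.0, -0.1) > kappa(0, 2, 0, 3.0, 0.0)
```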
One can deduce from \eqref{IntSol} and (\ref{MagicalTruncation}) that \begin{enumerate} \item if non-vanishing sources are located only very deep in the past (at distances such that $\chi = \eta_0-\eta \simeq \eta_0$), \item if we can ignore sources with $j>|m|$ (which is in general not the case), \end{enumerate} then the temperature multipoles satisfy \begin{eqnarray}\label{Truncation1} \partial_\eta \Theta^{m}_j&\simeq& -(j+1+m)\cot_K(\eta) \Theta^{m}_j\\ &+&\frac{{}_{0}\kappa_{j}^m}{(j-m)} \,\frac{2j+1}{2j-1} \Theta^{m}_{j-1} \,.\nonumber \end{eqnarray} Similarly, and using the fact that ${}_s \alpha_\ell^{(j,m)} = {}_m\alpha_\ell^{(j,s)}$ in \eqref{MagicalTruncation}, we find under the same first assumption (but relaxing the second one) that the polarization multipoles satisfy \begin{eqnarray}\label{Truncation2} \partial_\eta E_j^m &\simeq& -(j+3)\cot_K(\eta) \,E^{m}_j \\ &+&\quad\frac{{}_{2}\kappa_{j}^m}{(j-2)} \, \frac{2j+1}{2j-1} E^{m}_{j-1} + \frac{m q}{j} B^{m}_j\,,\nonumber \end{eqnarray} with $B_j^m$ satisfying the same approximate relation under the replacements $E_j^m \to B_j^m$ and $B_j^m \to - E_j^m$. Equations \eqref{Truncation1} and \eqref{Truncation2} are only approximate, but they can be used in practice to truncate the hierarchy at a given $j_{\rm max}$, so as to minimize the spectral reflections that a direct truncation of \eqref{Hierarchy} would induce. \section{Optimal hierarchy}\label{SecOptimal} It has been conjectured in \cite{TAM2} and assumed in \cite{Tram:2013ima} that the normal modes can be separated into the product of a local angular structure and some eigenmode functions $\Delta$ normalized to $|\Delta|=1$: \begin{equation}\label{WrongFactor} {}_s M_j^m = (-{\rm i})^j\sqrt{\frac{4\pi}{2j+1}}\, {}_s Y_{j}^m(\gr{n}) \Delta(\gr{x}, \gr{q})~. \end{equation} In reality, this property is lost in the presence of spatial curvature, as detailed in section 6.7 of PP19.
In the flat case, where the function $\Delta=\exp({\rm i} \gr{q}\cdot\gr{x})$ consists of ordinary plane waves, a series of simplifications leads to the optimal hierarchy, which we now review. First, for temperature, one can expand the non-scalar perturbations ($m \neq 0$) using the same normal modes as for scalar perturbations ($m=0$). Then, instead of using the $\Theta_j^m$, one can use a new set of multipoles $F_j^m$ defined by \begin{equation}\label{DefineFjmHierarchy} \sum_j \Theta_j^m \,{}_0M_j^m \propto Y_m^m\sum_j (2j+1)F_j^m \, {}_0M_j^0\, \,. \end{equation} Note that the factor $(2j+1)$ and the global pre-factor (not shown here) are pure conventions in the definition of $F_j^m$, and that the new multipoles are defined for $j \geq 0$ (unlike $\Theta_j^m$, which is defined for $j\geq |m|$). If the factorization of eq.~(\ref{WrongFactor}) holds, this relation is unchanged when replacing ${}_{0} M_j^m \to {}_{0} Y_j^m$. Then, using the orthogonality relation of spherical harmonics, one finds that the $\Theta_j^m$ can be deduced from the $F_j^m$ using Gaunt integrals (angular integrals over three spin-weighted spherical harmonics). These relations are collected in appendix B of TL13. For scalar modes, since $Y^0_0=1/\sqrt{4\pi}$, the Gaunt integral becomes trivial, such that the multipoles $F_j^0$ and $\Theta_j^0$ are simply related by numerical factors. We see that for temperature, switching to the optimal hierarchy amounts to expanding along the basis of angular functions $Y_m^m Y_j^0$ instead of $Y_j^m$. This explains why the source terms remain compact: given the contraction rules of spherical harmonics, the source terms in the two hierarchies are simply related through Clebsch-Gordan coefficients.
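The Gaunt integrals mentioned above can be evaluated from Wigner 3j symbols; a self-contained sketch (integer angular momenta only, using Racah's sum; the helper names are ours, not from TL13) is:

```python
from math import factorial, sqrt, pi

def wigner3j(j1, j2, j3, m1, m2, m3):
    """Wigner 3j symbol via the Racah sum (integer arguments only)."""
    if m1 + m2 + m3 != 0 or not abs(j1 - j2) <= j3 <= j1 + j2:
        return 0.0
    if abs(m1) > j1 or abs(m2) > j2 or abs(m3) > j3:
        return 0.0
    f = factorial
    pref = sqrt(f(j1 + j2 - j3) * f(j1 - j2 + j3) * f(-j1 + j2 + j3)
                / f(j1 + j2 + j3 + 1)
                * f(j1 + m1) * f(j1 - m1) * f(j2 + m2) * f(j2 - m2)
                * f(j3 + m3) * f(j3 - m3))
    total = 0.0
    for t in range(j1 + j2 + j3 + 1):
        args = (t, j3 - j2 + t + m1, j3 - j1 + t - m2,
                j1 + j2 - j3 - t, j1 - t - m1, j2 - t + m2)
        if min(args) < 0:
            continue
        d = 1
        for a in args:
            d *= f(a)
        total += (-1) ** t / d
    return (-1) ** (j1 - j2 - m3) * pref * total

def gaunt(l1, l2, l3, m1, m2, m3):
    """Integral of Y_{l1 m1} Y_{l2 m2} Y_{l3 m3} over the sphere."""
    return sqrt((2 * l1 + 1) * (2 * l2 + 1) * (2 * l3 + 1) / (4 * pi)) \
        * wigner3j(l1, l2, l3, 0, 0, 0) * wigner3j(l1, l2, l3, m1, m2, m3)

assert abs(wigner3j(1, 1, 0, 0, 0, 0) + 1 / sqrt(3)) < 1e-12
assert abs(gaunt(0, 0, 0, 0, 0, 0) - 1 / (2 * sqrt(pi))) < 1e-12
assert gaunt(1, 1, 3, 1, 0, -1) == 0.0   # triangle inequality violated
```

The triangle and parity selection rules built into the 3j symbols are what restrict the couplings between $Y_m^m Y_j^0$ and $Y_{j'}^m$ to a narrow band of multipoles.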
For instance, for the gravitational source terms ${\cal G}_j^m$, restricted to $|m|\leq j \leq 2$, we immediately see that ${\cal G}_m^m$ sources $F_0^m$ (since they are both factors of $Y_m^m \propto Y_m^m Y_0^0$), that ${\cal G}_j^0$ sources $F_j^0$ (both factors of $Y_j^0 \propto Y_0^0 Y_j^0$), and finally that ${\cal G}_2^{\pm1}$ sources $F_1^{\pm 1}$ (since $Y_2^{\pm1} \propto Y_1^{\pm1} Y_1^0$). The source coming from Thomson scattering also remains simple because the baryon velocity has a dipolar structure, $j=1$, that can only source $F_0^{\pm1}$ and $F_1^0$ (following the same reasoning as for ${\cal G}_1^m$). Second, for polarization, the problem can be simplified by use of symmetries. The Stokes parameter combinations ${Q+{\rm i} U}$ and ${Q-{\rm i} U}$ both start from vanishing initial conditions and grow according to the Boltzmann equations (\ref{BoltzmannQU}), which differ only at the level of the collision terms ${\cal C}_{Q+{\rm i} U}$ and ${\cal C}_{Q-{\rm i} U}$. However, the Thomson scattering cross section has a quadrupolar structure giving \begin{equation}\label{CollQiU} {\cal C}_{Q\pm{\rm i} U} =-\sqrt{6}\tau' \!\! \sum_{m=-2}^2 \int \!\! \frac{{\rm d}^3 \gr{q}}{(2\pi)^3}\, P^{(m)} {}_{\pm 2} M_2^m\,, \end{equation} with $P^{(m)}(\gr{q},\eta) \equiv \left(\Theta_2^m-\sqrt{6}\,E_2^m\right)/10$. In general, these source terms give no useful relation between ${Q+{\rm i} U}$ and ${Q-{\rm i} U}$. However, if the factorization property \eqref{WrongFactor} holds, ${\cal C}_{Q\pm{\rm i} U}$ can be written as \begin{equation}\label{CollQiUopt} {\cal C}_{Q\pm{\rm i} U} \propto \sum_{m=-2}^2 \left[ \int \frac{{\rm d}^3 \gr{q}}{(2\pi)^3}\, P^{(m)} \Delta \right] {}_{\pm 2} Y_2^m ~, \end{equation} where the bracketed integral only depends on $(\gr{x}, \eta)$ and is the same for ${Q+{\rm i} U}$ and ${Q-{\rm i} U}$.
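The contraction rule invoked above for the source terms, e.g.\ $Y_2^{\pm1} \propto Y_1^{\pm1} Y_1^0$, can be verified numerically with the explicit harmonics: parity forbids an $\ell=1$ term in the $\ell=1 \times \ell=1$ product, so the product reduces to a single harmonic with a single Clebsch-Gordan-type coefficient. A minimal sketch (our own function names, standard harmonic normalizations):

```python
import cmath, math

def Y10(theta):
    """Y_1^0 = sqrt(3/4pi) cos(theta)."""
    return math.sqrt(3.0 / (4.0 * math.pi)) * math.cos(theta)

def Y11(theta, phi):
    """Y_1^1 = -sqrt(3/8pi) sin(theta) e^{i phi}."""
    return -math.sqrt(3.0 / (8.0 * math.pi)) * math.sin(theta) * cmath.exp(1j * phi)

def Y21(theta, phi):
    """Y_2^1 = -sqrt(15/8pi) sin(theta) cos(theta) e^{i phi}."""
    return (-math.sqrt(15.0 / (8.0 * math.pi))
            * math.sin(theta) * math.cos(theta) * cmath.exp(1j * phi))

# Parity (1+1+1 odd) forbids an l=1 component, so Y_1^1 Y_1^0 = c * Y_2^1
# pointwise, with a single constant coefficient:
c = math.sqrt(3.0 / (20.0 * math.pi))
```

The proportionality holds at every direction on the sphere, which is exactly why a dipolar source can feed only one multipole of the $F_j^m$ hierarchy.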
Thus each mode $m$ sources identical contributions to ${Q+{\rm i} U}$ and ${Q-{\rm i} U}$ up to a ratio ${}_{- 2} Y_2^m/{}_{+ 2} Y_2^m$ that only depends on the direction $\gr{n}$. By taking the sum and the difference of eqs.~(\ref{BoltzmannQU}), one reaches similar conclusions for $Q$ and ${\rm i} U$: each mode $m$ sources identical contributions to the Stokes parameters up to a factor \begin{equation}\label{EBratio} \frac{{{\rm i} U}}{Q}=\frac{{}_2Y_2^m - {}_{-2}Y_2^m }{{}_2Y_2^m + {}_{-2}Y_2^m}~. \end{equation} Note that (\ref{EBratio}) also holds for scalar modes, for which ${}_2Y_j^0 \!\! = \!\! {}_{-2}Y_j^0$ and $U$ is not sourced. When computing CMB spectra, we consider statistically independent initial conditions for each mode $m$, and thus solve the Boltzmann equations for one mode $m$ at a time. Thus we can solve only for $Q$ and assume that ${\rm i} U$ is given by eq.~(\ref{EBratio}). In general, the sum of the two equations \eqref{EBG} shows that $Q$ is related to polarization electric and magnetic multipoles as \begin{equation} Q = \frac{1}{2}\sum_{jm} \int \frac{{\rm d}^3 \gr{q}}{(2\pi)^3} \left(E_j^m {\cal E}_j^m + {\rm i} B_j^m {\cal B}_j^m \right)\,, \end{equation} where we have defined the E and B type normal modes \begin{eqnarray} {\cal E}_j^m &\equiv& \left({}_2M_j^m + {}_{-2}M_j^m\right)\,,\\ {\cal B}_j^m &\equiv& \left({}_2M_j^m - {}_{-2}M_j^m\right)\,. \end{eqnarray} In the optimal scheme, $Q$ can instead be expanded in a single hierarchy of multipoles $G_j^m$ that involves the same normal modes ${}_0M_j^0$ as the temperature expansion: \begin{eqnarray}\label{DefineGjmHierarchy} \sum_{j} \! E_j^m {\cal E}_j^m \!\!+\! {\rm i} B_j^m {\cal B}_j^m \! \propto \! \tilde{\cal E}^m \!\! \sum_j \! (2j\!\!+\!\!1) G_j^m {}_0M_j^0\,, \end{eqnarray} where $\tilde{\cal E}^m(\gr{n})$ is chosen to simplify the Boltzmann hierarchy as much as possible. 
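The ratio of eq.~(\ref{EBratio}) is easy to evaluate explicitly, since spin-weighted spherical harmonics can be generated from Wigner small-$d$ matrices, ${}_sY_\ell^m \propto d^{\ell}_{m,-s}(\theta)\,e^{{\rm i} m\phi}$. The sketch below (our function names; sign conventions may differ from those of the text by phases that cancel in the ratio) confirms in particular that ${}_2Y_2^0={}_{-2}Y_2^0$, so that $U$ is unsourced for scalar modes:

```python
import cmath, math

def wigner_d(j, mp, m, beta):
    """Wigner small-d matrix element d^j_{mp,m}(beta) (explicit sum formula)."""
    pre = math.sqrt(math.factorial(j + mp) * math.factorial(j - mp)
                    * math.factorial(j + m) * math.factorial(j - m))
    total = 0.0
    # the summation range excludes all terms with negative factorial arguments
    for s in range(max(0, m - mp), min(j + m, j - mp) + 1):
        total += ((-1) ** (mp - m + s)
                  / (math.factorial(j + m - s) * math.factorial(s)
                     * math.factorial(mp - m + s) * math.factorial(j - mp - s))
                  * math.cos(beta / 2) ** (2 * j + m - mp - 2 * s)
                  * math.sin(beta / 2) ** (mp - m + 2 * s))
    return pre * total

def sYlm(s, l, m, theta, phi):
    """Spin-weighted harmonic sY_l^m = (-1)^s sqrt((2l+1)/4pi)
    d^l_{m,-s}(theta) e^{i m phi} (one common convention)."""
    return ((-1) ** s * math.sqrt((2 * l + 1) / (4.0 * math.pi))
            * wigner_d(l, m, -s, theta) * cmath.exp(1j * m * phi))

def eb_ratio(m, theta, phi=0.3):
    """iU/Q ratio of a single mode m, as in the (EBratio) relation."""
    p, n = sYlm(2, 2, m, theta, phi), sYlm(-2, 2, m, theta, phi)
    return (p - n) / (p + n)
```

For tensor modes ($m=\pm2$) the ratio reduces to the familiar $-2\cos\theta/(1+\cos^2\theta)$, while for $m=0$ it vanishes identically.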
Again, if the factorization property \eqref{WrongFactor} holds, this relation can be written with ${}_{\pm 2} M_j^m \to {}_{\pm 2} Y_j^m$, and if $\tilde{\cal E}^m$ is a spherical harmonic, we can find the relation between $G_j^m$ and $(E_j^m, B_j^m)$ using Gaunt integrals, as detailed in appendix B of TL13. According to eq.~(\ref{CollQiU}), ${\cal C}_Q(\gr{n}) \propto {\cal E}_2^m(\gr{n})$. Thus, for $m\neq0$, choosing $\tilde{\cal E}^m \propto {\cal E}_2^m$ leads to a simple Boltzmann hierarchy. Indeed, in the right-hand side of eq.~(\ref{DefineGjmHierarchy}), scattering can only source the multipoles such that $Y_j^0$ is direction-independent, that is, $G_0^m$. For $m=0$, in order to recover the equations reported in \cite{Ma1995}, TL13 chose $\tilde{{\cal E}}^0$ to be a constant factor (instead of ${\cal E}_2^0$) such that the multipoles $G_j^0$ relate to $Q$ exactly as $F_j^0$ relate to $\Theta$. However, this choice comes at the expense of an additional source term for $G_2^0$ in the hierarchy, and of less straightforward relations between $E_j^0$ and the $G_j^0$. Having reduced the expansion to the simpler normal modes ${}_0 M_j^0$, one gets temperature and polarization hierarchies that are both very similar to the scalar temperature hierarchy of the TAM method, \begin{subeqnarray} \partial_\eta F_j^m &=& \frac{1}{2j+1}\left({}_0 \kappa_j^0 F_{j-1}^m- {}_0 \kappa_{j+1}^0 F_{j+1}^m\right)\nonumber\\ &&\quad -\tau' F_j^m + \mathfrak{u}_j^m\,,\\ \partial_\eta G_j^m &=& \frac{1}{2 j+1}\left({}_0 \kappa_j^0 G_{j-1}^m- {}_0 \kappa_{j+1}^0 G_{j+1}^m\right)\nonumber\\ &&\quad -\tau' G_j^m + \mathfrak{v}_j^m\,, \end{subeqnarray} with the sources $\mathfrak{u}_j^m,\mathfrak{v}_j^m$ and exact definitions for the $F_j^m,G_j^m$ given in TL13. Also, since the free-streaming part has been reduced in all cases to the same form as scalar temperature multipoles, the hierarchies for $F_j^m,G_j^m$ are truncated using \eqref{Truncation1} with $m=0$ in all cases, that is Eq.
(2.34) of TL13. Finally, the temperature and polarization spectra can be computed using eq.~\eqref{IntSol}, with the same radial functions as in the TAM method, but with the expression of the source functions $^{\Theta}{\cal C}_j^m$, $^{E}{\cal C}_j^m$ and ${\cal G}_j^m$ derived in the optimal hierarchy. The optimal hierarchy equations were already derived in TL13, but the goal of this section was to show explicitly that, at various steps in the derivation, it is necessary to assume the factorization ansatz of eq.~\eqref{WrongFactor}. As found in PP19, in the presence of spatial curvature, this factorization does not hold, such that the optimal hierarchy should not be used in principle. \section{Comparison of hierarchies}\label{SecComparison} \subsection{Implementation in CLASS} Previous versions of the CLASS code used only the optimal hierarchy. For the purpose of comparing the two schemes, we have implemented both of them, with a new input parameter {\tt hierarchies = optimal, tam}. Our modifications will be available in the next release of the code (v3.0). For the first three multipoles of the scalar temperature hierarchy, instead of following ($\Theta_0^0$, $\Theta_1^0$, $\Theta_2^0$) or ($F_0^0$, $F_1^0$, $F_2^0$), the code follows three components of the perturbed photon stress-energy tensor that match the conventions of \cite{Ma1995}: \begin{subeqnarray} \delta_\gamma &=& F_0^0 = 4 \Theta_0^0~, \\ \theta_\gamma &=& \frac{3k}{4} F_1^0 = k \Theta_1^0~, \\ \sigma_\gamma &=& \frac{1}{2s_2} F_2^0 = \frac{2}{5s_2} \Theta_2^0~, \end{subeqnarray} with $k=\sqrt{q^2-K}$ and $s_2 = \sqrt{1-3K/k^2}$. For all other multipoles and modes, the code follows the quantities ($F_\ell^m$, $G_\ell^m$) in the optimal mode and ($\Theta_\ell^m$, $E_\ell^m$, $B_\ell^m$) in the TAM mode.
We have discussed the two hierarchies in the context of photon anisotropies, but the same formalism applies to decoupled massless or massive neutrinos, or more generally to ultra-relativistic species ({\it ur}) and non-cold dark matter ({\it ncdm}), as they are called in CLASS. The only difference in such cases is the absence of both polarization and collision terms. For scalar modes, in the absence of polarization, the TAM and optimal hierarchies are mathematically equivalent, even when $K\neq0$. This can be seen in the definition of the $F_j^0$ multipoles in equation (\ref{DefineFjmHierarchy}). With $m=0$, given that $Y_0^0=1/\sqrt{4 \pi}$, we see that the expansions in $\Theta_j^0$ and in $F_j^0$ are performed along the same normal modes $_{0}M_j^0$. Then, even if $_{0}M_j^0$ is not separable in curved space, the optimal hierarchy can be obtained from the TAM one by replacing $\Theta_j^0 \to (2j+1) F_j^0$ (up to a constant factor $\frac{1}{4}$ coming from an arbitrary choice of normalization in \eqref{DefineFjmHierarchy}). For photons, there is still a difference in the temperature evolution, coming from the fact that the temperature hierarchy couples to a polarization hierarchy that differs between the two schemes. But this is not the case for the {\it ur} and {\it ncdm} species, and thus there is no need to implement explicitly the TAM hierarchy for them. On the other hand, for tensor modes, we expect the optimal hierarchy to be only approximate in the curved case, due to the non-separability of the normal modes $_{0}M_j^2$, which implies that $\Theta_j^2$ and $(2j+1) F_j^2$ are not exactly related by Gaunt integrals. This is potentially relevant for the calculation of the spectra of CMB anisotropies, since photons and neutrinos are coupled gravitationally through their shear tensors.
In both CLASS and CAMB, for tensor modes, the impact of massive neutrinos (or more generally {\it ncdm}) perturbations on the CMB angular spectra can be accounted for in two ways: (i) either using the full Boltzmann hierarchy of {\it ncdm} perturbations discretized on a grid in momentum space; or (ii) by splitting {\it ncdm} at each time $\eta$ in two components: an ultra-relativistic component with density $\rho=3p_{ncdm}$, treated as an enhancement of the {\it ur} species and thus coupled gravitationally to the photons, and a non-relativistic component with density $\rho=\rho_{ncdm} - 3p_{ncdm}$, assumed to have a negligible shear and thus no gravitational coupling with photon tensor perturbations. The second scheme is faster and accurate enough (at least for neutrinos becoming non-relativistic after photon decoupling) to be the default in CLASS. In that case, for tensor modes, the code follows the {\it ur} perturbations but not the {\it ncdm} ones. Here, we limit our analysis to the case where this approximation is used. Thus, we coded the two hierarchies for {\it ur} tensor perturbations, but not for the {\it ncdm} tensor perturbations. Depending on the scheme used, the code follows either the multipoles $F_{ur,j}^{~~2}$ (optimal) or $\Theta_{ur,j}^{~~2}$ (TAM). The gravitational wave equation is then sourced by the shear $\pi_{ur} = \frac{8}{5} \Theta_2^2$, replaced by eq.~(B.27) of TL13 in the optimal case. For scalar modes, we implemented the TAM hierarchy in both the synchronous and Newtonian gauges. In the next section, we show comparison plots obtained in the synchronous gauge, but we checked explicitly that the curves are identical in the Newtonian gauge. \subsection{Accuracy of the hierarchies} We turn to the evaluation of the difference between both hierarchies in the curved case. Since the optimal hierarchy is mathematically valid only in the flat case, we expect differences proportional to $|\Omega_K|$ in the angular spectra.
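The split used in scheme (ii) can be sketched for a frozen Fermi-Dirac distribution $f_0(q)=1/(e^q+1)$ of comoving momentum $q$: the fraction $(\rho-3p)/\rho$ treated as shear-free matter vanishes deep in the relativistic regime and tends to one in the non-relativistic regime. The following toy numerical check uses our own function names and dimensionless units, not CLASS conventions:

```python
import math

def rho_and_p(x, qmax=40.0, n=4000):
    """Background density and pressure integrals for a frozen Fermi-Dirac
    distribution f0(q) = 1/(e^q + 1), with x = a*m/T (units dropped)."""
    dq = qmax / n
    rho = p = 0.0
    for i in range(1, n + 1):                 # simple midpoint rule
        q = (i - 0.5) * dq
        eps = math.sqrt(q * q + x * x)        # comoving energy
        f0 = 1.0 / (math.exp(q) + 1.0)
        rho += q * q * eps * f0 * dq
        p += q ** 4 / (3.0 * eps) * f0 * dq
    return rho, p

def nonrel_fraction(x):
    """Fraction of the ncdm density treated as shear-free matter in the
    split scheme: (rho - 3p) / rho."""
    rho, p = rho_and_p(x)
    return (rho - 3.0 * p) / rho
```

In the relativistic limit ($x\to0$) the fraction scales as $x^2$, so the whole species is gravitationally coupled to the photons through the {\it ur} channel; once $x\gg1$ almost all of the density is in the shear-free component.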
In principle, some cancellations could occur such that the difference would scale with a higher power of $\Omega_K$; but we checked explicitly that this is not the case: the differences between the CMB spectra computed by CLASS in the two schemes scale indeed linearly with the curvature density fraction. Both implementations rely on the line-of-sight integral with identical radial functions. Since these functions account for projection effects from $q$-space to harmonic space, the geometrical effects induced by curvature -- that govern, for instance, the angular scale of the acoustic peaks -- are correctly accounted for in the two approaches. Differences can only arise from slightly different values of the source functions that appear in eqs.~\eqref{IntSol}: $^\Theta{\cal C}_j^m(\gr{q},\eta)$, ${\cal G}_j^m(\gr{q},\eta)$ (with $j=0,1,2$) and $^E{\cal C}_2^m(\gr{q},\eta)$ in the two schemes. Figure \ref{Plot1a1b} shows such differences at the level of the tensor polarization source function $P^{(2)}$ of eq.~\eqref{CollQiU}, which is related to the sources of eqs.~\eqref{IntSol} through $^E{\cal C}_2^2=-\sqrt{6}\tau' P^{(2)}$. \begin{figure}[!htb] \includegraphics[width=\columnwidth]{Sources1.pdf} \caption{Sources for tensor modes $P^{(2)}$ used in the line of sight method. The continuous lines are computed with the optimal hierarchy, and the dashed lines with the TAM hierarchy. The thicker lines are for the mode $k=0.0005\,{\rm Mpc}^{-1}$, and the thinner lines are for $k=0.001\,{\rm Mpc}^{-1}$. We only show the case of negative curvature with $\Omega_K=0.1$, but positive curvature sources are extremely similar. The lower panel shows the difference between the curves of the upper panel.
}\label{Plot1a1b} \end{figure} In each of the two schemes, the source functions are derived from equations that are sensitive to curvature only through: \begin{enumerate} \item coefficients ${}_s \kappa_j^m$ in the hierarchies, that involve factors like $\sqrt{1-nK/q^2}$ for various integers $n$, \item initial conditions, \item the background evolution at very low redshift. \end{enumerate} Since the two schemes share the same initial conditions and background evolution, and since they have a common flat-space limit, differences can only be caused by $\sqrt{1-nK/q^2}$ factors. Thus these differences must be more significant at large wavelengths, that is, for small multipoles. \begin{figure*}[!htb] \includegraphics[width=0.49\linewidth]{ScalarPosK.pdf} \includegraphics[width=0.49\linewidth]{ScalarNegK.pdf}\\ \includegraphics[width=0.49\linewidth]{TensorPosK.pdf} \includegraphics[width=0.49\linewidth]{TensorNegK.pdf} \caption{Relative differences of spectra $C_\ell^{\rm optimal}/C_\ell^{\rm TAM}-1$ for $TT$ spectra [blue (lower) lines] and $EE$ spectra [red (upper) lines]. Left panels are with positive curvature $\Omega_K<0$, and the right panels with negative curvature $\Omega_K>0$. Positive values are in continuous lines, and negative values in dashed lines. Top and bottom panels are for scalar and tensor perturbations, respectively. The cosmological parameters are the ones of the last column in table 2 of \cite{Aghanim:2018eyx}, except for the modification in $\Omega_K$ which is accompanied by a modification in $\Omega_{\Lambda}$.}\label{Plot2a2b2c2d} \end{figure*} The terms related to photon multipoles in the source functions of eqs.~\eqref{IntSol} are all multiplied by the visibility function $-\tau' e^{-\tau}$, which peaks around the times of recombination and reionization. We expect the differences between the hierarchies to manifest themselves more clearly around the time of reionization.
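For orientation, the size of these $\sqrt{1-nK/q^2}$ corrections can be estimated from the quadrupole factor $s_2=\sqrt{1-3K/k^2}$ defined earlier, generalized to $s_\ell=\sqrt{1-(\ell^2-1)K/k^2}$. In the sketch below we assume $K=-\Omega_K (H_0/c)^2$ and an illustrative value of $H_0$, so the numbers are indicative only:

```python
import math

# H0/c in 1/Mpc for an assumed H0 = 67.4 km/s/Mpc (illustrative value)
H0_INV_MPC = 67.4 / 299792.458

def s_l(l, k, omega_k):
    """Curvature factor sqrt(1 - (l^2 - 1) K / k^2) entering the hierarchy
    couplings, with K = -Omega_K (H0/c)^2 (open universe for Omega_K > 0)."""
    K = -omega_k * H0_INV_MPC ** 2
    return math.sqrt(1.0 - (l * l - 1.0) * K / (k * k))

# Deviation from the flat-space value 1 of the quadrupole coupling s_2,
# for Omega_K = 0.1 and two wavenumbers (in 1/Mpc):
dev_large_scale = abs(s_l(2, 5e-4, 0.1) - 1.0)   # very large scales
dev_small_scale = abs(s_l(2, 5e-2, 0.1) - 1.0)   # intermediate scales
```

With these numbers the correction is at the percent level for $k\sim5\times10^{-4}\,{\rm Mpc}^{-1}$ but drops by roughly four orders of magnitude for $k$ a hundred times larger, consistent with the differences being confined to small multipoles.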
Indeed, on the last scattering surface, the sources emerge from the tight-coupling regime, while at reionization free-streaming has entirely shaped them. Since the major difference between the hierarchies is the treatment of free-streaming, we expect that they have more impact on contributions from reionization. However, this contribution is subdominant in angular spectra, except for polarization spectra at low $\ell$. This induces a global suppression of differences, except on scales corresponding to the reionization bump in the polarization spectra. Furthermore, there are several properties which conspire to further reduce the differences in angular spectra (see Figs.~\ref{Plot2a2b2c2d}), which we now detail. For scalar temperature, we have already seen that the Boltzmann hierarchies of the two schemes are equivalent (because all quantities are expanded along the same normal modes $_0M_j^0$), up to the term that couples the temperature and polarization hierarchies. This term is part of $^\Theta{\cal C}_j^0$ in eq.~\eqref{Hierarchy}, and is proportional to $P^{(0)}$. Intuitively, it represents the flow of power from temperature to polarization induced by Thomson scattering. The CMB is known to be only slightly polarized, precisely because this flow is very small. Since the different polarization hierarchies only affect the temperature hierarchy through this term, the difference they induce on the evolution of temperature multipoles is very small. Finally, the scalar temperature spectrum $C_\ell^{TT}$ is inferred from the scalar temperature source function of eq.~\eqref{IntSol}, which depends mainly on temperature multipoles, on the baryon velocity field and on metric perturbations; the electric quadrupole moment $E_2^0$ brings only a very small correction. Thus we expect a very minor impact of polarization errors on the scalar temperature spectrum. This is confirmed by the blue (lower) curves in the top panels of Figure~\ref{Plot2a2b2c2d}.
The difference between the scalar spectra $C_\ell^{TT}$ predicted by the two hierarchies peaks at small $\ell$'s, and is at most of the order of $10^{-5}|\Omega_K|$. We checked explicitly that most of the difference comes from the value of the source terms (and in particular of the quadrupole $E_2^0$) around the time of reionization: in a cosmological model with reionization switched off, the difference between the spectra is orders of magnitude smaller. For scalar polarization, we do expect a larger difference, because the Boltzmann hierarchies of the two schemes are no longer exactly equivalent for $\Omega_K \neq 0$. For $m=0$, in the TAM scheme, $B_j^0$ multipoles are not sourced and remain null. Equation (B.11) of TL13 gives an explicit relation between $E_j^0$ and $G_j^0$, but according to our previous discussion, this relation would be exact only for separable normal mode functions ${}_sM_j^0$, that is for $K=0$. The source function in the polarization line-of-sight integrals, $^E{\cal C}_2^0=-\sqrt{6}\tau' P^{(0)}$, involves the sum \begin{equation} P^{(0)}=(\Theta_2^0 -\sqrt{6} E_2^{0})/10~. \end{equation} In flat space, using (B.11) of TL13, $\sqrt{6} E_2^0$ would be exactly equal to $-\frac{5}{4} (G_0^0+G_2^0)$. In curved space, one can explicitly check that the term $\sqrt{6} E_2^0$ coming from the solution of the $E_j^0$ hierarchy and the term $-\frac{5}{4} (G_0^0+G_2^0)$ coming from the solution of the $G_j^0$ hierarchy differ by $\sqrt{1-nK/q^2}$--like factors. However, in both schemes, $P^{(0)}$ is dominated by the contribution of the temperature quadrupole, correctly given in the two schemes by $\Theta_2^0=5/4F_2^0$, and to which the polarization multipoles only bring a small correction. Since the temperature hierarchy is almost unaffected by errors in the optimal scheme, differences in the solution of the polarization hierarchies do not fully propagate to the polarization spectra.
This explains why the error on $C_\ell^{EE}$, shown in the red (upper) curves in the top panels of Figure~\ref{Plot2a2b2c2d}, is still very small, of the order of $10^{-2}|\Omega_K|$ at small $\ell$'s. It is however~$\sim 10^3$ times larger than the largest difference for the temperature. Since the difference between the two schemes has more impact at the reionization epoch, the residuals are the largest in the range $\ell \leq 20$ corresponding to the reionization bump in $C_\ell^{EE}$. For tensor modes, differences are expected to be even larger, since in that case both the temperature and polarization hierarchies are different. However, the impact of the hierarchy choice is again reduced by another consideration in the temperature case. The tensor temperature source function in eqs.~\eqref{IntSol} is given by the sum $-H'+\kappa' P^{(2)}$, where $H$ is the gravitational wave transfer function. As already seen in Figure~\ref{Plot1a1b}, the term $P^{(2)}$ is clearly sensitive to the difference between the hierarchies, especially around the time of reionization. However, the tensor temperature power spectrum is dominated by the term $-H'$, which represents an integrated Sachs-Wolfe effect caused by gravitational waves. This term is given by the same Einstein equations in the two schemes, and depends on the choice of hierarchy only through the small back-reaction of the photon and neutrino shear on $H$. Thus, once more, we find a very small impact of the optimal hierarchies on the tensor temperature spectrum, of the order of $10^{-3}|\Omega_K|$ (see the blue (lower) curves in the bottom panels of Figure~\ref{Plot2a2b2c2d}). Finally, for tensor polarization, the source term in eqs.~\eqref{IntSol} is only given by $P^{(2)}$, and thus by temperature and polarization multipoles.
This is the only case in which we find that the optimal hierarchy induces a potentially relevant error, of the order of $0.5|\Omega_K|$ for $C_\ell^{EE}$ and $\ell \leq 10$ (see the red (upper) curves in the bottom panels of Figure~\ref{Plot2a2b2c2d}). The range $\ell \leq 10$ coincides with the reionization bump in the tensor $C_\ell^{EE}$, which is again consistent with the fact that the difference between hierarchies has more impact around reionization than recombination. We find essentially identical results for the $C_\ell^{BB}$ spectrum. Since in this section we were interested in extremely small differences between the angular spectra, we ran CLASS with enhanced accuracy settings (namely, the ones of the public precision parameter file {\tt cl\_ref.pre}). Note that even with such settings, a comparison with the CAMB code suggests that both Einstein-Boltzmann solvers are accurate at least at the $10^{-4}$ level~\cite{Lesgourgues:2011rg}. However, the level of convergence of each of the two codes against an increase in their own precision parameters is much better than that. Thus showing residuals smaller than $10^{-4}$ is still meaningful when one wants to highlight the effect of just one type of error (in our case, the one induced by the optimal hierarchy). Even when the residuals shown in Figure~\ref{Plot2a2b2c2d} are below $10^{-4}$, they show the specific impact of switching between hierarchies, even in the presence of comparable or larger sources of errors in other aspects of the code. To check this, we tried several accuracy settings between default precision and {\tt cl\_ref.pre}, and found that our residuals are stable and well-converged at least for $\ell<200$. For $\ell>200$ this was not always the case, and we chose to limit the plots to the first range. But given that there is a solid analytical argument for the error to decrease with $\gr{q}$ and $\ell$, it is sufficient to obtain converged results for small multipoles.
For scalar modes, we plotted the results obtained using the synchronous gauge, but we found nearly identical curves when running CLASS in the Newtonian gauge. The residuals are nearly equal even in ranges where the error induced by the optimal hierarchy is smaller than the one induced by the Newtonian gauge (the two gauges agree at the level of $10^{-6}$). This brings further confirmation that our residuals correctly capture the error induced by the optimal hierarchy only. \subsection{Efficiency of the hierarchies} To compare the efficiency of the two approaches as implemented in CLASS, we need to make several choices. Indeed, the result of timing tests should depend on many factors like: \begin{itemize} \item the level of precision: high precision (in particular, a larger truncation multipole $j_{\rm max}$) is more favorable to the optimal hierarchy; the choice of algorithm to solve Ordinary Differential Equations (ODEs) is also important; \item the underlying cosmology: with more ingredients involved, the weight of photons in the system of perturbation equations decreases, and the difference between hierarchies is less pronounced; \item the timing method: if we compare the total execution time of the code in the two cases, the result will depend a lot on the requested output; this is not the case if we only compare the time $\Delta t_{\rm ODE}$ spent by CLASS in the loop over $q$-modes, during which the system of ODEs is integrated over time for either scalar or tensor perturbations; \item the chosen number of parallel threads, the compiler, the optimization flags, etc. \end{itemize} Here we will focus on the ratio of $\Delta t_{\rm ODE}$ when the two hierarchies are used, while running CLASS with default precision, and thus with the {\tt ndf15} ODE solver \cite{ClassII}. The default precision settings of the current version of CLASS have been optimized for accurate MCMC fits of Planck data.
For the optimal hierarchy, they give $j_{\rm max}=12$ for scalar temperature, 10 for scalar polarization and 5 for tensor temperature and polarization. In the TAM hierarchy, to be consistent, we should keep the same truncation for scalar modes and increase $j_{\rm max}$ by two for tensor modes\footnote{ Indeed, the functions $Y_m^m$ and $\tilde{\cal E}^m$ that appear in the relations between the expansions in equations (\ref{DefineFjmHierarchy}, \ref{DefineGjmHierarchy}) have the geometry of a monopole for scalar modes and of a quadrupole for tensor modes.}. Thus we set $j_{\rm max}=7$ by default in the TAM tensor case. By comparing with the results of the previous sections (obtained with high precision settings and $j_{\rm max}=50$ in all cases), we checked that with such default precision settings, the accuracy level is roughly the same in the two schemes. In these timing tests, we assumed a $\Lambda$CDM model with massless neutrinos, spatial curvature and tensor modes, and we asked only for CMB output (in CLASS syntax, {\tt output = tCl,pCl,lCl}). Our results are however independent of $\Omega_K$ and apply also to flat models. We quote relative differences when the code is run sequentially, using the compiler {\tt gcc 9.2.0} with option {\tt -O4}. We find that for scalar modes, the time interval $\Delta t_{\rm ODE}$ is the same in the two schemes up to negligible (percent level) differences. This is consistent with the fact that the two hierarchies involve roughly the same number of photon multipoles: 13+11=24 in the optimal case, and 13+9=22 in the TAM case, since the scalar magnetic multipoles $B_j^0$ vanish and do not need to be defined. For tensor modes, we find a 13\% speed up in the optimal scheme. In that case, the optimal hierarchy involves 12 multipoles and the TAM hierarchy 18 multipoles. Thus, with a line-of-sight method and standard precision requirements, the efficiency of the two schemes is very similar.
Choosing one of them is mainly a matter of taste. Given that the optimal hierarchy is accurate enough for most purposes, in our implementation, we kept it as the default choice for continuity with previous CLASS versions. \section*{Conclusion} The incorrect relation between the Stokes parameters $Q$ and $U$ assumed by the optimal hierarchy leads to different source terms in the line-of-sight integrals, especially around the time of reionization, when the sources are shaped by the details of the free-streaming solution. In the observable angular spectra, differences remain very small, because the reionization epoch accounts only for a small part of the total spectra. They are further suppressed for scalar modes by the dominant role of temperature multipoles, correctly handled by both hierarchies, and for tensor temperature by the dominant role of metric perturbations. They are thus predominantly seen in the tensor polarization spectra, on the scale of the reionization bump ($\ell \leq 10$). For instance, for $|\Omega_K|=0.1$, the tensor polarization spectra are affected at the $5\%$ level for such multipoles. In the future, if cosmological observations came to prefer a slightly curved universe with, for instance, $|\Omega_K| \sim 0.02$, using the TAM hierarchy instead of the optimal one would be important for reconstructing the tensor-to-scalar ratio from $C_\ell^{BB}$ with an accuracy of 1\%. However, if the current bound from Planck+BAO gets confirmed, $|\Omega_K|<0.002$ (68\%CL), the optimal hierarchy is sufficient to guarantee a 0.1\% accuracy on the tensor polarization spectra. On the other hand, if one is interested in the transfer of super-Hubble or supercurvature modes, then it is crucial to rely on the TAM hierarchy.
For instance, it has been shown that very long (i.e., maximal wavelength) modes on top of isotropic spacetimes are equivalent to Bianchi universes~\cite{Pontzen2010,Duality}, and in that case it is crucial to rely on the correct TAM hierarchy to infer the observational consequences with this approach~\cite{PPLetter}. Using the TAM hierarchy might also be of importance for checking the validity of consistency theorems in single field inflation~\cite{Maldacena:2002vr}, which allow one to connect the primordial bispectra in squeezed configurations to products of the primordial spectra \cite{Creminelli:2011sq,Mirbabayi:2014hda}, since they involve very large-scale modes modulating the small-scale dynamics. \acknowledgements{We would like to thank Thomas Tram and Nils Sch\"oneberg for very valuable help and comments. TP thanks the Brazilian Funding Agency CNPq (grants 311527/2018-3 and 438689/2018-6) for the financial support.}
\section{Introduction} The $\sigma$ Orionis cluster contains several hundred young stars surrounding the multiple star system $\sigma$ Orionis \citep[see review by][]{walter08}. The clustering of 15 B-type stars in the region was first noted by \citet{garrison67}, and it was included in the catalog of open clusters by \citet{lynga81}. The discovery of a large population of low-mass pre-main sequence stars in the area around $\sigma$ Orionis was reported by \citet{walter97,walter98}. Subsequent photometric and spectroscopic searches have identified additional low-mass and substellar candidate members \citep[e.g.,][]{bejar99, bejar11, zapatero00, sherry04, caballero08, lodieu09, hernandez14, koenig15}. With an age of about 2$-$3 Myr \citep{sherry08}, about 30\% to 50\% of low-mass stars ($M < 1$ $M_\odot$) in the cluster retain their accretion disks \citep[e.g.,][]{oliveira06,hernandez07,luhman08,sacco08,pena_ramirez12}. Distance estimates to the cluster range from 330 to 450 pc \citep[e.g.,][]{walter08}. The multiple star system $\sigma$ Orionis (HD 37468, WDS J05387-0236) lies at the center of the cluster. The five main components include the O9~V star $\sigma$ Ori A, the B0.5~V star $\sigma$ Ori B at a separation of 0\farcs25 \citep{burnham1894,edwards76}, the A2~V star $\sigma$ Ori C at 11$''$, the B2~V star $\sigma$ Ori D at 13$''$, and the helium-rich, magnetic B2~Vpe star $\sigma$ Ori E at 42$''$ \citep{struve1837,greenstein58,landstreet78}. The multiplicity of wider or fainter components in the system is described more extensively by \citet{caballero14}. The pair $\sigma$ Ori A,B has an orbital period of about 157 yr \citep{heintz74,heintz97,hartkopf96,turner08}.
The A component was suspected to be a spectroscopic binary based on the appearance of double lines in the spectrum \citep{frost1904,miczaika50,bolton74}, but it was not until recently that a double-lined spectroscopic binary orbit was measured; the spectroscopic pair $\sigma$ Ori Aa,Ab has a period of 143 days \citep{simondiaz11,simondiaz15}. In this paper we report spatially resolved measurements of the close triple system ($\sigma$ Ori Aa,Ab,B) using long baseline optical/infrared interferometry and also present new spectroscopic radial velocity measurements. Combining the visual orbits with the new and previously published radial velocities yields the dynamical masses of the three components and the distance to the system. Precise dynamical masses of O-stars are needed for testing the predictions from different sets of evolutionary models for massive stars \citep{maeder95,gies03,weidner10,massey12,morrell14}. Additionally, a precise orbital parallax to the $\sigma$ Orionis cluster provides an accurate distance for determining the age of the cluster and characterizing the physical properties and disk life-times for the stars, brown dwarfs, and planetary-mass members in the region. \section{Interferometric Observations of the $\sigma$ Orionis Triple} A general overview of optical interferometry and measures of the interference fringes (visibility amplitude and closure phase) can be found in reviews on the subject \citep{lawson00,monnier03,haniff07}. The visibility amplitudes provide information on the size, shape, and structure of the source. The closure phases are particularly sensitive to asymmetries in the light distribution. \subsection{CHARA Observations and Data Reduction} Interferometric data on the $\sigma$ Orionis triple system were collected between 2010 and 2013 at the CHARA Array located on Mount Wilson, California. The array has six 1\,m telescopes arranged in a $Y$ configuration with baselines ranging from 34 to 331 m \citep{tenbrummelaar05}. 
There are two telescopes in each arm, labeled as E (East), W (West), and S (South). We used the Michigan Infrared Combiner \citep[MIRC;][]{monnier04,monnier06} to combine the light from three to six telescopes simultaneously. All data were collected after the photometric channels were installed in MIRC; the photometric channels measure the amount of light received from each telescope during the observations to improve the calibration \citep{che10}. We used the low spectral resolution prism ($R \sim 42$) to disperse the fringes across eight spectral channels in the $H$-band ($\lambda = 1.5-1.8 \mu$m). Table~\ref{tab.log} provides an observing log that lists the UT date, HJD, telescope configuration, interferometric calibrator stars used during the observations, the number of visibility and closure phase measurements recorded on each night, and the median seeing corrected to zenith in the $V$-band reported by the tip-tilt system during the $\sigma$ Orionis observations. The CHARA data were reduced using the standard MIRC reduction pipeline \citep[e.g.,][]{monnier07}. For nearly all nights, we used a coherent integration time of 75 ms to improve the signal-to-noise ratio. On UT 2010 November 5, we found differences in the visibility calibration using the 75 ms coherent integration time compared with the default value of 17 ms; this was probably because of rapid time variability in the seeing. For that night, we used the squared visibilities from the 17 ms integration times and the closure phases from the 75 ms integration times. The data were calibrated using observations of single stars of known angular sizes observed before and/or after the target. The adopted angular diameters for the calibrator stars are listed in Table~\ref{tab.cal}. For HD 25490 and HD 33256, we computed the angular diameters by modeling their spectral energy distributions using the method described in \citet{schaefer10}. The calibrated data were averaged over 5$-$30 minute observing blocks. 
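The calibration step described above can be illustrated with a short sketch: the system response is estimated from a calibrator of known uniform-disk diameter and divided out of the target visibilities. This is a simplified stand-in for the MIRC pipeline, with placeholder baseline and diameter values; only the uniform-disk visibility formula and the ratio are standard practice.

```python
import numpy as np
from scipy.special import j1

def uniform_disk_v2(baseline_m, theta_mas, wavelength_m):
    """Squared visibility of a uniform disk with angular diameter theta_mas."""
    theta_rad = theta_mas * np.pi / (180.0 * 3600.0 * 1000.0)
    x = np.pi * baseline_m * theta_rad / wavelength_m
    # 2*J1(x)/x -> 1 as x -> 0 (unresolved star)
    v = np.where(x > 1e-12,
                 2.0 * j1(np.maximum(x, 1e-12)) / np.maximum(x, 1e-12), 1.0)
    return v ** 2

def calibrate_v2(v2_target_raw, v2_cal_raw, baseline_m, theta_cal_mas,
                 wavelength_m=1.65e-6):
    """Divide out the system response measured on a calibrator of known size."""
    v2_system = v2_cal_raw / uniform_disk_v2(baseline_m, theta_cal_mas,
                                             wavelength_m)
    return v2_target_raw / v2_system
```

If the atmosphere and instrument attenuate all squared visibilities by the same factor, dividing the target's raw $V^2$ by the calibrator-derived system response recovers the target's intrinsic $V^2$.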
Based on calibrator studies, we applied minimum uncertainties of 5\% on the squared visibilities and 0\fdg3 on the closure phases. We corrected the wavelength scale according to the wavelength calibration computed by \citet{monnier12}. The precision in the absolute wavelength calibration is good to $\pm$0.25\%. Examples of the calibrated squared visibility amplitudes and closure phases of $\sigma$ Orionis are shown in Figures~\ref{fig.vis2_bin} and~\ref{fig.t3_bin}. The calibrated data files with the systematic uncertainties and wavelength correction applied will be available through the Optical Interferometry Database developed by the Jean-Marie Mariotti Center\footnote{http://www.jmmc.fr/oidb.htm}. \begin{figure*} \plotone{f1.eps} \caption{Squared visibilities of $\sigma$ Orionis measured with MIRC at the CHARA Array on UT 2011 September 29 (filled black circles). The red crosses indicate the visibilities derived from the best-fit scaled binary model. The observations have been averaged over 5 min observing blocks. The S1-S2 baseline has been excluded from the fit (see text). } \label{fig.vis2_bin} \end{figure*} \begin{figure*} \plotone{f2.eps} \caption{Closure phases of $\sigma$ Orionis measured with MIRC at the CHARA Array on UT 2011 September 29 (filled black circles). The red crosses indicate the closure phases derived from the best-fit scaled binary model. The observations have been averaged over 5 min observing blocks. Closure triangles that include the S1-S2 baseline have been excluded from the fit (see text). } \label{fig.t3_bin} \end{figure*} \subsection{CHARA Astrometric Results} The diffraction limit of a single 1\,m CHARA telescope in the $H$-band corresponds to $\sim$ 0\farcs4 on the sky. Therefore, light from all three components in the $\sigma$ Orionis triple (Aa, Ab, B) is recorded in the field of view of the detector (set by the injection of light into the optical fibers of MIRC). 
However, given the width of the MIRC spectral channels ($\Delta \lambda \sim 0.035$~$\mu$m) and the corresponding coherence length ($\lambda^2/\Delta\lambda \sim 75$~$\mu$m), the wide 0\farcs25 component $\sigma$ Ori B contributes only incoherent light on all but the shortest baselines, degrading the fringe amplitude by a constant amount set by the percentage of light coming from the wide component. For the shortest baselines (e.g., S1-S2 with a baseline length of 34~m), light from the wide component adds coherently to produce additional periodic variations in the visibilities and closure phases. To simplify the model fitting, we excluded from the fit the S1-S2 baseline and all closure triangles that included both the S1 and S2 telescopes. A binary star produces a periodic signal in the complex fringe visibilities \citep{boden00}. The presence of the wide third component adds incoherent flux that can be accounted for by scaling the complex visibilities, \begin{equation} V = \frac{f_1V_1 + f_2V_2 \exp{[-2\pi i (u\Delta \alpha+ v\Delta \delta)]}}{(f_1 + f_2 + f_3)} \end{equation} where ($\Delta \alpha, \Delta \delta$) are the close pair binary separation in R.A. and Decl., ($u,v$) are the baseline components projected on the sky, $V_1$ and $V_2$ are the uniform disk visibilities of the primary and secondary components with angular diameters $\theta_1$ and $\theta_2$, and $f_1$, $f_2$, and $f_3$ are the flux fractions from each of the three components ($f_1 + f_2 + f_3 = 1$). When $f_3$ is non-zero, the peaks in the periodic visibility curves no longer rise to 1. The real and imaginary parts of the complex visibility are combined to form the squared visibility amplitude between each pair of telescopes and the closure phase for each set of three telescopes. 
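The scaled visibility expression above can be evaluated directly. The sketch below is illustrative only (flux fractions and spatial frequencies are placeholders); it shows how the incoherent flux from the wide component lowers the peak visibility of the close pair.

```python
import numpy as np

MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)  # milliarcsec -> radians

def scaled_binary_vis(u, v, d_ra_mas, d_dec_mas, f1, f2, f3, v1=1.0, v2=1.0):
    """Complex visibility of the close pair with incoherent flux fraction f3
    from the wide component; u, v are spatial frequencies in cycles/radian,
    v1 and v2 the (near-unity) uniform-disk visibilities of Aa and Ab."""
    phase = -2.0 * np.pi * (u * d_ra_mas + v * d_dec_mas) * MAS_TO_RAD
    return (f1 * v1 + f2 * v2 * np.exp(1j * phase)) / (f1 + f2 + f3)
```

With flux fractions near the measured values (0.48, 0.28, 0.24), the visibility modulus oscillates between $|f_1 - f_2| = 0.20$ and $f_1 + f_2 = 0.76$, so the peaks never reach unity, as noted in the text.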
We fit the squared visibility amplitudes and closure phases measured with MIRC using this scaled binary model, assuming angular diameters for the component stars of $\theta_{\rm Aa} = 0.27$ mas and $\theta_{\rm Ab}$ = 0.21 mas (see Section~\ref{sect.npoi_orb}). The adopted values are larger than the angular diameters predicted by \citet[][0.14 mas and 0.12 mas]{simondiaz15}. However, because the stellar diameters are unresolved by the interferometer, the effect on the model fitting is small. The flux contributions change by about 1$-$2\%, while the binary positions remain consistent within the 1\,$\sigma$ uncertainties. We followed an adaptive grid search procedure \citep[similar to the method described in][]{gallenne15} where we searched through a grid of separations in R.A. and Decl. and performed a Levenberg-Marquardt least-squares minimization using the IDL mpfit\footnote{http://cow.physics.wisc.edu/\raisebox{0.1em}{\tiny$\sim$\,}craigm/idl/idl.html} routine \citep{markwardt09} to determine the best-fit binary solution for each step in the grid. We retained the solution with the lowest $\chi^2$ and examined the $\chi^2$ space to check for possible alternative solutions. For most epochs, we found a unique solution with a second minimum reflected through the origin but with the fluxes of the components in the close pair flipped (any other local minima were typically at $\Delta\chi^2 \sim 100-10,000$ above the best fit). For the data taken on UT 2013 November 3, we found an alternative solution with $\Delta\chi^2$ = 12 from the best-fit solution; in addition to the higher $\chi^2$, the alternative position is not consistent with the orbital motion mapped in Section~\ref{sect.orbAaAb}. On UT 2010 November 4, we found multiple solutions in the $\chi^2$ maps with $\chi^2 < 25$. 
This was likely caused by a combination of the limited $(u,v)$ coverage during the observation and poor data calibration because of possible alignment drifts during the long time interval to find fringes combined with poor seeing conditions as the target was setting (altitude $\sim$ 36$^\circ$). Because of the ambiguities in the solutions, we do not report a position for this night. Table~\ref{tab.sepPA_mirc} lists the separation $\rho$, position angle $\theta$ (measured east of north), and component flux contributions during each of the MIRC observations obtained at the CHARA Array. Uncertainties in the binary positions were computed from the covariance matrix and include correlations between the binary separation in R.A. and Decl. In Table~\ref{tab.sepPA_mirc}, we report the semi-major axis, semi-minor axis, and position angle of the major axis of the error ellipse ($\sigma_{\rm maj}, \sigma_{\rm min}, \phi$, respectively). We compared these uncertainties against $\chi^2$ maps generated from a two-dimensional grid search using fixed steps in separation; the error ellipses are in agreement with the size and orientation of the 1\,$\sigma$ ($\Delta \chi^2 = 1$) confidence intervals from the $\chi^2$ maps. On average, the components contribute a mean of 47.7\% $\pm$ 5.9\% (Aa), 27.4\% $\pm$ 5.2\% (Ab), and 24.9\% $\pm$ 8.0\% (B) of the total light recorded on the detector in the $H$-band. These fractional flux contributions are very similar to those in the $V$-band as estimated by \citet{simondiaz15}, $48\% $, $28\% $ and $24\% $, respectively. The larger uncertainties derived for the binary positions on 2013 November 3 and 11 are likely caused by a combination of poor seeing conditions that made finding and tracking the fringes difficult, and the limited $(u,v)$ coverage obtained from the smaller number of telescopes on which fringes could be found. 
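The conversion from a fitted covariance matrix to the reported error-ellipse parameters ($\sigma_{\rm maj}, \sigma_{\rm min}, \phi$) is a standard eigendecomposition. A minimal sketch, assuming $\phi$ is measured east of north as for the position angles:

```python
import numpy as np

def error_ellipse(cov):
    """Convert a 2x2 covariance matrix in (RA, Dec) into the semi-major axis,
    semi-minor axis, and position angle (degrees east of north) of the
    1-sigma error ellipse."""
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    sig_min, sig_maj = np.sqrt(vals)
    d_ra, d_dec = vecs[:, 1]                # eigenvector along the major axis
    phi = np.degrees(np.arctan2(d_ra, d_dec)) % 180.0
    return sig_maj, sig_min, phi
```

A diagonal covariance with a larger RA variance, for example, yields a major axis pointing due east ($\phi = 90\arcdeg$); off-diagonal terms rotate the ellipse, which is why the correlations between R.A. and Decl. must be retained.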
The binary position is expected to change more rapidly on these nights since the companion is near periastron; however, the expected motion on the sky during the time-frame of the observations is smaller than the measurement uncertainties. Breaking the data into smaller time blocks that were fit independently resulted in positions that varied randomly with even bigger error ellipses. Therefore, we report the average positions based on the fit to all measurements on each night. As a check on our results, we also fit the MIRC data using a triple model that includes the relative separation between all three components, $\sigma$ Ori Aa,Ab,B. To minimize the effects of time smearing, we used calibrated data files that were averaged over shorter 2.5 min observing blocks. To account for time smearing across the observing blocks, we computed the triple model at 10 second intervals and averaged over the complex visibilities. We also accounted for bandwidth smearing, which reduces the fringe coherence at separations comparable to the width of the fringe packet, following the formalism in \citet{kraus05}. Summing the visibilities at the location of each component, the complex visibility of a triple system is given by \begin{eqnarray} V &=& \big[ f_1 c_1(\tau) V_1 e^{-2 \pi i (u \Delta \alpha_{1} + v \Delta \delta_{1})} \nonumber \\ & & + f_2 c_2(\tau) V_2 e^{-2 \pi i (u \Delta \alpha_{2} + v \Delta \delta_{2})} \nonumber \\ & & + f_3 c_3(\tau) V_3 e^{-2 \pi i (u \Delta \alpha_{3} + v \Delta \delta_{3})} \big] \times \frac{1}{f_1 + f_2 + f_3} ~~~~~~ \end{eqnarray} where $(\Delta \alpha_{n}, \Delta \delta_{n})$ are the separations in R.A. and Decl.\ between the primary, secondary, and tertiary components ($n$ = 1, 2, 3) and the phase center. In the analysis of the MIRC data, we assumed the phase center to be the photocenter of $\sigma$ Ori Aa,Ab. 
The coherence for a rectangular bandpass profile is given by \begin{equation} c_n(\tau) = \frac{\sin{(\pi \tau_{n} \Delta\lambda/\lambda^2)}}{\pi \tau_{n} \Delta\lambda/\lambda^2} \end{equation} where the optical path length delays are given by \begin{equation} \tau_{n} = \lambda (u \Delta \alpha_{n} + v \Delta \delta_{n}) \end{equation} and $\Delta \lambda$ is the width of the wavelength channel and $\lambda$ is the central wavelength. The triple model reproduces the variation in the visibilities and closure phases on the baselines and triangles that include the S1 and S2 telescopes as shown in Figure~\ref{fig.triple}. However, the triple fit is further complicated by changes in seeing and telescope-dependent tip-tilt corrections that influence the measured photocenter of the system and the corresponding phase shift of the fringes. The wide component is over-resolved on the longer baselines, so it is primarily the short S1-S2 baseline that samples the wide pair separation. Because of this limited baseline coverage on the sky, the $\chi^2$ maps for the wide component separation sometimes have multiple peaks that are consistent with the data. On the other hand, the close pair separations derived from the triple model are stable and within the uncertainties of those from the scaled binary fit. We opted to report the simpler scaled binary solution as our final results. \begin{figure*} \scalebox{0.55}{\includegraphics{f3a.eps}} \scalebox{0.55}{\includegraphics{f3b.eps}} \caption{Squared visibilities (left) and closure phases (right) of $\sigma$ Ori measured with MIRC at the CHARA Array on UT 2011 September 29 (black circles). We show only the shortest baseline and closure triangles that include the S1-S2 telescopes. The observations have been averaged over 2.5 min observing blocks. The blue plus signs show the best-fit scaled binary model fit to all baselines and triangles. 
The observations obtained with the shortest baseline are fit much better by a triple model (red crosses) that directly includes the position of the wide companion $\sigma$ Ori B.} \label{fig.triple} \end{figure*} \subsection{VLTI Observations and Data Reduction} $\sigma$ Orionis was observed with the AMBER \citep{petrov07} beam combiner at the VLTI \citep{scholler07} using the Antu (UT1), Kueyen (UT2) and Yepun (UT4) 8.2\,m telescopes on UT 2008 October 14 (HJD 2454753.7). The data were recorded with the low-resolution mode ($R=35$) in the $H$ and $K$ bands. The longest baseline between UT1 and UT4 is nominally 130 m in length. A single observation of the science target was sandwiched between two calibrator observations, one of HD 34137 and the other of HD 36059, with diameters of $0.73 \pm 0.02$ mas and $0.51 \pm 0.01$ mas, respectively \citep{bonneau06,bonneau11}. The data were reduced using the amdlib pipeline \citep{tatulli07,chelli09} but only the top 30\% visibility data in terms of signal-to-noise ratio were used to reduce the influence of periods of poor group-delay fringe tracking. Seeing was 0\farcs8 on average, but vibrations present in the UT infrastructure limited the fringe contrast. The transfer function was linearly interpolated between the two calibrator measurements to the epoch of the science observation. With a field of view of about 60 mas with the UTs, AMBER only sees the close pair. The measured separation and position angle are $\rho=4.30\pm0.52$ mas and $\theta = 174\fdg70 \pm 4\fdg7$. The fit to the data is shown in Fig.~\ref{AMBER_fig_orbit}. The fitted magnitude difference between Ab and Aa is $0.57\pm0.03$ in the $H$ band and $0.55\pm0.02$ in the $K$ band. The $H$ band value is consistent with the CHARA value of $\Delta H=0.60$ mag. \begin{figure} \plottwo{f4a.eps}{f4b.eps}\\ \plottwo{f4c.eps}{f4d.eps}\\ \caption{AMBER $H$ and $K$ (squared) visibilities and closure phases. 
Each color in the visibility plots represents a different baseline pair. The best-fit binary model is overplotted as the solid black line.} \label{AMBER_fig_orbit} \end{figure} \subsection{NPOI Observations and Data Reduction} NPOI observations \citep{armstrong98} of $\sigma$ Orionis were collected over a period from 2000 to 2013. Initially, the observations were obtained with the 3-beam combiner, and then, starting in 2002, with the 6-beam hybrid combiner \citep{benson03}. The NPOI beam combiners disperse the light and record the visibility spectra from 550 nm to 850 nm in 16 spectral channels. In total, some 59 nights of observations were executed, of which 26 nights were of good quality. Observations of calibrator stars were interleaved with the science target. Table~\ref{NPOI_table_obs} gives information on dates, configurations, and calibrator stars observed for each night. A configuration is given as a triple of stations (e.g. ``AC-AE-W7'', using astrometric stations Center and East, as well as imaging station W7) if data from all three baselines were used, including the corresponding closure phase. If a single baseline is listed, squared visibility data from that baseline were used but no closure phase data were available involving this baseline. The calibrators were selected from a list of single stars maintained at NPOI with diameters estimated from $V$ and $(V-K)$ using the surface brightness relation published by \citet{mozurkewich03} and \citet{vanbelle09}. Estimates for $E(B-V)$ derived by comparison of the observed colors to theoretical colors as a function of spectral type as given by Schmidt-Kaler in \citet{aller82} were used to derive extinction estimates $A_V$. These were compared to measurements based on maps by \citet{drimmel03} and used to correct $V$ if the two estimates agreed within 0.5 magnitudes. 
Even though the surface brightness relationship based on $(V-K)$ colors is to first order independent of reddening, we included this small correction because our principal calibrator, $\epsilon$ Orionis (HD 37128), is a B-supergiant at more than 400 pc distance and has a predicted apparent diameter of 1.01 mas. Based on an analysis of calibrator stars observed using the Mark III interferometer, \citet{mozurkewich91} measured uniform disk diameters of 0.86 $\pm$ 0.16 mas (at 800 nm) and 1.02 $\pm$ 0.12 mas (at 450 nm) for $\epsilon$ Orionis. However, because the star was barely resolved on the Mark III baselines (up to 38 m in length), we decided to use our estimate as the more precise value. On the longest NPOI baseline that we used (E6-W7, 79 m), and in the middle of the bandpass (700 nm), the expected squared visibility of $\epsilon$ Orionis is 0.45. The information for all of the calibrators is given in Table~\ref{NPOI_table_cal}. The NPOI data and their reduction were described by \citet{hummel98,hummel03}. We used a new version of the OYSTER\footnote{http://www.eso.org/$\sim$chummel/oyster} NPOI data reduction package written in GDL\footnote{http://gnudatalanguage.sourceforge.net}. The pipeline automatically edits the 1-second averages produced by another pipeline directly from the raw frames, based on expected performance such as the variance of fringe tracker delay, photon count rates, and narrow angle tracker offsets. Visibility bias corrections are derived as usual from data recorded away from the stellar fringe packet. After averaging the data over the full length of an observation, the closure phases and the transfer function of the calibrators were interpolated to the observation epochs of $\sigma$ Orionis. For the calibration of the visibilities, the pipeline used all calibrator stars observed during a night to obtain smooth averages of the amplitude and phase transfer functions using a Gaussian kernel of 80 minutes in length. 
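The Gaussian-kernel averaging of the transfer function can be sketched as below. This is a simplified stand-in for the OYSTER pipeline, and whether the 80-minute kernel length corresponds to a FWHM or a standard deviation is an assumption made here:

```python
import numpy as np

def smooth_transfer(t_obs_min, t_cal_min, tf_cal, kernel_fwhm_min=80.0):
    """Gaussian-weighted average of calibrator transfer-function values,
    evaluated at the science observation times (all times in minutes).
    The kernel length is treated as a FWHM (an assumption)."""
    sigma = kernel_fwhm_min / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    t_obs_min = np.atleast_1d(np.asarray(t_obs_min, dtype=float))
    # weight matrix: one row per science epoch, one column per calibrator point
    w = np.exp(-0.5 * ((t_obs_min[:, None] - t_cal_min[None, :]) / sigma) ** 2)
    return (w @ tf_cal) / w.sum(axis=1)
```

Calibrator points close in time to a science observation dominate its transfer-function estimate, while the broad kernel suppresses point-to-point noise in the calibrator measurements.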
The residual scatter of the calibrator visibilities and phases around the average set the level of the calibration uncertainty and was added in quadrature to the intrinsic data errors. Considerable effort was invested in algorithms that automatically edit the visibility data based on the variance of the delay-line positions following the procedures described by \citet[][Section 4.2]{hummel03} and adapted to more complicated source structures where the signal-to-noise ratio is low. Especially in the case of $\sigma$ Orionis, deep visibility minima exist on the baselines typically employed by our observations. A final step was therefore added to detect problems by comparing the results to the predictions of the final model derived later from all data sets. An amplitude calibration error of typically a few percent in the red channels and up to 15\% in the blue channels was added in quadrature to the intrinsic error of the visibilities. The phase calibration was good to $\sim 2^\circ$. Nevertheless, because of small changes in atmospheric conditions between the observations of the calibrators and the science target, we used additional {\em baseline-based} calibration factors (``floating calibration'') to allow minor adjustments of the visibility spectra to obtain better fits to the orbital elements (and magnitude differences) of the triple system. Two thirds of the spectra were adjusted by less than 25\%; the remainder were mostly low SNR spectra. Because the components of $\sigma$ Orionis Aa, Ab, and B are unresolved (see Section~\ref{sect.npoi_orb}), the maximum visibility amplitude was fixed to unity. This procedure will not bias the astrometric results because the binary separation is constrained mostly by the variation of the visibility data with wavelength (Fig.~\ref{NPOI_fig_vissq325}). The magnitude differences between the components across the 550$-$850 nm band were determined to be 0.5 $\pm$ 0.1 mag for Ab$-$Aa and 1.5 $\pm$ 0.2 mag for B$-$A. 
We assumed that the magnitude differences between the components are the same across the $V$ and $I$ bands; this is expected since both components are hot stars and should have similar colors. \subsection{NPOI Astrometric Results} \label{sect.npoi_astrom} Because of the large angular separation of the tertiary component ($\sigma$ Ori B), rapid variations of the visibility amplitude occur on the shorter NPOI baselines, while they are completely smeared out on the longer baselines due to the finite width of the spectral channels. The number of fringes in the central envelope of an interferogram is given by $N=2\lambda/\Delta\lambda=2R$ where $\Delta\lambda$ is the width of the bandpass and $R$ is the equivalent spectral resolving power of the spectrometer. The fringe amplitude decreases to zero towards the edge of the envelope. One fringe spacing corresponds to $\lambda/B$ radians on the sky, where $B$ is the projected baseline length. Since the smallest baselines employed for our observations are about 20 M$\lambda$ long (in the reddest channel at 850 nm), the fringe spacing is about 10 mas, and thus the field of view is about 300 mas in diameter if we consider a loss in (squared) amplitude of about 60\% and $R=30$ for the NPOI spectrometers. Since the NPOI channel bandpasses are known, complex visibilities predicted by a model of the triple system are computed on a sufficiently fine wavelength grid, and then averaged over the bandpasses before converting them to squared visibilities and closure phase for comparison to the observed quantities. An example of the rapid variations of the (squared) visibility amplitude on the AN0-W7 baseline is shown in Figure~\ref{NPOI_fig_vissq325}, together with the predicted values from our final model (discussed in Section~\ref{sect.npoi_orb}). Small errors in the predicted position of the tertiary relative to the close binary can lead to significant deviations between the data and the model. 
Therefore, we first improved our knowledge of the tertiary orbit. The elements published by \citet{turner08} were based on adaptive optics and speckle measurements, the last of which dates back to the end of 2001. While our NPOI observations started around the same time, the early data sets did not allow for the unambiguous identification of the location of the tertiary (if detected at all) because of the close and regular spacing of the local minima in the $\chi^2$ surface, which is caused by undersampling of the fast variations of the visibility amplitude in combination with the often nearly parallel orientation of the baselines relative to the direction of the tertiary. The first night to provide an unambiguous identification of the position of the tertiary was 2010 March 25, as one of the baselines, AN0-W7, rotated close to orthogonal to the wide pair's orientation, causing a change in the ``wavelength'' of the visibility oscillation (seen in Fig.~\ref{NPOI_fig_vissq325}). We then added this epoch to the measurements of $\sigma$ Ori A,B available from the Washington Double Star Catalog and refit the orbital elements. Subsequently, five more nights of similar quality were identified and used to refine the orbital elements. Finally, all nights with a pronounced minimum of $\chi^2$ at the predicted position of the tertiary were included in the fit. The results are given in Table~\ref{NPOI_table_abc}, which lists the date, Julian year of the observation (at 7 UT), the number of measured visibilities, the derived separation (relative to the center of mass of the close pair), position angle, and the semi-axes and position angle of the uncertainty ellipses. The last two columns give the deviation of the fitted relative binary position $(\rho,\theta)$ from the model values. The uncertainty ellipses were computed from fits to contours of the $\chi^2$ surfaces near the minima rather than deriving them from the interferometric PSF. 
This accounts for the limitations of fitting a component position very far from the phase center. We scaled the contours to result in a reduced $\chi^2$ of unity at the minimum. The positions of $\sigma$ Ori A,B are in good agreement with measurements made at similar times by \citet{simondiaz15} and \citet{aldoretta15}. \begin{figure*} \plotone{f5.eps} \caption{NPOI squared visibilities for 2010 March 25. Panels a-f correspond to observations at 03:38, 03:42, 04:08, 04:17, 04:32, and 04:36 UT. The data shown are for baseline AN0-W7.} \label{NPOI_fig_vissq325} \end{figure*} After the orbit of the tertiary was revised, astrometric positions of the secondary were fit to the visibility data for each night separately (with fixed tertiary positions derived from the tertiary orbit). Error ellipses were estimated using the $\chi^2$ surface maps centered on the position of the secondary. The $\chi^2$ contour interval was selected to give a reduced $\chi^2$ close to unity when fitting the astrometric positions with an orbit for the close pair. This resulted in using the $\Delta \chi^2 = 40$ confidence interval. Correlations in the visibility amplitudes between the 16 channels, related to atmospheric seeing variations, reduce the number of independent data points and partly explain the size of this interval. Table~\ref{NPOI_table_ab} lists the results for the separation and position angle of $\sigma$ Ori Aa,Ab derived from the NPOI data, the semi-axes and position angle of the uncertainty ellipses, and the residuals compared with the orbit fit. \section{CTIO Spectroscopy} We obtained new spectroscopic radial velocity measurements of $\sigma$ Orionis Aa,Ab using the 1.5\,m telescope at CTIO. We obtained 40 observations on 29 nights using the Fiber Echelle (FE) Spectrograph\footnote{\url{http://www.ctio.noao.edu/~atokovin/echelle/FECH-overview.html}} ($R=25,000$, $\lambda$ = 4800--7000 \AA) between UT 2008 September 23 and 2009 February 21. 
Additional observations were obtained using the Chiron fiber-fed echelle spectrometer \citep{tokovinin13} equipped with an image slicer ($R=78,000$, $\lambda$ = 4550--8800 \AA) on 10 nights between UT 2012 November 4 and 2013 February 2 and 11 nights between UT 2016 January 21 and March 27. The Chiron observations were concentrated near periastron passage of the close pair. All of the spectra were corrected to a heliocentric velocity scale prior to measurement. For the FE data, we measured the velocities of the He I 5876 line because it is in the same order as the interstellar Na I D lines, which provide a good velocity fiducial. For the Chiron data, we fit five He I lines ($\lambda\lambda$ 4713, 4921, 5876, 6678, and 7065 \AA) and He II ($\lambda=4686$ \AA). The He I lines are stronger in the cooler, less massive component, while He II is stronger in the more rapidly rotating hotter star. We fit two Gaussian components to each line to measure the radial velocities of both components. We allowed the central wavelength, width, and amplitude of the Gaussian components to vary independently for each fit. The He II 4686 and He I 6678 line profiles are fairly clean, while contamination from weak lines from the cooler star in the three bluest He I lines required fitting up to three additional Gaussian components. We treat these additional components as nuisance parameters. Telluric lines are present at 5876 \AA\ and are a significant problem at 7065 \AA. We generated a telluric spectrum by filtering these spectra with a low-pass filter to remove the higher-frequency narrow lines while preserving the He I line profiles, and then fit these ``cleaned'' spectra. To check the wavelength stability, we measured the interstellar Na D1 and D2 lines in all of the spectra. At the lower resolution of the FE, contamination by the telluric lines can distort the Na D profiles as they shift due to the heliocentric correction. 
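The two-Gaussian line fitting described above can be sketched on a synthetic blended profile. All depths, widths, and velocities below are placeholders, not the measured values; only the method (two Gaussians with independent centers, widths, and amplitudes) follows the text.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458   # speed of light in km/s
REST = 5875.62       # He I 5876 rest wavelength [Angstrom]

def two_gauss(wl, a1, c1, w1, a2, c2, w2):
    """Two absorption Gaussians on a unit continuum."""
    return (1.0 - a1 * np.exp(-0.5 * ((wl - c1) / w1) ** 2)
                - a2 * np.exp(-0.5 * ((wl - c2) / w2) ** 2))

# Synthetic blended profile: a broad component at +90 km/s and a
# narrower, shallower component at -60 km/s (placeholder values)
wl = np.linspace(5866.0, 5886.0, 400)
obs = two_gauss(wl, 0.30, REST * (1 + 90.0 / C_KMS), 2.4,
                    0.15, REST * (1 - 60.0 / C_KMS), 0.9)

p0 = [0.3, REST + 1.0, 2.0, 0.15, REST - 1.0, 1.0]   # initial guesses
popt, _ = curve_fit(two_gauss, wl, obs, p0=p0)

# Radial velocities of the two components from the fitted line centers
rv = [(c / REST - 1.0) * C_KMS for c in (popt[1], popt[4])]
```

In practice the fit would also carry the extra nuisance Gaussians for the blended metal lines and operate on telluric-cleaned spectra, as described in the text.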
In fact, there is a small annual distortion in the measured velocity of the Na D lines in the FE spectra. The median radial velocities measured from the FE spectra are $21.72 \pm 0.41$ km\,s$^{-1}$ for Na D1 and $22.63 \pm 0.69$ km\,s$^{-1}$ for Na D2, where the uncertainties are the standard deviations from the mean. With the higher resolution Chiron spectra, we were able to fit both interstellar components (a weaker line at about $+10$ km\,s$^{-1}$) and avoid the stronger telluric features. The stronger lines have stable radial velocities with a median of $22.55 \pm 0.21$ km\,s$^{-1}$ for Na D1 and $22.54 \pm 0.19$ km\,s$^{-1}$ for Na D2. The Chiron instrumental resolution is about 3.8 km\,s$^{-1}$. There seem to be no significant offsets between the two instrument zero-points. \citet{hobbs69} resolved the Na D lines into two components with velocities of 20.5 and 24.0 km\,s$^{-1}$ at higher spectral resolution ($\sim$0.51 km\,s$^{-1}$); these would average to 22.3 km\,s$^{-1}$, consistent with our measurements. The median radial velocities of $\sigma$ Ori Aa and Ab, measured from the selected spectral lines, are presented in Table~\ref{tab.rv}. Based on the Gaussian line widths, we derived rotational velocities of $v \sin{i} \approx 125$ km\,s$^{-1}$ for Aa and $v \sin{i} \approx 43$ km\,s$^{-1}$ for Ab (assuming no limb-darkening). We did not fit for the weak and broad stationary lines from $\sigma$~Ori~B, which are difficult to detect without detailed modeling \citep{simondiaz11,simondiaz15}. Because the spectral profiles of $\sigma$~Ori~B are so shallow, their presence creates only a slight depression of the continuum near line center and has little influence on the velocity measurements of components Aa and Ab. \begin{figure*} \plottwo{f6a.eps}{f6b.eps} \caption{Radial velocities of $\sigma$ Orionis Aa,Ab published by \citet[][left]{simondiaz15} and measured at CTIO (right). 
The blue circles show the velocities measured for Aa and the red squares show the velocities for Ab. Overplotted are the radial velocity curves derived from the simultaneous orbit fit to both sets of spectroscopic data and the interferometric positions measured at the CHARA Array. The residuals for each component are shown in the lower panels.} \label{fig.orbit_sb2} \end{figure*} The first two columns of Table~\ref{tab.orb} show the spectroscopic orbital parameters derived by \citet{simondiaz15} compared with those derived from the CTIO radial velocities. There are systematic differences between the radial velocity semi-amplitudes ($K_{Aa}$, $K_{Ab}$) and the systemic velocity $\gamma$ derived from each set of data. Sim\'{o}n-D\'{i}az et al.\ cross-correlated the spectra against atmospheric models, fitting many lines simultaneously, which could account for the higher precision of their velocity semi-amplitudes. The systematic differences could result from the different methods used to fit the blended lines, as well as differences in the wavelength calibration. A comparison of the radial velocity measurements is shown in Figure~\ref{fig.orbit_sb2}. A simultaneous orbit fit to both sets of data, along with the interferometric positions, is discussed in Section~\ref{sect.orbAaAb}. \section{Orbits and Derived Properties of the $\sigma$ Orionis Triple} \subsection{Visual and Spectroscopic Orbit of the Close Pair $\sigma$ Orionis Aa,Ab} \label{sect.orbAaAb} We fit a simultaneous orbit to the higher precision interferometric positions of $\sigma$ Orionis Aa,Ab measured with CHARA in Table~\ref{tab.sepPA_mirc}, the published radial velocities reported by \citet{simondiaz15}, and the CTIO radial velocities in Table~\ref{tab.rv}. We compare the fit to the NPOI positions of the close pair in Section~\ref{sect.npoi_orb}. 
Before computing the joint orbit fit, we fit each set of data independently and scaled the measurement uncertainties to force the reduced $\chi^2_\nu = 1$ for each of the CHARA and two radial velocity sets. The measurement uncertainties in the interferometric positions were increased by a factor of 2.24, indicating that the error bars from the covariance matrix are underestimated; we report the scaled uncertainties in Table~\ref{tab.sepPA_mirc}. The reduced $\chi^2_\nu$ for the radial velocity data from \citet{simondiaz15} was already close to 1, so we did not adjust those uncertainties. The measurement errors for the CTIO radial velocities were decreased by a factor of 0.66 (the uncertainties listed in Table~\ref{tab.rv} are the unscaled values). Using the scaled uncertainties, we then fit the measured positions and radial velocities simultaneously using a Newton-Raphson method to minimize $\chi^2$ by calculating a first-order Taylor expansion for the equations of orbital motion. The last column of Table~\ref{tab.orb} provides the orbital parameters determined from the joint fit, including the period $P$, time of periastron passage $T$, eccentricity $e$, angular semi-major axis $a$, inclination $i$, position angle of the line of nodes $\Omega$, argument of periastron passage for the primary $\omega_{\rm Aa}$, and the radial velocity amplitudes of the primary and secondary $K_{Aa}$ and $K_{Ab}$. We allowed for a shift in the systemic velocity $\gamma$ between the two sets of spectroscopic radial velocities. Figures \ref{fig.orbit_sb2} and \ref{fig.orbit_vb} show the simultaneous spectroscopic and visual orbit fits. The orbital phase and radial velocity residuals for the simultaneous fit are listed in Table~\ref{tab.rv}. For comparison, we also list in Table~\ref{tab.orb} the orbital parameters determined from the fits to each set of data independently. 
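The error-bar rescaling described above can be sketched as follows; this is a minimal illustration with made-up residuals, not the actual fitting code used in this work:

```python
import numpy as np

def rescale_uncertainties(residuals, errors, n_params):
    """Scale measurement errors so that a refit would give reduced chi^2 = 1.

    residuals : observed-minus-model values from an initial fit
    errors    : the original (possibly misestimated) measurement uncertainties
    n_params  : number of free parameters in the fit
    """
    chi2 = np.sum((residuals / errors) ** 2)
    nu = residuals.size - n_params            # degrees of freedom
    scale = np.sqrt(chi2 / nu)                # >1 inflates errors, <1 deflates them
    return scale, errors * scale

# Toy data set whose residuals are exactly twice the quoted errors:
res = np.array([0.2, -0.2, 0.2, -0.2])
err = np.array([0.1, 0.1, 0.1, 0.1])
scale, new_err = rescale_uncertainties(res, err, n_params=0)
# scale = 2.0; with new_err the same residuals give chi^2_nu = 1
```

The factors of 2.24 and 0.66 quoted above correspond to `scale` values greater and less than unity, respectively.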
The velocity amplitudes derived from the joint fit depend more on the higher precision radial velocities published by \citet{simondiaz15} than on the CTIO radial velocities. The uncertainty in the wavelength calibration of $\pm$0.25\% for MIRC \citep{monnier12} will systematically increase or decrease the angular separations measured for the close pair. To account for this, we varied all of the separations systematically by $\pm$0.25\% and re-fit the orbital parameters. The second uncertainty listed for the semi-major axis for the simultaneous fit in Table~\ref{tab.orb} shows the size of the systematic uncertainty on the orbital fit. \begin{figure} \plotone{f7.eps} \caption{Visual orbit of $\sigma$ Orionis Aa,Ab based on the simultaneous fit to the interferometric positions measured using the MIRC beam combiner at the CHARA Array, the radial velocities published by \citet{simondiaz15}, and the CTIO radial velocities. The black circles mark the position of the companion Ab relative to Aa while the red ellipses show the size and orientation of the 1\,$\sigma$ uncertainties. The arrow indicates the direction of motion.} \label{fig.orbit_vb} \end{figure} \subsection{Visual Orbit of the Wide Pair $\sigma$ Orionis A,B} \label{sect.npoi_orb} As discussed in Section~\ref{sect.npoi_astrom}, we computed the orbital elements for the tertiary orbit ($\sigma$ Ori A,B) based on the positions derived from the NPOI data (Table~\ref{NPOI_table_abc}) together with all available measurements from the Washington Double Star Catalog. The orbital elements are given in Table~\ref{NPOI_table_orbit}. The tertiary orbit is shown with the NPOI measurements in Figure~\ref{NPOI_fig_orbit_abc} and all available measurements in Figure~\ref{NPOI_fig_orbit_wds}. We used the Levenberg-Marquardt method for fitting the orbital elements to the data. 
In addition to measuring the positions of the secondary and tertiary components during each individual night, we also fit the orbital parameters directly to the NPOI visibility data. This has the advantage of better constraining the system parameters, which do not change from night to night. We used the Levenberg-Marquardt procedure \citep{press92} to perform a non-linear least-squares fit to the visibility data and solved simultaneously for the orbital parameters for both orbits (Aa,Ab and A,B) and the magnitude differences between the components. We fixed the component diameters at values of 0.27 mas, 0.21 mas, and 0.17 mas for components Aa, Ab, and B, respectively. These diameters were estimated based on their $V$ magnitudes (derived from the fitted magnitude differences and the total magnitude of the system $V=3.80$ mag) and adopting the same $(V-K)$ color of $-0.69$ for all three components (as they are all of early type; derived using the total magnitude of the system of $K=4.49$ mag). Such small diameters are unresolved on the baselines used during our observations. Because of the large number of fit parameters, the numerical partial derivatives of $\chi^2$ with respect to the model parameters were based on step sizes optimized to give similar increases in $\chi^2$ for each parameter. The reduced $\chi^2$ of the fits to the visibility data was 1.66 ($\chi^2=3.1$ without the floating calibration). The orbital elements of $\sigma$ Orionis Aa,Ab derived from the NPOI data agree with the parameters derived from the CHARA data within 0.1$-$2.0\,$\sigma$, but are less precise, so we do not report the NPOI parameters explicitly. However, Figure~\ref{NPOI_fig_orbit_ab} shows that the NPOI astrometric positions are in good agreement with the CHARA orbit. When the tertiary is detected by the NPOI, it can be used as a phase reference to measure the absolute motions of components Aa and Ab relative to their center of mass. 
This provides an independent estimate of their mass ratio $M_{\rm Ab}/M_{\rm Aa}$. In Fig.~\ref{NPOI_fig_secmass} we show the reduced $\chi^2$ of the fit to the NPOI visibility data as a function of the mass of Ab, which shows a minimum at $M_{\rm Ab} = 13.5 \pm 0.4$ $M_\odot$, assuming a fixed mass for the primary of $M_{\rm Aa}$ = 16.9 $M_\odot$. Away from this value, the relative positions of the three components change as the center of mass of the close binary shifts because of the change in the mass ratio between Aa and Ab. As a check, we also show that the $\chi^2$ does not vary with tertiary mass, as may be expected from the fact that the tertiary mass changes only the phase center of the triple system. Given the uncertainty of $M_{\rm Ab}$ fit to the NPOI data, we do not consider the difference from the value of the dynamical mass derived in Section~\ref{sect.mass} to be significant. \begin{figure} \plotone{f8.eps} \caption{Orbital motion of the tertiary component ($\sigma$ Ori B) relative to the photo-center of the close pair ($\sigma$ Ori Aa,Ab). The error ellipses show the positions measured with NPOI in Table~\ref{NPOI_table_abc}. The solid line is the best-fit orbit and the dashed line is the orbit computed by \citet{turner08}.} \label{NPOI_fig_orbit_abc} \end{figure} \begin{figure} \plotone{f9.eps} \caption{Orbit of the tertiary ($\sigma$ Ori B) shown with all of the measurements available in the Washington Double Star Catalog. The high precision measurements in the north-east (upper-left) quadrant are the AstraLux measurements published by \citet{simondiaz15}. The solid line indicates periastron and the arrow shows the direction of motion. } \label{NPOI_fig_orbit_wds} \end{figure} \begin{figure} \plotone{f10.eps} \caption{Orbital positions of $\sigma$ Ori Ab relative to Aa as measured from the NPOI observations. The single VLTI AMBER observation is included as well ($\Delta\alpha=0.5$ mas, $\Delta\delta=-4.5$ mas). 
Overplotted is the orbit determined from the analysis of the CHARA MIRC observations. } \label{NPOI_fig_orbit_ab} \end{figure} \begin{figure} \plotone{f11.eps} \caption{Reduced $\chi^2$ as a function of secondary mass (top panel; $\sigma$ Ori Ab) and tertiary mass (bottom panel; $\sigma$ Ori B). } \label{NPOI_fig_secmass} \end{figure} \subsection{Stellar Masses and Distance} \label{sect.mass} Using the orbital parameters of the close pair $\sigma$ Ori Aa,Ab in Table~\ref{tab.orb}, we derive dynamical masses of $M_{\rm Aa}$ = 16.99 $\pm$ 0.20 $M_\odot$ and $M_{\rm Ab}$ = 12.81 $\pm$ 0.18 $M_\odot$. The orbital parallax of $\pi$ = 2.5806 $\pm$ 0.0088 mas gives a distance $d =$ 387.5 $\pm$ 1.3 pc to the $\sigma$ Orionis system. The total mass contained in the triple can be derived from the orbital parameters of the wide pair $\sigma$ Ori A,B in Table~\ref{NPOI_table_orbit} and the orbital parallax $\pi$, \begin{eqnarray} M_{\rm tot} &=& M_{\rm Aa} + M_{\rm Ab} + M_{\rm B} = a_{\rm AB}^3/(\pi^3 P^2_{\rm AB}) \nonumber \\ &=& 41.4 \pm 1.1~M_\odot. \nonumber \end{eqnarray} Combined with the individual masses of Aa and Ab, this yields the mass of the tertiary component of $M_{\rm B}$ = 11.5 $\pm$ 1.2 $M_\odot$. The derived physical properties of the $\sigma$ Orionis system are summarized in Table~\ref{tab.properties}. \section{Discussion} \subsection{Comparison of stellar masses with evolutionary models} \citet{simondiaz15} compared spectroscopically derived physical properties of $\sigma$ Ori Aa, Ab, and B with evolutionary tracks for rotating stars in the Milky Way computed by \citet{brott11} to derive evolutionary masses of $M_{\rm Aa}$ = 20.0 $\pm$ 1.0 $M_\odot$, $M_{\rm Ab}$ = 14.6 $\pm$ 0.8 $M_\odot$, and $M_{\rm B}$ = 13.6 $\pm$ 1.1 $M_\odot$. These masses are systematically larger than the dynamical masses we computed in Table~\ref{tab.properties}. 
Additionally, the ages derived by \citet[][Aa: 0.3$^{+1.0}_{-0.3}$ Myr, Ab: 0.9$^{+1.5}_{-0.9}$ Myr, B: 1.5$^{+1.6}_{-1.9}$ Myr]{simondiaz15} are smaller than the typical age adopted for the $\sigma$ Orionis cluster of 2$-$3 Myr \citep[e.g.,][]{sherry08}. Future progress on resolving the discrepancies in the masses and ages could involve refining the component temperatures and luminosities, or adjusting the input parameters for the evolutionary models, especially because the evolution of massive stars is strongly dependent on their rotation and metallicity \citep{brott11,ekstrom12}. \citet{weidner10a} studied the empirical correlation between the mass of a cluster and its most massive member. Our dynamical mass for $\sigma$ Ori Aa of 16.99 $\pm$ 0.20 $M_\odot$ provides an additional high precision mass measurement of the most massive member of the $\sigma$ Orionis cluster. Estimates of the total mass of the cluster range from 225 $\pm$ 30 $M_\odot$ \citep{sherry04} down to $\sim$ 150 $M_\odot$ \citep{caballero07}; these estimates are strongly dependent on the membership selection, assumed reddening, multiplicity, and evolutionary models used to estimate the masses. \subsection{Distance to the $\sigma$ Orionis Cluster} The distance to the $\sigma$ Orionis cluster has remained a large source of uncertainty in determining the age of the cluster and characterizing the physical properties and disk life-times for the stars, brown dwarfs, and planetary-mass members in the region. The {\it Hipparcos} parallax of $\sigma$ Orionis itself (2.84 $\pm$ 0.91 mas) yields a distance with a large uncertainty of $352^{+166}_{-85}$ pc \citep{perryman97}. The new reduction of the {\it Hipparcos} data gives a parallax of 3.04 $\pm$ 8.92 mas \citep{vanleeuwen07b,vanleeuwen07}, resulting in a slightly smaller distance of 329 pc, but with a much larger uncertainty. 
$\sigma$ Orionis presents a difficult problem for the {\it Hipparcos} analysis as it is bright and occasionally saturated, and the signals from the three components are mixed. This required an individual component solution in the original reduction \citep{esa97}. Such individual attention was not possible in all cases for the new reduction and accounts for the large uncertainty (F.\ van Leeuwen, 2016, priv. comm.). Nevertheless, the original and new {\it Hipparcos} reductions yield parallaxes that agree within the uncertainty of the original reduction. The orbital parallax that we measure is two orders of magnitude more precise and provides an independent check of the {\it Hipparcos} parallax for this triple star system and could be of use as a check of {\it GAIA} parallaxes for multiple stars. Several other methods have been used to estimate the distance to $\sigma$ Orionis. \citet{francis12} computed a distance to the $\sigma$ Orionis cluster of $446 \pm 30$ pc based on the average {\it Hipparcos} parallaxes measured for 15 members. By comparing the apparent magnitudes and the dynamical mass from the visual orbit of $\sigma$ Ori A,B with evolutionary models, \citet{caballero08d} derived a smaller distance of $334^{+25}_{-22}$ pc, or 385 pc if the system is treated as a triple. Using main-sequence fitting to the bright members in the $\sigma$ Orionis cluster, \citet{sherry08} derived a distance of $420 \pm 30$ pc while \cite{mayne08} derived a distance of $389^{+34}_{-24}$ pc. The variation in these distance estimates is large, although, for the most part, the values overlap within the range of their 1\,$\sigma$ uncertainties. Our distance from the orbital parallax of the $\sigma$ Orionis multiple system of 387.5 $\pm$ 1.3 pc provides a significant improvement in the precision compared with the previous estimates of the cluster distance, and will reduce the uncertainties in future estimates of the age of the cluster based on isochrone fits. 
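The conversions used in Section~\ref{sect.mass} (orbital parallax to distance, and Kepler's third law for the total mass) can be sketched as follows; the semi-major axis and period passed to the mass function here are placeholders for illustration only, since the fitted values appear in the tables rather than in the text:

```python
def distance_pc(parallax_mas):
    """Distance in parsec from a parallax in milliarcseconds."""
    return 1.0 / (parallax_mas * 1.0e-3)

def total_mass_msun(a_mas, parallax_mas, period_yr):
    """Kepler's third law, M_tot = a^3 / (pi^3 P^2), with the angular
    semi-major axis a and the parallax pi in the same angular units and
    the period P in years; a/pi is the physical semi-major axis in au."""
    return (a_mas / parallax_mas) ** 3 / period_yr ** 2

# The orbital parallax of 2.5806 mas reproduces the quoted distance:
print(round(distance_pc(2.5806), 1))  # 387.5 pc
```

A system with a physical semi-major axis of 10 au and a 10 yr period, for example, would yield a total mass of 10 solar masses regardless of the parallax adopted.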
\subsection{Alignment of the Inner and Outer Orbits} The alignment of the orbits between the inner and outer pairs in hierarchical multiple systems can probe the initial conditions of star formation \citep{fekel81,sterzik02}. The relative inclination between the inner and outer orbits is given by \begin{eqnarray} \cos{\Phi} &=& \cos{i_{\rm wide}}\cos{i_{\rm close}} \pm \sin{i_{\rm wide}}\sin{i_{\rm close}} \nonumber \\ & & \times \cos{(\Omega_{\rm wide} - \Omega_{\rm close})} \end{eqnarray} \citep{fekel81}, where $i_{\rm close}$ and $\Omega_{\rm close}$ are the inclination and position angle of the ascending node for the close orbit while $i_{\rm wide}$ and $\Omega_{\rm wide}$ are the same parameters for the wide orbit. Coplanar orbits will have a relative alignment close to $\Phi = 0$. For the wide visual pair, $\sigma$ Ori A,B, there exists a 180$^\circ$ ambiguity between $\Omega$ and $\omega$. For the close pair, $\sigma$ Ori Aa,Ab, $\omega$ is defined by the spectroscopic orbit, so there is no ambiguity with $\Omega_{\rm close}$. Using the orbital parameters in Tables~\ref{tab.orb} and \ref{NPOI_table_orbit}, and accounting for the ambiguity in $\Omega_{\rm wide}$, we find two possible values for the relative inclination between the inner and outer orbits: $120\fdg0 \pm 2\fdg6$ or $126\fdg6 \pm 2\fdg0$. Therefore, the alignment of the two orbits in the $\sigma$ Orionis triple is within $\sim 30^\circ$ of orthogonal. The orbital motion of the inner pair is prograde \citep[in the direction of increasing position angles;][]{heintz78} while the motion of the outer pair is retrograde, as indicated by the directional arrows in Figures~\ref{fig.orbit_vb} and \ref{NPOI_fig_orbit_wds}. This situation is not necessarily rare; the inner and outer orbits in the Algol triple are also nearly orthogonal, with opposing directions of motion \citep{zavala10,baron12}. 
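The relative-inclination formula above can be evaluated with a short routine like the following (a sketch; the angles in the example are arbitrary and are not the fitted values from the tables):

```python
import numpy as np

def relative_inclination(i_close, Omega_close, i_wide, Omega_wide, sign=+1):
    """Mutual inclination Phi in degrees between two orbits (Fekel 1981).

    sign = +1 or -1 selects the branch of the +/- term, reflecting the
    180-degree ambiguity in Omega for the wide visual pair.
    """
    ic, iw = np.radians(i_close), np.radians(i_wide)
    dO = np.radians(Omega_wide - Omega_close)
    cos_phi = (np.cos(iw) * np.cos(ic)
               + sign * np.sin(iw) * np.sin(ic) * np.cos(dO))
    # Clip to guard against round-off pushing |cos_phi| slightly above 1
    return np.degrees(np.arccos(np.clip(cos_phi, -1.0, 1.0)))

# Sanity check: two orbits in the same plane are coplanar (Phi = 0)
print(relative_inclination(60.0, 30.0, 60.0, 30.0, sign=+1))
```

Evaluating both `sign` branches with the fitted elements of the close and wide orbits yields the two possible values of $\Phi$ quoted above.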
\section{Conclusions} We obtained interferometric observations of the triple star $\sigma$ Orionis using the CHARA Array, NPOI, and VLTI. We revised the orbital parameters for the wide A,B pair and present the first visual orbit for the close Aa,Ab pair, fit simultaneously with new and previously published radial velocities. The orbit of the close pair is eccentric ($e \sim 0.78$) but the stars are reliably separated at periastron ($\rho_{\rm min} \sim 0.91$ mas). Through our analysis of the orbital motion in the triple system, we derived dynamical masses of $M_{\rm Aa}$ = 16.99 $\pm$ 0.20 $M_\odot$, $M_{\rm Ab}$ = 12.81 $\pm$ 0.18 $M_\odot$, and $M_{\rm B}$ = 11.5 $\pm$ 1.2 $M_\odot$, and a distance of 387.5 $\pm$ 1.3 pc. The orbital parallax places the $\sigma$~Orionis system about $7\%$ closer to the Sun than the Orion Nebula Cluster, which lies at a distance of $415 \pm 5$ pc based on VLBI parallaxes \citep{reid14}. Two other bright members of the Orion OB1b association are also known triples, $\zeta$~Ori \citep{hummel13} and $\delta$~Ori \citep{richardson15}. The outer tertiary star appears to be a rapid rotator in each of $\sigma$~Ori \citep[$V\sin i = 250$ km~s$^{-1}$;][]{simondiaz15}, $\zeta$~Ori \citep[$V\sin i = 350$ km~s$^{-1}$;][]{hummel13}, and $\delta$~Ori \citep[$V\sin i = 252$ km~s$^{-1}$;][]{richardson15}. This suggests that the angular momentum of the natal cloud was transformed mainly into orbital angular momentum for the stars of the inner binary and into spin angular momentum for the outer tertiary star. It is also possible that these triples began life as trapezium systems of four stars in which dynamical processes led to a merger of one pair that we see today as the distant rapid rotator. Joint interferometric and spectroscopic studies offer the means to determine the end products of the dynamical processes of massive star formation. 
\acknowledgements We thank Deane Peterson for initially proposing to observe $\sigma$ Orionis with NPOI, and acknowledge his and Tom Bolton's support of the project during the initial phase. We thank P.\ J.\ Goldfinger, Nic Scott, and Norm Vargas for providing operational support during the CHARA observations. We are grateful to Ming Zhao for collecting an early set of CHARA data on $\sigma$~Orionis before the photometric channels were installed in MIRC. We thank Jim Benson and the NPOI observational support staff whose efforts made the observations possible. We thank Floor van Leeuwen for a helpful discussion on the parallaxes of multiple stars observed by the {\it Hipparcos} mission. We thank the referee for providing feedback to improve the manuscript. This work is based on observations obtained with the Georgia State University Center for High Angular Resolution Astronomy Array at Mount Wilson Observatory. The CHARA Array is supported by the National Science Foundation (NSF) under Grant No. AST-1211929. GHS and DRG acknowledge support from NSF Grant AST-1411654. Institutional support has been provided from the GSU College of Arts and Sciences and the GSU Office of the Vice President for Research and Economic Development. The Navy Precision Optical Interferometer is a joint project of the Naval Research Laboratory and the US Naval Observatory, in cooperation with Lowell Observatory and is funded by the Office of Naval Research and the Oceanographer of the Navy. FMW thanks Dennis Assanis, Provost of Stony Brook University, for enabling access to the Chiron spectrograph, operated by the SMARTS consortium, through a Research Support grant. SK acknowledges support from a European Research Council Starting Grant (Grant Agreement No.\ 639889). This research has made use of the SIMBAD astronomical literature database, operated at CDS, Strasbourg, France and the Washington Double Star Catalog maintained at the U.S. Naval Observatory. 
\facilities{CHARA, CTIO:1.5m, NPOI, VLTI}
\section{Introduction} \label{introduction} As a simple four-atomic molecule, formaldehyde, H$_2$CO, also known as methanal, is of great fundamental interest. Its rotational spectrum is of great importance for radio astronomy. It was only the seventh molecule to be detected in the interstellar medium in 1969 \cite{H2CO_det_1969}, the fourth one that was detected by means of radio astronomy and only the third poly-atomic molecule; see, for example, the Interstellar \& Circumstellar Molecules page\footnote{http://www.astrochymist.org/astrochymist\_ism.html} of The Astrochymist\footnote{http://www.astrochymist.org/}. In the detection letter, the molecule was observed in absorption toward strong continuum sources, most of them dense and warm molecular clouds, so-called hot-cores. Formaldehyde was also detected in cold dark clouds, which are also dense, with the $^1$H hyperfine structure (HFS) splitting partially resolved \cite{H2CO_dark-clouds_1969}, and in less dense translucent \cite{H2CO_translucent_1987} and even less dense diffuse clouds \cite{H2CO_diffuse_1990}. It was also detected in the circumstellar envelopes of late-type stars, such as the C-rich protoplanetary nebula around V353~Aur \cite{H2CO_C-rich_PPN_1989}, also known as AFGL~618, CRL~618, or the Westbrook Nebula, the O-rich protoplanetary nebula around QX~Pup \cite{H2CO_O-rich_PPN_1992}, also known as OH231.8+4.2 or the Rotten Egg Nebula, or the C-rich asymptotic giant branch star CW~Leo \cite{H2CO_C-rich_AGB_2004}, also known as IRC+10216 or the Peanut Nebula. The H$_2$CO molecule was the second molecule after OH to be detected in galaxies different from our Milky Way, here the two near-by galaxies NGC~253 and NGC~4945 \cite{H2CO_extragal_1974}; it was also detected in more distant galaxies \cite{H2CO_B0218_1996}. 
Formaldehyde is also one of the few molecules for which maser activity was detected not only in galactic sources \cite{H2CO_maser_1974}, but also in extragalactic sources \cite{H2CO_maser_1986}. Numerous minor isotopic species were also detected in space, among them H$_2 ^{13}$CO \cite{H2C-13-O_1969}, H$_2$C$^{18}$O \cite{H2CO-18_1971}, HDCO \cite{HDCO_1979}, and D$_2$CO \cite{D2CO_1990} as the first multiply deuterated molecule in space. Unlabeled atoms refer to $^1$H, $^{12}$C, and $^{16}$O. The detection of D$_2$CO was made in the Orion~KL region, a site of high-mass star formation, where deuterium in formaldehyde was enriched by several orders of magnitude with respect to the interstellar D/H ratio of $\sim 1.5 \times 10^{-5}$. Even higher degrees of deuteration were found in the molecular clouds surrounding low-mass proto-stars, such as IRAS~16293-2422 \cite{D2CO_IRAS16293_1998}. In fact, deuteration has become a means to investigate the evolutionary stage of low-mass proto-stars. The H$_2$CO main species may be used to probe the density in denser regions of the interstellar medium \cite{H2CO_density_1980} and to determine the kinetic temperature \cite{H2CO_T-kin_1993}. The ratios of H$_2 ^{13}$CO to H$_2$C$^{18}$O have been used to infer the $^{13}$C$^{16}$O/$^{12}$C$^{18}$O double ratio in molecular clouds \cite{H2CO_1316-1218_1981,H2CO_1316-1218_1985}, which in turn may be used to determine $^{12}$C/$^{13}$C and/or $^{16}$O/$^{18}$O ratios. Formaldehyde was also seen in Earth's stratosphere employing microwave limb-sounding with the Odin satellite \cite{H2CO_ODIN_2007}; it is more commonly studied in the troposphere using infrared or UV/visible spectroscopy among other techniques \cite{H2CO_ODIN_2007}. 
The molecule was also detected in the comae of several comets, the first one being comet Halley, where it was identified tentatively by infrared spectroscopy \cite{H2CO_Halley-IR_1986,H2CO_Halley-IR_1987}, later unambiguously using microwave spectroscopy \cite{H2CO_Halley-VLA_1989}. Formaldehyde was among the first molecules whose rotational spectrum and dipole moment were studied by means of microwave spectroscopy \cite{H2CO_rot_dip_1949}. A plethora of further studies on the rotational spectrum of H$_2$CO and its isotopologues, not only in the ground but also in excited vibrational states, were published over the years. The rotational spectra of H$_2$CO and its isotopologues began to be explored in the terahertz region in the second half of the 1990s, starting with the main isotopologue \cite{H2CO_rot_1996}. Investigations of HDCO and D$_2$CO \cite{HDCO_D2CO_rot_1999}, H$_2 ^{13}$CO \cite{H2C-13-O_rot_2000}, H$_2$C$^{18}$O \cite{H2CO-18_rot_2000}, and again H$_2$CO \cite{H2CO_rot_2003} followed. The most recent study involved measurements of HDCO and D$_2$CO samples between 1.1 and 1.5~THz \cite{HDCO_D2CO_etc_rot_2015}. Important data were obtained for numerous isotopic species, with HD$^{13}$C$^{18}$O and D$_2 ^{13}$C$^{18}$O being the rarest ones. In addition, the data sets of the already well-characterized HDCO and D$_2$CO isotopic species were improved somewhat \cite{HDCO_D2CO_etc_rot_2015}. Excited vibrational states of H$_2$CO were also investigated up to terahertz frequencies \cite{H2CO_vibs_rot_2009}. The spectroscopic parameters were improved by ground state combination differences (GSCDs) for H$_2$CO \cite{H2CO_with_GSCDs_2000} and by far-infrared spectra for D$_2$CO \cite{D2CO_FIR_2004} and D$_2 ^{13}$CO \cite{D2C-13-O_FIR_2005}. In addition, the rotational spectra of formaldehyde in the ground and excited vibrational states were used to characterize a spectrometer system based on difference frequency generation \cite{H2CO_laser-mixing_2012}. 
The most abundant formaldehyde isotopic species for which terahertz data are lacking is H$_2$C$^{17}$O. Assuming that H$_2 ^{13}$C$^{18}$O is too rare to be detected at submillimeter wavelengths, H$_2$C$^{17}$O is the only one for which terahertz data are needed. Flygare and Lowe studied five $a$-type $Q$-branch transitions below 14~GHz with $K_a = 1$ and 2 and resolved the $^{17}$O HFS splitting almost completely \cite{H2CO-17_rot_1965}. Davies et al. extended the measurements up to 150~GHz with HFS resolved to a varying degree \cite{H2CO-17_rot_1980a}. These data were superseded by more extensive and more accurate measurements by Cornet et al., which extended up to 294~GHz and which were reported only shortly thereafter \cite{H2CO-17_rot_1980b}. In order to improve the predictions of the rotational spectrum of H$_2$C$^{17}$O especially for observations with the Atacama Large Millimeter/submillimeter Array (ALMA) \cite{ALMA_2008}, we recorded transitions from 0.56~THz up to 1.50~THz. Additionally, we recorded transitions of H$_2$C$^{18}$O and H$_2$C$^{16}$O in the region of 1.37~THz to 1.50~THz. We combined our new data with previously reported data for which the initially reported uncertainties were critically evaluated. This led to improved spectroscopic parameters which include $^{17}$O or H nuclear hyperfine structure parameters. \section{Experimental details} \label{exptl_details} The rotational spectrum of H$_2$C$^{17}$O was recorded in selected regions between 568 and 658~GHz and between 848 and 927~GHz with the Cologne Terahertz Spectrometer (CTS) that is described in detail elsewhere \cite{CTS_1994}. Two phase-locked backward wave oscillators (OB~80, OB~82) were used as sources and a magnetically tuned, liquid-He-cooled InSb hot electron bolometer (QMC Instruments Ltd.) was used as detector. The measurements were carried out in a 4~m long glass cell at room temperature at pressures around 1 to 2~Pa. 
The cell was equipped with windows made from high density polyethylene (HDPE). Our study on H$_2$CO \cite{H2CO_rot_2003} may serve as an example for the accuracy achievable with the CTS. Rotational spectra of H$_2$C$^{17}$O, H$_2$C$^{18}$O, and H$_2$C$^{16}$O were recorded in selected regions between 1.35 and 1.50~THz using a VDI (Virginia Diodes, Inc.) Amplified Multiplier Chain driven by an Agilent E8257D microwave synthesizer as source and an InSb bolometer as detector. Measurements were carried out in a 3~m long glass cell at room temperature at pressures around 1 to 2~Pa for weaker lines, down to around 0.1~Pa for stronger lines. The cell was again equipped with HDPE windows. Our study on low-lying vibrational states $\varv_8 \le 2$ of methyl cyanide \cite{MeCN_v8_le_2_rot_2015} may serve as an example for the accuracy achievable with this spectrometer system. Formaldehyde was generated by heating a small sample of commercial paraformaldehyde briefly with a heat-gun. Frequency modulation was used throughout with demodulation at $2f$, which causes an isolated line to appear approximately as a second derivative of a Gaussian. \begin{figure} \begin{center} \includegraphics[angle=0,width=7.0cm]{20-19K5.pdf} \end{center} \caption{Detail of the formaldehyde terahertz spectrum displaying the $J = 20 - 19$, $K_a = 5$ rotational transitions of H$_2$C$^{17}$O, H$_2$C$^{18}$O, and H$_2$C$^{16}$O (from top to bottom) with resolved asymmetry splitting. The asymmetry splitting of H$_2$C$^{17}$O is between those of the heavier and the lighter isotopologue, as can be expected. The weak feature near 1457378~MHz in the bottom panel is unassigned.} \label{asy-splitting} \end{figure} \section{Spectroscopic analysis} \label{analysis} H$_2$C$^{16}$O is an asymmetric top molecule close to the prolate limit ($\kappa = -0.9610$) with a dipole moment of 2.3317~D \cite{H2CO-div-isos_dipole_1977} along the $a$ inertial axis, which is also the C$_2$ symmetry axis. 
The asymmetry and the dipole moment change only slightly with isotopologue. The two equivalent H nuclei lead to spin-statistical weight factors of 1 and 3 for rotational states with $K_a$ even (\textit{para}) and odd (\textit{ortho}), respectively. At high resolution, HFS splitting may be resolved for \textit{ortho} transitions. This is usually only achieved at radio frequencies (RF) or in the microwave (MW) region. The splitting was also resolved in astronomical observations of colder environments, but only for $K_a = 1$ and low values of $J$. Even though formaldehyde's proximity to the prolate limit would make Watson's $S$ reduction the natural choice for fitting of its rotationally resolved spectra to some spectroscopists, use of the $A$ reduction is not so far-fetched. In fact, it was the $A$ reduction that was most commonly applied until fairly recently \cite{H2CO_rot_1996,HDCO_D2CO_rot_1999}. There was only one detailed consideration of the $S$ reduction in earlier reports \cite{H2CO_S-red_1983}, but the results presented for the main isotopologue were actually slightly worse in the $S$ reduction. The advantage of the $S$ reduction has become apparent only lately: most of the diagonal sextic distortion parameters are smaller in magnitude in the $S$ reduction than in the $A$ reduction, and the differences are even more pronounced for the octic distortion parameters \cite{H2C-13-O_rot_2000,H2CO_rot_2003,H2CO_with_GSCDs_2000}. The situation is less clear for the off-diagonal distortion parameters, $d_1$, $d_2$, etc. in the $S$ reduction, $\delta _K$, $\delta _J$, etc. in the $A$ reduction. But it is the large number of off-diagonal distortion parameters needed to fit the formaldehyde spectra and their relatively large magnitudes which cause the pronounced differences between the two reductions. Prediction and fitting of the rotational spectra were made with Pickett's SPCAT and SPFIT programs \cite{spfit_1991}. 
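The asymmetry parameter quoted at the start of this section can be checked with Ray's definition, $\kappa = (2B - A - C)/(A - C)$; the rotational constants below are approximate literature values for H$_2$C$^{16}$O, quoted here only for illustration and not taken from the present fits:

```python
def ray_kappa(A, B, C):
    """Ray's asymmetry parameter: -1 at the prolate limit, +1 at the oblate limit."""
    return (2.0 * B - A - C) / (A - C)

# Approximate ground-state rotational constants of H2C(16)O in GHz
# (illustrative values, not the fitted parameters of this work):
kappa = ray_kappa(281.97, 38.84, 34.00)
# kappa comes out close to the value of -0.9610 quoted above
```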
Our new data were fit together with previously reported line frequencies, and we consulted the original references to check the reported uncertainties. In almost all instances, we used the initially reported uncertainties, which is different from some studies where uncertainties had been increased considerably, usually without justification. In very few cases of transition frequencies with larger residuals, the uncertainties were increased slightly or the transition frequencies were omitted from the line lists. Transitions with HFS splitting were used as such. In order to keep the line list short, each isotopic species was defined twice in its parameter file, with and without HFS. Overlapping HFS or asymmetry components were treated in the fit as intensity-weighted averages, in contrast to most other fitting programs which treat each overlapping component as one piece of information with exactly the assigned frequency, which may increase the rms error unless the uncertainties are increased beyond the usual extent. Some higher order parameters were evaluated from other isotopic species, usually H$_2$C$^{16}$O, by scaling the parameters with appropriate powers of $B+C$ and $B-C$; $A-(B+C)/2$ was very similar among the three species and was not considered for scaling. Even if such scaling is not the best choice for all parameters, it is often a good approximation. Such scaling was used, for example, for $^{13}$C and $^{15}$N isotopic species of methyl cyanide \cite{MeCN_13C-vib_rot_2016}. There are different sign conventions concerning the nuclear spin-rotation parameters and the nuclear spin-nuclear spin coupling parameters. The sign conventions in SPFIT are such that in the first case the magnetic moment of H is positive. This convention is common nowadays in rotational spectroscopy, e.g., \cite{H2CO-div-isos_dipole_1977}, but is opposite to that used in nuclear magnetic resonance and in earlier rotational studies, e.g., \cite{H2CO-17_rot_1965,H2CO-17_rot_1980b}. 
The sign convention in the second case is that the nuclear spin-nuclear spin coupling parameters of homo-diatomics are negative; there appears to be no clear preference for this or for the opposite sign convention. \subsection{H$_2$C$^{17}$O} \label{17O-analysis} The $^{17}$O isotope is the rarest of the stable oxygen isotopes with a terrestrial abundance of 0.00038 \cite{composition_elements_2009}. The isotope possesses a nuclear spin of 5/2, which gives rise to HFS splitting caused by the nuclear electric quadrupole and the nuclear magnetic dipole moments. Initial predictions of the rotational spectrum of H$_2$C$^{17}$O were generated from the data reported by Flygare and Lowe \cite{H2CO-17_rot_1965} and Cornet et al. \cite{H2CO-17_rot_1980b}. Both studies resolved $^{17}$O HFS splitting to a different degree depending on the quantum numbers and the frequency region; no HFS splitting caused by the H nuclei was reported. Initial spectroscopic parameters were taken from the latter study and were subsequently converted to the $S$ reduction. Additional higher order parameters were derived from H$_2$C$^{16}$O \cite{H2CO_rot_2003}. The initially reported uncertainties were used for essentially all transition frequencies, and essentially all reported HFS information was used. Some modifications were made to the list of transition frequencies from Ref.~\cite{H2CO-17_rot_1965}. There was a typographical error in the $1_{10} - 1_{11}$ center frequency; an increase by 0.5~MHz yields a value that agrees almost within the uncertainty with the frequency calculated from the final set of spectroscopic parameters and was used in the final line list. The remaining data were reproduced slightly outside the uncertainties on average. Therefore, the uncertainties of the more poorly fitting data, the $2_{11} - 2_{12}$ center frequency and two HFS splittings of the $6_{24} - 6_{25}$ transition, were doubled.
In addition, one HFS splitting of the $2_{11} - 2_{12}$ transition, involving a weak HFS component, was omitted. These modifications obviously affected the partial rms error of this data set and, to a lesser extent, the rms error of the entire fit; the parameter values and their uncertainties were only slightly affected. Despite the low $^{17}$O isotopic abundance, the strengths of the formaldehyde absorption lines were sufficient to obtain reasonable signal-to-noise ratios for H$_2$C$^{17}$O lines in the present study, see Fig.~\ref{asy-splitting}. The detected transitions involve $\Delta K_a = 0$ $R$-branch transitions with $7 \le J \le 22$ and $K_a$ up to 7. None of the observed transitions displayed HFS splitting, as may be expected. The spectroscopic parameters determined in the final fit are almost complete up to sixth order; only $H_K$ and $H_J$ were kept fixed to the estimated values. In addition, two independent quadrupole parameters, $\chi _{aa}$ and $\chi _{bb}$, were determined along with three nuclear spin-rotation parameters. $C_{cc}$ was retained in the fit because its uncertainty is commensurate with those of $C_{aa}$ and $C_{bb}$. The value and the uncertainty of $\chi _{cc}$ were derived from the tracelessness of the quadrupole tensor. An edited version of the fit file is available as supplementary material. The final spectroscopic parameters of H$_2$C$^{17}$O are given in Table~\ref{parameters} together with those of H$_2$C$^{18}$O and H$_2$C$^{16}$O. The rms error of the final fit is 0.870, meaning that the experimental data have been reproduced within their uncertainties on average. The partial rms errors are 1.019, 0.793, and 0.903 for the data from Flygare and Lowe \cite{H2CO-17_rot_1965}, from Cornet et al. \cite{H2CO-17_rot_1980b}, and from the present investigation, respectively.
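The tracelessness constraint mentioned above amounts to $\chi_{aa} + \chi_{bb} + \chi_{cc} = 0$; a minimal numeric check of the derived value, using the H$_2$C$^{17}$O entries from Table~\ref{parameters}:

```python
# The quadrupole coupling tensor is traceless: chi_aa + chi_bb + chi_cc = 0.
# Values in MHz for H2C17O, taken from Table 1 of this work.
chi_aa = -1.903
chi_bb = 12.381

chi_cc = -(chi_aa + chi_bb)
print(round(chi_cc, 3))  # -10.478, the derived value quoted in the table
```

The uncertainty of $\chi_{cc}$ quoted in the table additionally reflects the correlations between the fitted $\chi_{aa}$ and $\chi_{bb}$, so it is not obtained by naive error propagation.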
\begin{table*} \begin{center} \caption{Spectroscopic parameters$^a$ (MHz) of formaldehyde isotopologues with $^{17}$O, $^{18}$O, and $^{16}$O along with number of lines and rms error (both unit less).} \label{parameters} {\footnotesize \begin{tabular}{lr@{}lr@{}lr@{}l} \hline Parameter & \multicolumn{2}{c}{H$_2$C$^{17}$O} & \multicolumn{2}{c}{H$_2$C$^{18}$O} & \multicolumn{2}{c}{H$_2$C$^{16}$O} \\ \hline $A-(B+C)/2$ & 246452&.397~(95) & 247253&.578~(54) & 245551&.4495~(40) \\ $(B+C)/2$ & 35513&.40370~(32) & 34707&.84108~(25) & 36419&.11528~(25) \\ $(B-C)/4$ & 1148&.454801~(90) & 1097&.2174152~(59) & 1207&.4358721~(33) \\ $D_K$ & 19&.448~(33) & 19&.5203~(151) & 19&.39136~(53) \\ $D_{JK}$ & 1&.257644~(30) & 1&.2021350~(85) & 1&.3211073~(93) \\ $D_J \times 10^3$ & 67&.10965~(90) & 64&.30788~(135) & 70&.32050~(50) \\ $d_1 \times 10^3$ & $-$9&.70379~(87) & $-$9&.08202~(31) & $-$10&.437877~(47) \\ $d_2 \times 10^3$ & $-$2&.27013~(64) & $-$2&.07709~(38) & $-$2&.501496~(33) \\ $H_K \times 10^3$ & 4&.03 & 4&.03 & 4&.027~(22) \\ $H_{KJ} \times 10^6$ & 6&.13~(56) & 2&.615~(77) & 10&.865~(79) \\ $H_{JK} \times 10^6$ & 6&.949~(25) & 6&.380~(9) & 7&.465~(16) \\ $H_J \times 10^9$ & 5&.70 & 9&.41~(170) & 3&.54~(33) \\ $h_1 \times 10^9$ & 26&.67~(135) & 27&.23~(51) & 32&.272~(58) \\ $h_2 \times 10^9$ & 43&.47~(50) & 37&.60~(27) & 47&.942~(74) \\ $h_3 \times 10^9$ & 13&.87~(27) & 12&.135~(67) & 15&.966~(15) \\ $L_{K} \times 10^6$ & $-$0&.610 & $-$0&.610 & $-$0&.610~(177) \\ $L_{KKJ} \times 10^9$ & $-$5&.7 & $-$5&.5 & $-$5&.85~(19) \\ $L_{JK} \times 10^9$ & 0&.35 & 0&.33 & 0&.367~(85) \\ $L_{JJK} \times 10^9$ & $-$0&.098 & $-$0&.091 & $-$0&.1057~(92) \\ $l_2 \times 10^{12}$ & $-$0&.30 & $-$0&.26 & $-$0&.345(50) \\ $l_3 \times 10^{12}$ & $-$0&.36 & $-$0&.31 & $-$0&.427(19) \\ $l_4 \times 10^{12}$ & $-$0&.126 & $-$0&.104 & $-$0&.1520~(32) \\ $p_5 \times 10^{18}$ & 2&.60 & 2&.06 & 3&.33 \\ & & & & & & \\ \multicolumn{7}{l}{$^{17}$O hyperfine parameters} \\ \hline $\chi _{aa}$ & 
$-1$&.903~(16) & & & & \\ $\chi _{bb}$ & 12&.381~(10) & & & & \\ $\chi _{cc}$$^b$ & $-$10&.478~(10) & & & & \\ $C_{aa} \times 10^3$ & $-$366&.4~(25) & & & & \\ $C_{bb} \times 10^3$ & $-$26&.5~(8) & & & & \\ $C_{cc} \times 10^3$ & 0&.4~(8) & & & & \\ & & & & & & \\ \multicolumn{7}{l}{$^1$H hyperfine parameters} \\ \hline $S({\rm HH}) \times 10^3$ & & & $-$17&.933~(98) & $-$17&.685~(42) \\ $C_{\| \ast} \times 10^3$ $^c$ & & & $-$3&.391 & $-$3&.368~(46) \\ $C_{\perp} \times 10^3$ $^c$ & & & $-$0&.2481 & $-$0&.2603~(135) \\ $C_{-} \times 10^3$ $^c$ & & & 1&.0943~(136) & 1&.1292~(80) \\ $C_{aa} \times 10^3$ $^b$ & & & & & $-$3&.629~(35) \\ $C_{bb} \times 10^3$ $^b$ & & & & & 1&.998~(20) \\ $C_{cc} \times 10^3$ $^b$ & & & & & $-$2&.519~(22) \\ no. of lines$^d$ & 181& & 147& & 2043&$^f$ \\ rms error$^e$ & 0&.870 & 0&.735 & 0&.904 \\ \hline \end{tabular}\\[2pt] } \end{center} $^a$\footnotesize{Watson's $S$ reduction was used in the representation $I^r$. Numbers in parentheses are one standard deviation in units of the least significant figures. Parameter values without uncertainties were estimated and kept fixed in the analyses, see end of general part of section~\ref{analysis}.}\\ $^b$\footnotesize{Derived parameter.}\\ $^c$\footnotesize{$C_{\| \ast} = C_{aa} - (C_{bb} + C_{cc})/2$; $C_{\perp} = (C_{bb} + C_{cc})/2$; $C_{-} = (C_{bb} - C_{cc})/4$.}\\ $^d$\footnotesize{Different pieces of information; i.e., a small number of multiple measurements of, e.g., one transition have been counted separately.}\\ $^e$\footnotesize{Value for the entire fit. Additional details at the end of Sections~\ref{17O-analysis}, \ref{18O-analysis}, and \ref{16O-analysis}.}\\ $^f$\footnotesize{Including 1609 GSCDs.} \end{table*} \subsection{H$_2$C$^{18}$O} \label{18O-analysis} The $^{18}$O isotope has a terrestrial abundance of 0.0020, more than five times that of $^{17}$O \cite{composition_elements_2009}. 
The abundance difference translates into a gain in signal-to-noise ratio, a shorter integration time by a factor of $\sim$30, or an appropriate combination of both, see Fig.~\ref{asy-splitting}. Initial predictions of its rotational transitions were taken from the Cologne Database for Molecular Spectroscopy, CDMS \cite{CDMS_1,CDMS_2}; these data are based on our previous study of H$_2$C$^{18}$O \cite{H2CO-18_rot_2000}. The transitions recorded in the present study cover $\Delta K_a = 0$ $R$-branch transitions with $19 \le J \le 22$ and $K_a$ up to 11. Among the previously published data, resolved HFS splitting was reported for two $\Delta K_a = 0$ $Q$-branch transitions with $K_a = 1$ and $J = 1$ \cite{H2CO-div-isos_6cm_1971} and 2 \cite{H2CO-div-isos_2cm_1972}, respectively. This HFS information was used in the present investigation especially to facilitate astronomical observations. Initial sets of $^1$H HFS parameters were derived from the main isotopic species, see Sect.~\ref{16O-analysis}. Neglecting vibrational effects, the spin-spin coupling parameters are expected to be equal, and the spin-rotation parameters $C_{gg}$ scale with the respective rotational parameters $B_g$. A satisfactory fit was obtained with just the spin-spin coupling parameter $S$(HH) and $C_{-} = (C_{bb} - C_{cc})/4$ released. These are the parameters on which the HFS splitting of these transitions depends to first order. No combination of three or even four $^1$H HFS parameters yielded a significantly better fit. Moreover, the changes from the initial parameters were deemed to be too large for some of the parameters if more than two parameters were released in the fits. In the case of the $J = 1$ transition frequencies, the $F = 0 - 1$ and $F = 2 - 2$ HFS components differ by $\sim$0.8~kHz, and the transition frequency published for the latter component corresponded much better to the intensity-weighted average of the two components.
Therefore, we assigned this frequency to the intensity-weighted average in the final fit. In the case of the $J = 2$ transitions, the $F = 2 - 2$ and $F = 2 - 3$ HFS components are close in frequency, and the frequency assigned to the stronger $F = 2 - 2$ component differed considerably from the calculated position for this component as well as from the intensity-weighted average of the two components. Therefore, this transition frequency was omitted from the final fit. All further rotational data were used as in our previous analysis \cite{H2CO-18_rot_2000}. These involve a large body of MW and mmW data from Cornet and Winnewisser \cite{H2CO-div-isos_rot_1980} along with three RF transitions \cite{H2CO-div-isos_RF_1974} and one mmW transition \cite{H2CO_div-isos_vibs_rot_1978}. The set of spectroscopic parameters determined for H$_2$C$^{18}$O is almost the same as for H$_2$C$^{17}$O, except that $H_J$ was released in the fit of the former. An edited version of the fit file is available as supplementary material. The final spectroscopic parameters of H$_2$C$^{18}$O are also given in Table~\ref{parameters}. The rms error of the entire fit is 0.735, indicative of conservative uncertainties in some data sets. The partial rms error of the HFS containing data \cite{H2CO-div-isos_6cm_1971,H2CO-div-isos_2cm_1972} in the fit is 1.065; those of the data from Refs.~\cite{H2CO-div-isos_rot_1980} and \cite{H2CO-div-isos_RF_1974} are 0.825 and 0.567, respectively. Finally, the rms errors of our previous \cite{H2CO-18_rot_2000} and present studies are 0.556 and 0.903, respectively. \subsection{H$_2$C$^{16}$O} \label{16O-analysis} Initial predictions of the rotational transitions of the main isotopic species were also taken from the CDMS \cite{CDMS_1,CDMS_2}; these data are based on our previous study of H$_2$C$^{16}$O \cite{H2CO_rot_2003}.
In the present investigation, frequencies were determined for $\Delta K_a = 0$ $R$-branch transitions with $18 \le J \le 21$ and $K_a$ up to 15, for four $\Delta K_a = 2$ transitions, and for one $\Delta K_a = 0$ $Q$-branch transition with $J = 26$ and $K_a = 1$. In order to determine the best possible set of HFS parameters, in particular for astronomical observations, we evaluated the information in the available original reports because effects of HFS were usually omitted in previous studies \cite{H2CO_rot_1996,H2CO_rot_2003,H2CO-div-isos_rot_1980,H2CO_div-isos_vibs_rot_1978}. In the course of this process, we noticed that uncertainties of previous data were increased in Ref.~\cite{H2CO_rot_1996}, usually to 1~kHz, in cases in which the originally reported uncertainties were smaller than this value. The most likely explanation is the difficulty of reproducing the data to within the reported uncertainties. This, in turn, may be explained by the reluctance to use a sufficiently large set of off-diagonal distortion parameters or by the adherence to the $A$ reduction. In addition, uncertainties appeared to have been increased for transitions with unresolved asymmetry splitting for which the calculated asymmetry splitting was much larger than the uncertainties. As in the case of H$_2$C$^{18}$O, resolved HFS splitting was reported for two $\Delta K_a = 0$ $Q$-branch transitions with $K_a = 1$ and $J = 1$ \cite{H2CO-div-isos_6cm_1971} and 2 \cite{H2CO-div-isos_2cm_1972}, respectively; the $J = 2$, $F = 2 - 2$ transition frequency omitted for H$_2$C$^{18}$O was also omitted for H$_2$C$^{16}$O. Further HFS information originated from an RF investigation of H$_2$C$^{16}$O \cite{H2CO_RF_1973}. Hyperfine-free transition frequencies were taken from Ref.~\cite{H2CO_rot_1996} with additional original data \cite{H2CO-div-isos_rot_1980,H2CO_div-isos_vibs_rot_1978,H2CO_RF_1973,H2CO_RF_1968,H2CO_RF_1977,H2CO_RF_1981,H2CO_review_1972}.
Further data come from our previous study \cite{H2CO_rot_2003}, from a study with a spectrometer system employing difference frequency generation \cite{H2CO_laser-mixing_2012}, and from GSCDs generated from IR spectra in the 3.5~$\mu$m region \cite{H2CO_IR-3p5mu_2006}, which were used in a previous ground state study \cite{H2CO_with_GSCDs_2000}. In almost all instances, we used the originally reported uncertainties; only in very few cases were uncertainties increased slightly, namely if residuals were larger than the reported uncertainties and if the partial rms error of a given data set was substantially larger than 1.0. If residuals were much larger than the reported uncertainties, the corresponding transition frequencies were omitted from the final fit. Besides the HFS component mentioned before, this applies to three $K_a = 2$ RF transitions \cite{H2CO-div-isos_dipole_1977}. Multiple data with MW accuracy were retained in the line list if the uncertainties were similar in magnitude. The omitted transitions involve mostly far-infrared laser-sideband data with uncertainties around 1~MHz \cite{H2CO_rot_1996}. The set of rotational and centrifugal distortion parameters is essentially identical to that of our previous study \cite{H2CO_rot_2003}; the only difference is the inclusion of an estimate of $p_5$ as the only parameter that was kept fixed in the fit. This parameter was derived from our study on H$_2 ^{13}$CO \cite{H2C-13-O_rot_2000}. In addition, the nuclear spin-nuclear spin coupling parameter $S({\rm HH})$ and two sets (in two different fits) of three nuclear spin-rotation parameters were determined. An edited version of one fit file is available as supplementary material. The final spectroscopic parameters of H$_2$C$^{16}$O are also given in Table~\ref{parameters}. The rms error of the entire fit is 0.904; this value is dominated by the GSCDs, for which the partial rms error is 0.947.
Numerous other subsets of the data have rms errors around 0.7. The remaining RF data from Tucker et al. \cite{H2CO-div-isos_6cm_1971,H2CO-div-isos_2cm_1972} are at the upper end (1.006); among the larger subsets, rather low values were obtained for the Kiel lines (0.333) \cite{H2CO_rot_1996} and the Cologne lines (0.506) from the same study \cite{H2CO_rot_1996}. The rms error of our new lines is 0.726. \section{Discussion and conclusion} \label{Discussion} The rotational and centrifugal distortion parameters of H$_2$C$^{17}$O, which have been determined through fitting, compare favorably with those of H$_2$C$^{18}$O and H$_2$C$^{16}$O: their values lie in essentially all instances between those of the heavier and the lighter isotopologue, see Table~\ref{parameters}. The value of $h_1$ appears to be an exception, but its uncertainty is large, and an increase by two to three times the uncertainty would bring it to the expected value. The H$_2$C$^{18}$O value of $H_J$ is larger than the H$_2$C$^{16}$O value, but the uncertainty of the former is quite large. Also, the decrease of $H_{KJ}$ from H$_2$C$^{16}$O through H$_2$C$^{17}$O to H$_2$C$^{18}$O is more pronounced than would be expected from the scaling mentioned above, but the change in the remaining parameters is rather close to what would be expected from such scaling. The improvement in the distortion parameters of H$_2$C$^{17}$O is quite obvious, as the $R$-branch transitions were extended from $J = 4 - 3$ near 300~GHz to $J = 22 - 21$ near 1500~GHz. In addition, $K_a$ now extends to 7, up from previously 4. The improvement is also pronounced for H$_2$C$^{18}$O, as most of the previous data were limited to below 835~GHz with two additional transitions near 1.87~THz. The situation is more complex for H$_2$C$^{16}$O. The uncertainties of some parameters changed only slightly, but decreased by factors of around 1.5 to 2 for several others, and even by factors of $\sim$4 for $d_1$ and $L_{KKJ}$.
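The linear combinations of the $^1$H spin-rotation parameters defined in footnote $c$ of Table~\ref{parameters} can be verified directly from the derived Cartesian values; a short numeric sketch using the H$_2$C$^{16}$O entries (in kHz):

```python
# Derived H2C16O 1H spin-rotation values (kHz) from Table 1.
C_aa, C_bb, C_cc = -3.629, 1.998, -2.519

C_par   = C_aa - (C_bb + C_cc) / 2   # C_{||*}
C_perp  = (C_bb + C_cc) / 2          # C_perp
C_minus = (C_bb - C_cc) / 4          # C_-

# These reproduce the fitted combinations -3.368, -0.2603, and 1.1292
# within the rounding of the tabulated Cartesian values.
print(C_par, C_perp, C_minus)
```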
The present $^{17}$O HFS parameters are slightly better determined than those from the initial investigation \cite{H2CO-17_rot_1965}, as can be expected because of additional data from a later study \cite{H2CO-17_rot_1980b}; uncertainties in the later study are, in part, surprisingly larger than those in the earlier study. The spin-spin coupling parameters $S$(HH) may appear quite different between the two isotopic species H$_2$C$^{18}$O and H$_2$C$^{16}$O, but the differences are less than two times the combined uncertainties. Because of these uncertainties, the observation that the value calculated from the ground state HH distance, derived from $A_0$, is $-$17.907~kHz, and thus closer to the value of H$_2$C$^{18}$O, should be taken with a grain of salt. Inclusion of higher $K_a$ HFS splitting information in the fit improved the uncertainty of $C_{aa}$ by almost a factor of 3 and those of $C_{bb}$ and $C_{cc}$ by factors of $\sim$4. Predictions of the rotational spectra of the three formaldehyde isotopologues will be available in the catalog section\footnote{http://www.astro.uni-koeln.de/cdms/} of the CDMS~\cite{CDMS_1,CDMS_2}. Edited fit files are deposited as supplementary material. In addition, line, parameter, and fit files, along with other auxiliary files, will be available in the spectroscopy data section\footnote{http://www.astro.uni-koeln.de/site/vorhersagen/daten/H2CO/} of the CDMS. \section*{Acknowledgements} We acknowledge support by the Deutsche Forschungsgemeinschaft via the collaborative research grant SFB~956 project B3.
\section{Introduction} \label{introduction} Extreme multi-label classification (XMLC) is a supervised learning problem in which only a few labels from an enormous label space, reaching orders of millions, are relevant per data point. Notable examples of problems where the XMLC framework can be effectively leveraged are tagging of text documents~\citep{Dekel_Shamir_2010}, content annotation for multimedia search~\citep{Deng_et_al_2011}, and diverse types of recommendation, including webpages-to-ads~\citep{Beygelzimer_et_al_2009b}, ads-to-bid-words~\citep{Agrawal_et_al_2013,Prabhu_Varma_2014}, users-to-items~\citep{Weston_et_al_2013, Zhuo_et_al_2020}, queries-to-items~\citep{Medini_et_al_2019}, or items-to-queries~\citep{Chang_et_al_2020}. These practical applications impose new statistical challenges, including: 1) the long-tail distribution of labels---infrequent (tail) labels are much harder to predict than frequent (head) labels due to the data imbalance problem; 2) missing relevant labels in learning data---since it is nearly impossible to check the whole set of labels when it is so large, and the chance for a label to be missing is higher for tail than for head labels~\citep{Jain_et_al_2016}. Many XMLC models achieve good predictive performance by just focusing on head labels~\citep{Wei_Li_2018}. However, this is not desirable in many of the mentioned applications (e.g., recommendation and content annotation), where tail labels might be more informative. To address this issue, \citet{Jain_et_al_2016} proposed to evaluate XMLC models in terms of propensity-scored versions of popular measures (i.e., precision$@k$, recall$@k$, and nDCG$@k$). Under the propensity model, we assume that an assignment of a label to an example is always correct, but the supervision may skip some positive labels and leave them unassigned to the example with some probability (different for each label).
In this work, we introduce the Bayes optimal inference procedure for propensity-scored precision$@k$ for probabilistic classifiers trained on observed data. While this approach can be easily applied to many classical models, we particularly show how to implement it for probabilistic label trees (\Algo{PLT}s)~\citep{Jasinska_et_al_2016}, an efficient and competitive approach to XMLC that forms the core of many existing state-of-the-art algorithms (e.g., \Algo{Parabel}~\citep{Prabhu_et_al_2018}, \Algo{extremeText}~\citep{Wydmuch_et_al_2018}, \Algo{Bonsai}~\citep{Khandagale_et_al_2019}, \Algo{AttentionXML}~\citep{You_et_al_2019}, \Algo{napkinXC}~\citep{Jasinska-Kobus_et_al_2020}, and \Algo{PECOS}, which includes the \Algo{XR-Linear}~\citep{Yu_et_al_2020} and \Algo{X-Transformers}~\citep{Chang_et_al_2020} methods). We demonstrate that this approach achieves very competitive results in terms of statistical performance and running times. \section{Problem statement} \label{sec:problem_statement} In this section, we state the problem. We first define extreme multi-label classification (XMLC) and then the propensity model. \subsection{Extreme multi-label classification} \label{subsec:xmlc} Let $\mathcal{X}$ denote an instance space, and let $\calL = [m]$ be a finite set of $m$ class labels. We assume that an instance $\vec{x} \in \mathcal{X}$ is associated with a subset of labels $\calL_{\vec{x}} \subseteq \mathcal{L}$ (the subset can be empty); this subset is often called the set of \emph{relevant} or \emph{positive} labels, while the complement $\calL \backslash \calL_{\vec{x}}$ is considered as \emph{irrelevant} or \emph{negative} for $\vec{x}$. We identify the set $\calL_{\vec{x}}$ of relevant labels with the binary vector $\vec{y} = (y_1,y_2, \ldots, y_m)$, in which $y_j = 1 \Leftrightarrow j \in \calL_{\vec{x}}$. By $\mathcal{Y} = \{0, 1\}^m$ we denote the set of all possible label vectors.
In the classical setting, we assume that observations $(\vec{x}, \vec{y})$ are generated independently and identically according to a probability distribution $\prob(\vec{x}, \vec{y})$ defined on $\mathcal{X} \times \mathcal{Y}$. Notice that the above definition concerns not only multi-label classification, but also multi-class (when $\|\vec{y}\|_1=1$) and $k$-sparse multi-label (when $\|\vec{y}\|_1\le k$) problems as special cases. In the case of XMLC we assume $m$ to be a large number (e.g., $\ge 10^5$), and $\|\vec{y}\|_1$ to be much smaller than $m$, $\|\vec{y}\|_1 \ll m$.\footnote{We use $[n]$ to denote the set of integers from $1$ to $n$, and $\|\vec{x}\|_1$ to denote the $L_1$ norm of $\vec{x}$.} The problem of XMLC can be defined as finding a \emph{classifier} $\vec{h}(\vec{x}) = (h_1(\vec{x}), h_2(\vec{x}),\ldots, h_m(\vec{x}))$, from a function class $\mathcal{H}^m: \mathcal{X} \rightarrow \mathbb{R}^m$, that minimizes the \emph{expected loss} or \emph{risk}: \begin{equation} L_\ell(\vec{h}) = \mathbb{E}_{(\vec{x},\vec{y}) \sim \prob(\vec{x},\vec{y})} \ell(\vec{y}, \vec{h}(\vec{x}))\,, \end{equation} where $\ell(\vec{y}, \hat{\vec{y}})$ is the (\emph{task}) \emph{loss}. The optimal classifier, the so-called \emph{Bayes classifier}, for a given loss function $\ell$ is: $ \vec{h}^*_\ell = \argmin_{\vec{h}} L_\ell(\vec{h}) \,. $
We observe, however, $\tilde{\vec{y}} = (\tilde{y}_1, \ldots, \tilde{y}_m)$ such that: \begin{equation} \begin{array}{l l} \prob(\tilde{y}_j = 1 \, | \, y_j = 1) = p_j\,, & \prob(\tilde{y}_j = 0 \, | \, y_j = 1) = 1 - p_j \,,\\ \prob(\tilde{y}_j = 1 \, | \, y_j = 0) = 0 \,, & \prob(\tilde{y}_j = 0 \, | \, y_j = 0) = 1 \,, \\ \end{array} \end{equation} where $p_j \in [0,1]$ is the propensity of observing a positive label when it is indeed positive. All observations in both training and test sets follow the above model. The propensity does not depend on $\vec{x}$. This means that for the observed conditional probability of label $j$, we have: \begin{equation} \tilde{\eta}_j(\vec{x}) = \prob(\tilde{y}_j = 1 \, | \, \vec{x}) = p_j\prob(y_j = 1 \, | \, \vec{x}) = p_j \eta_j(\vec{x}) \,. \end{equation} Let us denote the inverse propensity by $q_j$, i.e., $q_j = \frac{1}{p_j}$. Thus, the original conditional probability of label $j$ is given by: \begin{equation} \eta_j(\vec{x}) = \prob(y_j = 1 \, | \, \vec{x}) = q_j\prob(\tilde{y}_j = 1 \, | \, \vec{x}) = q_j \tilde{\eta}_j(\vec{x}) \,. \end{equation} Therefore, we can appropriately adjust the inference procedures of algorithms estimating $\tilde{\eta}_j(\vec{x})$ to act optimally under different propensity-scored loss functions. \section{Bayes optimal decisions for Propensity-scored Precision@k} \label{sec:bayes-optimal-decision} \citet{Jain_et_al_2016} introduced propensity-scored variants of popular XMLC measures. For precision$@k$ it takes the form: \begin{equation} psp@k(\tilde{\vec{y}}, \vec{h}_{@k}(\vec{x})) = \frac{1}{k} \sum_{j \in \hat \mathcal{L}_{\vec{x}}} q_j \assert{\tilde{y}_j = 1} \,, \end{equation} where $\hat \mathcal{L}_{\vec{x}}$ is the set of $k$ labels predicted by $\vec{h}_{@k}$ for $\vec{x}$. Notice that precision$@k$ ($p@k$) is a special case of $psp@k$ with $q_j = 1$ for all $j$. We define a loss function for propensity-scored precision$@k$ as $\ell_{psp@k} = - psp@k$.
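As a toy numeric sketch of the measure just defined (all values invented for illustration, not from the paper), $psp@k$ weights each correctly predicted, observed-positive label by its inverse propensity:

```python
import numpy as np

def psp_at_k(y_obs, predicted, q, k):
    """Propensity-scored precision@k: (1/k) * sum of q_j over the k
    predicted labels j that are observed positive."""
    return sum(q[j] for j in predicted if y_obs[j] == 1) / k

# Toy problem with m = 5 labels.
y_obs = np.array([1, 0, 1, 0, 0])        # observed (possibly incomplete) labels
q = np.array([1.2, 3.0, 2.5, 1.1, 4.0])  # inverse propensities q_j = 1/p_j
print(round(psp_at_k(y_obs, [0, 2], q, k=2), 2))  # (1.2 + 2.5) / 2 = 1.85

# Ranking by q_j * eta~_j(x), which the risk analysis in this section shows
# to be Bayes optimal, can differ from ranking by eta~_j(x) alone:
eta_tilde = np.array([0.9, 0.2, 0.3, 0.6, 0.1])  # estimated observed probabilities
print(np.argsort(-q * eta_tilde)[:2])            # top-2 labels by q_j * eta~_j
```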
The conditional risk for $\ell_{psp@k}$ is then: \begin{eqnarray*} L_{psp@k}(\vec{h}_{@k} \, | \, \vec{x}) & = & \mathbb{E}_{\tilde{\vec{y}}} \ell_{psp@k}(\tilde{\vec{y}},\vec{h}_{@k}(\vec{x})) \\ & = & - \sum_{\tilde{\vec{y}} \in \mathcal{Y}} \prob(\tilde{\vec{y}} \, | \, \vec{x}) \frac{1}{k} \sum_{j \in \hat \mathcal{L}_{\vec{x}}} q_j \assert{\tilde{y}_j = 1} \\ & = & - \frac{1}{k} \sum_{j \in \hat \mathcal{L}_{\vec{x}}} q_j \sum_{\tilde{\vec{y}} \in \mathcal{Y}} \prob(\tilde{\vec{y}} \, | \, \vec{x}) \assert{\tilde{y}_j = 1} \\ & = & - \frac{1}{k} \sum_{j \in \hat \mathcal{L}_{\vec{x}}} q_j \tilde{\eta}_j(\vec{x}) \,. \end{eqnarray*} The above result shows that the Bayes optimal classifier for $psp@k$ is determined by the conditional probabilities of labels scaled by the inverse of the label propensity. Given that the propensities or their estimates are available at prediction time, $psp@k$ is optimized by selecting the $k$ labels with the highest values of $q_j \tilde{\eta}_j(\vec{x})$. \section{Propensity-scored probabilistic label trees} Conditional probabilities of labels can be estimated using many types of multi-label classifiers, such as decision trees, $k$-nearest neighbors, or binary relevance (\Algo{BR}) trained with proper composite surrogate losses, e.g., squared error, squared hinge, logistic, or exponential loss~\citep{Zhang_2004, Agarwal_2014}. For such models, where estimates of $\tilde{\eta}_j(\vec{x})$ are available for all $j \in \calL$, application of the Bayes decision rule for propensity-scored measures is straightforward. However, in many XMLC applications, calculating the full set of conditional probabilities is not feasible. In this section, we introduce an algorithmic solution for applying the Bayes decision rule for $psp@k$ to probabilistic label trees (\Algo{PLT}s). \subsection{Probabilistic label trees (\Algo{PLT}s)} We denote a tree by $T$, the set of all its nodes by $V_T$, the root node by $\root_T$, and the set of leaves by $L_T$.
The leaf $l_j \in L_T$ corresponds to the label $j \in \calL$. The parent node of $v$ is denoted by $\pa{v}$, and the set of its child nodes by $\childs{v}$. The set of leaves of a (sub)tree rooted in node $v$ is denoted by $L_v$, and the path from node $v$ to the root by $\Path{v}$. A \Algo{PLT} uses the tree $T$ to factorize the conditional probabilities of labels, $\eta_j(\vec{x}) = \prob(y_j = 1 \vert \vec{x})$, $j \in \mathcal{L}$, by using the chain rule. Let us define the event that $\mathcal{L}_{\vec{x}}$ contains at least one relevant label in $L_{v}$: $z_v = (|\{j : l_j \in L_v\} \cap \calL_{\vec{x}}| > 0)$. Now for every node $v \in V_T$, the conditional probability of containing at least one relevant label is given by: \begin{equation} \prob(z_v = 1|\vec{x}) = \eta_v(\vec{x}) = \prod_{v' \in \Path{v}} \eta(\vec{x}, v') \,, \label{eqn:plt-factorization-prediction} \end{equation} where $\eta(\vec{x}, v) = \prob(z_v = 1 | z_{\pa{v}} = 1, \vec{x})$ for non-root nodes, and $\eta(\vec{x}, v) = \prob(z_v = 1 \, | \, \vec{x})$ for the root. Notice that (\ref{eqn:plt-factorization-prediction}) can also be stated as a recursion: \begin{equation} \eta_v(\vec{x}) = \eta(\vec{x}, v) \eta_{\pa{v}}(\vec{x}) \,, \label{eqn:plt-estimates-factorization-recursion} \end{equation} and that for leaf nodes we get the conditional probabilities of labels: \begin{equation} \eta_{l_j}(\vec{x}) = \eta_j(\vec{x}) \,, \quad \textrm{for~} l_j \in L_T \,. \label{eqn:plt_leaf_prob} \end{equation} To obtain a \Algo{PLT}, it suffices, for a given $T$, to train probabilistic classifiers from $\mathcal{H} : \R^d \mapsto [0,1]$, estimating $\eta(\vec{x}, v)$ for all $v \in V_T$. We denote estimates of $\eta$ by $\hat{\eta}$. We index this set of classifiers by the elements of $V_T$ as $H = \{ \hat{\eta}(v) \in \mathcal{H} : v \in V_T \}$.
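A minimal sketch (with invented node probabilities, not from the paper) of the factorization in Eq.~(\ref{eqn:plt-factorization-prediction}): the probability estimate for a leaf is the product of the node classifiers' outputs along the root-to-leaf path.

```python
# eta_v(x) = product over v' in Path(v) of eta(x, v').
# path_probs holds the conditional estimates for every node on the
# root-to-leaf path, root included (values here are made up).
def eta_leaf(path_probs):
    p = 1.0
    for node_prob in path_probs:
        p *= node_prob
    return p

# Root (0.9), inner node (0.8), leaf (0.5): eta_leaf is 0.9*0.8*0.5 ~ 0.36.
print(eta_leaf([0.9, 0.8, 0.5]))
```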
\subsection{Plug-in Bayes optimal prediction for \Algo{PLT}s} \input{figs/algo-psplt} An inference procedure for \Algo{PLT}s, based on \Algo{uniform-cost search}, was introduced in \citep{Jasinska_et_al_2016}. It efficiently finds the $k$ leaves with the highest $\hat{\eta}_j(\vec{x})$ values. Since the inverse propensities are larger than one, the same method cannot be reliably applied to find the leaves with the $k$ highest products of $q_j$ and $\hat{\tilde{\eta}}_j(\vec{x})$. To do so, we modify this procedure into an \Algo{$A^*$-search}-style algorithm. To this end, we introduce a cost function $f(l_j, \vec{x})$ for each path from the root to a leaf. Notice that: \begin{equation} q_j\hat{\tilde{\eta}}_j(\vec{x}) = \exp \Bigg(-\bigg(- \log q_j - \sum_{v \in \Path{l_j}} \log \hat{\tilde{\eta}}(\vec{x}, v) \bigg)\Bigg) \,. \end{equation} This allows us to use the following definition of the cost function: \begin{equation} f(l_j, \vec{x}) = \log q_{\max} - \log q_j - \sum_{v \in \Path{l_j}} \log \hat{\tilde{\eta}}(\vec{x}, v) \,, \end{equation} where $q_{\max} = \max_{j \in \calL} q_j$ is a natural upper bound on $q_j \hat{\tilde{\eta}}_j(\vec{x})$ for all paths, ensuring that the cost is non-negative. We can then guide the \Algo{$A^*$-search} with the function $\hat f(v, \vec{x}) = g(v, \vec{x}) + h(v, \vec{x})$, estimating the cost of the optimal path, where: \begin{equation} g(v, \vec{x}) = - \sum_{v' \in \Path{v}} \log \hat{\tilde{\eta}}(\vec{x}, v') \end{equation} is the cost of reaching tree node $v$ from the root, and: \begin{equation} h(v, \vec{x}) = \log q_{\max} -\log \max_{j \in \calL_{v}}q_j \end{equation} is a heuristic function estimating the cost of reaching the best leaf node from node $v$.
To guarantee that \Algo{$A^*$-search} finds the optimal solution---the top-$k$ labels with the lowest cost $f(l_j, \vec{x})$ and thereby the top-$k$ labels with the highest $q_j\hat{\tilde{\eta}}_j(\vec{x})$---we need to ensure that $h(v, \vec{x})$ is admissible, i.e., it never overestimates the cost of reaching a leaf node~\citep{Russell_Norvig_2016}. We would also like $h(v, \vec{x})$ to be consistent, making the \Algo{$A^*$-search} optimally efficient, i.e., no other algorithm using the same heuristic expands fewer nodes~\citep{Russell_Norvig_2016}. Notice that the heuristic function assumes that the probabilities estimated in the nodes of the subtree rooted in $v$ are equal to 1. Since $\log 1 = 0$, the heuristic amounts to finding the label in the subtree of $v$ with the largest value of the inverse propensity. Algorithm~\ref{alg:ps-plt-prediction} outlines the prediction procedure for \Algo{PLT}s that returns the top-$k$ labels with the highest values of $q_j\hat{\tilde{\eta}}_j(\vec{x})$. We call this algorithm Propensity-scored PLTs (\Algo{PS-PLT}s). The algorithm is very similar to the original \Algo{uniform-cost search} prediction procedure used in \Algo{PLT}s, which finds the top-$k$ labels with the highest $\hat{\eta}_j(\vec{x})$. The difference is that nodes in \Algo{PS-PLT} are evaluated in ascending order of their estimated cost values $\hat f(v, \vec{x})$ instead of in descending order of the conditional probabilities $\hat{\eta}_v(\vec{x})$. \begin{restatable}{theorem}{optimal-efficiency-of-psplt} \label{thm:optimal-efficiency-of-psplt} For any $T$, $H$, $\vec{q}$, and $\vec{x}$, Algorithm~\ref{alg:ps-plt-prediction} is admissible and optimally efficient. \end{restatable} \begin{proof} \Algo{$A^*$-search} finds an optimal solution if the heuristic $h$ is admissible, i.e., if it never overestimates the true value $h^*$, the cost of reaching the best leaf in the subtree of node $v$.
For node $v \in V$, we have: \begin{equation} h^*(v, \vec{x}) = \log q_{\max} - \log \max_{j \in \calL_{v}} q_j - \!\!\!\sum_{v'\in \Path{l_j}\setminus\Path{v} } \!\!\! \log \hat{\tilde{\eta}}(\vec{x}, v') \,. \end{equation} Since $\hat{\tilde{\eta}}(\vec{x}, v) \in [0, 1]$ and therefore $\log \hat{\tilde{\eta}}(\vec{x}, v) \le 0$, we have that $h^*(v, \vec{x}) \ge h(v, \vec{x})$ for all $v \in V_T$, which proves admissibility. \Algo{$A^*$-search} is optimally efficient if $h(v, \vec{x})$ is consistent (monotone), i.e., its estimate is always less than or equal to the estimate for any child node plus the cost of reaching that child. Since $\max_{j \in \calL_{\pa{v}}} q_j \ge \max_{j \in \calL_{v}} q_j$, and the cost of reaching $v$ from $\pa{v}$ is $-\log(\hat{\tilde{\eta}}(\vec{x}, v))$, which is greater than or equal to 0, it holds that $h(\pa{v}, \vec{x}) \le h(v, \vec{x}) - \log(\hat{\tilde{\eta}}(\vec{x}, v))$. \end{proof} The same cost function $f(l_j, \vec{x})$ can be used with other tree inference algorithms (for example, those discussed by \citet{Jasinska-Kobus_et_al_2020}), including \Algo{beam search}~\citep{Kumar_et_al_2013}, which is an approximate method for finding the $k$ leaves with the highest $\hat{\eta}_j(\vec{x})$. It is used in many existing label tree implementations such as \Algo{Parabel}, \Algo{Bonsai}, \Algo{AttentionXML} and \Algo{PECOS}. We present the \Algo{beam search} variant of \Algo{PS-PLT} in the Appendix.
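The proof's ingredients map directly onto a standard $A^*$ loop. Below is a minimal, self-contained sketch of the PS-PLT inference (the `Node` layout, the `eta` callback, and all names are our assumptions, not the paper's napkinXC implementation): subtree maxima of the inverse propensities are precomputed for the heuristic, and nodes are popped from a priority queue ordered by $\hat f = g + h$.

```python
import heapq
from math import log, exp
from dataclasses import dataclass, field

# Assumed tree layout: internal nodes list their children; leaves carry a
# label id. `q` maps label -> inverse propensity; `eta(x, node)` returns
# the estimated node probability eta-tilde.
@dataclass
class Node:
    children: list = field(default_factory=list)
    label: int = None
    q_sub: float = 0.0          # max inverse propensity in the subtree

def ps_plt_top_k(root, x, q, eta, k):
    q_max = max(q.values())

    def fill_q_sub(node):       # precompute the heuristic's subtree maxima
        node.q_sub = (q[node.label] if not node.children
                      else max(fill_q_sub(c) for c in node.children))
        return node.q_sub
    fill_q_sub(root)

    results = []
    # Priority queue ordered by f = g + h; eta at the root is taken as 1.
    heap = [(log(q_max) - log(root.q_sub), 0.0, id(root), root)]
    while heap and len(results) < k:
        f, g, _, node = heapq.heappop(heap)
        if not node.children:   # a popped leaf has an exact cost f
            results.append((node.label, q_max * exp(-f)))
            continue
        for c in node.children:
            g_c = g - log(eta(x, c))          # path cost g(c, x)
            h_c = log(q_max) - log(c.q_sub)   # admissible heuristic h(c, x)
            heapq.heappush(heap, (g_c + h_c, g_c, id(c), c))
    return results              # top-k (label, q_j * eta_j(x)) pairs
```

Because the heuristic is admissible and consistent, the first $k$ leaves popped are exactly the labels with the highest $q_j\hat{\tilde{\eta}}_j(\vec{x})$.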
\section{Experimental results} \label{sec:experimental-results} \input{figs/table-psplt-results} \input{figs/table-psplt-times} In this section, we empirically show the usefulness of the proposed plug-in approach by incorporating it into the \Algo{BR} and \Algo{PLT} algorithms and comparing these algorithms to their vanilla versions and state-of-the-art methods, particularly those that focus on tail-label performance: \Algo{PFastreXML}~\citep{Jain_et_al_2016}, \Algo{ProXML}~\citep{Babbar_Scholkopf_2019}, a variant of \Algo{DiSMEC}~\citep{Babbar_Scholkopf_2017} with a re-balanced and unbiased loss function as implemented in \Algo{PW-DiSMEC}~\citep{Qaraei_et_al_2021} (class-balanced variant), and \Algo{Parabel}~\citep{Prabhu_et_al_2018}. We conduct the comparison on six well-established XMLC benchmark datasets from the XMLC repository~\citep{Bhatia_et_al_2016}, for which we use the original train and test splits. Statistics of the datasets can be found in the Appendix. For the algorithms listed above, we report results as given in the respective papers. Since the true propensities are unknown for the benchmark datasets (the true $\vec{y}$ is unavailable due to the large label space), for empirical evaluation we model propensities as proposed by \citet{Jain_et_al_2016}: \begin{equation} p_j = \prob(\tilde{y}_j= 1 \, | \, y_{j} = 1) = \frac{1}{q_j} = \frac{1}{1 + C e^{-A \log (N_j + B)}} \,, \end{equation} where $N_j$ is the number of data points annotated with label $j$ in the observed ground truth dataset of size $N$, parameters $A$ and $B$ are specific to each dataset, and $C = (\log N - 1)(B + 1)^A$. We calculate propensity values on the train set of each dataset using the parameter values recommended in \citep{Jain_et_al_2016}. Values of $A$ and $B$ are included in Table~\ref{tab:psplt-vs-sota}. We evaluate all algorithms with both propensity-scored and standard precision$@k$.
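For concreteness, this propensity model can be implemented in a few lines (a sketch; the default values $A = 0.55$, $B = 1.5$ below are the generic defaults recommended by \citet{Jain_et_al_2016}, not the per-dataset values used in the experiments):

```python
from math import log, exp

def propensity(n_j, n, a=0.55, b=1.5):
    """Empirical propensity p_j of Jain et al. (2016).
    n_j: number of observed training points with label j; n: dataset size.
    a, b are dataset-specific; 0.55 and 1.5 are generic defaults
    (an assumption here, not values tied to any dataset in the paper)."""
    c = (log(n) - 1.0) * (b + 1.0) ** a
    return 1.0 / (1.0 + c * exp(-a * log(n_j + b)))
```

Note that rarer labels (smaller $N_j$) get smaller $p_j$, and hence larger inverse propensities $q_j = 1/p_j > 1$, which up-weights tail labels in $psp@k$.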
We modified the recently introduced \Algo{napkinXC}~\citep{Jasinska-Kobus_et_al_2020} implementation of \Algo{PLT}s,\footnote{Repository with the code and scripts to reproduce the experiments: \url{https://github.com/mwydmuch/napkinXC}} which obtains state-of-the-art results and uses \Algo{Uniform-Cost Search} as its inference method. We train the binary models in both \Algo{BR} and \Algo{PLT}s using the \Algo{LIBLINEAR} library~\citep{liblinear} with $L_2$-regularized logistic regression. For \Algo{PLT}s, we use an ensemble of 3 trees built with the hierarchical 2-means clustering algorithm (with clusters of size 100), popularized by \Algo{Parabel}~\citep{Prabhu_et_al_2018}. Because the tree-building procedure involves randomness, we repeat all \Algo{PLT} experiments five times and report the mean performance. We report standard errors, along with additional results for the popular $L_2$-regularized squared hinge loss and for the \Algo{beam search} variant of \Algo{PS-PLT}, in the Appendix. The experiments were performed on an Intel Xeon E5-2697 v3 2.6GHz machine with 128GB of memory. The main results of the experimental comparison are presented in Table~\ref{tab:psplt-vs-sota}. Propensity-scored \Algo{BR} and \Algo{PLT}s consistently obtain better propensity-scored precision$@k$. At the same time, they slightly degrade standard precision$@k$ on four datasets and improve it on two. No single method dominates the others on all datasets, but \Algo{PS-PLT}s are the best sub-linear method, achieving the best $psp@\{1,3,5\}$ results in this category on five out of six datasets, while in many cases being competitive with \Algo{ProXML} and \Algo{PW-DiSMEC}, which often require orders of magnitude more time for training and prediction than \Algo{PS-PLT}.
In Table~\ref{tab:psplt-vs-plt-test-time}, we show the CPU train and test times of \Algo{PS-PLT}s compared to vanilla \Algo{PLT}s, \Algo{PFastreXML}, \Algo{ProXML} and \Algo{PW-DiSMEC} on our hardware (approximated for the last two using a subset of labels). \section{Conclusions} In this work, we demonstrated a simple approach for obtaining Bayes optimal predictions for propensity-scored precision$@k$, which can be applied to a wide group of probabilistic classifiers. In particular, we introduced an admissible and consistent inference algorithm for probabilistic label trees, the underlying model of methods such as \Algo{Parabel}, \Algo{Bonsai}, \Algo{napkinXC}, \Algo{extremeText}, \Algo{AttentionXML} and \Algo{PECOS}. \Algo{PS-PLT}s show significant improvement with respect to propensity-scored precision$@k$, achieving state-of-the-art results in the group of algorithms with sub-linear training and prediction times. Furthermore, the introduced approach does not require any retraining of the underlying classifiers if the propensities change. Since estimating true propensities may be hard in real-world applications, this property makes our approach suitable for dynamically changing environments, especially given that many \Algo{PLT}-based algorithms can be trained incrementally~\citep{Jasinska_et_al_2016,Wydmuch_et_al_2018,You_et_al_2019,Jasinska-Kobus_et_al_2021}. \section*{Acknowledgments} Computational experiments have been performed in Poznan Supercomputing and Networking Center. \bibliographystyle{ACM-Reference-Format} \balance
\section{Introduction} CTB 37A (also called G348.5+0.1, $\rmn{RA}(2000)=17^{\rmn{h}} 14^{\rmn{m}} 06^{\rmn{s}}$, $\rmn{Dec.}~(2000)=-38\degr 32\arcmin$) was discovered by \citet{b5} in the radio band and has a shell-type morphology with an angular size of 15 arcmin. From Very Large Array observations at wavelengths of 6, 20, and 90 cm, \citet{b6} reported that this supernova remnant (SNR) is expanding in an inhomogeneous region, and is part of a complex composed of three SNRs: CTB 37A, G348.7+0.3 (also called CTB 37B), and G348.5-0.0. From the {\it ASCA} Galactic plane survey data of G348.5+0.1, \citet{b28} found that the X-ray spectrum of the SNR was heavily absorbed by interstellar matter, with $N_{\rm H}\sim2\times10^{22}$ ${\rm cm^{-2}}$, and that the extent of the X-ray emission was comparable to that of its radio structure. \citet{b31} detected 1720 MHz OH masers with velocities of about 20 and 60 km s$^{-1}$ in the direction of CTB 37A. \citet{b53} surveyed the environment of this remnant and its associated OH 1720 MHz masers in the CO J=1-0 transition with the 12 Meter Telescope of the NRAO. They reported that a number of molecular clouds are interacting with the SNR shock fronts. \citet{b20}, using {\it Chandra} and {\it XMM-Newton} data, showed the presence of thermal X-rays from the Northeast part, an extended non-thermal X-ray source, CXOU J171419.8-383023, in the Northwest part, and a ${\gamma}$-ray source, HESS J1714-385, coincident with the remnant. They found a high absorbing column density of $N_{\rm H}\sim3\times10^{22}$ ${\rm cm^{-2}}$. They claimed that the observed X-ray morphology was a result of interaction with the inhomogeneous medium surrounding the remnant and that this inhomogeneity was also responsible for the break-out radio morphology.
\citet{b51}, using observations with the {\it Fermi}-LAT, revealed ${\gamma}$-ray emission from this SNR; the spectrum of the source coincident with CTB 37A was fitted by a power-law (PL) model with an exponential cutoff at energy $E_{\rm cut}$=4.2 GeV. Considering the lack of evidence for a contribution by a pulsar and the presence of maser emission from the remnant, they proposed that the ${\gamma}$-rays result from SNR-molecular cloud interactions. The distance to CTB 37A has been estimated from 21 cm absorption measurements to be between 6.7 and 13.7 kpc by \citet{b52}. From velocity measurements of molecular clouds associated with the remnant, \citet{b53} adopted a distance of 11.3 kpc. We therefore use d=11.3 kpc for our calculations throughout this work. The location of CTB 37A, containing OH maser sources \citep{b31}, 1FGL J1714.5-3830 \citep{b51}, the X-ray source CXOU J171419.8-383023, and HESS J1714-385 \citep{b20}, as well as two additional SNRs (G348.5-0.0 and G348.7+0.3), makes this SNR interesting and important. In addition, its morphology implies a similarity to the recently proposed group of mixed-morphology (MM) SNRs, increasing its importance. High quality imaging and spectra obtained from the data provided by the X-ray observatory {\it Suzaku} are used to produce the results in this work. The organization of this paper is as follows: we describe the {\it Suzaku} X-ray observation of CTB 37A, including details of the data reduction, in Section 2. The image and spectral analyses are given in Sections 3 and 4, respectively. Finally, considering its morphology and investigating the radial variation of the electron temperature, we discuss the implications of the MM class and the nature of the thermal and non-thermal components of CTB 37A in Section 5. \section[]{Observation and Data reduction} {\it Suzaku} \citep{b17} is the fifth Japanese X-ray astronomy satellite, launched on 2005 July 10. {\it Suzaku} observed CTB 37A on 2010 February 20.
The observation ID and exposure time are 504097010 and 53.8 ks, respectively. The X-ray Imaging Spectrometer (XIS; \citet{b12}) consists of four X-ray CCD camera systems (XIS0, 1, 2 and 3). XIS0, 2, and 3 have front-illuminated (FI) sensors and provide coverage over the energy range 0.4$-$12 keV, while XIS1 has a back-illuminated (BI) sensor providing greater sensitivity at lower energies (0.2$-$12 keV). The XIS has a field of view (FOV) of 17.8$\times$17.8 arcmin$^{2}$ ($1024\times1024$ pixels). Each XIS CCD has an $^{55}$Fe calibration source, which can be used to calibrate the gain and test the spectral resolution of data taken with this instrument. The XIS2 sensor was available only until 2006; therefore, we use data from XIS0, XIS1, and XIS3. Reduction and analysis of the data were performed following the standard procedure using version 6.5 of the {\sc headas} software package, and spectral fitting was performed with {\sc xspec} version 11.3.2 \citep{b2}. The XIS was operated in the normal full-frame clocking mode, with the standard $3\times3$ and $5\times5$ editing modes. We generated the XIS response matrices using the {\sc xisrmfgen} software, which takes into account the time variation of the energy response. For generating the ancillary response files (ARFs), we used {\sc xissimarfgen} \citep{b7}. The latest versions of the relevant {\it Suzaku} CALDB files were also used. \section{Image Analysis} Figure 1 shows the XIS1 image in the 0.3$-$10 keV energy band. We extracted the spectrum from the brightest region, represented by the outermost solid circle centered at $\rmn{RA}(2000)=17^{\rmn{h}} 14^{\rmn{m}} 30^{\rmn{s}}$, $\rmn{Dec.}~(2000)=-38\degr 32\arcmin 07\arcsec$ with a radius of 5.5 arcmin. To derive the radial variation of the electron temperature $kT_{\rm e}$, we take four annular apertures of 0$-$1.5, 1.5$-$2.5, 2.5$-$3.5, and 3.5$-$4.5 arcmin. The extended non-thermal X-ray source (CXOU J171419.8-383023) is excluded from the spectral analysis, as shown in Fig. 1.
The dashed black circle centered at ($\rmn{RA}(2000)=17^{\rmn{h}} 14^{\rmn{m}} 19^{\rmn{s}}$, $\rmn{Dec.}~(2000)=-38\degr 24\arcmin 36\arcsec$ with radius 1.7 arcmin) represents the background region. The lower left corner of the FOV, which contains calibration source emission, is also excluded. Figure 2 shows the XIS0 image in the 0.3$-$10 keV energy band, overlaid with the radio image obtained at 843 MHz by \citet{b25} for comparison. Dark crosses indicate the directions of the detected OH (1720 MHz) maser emission at velocities $\sim$ $-65$ km s$^{-1}$ associated with CTB 37A, and the white crosses show the positions of maser emission at velocities $\sim$ $-22$ km s$^{-1}$ \citep{b31}. The diamond indicates the position of CXOU J171419.8-383023 and the circle represents the location of HESS J1714-385 \citep{b20}. \section{Spectral Analysis} XIS spectra were extracted using {\sc xselect} version 2.4a from all the XISs with a circular extraction region of radius 5.5 arcmin and were grouped with a minimum of 50 counts bin$^{-1}$. We fit the spectra with a collisional ionization equilibrium (CIE) model with variable abundances ({\sc xspec} model ``VMEKAL''; \citet {b15,b16,b13}) modified by interstellar absorption (wabs in {\sc xspec}, \citet {b18}). The absorbing column density ($N_{\rm H}$) and electron temperature ($kT_{\rm e}$) were left free, while all elemental abundances were fixed at solar values \citep {b1}. The best-fitting reduced $\chi^{2}$/d.o.f. for this model is 2222.5/832= 2.67. To find out whether there was any contribution from non-thermal emission, we added a PL component (VMEKAL+PL), yielding a better reduced $\chi^{2}$ of 935.5/830 = 1.13. We then set the abundances of Mg, Si, S and Ar free, while the rest were fixed at their solar values, and found only an insignificant improvement in the $\chi^{2}$ value (891.6/826 = 1.08). Therefore, we decided to fix the abundances of Mg, Si, S, and Ar at their solar values.
In Table 1, we present the best-fitting parameters and the statistics obtained with an absorbed VMEKAL+PL model, with corresponding errors at the 90 per cent confidence level (2.7 $\sigma$). Figure 3 shows the spectra of XIS0, XIS1, and XIS3 simultaneously, in the energy range of 0.3$-$10 keV, taken from the region shown by the solid dark circle (the outermost) in Fig. 1. In addition, we performed an annular spectral analysis for the four regions shown by the circles in Fig. 1 in order to derive the radial temperature variation of CTB 37A. The annular regions are spaced by 1 arcmin from the innermost circle of radius r=1.5 arcmin. The VMEKAL+PL spectral model was also fitted to each annulus; during these fits, we kept the absorbing column density $N_{\rm H}$ at its best-fitting value for the entire remnant. Figure 4 shows the electron temperature as a function of radius. \begin{table*} \centering \begin{minipage}{140mm} \caption{Best-fitting parameters and $\chi^{2}$ values of the spectral fitting in the full energy band (0.3$-$10 keV) with an absorbed VMEKAL+PL model with corresponding errors at 90 per cent confidence level (2.7 $\sigma$).} \begin{tabular}{@{}cccc@{}} \hline Component&Parameters & VMEKAL+PL&\\ \hline Wabs&$N_{\rm H}$($\times10^{22}$$\rm cm^{-2})$ & 2.9 $\pm 0.1$& \\ VMEKAL &$kT_{\rm e}$(keV) & 0.63 $\pm 0.02$&\\ Abundance\footnote{(1) indicates that the elemental abundance is fixed at solar \citep{b1}.}&Mg & (1)&\\ &Si & (1)&\\ &S & (1)&\\ &Ar & (1)&\\ &VEM\footnote{Volume emission measure VEM=$\int n_{\rm e}n_{\rm H}$dV in the unit of $10^{58}$ $\rm cm^{-3}$, where $n_{\rm e}$ and $n_{\rm H}$ are number densities of electrons and protons, respectively, and V is the X-ray emitting volume.}&72.3 $\pm 2.1$& \\ &Flux\footnote{Unabsorbed flux in the $0.3-10$ keV energy band in the unit of $10^{-9}$ erg $\rm s^{-1}$$\rm cm^{-2}$.}&1.36 $\pm 0.03$ & \\ PL &Photon Index &1.6 $\pm 0.1$&\\ &norm($\times10^{-2}$photons $\rm
cm^{-2}s^{-1}$) & 1.9 $\pm 0.2$ &\\ &Flux\footnote{Total unabsorbed flux of the sum of the VMEKAL and PL components in the $0.3-10$ keV energy band in the unit of $10^{-9}$ erg $\rm s^{-1}$$\rm cm^{-2}$.} &1.5 $\pm 0.1$ &\\ &$\chi^{2}$/d.o.f. &935.6/830=1.13 & \\ \hline \end{tabular} \end{minipage} \end{table*} \section{Discussion and Conclusions} In this work, we provide a description of the X-ray emission of CTB 37A based on {\it Suzaku} archival data. We obtained a clear image and high quality spectra of the diffuse X-ray emission. We have examined the thermal and non-thermal emission from the remnant. The X-ray spectrum of CTB 37A is characterized by thermal emission dominated by K-shell emission lines of Mg, Si, S, and Ar, which are clearly detected. As seen from Fig. 2, the radio emission of CTB 37A comprises a partial shell towards the north and east and an extended outbreak to the south, while the X-ray emission has a deformation along the southwest limb, where the morphology appears indented. Such a deformation may be due to the inhomogeneous medium along this specific region; this is supported by the fact that the remnant is close to the Galactic plane and that several OH masers at 1720 MHz are detected towards CTB 37A \citep {b31} (shown by crosses). It may also be relevant that $\gamma$-ray emission, thought to be associated with the interaction between CTB 37A and dense surrounding material, has been detected with the {\it Fermi} LAT \citep {b51}. \subsection{Implications for mixed-morphology} SNRs were originally divided into shell-like, Crab-like (plerionic), and composite (shell-like containing plerions) remnants \citep {b42} according to their X-ray morphology. Recently, an additional MM class (also called thermal composite) has been proposed, comprising remnants that are center-filled in X-rays and shell-like at radio wavelengths \citep {b23, b24, b34}. As seen from Fig.
2, CTB 37A has a shell-like morphology in the radio band while being centrally filled in the X-ray band; in this regard, CTB 37A appears to be an MM SNR. Examples of well-known MM SNRs include W28 \citep {b48}, G290.1-0.8 \citep {b49}, and IC 443 \citep {b4}. \citet {b3} reported important results about this class by compiling a list of 26 MM SNRs. The X-ray characteristics of MM remnants have been defined by \citet {b34} as follows: (1) the radial temperature distribution is relatively flat; (2) the X-ray emission arises primarily from shocked interstellar material, and not from ejecta; (3) the remnants are typically located close to molecular clouds or very dense regions; and (4) the dominant X-ray emission is thermal in nature. Subsequent studies indicated that MM SNRs have a complex plasma structure with multiple components (e.g. \citet {b48}) and enhanced abundances (e.g. \citet {b50,b49}), and that they have evolved over $\sim$$10^{4}$ yr, which means that the plasma is in CIE or an overionization condition \citep {b47}. As can be seen from Fig. 4, the annular analysis of CTB 37A indicates only a very small radial variation in temperature between the selected regions. The best-fitting metal abundances are found to be solar in general, confirming the absence of ejecta contamination in the selected regions. This supports the idea that the X-ray emission originates from the shocked interstellar material. The plasma of CTB 37A is in a collisional ionization equilibrium condition and is located in a region with density variations, possibly associated with molecular clouds. The plasma has thermal and non-thermal emission, but the emission is dominated by the thermal component ($\sim90$ per cent of the total X-ray flux). These X-ray properties of CTB 37A exemplify the typical characteristics of MM SNRs as defined by \citet {b34}.
There are a few models that can produce centrally enhanced X-ray emission: evaporation of clouds left relatively unspoiled after the passage of the SNR blast wave (e.g. \citet {b9}); ``fossil'' thermal radiation, detectable as thermal X-rays from the hotter interior as the shell of an expanding SNR cools below $\sim$$10^{6}$ K and becomes invisible due to interstellar absorption (e.g. \citet {b42}); thermal conduction, through which, as the SNR evolves, the temperature and density of the hot interior plasma gradually become uniform (e.g. \citet {b10}); or evolution in a medium with a density gradient viewed along the line of sight \citep {b26}. The evaporation model requires dense clouds, while the thermal conduction model requires a relatively high density ambient medium. CTB 37A is located in a region of greatly varying density, with OH maser sources indicating interaction with molecular clouds. In this regard, the center-filled X-ray morphology of CTB 37A is consistent with the evaporating clouds model. The radial temperature variation in the plasma of CTB 37A ($kT_{\rm e}$ $\sim$ 0.6-0.8 keV) is consistent with other MM SNRs such as W44 \citep {b29,b21}, 3C391 \citep {b32,b33} and HB21 \citep {b27}. The very small temperature variation in the plasma of CTB 37A, as shown in Fig. 4, can be explained by both the evaporation and thermal conduction models. Future deep X-ray observations and detailed spectral analysis of this remnant would give more detailed information to compare with theoretical models that produce MM SNRs. \subsection{Thermal component} The X-ray emission of CTB 37A is dominated by thermal emission that can be best described by an absorbed CIE plasma model (VMEKAL) with an absorbing column density of $N_{\rm H}\sim 3\times10^{22}$ $\rm cm^{-2}$, an electron temperature of $kT_{\rm e}\sim0.6$ keV, and solar abundances of Mg, Si, S, and Ar, which indicate a shocked interstellar/circumstellar material origin.
For full ionization equilibrium, the ionization timescale, $\tau=n_{\rm e}t$, is required to be $\geq$ $10^{12}$ $\rm cm^{-3}$s, where $t$ is the plasma age or the time since the gas was shock-heated \citep {b14}. To determine the age of the remnant, $n_{\rm e}$ should be estimated from the emission measure, $n_{\rm e}n_{\rm H}V$, which is related to the normalization of the VMEKAL model according to the equation norm=$n_{\rm e}n_{\rm H} V$/($4\pi d^{2}$$10^{14}$), where V is the X-ray emitting volume, $n_{\rm H}$ is the volume density of hydrogen, and d is the distance. For simplicity, we assumed the emitting region to be a sphere of radius 5.5 arcmin. Considering the possibility that less than the entire volume is filled, we write the volume as $V=V_{\rm s}f$, where $V_{\rm s}$ is the full spherical volume and $f$ is the filling factor. We then carry the factor $f$ through our calculations to show the explicit dependence of each derived quantity on it. Taking the SNR to be at a distance of 11.3 kpc, we estimated the emitting volume to be $V\sim 6.7\times10^{59}f$ ${\rm cm^{3}}$. Assuming $n_{\rm e}=1.2n_{\rm H}$, we find an ambient gas density of $\sim$1$f^{-1/2}$ ${\rm cm}^{-3}$ and an age of $\sim$$3\times10^{4}f^{1/2}$ yr (assuming $n_{\rm e}t\sim 1\times10^{12}$ $\rm cm^{-3}$s), implying that CTB 37A is a middle-aged SNR. Finally, we calculated the total mass of the X-ray emitting plasma, $M_{\rm x}$, from $M_{\rm x}$=$m_{\rm H}n_{\rm e}V$$\sim 530 f^{1/2}{M\sun}$, where $m_{\rm H}$ is the mass of a hydrogen atom and $\mu$=0.604 is the mean atomic weight. \subsection{Power-law component} The {\it Suzaku} X-ray spectral data of CTB 37A are well fitted with a thermal component and an additional hard component.
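These order-of-magnitude estimates can be reproduced with a short script (our own cross-check, assuming $f = 1$, a spherical emitting region of radius 5.5 arcmin at 11.3 kpc, and round cgs constants; small differences from the quoted values reflect rounding of the adopted volume, and the quoted mass of $\sim 530\,M_{\sun}$ is recovered when $n_{\rm H}$ is used in the mass estimate):

```python
from math import pi, sqrt

# Cross-check of the derived plasma parameters for CTB 37A (f = 1).
KPC = 3.0857e21            # cm
ARCMIN = pi / (180 * 60)   # radians
M_H = 1.6726e-24           # g, hydrogen atom mass
M_SUN = 1.989e33           # g
YR = 3.156e7               # s

d = 11.3 * KPC                       # adopted distance
R = 5.5 * ARCMIN * d                 # radius of the emitting sphere, cm
V = 4.0 / 3.0 * pi * R**3            # emitting volume, ~7e59 cm^3
VEM = 72.3e58                        # n_e * n_H * V from the fit, cm^-3

n_H = sqrt(VEM / (1.2 * V))          # with n_e = 1.2 n_H  -> ~1 cm^-3
n_e = 1.2 * n_H
age_yr = 1e12 / n_e / YR             # from tau = n_e * t ~ 1e12 s cm^-3
M_x_msun = M_H * n_H * V / M_SUN     # emitting mass, a few hundred M_sun
```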
There could be a few reasons for the hard X-ray emission: (i) an association with a classical young pulsar; (ii) a contribution from the extended non-thermal X-ray source (CXOU J171419.8-383023); (iii) overionization of the plasma, which produces excess hard emission, as has been the case for IC 443 \citep {b60}. The hard component is well fitted by a PL model with a photon index value of $\sim$1.6. This value is consistent with those of classical young pulsars, which range between 1.1 and 1.7 \citep {b37}. However, no pulsar has been reported to be associated with this remnant. In the Northwest region of CTB 37A, an extended non-thermal X-ray source (CXOU J171419.8-383023) was reported ($\rmn{RA}(2000)=17^{\rmn{h}} 14^{\rmn{m}} 20^{\rmn{s}}$, $\rmn{Dec.}~(2000)=-38\degr 30\arcmin 20\arcsec$) by \citet {b20}. In their work, non-thermal emission from the source with a photon index of $\sim$1.32 was found, which is lower (harder) than our best-fitting value of $\sim$1.6. Although we excluded CXOU J171419.8-383023 (with radius $2.1$ arcmin) from our spectra during the spectral analysis, our fits required a non-thermal component. To investigate this, we also performed spectral analysis for individual regions by selecting small rectangular regions at increasing distances from the known extended non-thermal source. The non-thermal flux is found to be stronger for the selected small regions near the source than for the ones further away. We have obtained an unabsorbed flux value of $F_{\rm x}\sim1.4\times10^{-9}$ erg $\rm s^{-1}$$\rm cm^{-2}$ for the non-thermal extended source in the 0.3$-$10 keV energy range. When we compare it with the unabsorbed flux value ($F_{\rm x}\sim0.14\times10^{-9}$ erg $\rm s^{-1}$$\rm cm^{-2}$) of the best-fitting PL component, we find a factor of ten difference between them.
This difference indicates that the PL component most likely originates from the extended source, with emission scattered from the source into the field of the rest of the remnant by the broad point spread function of the {\it Suzaku} mirrors. The spectral studies of CTB 37A indicate that the plasma is best described by a thermal component in the CIE condition with solar elemental abundances, plus a non-thermal component with a photon index of $\sim$1.6. The thermal emission possibly originates from the shocked interstellar material, with an ambient gas density of $\sim$1$f^{-1/2}$${\rm cm^{-3}}$. The best spectral fits require an ionization timescale of $\tau\geq$ $10^{12}$ $\rm cm^{-3}$s, implying an age of $\sim$3$\times10^{4}f^{1/2}$ yr. The origin of the power-law component is most likely the contribution from the extended source (CXOU J171419.8-383023) located in the Northwest part of the remnant. CTB 37A is most likely a new member of the mixed-morphology SNR class. \section*{Acknowledgments} We thank Dr. Patrick Slane for his valuable comments and suggestions, which helped to improve the overall quality of the manuscript. AS is supported by a T\"{U}B\.{I}TAK Postdoctoral Fellowship. This work is supported by the Akdeniz University Scientific Research Project Management and by T\"{U}B\.{I}TAK under project codes 108T226 and 109T092. The authors also acknowledge support by Bo\u{g}azi\c{c}i University Research Foundation under 2010-Scientific Research Project Support (BAP) project no:5052.
\section{Introduction} Given the dramatic changes in lifestyles in modern society, millions of people nowadays are affected by anxiety, depression, exhaustion, and other emotional problems. The demand for proper emotional care is growing at an increasingly rapid rate. Music, as extensively studied in the literature \cite{Koelsch2014, Taruffi2017, Siddharth2019, Ehrlich2019}, has proven to be one of the most effective and accessible ways to evoke emotions and influence moods \cite{goldstein1980thrills}. People have also been reported to spend more time listening to music over the past decades with the development of portable music players and streaming platforms like YouTube, Spotify, and iTunes \cite{AudienceNet2018, Music3602017}. Inevitably, music shapes our lives in various ways: helping people focus their attention, releasing dopamine for a better mood, energizing exercise, and boosting creativity. A person's sensation towards a music piece is subtle and might differ every time they listen to it. Biometric signals are collected along with people's self-reports to understand human emotional reactions towards music. With the development of less obtrusive, non-invasive EEG devices, EEG has become the dominant modality for studying brain activities, including emotion recognition in human-computer interaction (HCI) studies \cite{mihajlovic2014wearable}. A large number of studies have explored the music-induced neural correlates of perception and cognition, and provided theoretical support for the application of music-based emotion regulation. People are looking forward to the day when wearable EEG devices can bridge the gap between theoretical studies and applications that benefit users in their daily life. However, more accessible, easy-to-use, and interactive approaches for effective emotion regulation and mental health are still largely under-explored.
\begin{figure*}[tbp!] \centering \includegraphics[width=\columnwidth]{figures/comparison.pdf} \caption{Workflow comparison of existing systems based on the emotion recognition model (left) and our proposed system (right). The upper framework in the block is the training process, and the lower framework is the testing process. MAs is the acronym for Music Alternatives.} \label{compare} \end{figure*} The introduction of EEG-based emotion recognition models makes it possible for computers to be empathetic. Existing studies are primarily based on emotion recognition models, as shown on the left of Figure \ref{compare}. Some of them employ definitive approaches without users' self-reporting to provide user-determined feedback \cite{ali2016eeg, Adamos2016, Ramirez2018, sourina2012real, moore2013systematic}. However, emotion recognition algorithms vary widely and can reach different interpretations for similar data \cite{reed2017learning, yoon2017comparison}. Users may over-accept and defer to the feedback, even if it contradicts their own interpretations \cite{hollis2018being}. Another drawback is the limited music choice. To work with the emotion recognition model, the system needs to label the music pieces based on users' evoked emotions. With the limited set of emotional states, a large music library would make the selection strategy ineffective and inefficient, and it is infeasible to ask users to listen to a large number of songs to acquire labels. Some research asked users or professional music therapists to select music before the experiment. Other studies labeled music by the emotion expressed in the music (``music emotion'') \cite{jun2010music, lu2009novel, Juslin2004}, and then recommended a specific song by matching its emotion to the user's detected emotional state. Such a strategy assumes that the music-listening preferences and styles of an individual are always consistent with their emotional state at the moment.
However, users' music-listening styles vary widely \cite{ferwerda2014enhancing}, and their emotional reactions change over time and across situations \cite{thoma2012emotion}. More importantly, the emotions that a listener experiences are distinct from judgments of what emotion is expressed in the music \cite{lewis2010handbook}, as evidenced by the low match rate in our study. Most existing research uses discrete emotion models with states like `happy', `angry', and `sad', or continuous models like Russell's circumplex model \cite{russell1980circumplex} for emotion evaluation. One problem is that systems relying on emotion recognition models assume that the stimulus elicits emotions effectively \cite{alarcao2017emotions}. Thus, when the emotion is not successfully elicited, self-assessment becomes difficult, and the performance of such systems is questionable. A more essential problem is that the detected emotional states only act as a reference and do not link the music and the user. Under such circumstances, users can regulate emotions themselves by searching for a playlist with a certain `mood' on streaming platforms and switching songs according to their emotional states. Given all the limitations and shortcomings above, we argue that conventional emotion recognition models are insufficient and unreliable for the design of emotion regulation applications. To design an individual-oriented, truly personalized music-based emotion regulation application, we need to ask and address two questions: (1) Will the user like the chosen/recommended music? (2) Will this music emotionally affect the user in the way he/she wants?
Although some existing music recommendation approaches have partially addressed these two questions by collecting information about users' profiles and preferences \cite{Hu2011, Schedl2018}, one essential question remains in light of the complex and diverse variation of the user's emotion over time: What is the user's current emotional state, and what will it be after listening to this song? With the advances of ubiquitous HCI technologies, it is necessary, and also becomes possible, to address these questions for understanding and handling humans' ever-changing emotions. To this aim, in this study, we propose an emotion regulation system that predicts the emotion variation (i.e., the emotion change after listening to a song compared with before) before any recommendation. In this system, the song and the user's current emotional state are the two factors that decide how his/her emotion is going to change after listening to that song. Therefore, two independent variables, the music and the current EEG information, define and indicate the consequential emotion variation, as shown on the right of Figure \ref{compare}. Concatenations of each song selected in music alternatives (MAs) and the current EEG information are fed into the prediction model sequentially, and a music piece is then chosen from the MAs based on their predicted emotion variations. Unlike the definitive emotional state, emotion variation describes how much the emotion changes. That means it is determined relative to an individual's current emotional state and is affected or induced by emotional stimuli (listening to a song in our case). It requires less effort for users to report, even when the emotion is sustained, i.e., ``neutral'' or a close-to-zero value. The model is also able to handle situations such as ``I'm feeling better but still sad,'' which is a positive variation but not a ``joyful'' state after listening to a song.
Discrete emotion models with classes like ``happy'', ``angry'', and ``sad'' are not suitable for the assessment of emotion variations. Thus we apply Russell's circumplex model with valence and arousal coordinates (Figure~\ref{Russell}) for users' self-assessment, referred to as valence-arousal (v/a) scores in this study. Yet the challenge is that continuous values of emotion variation are not realistic or feasible for models to learn; instead, discrete emotion classes are learned to represent people's typical feelings \cite{lin2010eeg, baumgartner2006emotion}. Thus, for the proposed emotion variation prediction model, we simplify users' continuous v/a scores into four classes, corresponding to the four quadrants of the valence-arousal model, by the signs of the values; in this way we predict the direction of emotion variations instead of the absolute variation amplitude. The training process concatenates EEG features with song features to train the binary valence and arousal models offline. In the testing process, a user first designates his/her desired emotion variation, and his/her current EEG information is collected. Meanwhile, music alternatives (MAs) are coarsely selected to narrow down the songs for the prediction model, and are combined with the EEG information to predict the direction of the user's emotion variation on the fly. Finally, a song from the MAs is chosen if the predicted class of v/a scores matches the user's designated v/a score. \begin{figure}[tbp!] \centering \includegraphics[width=0.5\columnwidth]{figures/russell.pdf} \caption{Russell's circumplex model of affect \protect\cite{russell1980circumplex}. Locations of affects on the model correspond to their valence and arousal values.
The y-axis represents arousal (intensity) and the x-axis represents valence.} \label{Russell} \vspace{-15pt} \end{figure} In this paper, we aim to break the limitations of existing music-based emotion regulation systems and make the following contributions: (1) our approach bridges the gap between theoretical studies and practical, usable interactive systems for daily usage by proposing a dynamic emotion variation model instead of the conventional definitive emotion recognition methods; (2) based on the qualitative prediction of emotion variations, our system is able to recommend proper songs, optimized towards both users' listening preferences and their desired change of emotion; and (3) the system can distinguish the different emotion variations a user experiences from one song, by which we evaluate the user's emotional instability and suggest that it could serve as an indicator for mental health-related applications. Lastly, a simple questionnaire was developed and conducted to evaluate the user acceptance and usability of the proposed system. \section{Related Work} \subsection{Emotion Models} In affective science studies, the terms affect, emotion, and mood are precisely differentiated. Affect is not directed at a specific event and usually lasts for a short time. Emotion is intense and directed, stimulated by some event or object, and also lasts for a short time. Mood lasts longer and is less intense \cite{liu2017many}. Unlike affective science studies, our study leaves aside the debate of whether emotions should be represented as a point in a dimensional valence-arousal space, as well as the distinction between affect, emotion, and mood. Researchers have introduced various emotion models to conceptualize human emotion, among which discrete and dimensional models are the most commonly employed \cite{eerola2013review}.
Discrete models, relating to the theory of basic emotions \cite{ekman1992argument}, have been widely adopted because their concepts are easily explained and discriminated. However, even studies including all variants of basic emotions can hardly cover the whole range of human emotions, and some categories can be musically inappropriate when evaluating the emotion evoked or expressed by music \cite{balkwill1999cross}. Dimensional models are represented by continuous coordinates, the most famous of which is the circumplex model \cite{russell1980circumplex}, represented by valence and arousal. The circumplex model has received significant support in studies of emotion, cross-cultural comparison, and psychometric studies \cite{posner2005circumplex}. Affects represented by the four combinations of these two bipolar coordinates can be induced by musical pieces \cite{rickard2004intense, vieillard2008happy} and show unique neural patterns \cite{altenmuller2002hits}. Because of its more inclusive representation of emotions, the dimensional model has been extensively used in the Affective Computing (AC) research community. \subsection{Music Emotion} The inherent emotional state of an acoustic event (``music emotion'' \cite{Schyff2017}) is conveyed by a composer while writing it or a musician while performing it \cite{weninger2013acoustics}. It can be characterized by low-level descriptors (LLDs), such as the zero-crossing rate, Mel-frequency cepstral coefficients (MFCC), spectral roll-off, and so on. Extensive prior studies have been dedicated to the development of automatic Music Information Retrieval (MIR) systems. Functions for calculating acoustic features are packaged in toolboxes such as the MIR toolbox \cite{lartillot2007matlab}, openSMILE \cite{eyben2013recent}, and librosa \cite{mcfee2015librosa}.
Databases with music emotion have been developed for MIR competitions, such as the annual Music Information Retrieval Evaluation eXchange (MIREX) that started in 2007 \cite{downie20082007}. However, MIREX introduced five discrete emotions in the task instead of adopting a continuous affect model. For a better psychological understanding of music effects, we referred to the 1000 songs database \cite{soleymani20131000}, where all music emotions were annotated based on Russell's valence-arousal model by crowdsourcing on Amazon Mechanical Turk. Low-quality and unqualified annotators were removed to guarantee the quality of the annotations. \subsection{EEG-based Emotion Recognition} Recognizing people's extensively varying induced emotions is demanding and challenging. A variety of physical and physiological signals have been employed to estimate human emotions in the past two decades, such as facial expressions \cite{black1995tracking,essa1997coding}, heartbeats \cite{valenza2013nonlinear}, speech \cite{dellaert1996recognizing, nwe2001speech}, body gestures \cite{shibata2012emotion}, and EEG \cite{takahashi2004remarks, lin2010eeg, alzoubi2009classification}. EEG is a non-invasive electrophysiological technique that uses small electrodes placed on the scalp to record brainwave patterns. With the advances in Brain-Computer Interface (BCI) technologies, EEG signals can be collected from wearable devices with one to hundreds of channels \cite{gui2019survey}. For example, the Emotiv EPOC+ neuro-headset with 14 channels has been shown to be user-friendly and effective for mood induction studies \cite{rodriguez2013validation}. As a common practice, the EEG power spectrum is divided into five bands --- delta ($\delta$: 1-3 Hz), theta ($\theta$: 4-7 Hz), alpha ($\alpha$: 8-13 Hz), beta ($\beta$: 14-30 Hz), and gamma ($\gamma$: 31-50 Hz) \cite{mantini2007electrophysiological}.
Band-related brain activities have been explored to represent various emotions \cite{dasdemir2017analysis}, including the asymmetry at the anterior areas of the brain \cite{schmidt2001frontal}. Self-reporting is still the most traditional and common way to obtain a subject's emotional states, but it has long been questioned because of the subjectivity and instability of individual viewpoints. The Self-Assessment Manikin (SAM) \cite{bradley1994measuring} has been highlighted to assist the self-assessment of emotions in some projects \cite{nie2011eeg, daly2015identifying, khosrowabadi2010eeg}. However, it is still challenging for most subjects to accurately assess and label their definitive emotional states. Instead, emotion variation --- how an event or stimulus changes the emotion from the last state, either positively or negatively --- is easier for subjects to evaluate and describe, and hence is selected as the benchmark in our study. \subsection{Emotion-oriented Music Therapy and Recommendation} Emotion-oriented music therapy helps people achieve a ``delighted'' emotional state (discrete model) or reach a higher valence (continuous model). Ramirez et al. introduced a musical neuro-feedback approach based on Schmidt's study to treat depression in elderly people \cite{ramirez2015musical}, aiming to increase their arousal and valence in the v/a model. The same approach was also applied to the palliative care of cancer patients \cite{ramirez2018eeg}. Both studies indicated that music could indeed help patients modulate their emotions towards positive arousal and positive valence and thus improve their quality of life. Earlier, Sourina et al. proposed an adaptive music therapy algorithm based on subject-dependent EEG-based emotion recognition, and detected the satisfaction of subjects with the therapy music in real-time \cite{sourina2012real}.
They designed the recommender for six discrete emotions in Russell's v/a model, and participants could assign those emotions to the system to get the corresponding music pieces, similar to the workflow shown on the left side of Figure \ref{compare}. One potential limitation of all those methods is that they interpreted the user's emotions through a predefined algorithm (e.g., the difference in alpha power between the left and right scalp to represent the subject's valence level) rather than through personalized self-assessment. Another problem is that they acquired feedback from the users only after they listened to the music. The predefined algorithm itself was computed from the collected EEG, so its output can only serve as an indicator, not as the benchmark; self-assessment with the participant's volitional involvement is the most reliable benchmark. Many other studies have been carried out to recognize music-induced emotions from physiological signals like the electrocardiogram (ECG), heart rate, blood pressure, and respiration rate \cite{lin2010eeg, agrafioti2011ecg}. However, none of the prior studies has included people's current emotional state in the decision of which music to recommend. People change constantly over time, and the interpretation of musical emotion varies among listeners \cite{soleymani20131000}. Even when people's previous data was collected, music similarity and emotion analysis was conducted \cite{zhu2006integrated}, or a dynamic model of users' preferences was built \cite{rafailidis2015modeling}, the event of listening to music was never truly connected with people's emotional state in the formulation. We believe that a system benefiting users' emotions in an interactive process should be driven by both the user and the music. \begin{figure}[tbp!]
\centering \includegraphics[width=\columnwidth]{figures/flow.pdf} \caption{The flow diagram of the training and testing processes in the proposed system. The gray arrow shows the offline process of training, and the black arrows show the online process of testing.} \label{flow} \end{figure} \section{Data Collection} \subsection{Self-assessment} Self-assessment/reporting is the most common way to evaluate emotions and serves as the ground truth. Our application asks users to assess their emotion variations induced by a song on Russell's circumplex model, recorded as the evaluated valence/arousal (v/a) scores shown in Figure \ref{compare}. After being introduced to the meaning of and difference between ``valence'' and ``arousal,'' participants were instructed to move two sliding bars continuously ranging from $-5$ to $5$, with the scores shown at the bottom. The vertical sliding bar is for the arousal score, and the horizontal sliding bar is for the valence score. The window with the sliding bars appeared right after the subject listened to each song, and it also appeared at the beginning of the testing experiment for subjects to designate their desired emotion variations. \subsection{Music Database} Music pieces were selected from the ``Emotion in Music Database (1000 songs)'' \cite{soleymani20131000}, which contains 744 songs (excluding redundant pieces). The songs cover eight genres: Blues, Electronic, Rock, Classical, Folk, Jazz, Country, and Pop, and were cut into 45-second excerpts with a 44,100 Hz sampling rate. The emotion expressed in each song was evaluated on Russell's circumplex model by a minimum of 10 workers selected through Amazon Mechanical Turk (MTurk). Both static and continuous emotions expressed in the songs were annotated.
However, each excerpt provided to the user is treated as a single variable in this paper -- we only predict users' emotion variation after listening to each excerpt, and thus we only employ the static annotations as the v/a annotations. To reduce the experiment load and verify the feasibility of the proposed system at this preliminary stage, we provided the first 25 seconds of the 45-second excerpts to human subjects, which is long enough to induce listeners' emotions \cite{juslin2001music}. The v/a annotation values of each music excerpt range from one to nine. We re-scaled them to positive and negative values by subtracting five, which resulted in four classes/quadrants for coarsely selecting MAs in the testing experiments. It should be noted that the v/a annotations here represent the emotion expressed in music, which is decided by the composer and performer. In contrast, the v/a scores represent the emotion variations evaluated by users after they listen to a music piece. \subsection{Participants \& Procedure} A total of five participants (4 males; aged 22 to 31 years) were recruited for the experiment, and no clinical (psychological or mental) problems were reported by the participants. Participants were labeled \textit{s01} to \textit{s05}, respectively, and a model was built for each participant individually. The training and testing processes were different: the former collected data and built the prediction model offline, while the testing process was an online process that collected and analyzed the data in real-time to recommend music. To test the robustness of the system towards users' varying emotional states, we split the experiment into 13 days: six days of training experiments, two days of testing experiments, and another five days of real-life scenario evaluations that were modified from the testing experiments.
In addition, 54 people, including the five participants, were invited to take part in an online questionnaire study. They were asked to respond to 10 questions according to their own experiences, independently and anonymously. The responses were used to investigate the public's demand for and acceptance of emotion regulation with an intelligent music recommendation system and a wearable EEG device. All of the experiments were approved by the Internal Review Board (IRB) at [University is hidden for double-blinded review]. Before the experiment, participants were familiarized with the arousal and valence dimensions with the help of the discrete emotions labeled in Russell's model, as shown in Figure \ref{Russell}. There was no restriction regarding their normal music listening experiences or their emotional state before an experiment. As a factor of music effects, the music volume was adjusted by participants to their habitual level before the experiments. To build the prediction model offline, songs in the training process were chosen randomly from each quadrant based on their v/a annotations. A trial of a training experiment had three sequential parts: EEG data collection (20 s), music playing (25 s), and the user's self-assessment (15 s). The EEG data collected and the music piece played in each trial formed an observation in the training data, and the user's self-assessment serves as the ground truth. Considering that the comfortable wearing time of the Emotiv EPOC+ device is about half an hour, we designed twelve trials (each lasting one minute) for each experiment, in which the twelve songs were drawn evenly from the four quadrants, and two experiments for each day. Including the five-minute EEG device setup and a three-minute rest between the two experiments, 32 minutes in total were required per subject each day. EEG data were collected in the Resting State with Open Eyes (REO), and the next trial started right after the self-assessment.
After training the emotion variation prediction model, participants took part in the testing experiments, as shown with the black arrows in Figure \ref{flow}. Testing was a real-time process, and there were 22 trials in each testing experiment. Each trial had seven steps, as shown on the lower-right of Figure \ref{compare}: (1) the participant designated the desired emotion variation (designated v/a scores); (2) five music alternatives (MAs) were randomly selected, based on their v/a annotations, from the same class as the designated v/a scores; (3) EEG data were collected; (4) EEG features were extracted and concatenated with the features of each music alternative; (5) the concatenated feature vectors were input into the prediction model sequentially, returning five results; (6) the song whose predicted class of v/a scores fell in the user's designated class was played; (7) the participant reported the emotion variation induced by the presented song. It is possible that no song in the MAs matches the participant's desired emotion variation. In that case, the system goes back to step (2) and repeats the following steps until a new song matches. The maximum number of iterations was set to five to limit system delay, and if still no song matched, one from the fifth iteration was selected at random. When several songs matched, the system randomly selected one. Thus, the duration of each trial in the testing experiments was not fixed. The proper time length of EEG collection between music pieces was decided by referring to the classification accuracy of the training experiments with different time windows of EEG data.
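As an illustration, the per-trial selection loop above can be sketched as follows; the function names, the `predict` interface, and the data layout are our own illustrative assumptions, not the authors' implementation:

```python
import random

MAX_ITER = 5  # cap on re-sampling rounds to bound the system delay


def recommend(desired_class, eeg_features, library, predict, n_alts=5):
    """One testing trial: return a song whose predicted emotion-variation
    class matches the user's designated v/a class.

    `library` maps each quadrant class (1-4) to its candidate songs, and
    `predict(song, eeg_features)` returns the predicted class for the
    concatenated [EEG, music] features.
    """
    fallback = None
    for _ in range(MAX_ITER):
        # (2) randomly draw five music alternatives from the designated class
        mas = random.sample(library[desired_class], n_alts)
        # (5)-(6) keep the alternatives whose predicted class matches
        matched = [s for s in mas if predict(s, eeg_features) == desired_class]
        if matched:
            return random.choice(matched)  # several matches: pick one at random
        fallback = mas
    # no match after five rounds: fall back to a random pick
    return random.choice(fallback)
```

Because the loop may repeat up to five times, the wall-clock duration of a trial is unbounded in the same way the text describes.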
\begin{figure}[!tbp] \begin{subfigure}{.31\linewidth} \centering \includegraphics[width=\linewidth]{figures/var_1.pdf} \end{subfigure}% ~ \begin{subfigure}{.31\linewidth} \centering \includegraphics[width=\linewidth]{figures/var_2.pdf} \end{subfigure} ~ \begin{subfigure}{.31\linewidth} \centering \includegraphics[width=\linewidth]{figures/var_4.pdf} \end{subfigure} ~ \newline \begin{subfigure}{.31\linewidth} \centering \includegraphics[width=\linewidth]{figures/var_6.pdf} \end{subfigure} ~ \begin{subfigure}{.31\linewidth} \centering \includegraphics[width=\linewidth]{figures/var_7.pdf} \end{subfigure} \caption{Comparison of mean variances of EEG features between intra-experiments and inter-experiments for the five participants. The x-axis indexes the 40 extracted EEG features. The y-axis is the mean and standard deviation of the feature variances. The intra-variances are calculated from trials within one experiment, and the inter-variances are calculated from trials spanning two consecutive experiments.} \label{fig:var} \end{figure} The testing experiments were modified to accommodate the requirements of a more usable system in the real-life usage scenario. In this scenario, users designated the desired emotion variation only once, and EEG data were collected once at the beginning of the system usage to reduce users' active interaction effort. The ``current'' information from users was then used to select a list of songs instead of a single song. We assumed that the desired emotion variation would be sustained and that one-time EEG data would be representative of a user's current state within one experiment. To verify this hypothesis, we calculated and compared the variances of EEG features extracted from the training sessions within one experiment (intra) and across two experiments (inter). The inter-experiment set combined the tail-half trials of one experiment with the head-half trials of the next, guaranteeing the same number of trials as the intra-experiment set.
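The intra- versus inter-experiment variance check can be sketched as below (the array layout and function name are hypothetical; we assume trial-by-feature matrices from two consecutive experiments):

```python
import numpy as np


def compare_variances(exp1, exp2):
    """Compare per-feature EEG variance within one experiment (intra)
    against a set mixing two consecutive experiments (inter).

    `exp1` and `exp2` are (n_trials, n_features) arrays. The inter set
    takes the tail half of `exp1` and the head half of `exp2`, so both
    sets contain the same number of trials.
    """
    half = exp1.shape[0] // 2
    intra = exp1[: 2 * half]
    inter = np.vstack([exp1[half:], exp2[:half]])
    intra_var = intra.var(axis=0)
    inter_var = inter.var(axis=0)
    # fraction of features whose variance is lower within one experiment
    frac_lower = float(np.mean(intra_var < inter_var))
    return intra_var, inter_var, frac_lower
```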
We plotted the mean variances of the intra- and inter-experiments for each participant, as shown in Figure \ref{fig:var}. On average across the five participants, $89.5\%$ of the $40$ EEG features show a lower mean variance within experiments than across experiments. We thus assume that, with reasonable variance, EEG information is representative of users' ``current states,'' and that a one-time EEG collection works properly within twelve pieces of music (one experiment) for real-life scenarios. \section{Methods} \subsection{Music Feature Extraction} We extracted and selected features of each music piece using Librosa \cite{mcfee2015librosa}. Each song was a one-channel MP3 file with a sampling rate of 44,100 Hz. Both time-domain and frequency-domain features were included, as shown in Table~\ref{tab:mus_fv}. The chromagram is a mid-level feature closely related to the twelve pitch classes. The tempogram represents the local auto-correlation of the onset strength envelope and is thus important for rhythm information. In the case of the roll-off frequency, different percentages refer to the roll percent of the energy of the spectrum in that particular frame. We chose four different values of the roll-off percentage to approximate the maximum and minimum frequencies in different ranges. The music feature matrix was normalized before further processing. Feature selection and reduction proceeded through the following steps: (1) removing constant columns, i.e., those with identical values in all rows; (2) removing quasi-constant features with a $90\%$ threshold; (3) removing columns with a Pearson correlation greater than 0.92. The number of extracted features is 1,665, and the number of selected features after the above steps is reduced to 174. \subsection{EEG Feature Extraction} The Emotiv EPOC+ provided 14 channels of EEG data (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4) at a 128 Hz sampling rate.
Raw EEG data were first high-pass filtered at 8 Hz and then low-pass filtered at 49 Hz in EEGLAB \cite{delorme2004eeglab}, as the alpha (8-13 Hz), beta (14-30 Hz), and gamma (31-50 Hz) power bands have been shown to be relevant to emotion \cite{kim2013review}. Noise from eye blinks (around 3 Hz) was rejected by the high-pass filter, and high-frequency noise such as eye movements, muscle activity, and environmental noise was rejected by the low-pass filter. Artifacts were rejected with the ``Reject continuous data'' function in EEGLAB, which uses a spectrum thresholding algorithm to detect artifacts. Artifact channels were detected with automatic channel rejection, which evaluates the joint probability of each recorded electrode with ``kurtosis'' and returns a measure per channel; we rejected channels with measured values higher than 20. \begin{table}[!tbp] \centering \caption{Extracted music features in the time- and frequency-domains} \label{tab:mus_fv} \begin{tabular}{l r} \toprule Features & Number\\ \midrule Zero-Crossing Rate & 1\\ Bandwidth order 2, 3, 4 & 3 \\ Mel-Scaled Spectrogram & 128\\ MFCC & 13 \\ Chromagram from STFT & 12\\ Root-Mean-Square & 1\\ Spectral Centroid & 1\\ Spectral Contrast & 7\\ Spectral Flatness & 1\\ Spectral Roll-off 5, 10, 85, 95 ($\%$) & 4\\ Tempogram & 384\\ \bottomrule \end{tabular} \end{table} Before feature extraction, we defined three time windows in order to determine the proper time length for EEG data collection in the online testing experiments. To accommodate the lowest frequency of interest, a half cycle should fit comfortably within the window \cite{budzynski2009introduction}. Normally 0.5 Hz is taken as the low end, which mandates a time window of at least one second; we doubled this to 2 seconds because some data with artifacts would be rejected during pre-processing. The other two window lengths are 5 and 10 seconds, respectively.
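The paper performs this filtering in EEGLAB; purely for illustration, an equivalent 8-49 Hz zero-phase band-pass can be sketched in Python with SciPy (the Butterworth filter and its order are our assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # Emotiv EPOC+ sampling rate in Hz


def bandpass_eeg(data, low=8.0, high=49.0, fs=FS, order=4):
    """Zero-phase band-pass over an (n_channels, n_samples) EEG array,
    mirroring the 8 Hz high-pass / 49 Hz low-pass applied in EEGLAB."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, data, axis=-1)  # filtfilt avoids phase distortion
```

With these cutoffs, a 3 Hz blink-band component is strongly attenuated while in-band activity such as 20 Hz beta passes through.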
Features were extracted for the three window lengths of EEG data separately. The Fast Fourier Transform (FFT) was employed with 3-, 2-, and 1-point moving averages corresponding to the time windows of $\{10, 5, 2\}$ seconds. We then re-grouped the frequencies into four bands: alpha (8-12 Hz), beta-1 (12-16 Hz), beta-2 (16-32 Hz), and gamma (32-48 Hz). The first set of features comprises the power differences of each band between the right and left channels --- AF4/AF3, F8/F7, F4/F3, and FC6/FC5 --- because differences between the right and left frontal lobes have been shown to relate to valence in many studies \cite{henriques1991left, davidson1992emotion, allen2004issues}. The second set of features is the band power of the electrodes placed on the temporal lobe, T7 and T8, and the neighboring sensors, P7 and P8, because the temporal lobe is directly related to the region where sound is processed \cite{gloor1997temporal}. The last set of features is the mean and standard deviation (std) of the power over all channels. Thus the total number of EEG features is $40$ $(10 \times 4)$, and they were normalized feature-wise before being combined with the music features. A Sequential Backward Selector (SBS) was utilized to reduce the dimension of the music features before concatenation with the EEG features, in order to balance the numbers of music and EEG features. SBS is a greedy search algorithm that removes features one by one to maximize performance until reaching the designated number of features. The three time windows $\{10, 5, 2\}$ of EEG features were concatenated separately with the selected features of the music piece in the same trial. Features for the arousal model and the valence model were selected by SBS independently from the concatenated features for each participant. \subsection{Classification Approach} Before testing, the training data were used to build the classification model and decide the optimal time length for EEG data collection.
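The 40-dimensional EEG feature vector described in the previous subsection can be assembled as follows (a sketch: the `band_power` dictionary interface is our assumption, and the FFT/moving-average step that produces the band powers is omitted):

```python
import numpy as np

BANDS = ["alpha", "beta1", "beta2", "gamma"]  # 8-12, 12-16, 16-32, 32-48 Hz
PAIRS = [("AF4", "AF3"), ("F8", "F7"), ("F4", "F3"), ("FC6", "FC5")]
TEMPORAL = ["T7", "T8", "P7", "P8"]


def eeg_feature_vector(band_power):
    """Build the 40 EEG features: per band, 4 right-left frontal power
    differences + 4 temporal-lobe powers + mean and std over all
    channels = 10 features, over 4 bands.

    `band_power[ch][band]` holds the power of channel `ch` in `band`.
    """
    feats = []
    for band in BANDS:
        for right, left in PAIRS:  # frontal asymmetry features
            feats.append(band_power[right][band] - band_power[left][band])
        for ch in TEMPORAL:  # temporal-lobe band powers
            feats.append(band_power[ch][band])
        all_powers = [band_power[ch][band] for ch in band_power]
        feats.append(float(np.mean(all_powers)))
        feats.append(float(np.std(all_powers)))
    return np.array(feats)
```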
As discussed, predicting continuous emotion variation values is impractical and infeasible. For example, $[-0.1, -0.1]$ and $[0.1, 0.1]$ are two v/a scores that are close in a regression model but distinct in terms of the direction of emotion variation: the former means ``the music makes me kind of blue'' (3rd class), while the latter means ``the music makes me slightly delighted'' (1st class). Besides, participants can hardly be certain about the exact values of either the designated or the evaluated v/a scores; they are more confident about the direction or range of valence and arousal changes. We chose binary classification for the valence and arousal models because the boundaries between emotions on the continuous model are undefined, and any hard boundary other than the x- and y-axes would split close v/a scores. For example, if a line separated the two relatively distant emotions ``Aroused'' and ``Happy'' in the first quadrant, it would also assign two similar emotions near that line to different classes. A Support Vector Machine (SVM) was employed due to its good generalization capability, computational efficiency, and proven performance for EEG-based emotion classification \cite{bazgir2018emotion}. The form of the arousal model is shown in Equation \ref{equ:1}: \begin{equation} y_a = C_a^{\top} [E_a, M_a] \label{equ:1} \end{equation} where $y_a$ represents the binary class of arousal, and $C_a$ is the parameter vector of the arousal model. The selected EEG and music feature vectors for the arousal model, represented as $E_a$ and $M_a$, are concatenated to learn the emotion variations. The valence model is expressed by the same equation. To manage the unbalanced observations collected for training, the regularization parameter was adjusted by the ratio of the binary labels.
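A minimal scikit-learn sketch of one axis model follows (the linear kernel is our assumption; `class_weight='balanced'` plays the role described in the text of adjusting the regularization by the label ratio):

```python
import numpy as np
from sklearn.svm import SVC


def train_axis_model(X, y):
    """Train one binary SVM (arousal or valence direction) on rows of
    concatenated [EEG, music] features.

    `class_weight='balanced'` rescales the per-class regularization
    inversely to the class frequencies, handling unbalanced labels.
    """
    clf = SVC(kernel="linear", class_weight="balanced")
    clf.fit(X, y)
    return clf
```

The same function would be called twice per participant, once with arousal labels and once with valence labels.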
It is worth pointing out that, since the data collection of the testing process was separate from the training process, all of the observations collected in the training experiments were used only to train the model. \subsection{Emotional Instability} In addition to predicting the direction of valence and arousal changes, a secondary study of this research concerns participants' emotional instability. For each participant, $25\%$--$35\%$ of the songs reappeared two to five times across different training experiments, and the participants' varying feelings towards these songs were employed to quantify their emotional instability. People listen to different songs at different times in real life, so we set a range for the number of repeated songs and the repetition times when randomly selecting songs in the training experiments; fixing the same number of repeated songs and repetition times for all participants would yield highly biased results, from which no general conclusion could be drawn about whether the emotional instability scores are indicative in future usage. To quantitatively represent and assess emotional instability, the number of transitions of arousal/valence scores was counted. The arousal or valence score of a song has two states, $i=0$ (negative) or $i=1$ (positive). We propose that the frequency of 0-to-1 and 1-to-0 transitions for the same song reveals the user's emotional instability. For example, the arousal score vectors $[0,0,0,1,1]$ and $[0,1,0,1,0]$ of a song have the same probabilities of 0 and 1, but the first vector more likely results from a change in the subject's taste after listening to this song three times, and is thus less indicative of emotional instability than the second.
We define the frequency of transitions as the t-score, calculated following Equation \ref{equ:2}: \begin{equation} t=\frac{1}{M}\sum_{m=1}^{M}\Big(\frac{1}{N-1}\sum_{n=2}^{N}|s_n - s_{n-1}|\Big) \label{equ:2} \end{equation} where the score vector $s$ is binary, so the sum of absolute differences of neighboring elements counts the transitions, normalized by the $N-1$ neighboring pairs. $N$ is the number of times the song with $id=m$ was listened to by a subject, and $M$ is the total number of repeated songs. Applying the equation, the two vectors mentioned above yield $0.25$ and $1.0$, respectively, consistent with our assumption that the latter subject shows a higher level of emotional instability than the former. To support the proposed emotional instability measure from a psychological perspective, we refer to the Big Five Personality Test \cite{Goldberg1992}, based on participants' self-reporting, on the Open-Source Psychometrics Project \cite{OSPP}. The Big Five Personality Test builds on the five-factor model of personality \cite{mccrae1999five}: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. Of the five factors, we use neuroticism because it describes emotional stability. Participants were required to finish the test within ten minutes, and their neuroticism scores were collected. The score is a percentile: the percentage of previous test-takers who scored lower; a higher score thus indicates a more stable emotional state. We compared the t-score with the neuroticism score across participants to measure their correlation and thereby argue that our measure can serve as an indicator of an individual's emotional instability. To match the instability scale of Equation~\ref{equ:2}, the neuroticism score is rescaled as $(1-score/100)$.
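A direct implementation of the t-score, normalizing each song's transition count by its $N-1$ neighboring pairs, which reproduces the quoted values of $0.25$ and $1.0$ (the function name is ours):

```python
def t_score(score_vectors):
    """Mean transition frequency over M repeated songs.

    `score_vectors` holds one binary score history (arousal or valence)
    per repeated song; each 0->1 or 1->0 transition counts once,
    normalized by the N-1 neighboring pairs of that song.
    """
    per_song = []
    for s in score_vectors:
        transitions = sum(abs(s[n] - s[n - 1]) for n in range(1, len(s)))
        per_song.append(transitions / (len(s) - 1))
    return sum(per_song) / len(per_song)

# The two example histories from the text:
t_score([[0, 0, 0, 1, 1]])   # -> 0.25
t_score([[0, 1, 0, 1, 0]])   # -> 1.0
```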
\subsection{Evaluations} Training accuracy of the valence and arousal models is reported separately via cross-validation. To validate the interactive performance between the user and the recommended music more directly, testing accuracy is determined by both models jointly: the match rate between the designated v/a score and the evaluated v/a score. In addition, to explore the feasibility of using fewer EEG electrodes, we selected the two electrodes over the temporal lobes, T7 and T8, and evaluated the training accuracy based on these two sensors alone. Lastly, we evaluated performance in a real-life scenario, carried out two months after the original testing experiments over five consecutive days to fit participants' schedules. The same five participants were recruited, and their models were trained on all previously collected data. Participants listened to both new and old songs (i.e., songs heard, or not, in the training and testing experiments) from the same database. Accuracy was evaluated with the same match rate as in the testing experiments, and the emotional instability score was updated with the new data. \section{Results} \subsection{Data Collection \& Feature Selection} Trials in the training and testing experiments with rejected channels were excluded; the remaining numbers of trials are shown in Table~\ref{tab:accu} as \textit{\#Train\_song} and \textit{\#Test\_song}. Each trial is an observation containing the collected EEG data and the song presented to the participant. The first three rows of Table~\ref{tab:accu} are statistics from the training experiments. \textit{\#Uniq\_song} is the number of songs without any repetition. \textit{Match\_rate} is the percentage of music pieces whose v/a annotations (the emotion expressed by the song) coincide with the user's evaluated v/a scores (the emotion evoked by the song).
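The match rate can be sketched as follows, mapping each v/a pair to its quadrant class (numbered counter-clockwise, so that $[0.1, 0.1]$ falls in the 1st class and $[-0.1, -0.1]$ in the 3rd, as in the earlier examples) and comparing annotated with evaluated classes; the helper names and the tie-breaking at zero are our assumptions:

```python
def va_class(valence, arousal):
    """Quadrant class of a (valence, arousal) pair, counter-clockwise:
    1 = +v/+a, 2 = -v/+a, 3 = -v/-a, 4 = +v/-a.
    (Scores of exactly zero are assigned arbitrarily to the positive side.)"""
    if valence >= 0:
        return 1 if arousal >= 0 else 4
    return 2 if arousal >= 0 else 3

def match_rate(annotations, evaluations):
    """Fraction of songs whose annotated class equals the evaluated class."""
    matches = sum(va_class(*a) == va_class(*e)
                  for a, e in zip(annotations, evaluations))
    return matches / len(annotations)

va_class(0.1, 0.1)    # -> 1  ("slightly delighted")
va_class(-0.1, -0.1)  # -> 3  ("kind of blue")
```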
The mean match rate of the five subjects was $0.4938 \pm 0.0504$, close to $0.5$; we therefore conclude that recommending music for a user's emotion variation based on music v/a annotations alone resembles a random occurrence. To visualize the match rate, we plotted the training data of \textit{s01} in Figure~\ref{dist}, where each data point is a song located by its v/a annotations and the marker denotes the class of the evaluated v/a scores. It shows that songs from the same quadrant could shift the participant's emotional state into any of the four classes, and a song with overlapping markers varied the participant's emotions in opposite directions in different experiments. The last three rows of Table~\ref{tab:accu} are statistics from the testing experiments. \textit{\#New\_song} is the number of songs never heard in the training experiments (new to the prediction model). The match rate calculated in the testing experiments is used to evaluate system performance and is called \textit{Test\_accu} to distinguish it from the match rate of the training experiments. After feature extraction there were 40 EEG features and 178 music features. Fifty music features were first selected by SBS based on the v/a annotations before being concatenated with the EEG features. Given the limited number of observations, the final number of features was set to 25 by applying SBS to the concatenated features for both the valence and arousal models. The EEG and music features retained for the valence and arousal models differ across participants: the numbers of remaining EEG features in the arousal model are $[6, 8, 10, 13, 7]$ and in the valence model $[8, 10, 11, 5, 14]$ for participants $s01$ to $s05$. The 25 most significant features may change when the model is re-trained with new data. \begin{table}[tbp!]
\centering \caption{Training and testing statistics of experiments} \begin{tabular}{c c c c c c} \toprule {\small\textit{}} & {\small\textit{s01}} & {\small \textit{s02}} & {\small \textit{s03}} & {\small \textit{s04}} & {\small \textit{s05}}\\ \midrule {\small \textit{\#Train\_song}} & 115 & 129 & 135 & 131 & 131\\ {\small \textit{\#Uniq\_song}} & 72 & 85 & 107 & 79 & 88\\ {\small \textit{Match\_rate}} & 0.522 & 0.512 & 0.504 & 0.405 & 0.526\\ {\small \textit{\#Test\_song}} & 44 & 40 & 40 & 41 & 43\\ {\small \textit{\#New\_song}} & 33 & 31 & 35 & 31 & 32 \\ {\small \textit{Test\_accu}} & 0.867 & 0.850 & 0.850 & 0.976 & 0.884\\ \bottomrule \end{tabular} \label{tab:accu} \end{table} \begin{figure}[tbp!] \centering \includegraphics[width=0.6\columnwidth]{figures/7CV.pdf} \caption{The validation accuracy of the arousal model with three time windows, evaluated by 7-fold cross-validation.} \label{7cv} \end{figure} \subsection{Classification Model} The arousal model was trained on $\{10, 5, 2\}$ seconds of EEG data separately to select the proper window length for the testing experiments. The result of the 7-fold cross-validation for the arousal model is shown in Figure \ref{7cv}. The F-value and P-value for the classification accuracies of the three window lengths are $0.321$ and $0.726$, which cannot reject the null hypothesis that the mean accuracies under the three conditions are equal; thus no window length is significantly better or worse in this experiment. Although longer windows contain more EEG information, the 5~s and 10~s windows do not outperform 2~s in representing the user's current emotional state, so we chose the shortest window for the testing experiments. The same conclusion holds for the valence model. In addition, we calculated the classification accuracy of the training experiments with only two channels (T7 and T8) and compared it with the 14-channel result to verify the possibility of using only a small number of electrodes.
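The window-length comparison above can be sketched as a one-way ANOVA; the per-fold accuracies below are illustrative stand-ins, not the study's data:

```python
from scipy.stats import f_oneway

# Illustrative 7-fold validation accuracies for the three window lengths
# (synthetic numbers chosen to have near-identical means).
acc_2s  = [0.70, 0.68, 0.72, 0.66, 0.71, 0.69, 0.73]
acc_5s  = [0.69, 0.71, 0.67, 0.70, 0.72, 0.68, 0.70]
acc_10s = [0.71, 0.69, 0.70, 0.68, 0.72, 0.70, 0.67]

# One-way ANOVA: null hypothesis = the mean accuracies are equal.
F, p = f_oneway(acc_2s, acc_5s, acc_10s)
if p > 0.05:
    print("no window length is significantly better or worse")
```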
In addition to the 8 EEG features from the T7 and T8 channels, we selected the 11 most significant music features by SBS and concatenated them with the EEG features. The results are shown in Table~\ref{tab:channel}. With only a slight decrease in classification accuracy for both the valence and arousal models, the system retains a reasonable level of accuracy with only 2 EEG channels, even though they carry one-seventh of the channels of the original 14-channel setup. \begin{figure}[tbp!] \centering \includegraphics[width=0.6\columnwidth]{figures/score_dist.pdf} \caption{Music excerpts listened to by \textit{s01} in the training experiments, located on Russell's valence-arousal model by their v/a annotations and marked by \textit{s01}'s evaluated v/a scores. Different markers at the same location represent the participant's different v/a scores for the same song. } \label{dist} \end{figure} \begin{table}[tbp!] \centering \caption{Comparison of validation accuracy between 14-channel (`FS') and 2-channel (`T78') EEG data} \label{tab:channel} \begin{tabular*}{245pt}{@{\extracolsep{\fill}}c c c|c c} \toprule {\small\textit{}} & \multicolumn{2}{c}{{\small\textit{Arousal}}} & \multicolumn{2}{c}{{\small\textit{Valence}}}\\ \hline \hline {\small\textit{}} & {\small\textit{FS}} & {\small \textit{T78}} & {\small \textit{FS}} & {\small \textit{T78}}\\ \hline {\small\textit{s01}} & 0.68 $\pm$ 0.19 & 0.64 $\pm$ 0.19 & 0.64 $\pm$ 0.22 & 0.63 $\pm$ 0.22\\ {\small\textit{s02}} & 0.74 $\pm$ 0.18 & 0.73 $\pm$ 0.22 & 0.67 $\pm$ 0.11 & 0.62 $\pm$ 0.18\\ {\small\textit{s03}} & 0.73 $\pm$ 0.20 & 0.72 $\pm$ 0.10 & 0.68 $\pm$ 0.14 & 0.57 $\pm$ 0.18\\ {\small\textit{s04}} & 0.70 $\pm$ 0.20 & 0.65 $\pm$ 0.21 & 0.65 $\pm$ 0.26 & 0.54 $\pm$ 0.19\\ {\small\textit{s05}} & 0.72 $\pm$ 0.19 & 0.67 $\pm$ 0.30 & 0.73 $\pm$ 0.14 & 0.62 $\pm$ 0.18\\ \bottomrule \end{tabular*} \end{table} Testing accuracy in the last row of Table~\ref{tab:accu} was calculated as the match rate between the designated v/a score and the
evaluated v/a score. One of the five songs in the MAs was selected and presented to the participant. Because users' EEG changes from moment to moment, we cannot know whether a song filtered out of the MAs would have matched the user at that moment; therefore, the false negative and true negative rates of our system cannot be calculated, and neither is essential to the system's function. The true and false positive rates correspond to \textit{Test\_accu} and ($1-Test\_accu$). The accuracy on newly recommended songs matters most for \textit{Test\_accu}, because such songs are new observations for the prediction model. There were ($\#Test\_song - \#New\_song$) old songs chosen for each participant in the testing experiments, some of which had varied influences during training yet matched the user's designation during testing. The match rate of old songs was 100\% for all participants except for one incorrect prediction for \textit{s02}: a song that \textit{s02} had listened to three times, producing two different classes of evaluated v/a scores. This shows that the system may recommend incorrect old songs as the user's tastes change and the EEG information remains open-ended; however, this can be mitigated by updating the prediction model with new data.
\subsection{Emotion Instability} \begin{table} \centering \caption{Emotion instability of participants} \label{tab:randomness} \begin{tabular}{c c c c c c} \toprule {\small\textit{}} & {\small\textit{s01}} & {\small \textit{s02}} & {\small \textit{s03}} & {\small \textit{s04}} & {\small \textit{s05}}\\ \midrule {\small\textit{t\_Arousal}} & 0.24 & 0.15 & 0.22 & {\textbf{0.40}} & {\textbf{0.07}}\\ {\small \textit{t\_Valence}} & 0.19 & 0.16 & 0.20 & 0.21 & 0.22\\ {\small \textit{Big-5}} & 0.38 & 0.43 & 0.38 & {\textbf{0.52}} & {\textbf{0.05}}\\ \bottomrule \end{tabular} \end{table} The t-scores calculated by Equation~\ref{equ:2} for the arousal and valence models are shown as \textit{t\_Arousal} and \textit{t\_Valence} in Table \ref{tab:randomness}, and the collected neuroticism scores from the Big Five Personality Test are shown as \textit{Big-5}. The correlation coefficient between \textit{t\_Arousal} and \textit{Big-5} is 0.808, and between \textit{t\_Valence} and \textit{Big-5} it is -0.473. The participant with the highest \textit{t\_Arousal} also received the highest Big-5 score, and the participant with the lowest \textit{t\_Arousal} the lowest Big-5 score; moreover, participants $s01$ and $s03$, whose \textit{t\_Arousal} values are closest, received the same Big-5 score. The p-value between \textit{t\_Arousal} and \textit{Big-5} is 0.073, larger than 0.05 but low enough to suggest the association is not random. However, the participant pool is small and the songs chosen in the experiments are limited, so we cannot draw a general conclusion about the relation between \textit{t\_Valence} and the Big-5 score. We tentatively hypothesize that the t-score can serve as a referable indicator in emotion-based applications. \subsection{Results of Real-life Scenarios} The data collected in the training and testing experiments were used to train the emotion variation prediction model for the first day of the real-life scenarios.
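The reported correlation between \textit{t\_Arousal} and the rescaled \textit{Big-5} scores can be reproduced directly from the values in Table \ref{tab:randomness}:

```python
from scipy.stats import pearsonr

# Values from the emotion-instability table (t_Arousal and Big-5 rows),
# participants s01 to s05.
t_arousal = [0.24, 0.15, 0.22, 0.40, 0.07]
big5      = [0.38, 0.43, 0.38, 0.52, 0.05]

r, _ = pearsonr(t_arousal, big5)
print(round(r, 3))   # -> 0.808
```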
The model was then updated each day with the newly collected data, as were the emotional instability values. There were two experiments each day; at the beginning of each, participants designated their desired direction of emotion variation and stayed still for EEG collection. A list of 10 to 13 songs was selected by iterating steps (2) to (5) of the testing experiments and played, and participants evaluated their v/a scores after listening to each song. Results are shown in Figure \ref{fig:Robu} for every participant, with Day 0 showing the results of the testing experiments. The dark cyan line is the testing accuracy of the day. The emotional instability (t-score) was re-calculated with the new data and is plotted as the yellow line. Since the proportion of new songs influences both system performance and t-scores, in addition to participants' ever-changing EEG data, we also plot the ratio of new songs per day as blue bars for the performance analysis. There are several drops in testing accuracy in Figure \ref{fig:Robu}, covering mismatches of both new and old songs. When a drop in accuracy was accompanied by an increase in emotional instability (e.g., day 3 of \textit{s02} and \textit{s03}; day 2 of \textit{s05}), the participant had given the old song different v/a scores than on the previous listening. In most cases, however, the emotional instability value was averaged down by participants' consistent scores for the repeated songs. The mean t-scores over five days for the participants are $[0.206 \pm 0.0009, 0.118 \pm 0.0002, 0.168 \pm 0.0002, 0.241 \pm 0.0017, 0.106 \pm 0.0002]$. As expected, all participants' instability values decreased, except for a subtle increase for \textit{s05}, who has the lowest emotional instability of the group. The small variance of the emotional instability values indicates that the songs chosen for participants were not biased; otherwise the values would change dramatically over time.
The mean match rates over five days for the participants are $[0.825 \pm 0.0054, 0.922 \pm 0.0046, 0.794 \pm 0.0015, 0.973 \pm 0.0013, 0.862 \pm 0.0035]$. The accuracy is challenged by open-ended EEG data, multiple choices of the user's designation, and new songs, yet remains near or above $0.8$, supporting a satisfactory emotion regulation experience with four choices of emotion variation. Unlike the testing experiments, where participants were asked to designate different classes of emotion variation so as to cover all four classes, in the real-life scenarios they were free to designate their desired direction, and all of them chose the first and fourth classes (positive valence changes). It is not surprising that people rarely want to change their valence negatively, especially in an experimental environment; still, we speculate that people might choose to decrease valence when they feel exhilarated but need to be more ``calm'' and ``serious'', and we did not build the system around assumptions about people's particular styles of emotion change. Meanwhile, the prediction model is challenged by the unbalanced data collected from the real-life scenarios. Even though the regularization parameter is updated with the ratio of binary labels, prediction accuracy will degrade as observations become more unbalanced. Five consecutive days of real-life scenarios are insufficient to reveal the long-term performance under increasingly unbalanced data caused by the missing negative-valence designations. A strategy that compensates for insufficient coverage of all emotional classes without requiring users to listen to songs they did not designate should be discussed in further studies. \begin{figure}[tbp!]
\begin{subfigure}{.32\linewidth} \centering \includegraphics[width=\linewidth,height=.69\linewidth]{figures/R_s01.pdf} \end{subfigure}% ~ \begin{subfigure}{.32\linewidth} \centering \includegraphics[width=\linewidth,height=.69\linewidth]{figures/R_s02.pdf} \end{subfigure} ~ \begin{subfigure}{.32\linewidth} \centering \includegraphics[width=\linewidth,height=.69\linewidth]{figures/R_s04.pdf} \end{subfigure} ~ \begin{subfigure}{.32\linewidth} \centering \includegraphics[width=\linewidth,height=.69\linewidth]{figures/R_s06.pdf} \end{subfigure} ~ \begin{subfigure}{.32\linewidth} \centering \includegraphics[width=\linewidth,height=.69\linewidth]{figures/R_s07.pdf} \end{subfigure} ~ \caption{Testing accuracy (dark cyan line), proportion of new songs (blue bars), and emotional instability (yellow line) for each day of the real-life scenario. Day 0 is the result of the testing experiments.} \label{fig:Robu} \vspace{-15pt} \end{figure} \subsection{Summary of Questionnaire} We created a questionnaire on Google Forms with ten questions about users' experience of using music to regulate their emotions. 54 subjects responded, including the five participants in our experiments; $38.8\%$ were female, $79.6\%$ were aged 20 to 29 years, and $18.1\%$ were aged above 30. For the basic question of how much they like music, rated from 5 (very much) to 1 (not at all), $50.9\%$ like it very much, $22.6\%$ like it (score 4), and $22.6\%$ chose 3 (moderate). Among all subjects, $86.5\%$ have experienced different feelings towards the same song (familiarity aside). The question of how well the recommendations of a music app match their tastes and moods, rated from 5 (very well) to 1 (not at all), drew a 3 from $46.7\%$ of subjects, while $17.8\%$ were unsatisfied (scores 2 and 1). When asked how annoyed they are by a recommended music piece that they do not like, $86.7\%$ of subjects reported being annoyed to some extent, and the remaining subjects did not feel annoyed at all.
The question of how much they would like to try a music recommendation system that helps them regulate their emotional states, rated from 5 (very much) to 1 (not at all), drew 4 or 5 from $60.3\%$ of subjects and 1 or 2 from $13.2\%$. For the acceptance of two additional EEG sensors bonded with earphones, rated on the same scale, $60.4\%$ of subjects chose 4 or 5 and $16.9\%$ chose 1 or 2. Most people enjoy music, and their feelings about it are diverse; some may therefore feel annoyed when a platform recommends songs based on other users' feedback and their own old feelings. Our system would particularly benefit users who are willing to regulate emotions through music but do not trust streaming platforms, or who want to connect with their own neural activity. It is not surprising that people with stable emotions, like $s05$, rarely have different feelings towards the same song and may not even need emotion regulation. However, beyond emotion regulation, the system is open to any music and can therefore recommend new songs based on users' EEG data. And most people involved in our questionnaire would not refuse to put on a two-electrode EEG device if it could help them with their emotional problems effectively. \section{Discussions of Limitations and Future Work} \subsection{Music Dynamics and Influences} The results of the real-life scenario evaluations showed the feasibility of recommending a list of songs from a one-time EEG collection at the beginning of a session. However, the number of songs per list could be further explored, since our choice was based on the number of trials in a training experiment. Because people's EEG changes dynamically with factors such as the music they are listening to and the situation they are in, a single EEG collection can hardly remain representative across many songs.
Collecting EEG data before selecting each song would thus be more conservative, which underlines an advantage of our system: its flexibility towards people's ever-changing emotional states. Moreover, the songs users listen to in daily life are much longer than the excerpts used in our experiments, and users may enjoy the beginning of a piece but come to dislike the rest, or vice versa, which weakens the representativeness of the averaged features and degrades system performance. In the future, features representing the high-level differences between sections within a song could support a finer-grained solution; still, this requires users' dynamic self-assessments during a song, which means considerably more workload for users. Another concern is the impact of music entrainment, which is reflected in users' brainwaves and unconsciously interferes with their emotional states. These effects can persist after a song has finished and influence the EEG data collected between two songs. We treated the aftereffects of music as negligible in this study and did not model their impact on the system; what exactly is reflected in users' brainwaves, and how long the effects last, remain open questions for the study of music-evoked emotions. \subsection{EEG Interference and Interruptions} Music features are extracted offline from a stable source, whereas EEG data is collected online and is easily disturbed, which can mislead the system. A strategy is needed to ensure that EEG data collected online is reliable, and to reject it when it is defective. For now, we only employ pre-processing techniques to reject data and channels affected by artifacts. As more EEG data is collected, users' personal EEG profiles could be developed as a reference for evaluating the quality and validity of newly collected data.
Besides, the computational efficiency of a real-time system affects the user experience through possible interruptions, and an interruption can be salient and unacceptable when users are focusing on music for emotion regulation. The running time of music selection in the testing experiments was 0.763 seconds (on a 3.4 GHz Intel Core i5 processor), excluding the time for EEG collection, of which EEG processing by EEGLAB accounts for approximately $80\%$. Decreasing the EEG collection time would significantly relieve the computational workload: since the slowest EEG component we employ is 8 Hz, a collection time of 0.25 seconds (two full cycles) should theoretically suffice, which should be verified in future work. Another point is that interruptions become more acceptable when users know the system is acquiring EEG signals during that time. \subsection{Emotion Models} Having users evaluate their emotion variations instead of definitive emotional states has several clear benefits: it relates the event and the user's EEG in one equation, it is easier for users to report after listening to a song, and it supports multiple styles of emotion regulation. However, since we did not employ an emotion recognition model, the EEG information does not reflect any particular emotional state, so we cannot return feedback about the user's emotional state from the collected EEG signals. We propose, though, that with sophisticated music information retrieval technologies and more user data involved, the interaction between users and music will provide significant information for neurophysiological studies. Furthermore, four classes of emotion variation may be insufficient for users' demands for different levels of emotional change; for example, a user may want valence increased substantially but arousal increased only slightly.
Therefore, emotion variation levels on top of the four basic classes could be further explored for users' personalized, specific emotional demands. Lastly, users' comprehension of their emotions and their music tastes may gradually change over time. As more data is collected, the system should discount old data and give more weight to the latest data; an online, adaptive preference learning method should be developed in place of the traditional, static classifier. \subsection{System Generalization} The system is highly personalized for each participant, but the experiments required to train a model before using the online system are time-consuming: more than one hundred trials per participant in our study. A generalization method for new users is therefore strongly needed in future work. One possible solution is to use advanced training schemes such as pre-training and fine-tuning, which let a new customized model benefit from previously trained tasks. Another is to categorize new users into groups by presenting discrepant songs: for users with similar music tastes or emotion regulation styles, such songs can distinguish their general styles, after which the model can be shared and updated among users in the same category. Furthermore, since the system is open to all music pieces, it can serve as a second-step recommender over a user's personal playlist or a playlist recommended by a streaming platform. For example, for a playlist a user selects in iTunes by mood (like `Sleep'), the system would choose music pieces from that library based on the user's EEG information, setting the designated emotion variation to negative arousal and positive valence and filtering out songs that would raise arousal and cause sleeplessness.
\section{Conclusion} The interaction between users and music paves the way to comprehending people's diverse and ever-changing emotions and assists them in regulating emotions in daily life. To our knowledge, no existing music-based emotion regulation study has addressed the limitation of traditional emotion recognition models by instead predicting emotion variations, which relate the song to the user's current emotional state. Our system bridges the gap between theoretical studies and real-life applications and demonstrates system performance under all four choices of users' desired emotion variations. The robustness of the system was tested with new songs on different days spanning a period of two months. From the user's perspective, the system does not return deterministic feedback; it follows the user's intent and emotional state in presenting stimuli carefully, assisting users in adjusting their emotions while leaving room for them to comprehend the emotional changes. We believe users benefit from such considerate, user-oriented interaction, which genuinely accounts for their varying emotions throughout the process. \section*{Acknowledgment} This material is based upon work supported by the National Science Foundation under Grant No. CNS-1840790. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. \bibliographystyle{unsrtnat}
astro-ph/9901252
\section{Introduction} The study of resolved stellar populations provides a powerful tool to follow galaxy evolution directly in terms of physical parameters such as age (star formation history, SFH), chemical composition and enrichment history, initial mass function, environment, and dynamical history of the system. Photometry of individual stars in at least two filters, together with the interpretation of Colour-Magnitude Diagram (CMD) morphology, gives the least ambiguous and most accurate information about variations in star formation within a galaxy back to the oldest stars. Some of the physical parameters that affect a CMD are strongly correlated, such as metallicity and age, since successive generations of star formation may be progressively enriched in the heavier elements. Careful, detailed CMD analysis is a proven, uniquely powerful approach (e.g., Tosi {\it et al.} 1991; Tolstoy \& Saha 1996; Aparicio {\it et al.} 1996; Mighell 1997; Dohm-Palmer {\it et al.} 1997, 1998; Hurley-Keller {\it et al.} 1998; Gallagher {\it et al.} 1998; Tolstoy {\it et al.} 1998) that benefits enormously from the high spatial resolution of $HST$, to the point that beyond about the distance of the Magellanic Clouds, ground-based CMD analysis is only worthwhile in ideal conditions. Because of the tremendous gains in data quality, and thus understanding, which have come from recent high-quality CMDs of nearby galaxies, it is now clearly worthwhile and fundamentally important to complete a survey of the resolved stellar populations of all the galaxies in our Local Group (LG). This will provide a uniform picture of the global star formation properties of galaxies with a wide variety of mass, metallicity, gas content, etc. (e.g. Mateo 1998), and will yield a sample that ought to reflect the SFH of the Universe, giving results which can be compared to high-redshift survey results (e.g., Madau {\it et al.} 1998).
Initial comparisons suggest these different approaches do not yield the same results (Fukugita {\it et al.} 1998), but the errors are large. \begin{figure} \psfig{file=fig1.ps,height=10cm,width=10cm} \vskip-2.5cm \caption{ Isochrones (Bertelli {\it et al.} 1994) for a single metallicity (Z=0.001) and a range of ages, as marked in Gyr at the MSTOs. Isochrones were designed for single-age globular cluster populations and are best avoided in the interpretation of composite populations, which are better modeled using Monte-Carlo techniques ({\it e.g.} Tolstoy 1996). They are used here for the purpose of illustration. } \vskip-0.5cm \end{figure} \section{Colour-Magnitude Diagram Analysis} Much of our detailed knowledge of the SFHs of galaxies more than 1~Gyr ago comes from the Milky Way and its nearby dSph satellites or from $HST$ CMDs. To date, the limiting factors have been the crowding and resolution limits on accurate stellar photometry from the ground. $HST$ provides a unique opportunity to extend beyond our immediate vicinity and encompass the whole LG. To date $HST$ has observed the resolved stellar populations in a variety of nearby galaxies (e.g., dE, NGC~147, Han {\it et al} 1997; Irr, LMC, Geha {\it et al.} 1998; Spiral, M~31, Holland {\it et al.} 1996; BCD, VII~Zw~403, Lynds {\it et al.} 1998; dI, Leo~A, Tolstoy {\it et al.} 1998; dSph, Leo~I, Gallart {\it et al.} 1998). For every LG galaxy at which $HST$ has pointed, we have learnt something new and fundamentally important that was not discernible from ground-based images, especially in the case of the small dIs. The small dIs, like the dSphs, appear to exhibit a wide variety of SFHs. These results have affected our understanding of galaxy formation and evolution by demonstrating the importance of episodic star formation in nearby low-mass galaxies.
The larger galaxies in the LG show evidence of sizeable old halos, which appear to represent the majority of star formation in the LG by mass, although the difficulty of distinguishing between the effects of age and metallicity in a CMD leaves a degree of uncertainty in the exact age distribution of these halos. It is important that detailed comparative studies of all galaxies in the LG are made in the future, including the M~31 and M~33 halo populations, to obtain a picture of the fossil record of star formation in galaxies of various types and sizes, and to identify both commonalities and differences in their SFHs across the LG. Beyond a better understanding of galaxy evolution, this will enable more accurate comparison with cosmological surveys. Stellar evolution theory provides a number of clear predictions, based on relatively well understood physics, of the features expected in CMDs for stellar populations of different ages and metallicities (see Figure~1). A number of clear indicators of varying star formation rates ({\it sfr}) at different times can be combined to obtain a very accurate picture of the entire SFH of a galaxy. \noindent{{\it Main Sequence Turnoffs (MSTOs)}}: If we can obtain deep enough exposures of the resolved stellar populations in nearby galaxies, we gain the {\it unambiguous age information that comes from the luminosity of MSTOs}. Along the Main Sequence itself, different age populations overlie each other completely, making the interpretation of the Main Sequence luminosity function complex, especially for older populations. The MSTOs, however, do not overlap in this way and hence provide the most direct, accurate information about the SFH of a galaxy. MSTOs can clearly distinguish between bursting and quiescent star formation ({\it e.g.} Hurley-Keller {\it et al.} 1998).
\noindent{{\it The Red Giant Branch (RGB)}}: The RGB is a very bright evolved phase of stellar evolution, in which the star burns H in a shell around its He core. For a given metallicity the RGB red and blue limits are given by the young and old limits (respectively) of the stars populating it (for ages $~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}}$1~Gyr). As a stellar population ages the RGB moves to the red; for constant metallicity, the blue edge is determined by the age of the oldest stars. However, increasing the metallicity of a stellar population produces exactly the same effect as aging: it also makes the RGB redder. This is the (in)famous age-metallicity degeneracy problem. The result is that if there is metallicity evolution within a galaxy, it is impossible to uniquely disentangle effects due to age and metallicity on the basis of the optical colours of the RGB alone. \begin{figure} \vskip0.5cm \psfig{file=fig2.ps,height=7cm,width=10cm} \caption{ In the top panel are plotted the results of Caputo, Castellani \& Degl'Innocenti (1995) for the variation in the {\it extent} in M$_V$ magnitude of a RC with age, for a metallicity of Z=0.0004. We plot the magnitude of the upper and lower edge of the RC versus age, in Gyr. We can thus clearly see that this extent is a strong function of the age of the stellar population. Also plotted is M$_V$ of the zero age HB against age. In the lower panel are plotted the results of running a series of Monte-Carlo simulations (Tolstoy 1996) using stellar evolution models at Z=0.0004 (Fagotto {\it et al.} 1994) and counting the number of RC and RGB stars in the same part of the diagram, from which we determine the expected ratio of RC/RGB stars versus age.
} \vskip-0.5cm \end{figure} \noindent{{\it The Red Clump/Horizontal Branch (RC/HB)}}: Red Clump (RC) stars and their lower mass cousins, Horizontal Branch (HB) stars, are core helium-burning stars, and their luminosity varies depending upon age, metallicity and mass loss (Caputo {\it et al.} 1995). The extent in luminosity of the RC can be used to estimate the age of the population that produced it (Caputo {\it et al.} 1995), as shown in the upper panel of Figure~2. The ratio t$_{RC}$/t$_{RGB}$ is a decreasing function of the age of the dominant stellar population in a galaxy, and the ratio of the number of stars in the RC and the HB to the number on the RGB is sensitive to the SFH of the galaxy (Tolstoy {\it et al.} 1998; Han {\it et al.} 1997). Thus, the higher the ratio, N(RC)/N(RGB), the younger the dominant stellar population in a galaxy, as shown in the lower panel of Figure~2. This age measure is {\it independent of absolute magnitude and hence distance}, and indeed these properties can be used to determine an accurate distance measure on the basis of the RC (e.g. Cole 1998). The presence of a large HB population, on the other hand (high N(HB)/N(RGB) or even N(HB)/N(MS)), is caused by a predominantly much older ($>$10~Gyr) stellar population in a galaxy. The HB is the brightest indicator of the very lowest mass (hence oldest) stellar populations in a galaxy. \noindent{{\it The Extended Asymptotic Giant Branch (EAGB)}}: The temperature and colour of the EAGB stars in a galaxy are determined by the age and metallicity of the population they represent (see Figure~3). However, there remain a number of uncertainties in the comparison between the models and the data (Gallart {\it et al.} 1994; Lynds {\it et al.} 1998). It is very important that more work be done to enable a better calibration of these very bright indicators of past star formation events.
In Figure~3, theoretical EAGB isochrones (Bertelli {\it et al.} 1994) are overlaid on the HST CMD of the post-starburst BCD galaxy VII~Zw403, and we can see that a large population of EAGB stars is a bright indicator of a past high {\it sfr}, with a luminosity spread that depends upon metallicity and the age of the star formation episode. That the RGB+AGB population of VII~Zw403 looks so similar to that of NGC~6822 (Gallart {\it et al.} 1994) suggests that dI and BCD galaxies can easily transform into each other on very short time scales. \begin{figure} \vskip-7cm \psfig{file=fig3.ps,height=14cm,width=14cm} \vskip-1cm \caption{ EAGB isochrones (Bertelli {\it et al.} 1994) for metallicities Z=0.001 and Z=0.004 are shown superposed on the observed CMD of VII~Zw403 (Lynds {\it et al.} 1998). For each metallicity the isochrones are for populations of ages 1.3, 2, 3, and 5 Gyr, with the youngest isochrone being the brightest. This illustrates the potential to discriminate between the age and metallicity of older populations, if the models can be better calibrated to a known SFH, {\it e.g.} for a nearby EAGB-rich system like NGC~6822 where old MSTOs are observable. } \vskip-0.5cm \end{figure} \section{The Connection to High Redshift} Star-forming dI galaxies represent the largest fraction by number of galaxies in the LG, and deep imaging surveys make it clear that this number-count dominance appears to {\it increase} throughout the Universe with lookback time (Ellis 1997). The large numbers of ``Faint Blue Galaxies'' (FBG) found in deep imaging-redshift surveys appear to be predominantly intermediate redshift ($z<1$, or a look-back time out to roughly half a Hubble time), intrinsically {\it small} late type galaxies, undergoing strong bursts of star formation (Babul \& Ferguson 1996). Thus we can assume that the dIs we see in the LG are a cosmologically important population of galaxies which can be used to trace the evolutionary changes in the {\it sfr} of the Universe with redshift.
The ``Madau-diagram'' (Madau {\it et al.} 1998) uses the results of redshift surveys to plot the SFH of the Universe against redshift. It predicts that most of the stars that have formed in the Universe have done so at redshifts {\it z} $\sim 1 - 2$. If it is correct, then the MSTOs from the most active period of star formation in the Universe will be easily visible as 7$-$9~Gyr old MSTOs in the galaxies of the LG (e.g. Rich 1998). Determining accurate SFHs for all the galaxies in the local Universe using CMD analysis provides an alternative route to, and thus a check upon, the Madau-diagram. Recent detailed CMDs of several nearby galaxies and self-consistent grids of theoretical stellar evolution models have transformed our understanding of galactic SFHs. Most of the dI CMDs to date suggest that the {\it sfr} was higher in the past, although the peak in the {\it sfr} has occurred at relatively recent times as defined by the Madau-diagram (the peaks occur at z=0.1$-$0.2, within the first bin). The Mateo review of {\it all} LG dwarf galaxies (Mateo 1998) and studies of M31 and our Galaxy (Renzini 1998), on the other hand, suggest that the LG had its most significant peak in star formation $>$10~Gyr ago (i.e. at z~$>$~3), the epoch of halo formation. Many galaxies contain large numbers of RR~Lyr variables (or HB) and/or globular clusters which can only come from a significantly older population. It is possible that dI galaxies have quite different SFHs to the more massive galaxies. Thus, although the small dI galaxies in the LG have been having short, often intense, bursts of star formation in comparatively recent times, this is not representative of the majority of the star formation in the LG. However, direct observations of the details of the oldest star forming episodes in any galaxy are limited at best. This is an area where advanced CMD analysis techniques have been developed (e.g.
Tolstoy \& Saha 1996) and telescopes with sufficient image quality exist, and the required deep, high-quality imaging observations are waiting to be made. \begin{figure} \psfig{file=fig4.ps,height=9cm,width=11cm} \caption{ In the upper panel is a {\it rough} summation of the {\it sfr}s of the LG dwarf galaxies with time (data taken from Mateo 1998) to obtain the integrated SFH of all the LG dwarfs. The redshifts corresponding to lookback times are computed for H$_0 = 50$, q$_0 = 0.5$. In the middle panel, a wild extrapolation is made; the assumption that the integrated SFH of the LG {\it dwarfs} in the upper panel is representative of the Universe as a whole. The resulting star formation density of the LG versus redshift is plotted using the same scheme as Madau {\it et al.} (1998) and Shanks {\it et al.} (1998); these two models are also plotted, and the LG curve is {\bf arbitrarily}, and with a very high degree of uncertainty, normalised to the other two models. In the lowest panel, the LG dwarf {\it sfr} as a fraction of the total star formation integrated over all time is plotted versus redshift, and the Madau curve is also replotted in this form, for the volume of the LG. This highlights the totally different distribution of star formation with redshift found from galaxy redshift surveys and what we appear to observe in the stellar population of the LG. } \vskip-0.5cm \end{figure} Figure~4 summarizes what can currently be said about the SFH of the LG and how this compares with the Madau {\it et al.} (1998) and Shanks {\it et al.} (1998) redshift survey predictions. We have not included the dominant large galaxies in the LG, the Galaxy and M~31, but the SFH of the combined dwarfs is broadly consistent with what is known about the SFH of these large systems. They have, as far as we can tell, had a global {\it sfr} that has been gradually but steadily declining since their (presumed) formation epoch $>$10~Gyr ago.
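The mapping between lookback time and redshift used in Figure~4 follows directly from the adopted cosmology (H$_0 = 50$, q$_0 = 0.5$, i.e. an Einstein--de Sitter universe). A minimal sketch of that conversion (the function name is ours, for illustration only):

```python
import math

H0 = 50.0                          # km/s/Mpc, as adopted in Figure 4
KM_PER_MPC = 3.0857e19             # km in one Mpc
GYR_S = 3.156e16                   # seconds in one Gyr
# Age of an Einstein-de Sitter (q0 = 0.5) universe: t0 = 2/(3 H0)
t0 = 2.0 / (3.0 * H0 / KM_PER_MPC) / GYR_S   # in Gyr, ~13 Gyr for H0 = 50

def lookback_to_z(t_lb_gyr):
    """Redshift for a given lookback time (Gyr), q0 = 0.5 cosmology.

    In EdS, t(z) = t0 (1+z)^(-3/2), so t_lb = t0 [1 - (1+z)^(-3/2)].
    """
    return (1.0 - t_lb_gyr / t0) ** (-2.0 / 3.0) - 1.0
```

With these parameters the age of the Universe is about 13~Gyr, so a 7$-$9~Gyr lookback time corresponds to $z \approx 0.7$$-$$1.2$, bracketing the peak of the Madau-diagram.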
There is currently no evidence for a particular peak in {\it sfr} around 7$-$9~Gyr ago or any other time, as predicted by the Madau-diagram for either large galaxies or dwarfs. The dominant population by mass in the LG dwarfs is the dEs; if the dIs are singled out, a population with a star formation peak in the Madau-diagram range can be found. But at present the statistics are too limited to determine the typical fraction of old population in LG dIs. There is clearly a total mismatch between the SFH of the LG and the results from the redshift surveys. This might hint at serious incompleteness problems in high redshift galaxy surveys, which appear to miss passively evolving systems in favour of small bursting systems. The recent HST CMD results give much cause for optimism that we can hope to sort out in detail the SFH of all the different types of galaxies within the LG, if only HST would point at them occasionally. There is also great potential for ground-based imaging using high quality imaging telescopes with large collecting areas, such as the VLT clearly will be.
\section{Introduction} The dipole anisotropy of the Cosmic Microwave Background (CMB) radiation is generally interpreted as a Doppler effect due to the motion of the Sun with respect to the CMB rest-frame. The velocity of the Local Group (LG), 627$\pm$22\,\unit{km~s\mone}\ towards \lb{276\pm3}{30\pm3}, is now well determined from COBE (Kogut et al.\ 1993). Less well known, however, is the depth and degree to which nearby galaxies share in this motion and, by implication, the scales and amplitudes of the mass fluctuations responsible for the flow. To a depth of $\sim{}6000$\,\unit{km~s\mone}, a rough consensus has emerged from recent peculiar velocity surveys of galaxies. For instance, Giovanelli et al. (1998) find a flow (in the CMB frame) of $200\pm65$\,\unit{km~s\mone}\ towards \lb{295}{25}, from an I-band Tully--Fisher survey, while the Mark III velocity compilation yields $370\pm110$\,\unit{km~s\mone}\ towards \lb{306}{13} (Dekel et al.\ 1998, in prep.). Beyond this depth, however, the situation is much less clear. Lauer \& Postman (1994, hereafter LP), using a photometric distance indicator for brightest cluster galaxies, found a $689\pm178\,\unit{km~s\mone}$ bulk motion (towards \lb{343}{52}) for an all-sky sample of 119 Abell/ACO clusters to 15000\,\unit{km~s\mone}\ depth. Currently-popular cosmological models have too little large-scale power to generate coherent flows on such large scales (Feldman \& Watkins 1994; Strauss et al.\ 1995; Jaffe \& Kaiser 1995). The LP result has been challenged by subsequent peculiar velocity surveys (Riess et al.\ 1995; Giovanelli et al.\ 1996; Hudson et al.\ 1997, hereafter H97; Giovanelli et al.\ 1997; M\"{u}ller et al.\ 1998; Dale et al.\ 1998). However, with the exception of Dale et al.\ (1998), none of these studies approach the depth and sky coverage of the LP survey. In this {\em Letter\/}, we report the detection of a significant coherent flow from the SMAC survey of cluster peculiar motions. 
The survey comprises 699 early-type galaxies in 56 clusters mainly within 12000\,\unit{km~s\mone}. A more detailed description and analysis of the survey will be presented in a forthcoming series of papers. \section{Data \& Method} The distance indicator used in this study is the Fundamental Plane (FP) of early-type galaxies (Davis \& Djorgovski 1987; Dressler et al. 1987). The FP relates the effective (half-light) radius, $\log R_e$ (the distance-dependent quantity), the mean surface brightness within this radius, $\expec{\mu}_e$, and the central velocity dispersion, $\log \sigma$. The SMAC cluster sample consists of new data for 40 clusters (to be reported in future papers), supplemented with data from the literature for 16 clusters previously studied in H97. All but 8 of these are Abell/ACO clusters, and all but 5 have $cz_{\hbox{$\odot$}} < 12000 \unit{km~s\mone}$. The median distance of the SMAC sample is $\sim{}8000 \unit{km~s\mone}$. The number of early-type galaxies observed, per cluster, is in the range 4 -- 56, with a median of 8. New velocity dispersions and Mg$_2$ linestrengths were obtained from the Isaac Newton 2.5m and Anglo-Australian 3.9m telescopes. We followed the homogenization procedure of Smith et al.\ (1997) to bring new spectroscopic data and existing data from the literature onto a common system. Corrections of up to 0.03 dex in $\sigma$ are derived for each system, but typical uncertainties in these corrections are only 0.005--0.008 dex, corresponding to 1.6--2.6\% systematic uncertainty in distance, per observing run. Of the 699 galaxies in the final SMAC sample, velocity dispersion data is drawn wholly from new observations for 41\% of the sample. For 21\% of the sample, the final $\sigma$ derives from both new data and published measurements. For the remaining 38\%, the dispersions are derived from previously published data. 
New R-band photometric data were obtained from the Jacobus Kapteyn 1.0m and Cerro Tololo Inter-American Observatory 0.9m telescopes. Effective radii and surface brightnesses were determined by fitting a one-dimensional $R^{1/4}$-law profile to the circular aperture photometry, with corrections for seeing, cosmological effects and Galactic extinction (using the maps of Schlegel, Finkbeiner and Davis 1998; hereafter SFD). Additional photometric data have been drawn from the literature and carefully combined with newly obtained parameters. Our new photometry provides data for roughly half of the 699 galaxies in the final SMAC sample. Our method for determining cluster distances and bulk flows follows H97. We summarize the important points here. To obtain cluster distances, we use the inverse form of the FP, i.e.\ we regress on the distance-independent quantity $\log \sigma$. This regression is insensitive to photometric selection effects (e.g.\ selection on magnitude or diameter; see H97, Strauss \& Willick 1995). The FP slopes and scatter for the SMAC sample are consistent with previous results for the inverse FP (H97). The resulting distance error is 21\% per galaxy, and 3\% -- 11\% per cluster, with a median of 7\%. We make a correction for Malmquist bias under the assumption that clusters are drawn from a homogeneous underlying density field. Due to the large number of objects per cluster, the Malmquist bias corrections are small (typically $\sim{}2\%$) for this sample. Finally, we adjust the FP zero-point so that the cluster peculiar velocities have no net radial inflow or outflow with respect to the CMB frame. A bulk flow model provides the simplest parameterization of the peculiar velocity field. We minimize \begin{equation} \chi^2 = \sum_i \frac{(v_i - {\bf V} \cdot \hat{\bf r}_i)^2}{\epsilon^2_i} \label{eq:chi} \end{equation} where $v_i$ is the CMB frame peculiar velocity of cluster $i$ with direction vector $\hat{\bf r}_i$, and ${\bf V}$ is the bulk flow.
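The $\chi^2$ statistic above is linear in the components of ${\bf V}$, so the minimization reduces to a weighted $3\times3$ least-squares problem with an analytic solution. A minimal numpy sketch of such a fit (the function name and inputs are ours, not the actual SMAC pipeline):

```python
import numpy as np

def fit_bulk_flow(v, rhat, eps):
    """Least-squares bulk flow V minimizing chi^2 = sum (v_i - V.r_i)^2/eps_i^2.

    v    : (N,) CMB-frame radial peculiar velocities [km/s]
    rhat : (N,3) unit direction vectors to the clusters
    eps  : (N,) total errors [km/s]
    Returns the best-fit 3-vector V and its 3x3 error covariance.
    """
    w = 1.0 / eps**2
    # Normal equations: A V = b, with A = sum w r r^T and b = sum w v r
    A = np.einsum('i,ij,ik->jk', w, rhat, rhat)
    b = np.einsum('i,i,ij->j', w, v, rhat)
    V = np.linalg.solve(A, b)
    cov = np.linalg.inv(A)     # error covariance; its eigenvectors give the error ellipsoid
    return V, cov
```

The inverse of the normal matrix directly yields the (generally triaxial) error ellipsoid of the flow vector, since incomplete sky coverage makes the three components unequally constrained.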
The total error, $\epsilon_i$, is the quadrature sum of the cluster distance error (in the range 100--1200\,\unit{km~s\mone}), the error in the mean cluster $cz$ (typically 150\,\unit{km~s\mone}) and a ``thermal component'' (set to 250 \unit{km~s\mone}) which allows for small scale fluctuations around the mean bulk flow. Given the error-weighting in \eqref{chi}, the weighted mean depth of the sample is $\sim{}6700 \unit{km~s\mone}$. We have tested the bulk flow recovery with Monte Carlo simulations in which galaxies are assigned peculiar velocities consistent with an input bulk flow plus random errors. Because our survey has good sky-coverage, we find that our bulk flow fit is not affected by ``geometry bias'', i.e.\ there is little covariance between the monopole and dipole (bulk flow) terms. \section{Results} The principal result of this {\em Letter\/} is that the SMAC sample exhibits a CMB-frame bulk flow of $V = 630\unit{km~s\mone}$ (error-bias corrected, see LP), towards \lb{260\pm15}{-1\pm12}. The peculiar velocities of the sample clusters are shown, as a function of angle from the apex direction, in \figref{cos}. The linear trend seen here is a clear signature of a bulk streaming motion. The random error in the bulk flow due to distance and velocity errors is 180 \unit{km~s\mone}. A further uncertainty in the bulk flow arises from uncertainties in the corrections to $\sigma$ for different runs. We estimate this uncertainty by generating bootstrap realizations of the corrections and analyzing the bootstrap-corrected datasets in the same way as the real data. Through this procedure, we find that this source of systematic error contributes an uncertainty of 90 \unit{km~s\mone}\ to the measured bulk flow. The total velocity error in the direction of the flow is 200 \unit{km~s\mone}. 
The error ellipsoid is triaxial: the direction of smallest error ($\pm100 \unit{km~s\mone}$) is along \lb{314}{49}, and the direction of largest error ($\pm 214 \unit{km~s\mone}$) is along \lb{237}{-11}, which is within 24\hbox{$^\circ$}\ of our bulk flow. Allowing for the three degrees of freedom in the bulk flow vector we find that our sample is inconsistent with being at rest in the CMB frame at the 99.9$\%$ confidence level. A 95$\%$ lower limit on the bulk motion is 400 \unit{km~s\mone}. If we divide the sample by distance into two parts with equal errors, the outer shell of the SMAC sample gives a larger amplitude flow (by 240$\pm$360\,\unit{km~s\mone}) but is consistent, within the errors, with that found from the inner sphere. Both inner and outer samples independently yield a bulk flow with a significance $\ga 98\%$. \vbox{% \begin{center} \leavevmode \hbox{% \epsfxsize=8.9cm \epsffile{fig1.eps}} \begin{small} \figcaption{% Peculiar velocities as a function of angle from the apex of the bulk flow. Symbol sizes are inversely proportional to errors. The solid line is the best-fitting bulk flow model, and clusters which deviate from this flow at the $2\sigma$ level are indicated, with error bars. \label{fig:cos}} \end{small} \end{center}} We have explored a wide range of potential systematic effects. By excluding each cluster from the sample in turn, we find that no individual cluster is responsible for more than 50 \unit{km~s\mone}\ of the total bulk motion. No single supercluster structure dominates the flow. We find a similar result excluding individual spectroscopic runs. Furthermore, because of the balanced sky coverage, the bulk flow is insensitive to the adopted zero-point of the FP relation. A 1\% change in the zero-point (corresponding to its $1\sigma$ uncertainty) alters the bulk flow by only $\sim{}25 \unit{km~s\mone}$. 
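The 99.9\% confidence statement above corresponds to a $\chi^2$ test with three degrees of freedom, $\chi^2 = {\bf V}^{T} C^{-1} {\bf V}$, where $C$ is the error covariance of the flow vector. A hedged sketch (the closed form of the 3-d.o.f.\ $\chi^2$ CDF avoids external dependencies; the diagonal covariance in the test is illustrative, not the actual SMAC error ellipsoid):

```python
import math
import numpy as np

def rest_frame_pvalue(V, cov):
    """Probability of measuring a bulk flow at least this large by chance,
    if the sample were truly at rest in the CMB frame (chi^2, 3 d.o.f.).

    V   : (3,) measured bulk flow [km/s]
    cov : (3,3) error covariance of V [km^2/s^2]
    """
    chi2 = float(V @ np.linalg.solve(cov, V))
    # Closed-form CDF of chi^2 with k=3: erf(sqrt(x/2)) - sqrt(2x/pi) exp(-x/2)
    cdf = (math.erf(math.sqrt(chi2 / 2.0))
           - math.sqrt(2.0 * chi2 / math.pi) * math.exp(-chi2 / 2.0))
    return 1.0 - cdf
```

A p-value below $10^{-3}$ then corresponds to the quoted 99.9\% rejection of the at-rest hypothesis.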
If, instead of SFD extinction corrections, we use those of Burstein \& Heiles (1982), the recovered bulk flow drops by $\sim$ 100\,\unit{km~s\mone}. In order to assess the effect of Malmquist bias and redshift cuts, we have performed a simultaneous FP and bulk flow fit (``Method II'' in the terminology of Strauss \& Willick 1995); we recover a bulk flow which is identical within the errors ($\sim$ 40\,\unit{km~s\mone}\ change). The effect of a possible FP dependence on morphological type is negligible: E and S0 subsamples of the data give consistent results (they differ by $140\pm330$\,\unit{km~s\mone}). Finally, we have examined the possible effect of cluster-to-cluster stellar population differences, using the \mbox{Mg$_2$}\ index as an age/metallicity indicator. We find that our clusters are consistent with following a universal {\mg-- \sig}\ relation. Furthermore, we find no correlation between peculiar velocity and the (non-significant) offsets of clusters from the {\mg-- \sig}\ relation. \vbox{% \begin{center} \leavevmode \hbox{% \epsfxsize=8.9cm \epsffile{fig2.eps}} \begin{small} \figcaption{% Peculiar velocities projected onto a plane between SGX=0 and SGZ=0. The negative half of the horizontal axis points towards \lb{272}{-3}, which is only $\sim{}13\hbox{$^\circ$}$ from the direction of the SMAC bulk flow. We plot only the 43 clusters within 45$^\circ$ of the plane. The shaded region indicates the Zone of Avoidance ($|b| < 15$). The distance to each cluster is indicated by the circle and the CMB frame redshift is indicated by the tip of the vector. Outflowing clusters have filled circles and solid lines; inflowing ones have open circles and dotted lines. Circle size is inversely proportional to the error in the peculiar velocity. Important locations are indicated: the Local Group (LG); the Great Attractor (GA); the Shapley Concentration (SC) and the Horologium-Reticulum (HR) supercluster. 
\label{fig:plane}} \end{small} \end{center}} \section{Discussion} The motion of the Local Group (LG) and the bulk motion of the SMAC sample both lie in the plane which is at a 45\hbox{$^\circ$} angle from the SGX=0 and SGZ=0 Supergalactic planes. The CMB-frame peculiar velocities of SMAC clusters within $\pm45\hbox{$^\circ$}$ of this plane are shown in \figref{plane}. This figure shows the continued outflow beyond the Great Attractor (``GA'', Lynden-Bell et al.\ 1988) both above and below the Galactic Plane, indicating that the local motions are not generated wholly by the GA. On the opposite side of the sky the data, though sparser, exhibit an inflow of similar magnitude, suggesting a bulk flow with very large coherence length. \begin{figure*} \figurenum{3} \plotone{fig3.eps} \caption{% Peculiar velocities and bulk flow directions in Galactic coordinates (Aitoff projection). Clusters in the SMAC survey are indicated by open circles (inflowing) or asterisks (outflowing), with symbol size proportional to peculiar velocity. The direction of the SMAC bulk flow is indicated by the large ``S'', while the dots show 1000 directions drawn from the bulk flow error ellipsoid. Important directions are indicated: the motion of the Local Group (LG); the motion of the Lynden-Bell et al. (1988) sample (7S); the bulk flow of LP; the motion of the Riess et al.\ (1997) SNIa sample (R); the Shapley Concentration (SC); the Horologium-Reticulum supercluster (HR); and the predicted motion of the LG from X-ray clusters (X) (Plionis \& Kolokotronis 1998). The bulk flow of Dale et al.\ (1998) is small and consistent with zero. Consequently, its direction is not meaningful and has not been plotted here. \label{fig:aitoff} } \end{figure*} The direction of the SMAC bulk flow in Galactic coordinates is shown in \figref{aitoff}. 
Also shown are the positions of the two most prominent concentrations of clusters within 300 \mm{h\mone}\mpc\ (Tully et al.\ 1992), namely the Shapley Concentration (Scaramella et al.\ 1989; Raychaudhury 1989) and the Horologium-Reticulum supercluster (Lucey et al.\ 1983). The direction of the SMAC bulk motion is roughly between these two concentrations. Indeed, there is some evidence in our sample of inflow towards the Shapley Concentration. The data are too sparse in the foreground of Horologium-Reticulum to measure any infall. The direction of our flow agrees very well ($< 16$\hbox{$^\circ$}) with the direction of LG motion predicted from the density field of X-ray/Abell clusters (Plionis \& Kolokotronis 1998). The IRAS PSCz galaxy redshift survey is of sufficient depth to yield accurate predicted peculiar velocities (Branchini et al.\ 1998) for our clusters. A preliminary comparison indicates excellent directional agreement between the PSCz-predicted and the observed bulk flow of SMAC clusters. How does our bulk flow compare to results from other surveys? One might naively estimate the level of agreement between two surveys from a $\chi^2$ statistic derived from the difference between the two bulk flow vectors and the sum of their observational-error covariance matrices. This procedure is incorrect, however, because it does not allow for the fact that different surveys probe different volumes and regions on the sky. As stressed by Watkins \& Feldman (1995), the incomplete cancellation of small-scale flows internal to a volume can have a significant effect on the total measured bulk flow. The significance of disagreements between flow vectors is always {\em overestimated} if these effects are neglected. For example, according to the naive comparison described above, our result is in conflict with the bulk flow vector of LP and of Dale et al.\ (1998).
Our flow is of similar amplitude to that found by LP, but the direction is $\sim{}90\hbox{$^\circ$}$ from the apex of the LP dipole, so at face value the agreement is poor. Dale et al.\ find a bulk flow of $\la 200 \unit{km~s\mone}$ for their deep Tully--Fisher cluster sample, in contrast to our result% \footnote{It is worth noting, however, that our error-weighted bulk flow is within the $2\sigma$ error ellipse of Dale et al.'s volume-weighted solution, although not of their error-weighted solution.}. A detailed analysis, allowing for the effects of survey geometries and internal flows, is required in order to determine whether the apparent disagreements with LP and Dale et al.\ are in fact significant. Such an analysis is in progress. In contrast to the two surveys just discussed, our flow agrees very well in amplitude and direction with that of the similarly deep (9000 -- 13000 \unit{km~s\mone}) Tully--Fisher sample of Willick (1998, also private communication). The SMAC bulk flow is not inconsistent with preliminary results from the EFAR project (Saglia et al. 1998). Finally, we have performed a bulk flow fit to the SNIa velocity data of Riess et al.\ (1997), which yields a flow of $320\pm160$\,\unit{km~s\mone}\ (error-bias corrected) directed towards \lb{282}{-8}. This direction is only $\sim{}20^\circ$ from the SMAC flow apex. Our result can be compared to expectations based on popular families of cosmological models. The bulk flow amplitude measured by a typical (i.e. randomly-placed) observer depends on the power spectrum of mass density fluctuations and the geometry of the survey. Following Kaiser (1988), we have calculated the $k$-space window function for the bulk flow given the SMAC survey geometry. We find that the SMAC bulk flow probes the mass power spectrum on scales larger than $\lambda \sim{}60 \mm{h\mone}\mpc$. 
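In linear theory, the expected variance of the bulk flow is an integral of the mass power spectrum against the squared survey window (Kaiser 1988). A toy sketch for an idealized top-hat sphere rather than the actual SMAC window, with an arbitrary illustrative $P(k)$ supplied by the caller:

```python
import numpy as np

def bulk_flow_rms(pk, R, Omega=1.0, H0=100.0, kmin=1e-4, kmax=10.0, n=4000):
    """rms linear-theory bulk flow (km/s) in a top-hat sphere of radius R
    (h^-1 Mpc): <v^2> = (H0 f)^2 / (2 pi^2) * int dk P(k) W^2(kR).

    pk : callable, P(k) in (h^-1 Mpc)^3 for k in h/Mpc.
    With H0 = 100 h km/s/Mpc, the result is independent of h.
    """
    f = Omega**0.6                                   # linear growth rate
    k = np.logspace(np.log10(kmin), np.log10(kmax), n)
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3     # Fourier top-hat window
    integrand = pk(k) * W**2
    # trapezoidal integration over the log-spaced grid
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))
    return H0 * f * np.sqrt(integral / (2.0 * np.pi**2))
```

Because the window suppresses all power with $k \ga 1/R$, a high-amplitude flow measured at an effective depth of $\sim{}60 \mm{h\mone}\mpc$ directly constrains the power spectrum on scales larger than this.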
If we fix the power spectrum on the largest scales using COBE, then the high amplitude of the SMAC bulk flow implies substantial power on intermediate scales. For variants of the cold dark matter (CDM) cosmology with either a cosmological constant, neutrinos, or a tilt ($n \neq 1$) in the initial primordial spectrum, the choice of free parameters fixes the power spectrum on sub-COBE scales. For the three COBE-normalized CDM variants, we require choices of the parameters such that the rms fluctuation in mass density contrast in an 8 \mm{h\mone}\mpc\ sphere is $\sigma_8\,\Omega^{0.6} \ga 0.87$, 0.80 or 0.85, respectively, at the 90\% confidence level. This is in conflict with the value $\sigma_8\Omega^{0.5} \sim{}0.55$ inferred from the abundance of rich clusters (Viana \& Liddle 1996; Eke et al.\ 1996; Pen 1998), but is in better agreement with the value $\sigma_8 \Omega^{0.6} \sim{}0.8$ inferred from other peculiar velocity surveys (Kolatt \& Dekel 1997; Zaroubi et al.\ 1997). Alternatively, if the power spectrum is to fit {\em both\/} the abundance of rich clusters {\em and\/} the COBE fluctuations, then it must be considerably more ``peaked'' between these scales, relative to the CDM variants. \section{Summary} We have recently completed a FP survey of 699 early-type galaxies in 56 clusters within $\sim{}12000 \unit{km~s\mone}$. For this sample, we find a large-scale bulk flow of amplitude $630\pm200\unit{km~s\mone}$ towards \lb{260\pm15}{-1\pm12}. Our result is robust against the effects of individual clusters and data subsets, the choice of Galactic extinction maps, Malmquist bias and stellar population effects. This result suggests that the mass fluctuation power spectrum has a high amplitude on scales $\lambda \sim{}60 - 600 \mm{h\mone}\mpc$. This regime, near the peak of the power spectrum, is at present poorly constrained by other methods.
On these scales, CMB anisotropies are dominated by acoustic oscillations and depend mainly on the baryon content, whereas current galaxy redshift surveys are insufficiently deep, and are subject to the (unknown) relationship of galaxies to the underlying mass density fluctuations. Further analyses, currently underway, will (i) determine the consistency of our results with those of other surveys, allowing for the different sample geometries; (ii) compare measured velocities with predictions of the IRAS PSCz redshift survey; and (iii) constrain directly the mass power spectrum, by means of the dipole and higher moments of the velocity field. \acknowledgments We thank the staff of the Isaac Newton Group of telescopes at the Observatorio del Roque de los Muchachos, the Anglo-Australian Observatory and the Cerro Tololo Inter-American Observatory, where these observations were conducted.
\section{Introduction}\label{sec:1} \begin{figure*}[!hbt] \centering \includegraphics[width = 1.5\columnwidth]{g_over_omega} \caption{\label{fig:g-w_r} (Color online) Evolution in cavity QED of the highest value of $g/\omega$, with $\omega$ the cavity frequency, as function of time for different physical platforms. The dotted lines at $g/\omega\simeq0.1$ and $g\simeq\omega$ mark the beginning of the USC and DSC regimes, respectively. References for the data, in chronological order: Atoms in optical cavities: \cite{thompson1992}, \cite{TurchettePRL95}, \cite{hood1998}, \cite{colombe2007} \cite{thompson2013}, \cite{Tiecke2014}; Atoms in microwave cavities: \cite{Brune1994}, \cite{Brune1996}, \cite{Maitre1997}, \cite{Brune2008}; Superconducting qubits: \cite{wallraff2004}, \cite{chiorescu2004}, \cite{johansson2006}, \cite{niemczyk2010}, \cite{forn-diaz2010}, \cite{baust2016}, \cite{yoshihara2017a}; Quantum dots: \cite{Reithmaier2004}, \cite{Reinhard2012}, \cite{Takamiya2013}, \cite{Kelaita2017}, \cite{Mi2017}, \cite{Stockklauser2017}; Intersubband polaritons: \cite{DupontetAl03PRB}, \cite{DupontetAl07PRB}, \cite{TodorovetAl10PRL}, \cite{DelteiletAl12PRL}, \cite{AskenazietAl14NJP}; Cyclotron resonance \cite{MuravevetAl11PRB}, \cite{ScalarietAl12Science}, \cite{MaissenetAl14PRB}, \cite{BayeretAl17NL}.} \end{figure*} The Rabi model~\cite{PhysRev.49.324} arguably describes the simplest class of light-matter interactions, namely, the dipolar coupling between a two-level quantum system (qubit) and a classical radiation field mode. This semiclassical model has a fully quantum counterpart, where the electromagnetic radiation is specified by a single-mode quantum field, yielding the so-called quantum Rabi model (QRM)~\cite{Braak2011}. The QRM accurately describes the dynamical and static properties of a wide variety of physical systems, in settings ranging from quantum optics to the solid state.
Moreover, a variety of protocols in modern quantum information theory~\cite{nielsen_chuang} employ the QRM as a fundamental building block, with plausible applications in quantum technologies, including, e.g., ultrafast quantum gates~\cite{PhysRevLett.108.120501}, quantum error correction~\cite{PhysRevB.91.064503}, and remote entanglement generation~\cite{PhysRevLett.113.093602}. The QRM is therefore of central importance in both applied and theoretical physics. Standard cavity quantum electrodynamics (cavity QED) experiments are usually constrained to light-matter coupling strengths orders of magnitude smaller than the natural frequencies of the noninteracting parts. Therefore, they take place in the realm of the well-known Jaynes-Cummings (JC) model~\cite{JaynesCummings63IEEE}, which can be obtained by applying the rotating-wave approximation (RWA) to the QRM~\cite{Braak2011}. In this context, attaining the so-called strong coupling (SC) regime, where the coupling strength is larger than all decoherence rates, boosted the field of cavity QED for several decades~\cite{raimond2001}. Thus, the JC model has represented a theoretical and experimental milestone in the history of light-matter interactions and quantum optics. During the past decade, a novel coupling regime of the QRM has been theoretically investigated in which the coupling strength is a sizable fraction of the natural frequencies of the noninteracting parts~\cite{CiutietAl05PRB, LiberatoetAl07PRL, bourassa,Beaudoin2011,daniel_prx,Pedernales15,TodorovetAl10PRL}, and experimentally achieved in several quantum systems~\cite{TodorovetAl10PRL,forn-diaz2010,niemczyk2010,faist,AnapparaetAl09PRB,GunteretAl09Nature,MuravevetAl11PRB,schwartz2011,GeiseretAl12PRL,PhysRevApplied.2.054002,ZhangetAl16NP,chen2016,braumuller2016,KihwanKimRabi2017, LietAl18NP}.
In this ultrastrong coupling (USC) regime, the RWA is no longer valid, and the counter-rotating terms produce novel, unexpected physical phenomena \cite{CiutietAl05PRB} as well as applications in quantum information~\cite{PhysRevLett.108.120501,PhysRevB.91.064503,PhysRevLett.113.093602}. In the regime in which the counter-rotating terms can still be treated with perturbation theory~\cite{TodorovetAl10PRL,forn-diaz2010,niemczyk2010,faist,AnapparaetAl09PRB,GunteretAl09Nature,MuravevetAl11PRB,schwartz2011,GeiseretAl12PRL,PhysRevApplied.2.054002,ZhangetAl16NP,chen2016}, the QRM can be described by the Bloch-Siegert (BS) Hamiltonian~\cite{Cohen1973, Beaudoin2011, klimov2009}. On the other hand, some experiments have recently reached the non-perturbative USC regime~\cite{MaissenetAl14PRB,forn-diaz2017,yoshihara2017a}, where the coupling strength exceeds the natural frequencies of the noninteracting parts and the full-fledged QRM has to be considered. Under these conditions, a new regime of light-matter interaction emerges, with physics markedly different from that of the USC regime. In this deep strong coupling (DSC) regime~\cite{casanova2010}, an approximate solution can reasonably describe some aspects of the QRM. In fact, the DSC regime has recently been achieved experimentally with a superconducting circuit~\cite{yoshihara2017a} and in a two-dimensional electron gas coupled to terahertz metamaterial resonators~\cite{BayeretAl17NL}. Figure~\ref{fig:g-w_r} presents the evolution over time of the highest reported coupling strength $g$ normalized to the mode frequency, $g/\omega$, in all fields exploring light-matter interactions. Experimental ultrastrong couplings are clearly a development of the past decade, mostly as a consequence of the interdisciplinary influence each area has had on the others. Figure~\ref{fig:U} shows the evolution over time of the parameter $U$, which we propose as a novel figure of merit.
It corresponds to the geometric mean between the reduced coupling $g/\omega$ and the cooperativity factor used in atomic systems, $C=4g^2/\kappa\gamma$, with $\kappa$ and $\gamma$ representing the cavity and atomic losses, respectively. $U$ is therefore a measure of coherence in ultrastrongly coupled systems and, when its value approaches or exceeds unity, it is possible to access the exotic physics of the USC regime. \begin{figure*}[!hbt] \centering \includegraphics[width = 1.5\columnwidth]{Ultracoherence} \caption{\label{fig:U} (Color online) Evolution over time in cavity QED of the highest value of the parameter $U=(Cg/\omega)^{1/2}$ for different physical platforms, from the same experimental points as in Fig.~\ref{fig:g-w_r}. $C=4g^2/\kappa\gamma$ is the cooperativity, with $\kappa$ and $\gamma$ being the cavity and qubit loss rates, respectively. $U$ is an indicator of combined coupling strength and quantum coherence. References in addition to those in Fig.~\ref{fig:g-w_r}: Quantum dots: \cite{Srinivasan2007}, \cite{Faraon2008}; Cyclotron resonance:~\cite{ZhangetAl16NP}, \cite{LietAl18NP}.} \end{figure*} This review presents a general overview of the theoretical and experimental progress in the USC and DSC regimes of light-matter interaction. In the past decade, experimental access to increasingly larger light-matter coupling strengths in different fields has brought the USC and DSC regimes to the frontiers of quantum optics, both from a theoretical and from an experimental point of view. Moreover, beyond the fundamental interest, it is becoming natural to consider the impact of USC regimes in the context of the emerging interdisciplinary aspects of quantum technologies. The physics of the USC regimes is currently a very active research field that is in constant transformation and evolution.
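As a numerical illustration of this figure of merit, the sketch below (with hypothetical, order-of-magnitude circuit-QED parameters; all rates in the same units) evaluates $U$ from its definition and checks the equivalent closed form $U = 2g^{3/2}/\sqrt{\kappa\gamma\omega}$:

```python
import numpy as np

def ultracoherence_U(g, omega, kappa, gamma):
    """U = sqrt(C * g / omega), with cooperativity C = 4 g^2 / (kappa * gamma).

    All arguments must be expressed in the same frequency units."""
    C = 4 * g**2 / (kappa * gamma)
    return np.sqrt(C * g / omega)

# Hypothetical parameters (MHz): g = 500, resonator at 6 GHz, 1 MHz loss rates
u = ultracoherence_U(g=500.0, omega=6000.0, kappa=1.0, gamma=1.0)
print(u)                                   # well above unity
print(2 * 500.0**1.5 / np.sqrt(6000.0))    # equivalent closed form, same value
```

Since $U \propto g^{3/2}$, increasing the coupling pays off faster than reducing losses, which is consistent with the rapid growth of $U$ visible in Fig.~\ref{fig:U}.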
In particular, new lines of exploration of USC involving a continuum of modes have already begun~\cite{forn-diaz2017, puertas2018, forn-diaz2018}, enabling the exploration of condensed matter models of relevant interest. Additionally, recent work on the two-photon quantum Rabi model~\cite{Felicetti2018} represents a playground for novel physics in nonlinear quantum optics. It is worth mentioning that in this review we cover neither open quantum systems nor multi-photon quantum Rabi models, nor the impressive developments in the QRM from a mathematical physics perspective~\cite{Chen2012,Zhong2013,Wakayama2013,Maciejewski2014,Braak2011,Braak2016}. However, we have tried to provide a connection to these growing areas of high theoretical and experimental interest. The USC regimes of light-matter interaction will keep on expanding at the frontier of quantum optics and quantum physics. We envision that all topics related to USC physics will remain a prominent field in the foreseeable future. The contents of this review can be summarized as follows. Section~\ref{sec:2} presents an overview of the different light-matter interaction models. We follow a historical approach along the lines of cavity QED and the recent progress in the theory and experiments related to the USC regimes. Section~\ref{sec:3} reviews the most relevant experiments that have unveiled the physics related to the USC and DSC regimes. In Sec.~\ref{sec:4}, the quantum simulations of USC regimes are reviewed from a theoretical perspective. Section~\ref{sec:5} reviews a variety of potential applications of USC regimes, from the point of view of quantum optics and quantum computation. Finally, Sec.~\ref{sec:6} presents our conclusions and outlook. \section{The quantum Rabi model}\label{sec:2} The Rabi model~\cite{PhysRev.49.324} was introduced by Isidor Rabi in 1936 to describe the semiclassical coupling of a two-level atom with a classical monochromatic electromagnetic wave.
In its fully quantized version, the model is given by the Hamiltonian \begin{equation} \label{QRH} \hat{\cal H}_R=\hbar (\Omega/2) \hat{\sigma}_{z} + \hbar \omega \hat{a}^{\dagger} \hat{a} + \hbar g \hat{\sigma}_{x} \left( \hat{a} + \hat{a}^{\dagger} \right) , \end{equation} which is nowadays known as the quantum Rabi model. Equation~(\ref{QRH}) describes the dipolar coupling between a two-level atom and a quantized electromagnetic field mode. This Hamiltonian appropriately describes a plethora of quantum systems, several of which are laid out in Sec.~\ref{sec:3}. Here, $\Omega$ and $\omega$ are the frequencies of the atomic transition and the field mode, respectively, and $g$ is the light-matter coupling strength. $\hat{\sigma}_{x,z}$ are Pauli matrices describing the atomic spin, while $\hat{a}$ and $\hat{a}^{\dagger}$ are the annihilation and creation operators of the bosonic field mode, respectively. In atomic systems, the achievable ratio $g/\omega$ between the coupling strength and the bosonic field mode frequency is orders of magnitude lower than unity. Therefore, the QRM has been historically considered for these systems in the so-called Jaynes-Cummings (JC) regime~\cite{JaynesCummings63IEEE}, where one performs the rotating-wave approximation and neglects the counter-rotating terms, which contribute weakly to the dynamics when $g/\omega\ll 1$, \begin{equation} \label{eq:JC} \hat{\cal H}_{\rm JC}=\hbar (\Omega/2) \hat{\sigma}_{z} + \hbar \omega \hat{a}^{\dagger} \hat{a} + \hbar g \left( \hat{\sigma}_{+} \hat{a} + \hat{\sigma}_{-}\hat{a}^{\dagger} \right). \end{equation} Here, $\hat{\sigma}_{+}$ and $\hat{\sigma}_{-}$ are the raising and lowering atomic operators, respectively. The JC model described by the Hamiltonian $\hat{\cal H}_{\rm JC}$ is analytically solvable in a straightforward way in terms of JC doublets, and has been a cornerstone of quantum optics in the past 50 years.
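The difference between Eqs.~(\ref{QRH}) and (\ref{eq:JC}) can be made concrete numerically. The following sketch (Python/NumPy, $\hbar=1$, truncated Fock basis; parameters are purely illustrative) diagonalizes both Hamiltonians: at $g/\omega=0.01$ the counter-rotating terms shift the ground state negligibly, while at $g/\omega=0.5$ the JC ground state $|g,0\rangle$ (energy $-\Omega/2$) clearly misses the counter-rotating physics:

```python
import numpy as np

def hamiltonians(Omega, omega, g, n_max=40):
    """QRM and JC Hamiltonians (hbar = 1) in a Fock basis truncated at n_max."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)        # annihilation operator
    sz = np.diag([1.0, -1.0])                             # basis order (|e>, |g>)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])               # sigma_+
    H0 = 0.5 * Omega * np.kron(sz, np.eye(n_max)) + omega * np.kron(np.eye(2), a.T @ a)
    H_rabi = H0 + g * np.kron(sx, a + a.T)                # full quantum Rabi model
    H_jc = H0 + g * (np.kron(sp, a) + np.kron(sp.T, a.T)) # rotating-wave approximation
    return H_rabi, H_jc

# Weak coupling, on resonance: the RWA is excellent
Hr, Hjc = hamiltonians(Omega=1.0, omega=1.0, g=0.01)
dE_weak = abs(np.linalg.eigvalsh(Hr)[0] - np.linalg.eigvalsh(Hjc)[0])

# USC, on resonance: the counter-rotating terms lower the ground-state energy
Hr, Hjc = hamiltonians(Omega=1.0, omega=1.0, g=0.5)
E0_rabi, E0_jc = np.linalg.eigvalsh(Hr)[0], np.linalg.eigvalsh(Hjc)[0]
print(dE_weak, E0_rabi, E0_jc)
```

The JC ground-state energy stays pinned at $-\Omega/2$ for any $g$, since $|g,0\rangle$ is decoupled under the RWA, whereas the QRM ground state is dressed by virtual excitations.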
This model has had widespread use in a variety of physical platforms, ranging from neutral atoms in optical and microwave cavities and trapped ions with quantized motion to superconducting qubits coupled to electromagnetic cavities, transmission line resonators, and nanomechanical resonators. Recent implementations of small-scale quantum processors use the physics of Eq.~(\ref{eq:JC}) as the basis for the coherent quantum control of coupled quantum systems \cite{Corcoles2015}. When there is a nonzero detuning $\delta\equiv\Omega-\omega$ between the frequencies of the atom and the field mode, a Schrieffer-Wolff transformation can be applied to Eq.~(\ref{eq:JC}), provided the dispersive condition $g/\delta\ll1$ is satisfied, to obtain, up to second order in $g/\delta$ \cite{blais2004}, \begin{equation}\label{eq:AC} \mathcal{\hat{H}}_{\rm AC}/\hbar = \frac{1}{2}\left[\Omega+\frac{g^2}{\delta}\right]\hat{\sigma}_z + \left[\omega + \frac{g^2}{\delta}\hat{\sigma}_z\right]\hat{a}^{\dag}\hat{a}. \end{equation} Equation~(\ref{eq:AC}) is also known as the AC-Stark Hamiltonian. The atom-photon interaction is manifested in the non-radiative energy shifts that the atom and the field mode exert on each other. Detection of the field frequency yields a nearly quantum-nondemolition measurement of the qubit state. This property is widely exploited in quantum computing approaches, particularly with superconducting qubits \cite{Reed2010}. However, in the past decade, two novel regimes of light-matter interaction have emerged, namely, the USC regime, where $0.1\leq g/\omega< 1$, and the DSC regime, where $g/\omega> 1$. These new regimes exhibit a variety of phenomena that are not observable at lower light-matter coupling strengths. In addition, one may take advantage of such new phenomena for quantum information applications, as will be shown in Sec.~\ref{sec:5}.
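A quick numerical check of Eq.~(\ref{eq:AC}) (Python, $\hbar=1$, hypothetical dispersive parameters): using the exact JC dressed energies $E_{\pm}(N)=(N-\tfrac{1}{2})\omega\pm\sqrt{\delta^2/4+g^2N}$ of the $N$-excitation manifold, the dressed qubit transition frequency with $n$ cavity photons reproduces the AC-Stark prediction $\Omega+\chi(2n+1)$, with $\chi=g^2/\delta$:

```python
import numpy as np

# Hypothetical dispersive parameters (frequency units, hbar = 1)
Omega, omega, g = 6.0, 5.0, 0.1
delta = Omega - omega          # qubit-cavity detuning, g/delta = 0.1 << 1
chi = g**2 / delta             # dispersive (AC-Stark) shift

def E_plus(N):                 # upper dressed state of the N-excitation manifold
    return (N - 0.5) * omega + np.sqrt(delta**2 / 4 + g**2 * N)

def E_minus(N):                # lower dressed state; N = 0 is the bare ground state
    return -Omega / 2 if N == 0 else (N - 0.5) * omega - np.sqrt(delta**2 / 4 + g**2 * N)

def qubit_freq(n):
    """Exact dressed qubit transition frequency with n cavity photons (delta > 0)."""
    return E_plus(n + 1) - E_minus(n)

for n in range(3):             # compare to the AC-Stark prediction Omega + chi*(2n + 1)
    print(n, qubit_freq(n), Omega + chi * (2 * n + 1))
```

The photon-number splitting of $2\chi$ per photon is exactly the effect exploited for quantum-nondemolition qubit readout mentioned above.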
Figure~\ref{fig:class} displays the classification of different coupling regimes of the QRM~\cite{Rossatto2017} as a function of $g/\omega$ and for increasing energy eigenstates of Eq.~(\ref{QRH}). \begin{figure}[!hbt] \centering \includegraphics[width = \columnwidth]{medium} \caption{\label{fig:class} Classification of the different coupling regimes of the quantum Rabi model (QRM). $g_0$ in the figure corresponds to $g$ as defined in the main text. The leftmost region at lowest couplings stands for the perturbative ultrastrong coupling (pUSC), which includes the Bloch-Siegert Hamiltonian regime. For the lowest-energy eigenstates it extends up to $g/\omega\sim0.4$. The intermediate region symbolizes the non-perturbative ultrastrong/deep strong coupling (npUSC/npDSC) regime. The color gradient around the boundaries symbolizes the lack of an abrupt transition in the physical properties of the QRM. The rightmost area is the perturbative deep strong coupling regime (pDSC), where the qubit becomes a perturbation to the system~\cite{Rossatto2017}.} \end{figure} The USC regime $0.1\leq g/\omega< 1$ can be divided into a perturbative region $0.1 \lesssim g/\omega \lesssim 0.3$ and a non-perturbative region $0.3 \lesssim g/\omega \lesssim 1$ \cite{Rossatto2017}. The perturbative region consists of a deviation from the JC model that admits an analytical treatment by considering the counter-rotating terms $\hat{a}^{\dag}\hat{\sigma}_+$ and $\hat{a}\hat{\sigma}_-$ as an off-resonant driving field.
Applying perturbation theory to the quantum Rabi Hamiltonian [Eq.~(\ref{QRH})], up to second order in the perturbative parameter $\lambda\equiv g/(\Omega+\omega)$, yields the following Hamiltonian \cite{klimov2009}, \begin{multline}\label{eq:BS} H_{\rm BS}/\hbar = \frac{1}{2}\left(\Omega+\omega_{\rm BS}\right) \hat{\sigma}_{z} + \left(\omega + \omega_{\rm BS}\hat{\sigma}_z\right) \hat{a}^{\dagger} \hat{a} \\ -\frac{\omega_{\rm BS}}{2} + g(\hat{a}^{\dag}\hat{a})\hat{\sigma}_{-}\hat{a}^{\dagger} + \hat{\sigma}_{+} \hat{a}g(\hat{a}^{\dag}\hat{a}), \end{multline} where $\omega_{\rm BS}\equiv g^2/(\omega + \Omega)$ is the Bloch-Siegert shift. The coupling constant $g$ is renormalized to $g(\hat{a}^{\dag}\hat{a}) \equiv -g[1 - \hat{a}^{\dag}\hat{a}\omega_{\rm BS}/(\omega + \Omega)]$. The additional terms appearing in Eq.~(\ref{eq:BS}) compared to Eq.~(\ref{eq:JC}) are analogous to the AC-Stark Hamiltonian [cf. Eq.~(\ref{eq:AC})], arising from having treated the counter-rotating terms as an off-resonant driving field. Equation~(\ref{eq:BS}) is known as the Bloch-Siegert Hamiltonian, in analogy to the case of a strongly-driven single spin~\cite{bloch1940}. The nonperturbative region $0.3 \lesssim g/\omega \lesssim 1$ departs from the standard quantum optical treatment of light-matter interaction. In this region, one has to resort to the exact solution for arbitrary coupling~\cite{Braak2011}. In contrast to the approximations in Eqs.~(\ref{eq:AC}) and (\ref{eq:BS}), the energy eigenvalues are no longer given in closed form. However, the spectrum can be analyzed qualitatively, leading to the identification of quasi-exact crossing points~\cite{Judd1979,Kus1986} and avoided crossings (see Fig.~\ref{fig:class}). In the first work to coin the USC regime~\cite{CiutietAl05PRB}, it was found that the ground state of an ultrastrongly coupled system consists of a squeezed vacuum.
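The accuracy of the Bloch-Siegert description can be probed numerically. For the ground state $|g,0\rangle$, Eq.~(\ref{eq:BS}) predicts $E_0 \approx -\Omega/2-\omega_{\rm BS}$. The sketch below (Python, $\hbar=1$, resonant case, illustrative couplings) compares this prediction with exact diagonalization of Eq.~(\ref{QRH}); the error grows as the coupling leaves the perturbative USC region:

```python
import numpy as np

Omega, omega, n_max = 1.0, 1.0, 60             # resonant case, truncated Fock basis
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
H0 = 0.5 * Omega * np.kron(sz, np.eye(n_max)) + omega * np.kron(np.eye(2), a.T @ a)

errors = []
for g in (0.05, 0.1, 0.3):
    H = H0 + g * np.kron(sx, a + a.T)            # quantum Rabi Hamiltonian
    E0_exact = np.linalg.eigvalsh(H)[0]
    E0_bs = -Omega / 2 - g**2 / (Omega + omega)  # Bloch-Siegert prediction
    errors.append(abs(E0_exact - E0_bs))
    print(g, E0_exact, E0_bs)
```

At $g/\omega=0.05$ the Bloch-Siegert prediction is essentially exact, while at $g/\omega=0.3$ fourth-order corrections in $\lambda$ are already visible, consistent with the boundary of the perturbative region quoted above.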
The ordinary vacuum $|0\rangle$ of the zero- or weak-coupling regime satisfies $\hat{\sigma}_- |0\rangle = \hat{a}|0\rangle = 0$. In the USC regime, however, the ground state $|G\rangle$ is a squeezed state, which contains a finite number of cavity photons and a finite atomic population. Instead, $\hat{p} |G\rangle = 0$, where $\hat{p}$ is a linear combination of $\hat{\sigma}_+$, $\hat{a}^{\dagger}$, $\hat{\sigma}_-$, and $\hat{a}$. Further studies have looked into the possibility of releasing such a squeezed photon field by modulating different system parameters~\cite{CiutiCarusotto06PRA, LiberatoetAl07PRL, DeLiberato2009}. As shown in Fig.~\ref{fig:class}, the nonperturbative USC regime merges in a continuous manner with the nonperturbative DSC regime. On the other hand, the perturbative DSC regime represents the extreme coupling condition $g/\omega\gg1$. Here, the effective QRM Hamiltonian, in the spirit of spin-dependent forces, can be solved analytically while unitarily creating Schr\"odinger cat states. In an important step to unveil the physics of the DSC regime~\cite{casanova2010}, new light was shed on the structure of the QRM following an analysis based on the symmetries of Eq.~(\ref{QRH}). The quantum Rabi Hamiltonian possesses a discrete $\mathbb{Z}_{2}$ symmetry characterized by the parity operator $\hat{P}=\hat{\sigma}_{z} e^{i \pi \hat{a}^{\dagger}\hat{a}}$, whose eigenvalues are $\pm 1$~\cite{casanova2010,wolf2013}. Therefore, the total Hilbert space splits into two infinite-dimensional invariant chains labeled by the parity eigenvalues \begin{equation} \begin{split} &|g0\rangle \leftrightarrow |e1\rangle \leftrightarrow |g2\rangle \leftrightarrow |e3\rangle \leftrightarrow \cdots \left( p = -1 \right) , \\ &|e0\rangle \leftrightarrow |g1\rangle \leftrightarrow |e2\rangle \leftrightarrow |g3\rangle \leftrightarrow \cdots \left( p = +1 \right) .
\end{split} \end{equation} The quantum Rabi Hamiltonian can be rewritten using the parity operator $\hat{P}$ and a composite bosonic mode $\hat{b} \equiv \hat{\sigma}_{x}\hat{a}$, as \begin{equation} \hat{\cal H}_{R}=\hbar \omega \hat{b}^{\dagger} \hat{b} + \hbar g \left( \hat{b} + \hat{b}^{\dagger} \right) - \hbar(\Omega/2) \left( -1 \right)^{\hat{b}^{\dagger}\hat{b}} \hat{P}. \end{equation} In the slow qubit limit $\Omega \to 0$, $\hat{\cal H}_{R} \to \left[ \hbar \omega \left( \hat{b}^{\dagger} + g/\omega \right) \left( \hat{b} + g/\omega \right) - \hbar g^{2}/\omega \right]$, which corresponds to a simple harmonic oscillator displaced by the ratio $g/\omega$ of the coupling strength to the cavity frequency. \begin{figure}[!hbt] \centering \includegraphics[width = 8.5cm]{jorge_Dsc} \caption{\label{fig:jorge_Dsc} (Color online) Dynamics of the deep strong coupling (DSC) regime. (a) Photon statistics at different times of the evolution for $\Omega = 0.5 \omega$. When the qubit frequency $\Omega \neq 0$, the photon number wave packet suffers self-interference and is distorted; (b) comparison of revival probability of the initial state $P_{+0_{b}}\left( t \right) = |\langle g , 0_{a} |\psi \left( t \right) \rangle | $ calculated for $\Omega = 0$ (solid line) and $\Omega = 0.5 \omega$ (dashed line). In the case $\Omega \neq 0$, full collapses and partial revivals are observed where the initial probability is not completely restored, with a maximum value that deteriorates as time evolves. In all simulations the initial state is $|g,0_{a}\rangle$ and $g/\omega = 2$~\cite{casanova2010}.} \end{figure} Figure~\ref{fig:jorge_Dsc} shows the time evolution of a state initially prepared in the uncoupled vacuum $|g,0_{a}\rangle$. Since this state is not an eigenstate of the quantum Rabi Hamiltonian, the system evolves as a wave packet climbing up and down the parity chains, displaying photon number wave packet oscillations.
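These round trips along the parity chains can be reproduced with a few lines of numerics. The sketch below (Python/SciPy, $\hbar=1$) propagates $|g,0_{a}\rangle$ under Eq.~(\ref{QRH}) with $\Omega=0$ and $g/\omega=2$: the return amplitude collapses at half a cavity period and revives fully at $t=2\pi/\omega$, as expected from the displaced-oscillator picture above:

```python
import numpy as np
from scipy.linalg import expm

omega, g, n_max = 1.0, 2.0, 120                 # DSC regime, slow qubit (Omega = 0)
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H = omega * np.kron(np.eye(2), a.T @ a) + g * np.kron(sx, a + a.T)

psi0 = np.zeros(2 * n_max, dtype=complex)
psi0[n_max] = 1.0                               # |g, 0_a>, with |g> = (0, 1)

def return_amplitude(t):
    """|<g, 0_a| psi(t)>| for the initial state |g, 0_a>."""
    return abs(np.conj(psi0) @ (expm(-1j * H * t) @ psi0))

P_collapse = return_amplitude(np.pi / omega)    # half a cavity period
P_revival = return_amplitude(2 * np.pi / omega) # one full cavity period
print(P_collapse, P_revival)
```

For $\Omega=0$ each $\hat{\sigma}_x$ branch is a coherent state tracing a circle in phase space, so the revival at $t=2\pi/\omega$ is complete; a finite $\Omega$ mixes the parity chains and degrades it, as in Fig.~\ref{fig:jorge_Dsc}(b).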
When the qubit frequency is finite, it effectively dephases the photon number oscillations, which decay in amplitude over time. Also, the temporal evolution of qubit operators depends crucially on the presence of parity chain mixing~\cite{Wolf2012}. The DSC regime requires a specific theoretical treatment due to its distinctive character when compared to USC physics, both in the discrete and in the continuous mode approaches~\cite{forn-diaz2017,yoshihara2017a,BayeretAl17NL}. In the regime where the coupling strength dominates over any other term, the limit of spin-dependent forces is expected. Such a limit was previously studied in trapped-ion systems in order to achieve faster quantum computing operations, among other applications~\cite{Solano2003, haljan2005}. Finally, it is worth mentioning another surprising limit of the QRM, when the mode frequency is negligible, giving rise to the emergence of the 1+1 dimensional Dirac equation~\cite{LamataDirac2007,Gerritsma2010}. This connection was further explored in the literature~\cite{Gerritsma2011} and may still produce important analogies for quantum simulations of relativistic quantum models encoded in nonrelativistic quantum systems~\cite{Pedernales2018}. Note that in the USC regime the complete cavity QED Hamiltonian contains an additional term, the so-called $A^2$ term, which represents the self-interaction energy of the field. This term is usually negligible at weak coupling, but in the USC regime it plays an important role in most physical systems. A historical dispute in the context of cavity QED has surrounded the $A^2$ term, due to an initial prediction of a superradiant phase transition \cite{Dicke54PR, HeppLieb73AP, WangHioe73PRA} followed by a no-go theorem \cite{RzazewskietAl75PRL}.
More recently, the dispute has resurfaced in the context of different quantum systems such as superconducting qubits \cite{nataf2010, ViehmannetAl11PRL, jaako2016} and polaritons \cite{HagenmullerCiuti12PRL, ChirollietAl12PRL}. Therefore, the study of the USC regime unavoidably leads to the exploration of the influence of the $A^2$ term in different physical systems, as was highlighted in a recent theoretical work which also included direct dipole interactions between the two-level systems \cite{Bernardis2018}. Learning more about this term would lead to profound insights into the ultimate nature of light-matter interaction. Extensions of the QRM considering the anisotropic Rabi model \cite{Xie2014}, including discussions of the $A^2$ term \cite{Maoxin2017}, have also been investigated. In this modified QRM, the counter-rotating terms are assigned a coupling strength $g_{cr}$ different from that of the co-rotating terms, $g$. \section{Experiments in the USC and DSC regimes}\label{sec:3} Ultrastrong coupling regimes have been the focus of theoretical studies for many decades \cite{Shirley1965, Cohen1973, Zela1997, irish2007}. It was not until the late 2000s that the first experimental sightings of light-matter interaction in the USC regime were made~\cite{niemczyk2010, forn-diaz2010, AnapparaetAl07SSC, DupontetAl07PRB}. This first round of experimental results triggered a period of intense theoretical exploration. Therefore, the experimental progress has marked the pace at which the field has evolved. Coincidentally, the exploration of the USC regime in different physical systems has taken place over roughly the same period of time. In this section, we will overview the most relevant of these fields, namely superconducting quantum circuits (Sec.~\ref{sec:3A}), semiconductor quantum wells (Sec.~\ref{sec:3B}) and other hybrid quantum systems (Sec.~\ref{sec:3C}).
\subsection{Superconducting quantum circuits}\label{sec:3A} The experimental exploration of ultrastrong interactions in superconducting quantum circuits was initiated in 2010, following several years of development of circuit quantum electrodynamics~\cite{blais2004, wallraff2004}. The early experiments used capacitive \cite{schuster2007, bishop2009} or mutual geometric inductive couplings \cite{chiorescu2004, johansson2006}, which yielded interaction strengths in the strong coupling regime. The first two experiments reaching USC regimes used galvanic couplings instead~\cite{niemczyk2010, forn-diaz2010}. Both experiments reported clear evidence of deviations from the conventional model used in quantum optics, the JC model introduced in Sec.~\ref{sec:2}~\cite{JC}. The couplings achieved are nowadays cast in the perturbative USC regime~\cite{Rossatto2017}. These were followed by several experimental results addressing distinct features related to counter-rotating wave physics inherent to the perturbative USC regime~\cite{chen2016, forn-diaz2016, baust2016}. In 2016, two independent experiments attained a qualitative jump in the light-matter interaction strength, pushing the boundaries into the non-perturbative USC domain by using Josephson junctions as coupling elements. These experiments spanned both closed~\cite{yoshihara2017a} and open system settings~\cite{forn-diaz2017} and entered the DSC regime~\cite{casanova2010, Rossatto2017}. In parallel to the engineering of circuits showing USC and DSC physics, novel techniques of digital and analog quantum simulation using superconducting circuits and trapped ions studied the QRM in these extreme coupling regimes~\cite{langford2016, braumuller2016,KihwanKimRabi2017}. Altogether, the year 2016 consolidated the field of research on USC regimes in superconducting circuits both from a fundamental and an applied point of view \cite{Braak2016}. 
A summary of the milestones in coupling strength achieved in experiments with superconducting quantum circuits is reported in Table~\ref{table:sqc}. \begin{table*}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Reference & Qubit & Cavity & Interaction & $\gamma/2\pi$ & $\kappa/2\pi$ & $g/2\pi$ & $\omega_r/2\pi$ & $g/\omega_r$ & $U$ & Notes \\ & type & type & type & (MHz) & (MHz) & (MHz) & (GHz) & (\%) & &\\ \hline \hline \cite{wallraff2004} & CPB & TL & Capacitive & 0.7 & 0.8 & 5.8 & 6.044 & 0.1 & 0.24 & First strong coupling \\ \hline \cite{chiorescu2004} & FQ & LE & Galvanic, external & 27 & 1.6 & 200 & 2.91 & 6.9 & 7.97 & Resonator SQUID \\ \hline \cite{johansson2006} & FQ & LE & Galvanic, external & 0.2 & 0.2 & 216 & 4.35 & 5 & 241 & First vacuum oscillations \\ \hline \cite{schuster2007} & TR & TL & Capacitive & 0.25 & 1.6 & 105 & 5.7 & 2 & 22.9 & First transmon work \\ \hline \cite{bishop2009} & TR & TL & Capacitive & 0.3 & 0.09 & 173.5 & 6.92 & 2.5 & 167 & \\ \hline \cite{fedorov2010} & FQ & LE & Galvanic, external & 2.9 & 0.1 & 119.5 & 2.723 & 4.4 & 46.5 & \\ \hline \cite{niemczyk2010} & FQ & TL & Galvanic, external & 2.5 & $<2$ & 636 & 5.357 & 12 & 98 & First USC work\\ \hline \cite{forn-diaz2010} & FQ & LE & Galvanic, external & $<10$ & 10 & 810 & 8.13 & 10 & 25.6 & Bloch-Siegert in USC\\ \hline \cite{baust2016} & FQ & TL & Galvanic, external & $\sim10$ & - & 775 & 13.3 & 17.2 & - & Dressed mode coupling\\ \hline \cite{chen2016} & FQ & TL & Galvanic, external & $\sim1$ & - & 306 & 3.143 & 9.7 & - & \\ \hline \cite{yoshihara2017a} & FQ & LE & Galvanic, internal & $\sim1$ & $\sim1$ & 7630 & 5.711 & 134 & 8819 & First DSC work\\ \hline \cite{yoshihara2017b} & FQ & LE & Galvanic, internal & $\sim1$ & $\sim1$ & 5310 & 6.203 & 86 & 4913 & \\ \hline \cite{bosman2017a} & TR & TL & Capacitive & 29.3 & 38 & 455 & 6.23 & 7.1 & 3.7 & \\ \hline \cite{bosman2017b} & TR & TL & Capacitive & 3.1 & $<0.1$ & 897 & 4.268 & 19 & 739 & First USC transmon\\ 
\hline \end{tabular} \caption{\label{table:sqc}Experimental observations of ultrastrong light-matter coupling in superconducting quantum circuits. CPB = Cooper pair box. FQ = Flux qubit. TR = Transmon qubit. TL = Transmission line resonator. LE = lumped-element resonator. $\gamma$ = qubit decay rate. $\kappa$ = photon decay rate. $g$ = coupling strength. $\omega_r$ = resonator frequency. $U\equiv\sqrt{(g/\omega_r)(4g^2/\kappa\gamma)}$ = geometric mean between cooperativity and normalized coupling strength.} \end{table*} \subsubsection{Circuit considerations: qubit-resonator systems} \label{sec:3A1} The interaction between light and matter is fundamentally manifested as a modification of a property of one of the interacting subsystems due to the presence of the other one. Consider a single atom placed in a dielectric. The presence of the atom represents a sudden modification of the medium through which light propagates. This point-like discontinuity in the dielectric causes a modification of the electromagnetic field distribution of photons, resulting in a net light-matter interaction. In the case of circuits, superconducting qubits play the role of effective artificial atoms. In analogy to natural atoms, the presence of a qubit induces a strong change in the impedance of the circuit through which microwave photons propagate, enabling qubit-photon interactions. The interaction in this case may be capacitive or inductive, depending on the circuit design [Fig.~\ref{fig:types}(a)]. Within the \emph{strong coupling regime} where the interaction strength $g$ dominates over qubit loss $\gamma$ and cavity loss $\kappa$, the qubit-photon interaction is perturbative with respect to the cavity mode frequency $\omega_c$, $\kappa,\gamma\ll g\ll\omega_c$, leaving the bare eigenstates of the interacting subsystems unmodified. The eigenstates of the total system will still consist of superpositions of qubit and photon in a dressed-state basis \cite{JC}.
For this reason we refer to strong coupling regimes as~\emph{external} in this review. \begin{figure}[!hbt] \centering \includegraphics{types} \caption{\label{fig:types} (Color online) a) Circuit schematic of \emph{external} coupling, with a circuit element (center, red) which couples resonator (left, blue) and qubit (right, yellow). Capacitors or inductors are examples of possible coupling elements. b) \emph{Internal} coupling where the qubit (right, yellow) and resonator (left, blue) shunt each other and share internal degrees of freedom.} \end{figure} There exists an important difference between atomic systems and superconducting circuits: superconducting qubits are circuits themselves, allowing the possibility to directly embed the artificial atom in the medium of propagation of photons [Fig.~\ref{fig:types}(b)]. In this way, the two coupled systems share more than just mutual geometric elements of the circuit (capacitive and/or inductive) which store the interaction energy [Fig.~\ref{fig:types}(a)]. As described later in this section, circuit engineering permits sharing an actual \emph{internal} degree of freedom between the artificial atom and the resonator, which becomes the source of the coupling. In such a scheme, the qubit degrees of freedom become renormalized by the elements of the coupling resonator circuit \cite{Manucharyan2017}. With such a strong interaction, the natural basis of eigenstates of the qubit circuit is modified, both for charge-type [Cooper pair box (CPB), and transmon qubit] and flux-type qubits (flux qubit and fluxonium qubit). This is the key point that permitted attaining coupling strengths well above the excitation frequencies of the interacting subsystems, i.e. the nonperturbative USC/DSC regimes \cite{yoshihara2017a, forn-diaz2017}. Superconducting qubits are generally classified in two types: flux-type and charge-type.
The qubit-resonator interaction can be of inductive (which includes galvanic coupling) or capacitive nature. All types of superconducting qubits developed so far have been shown to couple with either type of interaction. Generally speaking, the capacitive interaction is determined by the mutual capacitance between the two coupled circuits. Similarly, geometric inductive couplings are given by the mutual qubit-resonator inductance. Galvanic couplings are given by the superconducting phase drop that is developed across the shared mutual inductance between the two circuits (see Sec.~\ref{sec:3.1.b}). It is possible to reach ultrastrong couplings with both capacitive and galvanic interactions, with quite different fundamental limits imposed for each type, as detailed in the next subsections. We emphasize that all formulae shown in this section will be specific to a lumped-element resonator, for which there is no spatial dependence of the amplitude of the electromagnetic field fluctuations, and only a single resonant mode exists. This is in contrast to distributed resonators made of a section of a transmission line. In the latter, the presence of the qubit modifies the amplitude of the resonator field at that location, leading to a decrease of the interaction strength. This is due to the appearance of additional coupling mechanisms. For example, a flux qubit inductively coupled to a transmission line resonator develops a capacitive coupling at the expense of the inductive interaction \cite{bourassa2012}. Each superconducting qubit is defined within a subset of a larger Hilbert space of eigenstates of the whole quantum circuit. A recent theoretical study considered the complete circuit Hamiltonian of both flux-type and charge-type superconducting qubits embedded in a resonator \cite{Manucharyan2017}. Deviations from the QRM were found, but they do not alter the main qualitative properties of the model, particularly for the ground state.
The conclusions of this study will be presented in Sec.~\ref{sec:3.1.b}. In the following subsections we explore the limits to capacitive and galvanic interactions. Mutual geometric inductive couplings are less interesting, as one requires very large qubits to attain a sufficiently large mutual inductance. This in turn modifies the qubit eigenstates and eventually reduces the qubit persistent current, thereby reducing the coupling. Therefore, the attainable qubit-resonator interaction is lower than with galvanic couplings. \subsubsection{Capacitive couplings}\label{sec:3.1.a} Capacitive couplings have been widely used with all types of superconducting qubits engineered so far \cite{wallraff2004, inomata2012, hofheinz2009, manucharyan2009}. This type of coupling involves the root mean square (r.m.s.)~voltage $\hat{V}$ in the ground state of the resonator mode with frequency $\omega_r$ and capacitance $C_r$ \begin{equation} V_{\rm r.m.s.}\equiv\langle0| \hat{V}^2|0\rangle^{1/2} = \sqrt{\frac{\hbar\omega_r}{2C_r}}=\omega_r\sqrt{\frac{\hbar Z}{2}}, \end{equation} which scales as $\sqrt{Z}$, where $Z$ is the impedance of the resonator mode coupled to the qubit \cite{devoret2007, andersen2016, jaako2016}. This scaling already points to high-impedance resonators to reach the USC regime. A charge qubit capacitively coupled to a lumped resonator can be modeled as in Fig.~\ref{fig:cqb}. The qubit-resonator interaction is well described by the dipolar energy term, $H_{\rm int} = -\vec{d}\cdot\vec{E}$, where $\vec{d}$ is the electric dipole moment. For a superconducting quantum circuit, this term can be rewritten as \cite{blais2004} $\hat{H}_{\rm int} = C_gV_{\rm r.m.s.}(\hat{a}+\hat{a}^{\dag})\hat{V}_q$, where $\hat{V}_q$ is the voltage produced by the qubit and $C_g$ is the coupling capacitor. For a CPB in the charging regime, $\hat{V}_q= 2e\hat{\sigma}_x/(C_g+C_q)$.
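The $\sqrt{Z}$ scaling of $V_{\rm r.m.s.}$ can be made concrete with a short numerical sketch evaluating the expression above. The 8~GHz mode frequency and the impedance values are illustrative choices, not taken from any specific experiment.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def v_rms(omega_r, Z):
    """Ground-state voltage fluctuation of a lumped resonator mode,
    V_rms = omega_r * sqrt(hbar * Z / 2)."""
    return omega_r * math.sqrt(HBAR * Z / 2)

omega_r = 2 * math.pi * 8e9     # illustrative 8 GHz resonator mode
v_50 = v_rms(omega_r, 50.0)     # standard 50-ohm environment: a few microvolts
v_5k = v_rms(omega_r, 5e3)      # high-impedance resonator

# V_rms scales as sqrt(Z): a 100x impedance increase gives a 10x larger voltage
ratio = v_5k / v_50
```

The microvolt-scale result for $Z = 50\,\Omega$ illustrates why high-impedance resonators are attractive for reaching the USC regime with capacitive couplings.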
As shown in Fig.~\ref{fig:cqb}, $C_g$ is the coupling capacitor and $C_q$ is the capacitance shunting the qubit junction, which may represent the junction capacitance itself (CPB) or an external, much larger shunting capacitor (transmon). The qubit-resonator interaction results in \begin{equation} H_C^{\rm CPB} = 2e\frac{C_g}{C_g+C_q}V_{\rm r.m.s.} \hat{\sigma}_x(\hat{a}+\hat{a}^{\dag}). \end{equation} For a transmon qubit, the electric dipole has the form $d = e(E_J/8E_C)^{1/4}$, where $E_C = e^2/2C$ is the charging energy, leading instead to a modified interaction Hamiltonian \begin{equation}\label{eq:tr_cap} H_C^{\rm tr} = e\frac{C_g}{C_g+C_q}\left(\frac{E_J}{8E_C}\right)^{1/4}V_{\rm r.m.s.}\hat{\sigma}_x(\hat{a}+\hat{a}^{\dag}). \end{equation} The coupling strength $g$ in the last expression can be re-written in a reduced form \cite{devoret2007} \begin{equation}\label{eq:gc} \frac{g_C^{\rm{tr}}}{\omega_r} = \frac{1}{\sqrt{2\pi^3}}\left(\frac{E_J}{8E_C}\right)^{1/4}\sqrt{\frac{Z}{Z_{\rm{vac}}}}\frac{C_g}{C_g+C_q}\alpha^{1/2}. \end{equation} $Z_{\rm{vac}}=\sqrt{\mu_0/\epsilon_0}\simeq377\,\Omega$ is the vacuum impedance, while $\alpha\simeq1/137$ is the fine structure constant. Equation~(\ref{eq:gc}) shows the fundamental limitations for transmon qubits and capacitive couplings. It has been shown \cite{jaako2016} that this type of coupling cannot reach the DSC regime $g/\omega_r>1$, as the coupling is bound by \begin{equation}\label{eq:jaako} \frac{g^{\rm{tr}}_C}{\omega_r} = \frac{C_g}{\sqrt{C_r(C_q+C_g) + C_g(C_g+C_q)}}<1, \end{equation} for exact qubit-photon resonance. The capacitances refer to the circuit in Fig.~\ref{fig:cqb}. Typical circuit parameters limit this quantity to $g_C^{\rm{tr}}/\omega_r\approx0.01$ for $Z = 50\,\Omega$. \begin{figure}[!hbt] \centering \includegraphics[width = 7cm]{cqb} \caption{\label{fig:cqb} Circuit model of a charge qubit shunted with capacitance $C_q$ coupled with a capacitor $C_g$ to a lumped resonator of capacitance $C_r$.
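The transmon coupling of Eq.~(\ref{eq:gc}) and the capacitive bound of Eq.~(\ref{eq:jaako}) can be evaluated numerically. The circuit values below are illustrative assumptions of typical orders of magnitude, not parameters of a reported device.

```python
import math

ALPHA = 1 / 137.035999  # fine structure constant
Z_VAC = 376.73          # vacuum impedance, ohm

def g_over_wr_transmon(EJ_over_EC, Z, Cg, Cq):
    """Reduced capacitive transmon coupling, Eq. (gc):
    (1/sqrt(2*pi^3)) * (EJ/8EC)^(1/4) * sqrt(Z/Zvac) * Cg/(Cg+Cq) * sqrt(alpha)."""
    return (1 / math.sqrt(2 * math.pi**3) * (EJ_over_EC / 8) ** 0.25
            * math.sqrt(Z / Z_VAC) * Cg / (Cg + Cq) * math.sqrt(ALPHA))

def capacitive_bound(Cg, Cq, Cr):
    """Upper bound on g/omega_r at exact resonance, Eq. (jaako)."""
    return Cg / math.sqrt(Cr * (Cq + Cg) + Cg * (Cg + Cq))

# illustrative circuit values (farad)
Cg, Cq, Cr = 5e-15, 60e-15, 400e-15
g_ratio = g_over_wr_transmon(EJ_over_EC=50, Z=50.0, Cg=Cg, Cq=Cq)
bound = capacitive_bound(Cg, Cq, Cr)  # always below 1: no DSC regime
```

For any choice of positive capacitances the bound stays below~1, consistent with the statement that capacitive transmon coupling cannot reach the DSC regime.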
The cross corresponds to the circuit element of a Josephson junction. The lumped resonator (left) is depicted in blue, the charge qubit (right) in red. This model is valid both for Cooper pair boxes as well as transmon qubits.} \end{figure} The same analysis for pure charge qubits (CPB) gives a reduced coupling of \begin{equation}\label{eq:g_c_cpb} \frac{g_C^{\rm CPB}}{\omega_r} = \frac{1}{\sqrt{8\pi}}\sqrt{\frac{Z}{Z_{\rm vac}}}\frac{C_g}{C_g+C_J}\alpha^{1/2}. \end{equation} Using a lumped-element resonator model, the reduced coupling can be recast using circuit parameters in analogy to the transmon case \cite{jaako2016} \begin{equation}\label{eq:g_cpb} \frac{g^{\rm{CPB}}_C}{\omega_r} = \frac{2C_g}{\sqrt{C_r(C_q+C_g) + C_g(C_g+C_q)}}\sqrt{\frac{E_C}{E_J}}. \end{equation} Note that the frequency of a CPB is assumed here to be $\hbar\omega_q= E_J$. Equation~(\ref{eq:g_cpb}) shows that it is in principle possible to reach the DSC regime with a CPB with $E_C\gg E_J$. In practice, the limitation on charge qubit lifetime makes this circuit implementation challenging. The circuit parameters used so far in experiments involving CPBs and resonators \cite{wallraff2004} achieved values of $g_C^{\rm{CPB}}/\omega_r\approx0.01$ with a resonator impedance $Z = 50\,\Omega$. We point out that the limits imposed by Eqs.~(\ref{eq:gc})-(\ref{eq:g_cpb}) are specific to the circuit\footnote{Equations (\ref{eq:jaako}) and (\ref{eq:g_cpb}) are obtained from a modified but similar circuit to that shown in Fig.~{\ref{fig:cqb}} \cite{jaako2016}.} shown in Fig.~\ref{fig:cqb}. However, as will be shown in Sec.~\ref{sec:3.1.b}, a charge qubit, either transmon or CPB, shunted by an $LC$ circuit presents a charge-like interaction with a coupling strength well into the $g/\omega_c>1$ regime \cite{Manucharyan2017}. The $\sqrt{Z}$ scaling of the coupling in Eqs.~(\ref{eq:gc}) and (\ref{eq:g_c_cpb}) originates from the resonator voltage fluctuations $V_{\rm r.m.s.}$, favoring high-$Z$ resonators.
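Equation~(\ref{eq:g_cpb}) makes the $E_C\gg E_J$ route to DSC explicit. The sketch below contrasts a transmon-like and a charge-like parameter choice for the same (illustrative, assumed) capacitances.

```python
import math

def g_over_wr_cpb(Cg, Cq, Cr, EC_over_EJ):
    """Reduced coupling of a Cooper pair box, Eq. (g_cpb):
    2*Cg / sqrt(Cr*(Cq+Cg) + Cg*(Cg+Cq)) * sqrt(EC/EJ)."""
    return (2 * Cg / math.sqrt(Cr * (Cq + Cg) + Cg * (Cg + Cq))
            * math.sqrt(EC_over_EJ))

# illustrative capacitances (farad)
Cg, Cq, Cr = 5e-15, 5e-15, 50e-15

g_transmon_like = g_over_wr_cpb(Cg, Cq, Cr, EC_over_EJ=1/50)  # EJ >> EC: weak
g_charge_like   = g_over_wr_cpb(Cg, Cq, Cr, EC_over_EJ=50)    # EC >> EJ: DSC
```

With $E_C/E_J = 50$ the reduced coupling exceeds unity, while the transmon-like ratio stays far below it, in line with the discussion above.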
Employing high kinetic inductance films or Josephson junction arrays \cite{masluk2012, andersen2016}, impedances of several k$\Omega$ would allow reaching the regime $g_C^{\rm CPB, tr}\approx\omega_r$. Recent experiments reported for the first time USC between a superconducting transmon qubit and a transmission line resonator \cite{bosman2017b}. The strength of the coupling was attained by implementing a vacuum gap capacitor in which the qubit capacitor plate was suspended over the ground plane, enhancing in this way the ratio of coupling capacitance $C_g$ to total capacitance $C_g+C_q$ in Eq.~(\ref{eq:tr_cap}). The suspended plate, an effective drum 30~$\mu\rm{m}$ in diameter held less than 1~$\mu\rm{m}$ over the resonator ground, led to a coupling capacitance nearly an order of magnitude larger than in planar capacitance designs. Combined with a high-impedance superconducting transmission line resonator, an ultrastrong coupling of up to $g/\omega\sim0.19$ was observed with the fundamental resonator mode. The high resonator impedance was achieved by narrowing the center line of the resonator, in this way reducing the capacitance per unit length between the ground planes and the center line. A single-photon Bloch-Siegert shift of $\omega_{\rm{BS}}/2\pi = -100$~MHz was reported. This result represented the largest qubit-resonator coupling achieved using a charge-based superconducting qubit. Despite Eq.~(\ref{eq:jaako}) limiting the ratio $g_C^{\rm{tr}}/\omega_r$ to lie below~1, transmon-based devices approaching the DSC regime may be demonstrated in the near future. As already mentioned above, a more straightforward way to enter the DSC regime with charge qubits is to directly shunt the qubit by an $LC$ resonator. \subsubsection{Galvanic couplings}\label{sec:3.1.b} Two systems are galvanically coupled when they share a portion of their respective circuits.
Here, we will distinguish two types of galvanic coupling based on the amount of circuit shared: a)~sharing a linear inductance and b)~embedding the qubit directly into the resonator circuit. The general picture is that the qubit and resonator share a circuit element, which in the latter case is the qubit circuit itself. In both situations, the qubit-resonator coupling is then given by the superconducting phase drop across the shared circuit element $\hat{\varphi}$, which itself is a new degree of freedom of the circuit [Fig.~\ref{fig:galvanic}]. For flux-type qubits [Figs.~\ref{fig:galvanic}(a), \ref{fig:galvanic}(b)], $\hat{\varphi}$ can be represented in the basis of eigenstates of the qubit, $\langle i|\hat{\varphi}|j\rangle$, which relates to the current running across the inductive element [see Eq.~(\ref{eq:liqir})]. For charge-type qubits, an inductor in series with the qubit junction may be shared with a resonator, as shown in Fig.~\ref{fig:galvanic}(c). Increasing the coupling strength in this configuration comes at the expense of the qubit anharmonicity~\footnote{See related literature for a more detailed calculation of the effects of linear inductors in transmon qubits~\cite{bourassa2012}.}. Therefore, it is not very favorable for reaching ultrastrong interaction strengths, and we will not discuss this configuration further. In practice, this type of interaction has only been implemented in coupled-qubit circuits~\cite{Chen2014}. The other possibility~\footnote{Here, we are only discussing transverse-type couplings. For both flux-type and charge-type qubits, a longitudinal coupling can instead be engineered by replacing one of the qubit junctions by a SQUID loop and galvanically attaching a fraction of this loop to a resonator circuit.
We will not discuss longitudinal couplings in this review.} is to embed the qubit in the resonator circuit [Fig.~\ref{fig:galvanic}(d)] where the coupling is to the charge degree of freedom $\hat{Q}$ on the island formed on one side of the qubit junction. Coupling to the phase $\hat{\varphi}$ involves the r.m.s.~current $\hat{I}$ in the ground state of the resonator mode with frequency $\omega_r$ and inductance $L_r$ \begin{equation}\label{eq:irms} I_{\rm r.m.s.} \equiv\langle0|\hat{I}^2|0\rangle^{1/2} = \sqrt{\frac{\hbar\omega_r}{2L_r}} = \omega_r\sqrt{\frac{\hbar}{2Z}}. \end{equation} Clearly, in order to maximize the coupling strength, a low resonator impedance $Z$ is desirable. \begin{figure}[!hbt] \centering \includegraphics[width = 7cm]{galvanic} \caption{\label{fig:galvanic} (Color online) Circuit model for galvanic couplings. a) Flux qubit sharing a section of its loop with a resonator. The coupling element consists of a linear inductance. b) Flux qubit embedded into the resonator loop. The coupling is given by the phase across the shared junction. c) Charge qubit sharing an inductance with a resonator. The coupling element is given by the shared inductance. d) Charge qubit embedded in the resonator loop. The coupling operator is related to the charge $\hat{Q}$ stored in the superconducting island shared between qubit and resonator, highlighted by the dashed line. a) and c) represent an external coupling element, while b) and d) are internal couplings.} \end{figure} In what follows, we will use the three-junction flux qubit \cite{Mooij1999} to analyze the different types of galvanic couplings. The description can be easily extended to the fluxonium \cite{manucharyan2009} and other flux-type qubit circuits. \paragraph{Linear inductance.-} Here, we will only focus on flux-type qubits, but the discussion can be extended to charge qubits in the configuration shown in Fig.~\ref{fig:galvanic}(c).
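The ground-state current fluctuation of Eq.~(\ref{eq:irms}) can be evaluated in the same spirit as $V_{\rm r.m.s.}$ above; the mode frequency and impedances are again illustrative assumptions.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def i_rms(omega_r, Z):
    """Ground-state current fluctuation of a lumped resonator mode,
    I_rms = omega_r * sqrt(hbar / (2*Z))."""
    return omega_r * math.sqrt(HBAR / (2 * Z))

omega_r = 2 * math.pi * 6e9      # illustrative 6 GHz mode
i_50 = i_rms(omega_r, 50.0)      # tens of nanoamperes at 50 ohm
i_low = i_rms(omega_r, 0.5)      # low-impedance resonator

# I_rms scales as 1/sqrt(Z): a 100x lower impedance gives 10x more current
ratio = i_low / i_50
```

The $1/\sqrt{Z}$ scaling is the mirror image of the capacitive case: inductive (galvanic) couplings favor low-impedance resonators.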
The circuit topology of a flux-type qubit consists of one or more junctions interrupting a superconducting loop, a section of which can be shared with a resonator circuit [Fig.~\ref{fig:galvanic}(a)]. The coupling element is then the shared linear inductor~$L$, which adds a degree of freedom to the circuit, the phase drop across it $\hat{\varphi}_L$. In the perturbative USC regime, which corresponds to the experiments described in this subsection, the value of the coupling inductance is typically small compared to the resonator inductance $L_r$ and the qubit loop inductance. Therefore, $\hat{\varphi}_L$ is frozen in its ground state and is treated as a constant which becomes a perturbation to the qubit-resonator system.\footnote{The linear coupling inductance in typical flux qubit loops a few micrometers in size does not significantly contribute to the energy spectrum and is usually neglected.} In this regime of small coupling inductance, the coupling element thus does not modify the bare qubit/resonator spectra and is therefore an external coupling as defined in Sec.~\ref{sec:3A1}. The inductance of a superconducting wire has a geometric as well as a kinetic origin. The inductance from a Josephson junction may also be used as a linear inductor, provided that its critical current is much larger than the current flowing through it. The geometric inductance is typically calculated from $L_G = (\mu_0l/2\pi)\left[\ln\left(2l/(w+t)\right)+1/2\right]$. Here, $l$, $w$, $t$ are the wire length, width, and thickness, respectively. The kinetic inductance has its origin in the inertia of Cooper pairs. In the dirty superconductor limit, it takes the form \cite{tinkham2004} $L_K = \mu_0\lambda_L^2 l/wt$, where $\lambda_L$ is the London penetration depth, which for thin films can reach values several times the bulk value.
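The two inductance contributions can be compared numerically with the expressions above, taking the geometric-inductance logarithm as $\ln[2l/(w+t)]$. The wire dimensions and penetration depth below are assumed, thin-film-like values chosen only for illustration.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def l_geometric(l, w, t):
    """Geometric inductance of a thin wire,
    L_G = (mu0*l/2pi) * [ln(2l/(w+t)) + 1/2]."""
    return MU0 * l / (2 * math.pi) * (math.log(2 * l / (w + t)) + 0.5)

def l_kinetic(l, w, t, lambda_L):
    """Kinetic inductance in the dirty limit, L_K = mu0*lambda_L^2*l/(w*t)."""
    return MU0 * lambda_L**2 * l / (w * t)

# assumed wire geometry: 10 um long, 1 um wide, 50 nm thick
l, w, t = 10e-6, 1e-6, 50e-9
lambda_L = 200e-9   # assumed thin-film penetration depth

LG = l_geometric(l, w, t)            # a few picohenries
LK = l_kinetic(l, w, t, lambda_L)    # comparable or larger for thin films
```

For these parameters the kinetic contribution already exceeds the geometric one, illustrating why thin, narrow films are attractive as compact coupling inductors.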
The kinetic inductance can also be expressed as a function of the normal state resistance of the wire $R_n$, $L_K = 0.14\hbar R_n/k_{\rm B}T_c$, with $T_c$ being the superconductor critical temperature. For a large, unbiased Josephson junction, the inductance is given by $L_J = \Phi_0/2\pi I_C$ \cite{orlando_book}, with $I_C$ being the junction critical current, and $\Phi_0=h/2e$ the flux quantum. Irrespective of the type of coupling inductor, the phase across it can be treated as a constant operator with off-diagonal matrix elements which are directly calculated in the qubit eigenbasis, $\langle 0|\hat{\varphi}_L|1\rangle\simeq LI_p/(\Phi_0/2\pi)$. Here, $I_p \equiv\langle 0 |\hat{I}|1\rangle$ is the persistent current in the qubit loop. The interaction strength in this case is given by the magnetic dipolar energy, $H_{\rm int} = -\vec{m}\cdot\vec{B}$, which for a superconducting quantum circuit is re-written as \begin{equation}\label{eq:liqir} \hat{H}_{\rm int} = LI_pI_{\rm r.m.s.}\hat{\sigma}_x(\hat{a}+\hat{a}^{\dag}), \end{equation} leading to the definition of the coupling strength $g \equiv LI_pI_{\rm r.m.s.}/\hbar$. Here, $L$ represents the sum of all linear inductance contributions shared between qubit and resonator. The first two experiments demonstrating USC in superconducting circuits used linear inductors as coupling elements. In the first experiment \cite{niemczyk2010}, a flux qubit was coupled to a transmission line resonator by means of the large inductance of a shared Josephson junction operated in the linear regime (Fig.~\ref{fig:NM1}). The spectrum of the system showed clear signatures of qubit-photon interactions in different modes of the resonator. The extracted qubit-resonator coupling rates to the first three resonator modes were $g_0/2\pi = 314~$MHz, $g_1/2\pi = 636~$MHz, and $g_2/2\pi = 568~$MHz, respectively. The maximum normalized coupling strength was achieved by the second mode, i.e., $g_1/\omega_1 = 0.12$.
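The coupling strength $g = LI_pI_{\rm r.m.s.}/\hbar$ defined above is easy to estimate. The inductance and currents below are assumed round numbers of the order encountered in the early USC experiments, not values taken from a specific device.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def coupling_linear_inductance(L, I_p, I_rms):
    """Inductive coupling strength g = L * I_p * I_rms / hbar
    (returned as an angular frequency, rad/s)."""
    return L * I_p * I_rms / HBAR

# illustrative, assumed values
L = 20e-12       # 20 pH shared inductance
I_p = 500e-9     # qubit persistent current
I_rms = 50e-9    # resonator vacuum current fluctuation

g = coupling_linear_inductance(L, I_p, I_rms)
g_GHz = g / (2 * math.pi) / 1e9   # hundreds of MHz
```

The result, a fraction of a GHz, matches the order of magnitude of the coupling rates quoted above for the first USC experiments.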
Deviations from the JC model were clearly observed with the appearance of avoided-level crossings corresponding to a breakdown of the conservation of the number of excitations. Due to the presence of the counter-rotating terms, the states $|e,1,0,0\rangle$ and $|g,0,0,1\rangle$, which are degenerate under the RWA, hybridize and result in visible avoided-level crossings, as seen in Fig.~\ref{fig:NM3}. \begin{figure}[!hbt] \includegraphics[width = 8.5cm]{WM-fig1} \caption{\label{fig:NM1} (Color online) First experiment that reported breakdown of the rotating-wave approximation in a superconducting qubit circuit. (a) Optical image of the circuit; (b) scanning electron micrograph (SEM) of the coupling capacitor; (c) profiles of the resonator modes coupling to the qubit; (d)-(f) SEM images showing the qubit circuit and qubit junctions; (g) circuit schematic~\cite{niemczyk2010}.} \end{figure} \begin{figure*}[!hbt] \includegraphics[width = 16cm]{WM-fig3} \caption{\label{fig:NM3} (Color online) Observation of transitions which do not conserve the number of excitations in a flux qubit-resonator spectrum. Plots display transmission through the circuit, with $\omega_{\rm rf}$ being the probe frequency. $\delta\Phi_{\rm x}$ corresponds to the flux applied to the qubit using an external coil. (a) Full circuit spectrum near the second resonator mode frequency. Dashed lines fitting the data correspond to the full Hamiltonian, the green vertical lines represent the case of no qubit-resonator coupling, while the solid magenta line is the prediction of the Jaynes-Cummings (JC) model; (b) zoom-in near the avoided qubit-resonator level crossing; (c) avoided level crossings not included in the JC model.
The presence of the counter-rotating terms introduces hybridization between the indicated eigenstates, which otherwise would not couple~\cite{niemczyk2010}.} \end{figure*} \begin{figure}[!hbt] \includegraphics[width = 8cm]{FDfig1} \includegraphics{FDfig2} \caption{\label{fig:BS} (Color online) Observation of physics beyond the rotating-wave approximation: the Bloch-Siegert shift. (a) Circuit schematic and scanning electron micrograph images; (b) spectrum near the resonator frequency $\omega_r/2\pi = 8.13~$GHz as a function of the magnetic flux in the qubit. The acquired signal represents the magnetic flux sensed by the SQUID coupled to the qubit; (c) resonator frequency shift with respect to the prediction of the Jaynes-Cummings model, identified here as the Bloch-Siegert shift. The horizontal dashed line is the prediction from the Jaynes-Cummings model, the solid line is the full Hamiltonian without approximations, and the dashed line fitting the data is the approximated Hamiltonian in the perturbative USC regime~\cite{forn-diaz2010}.} \end{figure} In the second experiment \cite{forn-diaz2010}, a flux qubit was galvanically attached to a lumped-element $LC$ resonator, such that both systems were coupled by the inductance of the shared wire [see Fig.~\ref{fig:BS}(a)]. The qubit spectrum showed a large avoided-level crossing at the resonance point, yielding a coupling strength of $g/2\pi = 810$~MHz for a resonator frequency of $\omega_r/2\pi = 8.13$~GHz. This resulted in a normalized coupling of $g/\omega_r = 0.1$. Deviations from the RWA were identified as a frequency shift in the resonator when the qubit was flux-biased near its symmetry point $\Phi=\Phi_0/2$ [Figs.~\ref{fig:BS}(b), \ref{fig:BS}(c)]. At this bias point, the effective qubit-resonator coupling is maximal.
The frequency shift of the resonator compared to the JC model, also known as the Bloch-Siegert shift \cite{bloch1940}, was attributed to the dispersive effect of the counter-rotating terms, as explained in Sec.~\ref{sec:2}. Its existence had long been predicted \cite{Cohen1973, zakrzewski1991} and this experiment represented its first observation. The maximum Bloch-Siegert shift attained in this experiment was $\omega_{\rm{BS}}\equiv g^2/(\omega_r+\omega_q) = 2\pi\times52~$MHz. The two experiments described above were performed in the perturbative USC regime, defined when the normalized coupling constant $\lambda\equiv g/(\omega_r+\omega_q)$ is smaller than 1 \cite{Rossatto2017}. The experiments achieved $\lambda = 0.084$ \cite{niemczyk2010} and $\lambda = 0.066$ \cite{forn-diaz2010}, respectively, satisfying the condition of perturbative USC. In later experiments, a two-resonator circuit was coupled to a single flux qubit by sharing a section of the qubit loop, several $\mu\rm{m}$ long \cite{baust2016}. The coupling strength observed was $g/\omega_r = 0.17$, attained using a collective mode between the two resonators. Follow-up work on the Bloch-Siegert shift experiment studied the energy-level transitions between excited states as a function of coupling strength \cite{forn-diaz2016}. In the RWA regime, the excited states of the JC model appear in doublets $|n, \pm\rangle$ for each photon number $n$. In circuit QED, the qubit is typically driven via the resonator. With this indirect driving, a selection rule exists under the RWA between eigenstates of different manifolds $|n,\pm\rangle$ and $|n\pm1,\pm\rangle$. The observation of a transition between the dressed states $|1,-\rangle$ and $|2,+\rangle$ belonging to different manifolds was identified in this work as another distinct feature of the USC regime.
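The dispersive Bloch-Siegert shift $\omega_{\rm BS} = g^2/(\omega_r+\omega_q)$ can be estimated from the numbers quoted above. The coupling and resonator frequency are those reported in the text; the qubit frequency at the symmetry point is an illustrative assumption introduced only for this estimate.

```python
import math

def bloch_siegert_shift(g, omega_r, omega_q):
    """Dispersive Bloch-Siegert shift, omega_BS = g^2 / (omega_r + omega_q)."""
    return g**2 / (omega_r + omega_q)

g = 2 * math.pi * 810e6        # coupling reported in the text
omega_r = 2 * math.pi * 8.13e9 # resonator frequency reported in the text
omega_q = 2 * math.pi * 4.5e9  # assumed qubit gap at the symmetry point

shift_MHz = bloch_siegert_shift(g, omega_r, omega_q) / (2 * math.pi) / 1e6
```

With this assumed qubit gap the estimate comes out in the few-tens-of-MHz range, the same order as the maximum shift quoted above.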
In another experiment in the perturbative USC regime, Chen \emph{et al.} explored multi-photon red sidebands in an experiment consisting of a flux qubit coupled to a transmission line resonator \cite{chen2016}. These higher-order sidebands could only be unambiguously detected in the USC regime, where the counter-rotating terms modify the selection rules. The largest coupling in this experiment was attained between the flux qubit and the fundamental mode of the resonator, reaching a value of $g/\omega_0 = 0.097$. \paragraph{Embedded qubit circuit.-} Up to this point, the description of galvanic couplings as a perturbation of the qubit-resonator system has been valid for couplings up to $g/\omega\simeq0.1$. Increasing the coupling strength towards the nonperturbative regime amounts to treating the phase drop of the inductive element $\hat{\varphi}_L$ as a degree of freedom shared between the qubit and resonator, with dynamics of its own. While in principle it should be possible to increase the shared inductance and enter the non-perturbative USC regime \cite{Rossatto2017}, in practice this would result in a very large qubit geometry, hence susceptible to flux noise, and a decrease of the persistent current in the qubit loop that would eventually decrease the coupling strength. The natural way to further enhance the interaction strength is to share a junction of the qubit circuit with the resonator [see Fig.~\ref{fig:galvanic}(b)]. In other words, the qubit needs to be embedded ``in parallel" to the resonator. This circuit requires full quantization in order to be properly described. In that case, the interaction term becomes of dipole-type \cite{peropadre2013} \begin{equation} H_{\rm{int}} = \sum_{\alpha=x,y,z}\hbar g_G^{\alpha}(\hat{a}^{\dag}+\hat{a})\hat{\sigma}_{\alpha}. \end{equation} The coupling operators are here defined as \begin{align}\label{eq:gx} \hbar g_G^x &= \sqrt{\frac{\hbar\omega_r}{2L_r}}\times\frac{\Phi_0}{2\pi}\langle 0|\hat{\varphi}|1\rangle,\\ \label{eq:gz} \hbar g_G^z &= \sqrt{\frac{\hbar\omega_r}{2L_r}}\times\frac{1}{2}\left(\frac{\Phi_0}{2\pi}\right)\left(\langle 1|\hat{\varphi}|1\rangle - \langle0|\hat{\varphi}|0\rangle\right). \end{align} The prefactor $\sqrt{\hbar\omega_r/2L_r}$ corresponds to the r.m.s.~current of the resonator in its ground state, Eq.~(\ref{eq:irms}). The last factors in Eqs.~(\ref{eq:gx}) and (\ref{eq:gz}) correspond to the magnetic dipole moment and the net magnetic flux generated by the qubit, respectively. Near the qubit symmetry point, where the qubit is usually operated to maximize quantum coherence, the net flux generated is null. Therefore, we may neglect the coupling term $g_G^z$. Equation~(\ref{eq:gx}) includes the case of a shared linear inductor, since in that case we can write the dipole moment as $(\Phi_0/2\pi)\langle 0|\hat{\varphi}|1\rangle\simeq LI_p$ so that the coupling becomes the mutual inductive energy $LI_pI_{\rm{r.m.s.}}$, as in Eq.~(\ref{eq:liqir}). Equation~(\ref{eq:gx}) can be recast as a function of the resonator impedance $Z$ \begin{equation}\label{eq:gw} \frac{g_G^x}{\omega_r} = \frac{1}{8}\sqrt{\frac{Z_{\rm{vac}}}{\pi Z}}\alpha^{-1/2}\langle 0|\hat{\varphi}|1\rangle. \end{equation} Manucharyan \emph{et al.} showed that for a fluxonium qubit $g_G^x/\omega_r$ takes an identical form \cite{Manucharyan2017}. Using a linear inductance as coupler, the matrix element of the phase operator is of order $\langle 0|\hat{\varphi}|1\rangle\approx10^{-2}$ \cite{forn-diaz2010, chen2016, baust2016} so that Eq.~(\ref{eq:gw}) leads to $g_G^x/\omega_r\approx0.1$, just entering the perturbative USC regime. Maximizing Eq.~(\ref{eq:gx}) may be accomplished by sharing a qubit junction, as shown in Fig.~\ref{fig:galvanic}(b).
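Equation~(\ref{eq:gw}) can be evaluated directly to contrast the two coupling schemes; the $50\,\Omega$ impedance and the two phase matrix elements are illustrative values in the spirit of the discussion above, and the exact numbers depend on the resonator impedance.

```python
import math

ALPHA = 1 / 137.035999  # fine structure constant
Z_VAC = 376.73          # vacuum impedance, ohm

def g_over_wr_galvanic(Z, phi_01):
    """Reduced galvanic coupling, Eq. (gw):
    g/omega_r = (1/8) * sqrt(Zvac/(pi*Z)) * alpha^(-1/2) * <0|phi|1>."""
    return 0.125 * math.sqrt(Z_VAC / (math.pi * Z)) / math.sqrt(ALPHA) * phi_01

# shared linear inductor: phase matrix element of order 1e-2 (assumed)
g_linear = g_over_wr_galvanic(Z=50.0, phi_01=1e-2)

# shared qubit junction: phase matrix element of order 1 (assumed)
g_shared_junction = g_over_wr_galvanic(Z=50.0, phi_01=1.0)
```

A phase matrix element of order unity pushes the reduced coupling above~1, i.e. into the DSC regime, while the linear-inductor case stays perturbative.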
In that case, $\langle1|\hat{\varphi}|0\rangle\approx1$, so $g_G^x/\omega_r\simeq2$, which lies well in the DSC regime. Increasing the coupling further is possible by using low-impedance resonators. Following the initial experiments in the perturbative USC regime, a new wave of results was reported when two experiments demonstrated the DSC regime between a flux qubit and a resonator \cite{yoshihara2017a}, and between a flux qubit and a transmission line in an open-space setting~\cite{forn-diaz2017}. In both experiments, the qubit was embedded in the resonator/transmission line circuit, with the coupling element being a Josephson junction of the qubit loop. Contrary to the first experiment reporting USC \cite{niemczyk2010}, the coupling junction was part of the qubit internal dynamics, therefore corresponding to an \emph{internal coupling} as defined in Sec.~\ref{sec:3A1}. The effective inductance stored in the junction enabled coupling strengths all the way into the non-perturbative USC regime. \begin{figure}[!hbt] \includegraphics[width = 8cm]{NTT-fig1} \caption{\label{fig:NTTcirc} (Color online) DSC regime circuitry of a superconducting flux qubit coupled to an $LC$ resonator. (a) Circuit schematic; (b) scanning electron micrograph of the device. The large interdigitated finger capacitor occupies most of the image. The probing transmission line can be seen to the right of the image; (c) zoom-in of the qubit, with the 4-junction SQUID coupler in the bottom arm~\cite{yoshihara2017a}.} \end{figure} \begin{figure*}[!hbt] \includegraphics{NTT-fig2} \caption{\label{fig:NTT} (Color online) DSC regime spectrum at different coupling strengths. (a)-(d) show the spectrum near the bare resonator frequency.
Signal represents transmission through the resonator; (e)-(h) display the same spectra with fitted theory calculations using the full quantum Rabi model, finding excellent agreement with the experiments; (j)-(n) display a broader frequency range corresponding to the same coupling strengths as in (a)-(d), where additional transitions are identified which confirm the large size of the coupling strength of the system. Certain transitions vanish due to the symmetry of the system Hamiltonian. The inset shows several transition matrix elements coinciding with resonances in the experiment~\cite{yoshihara2017a}.} \end{figure*} The qubit-resonator experiment consisted of an $LC$ circuit galvanically coupled to a flux qubit by sharing an array of four Josephson junctions in parallel, acting as an effective SQUID, which allowed tuning of the interaction strength \cite{peropadre2010}; see Fig.~\ref{fig:NTTcirc}. The resonator was inductively coupled to a transmission line to allow probing the system in transmission. In order to enhance the coupling strength, a very large resonator capacitor was used to decrease its impedance $Z=\sqrt{L/C}$ and enhance in this way the ground-state current fluctuations $I_{\rm{r.m.s.}}=\omega_r\sqrt{\hbar/2Z}$, as explained in Sec.~\ref{sec:3.1.b}. The spectrum of the system showed energy-level transitions that agreed with the full QRM (see Fig.~\ref{fig:NTT}). The coupling strengths reported spanned the region $0.72\leq g/\omega_r \leq 1.34$, with coupling strength values up to $g/2\pi = 7.63~$GHz. These remarkable results exceeded all previous reports of ultrastrong couplings and entered the DSC regime $g/\omega>1$, where the interaction operator starts to dominate the system spectrum and its dynamics \cite{casanova2010}. Given the coupling strength achieved, the system ground state should exhibit a large degree of qubit-resonator entanglement.
These results from Yoshihara \emph{et al.} represented the largest normalized atom-photon interaction strength reported in any physical system to date. In follow-up experiments, Yoshihara \emph{et al.} gained further insight into the energy spectrum of the QRM in order to more accurately characterize the relative coupling strength $g/\omega$ of the system. By looking at higher-energy level transitions, a method was developed to qualitatively estimate the regime of coupling $g/\omega$ in which the system lies without the need for complex fits of the whole spectrum \cite{yoshihara2017b}. Using two-tone spectroscopy, they were able to map out the QRM spectrum up to six levels, finding excellent agreement with Eq.~(\ref{QRH}) \cite{Yoshihara2017c}. As discussed in Sec.~II, the following natural step would be to start exploring the dynamics of the QRM in the nonperturbative regime, the coherence time of the system~\cite{nataf2011}, its internal dynamics \cite{casanova2010} and possibly phase transitions with multiple qubits involved \cite{nataf2010, jaako2016}. We turn now to galvanic couplings using charge qubits embedded in the resonator circuit. In such a configuration, the qubit couples directly to the charge operator of the resonator. Recently \cite{Manucharyan2017}, a circuit consisting of a charge qubit embedded in an $LC$ resonator circuit [Fig.~\ref{fig:galvanic}(d)] was inspected, and the following normalized coupling strength was obtained \begin{equation} \frac{g_G^{\rm{ch}}}{\omega_r'} = \frac{C_r}{C_q+C_r}\frac{\langle0|\hat{Q}|1\rangle}{e}\sqrt{2\pi\frac{Z_r'}{Z_{\rm{vac}}}}\alpha^{1/2}. \end{equation} Here, the resonator frequency is renormalized due to the qubit capacitor $C_q$, $\omega_r'=1/\sqrt{L_rC_p}$, with $C_p^{-1}= C_r^{-1} + C_q^{-1}$. The resonator impedance is also renormalized as $Z_r'=\sqrt{L_r/C_p}$. $\langle1|\hat{Q}|0\rangle$ is the qubit electric dipole in units of the electron charge. For a Cooper pair box, $\langle1|\hat{Q}|0\rangle\sim1$.
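The renormalization of $\omega_r$ and $Z_r$ and the resulting reduced coupling can be sketched numerically. The inductance and capacitance values below are illustrative assumptions (the superinductor-scale $L_r$ is of the order achievable with junction arrays), not parameters of a reported device.

```python
import math

ALPHA = 1 / 137.035999  # fine structure constant
Z_VAC = 376.73          # vacuum impedance, ohm

def g_over_wr_embedded_charge(Lr, Cr, Cq, Q01_over_e=1.0):
    """Reduced coupling of a charge qubit embedded in an LC resonator,
    using the renormalized impedance Z' = sqrt(Lr/Cp), Cp^-1 = Cr^-1 + Cq^-1."""
    Cp = 1 / (1 / Cr + 1 / Cq)      # series combination of Cr and Cq
    Zp = math.sqrt(Lr / Cp)         # renormalized resonator impedance
    return (Cr / (Cq + Cr) * Q01_over_e
            * math.sqrt(2 * math.pi * Zp / Z_VAC) * math.sqrt(ALPHA))

# assumed superinductor-based, high-impedance resonator
g_high_Z = g_over_wr_embedded_charge(Lr=1e-6, Cr=1e-15, Cq=1e-15)

# assumed conventional low-impedance resonator, for comparison
g_low_Z = g_over_wr_embedded_charge(Lr=1e-9, Cr=1e-12, Cq=1e-15)
```

With a microhenry-scale inductance and femtofarad capacitances the reduced coupling exceeds unity, consistent with the claim that high-impedance resonators allow this configuration to reach the DSC regime.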
With a sufficiently large resonator capacitance, it is possible to reach the DSC regime $g_G^{\rm{ch}}/\omega_r'>1$ by employing very high-impedance resonators \cite{masluk2012}. A different circuit configuration was analyzed by Bourassa \emph{et al.} \cite{bourassa2012}. The circuit consisted of a charge qubit galvanically attached to a transmission line resonator. For charge qubits in the transmon regime $E_J/E_C\gg1$, the coupling to such a resonator was calculated to be \begin{equation}\label{eq:g_tr_g} \frac{g_G^{\rm{tr}}}{\omega_r} = \frac{1}{\sqrt{8\pi}}\left(\frac{E_C}{8(E_J+E_L)}\right)^{1/4}\sqrt{\frac{Z_{\rm{vac}}}{Z}}\alpha^{-1/2}. \end{equation} In this expression, $E_L = (\Phi_0/2\pi)^2/L_r$ corresponds to the inductive energy of the resonator, which dilutes the anharmonicity of the transmon qubit and reduces the effective maximum coupling. This inductive term was omitted in the first analysis of this circuit \cite{devoret2007}. Given that the inductive energy of resonators is usually much larger than the Josephson energy, achieving the DSC regime $g_G^{\rm{tr}}/\omega_r>1$ compromises the transmon condition $E_J\gg E_C$ that is required to derive Eq.~(\ref{eq:g_tr_g}). In addition, the presence of the qubit junction was shown to reduce the resonator current, leading to a maximum coupling of $g_G^{\rm{tr}}/\omega_r\sim0.2$ \cite{bourassa2012}, which is far from the DSC regime. It is worth referring at this point to the analysis carried out by Manucharyan \emph{et al.} \cite{Manucharyan2017}. The authors considered the full quantum circuit of both a fluxonium and a CPB qubit and compared them to the QRM. It turns out that both flux-like and charge-like qubits display a spectrum that closely resembles that of the QRM. In particular, the two lowest energy levels become nearly degenerate in the DSC regime $g/\omega_r>1$.
Although a large number of bare qubit states are involved in the qubit-resonator ground state, the entanglement spectrum is dominated by the lowest two eigenvalues, even though the qubits are multi-level systems. The analysis for flux-like qubits using many of the circuit levels shows features similar to the QRM, even though the calculated low energy-level splittings differ quantitatively. By contrast, the CPB ultrastrongly coupled to a resonator results in a much more faithful reproduction of the energy-level spectrum of the QRM. Manucharyan \emph{et al.} interpreted the vacuum level degeneracy as an environmental suppression of flux/charge tunneling due to dressing of the qubit with low-/high-impedance photons in the resonator. In flux-like qubits, the flux tunneling suppression was understood as the qubit circuit being shunted by the large resonator capacitor, which increases the effective qubit mass and suppresses quantum tunneling. In other words, the system localizes itself in one of the two minima of the qubit potential, suppressing in this way the qubit transition frequency. The CPB ultrastrongly coupled to a resonator has a less obvious circuit model interpretation, since no simple circuit elements represent the system at high coupling values. The charge tunneling suppression was related to the manifestation of the dynamical Coulomb blockade effect of transport in tunnel junctions connected to resistive leads. In conclusion, Manucharyan \emph{et al.} found the description of the QRM by superconducting qubits to be quite faithful, despite the presence of the multi-level spectrum. The CPB is the most suitable qubit, despite the fact that charge noise has so far hindered the exploration of ultrastrong couplings, even though the USC features may be robust against dissipation \cite{DeLiberato2017}.
\subsection{Semiconductor quantum wells} \label{sec:3B} Semiconductor quantum wells (QWs) provide one of the cleanest and most tunable solid-state environments with quantum-engineered electronic and optical properties. In the context of cavity QED, microcavity-exciton-polaritons in QWs have served as a model system for highlighting and understanding the striking differences between light-atom coupling and light-condensed-matter coupling~\cite{WeisbuchetAl92PRL,KhitrovaetAl99RMP,DengetAl10RMP,GibbsetAl11NP}. However, the large values of resonance frequency (typically in the near-infrared or visible) and relatively small dipole moments for interband transitions make it impractical to achieve USC using exciton-polaritons (see, however, the cases of microcavity exciton polaritons in organic semiconductors, carbon nanotubes, and two-dimensional materials described in Sec.~\ref{sec:3c2}). \emph{Intraband transitions}, such as intersubband transitions (ISBTs)~\cite{Helm00Chapter,Paielle06Book} or inter-Landau-level transitions (ILLTs) (colloquially known as cyclotron resonance, CR)~\cite{Kono01MMR,HiltonetAl12Book}, are much better candidates for realizing USC regimes in QWs. Shown schematically in Fig.~\ref{intraband}, they have small resonance frequencies, typically in the midinfrared (MIR) and terahertz (THz) range, and enormous dipole moments (tens of $e$-\AA). \begin{figure} \begin{center} \includegraphics[scale=0.62]{intraband} \caption{\small (Color online) Semiconductor quantum well transitions. Two types of \emph{intraband} transitions in semiconductor quantum wells are shown that have been demonstrated to exhibit USC: (a)~intersubband polaritons and (b)~inter-Landau-level (or cyclotron) polaritons. In contrast to interband transitions, which typically occur in the near-infrared/visible range, these intraband transitions occur in the midinfrared/THz range, with enormous dipole moments.
In (a), the lowest two subbands of opposite parity, with an energy separation of $\hbar\omega_{12}$, within the conduction or valence band are resonantly coupled with a light field ($E_\mathrm{light}$) polarized in the growth direction (TM-polarization), to form \emph{intersubband polaritons}. In (b), a magnetic field ($B_\mathrm{DC}$) applied in the growth direction quantizes each subband into Landau levels with an energy separation of $\hbar\omega_\mathrm{c}$, where $\omega_\mathrm{c} = eB_\mathrm{DC}/m^*$ is the cyclotron frequency, $e$ is the electronic charge, and $m^*$ is the effective mass; the highest occupied Landau level and the lowest unoccupied Landau level are resonantly coupled with a light field ($E_\mathrm{light}$) polarized in the quantum well plane (TE-polarization) to form \emph{inter-Landau-level polaritons}. } \label{intraband} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1.39]{LiuCiuti} \caption{\small Theoretically predicted intersubband polaritons. (a)~Absorption spectra showing intersubband polaritons for different numbers of QWs (1 to 50); (b)~QW-number dependence of the vacuum Rabi splitting; (c)~absorption spectra for intersubband polaritons for different electron densities: 0.5 $\times$ 10$^{12}$\,cm$^{-2}$ (curve 1), 1.0 $\times$ 10$^{12}$\,cm$^{-2}$ (curve 2), 1.5 $\times$ 10$^{12}$\,cm$^{-2}$ (curve 3), and 2.0 $\times$ 10$^{12}$\,cm$^{-2}$ (curve 4); (d)~electron density dependence of the vacuum Rabi splitting; (e)~calculated upper polariton (UP) and lower polariton (LP) frequencies as a function of coupling strength, where $\omega_{12}$ is the transition frequency. (a)--(d): Adapted from reference \cite{Liu97PRB}; (e) adapted from reference \cite{CiutietAl05PRB}. } \label{LiuCiuti} \end{center} \end{figure} Theoretically, Liu was the first to propose and analyze \emph{intersubband (ISB) polaritons} in QWs~\cite{Liu96JAP,Liu97PRB}.
He demonstrated that the vacuum Rabi splitting increases with the electron density as well as the number of QWs. Figure~\ref{LiuCiuti}(a) shows calculated absorption spectra, displaying ISB polaritons for different numbers of QWs, while in Fig.~\ref{LiuCiuti}(b) the QW-number dependence of the vacuum Rabi splitting is calculated; Figure~\ref{LiuCiuti}(c) shows absorption spectra for different electron densities, while in Fig.~\ref{LiuCiuti}(d) the electron density dependence of the vacuum Rabi splitting is displayed \cite{Liu96JAP,Liu97PRB}. Unique electrically driven MIR emission devices based on quantum cascade structures incorporating ISB polaritons have also been proposed \cite{ColombellietAl05SST}. In particular, it was predicted that in InP-based multiple-QW structures a polariton splitting $2\hbar g$ of 40~meV can be obtained for an ISBT at $\hbar\omega_{12} \approx$ 130~meV, i.e., $g/\omega_{12} \approx 0.15$. Ciuti \textit{et al.}\ used a Bogoliubov transformation to diagonalize the full Hamiltonian and obtained the energies of the upper polariton (UP) and lower polariton (LP) branches \cite{CiutietAl05PRB}. Figure~\ref{LiuCiuti}(e) shows the calculated UP and LP energies as a function of normalized coupling strength, where $\omega_{12}$ is the ISBT frequency, for zero detuning, demonstrating that USC is possible. Similarly, for \emph{inter-Landau-level (ILL) polaritons}, Hagenm\"uller \textit{et al.}\ derived and diagonalized an effective Hamiltonian describing the resonant excitation of a two-dimensional electron gas (2DEG) by cavity photons in the integer quantum Hall regime \cite{HagenmulleretAl10PRB}. The dimensionless vacuum Rabi frequency $g/\omega_\mathrm{c}$ was shown to scale as $\sqrt{\alpha N_\mathrm{QW}\nu}$.
Here, $\omega_\mathrm{c} = eB_\mathrm{DC}/m^*$ is the cyclotron frequency, $B_\mathrm{DC}$ is the DC magnetic field applied perpendicular to the 2DEG, $e$ is the electronic charge, $m^*$ is the effective mass, $\alpha$ is the fine structure constant, $N_\mathrm{QW}$ is the number of QWs, and $\nu$ is the Landau-level filling factor in each well. It was shown that $g/\omega_\mathrm{c} > 1$ could be achieved when $\nu \gg 1$ with realistic parameters of a high-mobility 2DEG. Furthermore, as mentioned in Sec.~\ref{sec:2}, Ciuti {\it et al}.\ provided much physical insight into the ground-state properties of ISB polaritons \cite{CiutietAl05PRB}. They found that the ground state consists of a \emph{two-mode squeezed vacuum}. The ordinary vacuum $|0\rangle$ of the zero- or weak-coupling regime satisfies $\hat{\sigma}_- |0\rangle = \hat{a}|0\rangle = 0$; here, $\hat{\sigma}_+$ ($\hat{\sigma}_-$) is the creation (annihilation) operator for ISBTs, while $\hat{a}^{\dagger}$ and $\hat{a}$ create and destroy photons, respectively. However, in the USC regime, the ground state $|G\rangle$ is a squeezed state, \emph{which contains a finite number of cavity photons and ISBTs}. It is annihilated instead by polaritonic operators: $\hat{p}_{j,k} |G\rangle = 0$, where $\hat{p}_{j,k}$ are linear combinations of $\hat{\sigma}_+$, $\hat{a}^{\dagger}$, $\hat{\sigma}_-$, and $\hat{a}$. Ciuti {\it et al}.\ specifically considered a system in which a cavity photon mode was strongly coupled to an ISBT. They showed that, once the system is brought into the USC regime, correlated photon pairs can be generated by tuning the quantum properties of the ground state~\cite{CiutietAl05PRB}; the tuning could be achieved by changing the Rabi frequency via an electrostatic gate. Similarly, De Liberato {\it et al}.\ proposed to \emph{modulate} the vacuum Rabi frequency in time and calculated the spectra expected for the emitted radiation~\cite{LiberatoetAl07PRL}.
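The cyclotron frequency and the $\sqrt{\alpha N_\mathrm{QW}\nu}$ scaling quoted above can be checked with a short numerical sketch. The order-unity geometric prefactor of the scaling law is omitted, and the filling factor and QW number below are illustrative choices, not parameters from \cite{HagenmulleretAl10PRB}.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge (C)
M_E = 9.1093837015e-31       # free-electron mass (kg)
ALPHA = 1 / 137.036          # fine-structure constant

def cyclotron_freq_thz(B, m_star_ratio):
    """Cyclotron frequency f_c = e*B / (2*pi*m*) in THz,
    with m* given as a fraction of the free-electron mass."""
    return E_CHARGE * B / (2.0 * math.pi * m_star_ratio * M_E) / 1e12

def g_over_omega_c_scaling(N_QW, nu):
    """Dimensionless coupling scaling ~ sqrt(alpha * N_QW * nu);
    the order-unity geometric prefactor is omitted."""
    return math.sqrt(ALPHA * N_QW * nu)

# A GaAs 2DEG (m* ~ 0.067 m0) at B = 1 T lies in the THz range
f_c = cyclotron_freq_thz(1.0, 0.067)        # ~0.42 THz
# A large filling factor pushes the scaling estimate above unity
estimate = g_over_omega_c_scaling(4, 100)   # ~1.7
```

This makes explicit why ILLTs are natural USC candidates: the small effective mass places $\omega_\mathrm{c}$ in the THz range at modest fields, and $\nu \gg 1$ drives the normalized coupling toward and beyond unity.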
More recently, Stassi {\it et al}.\ described a three-level system ($|0\rangle$, $|1\rangle$, $|2\rangle$) in which a spontaneous $|1\rangle \rightarrow |0\rangle$ transition was accompanied by the creation of real cavity photons out of virtual photons resonant with the $|1\rangle \rightarrow |2\rangle$ transition~\cite{StassietAl13PRL}. Finally, Hagenm\"uller has recently proposed an \emph{all-optical} scheme for observing the dynamical Casimir effect in a THz photonic band gap using ILL polaritons \cite{Hagenmuller16PRB}. These theoretical studies have stimulated much interest in experimentally probing ultrastrong light-matter coupling phenomena in semiconductor QWs. \subsubsection{Intersubband transitions} \label{sec:3B1} \begin{figure}[b] \begin{center} \includegraphics[scale=0.91]{Dini} \caption{\small First experimental observation of intersubband polaritons. Reflectance spectra are shown for a GaAs quantum well sample at 10~K for different angles of incidence for TM-polarized light. The spectra are offset from each other for clarity. Top-left inset: the dip position versus angle shows a level anticrossing. Top-right inset: a spectrum recorded for TE-polarized light, showing only a dip due to the cavity mode~\cite{DinietAl03PRL}. } \label{Dini} \end{center} \end{figure} Experimentally, the first observation of polariton splitting of an ISBT was reported by Dini \textit{et al}.\ in 2003~\cite{DinietAl03PRL}. The dispersion of the ISB polaritons in GaAs QWs was measured through angle-dependent reflectance measurements using a prism-like geometry. Figure~\ref{Dini} shows measured reflectance spectra at 10\,K for TM-polarized waves for different incidence angles. Two dips are clearly displayed, exhibiting anticrossing behavior with a splitting ($2\hbar g$) of 14~meV as a function of incident angle. With an ISBT resonance energy of $\hbar\omega_{12}=142~{\rm meV}$, $g/\omega_{12} \sim 0.05$ was achieved even in this early work. 
As a comparison, in the top-right inset of Fig.~\ref{Dini}, a TE reflectance spectrum is shown; only a single dip corresponding to the cavity mode is observed, as the ISBT is dipole-forbidden for this polarization. In the top-left inset, the energies of the UP and LP dips are plotted as a function of incidence angle, highlighting the anticrossing behavior. \begin{table*} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Reference & Transition & Cavity & $d_\mathrm{QW}$ & $N_\mathrm{QW}$ & $\hbar\gamma$ & $\hbar\kappa$ & $\hbar g$ & $\hbar\omega_0$ & $g/\omega_0$ & $U$ & Notes\\ & type & type & (nm) & & (meV) & (meV) & (meV) & (meV) & (\%) & &\\ \hline \hline \cite{DinietAl03PRL} & ISBT & PWM & 7.2 & 18 & 5 & 15 & 7 & 142 & 5 & 0.62 &\\ \hline \cite{DupontetAl03PRB} & ISBT & PWM & 6.0 & 140 & 2.2 & 11 & 6 & 115 & 5 & 0.54 & bound-to-quasibound\\ \hline \cite{AnapparaetAl05APL} & ISBT & PWM & 7.5 & 10 & - & - & 7 & 135 & 5 & - & electrical control\\ \hline \cite{AnapparaetAl06APL} & ISBT & PWM & 7.2+14 & 9 & - & - & 10.5 & 150 & 7 & - & coupled double QWs\\ \hline \cite{AnapparaetAl07SSC} & ISBT & PWM & 13.7 & 10 & - & - & 16.5 & 123 & 14 & - & InAs/AlSb QWs\\ \hline \cite{DupontetAl07PRB} & ISBT & PWM & 7.5 & 160 & 6.9 & 12 & 21 & 123 & 17 & 1.9 & \\ \hline \cite{SapienzaetAl07APL} & ISBT & PWM & QC & 30 & $\sim$10 & - & 8 & 163 & 5 & - & QC photovoltaic\\ \hline \cite{SapienzaetAl08PRL} & ISBT & PWM & QC & 30 & 8 & 15 & 11 & 150 & 7 & 0.54 & QC LED\\ \hline \cite{TodorovetAl09PRL} & ISBT & MDM & 32 & 15 & 2 & 3 & 1.6 & 14.4 & 11 & 0.44 & first THz ISB polariton\\ \hline \cite{AnapparaetAl09PRB} & ISBT & PWM & 6.5 & 70 & 12 & $\sim$15 & 16.5 & 152 & 11 & $\sim$0.82 &\\ \hline \cite{GunteretAl09Nature} & ISBT & PWM & 9 & 50 & - & - & 10 & 113 & 9 & - & ultrafast buildup\\ \hline \cite{GeiseretAl10APL} & ISBT & ICR & 95 & 8 & 3.3 & 0.8 & 1.9 & 13 & 14 & 0.88 & parabolic QWs\\ \hline \cite{TodorovetAl10PRL} & ISBT & MDM & 32 & 25 & - & - & 2.8 & 12 & 24 & - 
& 0D polaritons\\ \hline \cite{ZanottoetAl10APL} & ISBT & SPPC & 8.3 & 50 & 5 & 5 & 5.5 & 119 & 5 & 0.47 &\\ \hline \cite{JouyetAl11APL} & ISBT & MDM & 9 & 10 & - & - & 11 & 107 & 10 & - &\\ \hline \cite{GeiseretAl12PRL} & ISBT & ICR & 72 & 8 & - & - & 4.7 & 18 & 27 & - & parabolic QWs\\ \hline \cite{PoreretAl12PRB} & ISBT & SPPC & 8.3 & 50 & - & - & 6.8 & 113 & 6 & - & ultrafast buildup\\ \hline \cite{ZanottoetAl12PRB} & ISBT & SPPC & 8.3 & 50 & 5.36 & - & 5.5 & 125 & 4 & - & ultrafast bleaching\\ \hline \cite{DelteiletAl12PRL} & ISBT & MDM & 18.5 & 5 & - & - & 57 & 166 & 33 & - & multisubband plasmon\\ \hline \cite{DietzeetAl13APL} & ISBT & MMC & 32 & 25 & - & 2.5 & 1.4 & 13 & 11 & - &\\ \hline \cite{AskenazietAl14NJP} & ISBT & MMC & 148 & 1 & 7.5 & - & 43 & 118 & 37 & - & the Berreman mode\\ \hline \cite{AskenazietAL17ACS} & ISBT & MDM & 148 & 18 & - & - & 45 & 100 & 45 & - & thermal emission \\ \hline \cite{LaurentetAL17APL} & ISBT & MDM & 5 & 18 & 77 & 17 & 53 & 403 & 13.1 & 1.06 & \\ \hline \cite{MuravevetAl11PRB} & ILLT & CMR & 30 & 1 & 0.02 & 0.02 & 0.025 & 0.058 & 46 & 1.64 & \\ \hline \cite{ScalarietAl12Science} & ILLT & MMC & - & 4 & $>$0.5 & $>$0.5 & 1.2 & 2.1 & 58 & $<$3.66 &\\ \hline \cite{MuravevetAl13PRB} & ILLT & MPR & 20 & 1 & - & 0.002 & 0.01 & 0.05 & 25 & - & \\ \hline \cite{MaissenetAl14PRB} & ILLT & MMC & 20 & 4 & $\sim$0.8 & $\sim$0.2 & 1.11 & 1.28 & 87 & $\sim$5.16 & InAs/AlSb QWs\\ \hline \cite{ZhangetAl16NP} & ILLT & PCC & 30 & 1&$<$0.04 & $<$0.04 & 0.18 & 1.5 & 12 & $>$3.2 & $C = 4g^2/(\kappa\gamma) > 300$\\ \hline \cite{MaissenetAl17NJP} & ILLT & MMC & - & 1 & $>$0.5 & $>$0.5 & 0.46 & 1.98 & 23 & $<$0.88 & \\ \hline \cite{BayeretAl17NL} & ILLT & MMC & 25 & 6 & - & - & 2.85 & 1.99 & 143 & - & $g/\omega_0>1$ \\ \hline \cite{LietAl18NP} & ILLT & PCC & 30 & 10 & 0.024 & 0.019 & 0.62 & 1.7 & 36 & 35.8 & $C = 4g^2/(\kappa\gamma) = 3513$ \\ \hline \end{tabular} \caption{Experimental observations of ultrastrong light-matter coupling in 
semiconductor quantum wells. $d_\mathrm{QW}$ = QW width. $N_\mathrm{QW}$ = number of QWs or periods. $\hbar\gamma$ = matter decay rate. $\hbar\kappa$ = photon decay rate; cavity $Q = \omega_0/\kappa$. $\hbar g$ = coupling strength. $\omega_0 = \omega_{12}$ for ISBT, and $\omega_0 = \omega_\mathrm{c}$ for ILLT. ISBT = intersubband transition. ILLT = inter-Landau-level transition (i.e., cyclotron resonance). PWM = planar waveguide microcavity. MDM = metal-dielectric-metal microcavity. ICR = inductor-capacitor (LC) resonator. SPPC = surface plasmon photonic crystal. CMR = coplanar microresonator. MMC = metamaterial cavity. MPR = metallic patch resonator. PCC = photonic-crystal cavity. FPC = Fabry-P\'erot cavity. QC = quantum cascade. $U\equiv\sqrt{(4g^2/\kappa\gamma)(g/\omega_0)}$ = geometric mean between cooperativity and normalized coupling.} \label{Summary} \end{table*} This initial ISB polariton work \cite{DinietAl03PRL} was immediately followed by similar observations by Dupont \textit{et al}.~\cite{DupontetAl03PRB}, who measured a bound-to-quasibound transition in a QW-IR-photodetector structure through both reflection and photocurrent spectroscopy. Rabi splittings were demonstrated with $g/\omega_{12}$ values similar to those reported by Dini \textit{et al}. Furthermore, by increasing the doping density, Dupont \textit{et al}.\ \cite{DupontetAl07PRB} were able to observe a square-root dependence of the vacuum Rabi splitting on the total electron density ($N_\mathrm{QW} n_\mathrm{e}$). Here, $N_\mathrm{QW}$ corresponds to the number of QWs and $n_\mathrm{e}$ is the density per well, i.e., $2g \propto \sqrt{N_\mathrm{QW}n_\mathrm{e}}$, indicating that electrons in QWs interact cooperatively as a single giant atom with cavity photons \cite{Dicke54PR,KaluznyetAl83PRL,Agarwal84PRL,AmsussetAl11PRL,tabuchi2014,zhang2014}. A coupling of $g/\omega_{12} = 0.17$ was achieved at the highest electron density \cite{DupontetAl07PRB}.
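The cooperative $\sqrt{N_\mathrm{QW} n_\mathrm{e}}$ dependence can be written as a one-line sketch; the prefactor $g_0$ absorbs all sample-dependent factors and is left symbolic.

```python
import math

def vacuum_rabi_splitting(N_QW, n_e, g0=1.0):
    """Collective vacuum Rabi splitting: 2g = g0 * sqrt(N_QW * n_e).
    g0 is a sample-dependent prefactor (left as 1 here)."""
    return g0 * math.sqrt(N_QW * n_e)
```

For example, quadrupling the total electron density $N_\mathrm{QW} n_\mathrm{e}$ (by adding wells or doping) doubles the splitting, which is exactly the giant-atom cooperative enhancement of Dicke physics \cite{Dicke54PR}.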
During the past decade, progressively higher values of $g/\omega_{12}$ have been reported, as seen in Table~\ref{Summary}, due to the diverse approaches used by different experimental groups. In a simple approximation, for a parabolic band of mass $m^*$, the $g/\omega_{12}$ ratio can be written as \begin{align} \frac{g}{\omega_{12}} \propto \frac{1}{\sqrt{m^* \omega_{12}}}. \label{g-over-omega} \end{align} Therefore, one can immediately see that a lighter-mass material can generally provide larger $g/\omega_{12}$ ratios for a given $\omega_{12}$. Anappara \textit{et al.}\ used QWs composed of InAs (which has a bulk band-edge electron mass of 0.023$m_0$, as compared to 0.069$m_0$ for electrons in GaAs) to achieve $g/\omega_{12} = 0.14$ \cite{AnapparaetAl07SSC}. Another guideline for increasing the $g/\omega_{12}$ ratio, hinted at by Eq.~(\ref{g-over-omega}), is to increase the QW width, which naturally decreases $\omega_{12}$. Todorov \textit{et al.}\ used 32-nm-wide GaAs QWs embedded inside a subwavelength metal-dielectric-metal microcavity \cite{TodorovetAl10OE} to demonstrate USC ($g/\omega_{12} = 0.11$) in the THz regime \cite{TodorovetAl09PRL}. By further reducing the ratio of the cavity volume to the cube of the resonant wavelength, $V_\mathrm{cav}/\lambda_\mathrm{res}^3$, to 10$^{-4}$, Todorov \textit{et al.}\ achieved $g/\omega_{12} = 0.24$ \cite{TodorovetAl10PRL}. As one increases the electron density and QW width, more subbands are occupied, which, within a single-particle picture, leads to multiple ISBT peaks due to band nonparabolicity. However, Delteil \textit{et al.}\ showed that due to many-body interactions a single peak appears \cite{DelteiletAl12PRL}. Namely, cooperative Coulombic coupling of dipolar oscillators with different frequencies can induce mutual phase locking, lumping together all individual ISBTs into a single collective bright excitation (the multisubband plasmon resonance).
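The mass dependence in Eq.~(\ref{g-over-omega}) can be made concrete with a minimal sketch using the band-edge masses quoted above; the overall prefactor is omitted, since only the ratio between materials matters here.

```python
import math

def g_ratio_factor(m_star_ratio, omega_12):
    """Proportionality factor of Eq. (g-over-omega):
    g/omega_12 ~ 1/sqrt(m* * omega_12); prefactor omitted."""
    return 1.0 / math.sqrt(m_star_ratio * omega_12)

# InAs (0.023 m0) vs GaAs (0.069 m0) at the same transition frequency
gain = g_ratio_factor(0.023, 1.0) / g_ratio_factor(0.069, 1.0)
# gain = sqrt(0.069 / 0.023) ~ 1.73
```

At a fixed $\omega_{12}$, switching from GaAs to the lighter-mass InAs thus buys roughly a factor of 1.7 in $g/\omega_{12}$, consistent with the improvement reported in \cite{AnapparaetAl07SSC}.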
Furthermore, Askenazi \textit{et al.}\ presented a model to describe the crossover from the ISB plasmon to the multisubband plasmon and then eventually to the so-called Berreman mode in the classical limit as the QW width was increased \cite{AskenazietAl14NJP}. In the Berreman mode limit, a record high $g/\omega_{0}$ value of 0.37 was experimentally achieved. For a recent review, see \cite{VasanellietAl16CRP}. \begin{figure} \begin{center} \includegraphics[scale=1.1]{Control} \caption{\small (Color online) Switchable USC. (a)~Reflectance spectra for GaAs asymmetrically coupled quantum wells at various bias voltages, showing field-tuned vacuum Rabi splitting. The splitting increases with increasing voltage. Reproduced from reference \cite{AnapparaetAl06APL}; (b)~setup used for ultrafast control of ultrastrong light-matter coupling. A quantum well structure embedded in a planar waveguide structure is activated by a near-infrared control pulse. Terahertz transients probe the ultrafast build-up of light-matter coupling; (c)~ultrafast switch-on of ISB polaritons. Spectra of the reflected terahertz field are given for various delay times; (d)~terahertz reflectance spectra measured at 293~K for various fluences of the control pulse~\cite{GunteretAl09Nature}. } \label{Control} \end{center} \end{figure} One of the attractive features of ISB polaritons is their controllability via external fields, which can lead to practical devices. Since the vacuum Rabi splitting 2$g$ in a collective system is proportional to $\sqrt{n_\mathrm{e}}$, controlling the electron density $n_\mathrm{e}$ in the QW controls 2$g$. An electric field applied perpendicular to the QW changes the ground state $n_\mathrm{e}$ through gating \cite{AnapparaetAl05APL}, or more quickly through resonant charge transfer via tunneling \cite{AnapparaetAl06APL}. Figure~\ref{Control}(a) shows reflectance spectra for GaAs asymmetrically coupled QWs at a fixed incidence angle at various bias voltages. 
At zero bias voltage, all electrons are in the wider well, and the spectrum shows a single peak due to the ISBT in the wider well. As the bias voltage is increased, electrons are increasingly transferred into the ground subband of the narrower quantum well, resulting in the appearance of ISB polaritons. As the bias is further increased, $n_\mathrm{e}$ increases in the narrow well and thus the vacuum Rabi splitting increases \cite{AnapparaetAl06APL}. Ultrafast optical excitation can also be used to control ultrastrong light-matter coupling in ISB polaritons -- an ultrashort laser pulse can either enhance it \cite{GunteretAl09Nature,PoreretAl12PRB} or destroy it \cite{ZanottoetAl12PRB}. For example, ultrafast buildup of ultrastrong light-matter coupling was demonstrated using interband-pump/ISBT-probe measurements in undoped QWs \cite{GunteretAl09Nature}, as shown in Figures~\ref{Control}(b)-(d). A multiple-QW sample was embedded into a planar waveguide structure based on total internal reflection. The band diagram shows how the $|1\rangle \rightarrow |2\rangle$ ISBT is activated by a near-infrared control pulse, populating level $|1\rangle$. Few-cycle TM-polarized multi-THz transients guided through the prism-shaped substrate are reflected from the waveguide to probe the ultrafast buildup of light-matter coupling, as shown in Fig.~\ref{Control}(c). The blue arrow shows the bare cavity resonance, whereas the red arrows show the ISB LP and UP. Figure~\ref{Control}(d) plots THz reflectance spectra measured for various fluences of the control pulse at a fixed time delay. As the fluence increases, $n_\mathrm{e}$ increases, which in turn increases the vacuum Rabi splitting. 
\subsubsection{Inter-Landau-level transitions (cyclotron resonance)} \label{sec:3B2} Strong light-matter coupling has also been actively studied using ILLTs (or cyclotron resonance, CR) in 2DEGs formed in GaAs QWs \cite{MuravevetAl11PRB,ScalarietAl12Science,ScalarietAl13JAP,MuravevetAl13PRB,MaissenetAl14PRB,ZhangetAl16NP,MaissenetAl17NJP, BayeretAl17NL, LietAl18NP}, InAs QWs \cite{MaissenetAl14PRB}, and on the surface of liquid helium~\cite{AbdurakhimovetAl16PRL}. Muravev \textit{et al.}\ studied the USC of magnetoplasmon (also known as ``cyclotron-plasmon'') excitations with microwave photon modes in a coplanar microresonator \cite{MuravevetAl11PRB} and a metallic patch resonator \cite{MuravevetAl13PRB}. The great advantage of straightforward continuous magnetic-field tuning of these polaritons over ISB polaritons was clearly demonstrated. High values of $g/\omega_{0}$ close to 0.5 were achieved \cite{MuravevetAl11PRB}, owing to the large dipole moment of ILLTs. Scalari \textit{et al.}\ reported experiments showing USC of 2DEG CR with photons in a THz metamaterial cavity consisting of an array of electronic split-ring resonators, shown in Figs.~\ref{ETH-CR}(a)-(b) \cite{ScalarietAl12Science}. The authors obtained a $g/\omega_{0}$ value of 0.58 and showed the potential to scale the frequency down to the microwave spectral range, where control of the magnetotransport properties of the 2DEG through light-matter coupling would be possible. Furthermore, using similar split-ring resonators in the complementary mode, Maissen \textit{et al.}\ obtained $g/\omega_{0}=0.87$, as shown in Figs.~\ref{ETH-CR}(c)-(d). In addition, a blue-shift of both the LP and the UP was observed due to the diamagnetic term of the interaction Hamiltonian. \begin{figure} \begin{center} \includegraphics[scale=0.83]{ETH-CR} \caption{\small (Color online) USC of normal-incidence THz radiation with a GaAs 2DEG in a Landau-quantizing magnetic field. (a)~Experimental setup used to observe USC.
An array of metamaterial THz cavities is deposited on top of the 2DEG; (b) scanning electron microscopy picture displays a single cavity unit. Adapted from reference \cite{ScalarietAl12Science}; (c) and (d): Transmittance spectra at different magnetic fields showing anti-crossing behavior with a $g/\omega_0$ value of 0.69 in (c) and 0.87 in (d)~\cite{MaissenetAl14PRB}. } \label{ETH-CR} \end{center} \end{figure} In these CR studies of ultrastrong light-matter coupling using metamaterial split-ring resonators, however, the value of cooperativity $C = 4g^2/(\gamma\kappa)$ remained small due to ultrafast decoherence (large $\gamma$) and/or lossy cavities (large $\kappa$). Recently, Zhang \textit{et al.}\ have developed a THz 1D photonic-crystal cavity (PCC), utilizing Si thin slabs and air as the high and low index materials, respectively [Fig.~\ref{CR-QED}(a)]. The air-Si combination provided a large index contrast and thus significantly reduced the number of layers needed on each side of the cavity~\cite{YeeSherwin09APL,ChenetAl14APB}. A thin 2DEG film was transferred onto one surface of the central layer, where the electric field maximum was located. Figure~\ref{CR-QED}(b) shows an experimental transmission spectrum measured for one of the empty cavities, demonstrating an ultranarrow photonic mode ($\kappa/2\pi \sim$ 2.6\,GHz). The highest cavity quality-factor, $Q$, achieved in this scheme was $\sim$10$^3$. \begin{figure}[hb] \begin{center} \includegraphics[scale=0.62]{CR-QED} \caption{\small (Color online) Observation of USC of cyclotron resonance (CR) of a 2DEG and high-$Q$ THz cavity photons. (a)~1D terahertz photonic-crystal cavity structure. Two silicon layers are placed on each side of the central defect layer. 
The blue part is the transferred 2DEG thin film; (b)~zoom-in spectrum for the first cavity mode, together with a Lorentzian fit with a full-width-at-half-maximum of 2.6\,GHz; (c)~anticrossing of CR and the first cavity mode, exhibiting the lower-polariton (LP) and upper-polariton (UP) branches. The central peak due to the cavity mode results from the CR-inactive circularly polarized component of the linearly polarized terahertz beam. Transmission spectra at different magnetic fields are vertically offset for clarity. The magnetic field increases from 0.4\,T (bottom) to 1.4\,T (top)~\cite{ZhangetAl16NP}. } \label{CR-QED} \end{center} \end{figure} Using these high-$Q$ PCCs, Zhang \textit{et al.}\ simultaneously achieved small $\gamma$ and small $\kappa$ in ultrahigh-mobility 2DEGs in GaAs QWs in a magnetic field~\cite{ZhangetAl16NP}; see Fig.~\ref{CR-QED}(c). High cooperativity values $C > 300$ were achieved, with vacuum Rabi splittings leading to $g/\omega_0 \sim 0.1$. With these favorable parameters it was possible to observe Rabi oscillations in the time domain. Zhang \emph{et al.} showed that the influence of such USC extended even to the region with detuning $\delta > \omega_0$. This effect could only occur when $g^2/(\omega_0\kappa) > 1$, which in the experiment was satisfied through a unique combination of strong light-matter coupling, a small resonance frequency, and a high-$Q$ cavity. Furthermore, the expected $\sqrt{n_\mathrm{e}}$-dependence of 2$g$ on the electron density ($n_\mathrm{e}$) was observed, signifying the collective nature of light-matter coupling \cite{Dicke54PR}. A value of $g/\omega_0$ $=$ 0.12 was obtained with just a single QW with a moderate $n_\mathrm{e}$ (= 3 $\times$ 10$^{11}$\,cm$^{-2}$). Finally, Zhang \emph{et al.} observed a significant suppression of a previously identified superradiant decay of CR in high-mobility 2DEGs \cite{ZhangetAl14PRL} due to the presence of the high-$Q$ THz cavity. 
As a result, ultranarrow polariton lines were observed, yielding an intrinsic CR linewidth as small as 5.6\,GHz (or a CR decay time of 57\,ps) at 2\,K. \begin{figure} \centering \includegraphics[width = 1.0\columnwidth]{VBS} \caption{\label{fig:BSVR} (Color online) Distinction between the vacuum Bloch-Siegert shift due to the counter-rotating terms (CRTs) and the shift due to the $A^2$ terms in the USC regime. Simulated spectra \textbf{a},~with both the CRTs and the $A^2$ terms (full Hamiltonian), \textbf{b},~with the CRTs but without the $A^2$ terms, \textbf{c},~without the CRTs but with the $A^2$ terms, and \textbf{d},~without the CRTs and the $A^2$ terms. Each graph includes experimental peak positions as open circles. Reproduced (adapted) with permission from \cite{LietAl18NP}.} \label{VBS} \end{figure} More recently, through optimization of both the electronic and photonic components of a 2DEG-metamaterial system, Bayer and coworkers~\cite{BayeretAl17NL} have significantly boosted the light-matter coupling strength, entering the DSC regime. By tailoring the shape of the vacuum mode in the cavity, they achieved $g/\omega_0 = 1.43$. This achievement opens up possibilities of studying vacuum radiation with cutting-edge THz quantum detection techniques~\cite{RieketAl15Science,Benea-ChelmusetAl16PRA,RieketAl17Nature}. Keller \textit{et al}.\ probed USC at 300~GHz of fewer than 100 electrons located in the last occupied Landau level of a high-mobility two-dimensional electron gas \cite{KelleretAl17NL}. By using hybrid dipole antenna-split-ring resonator-based cavities with extremely small effective mode volumes and surfaces, they achieved a normalized coupling ratio of $g/\omega_c = 0.36$. Effects of the extremely reduced cavity dimensions were observed: the light-matter coupled system was better described by an effective mass heavier than the uncoupled one.
In later work, Keller \textit{et al}.\ studied the USC of the CR of a 2D {\em hole} gas in a strained germanium QW with THz metasurface cavity photons~\cite{KelleretAl17arXiv}. They observed a mode softening of the polariton branches, deviating from the Hopfield model successfully used in studies of GaAs QWs~\cite{HagenmulleretAl10PRB,ScalarietAl12Science}. At the largest coupling strength, the lower polariton branch was observed to move toward zero frequency, raising the exciting prospect of an equilibrium Dicke superradiant phase transition~\cite{HeppLieb73AP,WangHioe73PRA}. The authors modeled this behavior by effectively reducing the importance of the $A^2$ term in the Hamiltonian through dipole-dipole interactions, which they attributed to the large carrier density combined with strong strain-induced nonparabolicity that invalidates Kohn's theorem~\cite{Kohn61PR}. Most recently, Li \textit{et al}.\ reported the vacuum Bloch-Siegert shift, which is induced by the coupling of matter with the counter-rotating component of the vacuum fluctuation field in a cavity~\cite{LietAl18NP}, as explained in Sec.~\ref{sec:2}; see, e.g., Eq.~(\ref{eq:BS}). Using an ultrahigh-mobility 2DEG in a high-$Q$ THz cavity in a magnetic field, they created Landau polaritons with an ultrahigh cooperativity ($C = 3513$), which exhibited a vacuum Bloch-Siegert shift of up to 40\,GHz. They found that the probe polarization plays a critical role in exploring USC physics in this ultrahigh-cooperativity system. The resonant co-rotating coupling of electrons with CR-active (CRA) circularly polarized radiation leads to the extensively studied vacuum Rabi splitting (VRS). Conversely, the counter-rotating coupling of electrons with the CR-inactive (CRI) mode leads to the time-reversed partner of the VRS, i.e., the vacuum Bloch-Siegert shift.
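As a consistency check on the quoted cooperativity, the rounded rates listed for this experiment in Table~\ref{Summary} ($\hbar g = 0.62$~meV, $\hbar\kappa = 0.019$~meV, $\hbar\gamma = 0.024$~meV) reproduce the stated $C = 3513$ to within a few percent, the residual difference coming from rounding:

```python
def cooperativity(g, kappa, gamma):
    """Cooperativity C = 4*g**2 / (kappa*gamma); any common unit
    works, since the units cancel in the ratio."""
    return 4.0 * g**2 / (kappa * gamma)

# Landau-polariton rates (meV), rounded as in the summary table
C = cooperativity(0.62, 0.019, 0.024)   # ~3.4e3
```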
Li \textit{et al}.\ theoretically simulated polariton spectra to explain their data while selectively removing the counter-rotating terms (CRTs) and the $A^2$ terms from the full Hamiltonian, as shown in Figs.\,\ref{VBS}a-d together with experimental data. While experiment and theory agree perfectly in Fig.\,\ref{VBS}a, deviations appear when either the CRTs or the $A^2$ terms are removed. By comparing Figs.\,\ref{VBS}a and b, one can confirm that the $A^2$ terms produce an overall blue-shift for both polariton branches and the CRI mode. On the other hand, through comparison of Figs.\,\ref{VBS}a and c, one can confirm that {\em the CRTs only affect the CRI mode}, producing the vacuum Bloch-Siegert shift. \subsection{Hybrid quantum systems}\label{sec:3C} In Secs.~\ref{sec:3A} and \ref{sec:3B}, we presented the main achievements in experimental USC regimes in the fields of superconducting quantum circuits and semiconductor quantum wells, respectively. This section reviews quantum systems of a hybrid nature where ultrastrong couplings have also been demonstrated. In these systems, the magnitude of the coupling originates from a collective degree of freedom resulting from an ensemble of individual systems coupling to the same cavity mode. In such a configuration, a typical scaling of $\sqrt{N}$ is obtained \cite{Dicke54PR, YamamotoImamoglu99Book}, with $N$ being the number of systems participating in the collective degree of freedom. The same scaling is found for intraband transitions in semiconductor QWs (see Sec.~\ref{sec:3B}). In particular, the systems described in this section consist of molecular aggregates in optical microcavities, microcavity exciton polaritons in unconventional semiconductors with large binding energies and oscillator strengths, and magnons in magnetic materials coupled to the magnetic field of a microwave cavity. These cases combine quantum systems of very distinct nature, and therefore fall into the category of hybrid systems.
Technically speaking, the previous section on conventional III-V semiconducting quantum wells already presented hybrid quantum systems, i.e., intersubband polaritons (Sec.~\ref{sec:3B1}) and inter-Landau-level polaritons (Sec.~\ref{sec:3B2}). This section therefore covers topics of polaritons in ultrastrong coupling regimes in systems other than traditional semiconductor quantum wells. \subsubsection{Molecules in optical cavities} The influence of cavity modes on the radiative properties of quantum emitters such as molecules has been the object of study since the early work of Purcell \cite{Purcell1946}. More recently, the strong coupling regime was reached with ensembles of molecules coupled to a single mode of an optical microcavity \cite{Lidzey1998, Holmes2004}. A key element in maximizing the coupling strength was the discovery of molecules with a sufficiently large electric dipole moment coupling to the electric field of the cavity mode. The electric dipole interaction energy between an ensemble of molecules and a cavity mode can be calculated from \cite{george2015} \begin{equation} \label{eq:mol} \hbar g = d\sqrt{\frac{\hbar\omega_c}{2\epsilon_0V_c}}. \end{equation} Here, $d$ is the total electric dipole moment of the molecular ensemble and is therefore proportional to $\sqrt{N}$, i.e., $d = d_0\sqrt{N}$, with $d_0$ being the electric dipole moment of a single molecule. $\epsilon_0$ is the vacuum permittivity, and $V_c$ and $\omega_c$ are, respectively, the cavity mode volume and mode frequency. The square-root factor in Eq.~(\ref{eq:mol}) corresponds to the r.m.s. electric field in the ground state of the cavity mode. The first demonstration of a molecular ensemble ultrastrongly coupled to a single mode of a microcavity was carried out by Schwartz \emph{et al.} \cite{schwartz2011}. The experiment consisted of a PMMA (polymethyl methacrylate) matrix sputter-coated on both sides with a thin Ag layer in a Fabry-P\'erot configuration, resulting in a low-$Q$ cavity.
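As an order-of-magnitude illustration of Eq.~(\ref{eq:mol}), the collective coupling is easy to evaluate numerically. The dipole moment, molecule number, mode volume, and cavity energy in the sketch below are hypothetical round numbers, not parameters of any of the cited experiments:

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
EPS0 = 8.8541878128e-12     # F/m
E = 1.602176634e-19         # C (also J per eV)
DEBYE = 3.33564e-30         # C m

def collective_coupling(d0_debye, N, omega_c, V_c):
    """hbar*g of Eq. (eq:mol) with d = d0*sqrt(N), returned in joules."""
    d = d0_debye * DEBYE * np.sqrt(N)
    return d * np.sqrt(HBAR * omega_c / (2.0 * EPS0 * V_c))

# Hypothetical round numbers: 10-Debye molecular dipole, 1e8 molecules
# in a (1 um)^3 mode volume, 2-eV cavity resonance.
omega_c = 2.0 * E / HBAR                        # 2 eV -> rad/s
hg = collective_coupling(10.0, 1e8, omega_c, 1e-18)
hg_eV = hg / E
print(f"hbar*g = {hg_eV * 1e3:.0f} meV, g/omega_c = {hg_eV / 2.0:.2f}")
```

With these assumed numbers the splitting comes out at a few hundred meV, i.e. $g/\omega_c$ on the order of 10\%, which is the scale at which the molecular experiments discussed below operate.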
The PMMA matrix was filled with photochromic spiropyran (SPI) molecules (1$'$,3$'$-dihydro-1$'$,3$'$,3$'$-trimethyl-6-nitrospiro[2H-1-benzopyran-2,2$'$-(2H)-indole]). These molecules can undergo photoisomerization between a transparent SPI form and a colored merocyanine (MC) form. Schwartz \emph{et al.} observed that molecules in the SPI form did not couple to the cavity mode. As shown in Fig.~\ref{fig:mol}, upon ultraviolet illumination, a transition between the SPI and MC forms was induced, the latter having a strong dipolar coupling to the cavity mode. This was observed as a large mode splitting in the cavity transmission, indicating strong coupling. With longer illumination more molecules transitioned, and the splitting reached values up to 713~meV, corresponding to 32.4\% of the cavity resonance and well in the USC regime. In later work \cite{george2015}, other molecules, such as TDBC, BDAB, and fluorescein, were observed to yield USCs of 13, 24, and 27\% of the cavity resonance, respectively. \begin{figure}[!hbt] \centering \includegraphics[width = 8.5cm]{molecules} \caption{\label{fig:mol}(Color online) USC achieved with a molecular ensemble in a Fabry-P\'erot cavity. By shining ultraviolet (UV) light the molecules change from the spiropyran (SP) to the merocyanine (MC) form. The latter displays a large dipole moment which couples to the cavity electromagnetic field all the way up to the USC regime. (a) Cavity absorption spectrum; (b) cavity transmission spectroscopy with no UV illumination; (c) cavity transmission for varying exposure times. Traces are offset for clarity.
Mode splitting increases as the UV light exposes the molecules and closes back under infrared radiation that returns the molecules to the SP state, demonstrating the reversibility of the process~\cite{schwartz2011}.} \end{figure} In a more recent study, the vibrational dipolar strength of a molecular liquid was also shown to simultaneously ultrastrongly couple to several modes of a Fabry-P\'erot cavity in the infrared \cite{george2016}. The molecules chosen for the study were iron pentacarbonyl (Fe(CO)$_5$) and carbon disulphide (CS$_2$), both showing very strong oscillator strengths, which was key to the successful attainment of large coupling strengths to the cavity modes. This work may be important in molecular chemistry, as vibrational strong coupling could be used to control chemical reactions given the role played by vibrations in the process. \subsubsection{Microcavity exciton polaritons} \label{sec:3c2} \begin{table*} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Reference & Material & Exciton & Temperature & 2$\hbar g$ & $\hbar\omega_0$ & $g/\omega_0$ & Notes\\ & & type & & (meV) & (eV) & (\%) & \\ \hline \hline \cite{WeisbuchetAl92PRL} & GaAs & Wannier & 20\,K & 5 & 1.58 & 0.2 & QWs \\ \hline \cite{DengetAl02Science} & GaAs & Wannier & 4\,K & 15 & 1.61 & 0.46 & QWs\\ \hline \cite{ChristmannetAl08PRB} & GaN & Wannier & RT & 50 & 3.64 & 0.7 & QWs\\ \hline \cite{KasprzaketAl06Nature} & CdTe & Wannier & 5\,K & 26 & 1.68 & 0.77 & QWs\\ \hline \cite{BrodbecketAl17PRL} & GaAs & Wannier & 20\,K & 17.4 & 1.61 & 1.1 & QWs, $g/Ry^* = 0.64$\\ \hline \cite{BlochetAl98APL} & GaAs & Wannier & 77\,K & 19 & 1.62 & 1.2 & QWs\\ \hline \cite{LiuetAl15NP} & MoS$_2$ & Wannier & RT & 46 & 1.87 & 1.2 &\\ \hline \cite{vanVugtetAl06PRL} & ZnO & Wannier & RT & 100 & 3.3 & 1.5 & Nanowires\\ \hline \cite{FlattenetAl16SR} & WS$_2$ & Wannier & RT & 70 & 2 & 1.75 &\\ \hline \cite{GuilletetAl11APL} & ZnO & Wannier & 120\,K & 130 & 3.36 & 1.9 & Bulk\\ \hline
\cite{LiuetAl16NL} & MoS$_2$ & Wannier & 77\,K & 116 & 1.87 & 3 & Plasmon-exciton coupling\\ \hline \cite{BellessaetAl04PRL} & J aggregates & Frenkel & RT & 180 & 2.1 & 4.3 & Plasmon-exciton coupling\\ \hline \cite{GrafetAl16NC} & SWCNTs & Wannier & RT & 110 & 1.24 & 4.4 & (6,5)-enriched\\ \hline \cite{WeietAl13OE} & J aggregates & Frenkel & RT & 400 & 2.27 & 8.8 &\\ \hline \cite{GaoetAl18NP} & SWCNTs & Wannier & RT & 329 & 1.24 & 13.3 & (6,5)-enriched \& aligned\\ \hline \cite{Kena-CohenetAl13AOM} & TDAF & Frenkel & RT & 1000 & 3.534 & 14 &\\ \hline \cite{GambinoetAl14ACS} & Squaraine & Frenkel & RT & 1120 & 2.07 & 27 &\\ \hline \end{tabular} \caption{Experimental observations of strong and ultrastrong light-exciton coupling in various microcavity exciton polariton systems. QW = quantum well. $\hbar g$ = coupling strength. 2$\hbar g$ = vacuum Rabi splitting. $\hbar \omega_0$ = exciton resonance photon energy. $Ry^*$ = exciton binding energy. SWCNTs = single-wall carbon nanotubes. TDAF = 2,7-bis[9,9-di(4-methylphenyl)-fluoren-2-yl]-9,9-di(4-methylphenyl)fluorene. RT = room temperature, 300~K.} \label{table:MEP} \end{table*} As described in Section \ref{sec:3B}, microcavity exciton polaritons (MEPs) in semiconductor QWs have long been studied as a model system for investigations of solid-state cavity QED phenomena~\cite{WeisbuchetAl92PRL,SkolnicketAl98SST,KhitrovaetAl99RMP,DengetAl10RMP,GibbsetAl11NP}. However, MEPs based on Wannier excitons in inorganic semiconductors, such as GaAs QWs, have remained in the strong coupling regime, typically with $g/\omega_0$ $<$ 10$^{-2}$, far from the USC and DSC regimes. Wannier excitons in other traditional inorganic semiconductors with larger exciton binding energies (and thus larger band gaps, effective masses, and oscillator strengths) than GaAs, including GaN, CdTe, and ZnO, have been utilized to achieve larger values of $g/\omega_0$ up to $\sim$0.02; see Table~\ref{table:MEP}. 
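The $g/\omega_0$ column of Table~\ref{table:MEP} follows directly from the tabulated splittings, since $g/\omega_0 = (2\hbar g)/(2\hbar\omega_0)$. A quick numerical check on a few representative rows (the small mismatch for GaAs reflects rounding in the table):

```python
# (material, VRS = 2*hbar*g in meV, exciton energy hbar*omega0 in eV,
#  tabulated g/omega0 in %) -- four representative rows of Table (table:MEP)
rows = [
    ("GaAs QW",      5.0, 1.58,  0.2),
    ("CdTe QW",     26.0, 1.68,  0.77),
    ("SWCNTs",     329.0, 1.24, 13.3),
    ("Squaraine", 1120.0, 2.07, 27.0),
]
ratio = {}
for name, vrs_meV, e0_eV, tab_pct in rows:
    # g/omega0 = (VRS/2) / (hbar*omega0), expressed in percent
    ratio[name] = vrs_meV / (2.0 * e0_eV * 1e3) * 100.0
    print(f"{name:10s} g/omega0 = {ratio[name]:5.2f}%  (table: {tab_pct}%)")
```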
Frenkel excitons (i.e., excitons with Bohr radii of the same order as the size of the unit cell) in organic semiconductors~\cite{LidzeyetAl98Nature} possess large binding energies and oscillator strengths and have displayed larger VRS than Wannier-exciton-based MEPs, with generally larger values of $g/\omega_0$, as shown in Table~\ref{table:MEP}. In particular, two groups observed giant VRSs, on the order of 1\,eV, in Fabry-P\'erot microcavities filled with 2,7-bis[9,9-di(4-methylphenyl)-fluoren-2-yl]-9,9-di(4-methylphenyl)fluorene~\cite{Kena-CohenetAl13AOM} and squaraine~\cite{GambinoetAl14ACS}, respectively. Representative spectra are shown in Fig.\,\ref{Frenkel}. The corresponding $g/\omega_0$ values are 0.14 and 0.27, respectively, indicating that these systems are in the USC regime. \begin{figure}[!hbt] \centering \includegraphics[width = 1.0\columnwidth]{Frenkel} \caption{\label{fig:GVR} (Color online) Observation of giant vacuum Rabi splitting ($\sim$1\,eV) in microcavity exciton polariton systems based on Frenkel-type excitons. (a)~Angle-resolved reflectivity spectra for a 67-nm-thick cavity containing a thin film of 2,7-bis[9,9-di(4-methylphenyl)-fluoren-2-yl]-9,9-di(4-methylphenyl)fluorene measured using TM (upper panel) and TE (lower panel) polarized light. Reproduced (adapted) with permission from \cite{Kena-CohenetAl13AOM}. (b)~Contour plots of angle-resolved transmission spectra for a 140-nm-thick microcavity entirely filled with squaraine. Reproduced (adapted) with permission from \cite{GambinoetAl14ACS}.} \label{Frenkel} \end{figure} Moreover, nanomaterials with large-binding-energy Wannier excitons have recently emerged, including atomically thin transition metal dichalcogenide layers~\cite{LiuetAl15NP,LiuetAl16NL,FlattenetAl16SR} and single-wall carbon nanotubes (SWCNTs)~\cite{GrafetAl16NC,GrafetAl17NM}. These novel materials provide a platform for studying strong-coupling physics under extreme quantum confinement.
In particular, one-dimensional (1D) excitons in SWCNTs have enormous oscillator strengths, revealing a very large VRS exceeding 100\,meV in microcavity devices containing a film of single-chirality SWCNTs~\cite{GrafetAl16NC}; the VRS showed a $g \propto \sqrt{N}$ behavior, where $N$ is the number of dipoles (i.e., excitons in the present case), evidencing cooperative enhancement of light-matter coupling~\cite{Dicke54PR,ZhangetAl16NP}, as shown in Fig.\,\ref{SWCNTs}(a). Furthermore, Graf and coworkers have recently demonstrated electrical pumping and tuning of exciton polaritons in SWCNTs~\cite{GrafetAl17NM}, making impressive progress toward creating polaritonic devices~\cite{Sanvitto-KenaCohen16NM}. \begin{figure*}[!hbt] \centering \includegraphics[width = 1.4\columnwidth]{SWCNTs} \caption{\label{fig:SWCNTs} (Color online) Single-wall carbon nanotube microcavity exciton polaritons exhibiting ultrastrong coupling. (a)~Angle-resolved reflectivity and photoluminescence spectra for (6,5) SWCNT microcavity exciton polaritons with increasing nanotube concentration (from top to bottom) and increasing cavity thickness and detuning (from left to right). Reproduced (adapted) with permission from \cite{GrafetAl16NC}. (b)~Transmittance spectra for a cavity containing aligned (6,5) SWCNTs at zero detuning for various polarization angles from 0$^\circ$ to 90$^\circ$. (c)~Continuous mapping of the dispersion surfaces of the upper polariton (UP) and lower polariton (LP) for the device in (b). EP: exceptional points. (d)~Transmittance spectra for parallel polarization at zero detuning for devices containing aligned SWCNT films of different thicknesses. The device containing a 64-nm-thick aligned SWCNT film demonstrates the largest VRS of 329~meV. (e)~VRS for parallel polarization at zero detuning versus the square root of the film thickness, demonstrating the $\sqrt{N}$-fold enhancement of collective light-matter coupling.
Reproduced (adapted) with permission from \cite{GaoetAl18NP}.} \label{SWCNTs} \end{figure*} Most recently, Gao \textit{et al}.\ have developed a unique architecture in which 1D excitons in an aligned SWCNT film interact with cavity photons in two distinct manners~\cite{GaoetAl18NP}. The system reveals ultrastrong coupling (VRS up to 329\,meV) for probe light with polarization parallel to the nanotube axis, whereas the VRS is absent for perpendicular polarization. Between these two extreme situations, the coupling strength is continuously tunable through facile polarization rotation; see Fig.\,\ref{SWCNTs}(b). Figure~\ref{SWCNTs}(c) shows a complete mapping of the polariton dispersions, which demonstrates the existence of exceptional points (EPs), spectral singularities that lie at the border between crossing and anticrossing; the points bounded by a pair of EPs formed two equienergy arcs in momentum space, onto which the upper and lower polariton branches coalesced. This unique system with {\em on-demand USC} can be used for exploring exotic topological properties~\cite{Yuen-ZhouetAl14NM,Yuen-ZhouetAl16NC} and for applications in quantum technologies. Similar to \cite{GrafetAl16NC}, the VRS exhibited cooperative enhancement, proportional to the square root of the film thickness, as shown in Figs.\,\ref{SWCNTs}(d) and (e). Figure~\ref{SWCNTs}(d) shows transmittance spectra for the three samples with different thicknesses; the VRS for the thickest sample is 329 $\pm$ 5\,meV, corresponding to $g/\omega_0$ = 0.13, the highest value for MEPs based on Wannier excitons. \subsubsection{Magnons in microwave cavities} In recent years, a new platform for coherent light-matter interaction has been developed by combining magnetic fields from cavity photons and spin waves in magnetic materials \cite{huebl2013, tabuchi2014, zhang2014}.
This quantum hybrid system consists of microwave photons residing in a resonant cavity, which interact with a spin wave in a ferro- (ferri-) magnetic material, as shown in Fig.~\ref{fig:magnons}(a). At the fundamental level, a microwave photon interacts with a quantum of excitation of such a spin wave, known as a magnon. This emerging platform of quantum magnonics is designed for strong magnon-photon interactions for applications in quantum information such as frequency conversion, quantum memories, and quantum communication \cite{Zhang2016}. The prototypical system used in these experiments is the ferrimagnetic insulator yttrium iron garnet, Y$_3$Fe$_5$O$_{12}$ (YIG). This material exhibits spin waves with the largest quality factors among all magnetic materials explored so far, which explains why it is the most widely used. YIG is often employed in spherical form, with its fundamental mode being the Kittel mode, in which all spins oscillate collectively in phase. \begin{figure}[!hbt] \centering \includegraphics[width = 9cm]{magnons} \caption{\label{fig:magnons}(Color online) Strong coupling between magnons and photons at room temperature. (a) Image of a microwave cavity used in the experiment with an yttrium-iron-garnet (YIG) sphere positioned near a side wall. Simulations show the magnetic field profile of the mode coupling to the magnons in the YIG sphere. The cavity is designed to yield maximum magnetic field amplitude at the position of the sphere; (b) avoided-level crossing observed at room temperature, indicating strong magnon-photon interactions. The signal displays reflection off the cavity port; (c) real-time, resonant magnon-photon dynamics being driven by an externally applied microwave field; (d) cross section of trace indicated in (c); (e) scaling of coupling strength as function of cavity mode frequency.
The star indicates a device in the USC regime; (f) spectrum of a device exhibiting USC~\cite{zhang2014}.} \end{figure} The coupling strength $g$ between the Kittel and cavity modes is proportional to the square root of the number of participating spins, $g = g_0\sqrt{N}$, where $g_0$ is the coupling strength of a single Bohr magneton to a cavity photon. The r.m.s. magnetic field generated in the cavity in its ground state is given by $\langle \hat{B}^2\rangle^{1/2}=\sqrt{\mu_0\hbar\omega_c/2V_c}$, with $\omega_c$ being the cavity frequency, $V_c$ the mode volume occupied by the cavity mode, and $\mu_0$ the vacuum permeability. The single-spin coupling strength is calculated to be \cite{tabuchi2014, zhang2014} \begin{equation}\label{g_mag} g_0/2\pi = \eta\frac{\gamma}{2\pi}\sqrt{\frac{\hbar\omega_c\mu_0}{2V_c}}. \end{equation} Here, $\eta\leq1$ describes the spatial overlap and polarization matching conditions between the microwave field and the magnon mode \cite{zhang2014}, and $\gamma = 2\pi \times 28~$GHz/T is the electron gyromagnetic ratio. In the first demonstration of strong coupling between magnons and photons \cite{tabuchi2014}, a collective coupling strength of several hundred MHz was observed using a 10.7~GHz cavity mode resonant with a ferromagnetic resonance mode. The $\sqrt{N}$ scaling was further demonstrated by using spheres of different volumes (and therefore different numbers of spins). In a parallel experiment \cite{zhang2014}, real-time magnon-photon oscillations were observed at room temperature [see Figs.~\ref{fig:magnons}(b)-(d)]. The same authors studied the scaling properties of the coupling constant [Eq.~(\ref{g_mag})] to maximize the interaction strength [see Figs.~\ref{fig:magnons}(e)-(f)]. By using a smaller cavity to enhance its frequency and a larger sphere containing more spins, a coupling rate of $g/2\pi = 2.5~$GHz was attained, corresponding to $g/\omega_c = 0.067$ for a cavity frequency of $\omega_c/2\pi=37.5$~GHz.
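Equation~(\ref{g_mag}) is straightforward to evaluate. The sketch below assumes $\eta = 1$ and an illustrative 1~mm$^3$ mode volume (not the actual cavity geometry of \cite{zhang2014}); it shows why the single-spin coupling is only a few hertz, so that on the order of $10^{17}$--$10^{18}$ spins are needed to reach gigahertz collective couplings:

```python
import numpy as np

HBAR = 1.054571817e-34     # J s
MU0 = 1.25663706212e-6     # vacuum permeability, N/A^2
GAMMA_2PI = 28.0e9         # electron gyromagnetic ratio gamma/2pi, Hz/T

def g0_over_2pi(omega_c, V_c, eta=1.0):
    """Single-spin coupling g0/2pi of Eq. (g_mag), in Hz."""
    return eta * GAMMA_2PI * np.sqrt(HBAR * omega_c * MU0 / (2.0 * V_c))

omega_c = 2 * np.pi * 37.5e9     # 37.5-GHz cavity mode
V_c = 1e-9                       # assumed 1 mm^3 mode volume (illustrative)
g0 = g0_over_2pi(omega_c, V_c)
N = (2.5e9 / g0) ** 2            # spins for g/2pi = g0*sqrt(N)/2pi = 2.5 GHz
print(f"g0/2pi = {g0:.1f} Hz  ->  N = {N:.1e} spins for g/2pi = 2.5 GHz")
```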
Therefore, the system entered the perturbative USC regime, being the only result so far in this field reaching such a high coupling strength. \section{Quantum simulations}\label{sec:4} The previous section gave an overview of the most relevant work in all experimental platforms studying ultrastrong light-matter interactions. Besides the remarkable couplings achieved in superconducting quantum circuits (see Sec.~\ref{sec:3A}), these platforms have also been used to explore quantum simulations. With a quantum simulator, all regimes of coupling between a qubit and a resonator can be implemented in a fully tunable and efficient manner. In this respect, some proposals were put forward in the literature using superconducting circuits, which include the analog quantum simulation of the quantum Rabi model~\cite{daniel_prx,Pedernales15,Felicetti15,Plenio2015,Plenio2017}, Dirac equation physics~\cite{Pedernales13}, as well as the digital-analog quantum simulation of the quantum Rabi model~\cite{Mezzacapo14}, and Dicke physics~\cite{Mezzacapo14,Lamata16}. In this section, we give an overview of all these proposals. Experimental realizations of the analog \cite{braumuller2016,KihwanKimRabi2017} and the digital-analog quantum simulation of the quantum Rabi model \cite{langford2016} have recently been carried out. In addition, an experimental realization of a classical simulation of the quantum Rabi model was performed in photonic chips~\cite{Crespi2012}. We want to point out that sections~\ref{sec:4A} to~\ref{sec:4C} analyze quantum simulations of USC and DSC models, while Sec.~\ref{sec:4D} deals with analog quantum simulations employing devices already in the USC and DSC regimes. In Fig.~\ref{regionsFigure}, we summarize the different regimes of the QRM that are reproduced by an analog or a digital-analog quantum simulator, following Pedernales \emph{et al.} \cite{Pedernales15}. 
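The parameter regions summarized in Fig.~\ref{regionsFigure} can be collected into a small classifier function. In the sketch below, the factor of ten implementing each ``$\ll$'' is our own arbitrary threshold, and the labels are not mutually exclusive (USC contains DSC by construction):

```python
def qrm_regimes(g, omega, omega0, small=0.1):
    """Classify quantum Rabi model parameters (coupling g, resonator
    frequency omega, qubit splitting omega0) into the regimes listed in
    Fig. (regionsFigure); 'small' implements the '<<' conditions."""
    w, w0 = abs(omega), abs(omega0)
    labels = []
    weak = g < small * min(w, w0)
    if weak and abs(omega - omega0) < small * abs(omega + omega0):
        labels.append("JC")
    if weak and abs(omega + omega0) < small * abs(omega - omega0):
        labels.append("anti-JC")
    if g < min(w, w0, abs(omega - omega0), abs(omega + omega0)):
        labels.append("two-fold dispersive")
    if w < 10 * g:
        labels.append("USC")
    if w < g:
        labels.append("DSC")
    if w0 < small * g and g < small * w:
        labels.append("decoupling")
    return labels

print(qrm_regimes(g=0.01, omega=1.0, omega0=1.0))    # ['JC']
print(qrm_regimes(g=0.01, omega=1.0, omega0=-1.0))   # ['anti-JC']
print(qrm_regimes(g=1.5, omega=1.0, omega0=1.0))     # ['USC', 'DSC']
```

The anti-JC example uses a negative effective qubit frequency, which is precisely what the rotating-frame constructions reviewed below make accessible.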
\begin{figure}[!hbt] \centering \includegraphics[width = 9cm]{regions} \caption{(Color online) Different parameter regimes of the quantum Rabi model (QRM). Here, $g$ is the light-matter coupling strength, $\omega^R$ represents the resonator frequency, and $\omega_0^R$ the qubit energy splitting, according to the QRM. (1) Jaynes-Cummings (JC) regime: $ g \ll \{|\omega^R|,|\omega_0^R| \}$ and $| \omega^R - \omega_0^R | \ll |\omega^R + \omega_0^R | $. (2) Anti-JC regime: $ g \ll \{|\omega^R |,|\omega_0^R| \}$ and $|\omega^R - \omega_0^R| \gg | \omega^R + \omega_0^R | $. (3) Two-fold dispersive regime: $ g < \{ | \omega^R | , | \omega_0^R | , | \omega^R - \omega_0^R |, |\omega^R + \omega_0^R | \}$. (4) USC regime: $| \omega^R | < 10 g $. (5) DSC regime: $ | \omega^R | < g $. (6) Decoupling regime: $ | \omega_0^R | \ll g \ll |\omega^R| $. (7) The intermediate regime ($ |\omega_0^R| \sim g \ll |\omega^R|$) is still open to analysis. The (red) vertical central line corresponds to the regime of the Dirac equation. Colours indicate the different regimes of the QRM; colour gradation denotes transitions between different regions~\cite{Pedernales15}.}\label{regionsFigure} \end{figure} \subsection{Analog quantum simulation of the quantum Rabi model}\label{sec:4A} \subsubsection{Quantum Rabi model with superconducting circuits and the Jaynes-Cummings model} \label{sec:4A1} The first analog quantum simulation of the USC/DSC dynamics was put forward by Ballester \emph{et al.} \cite{daniel_prx}. The proposed simulator consists of a superconducting qubit coupled to a cavity mode in the strong coupling regime, with a two-tone orthogonal drive applied to the qubit. It was shown through analytical calculations and numerics that the method can access all regimes of light-matter coupling, including ultrastrong coupling ($0.1\lesssim \! g/ \omega \lesssim \!
1$, with $g/ \omega$ the ratio of the coupling strength to the resonator frequency) and deep-strong coupling~\cite{casanova2010} ($g/ \omega\gtrsim \! 1$). This scheme allows one to realize an analog quantum simulator for a wide range of light-matter coupling regimes~\cite{Braak2011} in platforms where those regimes are unattainable from first principles. This includes, among others, the simulation of Dirac equation physics, the Dicke/spin-boson models, the Kondo model, and the Jahn-Teller instability~\cite{duty_JT}. We will use the language of circuit QED \cite{blais2004} to describe the method, although it can also be implemented in microwave cavity QED~\cite{Solano2003}. Let us consider a physical system consisting of a superconducting qubit strongly coupled to a transmission line microwave resonator. Working at the qubit degeneracy point, the Hamiltonian reads \cite{blais_pra} \begin{eqnarray} \hat{\cal H} = \frac{\hbar \Omega}{2} \hat{\sigma}_z + \hbar \omega\hat{a}^\dag\hat{a} -\hbar g \hat{\sigma}_x (\hat{a} + \hat{a}^\dag) \label{HamilDiag}, \end{eqnarray} where $\Omega$ is the qubit frequency, $\omega$ is the photon frequency, and $g$ denotes the coupling strength. Moreover, $\hat{a}$ and $\hat{a}^\dag$ stand for the annihilation and creation operators for the field mode of the photon, while $\hat{\sigma}_x = \hat{\sigma}_+ + \hat{\sigma}_- = \projsm{e}{g}+ \projsm{g}{e}$, $\hat{\sigma}_z = \projsm{e}{e}-\projsm{g}{g}$, where $\{\ket{g},\ket{e}\}$ denote ground and excited states of the superconducting qubit, respectively. One can apply the rotating-wave approximation (RWA) in a typical circuit QED implementation to further simplify this Hamiltonian.
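It is instructive to check numerically how well the RWA performs. A minimal sketch ($\hbar = 1$) diagonalizes Eq.~(\ref{HamilDiag}) with and without the counter-rotating terms in a truncated Fock basis; the truncation $n_{\rm max} = 40$ is ample for the couplings used here:

```python
import numpy as np

def rabi_hamiltonian(Omega, omega, g, n_max=40, rwa=False):
    """Matrix form of Eq. (HamilDiag) (or its RWA version if rwa=True)
    in the qubit (x) truncated-Fock basis, with hbar = 1."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)       # annihilation operator
    I = np.eye(n_max)
    sz = np.diag([1.0, -1.0])                          # |e>, |g> ordering
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])            # sigma_+ = |e><g|
    H = 0.5 * Omega * np.kron(sz, I) + omega * np.kron(np.eye(2), a.T @ a)
    if rwa:
        H -= g * (np.kron(sp, a) + np.kron(sp.T, a.T)) # co-rotating terms only
    else:
        H -= g * np.kron(sp + sp.T, a + a.T)           # full sigma_x (a + a^dag)
    return H

# Ground-state energies on resonance (Omega = omega = 1)
E_full_weak = np.linalg.eigvalsh(rabi_hamiltonian(1.0, 1.0, 0.01))[0]
E_rwa_weak = np.linalg.eigvalsh(rabi_hamiltonian(1.0, 1.0, 0.01, rwa=True))[0]
E_full_dsc = np.linalg.eigvalsh(rabi_hamiltonian(1.0, 1.0, 1.0))[0]
E_rwa_dsc = np.linalg.eigvalsh(rabi_hamiltonian(1.0, 1.0, 1.0, rwa=True))[0]
print(f"g = 0.01: E0(full) = {E_full_weak:.5f}, E0(RWA) = {E_rwa_weak:.5f}")
print(f"g = 1.00: E0(full) = {E_full_dsc:.5f}, E0(RWA) = {E_rwa_dsc:.5f}")
```

For $g/\omega = 0.01$ the two ground-state energies agree to better than $10^{-4}$, whereas at $g/\omega = 1$ the counter-rotating terms push the true ground state well below the RWA value of $-\Omega/2$, illustrating why the RWA fails in the USC/DSC regimes.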
More specifically~\cite{zueco_dispbs}, if $\{|\omega-\Omega|, g\} \ll\omega+\Omega$, then it can be expressed as \begin{eqnarray} \hat{\cal H} &=& \frac{\hbar \Omega}{2} \hat{\sigma}_z +\hbar \omega\hat{a}^\dag\hat{a} -\hbar g (\hat{\sigma}_+\hat{a} + \hat{\sigma}_-\hat{a}^\dag) ,\label{HamilRWA} \end{eqnarray} which is formally equivalent to the well-known Jaynes-Cummings (JC) model of cavity QED. By performing the RWA, one neglects the counter-rotating terms $\hat{\sigma}_-\hat{a}$ and $\hat{\sigma}_+\hat{a}^\dag$, producing in this way a Hamiltonian [Eq.~(\ref{HamilRWA})] where the number of excitations is conserved. The Hamiltonian in Eq.~(\ref{HamilRWA}) will be the basis for our derivations. Consider now two classical microwave fields driving the superconducting qubit. Adding the drivings to Eq.~(\ref{HamilRWA}) results in the Hamiltonian \begin{multline} \hat{\cal H} = \frac{\hbar \Omega}{2} \hat{\sigma}_z +\hbar \omega\hat{a}^\dag\hat{a} -\hbar g (\hat{\sigma}_+\hat{a} + \hat{\sigma}_-\hat{a}^\dag) \\ - \hbar \Omega_1 ( e^{i \omega_1 t} \hat{\sigma}_- + e^{-i \omega_1 t} \hat{\sigma}_+) - \hbar \Omega_2 ( e^{i \omega_2 t} \hat{\sigma}_- + e^{-i \omega_2 t} \hat{\sigma}_+), \label{HamilDrivRabi} \end{multline} where $\omega_j$ and $\Omega_j$ denote the frequency and amplitude of the $j$th driving. We point out that the orthogonal drivings interact with the qubit in the same manner as the microwave resonator field. To obtain Eq.~(\ref{HamilDrivRabi}), we have applied the RWA not only to the qubit-resonator coupling term but also to the orthogonal drivings.
We then write Eq.~(\ref{HamilDrivRabi}) in a frame rotating with the first driving frequency $\omega_1$, namely, \begin{multline} \hat{\cal H}^{L_1} = \hbar \frac{\Omega-\omega_1}{2} \hat{\sigma}_z +\hbar (\omega-\omega_1) \hat{a}^\dag\hat{a} \\ - \hbar g (\hat{\sigma}_+\hat{a} +\hat{\sigma}_-\hat{a}^\dag) - \hbar \Omega_1 ( \hat{\sigma}_- + \hat{\sigma}_+ ) \\- \hbar \Omega_2 ( e^{i (\omega_2-\omega_1) t} \hat{\sigma}_- + e^{-i (\omega_2-\omega_1) t} \hat{\sigma}_+ ) . \end{multline} This transformation maps the first driving term onto the time-independent Hamiltonian $\hat{{\cal H}}_0^{L_1} = - \hbar \Omega_1 ( \hat{\sigma}_- + \hat{\sigma}_+) $, while leaving the number of excitations unperturbed. We consider this term to be the most sizeable and treat the rest perturbatively by transforming into a rotating frame with respect to $\hat{\cal H}_0^{L_1} $, $\hat{\cal H}^{I} (t) = e^{i \hat{\cal H}_{0}^{L_1} t/\hbar } \pare{\hat{\cal H}^{L_1} - \hat{\cal H}_0^{L_1} } e^{-i \hat{\cal H}_{0}^{L_1} t/\hbar } $. By employing the rotated qubit basis $\ket{\pm} = \pare{\ket{g} \pm \ket{e} }/\sqrt2$, we obtain \begin{multline} \hat{\cal H}^{I} (t) = -\hbar\frac{\Omega-\omega_1}{2} \pare{ e^{-i 2 \Omega_1 t} \proj{+}{-} + {\rm H.c.}} \\ + \hbar (\omega-\omega_1) \hat{a}^\dag\hat{a} - \frac{\hbar g }{2} \left( \left\{ \proj{+}{+} - \proj{-}{-} \right.\right.\\ \left.\left. + e^{-i 2 \Omega_1 t} \proj{+}{-} - e^{i 2 \Omega_1 t}\proj{-}{+} \right\}\hat{a} + {\rm H.c.} \right) \\ - \frac{\hbar \Omega_2 }{2} \left( \left\{ \proj{+}{+} - \proj{-}{-} - e^{-i 2 \Omega_1 t} \proj{+}{-} \right. \right.\\ \left. \left. + e^{i 2 \Omega_1 t}\proj{-}{+} \right\} e^{i (\omega_2-\omega_1) t} + {\rm H.c.} \right). \label{HI1} \end{multline} The external driving parameters can be tuned in such a way that $\omega_1-\omega_2=2 \Omega_1$, allowing us to select the resonant terms in the time-dependent Hamiltonian.
Therefore, if the first driving $ \Omega_1$ is relatively strong, one can approximate the above expression by the time-independent effective Hamiltonian \begin{eqnarray} \hat{\cal H}_{\rm eff} = \hbar (\omega-\omega_1) \hat{a}^\dag\hat{a} + \frac{\hbar\Omega_2}{2} \hat{\sigma}_z - \frac{\hbar g}{2} \hat{\sigma}_x \pare{\hat{a}+\hat{a}^\dag} . \label{HamilEffRabi} \end{eqnarray} Notice the similarity between the original Hamiltonian (\ref{HamilDiag}) and Eq.~(\ref{HamilEffRabi}). Even though the coupling $g$ is fixed in Eq.~(\ref{HamilEffRabi}), one can still tailor the relative size of the remaining parameters by tuning the frequencies and amplitudes of the drivings. If one can reach $\Omega_2 \sim (\omega-\omega_1) \sim g/2$, the original system dynamics will emulate those of a qubit coupled to a bosonic mode with a relative coupling strength beyond the SC regime, reaching the USC/DSC regimes. The coupling strength attained with the effective Hamiltonian (\ref{HamilEffRabi}) can be estimated by the ratio $g_{\rm eff} / \omega_{\rm eff}$, where $g_{\rm eff} \equiv g/2$ and $ \omega_{\rm eff}\equiv\omega-\omega_1$. \subsubsection{Quantum Rabi model in the Brillouin zone with ultracold atoms}\label{sec:4A2} In the following, we present a technique to implement a quantum simulation of the QRM for unprecedented values of the coupling strength using a system of cold atoms freely moving in a periodic lattice. An effective two-level quantum system of frequency $\Omega$ can be simulated by the occupation of lattice Bloch bands, while a single bosonic mode is implemented by the oscillations of the atoms in a harmonic optical trap of frequency $\omega$ that confines them within the lattice. We will see that highly non-trivial dynamics may be feasibly implemented within the validity region of this quantum simulation.
At sufficiently low density, the dynamics of the neutral atoms loaded in an optical lattice can be described by the single-particle Hamiltonian $\hat{\cal H} =\frac{\hat{p}^{2}}{2m} + \frac{V}{2} \cos{\left( 4 k_{0} \hat{x} \right)} + \frac{m \omega^{2}}{2} \hat{x}^{2}$, where $\hat{p} = - i \hbar \frac{\partial}{\partial x}$, $m$ is the mass of the atom, $\omega$ the frequency of the harmonic trap, while $V$ and $4k_{0}$ are the depth and wave vector of the periodic potential. Using the Bloch functions, we can identify a discrete quantum number, the band index $n_{b}$, and a continuous variable, the atomic quasi-momentum $q$. Restricting our attention to the two lowest bands $n_{b}$, the Hamiltonian can be recast into \begin{equation} \begin{split} \hat{\cal H} =& \frac{1}{2m} \begin{pmatrix} q^{2}+4\hbar k_{0} q & 0 \\ 0 & q^{2} - 4\hbar k_{0} q \end{pmatrix} + \frac{V}{4} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\ &- \frac{m \omega^{2} \hbar^{2}}{2} \frac{\partial^{2}}{\partial q^{2}} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \end{split} \end{equation} By analogy to the usual QRM, $\hat{\cal H} = \hbar \omega \hat{a}^{\dagger} \hat{a} + \frac{\hbar \Omega}{2} \sigma_{z} + i \hbar g \sigma_{x}\left( \hat{a}^{\dagger} - \hat{a} \right)$, we define an effective qubit energy spacing $\Omega \equiv \frac{V}{2 \hbar}$ and an effective light-matter interaction $g \equiv 2k_{0} \sqrt{\frac{\hbar \omega}{2m}}$. The value of the effective coupling strength is intrinsically linked to the trap frequency, $g \propto \sqrt{\omega}$, and since the trap frequency is low (typically kilohertz in actual experiments) the ratio $g/\omega$ is tunable only over a range of extremely high values, $g/\omega \sim 10$. However, the tunability of the ratio $g/\Omega$ allows us to explore a large region of parameters at the transition between resonant and dispersive qubit-oscillator regimes.
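Plugging in numbers shows how large $g/\omega$ is in this simulator. The sketch below assumes $^{87}$Rb atoms and a 1064-nm lattice laser with $4k_0 = 4\pi/\lambda$; both choices are illustrative, and the precise prefactor depends on the lattice convention:

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
M_RB87 = 1.4431606e-25      # kg, mass of 87Rb (assumed atomic species)

def coupling_ratio(lam, omega_trap):
    """g/omega with g = 2*k0*sqrt(hbar*omega/(2m)); the lattice term
    cos(4*k0*x) is assumed to come from a retro-reflected laser of
    wavelength lam, i.e. 4*k0 = 4*pi/lam."""
    k0 = np.pi / lam
    g = 2.0 * k0 * np.sqrt(HBAR * omega_trap / (2.0 * M_RB87))
    return g / omega_trap

for f_trap in (10.0, 100.0, 1000.0):        # trap frequencies in Hz
    r = coupling_ratio(1064e-9, 2 * np.pi * f_trap)
    print(f"trap {f_trap:6.0f} Hz: g/omega = {r:4.1f}")
```

With these assumed numbers, $g/\omega$ stays of order unity or larger for any realistic trap frequency, and grows as $\omega^{-1/2}$ for weaker traps, consistent with the statement that the JC (RWA) limit is inaccessible in this simulator.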
Indeed, the value of $\Omega$ can be made large enough such that the qubit free Hamiltonian becomes the dominant term, or small enough to make its energy contribution negligible. Given that only very high values of the ratio $g/\omega$ are accessible, the RWA can never be applied and the model cannot be implemented in the JC limit. Interesting dynamics at the crossover between the dispersive and resonant DSC regimes can be observed for values of parameters unattainable so far with available implementations of the QRM. However, the analogy with the QRM breaks down when the value of the simulated momentum exceeds the borders of the first Brillouin zone. When this is the case, the model represents a generalization of the QRM in periodic phase space. Both the momentum (and, correspondingly, the state of $\hat{\sigma}_x$) and the atomic cloud position can in principle be measured with absorption imaging techniques. For the former, standard time-of-flight imaging may be used, performed by simultaneously switching off both the lattice beams and the dipole trapping potential and then detecting the atoms in the far field after a given free-expansion time. While the reconstruction in this way is possible with high precision, achieving the required spatial resolution for an in situ position detection of the oscillation is experimentally challenging. Figure~\ref{fig:coldatom1} shows experimentally accessible quantities such as the distribution $\mathcal{P}(p) = | \langle p | \psi(t) \rangle |^2$ of the atomic physical momentum $\hat{p}$, for different evolution times. The momentum distribution can be experimentally obtained using time-of-flight measurements, and gives a clear picture of the system dynamics during the quantum simulation of the QRM. The cloud is initialised in the momentum eigenstate $|q=0\rangle |n_{b}=0\rangle$. When the periodic lattice strength $V$ is large enough, the dynamics are dominated by the coupling between $|n_{b}=0\rangle$ and $|n_{b}=1\rangle$.
This case corresponds to the dispersive DSC regime. Otherwise, the dynamics are dominated by the harmonic potential, and the evolution resembles the QRM in the resonant DSC regime. \begin{figure}[!hbt] \centering \includegraphics[width = 7cm]{coldatom1} \caption{\label{fig:coldatom1} (Color online) Quantum Rabi model using ultra-cold atoms. The figure shows the distribution $\mathcal{P}(p) = | \langle p | \psi(t) \rangle |^2$ of the atomic physical momentum $\hat{p}$, for different evolution times. $\omega_0$ corresponds to $\omega$ of the main text. For the dispersive DSC regime (upper panel), the parameters are given by $g/\omega = 7.7$ and $g/\Omega = 0.43$. In this case, the initial wave-function is transformed back and forth between two distributions centered around the states $|p=\pm 2 \hbar k_0 \rangle$. For the resonant DSC regime (lower panel), $g/\omega = 10$ and $\omega = \Omega$. In this case, the system is continuously displaced in momentum space up to a maximum value of the momentum~\cite{felicetti2017}.} \end{figure} \subsection{Analog quantum simulation of Dirac physics}\label{sec:4B} There exist strong connections between the QRM and the Dirac equation~\cite{Pedernales13, LamataDirac2007}. Therefore, simulating the physics of the Dirac equation is important for connecting to the physics of the USC and DSC regimes. We review here a particular method employing superconducting quantum circuits. We want to point out some crucial differences with regard to previous implementations of the Klein paradox of the Dirac equation in other quantum platforms, particularly ion traps \cite{Gerritsma2010}. Using the method described here, the dynamics of a spin-1/2 relativistic particle are emulated by two interacting degrees of freedom from two different subsystems, namely, a standing wave in a transmission line resonator and a superconducting qubit, neither of them representing real motion. The position and momentum of the simulated Dirac particle are encoded in the field quadratures.
Contrary to the ion trap simulator \cite{Gerritsma2010}, this approach paves the way for combining cavity fields with quantum propagating microwaves \cite{edwin_dual-path,Bozyigit,Itinerant} in complex quantum network architectures \cite{Leib12}. In the protocol described here one requires a superconducting qubit, e.g., a flux qubit~\cite{floor_prl}, working at its degeneracy point strongly coupled to an electromagnetic field mode of a transmission line resonator. The interaction between the two systems can be described by the JC Hamiltonian \cite{JC,blais_pra,wallraff2004}. Additionally, we consider three classical external microwave drivings, two of them transversal to the resonator \cite{daniel_prx} which will only couple to the qubit, and the third drive coupled longitudinally to the resonator. The Hamiltonian of the system reads \begin{multline} \label{HamilDriv} \hat{\cal H} = \frac{\hbar \Omega}{2} \hat{\sigma}_z +\hbar \omega\hat{a}^\dag\hat{a} -\hbar g \pare{\hat{\sigma}_+\hat{a} + \hat{\sigma}_-\hat{a}^\dag} \\ -\hbar \Omega_1 \pare{ e^{ i \pare{ \omega t +\varphi} } \hat{\sigma}_- + e^{-i \pare{ \omega t +\varphi } } \hat{\sigma}_+} - \hbar \lambda \left( e^{i \pare{ \nu t +\varphi }} \hat{\sigma}_- \right. \\+ \left. e^{-i \pare{ \nu t +\varphi } } \hat{\sigma}_+\right) + \hbar \xi \pare{ e^{i \omega t}\hat{a} + e^{-i \omega t}\hat{a}^\dag }, \end{multline} where $\hat{\sigma}_y=i (\hat{\sigma}_- - \hat{\sigma}_+)=i (\proj{g}{e} - \proj{e}{g})$ and $\hat{\sigma}_z= \proj{e}{e} - \proj{g}{g}$, with $\ket{g}$, $\ket{e}$ denoting the ground and excited qubit states, respectively. Here, $\hbar\omega$ and $\hbar\Omega$ correspond to photon and qubit uncoupled energies, whereas $g$ stands for the qubit-photon coupling strength. The two orthogonal microwave drivings have amplitudes $\Omega_1$, $\lambda$, phase $\varphi$, and frequencies $\omega$ and $\nu$. Additionally, the longitudinal driving has amplitude $\xi$ and frequency $\omega$. 
Notice that two of the drivings are chosen to be resonant with the resonator mode. We also assume that $\Omega=\omega$, i.e. the qubit and the resonator are on resonance as well. This protocol is based on two transformations. First, the Hamiltonian in Eq.~(\ref{HamilDriv}) can be transformed into the rotating frame with respect to the resonator frequency $\omega$ \begin{multline} \label{HL1} \hat{\cal H}^{L_1} = - \hbar g \pare{\hat{\sigma}_+ a +\hat{\sigma}_- a^\dag} \\ - \hbar \Omega_1 \pare{ e^{ i \varphi } \hat{\sigma}_- + e^{-i \varphi } \hat{\sigma}_+} + \hbar \xi \pare{\hat{a} + \hat{a}^\dag} \\ - \hbar \lambda \pare{ e^{i \left[ (\nu-\omega) t + \varphi \right] } \hat{\sigma}_- + e^{-i \left[ (\nu-\omega) t + \varphi \right] } \hat{\sigma}_+ }. \end{multline} Secondly, the Hamiltonian obtained is transformed into another frame rotating with respect to the Hamiltonian $\hat{\cal H}_0^{L_1} = - \hbar \Omega_1 \pare{ e^{ i \varphi } \hat{\sigma}_- + e^{-i \varphi } \hat{\sigma}_+}$, \begin{multline} \hat{\cal H}^{I} = - \frac{\hbar g }{2} \left( \left\{ \proj{+}{+} - \proj{-}{-} + e^{-i 2 \Omega_1 t} \proj{+}{-} \right. \right.\\ - \left.\left. e^{i 2 \Omega_1 t}\proj{-}{+} \right\} e^{i \varphi} \hat{a} + {\rm H.c.} \right) \\ - \frac{\hbar \lambda}{2} \Big( \left\{ \proj{+}{+} - \proj{-}{-} - e^{-i 2 \Omega_1 t} \proj{+}{-} \right. \\ \left. + e^{i 2 \Omega_1 t}\proj{-}{+} \right\} e^{i (\nu-\omega) t} + {\rm H.c.} \Big) + \hbar \xi \pare{ \hat{a} + \hat{a}^\dag} , \label{HI12} \end{multline} where we considered the rotated qubit basis $\ket{\pm} = \pare{\ket{g} \pm e^{-i \varphi} \ket{e} }/\sqrt2$. We now assume $\omega-\nu=2 \Omega_1$ to simplify the calculation, and also assume the first driving amplitude $\Omega_1$ to be large when compared to the other Rabi frequencies in Eq.~(\ref{HI12}). 
Therefore, we can apply the RWA, which produces the Hamiltonian \begin{eqnarray} {\cal H}_{\rm eff} = \frac{\hbar\lambda}{2} \hat{\sigma}_z + \frac{\hbar g}{\sqrt2} \hat{\sigma}_y \hat{p} + \hbar \xi \sqrt2 \, \hat{x} , \label{HamilEff} \end{eqnarray} where $\varphi=\pi/2$ and we have made use of the electromagnetic field quadratures, i.e. $\hat{x} =(\hat{a}+\hat{a}^\dag)/\sqrt2$, $\hat{p} =-i(\hat{a}-\hat{a}^\dag)/\sqrt2$, obeying the commutation relation $\left[ \hat{x} , \hat{p} \right] = i$. Note that $\Omega_1$ is not present in the effective Hamiltonian Eq.~(\ref{HamilEff}). This is a consequence of deriving the Hamiltonian in a rotating frame with $\Omega_1$ acting as a large frequency in the strong driving parameter regime. The Schr\"odinger dynamics of Eq.~(\ref{HamilEff}) are analogous to those of the 1+1 Dirac equation, where the parameters $\hbar g/\sqrt2$ and $\hbar \lambda / 2$ simulate, respectively, the speed of light and the particle mass. Moreover, we also have an external potential $ \Phi = \hbar \xi \sqrt2 \, \hat{x}$ which is linear in the particle position. The simulated dynamics thus cover a wide range of physical regimes. We would like to point out that, for fixed coupling constant $g$, the simulated mass grows linearly with the amplitude of the weak driving $\lambda$, while the strength of the potential can be adjusted with the longitudinal driving amplitude $\xi$. This is in contrast with the trapped-ion implementation, where one needs a second ion to simulate the external potential \cite{Casanova2010b,Gerritsma2011}. In the case of a massless particle, $\lambda = 0$ and $\nu = 0$, such that $\omega = 2 \Omega_1$ in Eq.~(\ref{HI12}).
In the superconducting quantum circuit implementation, the analysis of relativistic quantum features, such as {\it Zitterbewegung} or Klein paradox, should be carried out by a phase-space description of the electromagnetic field in the transmission line resonator. The initial quantum state of the bosonic degree of freedom of the simulated Dirac particle may be represented by a wave-packet with average position $\langle \hat{x}_0 \rangle$ and average momentum $\langle \hat{p}_0 \rangle$, \begin{eqnarray} \psi (x) = \pi^{-1/4} \exp\left\{ i \langle \hat{p}_0 \rangle x \right\} \,\, \exp \left\{ - \frac{ ( x - \langle \hat{x}_0 \rangle )^2 }{2} \right\} . \label{psi} \end{eqnarray} The wave packet is analogous to the $x-$quadrature representation of an electromagnetic field coherent state $\ket{ \frac{ \langle \hat{x}_0 \rangle + i \langle \hat{p}_0 \rangle }{\sqrt2} } = {\cal D} \pare{ \frac{ \langle \hat{x}_0 \rangle + i \langle \hat{p}_0 \rangle }{\sqrt2} } \ket{0} $, where $\ket{0}$ is the vacuum state of the bosonic field, and ${\cal D} ( \alpha ) = \exp \left\{ \alpha a^\dag - \alpha^* a \right\} $ is the displacement operator. \subsection{Digital-analog quantum simulation of the quantum Rabi and Dicke models}\label{sec:4C} The previous two subsections \ref{sec:4A} and \ref{sec:4B} described analog simulations of different physical models. We will now review the digital-analog quantum simulation of the quantum Rabi and Dicke models implemented in a circuit quantum electrodynamics platform. The simulation employs only JC dynamics and local interactions~\cite{Mezzacapo14,Lamata16}. We describe how the rotating and counter-rotating Hamiltonians of the corresponding evolution can be straightforwardly implemented using digital techniques. By interleaving the dynamics of rotating and counter-rotating Hamiltonians, the evolution of the quantum Rabi and Dicke models can be implemented in all parameter regimes of light-matter coupling. 
At the end of this section, we illustrate how a Dirac equation evolution can be achieved in the limit of negligible mode frequency. We begin by assuming a generic circuit quantum electrodynamics platform composed of a superconducting qubit coupled to a transmission line microwave resonator. This scenario is described by the Hamiltonian~\cite{blais_pra} \begin{equation} \hat{\cal H} = \hbar\omega_r \hat{a}^{\dagger}\hat{a} +\frac{\hbar\omega_q}{2}\hat{\sigma}_z +\hbar g(\hat{a}^{\dagger}\hat{\sigma}_-+\hat{a}\hat{\sigma}_+),\label{QubitResHam} \end{equation} where $\omega_r$, $\omega_q$ are, respectively, the resonator and qubit transition frequencies, $g$ is the qubit-cavity coupling strength, $\hat{a}^{\dagger}$ is the creation bosonic operator for the cavity mode, and $\hat{\sigma}_+,\hat{\sigma}_-$ are raising and lowering spin operators acting on the qubit. Let us take a look at the Hamiltonian of the QRM \begin{equation} \hat{\cal H}_R=\hbar\omega^R_r \hat{a}^{\dagger}\hat{a} +\frac{\hbar\omega^R_q}{2}\hat{\sigma}_z +\hbar g_R\hat{\sigma}_x(\hat{a}^{\dagger}+\hat{a})\label{RabiHam}. \end{equation} It turns out that its evolution can be encoded in a superconducting qubit platform with available JC interactions [Eq.~(\ref{QubitResHam})] by a digital decomposition. \begin{figure} \includegraphics[scale=0.26]{FrequencyScheme} \caption{(Color online) Frequency diagram of the digital-analog implementation of the quantum Rabi Hamiltonian. A superconducting qubit of frequency $\omega_q$ interacts with a microwave resonator with transition frequency $\omega_r$. The evolutions with $\hat{\cal H}_{1,2}$ in Eqs.~(\ref{Ham11}), (\ref{Ham22}) are implemented, respectively, with a Jaynes-Cummings interaction (step 1), and with another Jaynes-Cummings dynamics at a different detuning, interspersed with $\pi$ pulses (step 2), to transform the second Jaynes-Cummings evolution onto an anti-Jaynes-Cummings interaction~\cite{Mezzacapo14}.
\label{FrequencyScheme}} \end{figure} Let us express Eq.~(\ref{RabiHam}) as the sum of two parts, $\hat{\cal H}_R=\hat{\cal H}_1+\hat{\cal H}_2$, with \begin{align} \hat{\cal H}_1 &=\frac{\hbar\omega^R_r}{2}\hat{a}^{\dagger}\hat{a} +\frac{\hbar\omega^1_q}{2}\hat{\sigma}_z +\hbar g(\hat{a}^{\dagger}\hat{\sigma}_-+\hat{a}\hat{\sigma}_+) , \label{Ham11} \\ \hat{\cal H}_2 &=\frac{\hbar\omega^R_r}{2} \hat{a}^{\dagger}\hat{a} -\frac{\hbar\omega^2_q}{2}\hat{\sigma}_z +\hbar g(\hat{a}^{\dagger}\hat{\sigma}_++\hat{a}\hat{\sigma}_-), \label{Ham22} \end{align} where we have considered the qubit frequency in the two terms in such a way that $\omega_q^1-\omega_q^2=\omega^R_q$. The dynamics arising from the two Hamiltonians in Eqs.~(\ref{Ham11}), (\ref{Ham22}) can be implemented in a standard circuit quantum electrodynamics platform that includes the possibility of fast detuning of the qubit frequency, see Fig.~\ref{FrequencyScheme}. Beginning with the qubit-resonator Hamiltonian in Eq.~(\ref{QubitResHam}), we can transform into a frame which rotates at frequency $\tilde{\omega}$, where an effective interaction Hamiltonian results \begin{equation} \tilde{\cal H}=\hbar\tilde{\Delta}_r\hat{a}^{\dagger}\hat{a}+\hbar\tilde{\Delta}_q\hat{\sigma}_z+\hbar g(\hat{a}^{\dagger}\hat{\sigma}_-+\hat{a}\hat{\sigma}_+),\label{IntHam} \end{equation} with $\tilde{\Delta}_r=(\omega_r-\tilde{\omega})$ and $\tilde{\Delta}_q=\left(\omega_q-\tilde{\omega}\right)/2$. Accordingly, Eq.~(\ref{IntHam}) coincides with ${\cal H}_1$ after redefinition of the coefficients. 
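The decomposition can be checked numerically, together with the convergence of the interleaved (Trotterized) evolution towards the exact quantum Rabi dynamics. A minimal sketch, with $\hbar=1$ and illustrative parameters and truncation of our own choosing:

```python
import numpy as np

def expm_h(H, t):
    """exp(-i H t) for Hermitian H, via diagonalization."""
    E, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * E * t)) @ V.conj().T

N = 15                                       # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)
I2, If = np.eye(2), np.eye(N)
sz = np.diag([1.0, -1.0])                    # qubit basis {|e>, |g>}
sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # sigma_+ = |e><g|
sm = sp.T

wr, wq1, wq2, g = 1.0, 1.5, 0.5, 0.5         # illustrative values
H1 = (wr/2)*np.kron(I2, a.conj().T @ a) + (wq1/2)*np.kron(sz, If) \
     + g*(np.kron(sm, a.conj().T) + np.kron(sp, a))      # JC step
H2 = (wr/2)*np.kron(I2, a.conj().T @ a) - (wq2/2)*np.kron(sz, If) \
     + g*(np.kron(sp, a.conj().T) + np.kron(sm, a))      # anti-JC step

# H1 + H2 is exactly the quantum Rabi Hamiltonian with
# w_r^R = wr, w_q^R = wq1 - wq2 and g_R = g:
HR = wr*np.kron(I2, a.conj().T @ a) + ((wq1 - wq2)/2)*np.kron(sz, If) \
     + g*np.kron(sp + sm, a + a.conj().T)
assert np.allclose(H1 + H2, HR)

# Interleaving the two steps converges to the exact Rabi evolution
t = 1.0
U_exact = expm_h(HR, t)
errs = []
for n in (2, 8, 32, 128):
    step = expm_h(H1, t/n) @ expm_h(H2, t/n)
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, n) - U_exact, 2))
# errs shrinks roughly as 1/n (first-order Trotter error)
```

The first-order Trotter error decreases roughly as $1/n$ with the number of interleaving steps $n$, which is why repeating the two-step sequence improves the fidelity of the simulated dynamics.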
The counter-rotating Hamiltonian ${\cal H}_2$ can be realized by applying local qubit drivings to $\tilde{\cal H}$, employing a different detuning for the qubit frequency, \begin{equation} e^{-i \pi\hat{\sigma}_x/2}\tilde{\cal H}e^{i \pi\hat{\sigma}_x/2}=\hbar\tilde{\Delta}_r\hat{a}^{\dagger}\hat{a}-\hbar\tilde{\Delta}_q\hat{\sigma}_z+\hbar g(\hat{a}^{\dagger}\hat{\sigma}_++\hat{a}\hat{\sigma}_-).\label{RotHam} \end{equation} By choosing different qubit-resonator detunings in the two steps, $\tilde{\Delta}^1_q$ and $\tilde{\Delta}^2_q$, the quantum Rabi Hamiltonian [Eq.~(\ref{RabiHam})] is simulated via a digital expansion~\cite{Lloyd96}, interleaving the two interactions. In the protocol described here, customary quasi-resonant JC dynamics with different qubit frequencies are combined with single-qubit drivings to perform standard qubit rotations~\cite{blais_pra}. This sequence is repeated following the digital quantum simulation scheme in order to achieve a better fidelity of the quantum Rabi dynamics. Note the existence of a direct relationship between the effective system parameters and the real circuit variables. The simulated bosonic mode frequency is related to the resonator detuning $\omega_r^R=2\tilde{\Delta}_r$, while the effective two-level system frequency is set by the qubit detunings of the two steps, $\omega_q^R/2=\tilde{\Delta}_q^1-\tilde{\Delta}_q^2$. Finally, the qubit-resonator coupling strength is the same in both cases, $g_R=g$. \subsection{Quantum simulation with ultrastrong couplings}\label{sec:4D} In this section, we analyze analog quantum simulator devices in the USC and DSC regimes which are used to study complex phenomena occurring in real systems, such as biologically relevant molecular complexes. This should not be confused with Sec.~\ref{sec:4A1}, dealing with quantum simulations of models in USC and DSC regimes employing superconducting quantum simulators in the \emph{strong} coupling regime.
\subsubsection{Jahn-Teller transitions in molecules} Jahn-Teller models describe the interaction of localized electronic states with vibrational modes in crystals or in molecules \cite{bersuker}. Certain molecules contain a degeneracy in their ground state due to their molecular configuration. A spontaneous symmetry breaking of the geometry of the molecule, a process known as a Jahn-Teller transition, results in one favorable stable configuration, becoming the absolute ground state of the system. Interesting molecular systems undergoing a Jahn-Teller transition exist, e.g. fullerene. Therefore, simulating such quantum systems is very attractive. In a pioneering work by Larson \cite{larson2008}, a connection was made between a class of Jahn-Teller Hamiltonians and a qubit coupled to an oscillator in the USC regime. This initial work was followed by several extensions into other classes of Jahn-Teller models and how to efficiently simulate them using superconducting quantum circuits \cite{meaney2010, dereli2012}. Following the original work~\cite{larson2008}, the most general Hamiltonian of a $E\times\epsilon$ Jahn-Teller model implemented in a cavity QED setting using a single two-level system coupled to two degenerate modes of a cavity has the form \begin{multline} \hat{H}_{\epsilon\times E}/\hbar = \omega_c(\hat{a}^{\dag}\hat{a}+\hat{b}^{\dag}\hat{b})+\frac{\Omega_q}{2}\hat{\sigma}_z +\\ \lambda[(\hat{a}^{\dag}+\hat{a})(\hat{\sigma}_+e^{-i\theta}+\hat{\sigma}_-e^{i\theta}) +\\ (\hat{b}^{\dag} + \hat{b})(\hat{\sigma}_+e^{-i\phi} + \hat{\sigma}_-e^{i\phi})]. \end{multline} Here, $\omega_c$ is the frequency of the two cavity modes. $\theta$ and $\phi$ represent different phases of the mode field interacting with the two-level system. $\lambda$ is the interaction strength between the two-level system and each cavity mode. This Hamiltonian has a very strong resemblance to the QRM [Eq.~(\ref{QRH})], where the only difference is the presence of the second mode $\hat{b}$. 
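A quick numerical diagonalization illustrates the Jahn-Teller phenomenology contained in this Hamiltonian. In the sketch below (our own illustration, for the choice $\theta=\phi=0$, $\hbar=1$, and arbitrary parameters and truncation), the ground state evolves from a unique, undisplaced state at weak coupling to a quasi-degenerate doublet of displaced, symmetry-broken configurations at deep-strong coupling:

```python
import numpy as np

# E x epsilon Jahn-Teller Hamiltonian for theta = phi = 0:
# H = wc (na + nb) + (Wq/2) sz + lam (Xa + Xb) sx,  with X = a + a^dag, hbar = 1.
N = 16                                       # truncation per bosonic mode
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)
num, X, If = a.conj().T @ a, a + a.conj().T, np.eye(N)
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def H_JT(lam, wc=1.0, Wq=1.0):
    # Ordering: qubit x mode_a x mode_b
    H = wc * (np.kron(I2, np.kron(num, If)) + np.kron(I2, np.kron(If, num)))
    H += (Wq / 2) * np.kron(sz, np.kron(If, If))
    H += lam * np.kron(sx, np.kron(X, If) + np.kron(If, X))
    return H

def gap(lam):
    E = np.linalg.eigvalsh(H_JT(lam))
    return E[1] - E[0]

g0 = gap(0.0)    # weak coupling: gap = min(wc, Wq) = 1
g1 = gap(1.2)    # deep-strong coupling: quasi-degenerate displaced doublet
```

The near-degeneracy of the two lowest levels at large $\lambda$ is the finite-size precursor of the degenerate displaced minima characteristic of the Jahn-Teller transition.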
The Jahn-Teller transition occurs for values of the qubit-oscillator coupling strengths which correspond to the DSC regime. Such a regime has recently been attained unambiguously in a superconducting circuit \cite{yoshihara2017a}, as detailed in Sec.~\ref{sec:3A}. The $\epsilon\times E$ Jahn-Teller model is the simplest of its kind. More complex, and thus more realistic, models contain several oscillator modes, with a hopping interaction between those modes. The simplest of such multi-mode models is the $E\times(\beta_1+\beta_2)$ Jahn-Teller model, also known as the Herzberg-Teller model. Work by Dereli \emph{et al.} \cite{dereli2012} studied the behavior of two coupled modes interacting with the same qubit. Its implementation in a superconducting circuit is presented in Fig.~\ref{fig:JT}. The Hamiltonian of such a system can be expressed as \begin{multline} \hat{H}_{(\beta_1+\beta_2)\times E}/\hbar = \frac{\Omega_q}{2}\hat{\sigma}_z + \Omega_1\hat{a}_1^{\dag}\hat{a}_1+\Omega_2 \hat{a}_2^{\dag}\hat{a}_2 +\\ \left[g_1(\hat{a}_1+\hat{a}_1^{\dag})+ g_2(\hat{a}_2+\hat{a}_2^{\dag})\right]\hat{\sigma}_x + J(\hat{a}_1^{\dag}\hat{a}_2 + \hat{a}_2^{\dag}\hat{a}_1), \end{multline} where $J$ is the mode-mode coupling energy representing the hopping rate of phonons in the simulated system. $g_i$ are the qubit-mode interaction strength coefficients, representing the coupling of a molecular transition to each of the two vibrational modes of the simulated molecule. This type of Hamiltonian can be realized using the technology of superconducting quantum circuits. This simple Hamiltonian already contains the physics of real systems of interest such as the two phonon modes in C$_6$H$_6^{\pm}$ and the two phonon modes of Fe$^{2+}$ in ZnS. More complex Jahn-Teller models involve the interaction of a qubit with several bosonic modes.
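For the symmetric case $\Omega_1=\Omega_2=\Omega$ and $g_1=g_2=g$ (a simplification we adopt here for illustration), the normal-mode transformation $\hat{c}_{\pm}=(\hat{a}_1\pm\hat{a}_2)/\sqrt{2}$ makes the connection to the QRM explicit:

```latex
\hat{H}_{(\beta_1+\beta_2)\times E}/\hbar
  = \frac{\Omega_q}{2}\hat{\sigma}_z
  + (\Omega+J)\,\hat{c}_{+}^{\dagger}\hat{c}_{+}
  + (\Omega-J)\,\hat{c}_{-}^{\dagger}\hat{c}_{-}
  + \sqrt{2}\,g\,\big(\hat{c}_{+}+\hat{c}_{+}^{\dagger}\big)\hat{\sigma}_x .
```

Only the symmetric mode couples to the qubit, with enhanced strength $\sqrt{2}g$ and shifted frequency $\Omega+J$, while the antisymmetric mode is a spectator; asymmetries in $g_i$ or $\Omega_i$ hybridize the two normal modes and enrich the dynamics.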
A possible candidate to perform an analog simulation would correspond to a qubit ultrastrongly coupled to a coplanar waveguide resonator supporting a collection of modes. By reducing the fundamental mode frequency of the resonator the qubit can simultaneously interact with many modes. Experiments have already been performed using superconducting qubit circuits where such a configuration has been engineered \cite{sundaresan2015, puertas2018}. Irrespective of the simulated type of Jahn-Teller model, it is crucial to attain ultrastrong couplings between the two-level system, or qubit, and the bosonic modes involved in order to perform a faithful analog simulation of the actual molecular system. \begin{figure} \includegraphics[width = \columnwidth]{JT} \caption{\label{fig:JT} Circuit schematic to produce the Jahn-Teller $E\times(\beta_1 + \beta_2)$ model. (a) Circuit diagram of a flux qubit galvanically coupled to two lumped-element resonators, which are capacitively coupled to each other. (b) Circuit representation with the capacitors being interdigitated-finger style~\cite{dereli2012}.} \end{figure} \subsubsection{Energy transfer in photosynthetic complexes} The transfer of energy in light-harvesting systems has been a subject of intense study in the last decade. The observation of excitonic quantum oscillations in molecular complexes as a result of light absorption triggered the birth of a field now known as quantum biology \cite{Engel2007}. Biological systems are inherently complex and particularly hard to describe with accuracy, especially considering the fact that key biological processes, in this case the transfer of energy within the molecular complex, are heavily influenced by the environmental fluctuations and the finite temperature. Therefore, a quantum simulator that aims at simulating such relevant processes needs to include the environmental degrees of freedom.
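The impact of an ultrastrong environmental coupling can already be seen with a single environmental mode coupled longitudinally to a qubit, i.e. the exactly solvable independent-boson model: for the initial state $|+\rangle|0\rangle$ the coherence obeys $|\langle\hat{\sigma}_-\rangle(t)| = \tfrac{1}{2}\exp[-4(\lambda/\omega)^2(1-\cos\omega t)]$, collapsing within a fraction of a mode period once $\lambda/\omega$ is of order one. A minimal numerical check (our own illustrative parameters, $\hbar=1$):

```python
import numpy as np

# Independent-boson (pure dephasing) model:
# H = w b^dag b + (Wq/2) sz + lam sz (b + b^dag),  hbar = 1.
# For |psi(0)> = |+x>|0>, |<sigma_->|(t) = 0.5 exp(-4 (lam/w)^2 (1 - cos(w t))).
N = 25                                       # Fock truncation, ample for lam/w = 0.5
b = np.diag(np.sqrt(np.arange(1.0, N)), 1)
If = np.eye(N)
sz = np.diag([1.0, -1.0])                    # qubit basis {|e>, |g>}
sm = np.array([[0.0, 0.0], [1.0, 0.0]])      # sigma_- = |g><e|

w, Wq, lam = 1.0, 1.0, 0.5                   # lam/w = 0.5: USC-scale dephasing
H = w*np.kron(np.eye(2), b.conj().T @ b) + (Wq/2)*np.kron(sz, If) \
    + lam*np.kron(sz, b + b.conj().T)

psi0 = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.eye(N)[0])   # |+x>|0>
E, V = np.linalg.eigh(H)
Sm = np.kron(sm, If)

def coherence(t):
    psi = V @ (np.exp(-1j*E*t) * (V.conj().T @ psi0))
    return abs(psi.conj() @ Sm @ psi)

c_half = coherence(np.pi / w)        # collapse: 0.5 * exp(-8 (lam/w)^2)
c_full = coherence(2*np.pi / w)      # full revival: 0.5
```

For a continuum of such modes, e.g. an ohmic environment, the revivals at multiples of $2\pi/\omega$ are washed out and the coherence loss becomes irreversible, which is the regime such a simulator targets.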
As measured in spectroscopic experiments \cite{Wendling2000}, molecular complexes consist of several nodes which are coupled to each other in a particular network configuration. The most popular of light-harvesting complexes, the Fenna-Matthews-Olson (FMO) complex, contains seven nodes, and the interaction between nodes is in fact ultrastrong. In addition, the correlation time of the bath is found to be comparable to the timescale of the internal dynamics of the molecule. In other words, the system is heavily non-Markovian. The strong effect of the environment is due to an ultrastrong coupling of the nodes within the molecular complex to their environmental degrees of freedom, most likely phonons in the case of FMO. An analog quantum simulator must then consist of qubits playing the role of the FMO nodes which couple to each other ultrastrongly, with some of the qubits ultrastrongly coupled to the environment. Ultrastrong qubit-qubit interactions are relatively easy to obtain using superconducting circuits \cite{Majer2005}, while ultrastrong qubit-bath interactions have just recently been achieved in experiments using superconducting qubits in transmission lines \cite{forn-diaz2017}. A theoretical proposal of such a quantum simulator was already put forward \cite{Mostame2012} using superconducting flux qubits. Figure~\ref{fig:FMO} shows a schematic of the qubit network mimicking that of the actual FMO complex, along with a circuit representation of the qubit-environment coupling. The interaction with the flux qubit is longitudinal to simulate ultrastrong dephasing. \begin{figure}[!hbt] \centering \includegraphics{FMO} \caption{\label{fig:FMO} a) Experimental layout for simulating the exciton dynamics and ENAQT in the FMO complex \cite{Mostame2012}. b) Circuit schematic representation of a single qubit coupled to an ohmic environment. In this circuit, the coupling is longitudinal with respect to the qubit, simulating in this way ultrastrong dephasing.
The environment can be simulated by a linear chain of LCR resonators, as in this figure, or by using a transmission line, as demonstrated experimentally \cite{forn-diaz2017}.} \end{figure} \section{Physics of the ultrastrong coupling regime}\label{sec:5} In this section, we will review some of the intrinsic physics occurring in the USC regime, and what kind of applications have been proposed for ultrastrongly coupled systems. First, we will present several instances in which novel quantum optical phenomena are possible in the USC regime and how they could be useful for quantum information processing purposes. We will continue with an important application in quantum computing, namely the generation of ultrafast quantum gates. The section will close with a description of how dissipative systems must be treated in the USC regime. \subsection{Quantum optics}\label{sec:5A} The achievement of ultrastrong couplings in any physical platform opens up the possibility to study counter-intuitive phenomena appearing in the Rabi model which are not present in the more familiar JC model~\cite{blockade_ridolfo,stassi2013,felicetti2014,garziano2014,garziano2015}. Beyond the instances described in this subsection, concepts appearing in other branches of physics are also being studied in the USC regime, such as symmetry breaking and the Higgs mechanism~\cite{garziano2014} and approaches relating to Feynman diagrams~\cite{1367-2630-19-5-053010}. \subsubsection{Two atoms excited by a single photon} A particular instance is the case of a photon which excites two atoms at the same time in a reversible manner~\cite{garziano2016}.
In a generalized version of the Rabi model, two two-level atoms interact with a single mode of a cavity, given by the Hamiltonian \begin{multline} \label{eq:garziano} \hat{\cal H}=\frac{\hbar\Omega}{2} \sum_{i} \hat{\sigma}_{z}^{(i)} + \hbar\omega \hat{a}^{\dagger}\hat{a} \\ +\hbar g \left( \hat{a}+\hat{a}^{\dagger}\right) \sum_{i} \left[ \cos{\left( \theta \right)} \hat{\sigma}^{(i)}_{x}+ \sin{\left( \theta \right)} \hat{\sigma}^{(i)}_{z} \right]. \end{multline} As shown in Fig.~\ref{fig:garziano_levels} and Fig.~\ref{fig:garziano_spectrum}, a mixing exists in third-order perturbation theory between states $|g,g,1\rangle$ and $|e,e,0\rangle$ due to the counter-rotating terms. At the resonance point where the frequency of the cavity is twice the frequency of each atom, the effective Hamiltonian is given by $\hat{\cal H}_{\rm eff} = - \hbar\Omega_{\rm eff} \left( |e,e,0\rangle \langle g,g,1| + \text{H.c.} \right)$, where the maximum coupling is achieved when \begin{equation} \frac{\Omega_{\rm eff} }{ \Omega} \bigg|_{\theta=\cos^{-1}{\sqrt{2/3}}} = \frac{16}{9\sqrt{2}} \left( \frac{g}{\Omega}\right)^{3}. \end{equation} \begin{figure}[!hbt] \centering \includegraphics[width = 7cm]{garziano_levels} \caption{\label{fig:garziano_levels} (Color online) Multi-atom excitation with a single photon. As a result of USC physics, two or more atoms in an optical cavity can absorb a single photon. The figure shows a sketch of the process giving the main contribution to the effective coupling between the bare states $|g,g,1\rangle$ and $|e,e,0\rangle$, via intermediate virtual transitions. The coupling $\lambda$ corresponds to $g$ in the main text. The initial state $|g,g,1\rangle$ transitions to virtual intermediate excited states which would not conserve the total energy. At the end of the process, the final state $|e,e,0\rangle$ is excited, preserving the total system energy. 
Here, the processes which do not conserve the excitation number are represented by an arrowed dashed line. Each path includes three virtual transitions involving out-of-resonance intermediate states. The figure displays only the process that gives the main contribution to the effective coupling between the bare states $|g,g,1\rangle$ and $|e,e,0\rangle$. Higher-order processes, depending on the atom-field interaction strength, can also contribute. The transition matrix elements are also shown~\cite{garziano2016}.} \end{figure} \begin{figure}[!hbt] \centering \includegraphics[width = 5cm]{garziano_spectrum} \caption{\label{fig:garziano_spectrum} (Color online) (a) Frequency differences $\omega_{i,o} = \omega_{i} - \omega_{o}$ for the lowest energy eigenstates of Eq.~(\ref{eq:garziano}) as a function of the resonator frequency $\omega_{c}/\omega_{q}$. $\omega_c$ and $\omega_q$ correspond to $\omega$ and $\Omega$ of the main text, respectively. Starting from the lowest excited states of the spectrum, a large anticrossing around $\omega_{c}/\omega_{q} =1$ can be observed, corresponding to the standard vacuum Rabi splitting. Here, we consider a normalized coupling rate $g/\omega_{q}=0.1$ between the resonator and each of the two qubits. The particular case $\theta = \pi/6$ is shown. The arrows indicate the ordinary vacuum Rabi splitting arising from the coupling between the states $|g,g,1\rangle$ and $(|g,e,0\rangle + |e,g,0 \rangle)/\sqrt{2}$; (b) enlarged view of the spectral region delimited by a square in (a), where the third and fourth levels display an apparent crossing. The enlarged view shows a clear avoided-level crossing. The level splitting originates from the hybridization of the states $|g,g,1\rangle$ and $|e,e,0\rangle$ due to the presence of counterrotating terms in the system Hamiltonian. The resulting states are well approximated by $(|g,g,1\rangle \pm |e,e,0\rangle)/\sqrt{2}$. 
This splitting is not present in the RWA, where the coherent coupling between states of different number of excitations is not allowed~\cite{garziano2016}.} \end{figure} \subsubsection{Ancilla qubit spectroscopy} Given the extreme parameters required to reach the USC regime, there is an intrinsic difficulty in performing direct spectroscopic measurements of the system as well as observing its dynamics. By contrast, several proposals were put forward to use a second qubit, known as an ancilla qubit, coupled to the USC system to extract some of its properties \cite{lolli2015, felicetti2015b, garziano2014}. The ancilla-system coupling strength is in the strong coupling regime. In this configuration, the spectrum of the ancilla qubit contains information on the eigenstates of the USC system. Therefore, the ancilla qubit can be used as a probe of the many properties of the otherwise inaccessible ultrastrongly coupled system. In the particular configuration studied by Lolli \emph{et al.} \cite{lolli2015}, the Hamiltonian that describes the dynamics of the system is given by \begin{equation} \label{eq:lolli} \hat{\cal H} = \hat{\cal H}_{S} + \frac{\Omega_{\rm an}}{2} \hat{\sigma}_{z}^{(\rm an)} + g_{\rm an} (\hat{a}+\hat{a}^{\dagger}) \hat{\sigma}_{x}^{(\rm an)} + \Omega_{d} \cos{\left( \omega_{d} t\right)} \hat{\sigma}_{x}^{(\rm an)}, \end{equation} where $\Omega_{\rm an}$ is the natural frequency of the ancillary qubit, $g_{\rm an}$ is the coupling of the ancilla qubit to a single mode of the cavity, $\Omega_{d}$ and $\omega_{d}$ characterize the periodic driving of the ancilla qubit with a classical field, and $\hat{\cal H}_{S}$ is the Hamiltonian of the ultrastrongly coupled system [Eq.~(\ref{QRH})] the ancilla qubit is probing. In the particular work of Lolli \emph{et al.}, the ultrastrongly coupled system consists of a single cavity mode coupled to an ensemble of identical two-level systems with a collective coupling well in the USC regime. 
Several instances were studied corresponding to the Dicke, Tavis-Cummings, and Hopfield models, whose respective Hamiltonians are \begin{equation} \label{eq:DTH} \begin{split} &\hat{\cal H}_{\text{Dicke}}= \omega \hat{a}^{\dagger}\hat{a} + \Omega \hat{J}_{z} + \frac{g}{\sqrt{N}} \left( \hat{a} + \hat{a}^{\dagger} \right) \left( \hat{J}_{+} + \hat{J}_{-}\right), \\ &\hat{\cal H}_{\text{TC}}= \omega \hat{a}^{\dagger}\hat{a} + \Omega \hat{J}_{z} + \frac{g}{\sqrt{N}} \left( \hat{a} \hat{J}_{+} + \hat{a}^{\dagger} \hat{J}_{-} \right), \\ &\hat{\cal H}_{\text{Hopfield}}= \hat{\cal H}_{\text{Dicke}} + \frac{g^{2}}{\Omega} D \left( \hat{a} + \hat{a}^{\dagger}\right)^{2}. \end{split} \end{equation} $\omega$ is the frequency of the single mode cavity, $\Omega$ corresponds to the transition frequency of the $N$ identical two-level atoms, $g$ describes the collective coupling, and the collective operators are given by $\hat{J}_{z} = \frac{1}{2} \sum_{i} \hat{\sigma}_{z}^{(i)}$ and $\hat{J}_{\pm}=\sum_{i} \hat{\sigma}_{\pm}^{(i)}$. \begin{figure}[!hbt] \centering \includegraphics[width=202pt]{dicke.png} \\ \includegraphics[width=202pt]{tcummings.png} \\ \includegraphics[width=202pt]{hopfield.png} \caption{\label{fig:lolli} (Color online) Ancilla qubit spectroscopy of ultrastrongly coupled systems. The top, middle, and bottom panels correspond to Eq.~(\ref{eq:lolli}) with the system $S$ being the Dicke, Tavis-Cummings, and Hopfield models, respectively [Eq.~(\ref{eq:DTH})]. $\omega_c$ and $\lambda$ correspond to $\omega$ and $g$ of the main text, respectively. Considering the ancilla qubit as the measurement qubit $M$, for finite $g_{M}$ the coupling between $S$ and $M$ creates a mixing between states of the form $|\psi_{S}\rangle \otimes |\psi_{M}\rangle$ and the driving induces transitions from the ground state $|G_{S+M}\rangle$ to excited states.
Therefore, the relevant excited states $|l\rangle$ are those having the largest values of $| \langle G_{S+M} | \hat{\sigma}^{(M)}_{x}| l \rangle |^{2}$. The results show that, due to the off-resonant coupling, there is only one dominant spectroscopically active level (black thick solid line), which has a strong overlap with the state $|G_{S}\rangle \otimes |\uparrow \rangle$. Left panels: excitation energies for the three considered systems $S$ versus the coupling $\lambda$ between the boson field and the $N$ atoms. Right panels: Lamb shift of the ancillary qubit transition as a function of the coupling $\lambda/\omega_c$ of the coupled system $S$ under consideration. The red-dashed lines in the right panels depict the shift predicted by the analytic calculation. The agreement between the numerical diagonalization results and the analytical formula [Eq.~(\ref{eq:lamb-lolli})] is excellent in the considered range of values for $\lambda / \omega_{c}$, except for points where there are avoided crossings with other levels~\cite{lolli2015}.} \end{figure} All three models are shown in Fig.~\ref{fig:lolli}. Due to the ancilla-system coupling, a measurable Lamb shift in the frequency of the ancillary qubit appears.
Up to second order in perturbation theory in $g_{\rm an}$, this shift can be analytically calculated to be \begin{equation}\label{eq:lamb-lolli} \begin{split} \delta \omega_{\rm an} &\sim g^{2}_{\rm an} \left( \frac{1}{\omega_{\rm an}-\omega} + \frac{1}{\omega_{\rm an}+\omega} \right) \langle \left( \hat{a} + \hat{a}^{\dagger} \right)^{2} \rangle \\ &+ g^{2}_{\rm an} \left( \frac{1}{\left( \omega_{\rm an}-\omega \right)^{2}} - \frac{1}{\left(\omega_{\rm an}+\omega\right)^{2}} \right) \langle \hat{V}^{(S)} \rangle, \end{split} \end{equation} where $\hat{V}^{(\text{Dicke})} = g N^{-1/2} \left(\hat{a} + \hat{a}^{\dagger} \right) \hat{J}_{x}$, $\hat{V}^{(\text{TC})} = g N^{-1/2} \left( \hat{a}^{\dagger} \hat{J}_{-} + \hat{a} \hat{J}_{+} \right)$ and $\hat{V}^{(\text{Hopfield})} = \hat{V}^{(\text{Dicke})} + 2 g^{2}\Omega^{-1} \left( \hat{a} + \hat{a}^{\dagger}\right)^{2}$. As is explicit from the equations, the shift depends on the ground state photon population $\langle \hat{a}^{\dagger}\hat{a}\rangle$, on the anomalous expectation value $\langle \hat{a}^{\dagger 2} + \hat{a}^{2} \rangle$, and on the correlations between the cavity and the $N$ two-level systems. Figure~\ref{fig:lolli} shows the Lamb shift for the three discussed models. \subsubsection{Optomechanics in the USC regime} Solid-state nanoelectromechanical resonators have been considered as mediators of the interactions between qubits~\cite{PhysRevA.70.052315}. In this scenario, NOON states are another type of quantum state which can be obtained as a consequence of ultrastrong interactions~\cite{macri2016}. It has been shown that the preparation of NOON states in ultrastrongly coupled optomechanical systems is possible following a completely controlled and deterministic procedure.
The setup consists of two identical, optically coupled optomechanical systems which can be modeled by the photonic modes of the optical cavities and the phononic modes from the mechanical oscillators (see the description of the setup in Fig.~\ref{fig:macri}). The dynamics of each independent optomechanical subsystem are characterized by the Hamiltonian \begin{equation} \hat{\cal H}_{0}^{(i)} = \hbar\omega_{R} \hat{a}^{\dagger}_{i} \hat{a}_{i} + \hbar\omega_{M} \hat{b}^{\dagger}_{i} \hat{b}_{i} + \hbar g_{M} \hat{a}^{\dagger}_{i} \hat{a}_{i} \left( \hat{b}_{i} + \hat{b}_{i}^{\dagger} \right), \end{equation} in the local Fock basis $|n_{i},m_{i}\rangle$, where the integers $n_{i}$ and $m_{i}$ represent the number of photons and vibrational excitations in the $i$-th optomechanical system. The preparation of entangled mechanical NOON states requires two interacting optical cavities with an interaction Hamiltonian $\hat{\cal H}_{I}=\hbar g_{R}\left( \hat{a}_{1}^{\dagger} \hat{a}_{2} + \hat{a}_{1} \hat{a}^{\dagger}_{2} \right)$. Starting from the ground state, which contains no photons or phonons in either subsystem, one of the optical resonators is excited with an external $\pi$-pulse resonant with the transition $|0_{1},0_{1};0_{2},0_{2}\rangle \leftrightarrow |1_{1},m_{1};0_{2},0_{2}\rangle$. Then, the system freely evolves with the interaction Hamiltonian, undergoing Rabi oscillations. The time-dependent quantum state is then given by $|\Psi(t) \rangle = \cos{\left(g_{R}t\right)} |1_{1},m_{1};0_{2},0_{2}\rangle - i \sin{\left(g_{R}t\right)} |0_{1},0_{1};1_{2},m_{2}\rangle$. A second pulse, resonant with the transition $|1_{i},m_{i}\rangle \leftrightarrow |0_{i},N_{i}\rangle$, will produce the desired NOON state, \begin{equation} |\Psi\rangle = \alpha |0_{1},N_{1};0_{2},0_{2}\rangle -i \beta |0_{1},0_{1};0_{2},N_{2}\rangle.
\end{equation} \begin{figure}[!hbt] \centering \includegraphics[width = 7cm]{macri} \caption{\label{fig:macri} (Color online) Optomechanical USC. Two identical coupled opto-mechanical systems, with frequency $\omega_{M}$, are parametrically coupled with a single-mode optical resonator or cavity, which can be driven by external optical pulses with specific central frequencies. One cavity mirror can be added to the end of both the opto-mechanical systems for optical readout~\cite{macri2016}.} \end{figure} It is worth mentioning that further developments and applications of the USC and DSC regimes to coupled mechanical systems are expected, given that the physical conditions are not necessarily equivalent to those of coupled electromagnetic oscillators~\cite{Sudhir2012}. Other work in ultrastrongly coupled oscillator systems, including optomechanics, has investigated the influence of $A^2$ terms and their possible detection in a real experiment \cite{Tufarelli2015, Rossil2017}. \subsection{Quantum computation}\label{sec:5B} Being able to tune the coupling strength in a light-matter system from strong to the ultrastrong regime allows one to observe and propose new strategies and protocols in quantum information processing, such as remote entanglement applications~\cite{PhysRevLett.120.093601,PhysRevLett.120.093602}. In this section, we discuss the possibilities of achieving ultrafast quantum computation, of encoding protected qubits to store quantum information, and of manipulating and preparing desired quantum states. \subsubsection{Ultrafast quantum computation} Ultrafast two-qubit gates have been considered as one potential application~\cite{kyaw2015,wang2012,kyaw2015b,wang2016} of the USC regime in quantum computation~\cite{Romero2012}.
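One ingredient of the proposal discussed next is the composition law of displacement operators, $\hat{\mathcal{D}}(\alpha)\hat{\mathcal{D}}(\beta)=e^{i\,{\rm Im}(\alpha\beta^{*})}\hat{\mathcal{D}}(\alpha+\beta)$. As a warm-up, this identity can be checked numerically in a truncated Fock space; the truncation and the sample amplitudes below are our own choices:

```python
import numpy as np

def displacement(alpha, n_max=40):
    """D(alpha) = exp(alpha a^dag - alpha* a), built by exponentiating the
    Hermitian generator H = -i(alpha a^dag - alpha* a), so that D = exp(iH)."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)
    adag = a.T
    H = -1j * (alpha * adag - np.conjugate(alpha) * a)
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(1j * evals)) @ evecs.conj().T

alpha, beta = 0.5, 0.3j
lhs = displacement(alpha) @ displacement(beta)
rhs = np.exp(1j * np.imag(alpha * np.conjugate(beta))) * displacement(alpha + beta)
# compare in the low-lying Fock corner, far from the truncation edge
err = np.max(np.abs(lhs[:10, :10] - rhs[:10, :10]))
```

The residual in the low-lying Fock corner is limited only by the truncation, since the amplitudes are small compared with $\sqrt{n_{\rm max}}$.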
In the original proposal \cite{Romero2012}, a two-qubit Hamiltonian was considered \begin{equation} \hat{\cal H}=\sum_{i} \frac{\hbar \Omega_{i}}{2} \hat{\sigma}_{z}^{(i)} + \hbar \omega \hat{a}^{\dagger}\hat{a} - \sum_{i} \hbar g_{i} \hat{\sigma}_{z}^{(i)} \left( \hat{a} + \hat{a}^{\dagger} \right), \end{equation} with switchable longitudinal couplings $g_{i}$ (see the circuit diagram of the experimental proposal in Fig.~\ref{fig:romero}). Based on a four-step sequential displacement of the cavity $\hat{\mathcal{D}}\left( \beta \hat{\sigma}_{z} \right)= \exp{ \left[ \left( \beta \hat{a}^{\dagger} - \beta^{*} \hat{a} \right) \hat{\sigma}_{z} \right] }$, using $\hat{\mathcal{D}}\left(\alpha \right) \hat{\mathcal{D}}\left(\beta \right) = e^{i \rm{Im}\left( \alpha \beta^{*} \right)} \hat{\mathcal{D}} \left( \alpha + \beta \right)$, the two-qubit gate was shown to be proportional to a CPHASE quantum gate \begin{equation} \hat{\mathcal{U}} \propto \exp{ \left[ 4 i \frac{g_{1}g_{2}}{\omega^{2}}\sin{\left(\omega t_{1} \right)} \hat{\sigma}_{z}^{(1)} \hat{\sigma}_{z}^{(2)}\right] }, \end{equation} where the fidelity of the gate can reach $99\%$ in the nanosecond time scale for realistic circuit QED technology. \begin{figure}[!hbt] \centering \includegraphics[width = 7cm]{romero} \caption{\label{fig:romero} (Color online) Ultrafast two-qubit gates. The figure shows the circuit schematic to realize ultrafast two-qubit controlled phase gates between two flux qubits galvanically coupled to a single-mode transmission line resonator. The bottom image shows a six Josephson-junction circuit coupled galvanically to a resonator. The flux qubit is defined by three Josephson junctions in the upper loop threaded by external flux $\Phi_1$. Two additional loops allow a tunable and switchable qubit-resonator coupling by controlling fluxes $\Phi_2,\Phi_3$. 
The coupling is defined by the phase drop $\Delta\psi$ across the shared junction~\cite{Romero2012}.} \end{figure} \subsubsection{Protected qubits} Another important example where the USC regime may become relevant in quantum computation is in the encoding of protected qubits~\cite{nataf2011}. Nataf and Ciuti considered the case of multiple qubits coupled to the same resonator mode \begin{equation} \hat{\cal H}/\hbar = \omega \hat{a}^{\dagger}\hat{a} + \frac{\Omega}{2} \sum_{j} \hat{\sigma}_{z}^{(j)} + i \frac{g}{\sqrt{N}} \left( \hat{a} - \hat{a}^{\dagger}\right) \sum_{j} \hat{\sigma}_{x}^{(j)}. \end{equation} Here, $N$ is the total number of qubits coupled to the resonator. It turns out that when the collective coupling of all qubits reaches very large values, the two quasi-degenerate lowest states of the Hamiltonian become \begin{equation} \begin{split} &| \Psi_{G} \rangle \sim \frac{1}{\sqrt{2}} \left[ | \alpha \rangle_{c} | + \rangle^{\otimes N} + \left(-1 \right)^{N} |- \alpha \rangle_{c} | - \rangle^{\otimes N} \right], \\ &| \Psi_{E} \rangle \sim \frac{1}{\sqrt{2}} \left[ | \alpha \rangle_{c} | + \rangle^{\otimes N} - \left(-1 \right)^{N} |- \alpha \rangle_{c} | - \rangle^{\otimes N} \right], \end{split} \end{equation} with $|\pm\rangle$ being eigenstates of $\hat{\sigma}_{x}$. These two states are only weakly coupled to each other, as they belong to different parity chains \cite{casanova2010}. This doublet $\{ | \Psi_{G} \rangle , | \Psi_{E} \rangle\}$ therefore forms a robust qubit, with an energy difference $\delta \sim \Omega\exp{\left( -2g^{2}\omega^{-2} N \right)}$. The analysis of the coherence times is shown in Fig.~\ref{fig:nataf}. Clearly, for increasing coupling strengths, and also for increasing number of qubits, the decoherence rate decreases, yielding a more protected qubit, up to a certain value of the coupling where the decoherence rate saturates.
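The exponential closing of the gap $\delta$ can be checked by direct diagonalization already for $N=1$. The following sketch is our own check (the Fock truncation and the parameter values are assumptions):

```python
import numpy as np

def lowest_splitting(omega, Omega, g, n_max=60):
    """Splitting between the two lowest eigenstates of
    H/hbar = w a^dag a + (W/2) sz + i g (a - a^dag) sx  (the N = 1 case)."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)
    adag = a.T
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = (omega * np.kron(adag @ a, np.eye(2))
         + 0.5 * Omega * np.kron(np.eye(n_max), sz)
         + 1j * g * np.kron(a - adag, sx))
    evals = np.linalg.eigvalsh(H)
    return evals[1] - evals[0]

gaps = [lowest_splitting(1.0, 1.0, g) for g in (0.5, 1.0, 2.0)]
```

The gap shrinks roughly as $\Omega e^{-2g^{2}/\omega^{2}}$, in line with the estimate above.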
\begin{figure}[!hbt] \centering \includegraphics[width = 7cm]{nataf} \caption{\label{fig:nataf} (Color online) Protected quantum computation in the USC regime. $\Omega_0$ and $\omega_{eg}$ correspond to $g$ and $\Omega$ of the main text, respectively. To investigate the robustness of the coherence between the two quasi-degenerate vacua $|\Psi_{G}\rangle$ and $|\Psi_{E}\rangle$, the authors study the non-unitary dynamics of the initially prepared pure state $|\Psi_{0}\rangle = \cos{\theta} |\Psi_{E}\rangle + \sin{\theta} e^{i\phi} |\Psi_{G} \rangle$ in the presence of anisotropic qubit dissipation rates $\Gamma_{y}, ~ \Gamma_{z} \gg \Gamma_{x}$ and for several cavity loss rates. The simulations show that the coherence time increases with the normalized vacuum Rabi frequency $g / \Omega$. In fact, the coherence time is exponentially enhanced before reaching a saturation value. Left-hand panel: Coherence time versus the normalized vacuum Rabi frequency for one atom. Inset: Number of photons plotted versus the normalized vacuum Rabi frequency. Top right-hand panel: Coherence time for $N=1,2,$ and $3$ atoms. Bottom right-hand panel: Maximum coherence time as a function of the number of atoms~\cite{nataf2011}.} \end{figure} \subsubsection{State preparation: Qubit-resonator entangled states} The eigenstates of a system in the USC regime result in many-body qubit-resonator entangled quantum states~\cite{sahel, felicetti2015b}. Certain quantum information processing protocols may require the generation of this type of state, an example being cat-state-based quantum error correction \cite{Ofek2016}. For instance, a paradigmatic multipartite entangled state, the $N$-qubit GHZ state, results from a system of superconducting qubits coupled to a transmission line resonator~\cite{wang2010}.
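The working principle of the protocol of Ref.~\cite{wang2010} is an effective $\hat{\sigma}_{x}^{(i)}\hat{\sigma}_{x}^{(j)}$ interaction generated at stroboscopic times. A minimal two-qubit sketch (our own illustration) shows how such an interaction, with an effective phase $\pi/4$ on the single qubit pair, rotates a product state into a GHZ (Bell) state:

```python
import numpy as np

def evolve_xx(theta):
    """U = exp(-i theta sx(x)sx) applied to |-)_z |-)_z. Since (sx(x)sx)^2 = 1,
    U = cos(theta) 1 - i sin(theta) sx(x)sx in closed form."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    XX = np.kron(sx, sx)
    U = np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * XX
    minus = np.array([0.0, 1.0])            # sigma_z eigenstate with eigenvalue -1
    return U @ np.kron(minus, minus)

psi = evolve_xx(np.pi / 4)   # (|--> - i|++>)/sqrt(2) in the z basis
```

The relative phase of the resulting state, $-i$, agrees with $e^{i\pi(N+1)/2}$ for $N=2$.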
In this system, the Hamiltonian in the interaction picture reads $\hat{\cal H}_{I}(t) = \hbar g \sum_{i} \left( \hat{a}^{\dagger} e^{i \omega t} + \hat{a} e^{-i \omega t} \right) \hat{\sigma}_{x}^{(i)}$. For particular periods $T_{n}=2 \pi n / \omega$ commensurate with the cavity frequency $\omega$, the time evolution operator in the Schr\"odinger picture takes the form \begin{equation} \hat{U}(T_{n}) \propto \exp{\left[ -i \theta (n) \sum_{i \neq j} \hat{\sigma}_{x}^{(i)} \hat{\sigma}_{x}^{(j)} \right]}, \end{equation} with $\theta (n)=2 \pi n \, g^{2}/\omega^{2}$. Hence, starting from a product state $| \Psi(0) \rangle = \otimes_{i=1}^{N} |-\rangle^{(i)}_{z}$, where $\hat{\sigma}_{z} |-\rangle=- |-\rangle$, the system evolves into a GHZ state of the form \begin{equation} |\Psi (T_{\rm min}) \rangle = \frac{1}{\sqrt{2}} \left( \otimes_{i=1}^{N} |-\rangle^{(i)}_{z} + e^{i \pi (N+1)/2} \otimes_{i=1}^{N} |+\rangle^{(i)}_{z} \right), \end{equation} for the minimum preparation time $T_{\rm min}=\pi \omega/(8 g^{2})$~\cite{wang2010}. \subsection{Dissipation in the ultrastrong coupling regime}\label{sec:5C} Dissipation, decay or decoherence rates are natural scales that appear in the various platforms for quantum information processing, owing to the coupling of qubits to external degrees of freedom. The first study of dissipation in the USC regime \cite{DeLiberato2009} used the second-order time-convolutionless projection operator method (TCPOM). In later work, an equivalent method was found by projecting the master equation onto the dressed-state basis~\cite{Beaudoin2011}. Using either technique, modifications of the standard quantum optics master equation were obtained which do not display unphysical effects when the USC regime is reached. Here, we follow the master equation projection method \cite{Beaudoin2011} to obtain a suitable description of the system dynamics in the dissipative QRM, valid in the Bloch-Siegert regime (perturbative USC).
The standard (Lindblad) form of the master equation at $T=0$ reads \begin{equation}\label{eq:Lindblad} \frac{\rm{d}\hat{\rho}}{\rm{d}t} = -i[\hat{H}, \hat{\rho}] + \mathcal{L}\hat{\rho}, \end{equation} where the Lindbladian $\mathcal{L}$ in the standard form is defined as \begin{equation}\label{eq:Lind} \mathcal{L}\hat{\rho} = \kappa\mathcal{D}[\hat{a}]\hat{\rho} + \gamma_{\rm ge}\mathcal{D}[\hat{\sigma}_-]\hat{\rho} + \frac{\gamma_{\phi}}{2}\mathcal{D}[\hat{\sigma}_z]\hat{\rho}. \end{equation} Here, $\kappa$, $\gamma_{\rm ge}$, and $\gamma_{\phi}$ are the cavity decay, qubit decay and qubit dephasing rates, respectively. The superoperator $\mathcal{D}[\mathcal{\hat{O}}]$ is defined as $\mathcal{D}[\mathcal{\hat{O}}]\hat{\rho} = \frac{1}{2}(2\mathcal{\hat{O}}\hat{\rho}\mathcal{\hat{O}}^{\dag} - \hat{\rho}\mathcal{\hat{O}}^{\dag}\mathcal{\hat{O}} - \mathcal{\hat{O}}^{\dag}\mathcal{\hat{O}}\hat{\rho})$. Equation~(\ref{eq:Lindblad}) assumes that the ground state of the qubit + cavity system is the vacuum state, $|g0\rangle$. However, in the QRM the true ground state $\widetilde{|g0\rangle}$ is not the bare vacuum, but a superposition of multiple photon number states entangled with the qubit state (see Sec.~\ref{sec:2}). Therefore, the master equation needs to be modified in such a way that it damps any initial state towards the actual ground state $\widetilde{|g0\rangle}$. In Fig.~\ref{fig:ME} it is possible to observe the detrimental effect of not using the proper form of the master equation, which results in a fictitious heating rate. \begin{figure}[!hbt] \centering \includegraphics[width = 0.9\columnwidth]{ME} \caption{\label{fig:ME} Excess in the mean photon number due to relaxation in the steady state of the ultrastrongly coupled qubit-resonator system \cite{Beaudoin2011}.
Initially, the system is in its true ground state $\widetilde{|g0\rangle}$, but, under the standard master equation Eq.~(\ref{eq:Lindblad}), relaxation unphysically excites the system even at $T=0$. The black line, which corresponds to the left axis, represents the number of additional photons introduced in steady state by dissipation. The red dots, associated with the right axis, designate one minus the fidelity of the Rabi ground state $\widetilde{|g0\rangle}$ to the vacuum state $|g0\rangle$. Inset: mean photon number as a function of time for the system starting in its ground state with $g/2\pi=2~$GHz. In both the main plot and the inset, the blue dashed line indicates results for the fidelity and the photon number as obtained with the master equation given by the Lindbladian in Eq.~(\ref{eq:LQRM}).} \end{figure} To obtain a master equation that takes into account the actual eigenvalues of the QRM, we first move to the frame that diagonalizes the quantum Rabi Hamiltonian [Eq.~(\ref{QRH})] for both the system and the system-bath Hamiltonians. Under experimentally reasonable approximations\footnote{Neglecting high-frequency terms, the resulting expressions involve transitions $|j\rangle\leftrightarrow|k\rangle$ between eigenstates at a rate that depends on the noise spectral density at frequency $\Delta_{kj} = \omega_k - \omega_j$. If their line width is small enough, these transitions can be treated as due to independent baths. As a result, these independent baths can each be treated in the Markov approximation.}, the correct form of the Lindbladian at $T=0$ reads \begin{multline}\label{eq:LQRM} \mathcal{L}_{\rm QRM}\circ = \mathcal{D}\left[\sum_j\Phi^j|j\rangle\langle j|\right]\circ + \sum_{j,k\ne j}\Gamma_{\phi}^{jk}\mathcal{D}[|j\rangle\langle k|]\circ \\+ \sum_{j,k>j}(\Gamma_{\kappa}^{jk}+\Gamma_{\gamma}^{jk})\mathcal{D}[|j\rangle\langle k|]\circ. \end{multline} Here, $|k\rangle$ and $|j\rangle$ are eigenstates of the QRM.
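The dissipators appearing here are the same superoperators $\mathcal{D}[\hat{\mathcal{O}}]$ defined after Eq.~(\ref{eq:Lind}). As a minimal sanity check (our own sketch), acting with $\mathcal{D}[\hat{a}]$ on a one-photon Fock state reproduces photon decay, ${\rm d}\langle\hat{n}\rangle/{\rm d}t=-\kappa\langle\hat{n}\rangle$, while preserving the trace:

```python
import numpy as np

def dissipator(O, rho):
    """Lindblad dissipator D[O]rho = (1/2)(2 O rho O^dag - rho O^dag O - O^dag O rho)."""
    Od = O.conj().T
    return O @ rho @ Od - 0.5 * (rho @ Od @ O + Od @ O @ rho)

n_max = 5
a = np.diag(np.sqrt(np.arange(1, n_max)), 1)      # truncated annihilation operator
n_op = np.diag(np.arange(n_max).astype(float))    # photon number operator
rho = np.zeros((n_max, n_max)); rho[1, 1] = 1.0   # rho = |1><1|
drho = dissipator(a, rho)                         # kappa = 1 in these units
```

Here `drho` equals $|0\rangle\langle0|-|1\rangle\langle1|$: probability flows from the one-photon state to the vacuum at rate $\kappa$.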
The circle $\circ$ stands for the operator on which the Lindbladian acts. The first two terms in Eq.~(\ref{eq:LQRM}) are the contributions from the bath that caused only dephasing in the standard master equation [last term in Eq.~(\ref{eq:Lind})]. Here, this $\hat{\sigma}_z$ bath causes dephasing in the eigenstate basis with \begin{equation} \Phi^{j} = \sqrt{\frac{\gamma_{\phi}(0)}{2}}\sigma_z^{jj}, \end{equation} where $\gamma_{\phi}(\omega)$ is the dephasing rate associated with the noise spectral density at frequency $\omega$, and $\sigma_z^{jk} = \langle j|\hat{\sigma}_z|k\rangle$. The fact that $\hat{\sigma}_z$ is not diagonal in the system eigenbasis causes undesired transitions at rate \begin{equation} \Gamma_{\phi}^{jk} = \frac{\gamma_{\phi}(\Delta_{jk})}{2}|\sigma_z^{jk}|^2. \end{equation} These transitions are relevant only if the power spectral density of the dephasing noise at frequency $\Delta_{jk}$ is significant, which is the case away from the sweet spot in superconducting qubits. The longitudinal noise along $\sigma_z$ may stimulate transitions between the QRM eigenstates $|j\rangle$, leading to dephasing-induced generation of photons and qubit excitations, a phenomenon linked to the dynamical Casimir effect. The last two terms in Eq.~(\ref{eq:LQRM}) are the contributions from the resonator's and the qubit's own baths, which caused relaxation in the standard master equation. These baths now cause transitions between eigenstates at rates \begin{align} \Gamma_{\kappa}^{jk} &= \kappa(\Delta_{jk})|X_{jk}|^2,\\ \Gamma_{\gamma}^{jk} &= \gamma(\Delta_{jk})|\sigma_x^{jk}|^2, \end{align} where \begin{align} X_{jk} &= \langle j| \hat{X} |k \rangle,\\ \sigma_x^{jk} &= \langle j|\hat{\sigma}_x |k\rangle. \end{align} The rates $\kappa(\omega)$ and $\gamma(\omega)$ are proportional to the noise spectra of the resonator and qubit baths, respectively, and $\hat{X}$ is the cavity quadrature $\hat{X}=\hat{a}^{\dag}+\hat{a}$.
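The parity selection rules behind these rates can be made explicit numerically. In the sketch below (our own check, with assumed truncation and parameters), the matrix elements $X_{jk}$ of the QRM vanish between eigenstates of equal parity $\hat{\Pi}=(-1)^{\hat{a}^{\dagger}\hat{a}}\hat{\sigma}_{z}$, so the rates $\Gamma_{\kappa}^{jk}$ connect only states of opposite parity:

```python
import numpy as np

def qrm_spectrum(omega, Omega, g, n_max=40):
    """Eigenstates of the QRM H = w a^dag a + (W/2) sz + g (a + a^dag) sx,
    the quadrature matrix elements X_jk = <j|a + a^dag|k>, and the parity
    expectation values <j|(-1)^(a^dag a) sz|j>."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), 1)
    x = a + a.T
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = (omega * np.kron(a.T @ a, np.eye(2))
         + 0.5 * Omega * np.kron(np.eye(n_max), sz)
         + g * np.kron(x, sx))
    evals, V = np.linalg.eigh(H)
    X = V.T @ np.kron(x, np.eye(2)) @ V               # quadrature in eigenbasis
    Pi = np.kron(np.diag((-1.0) ** np.arange(n_max)), sz)
    parity = np.diag(V.T @ Pi @ V)
    return evals, X, parity

evals, X, parity = qrm_spectrum(1.0, 1.0, 0.5)
```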
The Lindbladian in Eq.~(\ref{eq:LQRM}) correctly predicts the system evolution of the QRM in the presence of dissipation and dephasing baths. This is illustrated by the dashed blue line in Fig.~\ref{fig:ME}. The new decay rates have specific selection rules due to the parity of the eigenstates in the quantum Rabi Hamiltonian. A direct consequence of the modification of the emission rates is the appearance of an asymmetry in the vacuum Rabi splitting when qubit and resonator are resonant. The spectrum of the system could be used in this way to probe dephasing noise \cite{Beaudoin2011}. With the corrected version of the master equation, it was demonstrated \cite{DeLiberato2009} that a harmonic modulation of the qubit-cavity interaction strength in the USC regime with a functional form \begin{equation} g(t) = g_0 + \Delta g\sin(\omega_{\rm mod}t), \end{equation} produces extracavity radiation originating from the spontaneous emission of virtual photons existing in the ground state of an ultrastrongly coupled system. Calculating the emitted radiation employing the standard master equation [Eq.~(\ref{eq:Lindblad})] instead produces the unphysical picture of generating radiation even when the drive is very far from the cavity resonance, which clearly violates energy conservation (see Fig.~\ref{fig:Deliberato}). \begin{figure}[!hbt] \centering \includegraphics[width = 0.8\columnwidth]{Deliberato} \caption{\label{fig:Deliberato} (Color online) Extracavity photon emission rate $R_{\rm em}$ (in units of $\omega_0$, the cavity frequency) for a resonant qubit-cavity system as a function of the modulation frequency, $\omega_{\rm mod}$, for a modulation amplitude of the vacuum Rabi frequency $\Delta g/\gamma=0.1$, where $\gamma$ is the qubit and cavity emission rate.
For comparison, the dashed line shows the extracavity emission rate $\gamma_{\rm cav}N_{\rm in}$, where $N_{\rm in}$ is the steady-state intracavity photon number, that would be predicted by the Markovian approximation: note the unphysical prediction of a finite value of the emission even far from resonance. The inset shows the dependence of the photon emission rate on the modulation amplitude, calculated both numerically and analytically \cite{DeLiberato2009}.} \end{figure} Another important aspect related to dissipation that has recently been addressed~\cite{DeLiberato2017} is the impact of the decay rates on the number of photons in the ground state of a system in the USC regime. The ground state of an ultrastrongly coupled qubit-cavity system is composed of hybridised qubit-cavity states, which lead to a non-zero expectation value $\langle\hat{N}\rangle$ of the photon number operator\footnote{We would like to remind the reader that the photon number operator $\hat{N}$ as defined in traditional quantum optics textbooks is not a good quantum number in the USC regime, as it does not commute with the quantum Rabi Hamiltonian, $[\mathcal{\hat{H}}_R,\hat{N}]\ne0$. The consequence is a non-stationary value of the population of photon-number states of the cavity, as shown in \cite{casanova2010}.} $\hat{N}=\hat{a}^{\dag}\hat{a}$. It is then crucial to understand the impact of the qubit and cavity decay rates on the photon population of the USC ground state. The result is somewhat surprising: it turns out that the USC effects are only quantitatively affected by losses. Thus, USC phenomena such as extracavity emission may be observed in systems with very high losses, even when the usual condition of strong coupling, $g>\gamma$, is not satisfied. Another important quantum optical phenomenon in open quantum systems that is modified in the USC regime is photon blockade~\cite{RidolfoetAl12PRL}.
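Returning to the ground-state photon population: a minimal numerical check (ours; truncation and parameters are assumptions) confirms both statements above for the QRM, namely $\langle G|\hat{a}^{\dagger}\hat{a}|G\rangle>0$ in the dressed ground state and $[\hat{\mathcal{H}}_{R},\hat{N}]\neq0$:

```python
import numpy as np

n_max, omega, Omega, g = 40, 1.0, 1.0, 0.8
a = np.diag(np.sqrt(np.arange(1, n_max)), 1)
adag = a.T
sz = np.diag([1.0, -1.0]); sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H = (omega * np.kron(adag @ a, np.eye(2))         # quantum Rabi Hamiltonian
     + 0.5 * Omega * np.kron(np.eye(n_max), sz)
     + g * np.kron(a + adag, sx))
N = np.kron(adag @ a, np.eye(2))                  # photon number operator a^dag a
evals, V = np.linalg.eigh(H)
nbar = V[:, 0] @ N @ V[:, 0]                      # virtual photons in |G>
comm_norm = np.max(np.abs(H @ N - N @ H))         # [H_R, N] != 0 in the USC regime
```

At $g/\omega=0.8$ the dressed ground state holds a sizable virtual-photon population, which, per Ref.~\cite{DeLiberato2017}, is only quantitatively modified by losses.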
In the strong-coupling regime, where the RWA applies, the temporal photon-photon correlation function shows an oscillatory behaviour with a frequency given by the Rabi frequency of the externally applied drive. Instead, in the USC regime the frequency is given by the ultrastrong emitter-photon coupling, which can be traced back to the presence of two-photon cascade decays induced by counterrotating interaction terms. In order to reach these conclusions, the input-output relations had to be generalized to the USC regime. The result is the following relation \begin{equation} \label{eq:i-o} \hat{a}_{\rm out}(t) = \hat{a}_{\rm in}(t) - i \frac{\epsilon_c}{\sqrt{8\pi^2\hbar\epsilon_0v}}\dot{\hat{P}}^+. \end{equation} Here, $\epsilon_c$ is a coupling parameter to the environment, $\epsilon_0$ describes the dielectric properties of the output waveguide, and $v$ is the phase velocity. Crucially, $\dot{\hat{P}}^+$ is not proportional to the intracavity field $\hat{a}$, as is usual in quantum optics. Its explicit form is $\dot{\hat{P}}^+ = -i\sum_{j,k>j}\Delta_{kj}P_{jk}|j\rangle\langle k|$, where $P_{jk} = \langle j|\hat{P}|k\rangle$, with $\hat{P} = -iP_0(\hat{a}-\hat{a}^{\dag})$. Here, $|j\rangle$ are the QRM eigenstates. Note that $\hat{P}^+|0\rangle = 0$, while $\hat{a}|0\rangle\ne0$, with $|0\rangle$ the ground state of the coupled system. This redefinition of the input-output relations has a direct impact on the output photon number flux, which would otherwise show a finite value even without an externally applied drive. \section{Conclusions and Outlook}\label{sec:6} The interaction between light and matter can be considered as the essential dialogue that describes and explains most fundamental phenomena in nature, emerging rather late in the history of physics out of stepwise developments in mechanics and optics.
With the arrival of atomic physics in the 20th century, after the success of electromagnetism at the end of the 19th century, light-matter models were proposed to account for quantum effects observed in the laboratory, giving rise to the (semiclassical) Rabi model. Along these lines, a final key improvement had to be performed with the quantization of light to produce the full-fledged quantum Rabi model. This review article aims at providing an admittedly biased overview of light-matter interactions in which the ultrastrong and deep strong coupling regimes are necessary for describing the interplay between models and experimental observations. In a sense, we needed the advent of advanced tools in quantum control of atoms and photons, in the wide frame of quantum technologies at the beginning of this 21st century, to produce key experimental results and their corresponding theoretical descriptions in the USC and DSC regimes. Exploring these novel extreme coupling strengths between quantized light and quantized matter is a fundamental task of high scientific relevance, which required conceptual and experimental improvements during the last decade. As frequently happens in the interplay between science and technology, the discovered USC and DSC phenomena may find a variety of applications in quantum simulations, quantum sensing, quantum communication, and quantum computing. Accelerating quantum dynamics should also inspire novel protocols in scalable quantum processing. We believe that the study of the USC and DSC regimes is still in its infancy and that the most remarkable discoveries and applications are yet to come. \begin{acknowledgements} The authors would like to acknowledge Vladimir E.~Manucharyan, Xinwei Li and Motoaki Bamba for fruitful discussions, and M.~Bajcsy for providing references for the figures in the introduction. J.K. thanks Jeffery Horowitz and Jorge Zepeda for careful proofreading. J.K.
acknowledges support from the National Science Foundation through Grant No. DMR-1310138. P.~F.-D. acknowledges support of the Beatriu de Pin\'os postdoctoral programme of the Government of Catalonia's Secretariat for Universities and Research of the Ministry of Economy and Knowledge. L.L., E.R. and E.S. acknowledge funding from MINECO/FEDER FIS2015-69983-P and Basque Government IT986-16, while L. L. is also supported by Ram\'on y Cajal Grant RYC-2012-11391. \end{acknowledgements}
\section{Introduction}\label{sec:Introduction} A particular exact solution to a classical field equation is often regarded as an extended particle under certain special conditions, a so-called soliton (for a review, see \cite{Rajaraman}), which is important in the study of nonperturbative properties of field theories. The well-known example is a kink solution to the sine-Gordon equation \cite{Perring:1962vs}. Such a solution can describe not just a single-particle picture but also multi-particle interaction processes \cite{Perring:1962vs, Belinsky}. Recent examples of exact solutions of multiple kinks are obtained in $U(N)$ gauge theories at strong coupling \cite{Isozumi:2004va}. A time-dependent solution which cannot be obtained by boosting a static solution can be nontrivial. This is the case if the solution represents a massless particle or an interaction process involving more than one particle. In this paper we focus on solutions of the Nambu-Goto action \cite{Nambu:1974zg}, which describes fluctuations of a brane. Some exact static solutions of the Nambu-Goto action or the Dirac-Born-Infeld action \cite{Dirac:1962iy,Born:1934gh} for a brane are well known. For example, the electric BIon solution (catenoid) is a well-known static solution \cite{Born:1934gh,Callan:1997kz}. See \cite{Townsend:1999hi} and references therein for more examples of static solutions. As an example of time-dependent solutions, Scherk's surface for colliding branes is known, in which two branes reconnect with each other in collision \cite{Gibbons:2006rr}. On the other hand, when solitons have linear structures, propagating waves on them are time-dependent solutions of the effective action, which is typically the Nambu-Goto action. For example, waves propagating on vortex-strings and on domain walls were previously studied \cite{Garfinkle:1990jq}. Waves along strings in gravity \cite{Economou:1991bc} and supergravity \cite{Garfinkle:1992zj}, black strings and D-strings \cite{wavy-strings} were also studied.
These are based on the fact that the Nambu-Goto action admits wave solutions of arbitrary shape, propagating at the speed of light; for instance, the Nambu-Goto action for a $p$-brane of codimension one is given in the static gauge by \begin{equation} S = \int d^{p+1}x \mathscr{L} = -\sigma\int d^{p+1}x \sqrt{1-\partial \phi(x) \cdot \partial \phi(x)} \label{eq:Nambu-Goto action} \end{equation} from which the equation of motion for the fluctuation $\phi(x)$ reads \begin{equation} \partial^2\phi + \frac{1}{2[1-(\partial\phi)^2]} \partial \phi\cdot\partial\left[(\partial\phi)^2 \right] =0. \label{eq:EOM} \end{equation} This admits wave solutions of arbitrary shape propagating at the speed of light along one space direction of the $p$-brane world-volume, given by \begin{equation} \phi(x)=f(\vec{k}\cdot\vec{x}\pm\omega t+c), \label{eq:single wave} \end{equation} where $\vec{k}^2 -\omega^2=0$, $c$ is an arbitrary constant and $f$ is an arbitrary function. In the case of Bogomol'nyi-Prasad-Sommerfield (BPS) solitons in supersymmetric theories, waves of arbitrary shape, propagating at the speed of light along vortex-strings, domain walls and 1/4 BPS composite states (see, e.g.,\cite{Eto:2006pg}), have been shown to remain BPS \cite{Eto:2005sw} (see also \cite{BlancoPillado:2006qu}). To the best of our knowledge, only waves propagating in a single direction along solitons have been known explicitly as exact solutions so far. If initially two waves are simultaneously prepared in well separated regions, each of them will preserve the form (\ref{eq:single wave}) for a while without interference, so that the configuration is approximately a superposition of the two as long as they do not overlap significantly.
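That Eq.~(\ref{eq:single wave}) solves the full nonlinear equation can be verified symbolically: for a null wave, $(\partial\phi)^{2}=(\omega^{2}-\vec{k}^{2})f^{\prime2}=0$, so both terms of Eq.~(\ref{eq:EOM}) vanish separately. A short check of ours (for a $2$-brane with world-volume coordinates $t,x,y$ and two sample profiles):

```python
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)

def ng_residual(phi):
    """Left-hand side of the Nambu-Goto equation of motion,
    box(phi) + (1/(2[1-(d phi)^2])) d phi . d[(d phi)^2],
    with mostly-minus signature."""
    dphi2 = sp.diff(phi, t)**2 - sp.diff(phi, x)**2 - sp.diff(phi, y)**2
    box = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) - sp.diff(phi, y, 2)
    interaction = (sp.diff(phi, t) * sp.diff(dphi2, t)
                   - sp.diff(phi, x) * sp.diff(dphi2, x)
                   - sp.diff(phi, y) * sp.diff(dphi2, y)) / (2 * (1 - dphi2))
    return sp.simplify(box + interaction)

# arbitrary profiles moving at the speed of light along x
residuals = [ng_residual(sp.exp(x + t)), ng_residual(sp.sin(x - t)**3)]
```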
However, once those waves get close to each other, such a configuration is no longer valid; we should take into consideration the full non-linear equation (\ref{eq:EOM}), which is a highly non-trivial problem, though some approximate solutions of two colliding waves can be found in the literature \cite{Siemens:2001dx}. In this paper we present an exact solution of two colliding waves moving at the speed of light in the Nambu-Goto action. In our solution two waves collide and scatter off each other. The solution of interest is a linear combination of the left- and the right-moving waves. It is also a solution to the non-interacting wave equation of the Klein-Gordon type. It turns out that the only nontrivial solution is a special linear combination of two logarithmic waves with singular peaks. Our solution can be immediately applied to colliding waves on solitons which have extended directions, such as domain walls and strings. It will also be relevant to the dynamics of cosmic strings. Our solution can be decomposed into regions having two different geometries, which describe different physics. The central region between the two peaks has the Robertson-Walker type induced metric. It describes a shrinking and expanding universe connected by a ``Big Bounce'', where no ``Big Bang'' singularities exist. The region outside the two peaks represents the collision dynamics of two branes moving at the speed of light. In this case, the two branes reconnect with each other in collision. This solution is new and different from Scherk's surface for colliding branes, which also describes reconnection \cite{Gibbons:2006rr}. The geometry on the outside branes has an Euclidean region, where the positivity of the argument of the square root in the action is violated. It is quite interesting that a single continuous solution covers both Minkowskian and Euclidean regions. Such a situation was already discussed in the literature \cite{Gibbons:2004dz}.
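The decomposition into Minkowskian and Euclidean regions can be made quantitative: for the solution presented below, $(\partial\phi)^{2}=-4s^{2}/[(x^{0})^{2}-(x^{1})^{2}]$, so the argument of the square root, $1-(\partial\phi)^{2}$, is always positive between the peaks but becomes negative in a strip outside them. A short symbolic check of ours (writing $t,x$ for $x^{0},x^{1}$):

```python
import sympy as sp

t, x, s = sp.symbols('t x s', real=True)
phi = s * (sp.log(t + x) - sp.log(t - x))        # the two-wave solution of Sec. 2
dphi2 = sp.simplify(sp.diff(phi, t)**2 - sp.diff(phi, x)**2)
arg = 1 - dphi2                                  # argument of the square root

# (d phi)^2 = -4 s^2/(t^2 - x^2): the argument is positive inside the peaks,
# negative in the strip x^2 - t^2 < 4 s^2 outside them (the Euclidean regions)
inside = arg.subs({t: 2, x: 1, s: 1})            # region I
near_peak = arg.subs({t: 0, x: 1, s: 1})         # region II, close to a peak
far_out = arg.subs({t: 0, x: 10, s: 1})          # region II, far from the peaks
```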
We also calculate the energy of the brane fluctuations and find that it diverges due to the singular peaks. Classical particle dynamics on the brane is discussed in detail in the two separate regions, according to their geometries. While a massive particle on the inside brane lives in a universe with a changing scale, one on the outside branes is bound by a potential. In the end, it is verified that a massive particle on the brane cannot move faster than the speed of light in any of the regions, as seen in the bulk frame. This paper is organized as follows. In Sec.~\ref{sec:Wave solutions} we first discuss wave solutions of the Nambu-Goto action and then move on to the particular solution of interest, namely two waves moving at the speed of light in two different directions on the Nambu-Goto brane. There we discuss its properties in detail, before turning to the dynamics of a massive particle on the brane. The geometries of the inside region (the Big Bounce brane universe) and the outside region (colliding branes) are investigated through geodesics of a massive particle in Sec.~\ref{sec:Expanding brane universe} and Sec.~\ref{sec:Colliding branes}, respectively. Each of these sections ends with a verification that the speed of a massive particle in the bulk frame cannot exceed the speed of light. Finally, Sec.~\ref{sec:Conclusion and Discussion} is devoted to conclusion and discussion. The divergence of the energy of the brane is shown in Appx.~\ref{appx:Energy of brane}. \section{Wave solutions of the Nambu-Goto action}\label{sec:Wave solutions} In this section we look for further solutions to the equation of motion (\ref{eq:EOM}) of the Nambu-Goto action. In the linear approximation, up to first order in $\phi$, only the first term in Eq.~(\ref{eq:EOM}) remains, reducing it to the Klein-Gordon equation, $\partial^2 \phi = 0$.
In this limit it admits linear waves \begin{equation} \phi(x) = \sum_i f(\vec{k}_i\cdot\vec{x} \pm \omega_i t + c_i) \label{eq:linear sum} \end{equation} with $\vec{k}_i^2 -\omega_i^2=0$, where the $c_i$ are arbitrary constants. This approximation is valid when the brane fluctuation is small enough. However, when a description of large fluctuations is needed, solving the full equation is highly nontrivial because of the nonlinearity of the second term. Some of the solutions above also satisfy the whole nonlinear equation (\ref{eq:EOM}). The simplest example has already been shown in Eq.~(\ref{eq:single wave}), which consists of one wave. Note that the nonlinearity in general does not allow a linear combination (\ref{eq:linear sum}) of wave equation solutions in different directions. However, a certain particular type of linear combination of waves is found here. For solutions to the linear wave equation, $\partial^2\phi=0$, Eq.~(\ref{eq:EOM}) is simply reduced to \begin{equation} \partial\phi\cdot\partial\left[(\partial\phi)^2\right]=0. \label{eq:log-EOM} \end{equation} As a generalization of the solution (\ref{eq:single wave}) of waves propagating in one direction, we consider an ansatz for two waves propagating at the speed of light along the two directions of lightlike momentum vectors $k$ and $p$, \begin{equation} \phi(x) = f(k\cdot x)+g(p\cdot x) . \end{equation} Substituting this into (\ref{eq:log-EOM}) yields \begin{equation} \begin{array}{ccl} 0&=&(k_mf^\prime +p_m g^\prime)\partial^m [k^{2}f^{\prime 2}+p^{2}{g^\prime}^2 + 2(k\cdot p)f^\prime g^\prime]\\ &=&2(k\cdot p)(k_mf^\prime +p_m g^\prime) (k^{m}f^{\prime\prime}g^\prime +p^{m}f^\prime g^{\prime\prime})\\ &=&2(k\cdot p) [k^{2}f^{\prime\prime}f^\prime g^\prime + p^{2}f^\prime g^\prime g^{\prime\prime} + (k\cdot p)(f^{\prime2}g^{\prime\prime} + {g^\prime}^2f^{\prime\prime})]\\ &=& 2(k\cdot p)^2\left[f^{\prime 2}(k\cdot x) g^{\prime\prime}(p\cdot x) +{g^\prime}^2(p\cdot x) f^{\prime\prime}(k\cdot x)\right].
\end{array} \end{equation} Here, $m$ runs over the world-volume coordinates of the brane, $m=0,1,\cdots,p$, and a prime denotes differentiation of a function with respect to its argument. Then, for $p \cdot k \neq 0$, we obtain \begin{equation} \frac{f^{\prime\prime}(k\cdot x)}{f^{\prime2}(k\cdot x)} =-\frac{g^{\prime\prime}(p\cdot x)}{g^{\prime2}(p\cdot x)}=-\frac{1}{s}, \end{equation} where $s$ is an arbitrary real constant. Therefore, we obtain \begin{equation} \phi(x)=s[\ln(k\cdot x+c_1)-\ln(p\cdot x+c_2)]+c_3 \end{equation} where $c_1$, $c_2$ and $c_3$ are arbitrary constants. We now consider the simplest case of colliding waves, $\vec{p} = - \vec{k}$. Without loss of generality, by a Lorentz transformation and a translation, the solution can be written as \begin{equation} \phi(x)=s(\ln|x^0+x^1|-\ln|x^0-x^1|). \end{equation} See Fig.~\ref{fig:log-sol} for a profile of this solution. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.7]{Combless.eps} & \includegraphics[scale=0.7]{Combgreater.eps}\\ (a) & (b) \end{tabular} \end{center} \caption{ The solutions $\phi(x)$ with $s=1$ are plotted (a) at $x^0=-1$ and (b) at $x^0=1$. The arrows represent the directions of the two moving peaks. The two shaded rectangles marked by ``E" denote the Euclidean regions, as explained below. \label{fig:log-sol} } \end{figure} There exist two singular peaks at $x^1 = \pm x^0$ which move at the speed of light. They collide and scatter off each other at $t=0$. Equivalently, the solution can be written in two separate regions as \begin{equation} \phi(x)= \left\{ \begin{array}{l} \displaystyle s\ln\frac{x^0+x^1}{x^0-x^1}; \mbox{ for } (x^0)^2>(x^1)^2:\mbox{I-Minkowskian},\\ \displaystyle s\ln\frac{x^0+x^1}{x^1-x^0}; \mbox{ for } (x^0)^2<(x^1)^2:\mbox{II-Minkowskian \& Euclidean}. \end{array}\right. \label{eq:separate solutions} \end{equation} It will be seen that each of the solutions has a different geometry.
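As a cross-check (not part of the original derivation), the logarithmic solution above can be verified symbolically. The sketch below, using \texttt{sympy} with $x^0=t$, $x^1=x$ restricted to the region $t>x>0$, confirms that $\phi$ solves both the Klein-Gordon equation and the reduced condition (\ref{eq:log-EOM}).

```python
# Symbolic check (sympy) that phi = s[ln(x0+x1) - ln(x0-x1)] solves both
# the Klein-Gordon equation and the reduced condition  dphi . d[(dphi)^2] = 0.
import sympy as sp

t, x, s = sp.symbols('t x s', positive=True)
phi = s*sp.log(t + x) - s*sp.log(t - x)

phi_t, phi_x = sp.diff(phi, t), sp.diff(phi, x)

# Linear wave equation with mostly-minus signature: phi_tt - phi_xx = 0
assert sp.simplify(sp.diff(phi_t, t) - sp.diff(phi_x, x)) == 0

# (dphi)^2 = phi_t^2 - phi_x^2; expected to equal -4 s^2 / (t^2 - x^2)
F = phi_t**2 - phi_x**2
assert sp.simplify(F + 4*s**2/(t**2 - x**2)) == 0

# Reduced nonlinear condition: dphi . dF = phi_t dF/dt - phi_x dF/dx = 0
assert sp.simplify(phi_t*sp.diff(F, t) - phi_x*sp.diff(F, x)) == 0
print("log-wave solution checks pass")
```

The same script also reproduces the value of $(\partial\phi)^2$ used below when discussing the reality of the action.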
We call the first region between the two peaks ``the Big Bounce brane universe'' and the second region outside the two peaks ``the colliding branes''. We discuss these geometries in the following sections. Next, it is necessary to check in which regions the original action is real. The quantity inside the square root $\sqrt{1-(\partial\phi)^2}$ of the Nambu-Goto action (\ref{eq:Nambu-Goto action}), \begin{equation} \begin{array}{ccl} 1-(\partial\phi)^2&=&1-(\partial_0\phi)^2+(\partial_1\phi)^2\\ &=&1-s^2\left(\frac{1}{x^0+x^1}-\frac{1}{x^0-x^1}\right)^2+s^2\left(\frac{1}{x^0+x^1}+\frac{1}{x^0-x^1}\right)^2\\ &=&1+\frac{4s^2}{(x^0)^2-(x^1)^2}\geq0, \end{array} \end{equation} is required to be non-negative in order for the action to take a real value. This condition can be rephrased as \begin{equation} \left\{ \begin{array}{l} (x^0)^2>(x^1)^2\\ (x^1)^2\geq4s^2+(x^0)^2. \end{array}\right. \end{equation} In other words, the region \begin{equation} (x^0)^2 < (x^1)^2 < (x^0)^2 + 4s^2\equiv(x^1_c)^2 \label{eq:Euclidean} \end{equation} is somewhat pathological because there the action becomes purely imaginary. We call this region the ``Euclidean region". The two Euclidean regions are shaded and denoted by ``E" in Fig.~\ref{fig:log-sol}. The first region I in Eq.~(\ref{eq:separate solutions}) (the Big Bounce brane universe) is purely Minkowskian, but the second region II in Eq.~(\ref{eq:separate solutions}) (the colliding branes) contains the Euclidean region (\ref{eq:Euclidean}) as well as a Minkowskian region. If we could remove the Euclidean region from the solution, no singularity would be included in the rest. However, this problem is subtle because all the regions are connected by the energy density flow. A similar situation has already occurred in the literature \cite{Gibbons:2004dz}. In this circumstance the energy of the branes is calculated in Appx.~\ref{appx:Energy of brane}.
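As a quick numerical cross-check of the classification above (a sketch; the sample points are arbitrary), one can evaluate $1-(\partial\phi)^2 = 1+4s^2/((x^0)^2-(x^1)^2)$ in each of the three regions:

```python
# Numerical sign check of 1 - (dphi)^2 = 1 + 4 s^2 / ((x0)^2 - (x1)^2)
# in the three regions of the solution (sample points chosen by hand).
def action_radicand(x0, x1, s):
    return 1.0 + 4.0*s**2 / (x0**2 - x1**2)

s = 1.0
# Region I (between the peaks): (x0)^2 > (x1)^2  -> radicand > 0 (Minkowskian)
assert action_radicand(2.0, 1.0, s) > 0
# Euclidean strip: (x0)^2 < (x1)^2 < (x0)^2 + 4 s^2  -> radicand < 0
assert action_radicand(1.0, 2.0, s) < 0     # 4 < 1 + 4 s^2 = 5
# Outer Minkowskian part of region II: (x1)^2 > (x0)^2 + 4 s^2  -> radicand > 0
assert action_radicand(1.0, 3.0, s) > 0     # 9 > 5
print("region classification consistent")
```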
We find that the energy of the Minkowskian part of region II is finite but the total energy is infinite due to region I and the Euclidean part of region II. In the next sections the dynamics and geometry of these two solutions will be discussed along the same lines. Before going into the discussion, let us mention that it is sufficient to consider only $x^0>0$ since the solution has the symmetry \begin{equation} x^0\rightarrow-x^0 \Leftrightarrow s\rightarrow-s. \end{equation} \section{The Big Bounce brane universe}\label{sec:Expanding brane universe} In this section we study the central region describing the Big Bounce brane universe, see Fig.~\ref{fig:brane_universe}. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.7]{phiexpnt.eps}& \includegraphics[scale=0.7]{phiexppt.eps} \\ (a) & (b) \end{tabular} \end{center} \caption{The dynamics of the Big Bounce brane universe. (a) The branes at $x^0=-1, -0.5, -0.1$ for $x^0<0$ and $s=1$ in the bulk frame. The universe shrinks as time goes on while the brane itself becomes vertical. (b) The branes at $x^0=0.1, 0.5, 1$ for $x^0>0$ and $s=1$ in the bulk frame. The brane is stretched as time goes on. The arrows represent the directions in which the branes are moving. \label{fig:brane_universe}} \end{figure} The line element with the induced metric on the $p$-brane of codimension one in the static gauge is written as \begin{equation} ds^2=\eta_{mn}dx^mdx^n-d\phi^2. \end{equation} Omitting the trivially extended directions of the line element $\eta_{ij}dx^idx^j$, where $i,j=2,3,\cdots,p$, it can be written as \begin{equation} ds^2=d\sigma_+d\sigma_--d\phi^2, \end{equation} where $\sigma_+\equiv x^0+x^1$ and $\sigma_-\equiv x^0-x^1$. For the solution $\phi(x)$ in region I, $(x^0)^2-(x^1)^2>0$, the metric can be diagonalized by using the parameters $(t,q)$, given by \begin{equation} t\equiv\sqrt{(x^0)^2-(x^1)^2},~~~q\equiv\phi=s\ln\frac{x^0+x^1}{x^0-x^1}.
\label{eq:parameters-ex} \end{equation} The detailed steps are as follows. Defining first $s\ln(\sigma_+/|s|)\equiv\tilde{\sigma}_+$ and $s\ln(\sigma_-/|s|)\equiv\tilde{\sigma}_-$, and then $\phi\equiv\tilde{\sigma}_+-\tilde{\sigma}_-$ and $T\equiv\tilde{\sigma}_++\tilde{\sigma}_-$, the line element can be rewritten as \begin{equation} \begin{array}{ccl} ds^2&=&e^{(\tilde{\sigma}_++\tilde{\sigma}_-)/s}d\tilde{\sigma}_+d\tilde{\sigma}_--d\phi^2\\ &=&e^{T/s}\frac{1}{4}(dT^2-d\phi^2)-d\phi^2\\ &=&\frac{1}{4}e^{T/s}dT^2-(\frac{1}{4}e^{T/s}+1)d\phi^2\\ &=&dt^2-(\frac{t^2}{4s^2}+1)dq^2. \end{array} \label{eq:exp} \end{equation} This induced metric represents a Robertson-Walker type spacetime in $(1+1)$ dimensions with the scale factor $a(t)=\sqrt{\frac{t^2}{4s^2}+1}$. The scale change proceeds with a time reversal symmetry. The universe starts to shrink at $t=-\infty$ and approaches a flat spacetime, while the brane in this region becomes vertical at $t=0$. Then the universe starts to expand symmetrically in time. It is interesting that the solution describes a shrinking and expanding universe connected by a ``Big Bounce", where no singularities exist. The singular points present in the bulk coordinates seem to originate from this particular static gauge, which assumes a vacuum state perpendicular to the brane. Although in higher than $(1+1)$ dimensions our solution may be just a toy model presenting an anisotropic expansion, a search for a solution having isotropic expansion in higher dimensions would be an interesting future project. To see the classical behavior of a massive particle we need to calculate geodesics. To this end we calculate the relevant geometric quantities here.
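The diagonalization can also be checked directly from the embedding. The sketch below (using \texttt{sympy}; not part of the original derivation) pulls back the bulk line element $dx^0{}^2-dx^1{}^2-d\phi^2$ through $x^0=t\cosh\frac{q}{2s}$, $x^1=t\sinh\frac{q}{2s}$, $\phi=q$ and confirms the Robertson-Walker form:

```python
# Symbolic check (sympy) that x0 = t cosh(q/2s), x1 = t sinh(q/2s), phi = q
# bring the induced line element dx0^2 - dx1^2 - dphi^2 to the form
# dt^2 - (t^2/4s^2 + 1) dq^2.
import sympy as sp

t, q, s = sp.symbols('t q s', positive=True)
emb = sp.Matrix([t*sp.cosh(q/(2*s)), t*sp.sinh(q/(2*s)), q])
eta = sp.diag(1, -1, -1)                  # bulk metric on (x0, x1, phi)
J = emb.jacobian((t, q))
g = sp.simplify(J.T*eta*J)                # induced metric pullback

assert sp.simplify(g[0, 0] - 1) == 0                      # g_tt = 1
assert sp.simplify(g[0, 1]) == 0                          # no cross term
assert sp.simplify(g[1, 1] + (t**2/(4*s**2) + 1)) == 0    # g_qq = -a(t)^2
print("Robertson-Walker form confirmed")
```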
The Christoffel symbols \begin{equation} \Gamma^a_{bc}=\frac{1}{2}g^{am}(g_{mb,c}+g_{mc,b}-g_{bc,m}) \end{equation} have the following nonzero components in the coordinates $(t,q)$: \begin{equation} \begin{array}{ccl} \Gamma^1_{10}=\frac{1}{2}g^{11}g_{11,0}=\frac{t}{t^2+4s^2}, \quad \Gamma^0_{11}=-\frac{1}{2}g^{00}g_{11,0}=\frac{t}{4s^2}. \end{array} \end{equation} The Riemann tensor \begin{equation} R^a_{bcd}=\Gamma^a_{bd,c}-\Gamma^a_{bc,d}+\Gamma^a_{mc}\Gamma^m_{bd}-\Gamma^a_{md}\Gamma^m_{bc} \end{equation} is calculated to yield \begin{equation} \begin{array}{ccl} R^1_{010}&=&0-\partial_0\Gamma^1_{01}+0-\Gamma^1_{10}\Gamma^1_{01} =-\frac{4s^2}{(t^2+4s^2)^2}. \end{array} \end{equation} The Ricci scalar is found to be \begin{equation} R=g^{ab}R_{ab}=g^{11}g^{00}g_{11}R^1_{010}+g^{00}R^1_{010}=2g^{00}R^1_{010} =-\frac{8s^2}{(t^2+4s^2)^2}. \end{equation} Now we are ready to get geodesic equations for a massive particle. They are obtained as \begin{equation} \left\{ \begin{array}{l} \frac{d^2q}{d\tau^2}+2\Gamma^1_{10}\frac{dq}{d\tau}\frac{dt}{d\tau} = \frac{d^2q}{d\tau^2} +\frac{2t}{t^2+4s^2}\frac{dq}{d\tau}\frac{dt}{d\tau} =0,\\ \frac{d^2t}{d\tau^2}+\Gamma^0_{11}\left(\frac{dq}{d\tau}\right)^2 = \frac{d^2t}{d\tau^2}+\frac{t}{4s^2}\left(\frac{dq}{d\tau}\right)^2 = 0. \end{array}\right. \end{equation} These can be reduced to \begin{equation} \left(\frac{dv}{dt}+\frac{2t}{t^2+4s^2}v\right)w=0, \quad \frac{1}{2}\frac{dw^2}{d\tau}+\frac{t}{4s^2}v^2=0, \label{eq:w-ex} \end{equation} where $v=\frac{dq}{d\tau}$ and $w=\frac{dt}{d\tau}$. The equation for $v(t)$ is integrated to give \begin{equation} v(t)=\frac{dq}{d\tau}=\frac{dq}{dt}\frac{dt}{d\tau}=\frac{v(t_0)(t_0^2+4s^2)}{t^2+4s^2}. 
\label{eq:v-ex} \end{equation} Then $w(t)$ can be found when $v(t)$ is substituted into Eq.~(\ref{eq:w-ex}), \begin{equation} 0=\frac{1}{2}\frac{dw^2}{dt}+v^2(t_0)\frac{t}{4s^2}\left(\frac{t_0^2+4s^2}{t^2+4s^2}\right)^2 , \end{equation} yielding \begin{equation} w(t)=\pm\sqrt{w^2(t_0)+\frac{v^2(t_0)(t_0^2+4s^2)}{4s^2}\left(\frac{t_0^2+4s^2}{t^2+4s^2}-1\right)}. \end{equation} The coordinate velocity $\frac{dq}{dt}$ is obtained from $v(t)$ and $w(t)$ as \begin{equation} \frac{dq}{dt}=\frac{v(t_0)(t_0^2+4s^2)}{t^2+4s^2}\frac{1}{w} =\pm\frac{1}{\sqrt{\left[\frac{w^2(t_0)}{v^2(t_0)}-\frac{1}{4s^2}(t^2_0+4s^2)\right]\left(\frac{t_0^2+4s^2}{t^2+4s^2}\right)^2+\frac{t^2+4s^2}{4s^2}}}. \end{equation} Since only one constant is needed in solving the first order differential equation, one of $v(t_0)$ and $w(t_0)$ must be eliminated. They are related by the normalization condition for a massive particle: \begin{equation} \begin{array}{ccl} V^mV_m&=&w^2g_{00}+v^2g_{11}\\ &=&\left[w^2(t_0)+\frac{v^2(t_0)(t_0^2+4s^2)}{4s^2}\left(\frac{t_0^2+4s^2}{t^2+4s^2}-1\right)\right]\times1 +\left[\frac{v(t_0)(t_0^2+4s^2)}{t^2+4s^2}\right]^2\times\frac{-(t^2+4s^2)}{4s^2}\\ &=&v^2(t_0)\left(\frac{w^2(t_0)}{v^2(t_0)}-\frac{t^2_0+4s^2}{4s^2}\right)=1. \end{array} \end{equation} Thus, the coordinate velocity is simplified as \begin{equation} \frac{dq}{dt} =\pm\frac{1}{\sqrt{\frac{1}{v^2(t_0)}\frac{(t^2+4s^2)^2}{(t^2_0+4s^2)^2}+\frac{t^2+4s^2}{4s^2}}}. \end{equation} Since the metric does not depend on $q$, the conserved quantity $\frac{dq}{d\tau}g_{11}\equiv p_q/m= vg_{11}=-v(t)(t^2+4s^2)/(4s^2)$, where $m$ is the mass of the particle, is found. It can also be seen from Eq.~(\ref{eq:v-ex}). The particle trajectory is shown in Fig.~\ref{fig:dq(t)/dt and q(t)-BigBounce}.
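The geometric quantities entering these geodesic equations can be cross-checked independently. The sketch below (using \texttt{sympy}; a verification aid, not part of the original derivation) recomputes the Christoffel symbols and the Ricci scalar of the Big Bounce metric from their defining formulas, following the sign conventions used in the text:

```python
# Cross-check (sympy) of the Christoffel symbols and Ricci scalar quoted in
# the text for the Big Bounce metric g = diag(1, -(t^2/4s^2 + 1)).
import sympy as sp

t, q, s = sp.symbols('t q s', positive=True)
coords = [t, q]
g = sp.diag(1, -(t**2/(4*s**2) + 1))
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{am} (g_{mb,c} + g_{mc,b} - g_{bc,m})
    return sum(sp.Rational(1, 2)*ginv[a, m]*(
        sp.diff(g[m, b], coords[c]) + sp.diff(g[m, c], coords[b])
        - sp.diff(g[b, c], coords[m])) for m in range(2))

G110 = sp.simplify(christoffel(1, 1, 0))
G011 = sp.simplify(christoffel(0, 1, 1))
assert sp.simplify(G110 - t/(t**2 + 4*s**2)) == 0     # Gamma^1_{10}
assert sp.simplify(G011 - t/(4*s**2)) == 0            # Gamma^0_{11}

# R^1_{010} = -d_0 Gamma^1_{01} - Gamma^1_{10} Gamma^1_{01}  (as in the text)
R1010 = sp.simplify(-sp.diff(G110, t) - G110**2)
R = sp.simplify(2*ginv[0, 0]*R1010)                   # R = 2 g^{00} R^1_{010}
assert sp.simplify(R + 8*s**2/(t**2 + 4*s**2)**2) == 0
print("Christoffel symbols and Ricci scalar agree with the text")
```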
\begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.7]{vexp.eps} & \includegraphics[scale=0.7]{xexp.eps} \\ \end{tabular} \end{center} \caption{On the Big Bounce brane universe: (a) $\frac{dq}{dt}(t)$ and (b) $q(t)$, with $p_q/m=3$, $q(0)=0$ and $s=\frac{1}{2}$. }\label{fig:dq(t)/dt and q(t)-BigBounce} \end{figure} One may want to make sure that the speed of a massive particle does not exceed the speed of light in any frame. The speed $|\vec{v}_{bke}|$ of a massive particle on the brane as seen by an observer in the bulk is expressed by \begin{equation} |\vec{v}_{bke}|=\sqrt{\left(\frac{dq}{dt}\right)^2+\left(\frac{dx^1}{dt}\right)^2}\left|\frac{dt}{dx^0}\right|. \end{equation} Here, $\frac{dx^1}{dt}$ and $\frac{dt}{dx^0}$ need to be expressed through the equations of motion. We use the transformations from Eq.~(\ref{eq:parameters-ex}), \begin{equation} \left\{ \begin{array}{l} x^0=t\cosh\frac{q}{2s}\\ x^1=t\sinh\frac{q}{2s}, \end{array}\right. \end{equation} and hence \begin{equation} \left\{ \begin{array}{l} \frac{dx^0}{dt}=\frac{x^0}{t}+\frac{x^1}{2s}\frac{dq}{dt}\\ \frac{dx^1}{dt}=\frac{x^1}{t}+\frac{x^0}{2s}\frac{dq}{dt}. \end{array}\right. \end{equation} It is then enough to verify the following inequality: \begin{equation} \vec{v}^2_{bke}=\frac{\left(\frac{dq}{dt}\right)^2 +\left(\frac{x^1}{t}\right)^2+\left(\frac{x^0}{2s}\right)^2\left(\frac{dq}{dt}\right)^2 +\frac{2x^0x^1}{2st}\left(\frac{dq}{dt}\right)} { \left(\frac{x^0}{t}\right)^2+\left(\frac{x^1}{2s}\right)^2\left(\frac{dq}{dt}\right)^2 +\frac{2x^0x^1}{2st}\left(\frac{dq}{dt}\right)}<1. \end{equation} This inequality is immediately reduced to the following simplified form, showing that $\vec{v}^2_{bke}$ is manifestly less than 1: \begin{equation} \frac{t^2+4s^2}{4s^2}\left(\frac{dq}{dt}\right)^2-1 =\frac{\frac{t^2+4s^2}{4s^2}}{\frac{1}{v^2(t_0)}\frac{(t^2+4s^2)^2}{(t^2_0+4s^2)^2}+\frac{(t^2+4s^2)}{4s^2}} -1<0.
\end{equation} \section{Colliding branes and reconnection}\label{sec:Colliding branes} Let us move on to the region II, $(x^1)^2\geq(x^0)^2$. The branes can be seen in Fig.~\ref{fig:colliding-branes}. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.7]{phicolnt.eps} & \includegraphics[scale=0.7]{phicolpt.eps} \\ (a) & (b) \end{tabular} \end{center} \caption{The dynamics of the colliding branes. (a) The branes at $x^0=-1, -0.5, -0.1$ with $s=1$. The branes approach each other as time goes on. (b) The branes at $x^0=0.1, 0.5, 1$ with $s=1$. The branes reconnect and separate each other as time goes on. The arrows represent the directions which the branes are moving to. \label{fig:colliding-branes}} \end{figure} The metric can be diagonalized by the parameters $(\bar{t},\bar{q})$, defined by \begin{equation} \bar{t}\equiv\phi=s\ln\frac{x^0+x^1}{x^1-x^0},~~~\bar{q}\equiv\sqrt{(x^1)^2-(x^0)^2} . \label{eq:parameters-col} \end{equation} Changing the parameters similarly to the previous case but considering $x^1>x^0$, first define $s\ln(\sigma_+/|s|)=\xi_+$ and $s\ln(-\sigma_-/|s|)=\xi_-$ and then $\phi=\xi_+-\xi_-$ and $X=\xi_++\xi_-$, giving \begin{equation} \begin{array}{ccl} ds^2&=&-e^{(\xi_++\xi_-)/s}d\xi_+d\xi_--d\phi^2\\ &=&-e^{X/s}\frac{1}{4}(dX^2-d\phi^2)-d\phi^2\\ &=&-\frac{1}{4}e^{X/s}dX^2+(\frac{1}{4}e^{X/s}-1)d\phi^2\\ &=&(\frac{\bar{q}^2}{4s^2}-1)d\bar{t}^2-d\bar{q}^2. \end{array} \end{equation} Although the situation looks similar to the region I $(x^1)^2\leq(x^0)^2$, the geometry in the region II becomes quite different due to a sign change in the logarithm. The solution for the region I is transformed to one for the region II by the exchange of the coordinates, \begin{equation} x^0\Leftrightarrow x^1. \end{equation} However, since the line element $(dx^0)^2-(dx^1)^2$ stays unchanged, the geometries in the regions I and II are not symmetrical in the exchange of the coordinate in the solution. 
Nevertheless, it is worthwhile to note that the diagonalized metric here can be obtained from the one (\ref{eq:exp}) for the Big Bounce brane universe by the transformation \begin{equation} t\rightarrow \pm i\bar{q},~~~q\rightarrow \bar{t}. \label{eq:transform I to II} \end{equation} Furthermore, a change of sign of the determinant of the metric occurs at $\bar{q}=\pm\sqrt{(x^1)^2-(x^0)^2}=\pm2s$. This suggests that this region of the solution should be divided into the Minkowskian ($|\bar{q}|=\sqrt{(x^1)^2-(x^0)^2}>2|s|$) and the Euclidean ($|\bar{q}|=\sqrt{(x^1)^2-(x^0)^2}<2|s|$) spaces. Although many references discuss models having both Minkowskian and Euclidean regions \cite{Gibbons:2004dz}, the legitimacy of this needs to be clarified. As long as the entire solution is not abandoned, the Euclidean region must be kept to ensure the continuity of the energy flow. For the moment we do not restrict ourselves to the Minkowskian region only, until we arrive at the final result. Following the same routine as in the last section, the necessary Christoffel symbols, curvature tensor and curvature scalar are found to be \begin{equation} \begin{array}{ccl} && \Gamma^0_{01}=\frac{1}{2}g^{00}g_{00,1}=\frac{\bar{q}}{\bar{q}^2-4s^2}, \quad \Gamma^1_{00}=-\frac{1}{2}g^{11}g_{00,1}=\frac{\bar{q}}{4s^2} ,\\ && R^1_{010}=\partial_1\Gamma^1_{00}-0+0-\Gamma^1_{00}\Gamma^0_{01}=-\frac{1}{\bar{q}^2-4s^2},\\ && R=2g^{00}R^1_{010}=\frac{-8s^2}{(\bar{q}^2-4s^2)^2}. \end{array} \end{equation} The geodesic equations for a massive particle read \begin{equation} \left\{ \begin{array}{l} \frac{d^2\bar{q}}{d\tau^2}+\Gamma^1_{00}\left(\frac{d\bar{t}}{d\tau}\right)^2 = \frac{d^2\bar{q}}{d\tau^2}+\frac{\bar{q}}{4s^2}\left(\frac{d\bar{t}}{d\tau}\right)^2=0,\\ \frac{d^2\bar{t}}{d\tau^2}+2\Gamma^0_{01}\frac{d\bar{t}}{d\tau}\frac{d\bar{q}}{d\tau} = \frac{d^2\bar{t}}{d\tau^2}+\frac{2\bar{q}}{\bar{q}^2-4s^2}\frac{d\bar{t}}{d\tau}\frac{d\bar{q}}{d\tau}=0. \end{array}\right.
\end{equation} Note that the Ricci scalar and the geodesics in region I are transformed into those in region II by Eq.~(\ref{eq:transform I to II}), ($t\rightarrow \pm i\bar{q}$, $q\rightarrow \bar{t}$). The geodesic equations are reduced to a familiar form as before, \begin{equation} \frac{1}{2}\frac{d\bar{v}^2}{d\bar{q}}+\frac{\bar{q}}{4s^2}\bar{w}^2=0, \quad \left(\frac{d\bar{w}}{d\bar{q}}+\frac{2\bar{q}}{\bar{q}^2-4s^2}\bar{w}\right)\bar{v}=0, \end{equation} where $\bar{v}=\frac{d\bar{q}}{d\tau}$ and $\bar{w}=\frac{d\bar{t}}{d\tau}$. The equation for $\bar{w}(\bar{q})$ is integrated to give \begin{equation} \bar{w}(\bar{q})=\bar{w}(\bar{q}_0)\frac{\bar{q}_0^2-4s^2}{\bar{q}^2-4s^2} . \end{equation} Plugging it into the equation for $\bar{v}(\bar{q})$, we have \begin{equation} 0=\frac{1}{2}\frac{d\bar{v}^2}{d\bar{q}}+\frac{\bar{q}}{4s^2}\bar{w}^2(\bar{q}_0)\frac{(\bar{q}_0^2-4s^2)^2}{(\bar{q}^2-4s^2)^2} , \end{equation} which is integrated to \begin{equation} \begin{array}{ccl} \bar{v}^2(\bar{q})&=&\frac{\bar{w}^2(\bar{q}_0)}{4s^2}(\bar{q}_0^2-4s^2)^2\left[\frac{1}{\bar{q}^2-4s^2}-\frac{1}{\bar{q}^2_0-4s^2}\right]+\bar{v}^2(\bar{q}_0)\\ &=&\frac{\bar{w}^2(\bar{q}_0)}{4s^2}(\bar{q}^2_0-\bar{q}^2)\frac{\bar{q}_0^2-4s^2}{\bar{q}^2-4s^2}+\bar{v}^2(\bar{q}_0), \end{array} \end{equation} and therefore we obtain \begin{equation} \bar{v}(\bar{q})=\pm\sqrt{\frac{\bar{w}^2(\bar{q}_0)}{4s^2}\frac{(\bar{q}_0^2-4s^2)^2}{\bar{q}^2-4s^2}-\frac{\bar{w}^2(\bar{q}_0)}{4s^2}(\bar{q}_0^2-4s^2)+\bar{v}^2(\bar{q}_0)}. \label{eq:v-col} \end{equation} The coordinate velocity can be calculated from $\bar{v}(\bar{q})$ and $\bar{w}(\bar{q})$, \begin{equation} \frac{d\bar{q}}{d\bar{t}}=\frac{1}{\bar{w}(\bar{q}_0)}\frac{\bar{q}^2-4s^2}{\bar{q}^2_0-4s^2}\bar{v}(\bar{q}) =\pm\sqrt{\left(\frac{\bar{v}^2(\bar{q}_0)}{\bar{w}^2(\bar{q}_0)}-\frac{\bar{q}_0^2-4s^2}{4s^2}\right)\left(\frac{\bar{q}^2-4s^2}{\bar{q}^2_0-4s^2}\right)^2+\frac{\bar{q}^2-4s^2}{4s^2}} .
\label{eq:dq/dt-col} \end{equation} When the normalization condition for the velocity vector \begin{equation} \begin{array}{ccl} V^mV_m&=&\bar{w}^2g_{00}+\bar{v}^2g_{11}\\ &=&(\frac{\bar{q}^2}{4s^2}-1)\bar{w}^2(\bar{q}_0)\frac{(\bar{q}_0^2-4s^2)^2}{(\bar{q}^2-4s^2)^2} -\frac{\bar{w}^2(\bar{q}_0)}{4s^2}(\bar{q}_0^2-4s^2)^2\left[\frac{1}{\bar{q}^2-4s^2}-\frac{1}{\bar{q}^2_0-4s^2}\right]-\bar{v}^2(\bar{q}_0)\\ &=&\frac{\bar{w}^2(\bar{q}_0)}{4s^2}(\bar{q}_0^2-4s^2)-\bar{v}^2(\bar{q}_0)=1 \end{array}\label{eq:normalization-col} \end{equation} is applied to Eq.~(\ref{eq:dq/dt-col}), the coordinate velocity reduces to \begin{equation} \frac{d\bar{q}}{d\bar{t}}=\pm\sqrt{\frac{\bar{q}^2-4s^2}{4s^2}-\frac{1}{\bar{w}^2(\bar{q}_0)}\left(\frac{\bar{q}^2-4s^2}{\bar{q}^2_0-4s^2}\right)^2}. \end{equation} If $\bar{q}^2<4s^2$, which corresponds to the Euclidean region $x^0<x^1<x^1_c=\sqrt{(x^0)^2+4s^2}$, the coordinate velocity $\frac{d\bar{q}}{d\bar{t}}$ becomes imaginary, which is not allowed in classical mechanics. However, when the motion is viewed as that of a non-relativistic particle with $E=0$ under the effective potential energy $U(\bar{q})$ given by \begin{equation} U(\bar{q})=-\frac{m}{2}\left[\frac{\bar{q}^2-4s^2}{4s^2}-\frac{1}{\bar{w}^2(\bar{q}_0)}\left(\frac{\bar{q}^2-4s^2}{\bar{q}^2_0-4s^2}\right)^2\right], \end{equation} one can hardly imagine that the Euclidean region can be ignored in quantum mechanics. The potential $U(\bar{q})$ is visualized in Fig.~\ref{fig:U(q) and dq/dt(q)-col}. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.7]{Ucols05.eps} & \includegraphics[scale=0.7]{v2cols05.eps} \\ (a) & (b) \end{tabular} \end{center} \caption{On the colliding brane: (a) $U(\bar{q})$ and (b) $\frac{d\bar{q}}{d\bar{t}}$ vs.
$\bar{q}$, with $\bar{p}_{\bar{t}}/m=3$, $s=\frac{1}{2}$ and $m=1$.} \label{fig:U(q) and dq/dt(q)-col} \end{figure} The allowed region ($U<0$) is \begin{equation} 4s^2<\bar{q}^2\leq\frac{\bar{w}^2(\bar{q}_0)(\bar{q}^2_0-4s^2)^2}{4s^2}+4s^2. \end{equation} Since this inequality also holds at $\bar{q}=\bar{q}_0$, the following relation is deduced: \begin{equation} \bar{w}^2(\bar{q}_0)(\bar{q}_0^2-4s^2)\geq4s^2. \end{equation} However, this is nothing but the normalization condition Eq.~(\ref{eq:normalization-col}). It can be reexpressed in terms of the conserved quantity $\frac{d\bar{t}}{d\tau}g_{00}\equiv \bar{p}_{\bar{t}}/m= \frac{\bar{w}(\bar{q})(\bar{q}^2-4s^2)}{4s^2}\geq 1$. The two boundary points $(\bar{q}_{min},\bar{q}_{max})=(2s,2s\sqrt{(\bar{p}_{\bar{t}}/m)^2+1})$ and the minimum potential point $\bar{q}_c=2s\sqrt{(\bar{p}_{\bar{t}}/m)^2/2+1}$ can be written in terms of $\bar{p}_{\bar{t}}$, $m$ and $s$. The speed of a massive particle on the brane as seen by an observer in the bulk is \begin{equation} \vec{v}^2_{bkc}=\left[\left(\frac{d\bar{q}}{d\bar{t}}\right)^2 +\left(\frac{dx^1}{d\bar{t}}\right)^2\right]\left(\frac{d\bar{t}}{dx^0}\right)^2. \end{equation} Using the transformations Eq.~(\ref{eq:parameters-col}), \begin{equation} \left\{ \begin{array}{l} x^0=\bar{q}\sinh\frac{\bar{t}}{2s},\\ x^1=\bar{q}\cosh\frac{\bar{t}}{2s}, \end{array}\right. \end{equation} and hence \begin{equation} \left\{ \begin{array}{l} \frac{dx^0}{d\bar{t}}=\frac{d\bar{q}}{d\bar{t}}\frac{x^0}{\bar{q}}+\frac{x^1}{2s},\\ \frac{dx^1}{d\bar{t}}=\frac{d\bar{q}}{d\bar{t}}\frac{x^1}{\bar{q}}+\frac{x^0}{2s}, \end{array}\right.
\end{equation} we find that \begin{equation} \vec{v}^2_{bkc}=\frac{1+ \left(\frac{d\bar{q}}{d\bar{t}}\right)^2\left(\frac{x^1}{\bar{q}}\right)^2+\left(\frac{x^0}{2s}\right)^2 +\frac{2x^0x^1}{2s\bar{q}}\left(\frac{d\bar{q}}{d\bar{t}}\right)} { \left(\frac{d\bar{q}}{d\bar{t}}\right)^2\left(\frac{x^0}{\bar{q}}\right)^2+\left(\frac{x^1}{2s}\right)^2 +\frac{2x^0x^1}{2s\bar{q}}\left(\frac{d\bar{q}}{d\bar{t}}\right)}<1 \end{equation} is required for a massive particle. This inequality becomes \begin{equation} 1+\left(\frac{d\bar{q}}{d\bar{t}}\right)^2-\frac{\bar{q}^2-4s^2}{4s^2} =-\frac{1}{\bar{w}^2(\bar{q}_0)}\left(\frac{\bar{q}^2-4s^2}{\bar{q}^2_0-4s^2}\right)^2<0, \end{equation} indeed showing that $\vec{v}^2_{bkc}$ is manifestly less than 1 even in the Euclidean region. We plot the coordinate velocity $\frac{d\bar{q}}{d\bar{t}}$ versus $\bar{t}$ and the corresponding trajectory $\bar{q}(\bar{t})$ in Fig.~\ref{fig:dq(t)/dt and q(t)-col}. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.7]{vcols05.eps} & \includegraphics[scale=0.7]{xcols05.eps} \\ (a) & (b) \end{tabular} \end{center} \caption{On the colliding brane: (a) $\frac{d\bar{q}}{d\bar{t}}$ vs. $\bar{t}$ and (b) $\bar{q}(\bar{t})$, with $\bar{p}_{\bar{t}}/m=3$ and $s=\frac{1}{2}$.} \label{fig:dq(t)/dt and q(t)-col} \end{figure} \section{Conclusion and Discussion}\label{sec:Conclusion and Discussion} A wave solution of the Nambu-Goto action with codimension one in the static gauge has been studied. The solution represents two-wave scattering as in Fig.~\ref{fig:log-sol}. An interesting feature is that the single solution is naturally decomposed by its singularities into regions having two different geometries. That is, the whole brane consists of the central brane and the outside branes, representing the Robertson-Walker type universe and the colliding branes, as seen in Fig.~\ref{fig:brane_universe} and Fig.~\ref{fig:colliding-branes}, respectively.
These two geometries are related by a simple coordinate exchange, but the dynamics of particles on them turns out to be quite different in the end. The former describes the expanding and shrinking universe connected by the ``Big Bounce", which does not have ``Big Bang" singularities in its geometry. The latter represents two branes which collide and reconnect with each other. In particular, it contains two different geometries separated by a signature change in the metric. The detailed classical dynamics of a massive particle on the branes has been provided. In the end, it has been verified that the speed of a massive particle on the brane cannot exceed the speed of light. As given in Appx.~\ref{appx:Energy of brane}, the normalized energy of the brane in the bulk turns out to be infinite due to the Big Bounce universe and the Euclidean parts of the colliding branes. Only the Minkowskian parts of the colliding branes have a finite energy. \medskip Before closing this paper, several discussions are addressed here. The colliding branes consist of Euclidean and Minkowskian regions, as studied previously in \cite{Gibbons:2004dz}. Dealing with the Euclidean region is subtle, since it makes the action and the energy density imaginary, but it is a part of an analytically continuous solution which leads to a smooth energy flow. Hopefully, the two infinities from the Big Bounce universe and the Euclidean brane could accidentally cancel in a certain circumstance if the definition of the energy is extended to a complex space. The solutions found in this paper are not the most general ones for waves propagating in two directions, because we have first assumed that solutions satisfy the Klein-Gordon equation $\partial^2 \phi=0$ in the form of $\phi(x) = f(k\cdot x) + g(p\cdot x)$.
When two waves are created simultaneously in well separated regions of a brane, each of them keeps the same form (\ref{eq:single wave}) without interference, and the solution can be approximately written as a sum of two waves as long as they do not overlap much. However, once those waves get close to each other such an approximation is no longer valid. Therefore, we have to solve the generic case of two colliding waves with the asymptotic boundary conditions of two well separated waves. This should be done without any assumptions by solving Eq.~(\ref{eq:EOM}) directly, in which non-trivial cancellations between contributions from the first and second terms may occur. We have studied the case of a brane of codimension one in this paper, corresponding to a domain wall. Extension to the higher codimensional case remains a future problem. Especially in the case of codimension two, it will describe waves on (cosmic) strings. Several solutions to string equations of motion were extensively studied in the literature, in particular, in relation with the rigidity of strings \cite{Curtright:1986vg}. This case should be pursued further, which will also be important in the study of cosmic strings in cosmology \cite{Garfinkle:1990jq,Siemens:2001dx}. Fundamental strings (or branes) ending on a brane can be realized as classical solutions (or solitons) in the effective field theory (typically the Nambu-Goto or the Dirac-Born-Infeld action) on the host brane \cite{Townsend:1999hi}. This point of view is well established for static configurations of bound states of strings and branes. Endpoints of the fundamental strings are in fact singular spikes in the effective theory of the host brane, which are called BIons \cite{Callan:1997kz}. In our solution, the two spikes are moving at the speed of light as in Fig.~\ref{fig:log-sol}. They may realize moving branes ending on a host $p$-brane from both of its sides.
The Big Bounce brane solution found in this paper is isotropic only in $(1+1)$ dimensions. Although in higher than $(1+1)$ dimensions our solution gives an anisotropic expansion, a search for a solution having an isotropic expansion in higher dimensions would be an interesting future project. On the other hand, the colliding brane solution may give an interesting model for the brane world scenario. It may be applied to the ekpyrotic universe scenario of colliding branes \cite{Khoury:2001wf}. We have studied the classical dynamics of particular branes and of a massive particle on those branes. One direction of future research is quantization. First, the motion of a massive particle can be quantized in the usual manner of first quantization. Second, massless or massive fields localized on the brane can be considered and quantized. In particular, it is interesting to see if there is particle creation or annihilation for the second-quantized fields in the curved space \cite{Birell-Davis} induced on the brane, especially for the case of the Big Bounce universe solution. Third, the quantization of the brane oscillation $\phi(x)$ itself has been studied, for example, in \cite{Lee:2007hp}. This can be applied to an oscillation around a non-trivial background such as the one found in this paper. Finally, the brane oscillation field $\phi(x)$ becomes a brane vector if coupled to bulk gravity \cite{Clark:2006pd}. Phenomenological consequences of its coupling to the Standard Model fields localized on the brane have been studied \cite{Clark:2007wj}. It is an interesting direction to investigate what happens for oscillations around particular backgrounds such as the solution found in this paper.
\section{Introduction} What is the Cosmological Constant? How can it be computed? These are some of the many puzzling questions which are still unsolved. Basically, the Cosmological Constant can be connected to the energy of the vacuum. However, the absence of a complete Quantum Gravitational theory increases the number of questions instead of giving answers. General Relativity (GR) is the best theory explaining the behavior of the gravitational field, including also the cosmological constant. However, GR fails to describe the gravitational field in the quantum range. Despite this problem, in GR there exists a quantization procedure known as the Wheeler-De Witt (WDW) equation \cite{DeWitt} which encodes some aspects of the quantum properties of the gravitational field, including the cosmological constant. We say \textquotedblleft some\textquotedblright, because a complete solution of the WDW equation does not exist and one needs to reduce the degree of difficulty by fixing a background and freezing some degrees of freedom. The WDW equation is the quantum version of the classical Hamiltonian constraint representing the invariance under time reparametrization. Its derivation is a consequence of the Arnowitt-Deser-Misner (ADM) decomposition \cite{ADM} of spacetime based on the following line element \begin{equation} ds^{2}=g_{\mu\nu}\left( x\right) dx^{\mu}dx^{\nu}=\left( -N^{2}+N_{i}N^{i}\right) dt^{2}+2N_{j}dtdx^{j}+g_{ij}dx^{i}dx^{j}, \end{equation} where $N$ is the lapse function and $N_{i}$ the shift function.
In terms of the ADM variables, the four-dimensional scalar curvature $\mathcal{R}$ can be decomposed in the following way
\begin{equation}
\mathcal{R}=R+K_{ij}K^{ij}-\left( K\right) ^{2}-2\nabla_{\mu}\left( Ku^{\mu}+a^{\mu}\right) ,\label{R}
\end{equation}
where
\begin{equation}
K_{ij}=-\frac{1}{2N}\left[ \partial_{t}g_{ij}-N_{i|j}-N_{j|i}\right]
\end{equation}
is the second fundamental form, $K=g^{ij}K_{ij}$ is its trace, $R$ is the three-dimensional scalar curvature and $g$ is the determinant of the three-dimensional metric. The last term in $\left( \ref{R}\right) $ represents the boundary term contribution, where the four-velocity $u^{\mu}$ is the timelike unit vector normal to the spacelike hypersurfaces ($t$=constant) denoted by $\Sigma_{t}$, and $a^{\mu}=u^{\alpha}\nabla_{\alpha}u^{\mu}$ is the acceleration of the timelike normal $u^{\mu}$. Thus
\begin{equation}
\mathcal{L}\left[ N,N_{i},g_{ij}\right] =\frac{1}{2\kappa}\sqrt{-{}^{4}g}\left( \mathcal{R}-2\Lambda\right) =\frac{N}{2\kappa}\sqrt{g}\left[ K_{ij}K^{ij}-K^{2}+R-2\Lambda-2\nabla_{\mu}\left( Ku^{\mu}+a^{\mu}\right) \right] \label{Lag}
\end{equation}
represents the gravitational Lagrangian density, where $\kappa=8\pi G$ with $G$ Newton's constant, and for the sake of generality we have also included a cosmological constant $\Lambda$.
After a Legendre transformation, the WDW equation simply becomes
\begin{equation}
\mathcal{H}\Psi=\left[ \left( 2\kappa\right) G_{ijkl}\pi^{ij}\pi^{kl}-\frac{\sqrt{g}}{2\kappa}\left( R-2\Lambda\right) \right] \Psi=0,\label{WDWO}
\end{equation}
where $G_{ijkl}$ is the super-metric and where the conjugate super-momentum $\pi^{ij}$ is defined as
\begin{equation}
\pi^{ij}=\frac{\delta\mathcal{L}}{\delta\left( \partial_{t}g_{ij}\right) }=\left( g^{ij}K-K^{ij}\right) \frac{\sqrt{g}}{2\kappa}.\label{mom}
\end{equation}
Note that $\mathcal{H}=0$ represents the classical constraint which guarantees the invariance under time reparametrization. The other classical constraint represents the invariance under spatial diffeomorphisms and is described by $\pi_{|j}^{ij}=0$, where the vertical stroke \textquotedblleft$\vert$\textquotedblright\ denotes the covariant derivative with respect to the $3D$ metric $g_{ij}$. Solving Eq.$\left( \ref{WDWO}\right) $ allows one to extract information on the early universe and on the cosmological constant. Of course, the form of the solution depends on the background one considers. In this paper, we fix our attention on the Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) metric without matter fields. To the reader, this choice could seem a restriction; however, one has to consider that in the very early universe, before the inflationary phase, it is likely that all the quantum information is carried by the gravitational field, because of its non-linear nature. However, even in this simplified vision many problems arise, especially for the inflationary epoch. In recent years, the idea of modifying GR to cure some of its diseases has been considered.
On one side, the so-called $f\left( R\right) $ theories have been taken under examination to cure some problems in the infrared (IR) region\cite{f(R)}, while on the other hand modifications on short scales, allowing a power-counting renormalizable theory in the ultraviolet (UV), have been proposed by Ho\v{r}ava, motivated by the Lifshitz theory in solid state physics\cite{Horava}\cite{Lifshitz}. This theory is dubbed Ho\v{r}ava-Lifshitz (HL) theory and should recover general relativity in the IR limit. Nevertheless, the price to pay to obtain a renormalizable theory in the HL proposal is that we have no general covariance or, in other words, Lorentz symmetry is broken. Another proposal which distorts gravity in the UV is Gravity's Rainbow\cite{MagSmo}. Gravity's Rainbow has some appealing features to explain inflation\cite{AM}. In a series of papers, one of us used Gravity's Rainbow to cure some divergences appearing in Zero Point Energy (ZPE) calculations, at least to one loop\cite{GMLM}. The final ZPE result has been interpreted as an induced Cosmological Constant obtained as an eigenvalue of an appropriate Sturm-Liouville problem\footnote{See Ref.\cite{GMLM1} for other applications in the context of Gravity's Rainbow.}. It is interesting to note that the same idea has been applied in a HL theory\cite{RemoHL}\footnote{See also Ref.\cite{OBCZ,AAEEAY}. See also Ref.\cite{RGES} to see how Gravity's Rainbow and HL theory can be connected.} in a FLRW background; the final result is that non-trivial eigenvalues have been found, depending on the parameters of the theory. Note that in GR, as we will show extensively in the next section, the cosmological constant cannot be considered as an eigenvalue of any Sturm-Liouville problem for the FLRW background in a mini-superspace approximation without matter fields. It appears, therefore, that distortions of GR allow new results that otherwise would not be possible.
It remains to consider another distortion connected with the previous ones: a Varying Speed of Light (VSL) theory\cite{harko,moffat,Albrecht,Barrow1,Barrow2,Barrow3}. In this approach, one allows the speed of light to change in some specified way, in an attempt to solve the major cosmological issues of modern theoretical physics. It is well known that one of the major features of Einstein's theory of relativity is that the speed of light in a vacuum is always constant. However, the cosmological problems that led to the theoretical introduction of dark matter and dark energy into modern cosmology have motivated some physicists to look for solutions in other directions, including the variation of the speed of light. In VSL, it is supposed that light travels faster in the early periods of the existence of the Universe and, for this reason, it could solve problems related to the inflationary phase (flatness, horizon, homogeneity, etc.)\cite{Barrow1,Kolb90,guth,linde,AA,veneziano}. Of course, this hypothesis breaks Lorentz invariance. The VSL model has been embedded within the general framework of the time-varying fine structure constant theory and reformulated as a dielectric vacuum theory \cite{Barrow2}. Moreover, isotropy and homogeneity problems may find their appropriate solutions through this mechanism \cite{ellis,magueio,moff}. Recently, quantum cosmological aspects of VSL models have been studied to see if the \textquotedblleft Tunneling from Nothing\textquotedblright\cite{Vilenkin}\ and the \textquotedblleft Hartle-Hawking No-boundary proposal\textquotedblright\cite{HH}\ can be better approached in this context\cite{harko1,sho}. The purpose of this paper is to repeat the calculation of Ref.\cite{RemoHL} in a VSL context to see if there are non-trivial eigenvalues of an appropriate Sturm-Liouville problem, which will be interpreted as a Cosmological Constant induced by quantum fluctuations of the scale factor.
The paper is organized as follows. In Sec. \ref{p1} we discuss the Wheeler-DeWitt equation for the Friedmann-Lema\^{\i}tre-Robertson-Walker spacetime, while in Sec. \ref{p2} we show how it is possible to derive the Wheeler-DeWitt equation for the Friedmann-Lema\^{\i}tre-Robertson-Walker spacetime in the presence of a varying speed of light. Conclusions are drawn in Sec.\ref{p3}.
\section{The Wheeler-DeWitt equation for the Friedmann-Lema\^{\i}tre-Robertson-Walker space-time}
\label{p1}
A homogeneous, isotropic and closed universe is represented by the FLRW line element
\begin{equation}
ds^{2}=-N^{2}dt^{2}+a^{2}\left( t\right) d\Omega_{3}^{2}, \label{FRW}
\end{equation}
where
\begin{equation}
d\Omega_{3}^{2}=\gamma_{ij}dx^{i}dx^{j} \label{domega}
\end{equation}
is the line element on the three-sphere, $N$ is the lapse function and $a(t)$ denotes the scale factor. Let us consider a very simple mini-superspace model described by the metric of Eq.$\left( \ref{FRW}\right) $. In this background, the Ricci curvature tensor and the scalar curvature read simply
\begin{equation}
R_{ij}=\frac{2}{a^{2}\left( t\right) }\gamma_{ij}\qquad\mathrm{and}\qquad R=\frac{6}{a^{2}\left( t\right) }~,
\end{equation}
respectively. The Einstein-Hilbert action in $(3+1)$-dim is
\begin{equation}
S=\frac{1}{16\pi G}\int_{\Sigma\times I}\mathcal{L}~dt~d^{3}x=\frac{1}{16\pi G}\int_{\Sigma\times I}N\sqrt{g}\left[ K^{ij}K_{ij}-K^{2}+R-2\Lambda\right] ~dt~d^{3}x~, \label{action}
\end{equation}
with $\Lambda$ the cosmological constant, $K_{ij}$ the extrinsic curvature and $K$ its trace. Using the line element, Eq.~$\left( \ref{FRW}\right) $, the above written action, Eq.~$\left( \ref{action}\right) $, becomes
\begin{equation}
S=-\frac{3\pi}{4G}\int_{I}\left[ \dot{a}^{2}a-a+\frac{\Lambda}{3}a^{3}\right] dt~,
\end{equation}
where we have computed the volume associated with the three-sphere, namely $V_{3}=2\pi^{2}$, and set $N=1$.
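As a quick numerical sanity check of the reduction above, one can verify the three-sphere volume $V_{3}=2\pi^{2}$ that converts the spatial integral into the one-dimensional mini-superspace action. The sketch below uses a standard hyperspherical parametrization of the unit three-sphere (our own choice of coordinates, not spelled out in the text):

```python
# Illustrative check: the volume of the unit three-sphere is 2*pi^2.
# Hyperspherical coordinates (chi, theta, phi) with volume element
# sin^2(chi)*sin(theta) dchi dtheta dphi are an assumption of this sketch.
import numpy as np
from scipy import integrate

def three_sphere_volume():
    # integrand f(phi, theta, chi): sqrt of the metric determinant
    f = lambda phi, theta, chi: np.sin(chi)**2 * np.sin(theta)
    vol, _ = integrate.tplquad(f,
                               0, np.pi,        # chi
                               0, np.pi,        # theta
                               0, 2 * np.pi)    # phi
    return vol

print(three_sphere_volume(), 2 * np.pi**2)
```

The two printed numbers agree to quadrature accuracy, confirming the factor $V_{3}=2\pi^{2}$ used in the reduced action.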
The canonical momentum reads
\begin{equation}
\pi_{a}=\frac{\delta S}{\delta\dot{a}}=-\frac{3\pi}{2G}\dot{a}a~,
\end{equation}
and the resulting Hamiltonian density is
\begin{align}
\mathcal{H} & =\pi_{a}\dot{a}-\mathcal{L}\nonumber\\
& =-\frac{G}{3\pi a}\pi_{a}^{2}-\frac{3\pi}{4G}a+\frac{3\pi}{4G}\frac{\Lambda}{3}a^{3}~.\label{H0}
\end{align}
Following the canonical quantization prescription, we promote $\pi_{a}$ to a momentum operator, setting
\begin{equation}
\pi_{a}^{2}\rightarrow-a^{-q}\left[ \frac{\partial}{\partial a}a^{q}\frac{\partial}{\partial a}\right] ,\label{ordering}
\end{equation}
where we have introduced a factor-ordering ambiguity parametrized by $q$. The generalization to $k=0,-1$ is straightforward. The WDW equation for such a metric is
\begin{gather}
H\Psi\left( a\right) =\left[ -a^{-q}\left( \frac{\partial}{\partial a}a^{q}\frac{\partial}{\partial a}\right) +\frac{9\pi^{2}}{4G^{2}}\left( a^{2}-\frac{\Lambda}{3}a^{4}\right) \right] \Psi\left( a\right) =0\,,\nonumber\\
\left[ -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+\frac{9\pi^{2}}{4G^{2}}\left( a^{2}-\frac{\Lambda}{3}a^{4}\right) \right] \Psi\left( a\right) =0.\label{WDW_0}
\end{gather}
It represents the quantum version of the invariance with respect to time reparametrization. If we define the reference length $a_{0}=\sqrt{3/\Lambda}$, then Eq.$\left( \ref{WDW_0}\right) $ assumes the familiar form of a one-dimensional Schr\"{o}dinger equation for a particle moving in the potential
\begin{equation}
U\left( a\right) =\frac{9\pi^{2}a_{0}^{2}}{4G^{2}}\left[ \left( \frac{a}{a_{0}}\right) ^{2}-\left( \frac{a}{a_{0}}\right) ^{4}\right] \,,\label{U(a)}
\end{equation}
with zero total energy.
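The sign structure of the potential $U(a)$ can be checked numerically; the short sketch below, in the assumed units $G=a_{0}=1$ (our choice for illustration), confirms that $U>0$ for $0<a<a_{0}$ and $U<0$ for $a>a_{0}$:

```python
# Illustrative check of the potential U(a): positive (classically
# forbidden) inside 0 < a < a_0, negative (classically allowed) outside.
# Units G = a_0 = 1 are an assumption of this sketch.
import numpy as np

G, a0 = 1.0, 1.0

def U(a):
    return (9 * np.pi**2 * a0**2 / (4 * G**2)) * ((a / a0)**2 - (a / a0)**4)

inside  = np.linspace(0.01, 0.99, 50)   # classically forbidden region
outside = np.linspace(1.01, 5.0, 50)    # classically allowed region
print(np.all(U(inside) > 0), np.all(U(outside) < 0))
```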
The potential $U\left( a\right) $ resembles a potential well which is unbounded from below. When $0<a<a_{0}$, Eq.~$\left( \ref{U(a)}\right) $ implies $U\left( a\right) >0$, which corresponds to the classically forbidden region, while for $a>a_{0}$ one gets $U\left( a\right) <0$, which corresponds to the classically allowed region. It is interesting to note that for the special case of the operator ordering $q=-1$, one can determine an exact solution\cite{Vilenkin}. This can be easily verified by introducing the function
\[
z\left( a\right) =\left( \frac{3\pi a_{0}^{2}}{4G}\right) ^{\frac{2}{3}}\left( 1-\frac{a^{2}}{a_{0}^{2}}\right) =z_{0}\left( 1-\frac{a^{2}}{a_{0}^{2}}\right) ,
\]
in terms of which the solution can be written as a combination of Airy functions, namely
\begin{equation}
\Psi\left( a\right) =\alpha Ai\left( z\right) +\beta Bi\left( z\right) .\label{Airy}
\end{equation}
However, the wave function $\left( \ref{Airy}\right) $ cannot be normalized in the following sense
\begin{equation}
\int_{0}^{\infty}da\,a^{q}\Psi^{\ast}\left( a\right) \Psi\left( a\right) .\label{Norm}
\end{equation}
The same happens for the other special value $q=3$. Even if the WDW equation $\left( \ref{WDW_0}\right) $ has a zero energy eigenvalue, it also has a hidden structure. Indeed, Eq.$\left( \ref{WDW_0}\right) $ has the structure of a Sturm-Liouville eigenvalue problem with the cosmological constant as eigenvalue. We remind the reader that a Sturm-Liouville differential equation is defined by
\begin{equation}
\frac{d}{dx}\left( p\left( x\right) \frac{dy\left( x\right) }{dx}\right) +q\left( x\right) y\left( x\right) +\lambda w\left( x\right) y\left( x\right) =0\label{SL}
\end{equation}
and the normalization is defined by
\begin{equation}
\int_{a}^{b}dx\,w\left( x\right) y^{\ast}\left( x\right) y\left( x\right) .
\end{equation}
In the case of the FLRW model we have the following correspondence
\begin{align}
p\left( x\right) & \rightarrow a^{q}\left( t\right) \,,\nonumber\\
q\left( x\right) & \rightarrow\left( \frac{3\pi}{2G}\right) ^{2}a^{q+2}\left( t\right) \,,\nonumber\\
w\left( x\right) & \rightarrow a^{q+4}\left( t\right) \,,\nonumber\\
y\left( x\right) & \rightarrow\Psi\left( a\right) \,,\nonumber\\
\lambda & \rightarrow\frac{\Lambda}{3}\left( \frac{3\pi}{2G}\right) ^{2}\,,
\end{align}
and the normalization becomes
\begin{equation}
\int_{0}^{\infty}da\,a^{q+4}\Psi^{\ast}\left( a\right) \Psi\left( a\right) .\label{Norm1}
\end{equation}
It is a standard procedure to convert the Sturm-Liouville problem $\left( \ref{SL}\right) $ into a variational problem of the form\footnote{Actually, the standard variational procedure prefers the following form
\begin{equation}
F\left[ y\left( x\right) \right] =\frac{-\left[ y^{\ast}\left( x\right) p\left( x\right) \frac{d}{dx}y\left( x\right) \right] _{a}^{b}+\int_{a}^{b}dx\left[ p\left( x\right) \left\vert \frac{d}{dx}y\left( x\right) \right\vert ^{2}-q\left( x\right) \left\vert y\left( x\right) \right\vert ^{2}\right] }{\int_{a}^{b}dx\,w\left( x\right) y^{\ast}\left( x\right) y\left( x\right) }\,,
\end{equation}
with appropriate boundary conditions.}
\begin{equation}
F\left[ y\left( x\right) \right] =\frac{-\int_{a}^{b}dx\,y^{\ast}\left( x\right) \left[ \frac{d}{dx}\left( p\left( x\right) \frac{d}{dx}\right) +q\left( x\right) \right] y\left( x\right) }{\int_{a}^{b}dx\,w\left( x\right) y^{\ast}\left( x\right) y\left( x\right) }\,,\label{Funct}
\end{equation}
with boundary conditions to be specified.
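The exact $q=-1$ solution quoted above in terms of Airy functions can also be verified numerically. The sketch below checks by finite differences, in the assumed units $G=a_{0}=1$ (so $\Lambda=3$), that $\Psi=Ai(z(a))$ annihilates the $q=-1$ WDW operator:

```python
# Numerical verification (sketch) that for ordering q = -1 the WDW
# equation is solved by Airy functions of z(a) = z0*(1 - a^2/a0^2),
# z0 = (3*pi*a0^2/(4*G))**(2/3).  We evaluate the residual of
# -Psi'' + (1/a)*Psi' + (9*pi^2/(4*G^2))*(a^2 - a^4/a0^2)*Psi
# by central finite differences; G = a0 = 1 is an assumption.
import numpy as np
from scipy.special import airy

G, a0 = 1.0, 1.0
z0 = (3 * np.pi * a0**2 / (4 * G))**(2.0 / 3.0)

def psi(a):
    z = z0 * (1 - a**2 / a0**2)
    return airy(z)[0]              # Ai(z)

a = np.linspace(0.2, 1.5, 200)
h = 1e-4
d1 = (psi(a + h) - psi(a - h)) / (2 * h)
d2 = (psi(a + h) - 2 * psi(a) + psi(a - h)) / h**2
residual = -d2 + d1 / a + (9 * np.pi**2 / (4 * G**2)) * (a**2 - a**4 / a0**2) * psi(a)
print(np.max(np.abs(residual)))    # ~0 up to finite-difference error
```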
If $y\left( x\right) $ is an eigenfunction of $\left( \ref{SL}\right) $, then
\begin{equation}
\lambda=\frac{-\int_{a}^{b}dx\,y^{\ast}\left( x\right) \left[ \frac{d}{dx}\left( p\left( x\right) \frac{d}{dx}\right) +q\left( x\right) \right] y\left( x\right) }{\int_{a}^{b}dx\,w\left( x\right) y^{\ast}\left( x\right) y\left( x\right) }\,,
\end{equation}
is the eigenvalue; otherwise
\begin{equation}
\lambda_{1}=\min_{y\left( x\right) }\frac{-\int_{a}^{b}dx\,y^{\ast}\left( x\right) \left[ \frac{d}{dx}\left( p\left( x\right) \frac{d}{dx}\right) +q\left( x\right) \right] y\left( x\right) }{\int_{a}^{b}dx\,w\left( x\right) y^{\ast}\left( x\right) y\left( x\right) }\,.
\end{equation}
The minimum of the functional $F\left[ y\left( x\right) \right] $ corresponds to a solution of the Sturm-Liouville problem $\left( \ref{SL}\right) $ with eigenvalue $\lambda$. In the mini-superspace approach with a FLRW background, one finds \cite{RemoHL}
\begin{equation}
\frac{\int\mathcal{D}a\,a^{q}\Psi^{\ast}\left( a\right) \left[ -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+\frac{9\pi^{2}}{4G^{2}}a^{2}\right] \Psi\left( a\right) }{\int\mathcal{D}a\,a^{q+4}\Psi^{\ast}\left( a\right) \Psi\left( a\right) }=\frac{3\Lambda\pi^{2}}{4G^{2}}.\label{WDW_1}
\end{equation}
The best form of the trial wave function can be guessed by looking at the asymptotic behavior of Eq.$\left( \ref{WDW_0}\right) $.
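The variational logic behind the functional $F[y]$ can be illustrated on a solvable toy Sturm-Liouville problem (our own toy model, not the paper's): minimizing the Rayleigh quotient of $-y''+x^{2}y=\lambda y$ over Gaussian trials $y=\exp(-\beta x^{2}/2)$ recovers the exact ground-state eigenvalue $\lambda=1$ at $\beta=1$:

```python
# Sketch of the Rayleigh-quotient variational procedure on the toy
# problem -y'' + x^2 y = lam*y (p = w = 1, q(x) = -x^2), with Gaussian
# trial functions y = exp(-beta*x^2/2).  Analytically the quotient is
# (beta^2 + 1)/(2*beta), minimized at beta = 1 with value 1.
import numpy as np
from scipy import integrate
from scipy.optimize import minimize_scalar

def rayleigh(beta):
    y  = lambda x: np.exp(-beta * x**2 / 2)
    yp = lambda x: -beta * x * np.exp(-beta * x**2 / 2)
    # F[y] = int(p*y'^2 - q*y^2) / int(w*y^2)
    num, _ = integrate.quad(lambda x: yp(x)**2 + x**2 * y(x)**2, -np.inf, np.inf)
    den, _ = integrate.quad(lambda x: y(x)**2, -np.inf, np.inf)
    return num / den

res = minimize_scalar(rayleigh, bounds=(0.1, 10.0), method='bounded')
print(res.x, res.fun)   # ~1.0, ~1.0: exact ground-state eigenvalue
```

The same minimization over $\beta$ is what the text carries out for the trial wave functions below, with the cosmological constant playing the role of $\lambda$.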
For $a\rightarrow\infty$, we find
\begin{equation}
\left[ -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+\frac{9\pi^{2}}{4G^{2}}\left( a^{2}-\frac{\Lambda}{3}a^{4}\right) \right] \Psi\left( a\right) \simeq\left[ -\frac{\partial^{2}}{\partial a^{2}}-\frac{3\Lambda\pi^{2}}{4G^{2}}a^{4}\right] \Psi\left( a\right) =0\label{asy}
\end{equation}
and when $a\rightarrow0$, we find
\begin{equation}
\left[ -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+\frac{9\pi^{2}}{4G^{2}}\left( a^{2}-\frac{\Lambda}{3}a^{4}\right) \right] \Psi\left( a\right) \simeq\left[ -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+\frac{9\pi^{2}}{4G^{2}}a^{2}\right] \Psi\left( a\right) =0.
\end{equation}
The latter equation can be solved exactly by a superposition of modified Bessel functions of the first $\left( I_{\nu}\left( x\right) \right) $ and second kind $\left( K_{\nu}\left( x\right) \right) $. We find\cite{Wiltshire}
\begin{equation}
\Psi_{0}\left( a\right) =C_{1}{a}^{\left( 1-q\right) /2}I_{\left( q-1\right) /4}\left( \frac{3\pi}{4G}{a}^{2}\right) +C_{2}{a}^{\left( 1-q\right) /2}K_{\left( q-1\right) /4}\left( \frac{3\pi}{4G}{a}^{2}\right) .\label{sol}
\end{equation}
However, the solution $\left( \ref{sol}\right) $ is exact only for a vanishing eigenvalue. Since our purpose is the evaluation of Eq.$\left( \ref{WDW_1}\right) $, we need a solution for $a\rightarrow0$ which allows for a generic non-vanishing eigenvalue. This is described by
\begin{gather}
\Psi\left( a\right) =C_{1}\exp\left( -\frac{3\pi a^{2}}{8G}\right) M\left( \frac{q+1}{4}-\frac{GE^{2}}{3\pi},\frac{q+1}{2},\frac{3\pi}{4G}{a}^{2}\right) \nonumber\\
+C_{2}\exp\left( -\frac{3\pi a^{2}}{8G}\right) U\left( \frac{q+1}{4}-\frac{GE^{2}}{3\pi},\frac{q+1}{2},\frac{3\pi}{4G}{a}^{2}\right) ,\label{twf}
\end{gather}
where $M\left( a,b,x\right) $ and $U\left( a,b,x\right) $ are the Kummer functions.
For practical purposes, it is useful to rewrite $M\left( a,b,x\right) $ and $U\left( a,b,x\right) $ in terms of $I_{\nu}\left( x\right) $ and $K_{\nu}\left( x\right) $. We find
\begin{equation}
\left\{
\begin{array}[c]{c}
M\left( a+1/2,2a+1,2x\right) =\Gamma\left( 1+a\right) \exp\left( x\right) I_{a}\left( x\right) /\left( x/2\right) ^{a}\\
\\
U\left( a+1/2,2a+1,2x\right) =\exp\left( x\right) K_{a}\left( x\right) /\left( \sqrt{\pi}\left( 2x\right) ^{a}\right)
\end{array}
\right. .\label{Kummer}
\end{equation}
Since $M\left( a,b,x\right) $ is proportional to $I_{a}\left( x\right) $, which is divergent for large $x$, we will fix $C_{1}=0$ to obtain normalizable solutions. Thus, we consider the following form
\begin{equation}
\Psi\left( a\right) =\exp\left( -\frac{\beta{a}^{2}}{2}\right) U\left( \frac{q+1}{4},\frac{q+1}{2},\beta{a}^{2}\right) =\frac{\left( \beta{a}^{2}\right) ^{\left( 1-q\right) /4}}{\sqrt{\pi}}K_{\left( q-1\right) /4}\left( \frac{\beta{a}^{2}}{2}\right) ,\label{Psi}
\end{equation}
for the trial wave function, and we plug $\left( \ref{Psi}\right) $ into Eq.$\left( \ref{WDW_1}\right) $. After an integration over the scale factor $a\left( t\right) $, one gets
\begin{equation}
\frac{3\Lambda\left( \beta\right) \pi^{2}}{4G^{2}}=\frac{\int_{0}^{\infty}dx\,xK_{\left( q-1\right) /4}^{2}\left( x\right) }{2\int_{0}^{\infty}dx\,x^{2}K_{\left( q-1\right) /4}^{2}\left( x\right) }\left( -\beta^{3}+\frac{9\pi^{2}}{4G^{2}}\beta\right) ,\label{LambdaB}
\end{equation}
where $\beta$ is a variational parameter and where we have rescaled the integrals with the help of the results of Appendix \ref{Appe1}.
By imposing that $\Lambda\left( \beta\right) $ be stationary against arbitrary variations of the parameter $\beta$, we obtain
\begin{equation}
\frac{d}{d\beta}\Lambda\left( \beta\right) =\frac{3\int_{0}^{\infty}dx\,xK_{\left( q-1\right) /4}^{2}\left( x\right) }{2\int_{0}^{\infty}dx\,x^{2}K_{\left( q-1\right) /4}^{2}\left( x\right) }\left( \frac{2G}{3\pi}\right) ^{2}\frac{d}{d\beta}\left( -\beta^{3}+\left( \frac{3\pi}{2G}\right) ^{2}\beta\right) =0.
\end{equation}
This implies
\begin{equation}
\beta_{\pm}=\pm\frac{\sqrt{3}\pi}{2G}.\label{sol12}
\end{equation}
Plugging $\left( \ref{sol12}\right) $ into Eq.$\left( \ref{LambdaB}\right) $, one finds
\begin{equation}
\Lambda\left( \beta_{\pm}\right) =4\beta_{\pm}\frac{\Gamma\left( \frac{3+q}{4}\right) \Gamma\left( \frac{5-q}{4}\right) }{\Gamma\left( \frac{5+q}{4}\right) \Gamma\left( \frac{7-q}{4}\right) }.
\end{equation}
It is easy to check that $\beta_{+}$ is a maximum and $\beta_{-}$ is a minimum. However, $\beta_{-}$ is negative independently of $q$, and this leads to a normalization $\left( \ref{Norm1}\right) $ in the range $\left( -\infty,0\right] $, which is unphysical. A further exploration with a pure Gaussian choice, namely
\begin{equation}
\Psi\left( a\right) =\exp\left( -\frac{\beta{a}^{2}}{2}\right) \label{Gauss}
\end{equation}
for $q=0$, leads to
\begin{equation}
3\Lambda\left( \frac{\pi}{2G}\right) ^{2}=\frac{\int\mathcal{D}a\,\Psi^{\ast}\left( a\right) \left[ -\frac{\partial^{2}}{\partial a^{2}}+\left( \frac{3\pi}{2G}\right) ^{2}a^{2}\right] \Psi\left( a\right) }{\int\mathcal{D}a\,a^{4}\Psi^{\ast}\left( a\right) \Psi\left( a\right) }=\frac{2}{3}\left( \beta^{3}+\left( \frac{3\pi}{2G}\right) ^{2}\beta\right) .
\end{equation}
The application of the variational procedure leads to imaginary solutions, and therefore this choice is discarded.
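The stationarity condition leading to $\beta_{\pm}$ can be reproduced numerically. The sketch below evaluates the Bessel moments appearing in $\Lambda(\beta)$ by quadrature for $q=0$ (so $\nu=-1/4$), compares the first against the standard closed form $\int_{0}^{\infty}x\,K_{\nu}^{2}(x)\,dx=\tfrac{1}{2}\Gamma(1+\nu)\Gamma(1-\nu)$, and checks that $\Lambda(\beta)$ is stationary at $\beta_{+}=\sqrt{3}\pi/(2G)$; the units $G=1$ are an assumption of this illustration:

```python
# Numerical sketch of Lambda(beta) for q = 0 (nu = (q-1)/4 = -1/4):
# Lambda(beta) is proportional to (-beta^3 + (3*pi/(2*G))^2 * beta),
# whose positive stationary point is beta_+ = sqrt(3)*pi/(2*G).
import numpy as np
from scipy import integrate, special

G, nu = 1.0, -0.25
I1, _ = integrate.quad(lambda x: x    * special.kv(nu, x)**2, 0, np.inf)
I2, _ = integrate.quad(lambda x: x**2 * special.kv(nu, x)**2, 0, np.inf)

def Lam(beta):
    # Lambda(beta) as read off from the text, up to the stated moments
    return (4 * G**2 / (3 * np.pi**2)) * I1 / (2 * I2) \
           * (-beta**3 + (3 * np.pi / (2 * G))**2 * beta)

# finite-difference derivative of Lambda at the claimed stationary point
dLam = lambda b: (Lam(b + 1e-6) - Lam(b - 1e-6)) / 2e-6
beta_plus = np.sqrt(3) * np.pi / (2 * G)
print(abs(dLam(beta_plus)))   # ~0: Lambda is stationary at beta_+
```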
It remains to test the following assumption
\begin{equation}
\Psi\left( a\right) =\exp\left( -\frac{\beta{a}^{4}}{2}\right) ,\label{a4}
\end{equation}
suggested by the asymptotic behavior $\left( \ref{asy}\right) $. In the next section we will discuss the choice $\left( \ref{a4}\right) $ as a particular case of a VSL theory. One could insist in this direction and try to explore other trial wave functions. However, the choices $\left( \ref{twf}\right) $, $\left( \ref{Gauss}\right) $ and $\left( \ref{a4}\right) $ have been made following the standard criteria for a variational approach, so any other choice could only be a small variation of the trial wave functions mentioned above. We are therefore led to consider a distorted version of the gravitational field induced by a VSL theory.
\section{The Wheeler-DeWitt equation for the Friedmann-Lema\^{\i}tre-Robertson-Walker space-time in the presence of a varying speed of light}
\label{p2}
A VSL cosmology model is described by the following line element
\begin{equation}
ds^{2}=-N^{2}\left( t\right) c^{2}\left( t\right) dt^{2}+a^{2}\left( t\right) d\Omega_{3}^{2},\label{FRWc}
\end{equation}
where $d\Omega_{3}^{2}$ is described by Eq.$\left( \ref{domega}\right) $ and where $c\left( t\right) $ is an arbitrary function of time with the dimensions of a $\left[ \mathrm{length}/\mathrm{time}\right] $. The form of the background is such that the shift function $N^{i}$ vanishes. Thus, the extrinsic curvature reads
\begin{equation}
K_{ij}=-\frac{\dot{g}_{ij}}{2N\left( t\right) c\left( t\right) }=-\frac{\dot{a}\left( t\right) }{N\left( t\right) c\left( t\right) a\left( t\right) }g_{ij},\label{Kij}
\end{equation}
where the dot denotes differentiation with respect to time $t$.
The gravitational action fulfilling Einstein's field equations with the speed of light explicitly written is
\begin{equation}
S=\frac{1}{16\pi G}\int_{\mathcal{M}}c^{4}\left( t\right) \sqrt{-g}R\,d^{4}x,
\end{equation}
where we have used the relationship $dx^{0}=c\left( t\right) dt$. It is easy to write the form of the reduced action of the mini-superspace. Indeed, reintroducing the speed of light into the action $\left( \ref{action}\right) $, one gets in $(3+1)$ dimensions
\begin{equation}
S=\frac{1}{16\pi G}\int_{\Sigma\times I}N\left( t\right) c^{4}\left( t\right) \sqrt{g}\left[ K^{ij}K_{ij}-K^{2}+R-2\Lambda\right] ~dt~d^{3}x~.\label{actionc}
\end{equation}
Using the line element, Eq.~$\left( \ref{FRWc}\right) $, the above written action, Eq.~$\left( \ref{actionc}\right) $, becomes
\begin{equation}
S=-\frac{3\pi}{4G}\int_{I}c^{2}\left( t\right) \left[ \dot{a}^{2}a-ac^{2}\left( t\right) +\frac{\Lambda}{3}a^{3}c^{2}\left( t\right) \right] dt~,
\end{equation}
where we have computed the volume associated with the three-sphere, namely $V_{3}=2\pi^{2}$, and set $N=1$. The canonical momentum reads
\begin{equation}
\pi_{a}=\frac{\delta S}{\delta\dot{a}}=-\frac{3\pi}{2G}\dot{a}\,a\,c^{2}\left( t\right) ~,
\end{equation}
and the resulting Hamiltonian density is
\begin{align}
\mathcal{H} & =\pi_{a}\dot{a}-\mathcal{L}\nonumber\\
& =-\frac{G}{3\pi ac^{2}\left( t\right) }\pi_{a}^{2}-\frac{3\pi}{4G}a\,c^{4}\left( t\right) +\frac{3\pi}{4G}\frac{\Lambda}{3}a^{3}c^{4}\left( t\right) ~.
\end{align}
According to the usual prescription where $\pi_{a}$ is promoted to an operator, we can write
\begin{equation}
\pi_{a}\rightarrow-i\hbar c\left( t\right) \frac{\partial}{\partial a}
\end{equation}
and, introducing the factor-ordering ambiguity,
\begin{equation}
\pi_{a}^{2}\rightarrow-\left( \hbar c\left( t\right) \right) ^{2}a^{-q}\frac{\partial}{\partial a}a^{q}\frac{\partial}{\partial a},
\end{equation}
the WDW equation $\mathcal{H}\Psi=0$ simply becomes
\begin{equation}
\left[ -\frac{G}{3\pi ac^{2}\left( t\right) }\pi_{a}^{2}-\frac{3\pi}{4G}a\,c^{4}\left( t\right) +\frac{3\pi}{4G}\frac{\Lambda}{3}a^{3}c^{4}\left( t\right) \right] \Psi\left( a\right) =0.
\end{equation}
Following\cite{Barrow1,Barrow2,Barrow3}, we assume that
\begin{equation}
c\left( t\right) =c_{0}\left( \frac{a\left( t\right) }{a_{0}}\right) ^{\alpha},\label{c(t)}
\end{equation}
where $a_{0}$ is a reference length scale whose value will be fixed later. If the factor ordering is not distorted by the presence of a varying speed of light, one can further simplify the above equation to obtain
\begin{equation}
\left( -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+U_{c}\left( a\right) \right) \Psi\left( a\right) =0,\label{WDWg}
\end{equation}
where we have set $N=1$ and the quantum potential is defined as
\begin{equation}
U_{c}\left( a\right) =\left( \frac{3\pi}{2G\hbar}\right) ^{2}a^{2}c^{6}\left( t\right) \left( 1-\frac{\Lambda}{3}a^{2}\right) =\left( \frac{3\pi c_{0}^{3}}{2G\hbar a_{0}^{3\alpha}}\right) ^{2}a^{2+6\alpha}\left( 1-\frac{\Lambda}{3}a^{2}\right) .\label{Uac}
\end{equation}
Note that the potential $U_{c}\left( a\right) $ vanishes at the same points where $U\left( a\right) $ has its roots.
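A quick numerical check of the last remark: with the assumed units $\hbar=G=c_{0}=1$, $\Lambda=3$ (so that $a_{0}=\sqrt{3/\Lambda}=1$), and the sample exponent $\alpha=-1$, the VSL potential $U_{c}(a)$ vanishes at $a=a_{0}$, where $U(a)$ has its non-trivial root:

```python
# Illustrative check that U_c(a) shares the non-trivial root a = a_0
# of U(a).  Units hbar = G = c0 = 1, Lambda = 3 (so a_0 = 1), and the
# sample value alpha = -1 are assumptions of this sketch.
import numpy as np

Lam, alpha = 3.0, -1.0
a_root = np.sqrt(3 / Lam)          # a_0 = sqrt(3/Lambda) = 1

def U(a):
    return (9 * np.pi**2 / 4) * (a**2 - Lam * a**4 / 3)

def Uc(a):
    return (3 * np.pi / 2)**2 * a**(2 + 6 * alpha) * (1 - Lam * a**2 / 3)

print(U(a_root), Uc(a_root))       # both vanish at a = a_0
```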
Now we are ready to discuss the analogue of Eq.$\left( \ref{WDW_1}\right) $ in the presence of a VSL distortion\footnote{Note that for the special case $\alpha=-2/3$, one finds
\begin{equation}
\left( -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+\left( \frac{3\pi c_{0}^{3}}{2G\hbar a_{0}^{-2}}\right) ^{2}\left( a^{-2}-\frac{\Lambda}{3}\right) \right) \Psi\left( a\right) =0,
\end{equation}
which means
\begin{equation}
\left( -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+\frac{K^{2}}{a^{2}}\right) \Psi\left( a\right) =\frac{\Lambda K^{2}}{3}\Psi\left( a\right) ,
\end{equation}
where we have defined $K=3\pi a_{0}^{2}/\left( 2l_{P}^{2}\right) $. This equation has an exact solution in the form of a superposition of Bessel functions $J_{\nu}\left( x\right) $ and $Y_{\nu}\left( x\right) $. However, to obtain eigenvalues one has to impose a large but finite boundary where the Bessel functions vanish.}. To this purpose, Eq.$\left( \ref{WDWg}\right) $ can be cast into the form
\begin{equation}
\frac{\int\mathcal{D}a\,a^{q}\Psi^{\ast}\left( a\right) \left[ -\frac{\partial^{2}}{\partial a^{2}}-\frac{q}{a}\frac{\partial}{\partial a}+\left( \frac{3\pi}{2l_{P}^{2}a_{0}^{3\alpha}}\right) ^{2}a^{2+6\alpha}\right] \Psi\left( a\right) }{\int\mathcal{D}a\,a^{q+4+6\alpha}\Psi^{\ast}\left( a\right) \Psi\left( a\right) }=3\Lambda\left( \frac{\pi}{2l_{P}^{2}a_{0}^{3\alpha}}\right) ^{2},\label{VeV}
\end{equation}
where we have defined $l_{P}=\sqrt{G\hbar/c_{0}^{3}}$. Because of the VSL distortion, the asymptotic behavior of the trial wave function must be different from that of the undistorted case of Eq.$\left( \ref{WDW_1}\right) $.
Since
\begin{equation}
\left\{
\begin{array}[c]{cc}
K_{\nu}\left( x\right) \rightarrow\sqrt{\pi/\left( 2x\right) }\exp\left( -x\right) & x\rightarrow\infty\\
& \\
\begin{array}[c]{c}
K_{0}\left( x\right) \rightarrow-\ln\left( x\right) \\
K_{\nu}\left( x\right) \rightarrow\Gamma\left( \nu\right) \left( x/2\right) ^{-\nu}/2
\end{array}
& x\rightarrow0
\end{array}
\right. ,
\end{equation}
we find it extremely useful to assume the following trial wave function
\begin{equation}
\Psi\left( a\right) ={a}^{-\frac{q+1}{2}}\left( \beta{a}\right) ^{-3\alpha}\exp\left( -\frac{\beta{a}^{4}}{2}\right) ,\label{Psib}
\end{equation}
which is a small variation of $\left( \ref{Psi}\right) $. The exponential encodes the large-$a$ behavior, while $\left( \beta{a}\right) ^{-3\alpha}$ encodes the small-scale-factor behavior, and $\beta$ is the variational parameter. Plugging $\left( \ref{Psib}\right) $ into Eq.$\left( \ref{VeV}\right) $, after an integration over the scale factor $a\left( t\right) $, we find
\begin{equation}
\Lambda_{q,\alpha}\left( \beta\right) =3\left( \frac{2l_{P}^{2}a_{0}^{3\alpha}}{3\pi}\right) ^{2}\left( C_{q}\left( \alpha\right) {\beta}^{\frac{3\left( 1+\alpha\right) }{2}}+\left( \frac{3\pi}{2l_{P}^{2}a_{0}^{3\alpha}}\right) ^{2}\sqrt{\beta\pi}\right) ,\label{LB}
\end{equation}
where
\begin{equation}
C_{q}\left( \alpha\right) =\frac{1}{4}\left( {q}^{2}-24\alpha-2\,q-7\right) \Gamma\left( -\frac{1+3{\alpha}}{2}\right) .
\end{equation}
We demand that
\begin{equation}
\frac{d\Lambda_{q,\alpha}\left( \beta\right) }{d\beta}=\frac{3}{2{\beta}^{\frac{1}{2}}}\left( \frac{2l_{P}^{2}a_{0}^{3\alpha}}{3\pi}\right) ^{2}\left( 3\left( 1+\alpha\right) C_{q}\left( \alpha\right) {\beta}^{\frac{2+3\alpha}{2}}+\left( \frac{3\pi}{2l_{P}^{2}a_{0}^{3\alpha}}\right) ^{2}\sqrt{\pi}\right) =0,\label{dLB}
\end{equation}
which is solved by
\begin{equation}
\bar{\beta}_{q}\left( \alpha\right) =\left( -\left( \frac{3\pi}{2l_{P}^{2}a_{0}^{3\alpha}}\right) ^{2}\frac{\sqrt{\pi}}{3\left( 1+\alpha\right) C_{q}\left( \alpha\right) }\right) ^{\frac{2}{2+3\alpha}},\label{betaSol}
\end{equation}
with the conditions $1+\alpha\neq0$ and $2+3\alpha\neq0$. Plugging $\bar{\beta}_{q}\left( \alpha\right) $ into $\Lambda_{q,\alpha}\left( \beta\right) $, one finds
\begin{equation}
\Lambda_{q,\alpha}\left( \bar{\beta}_{q}\right) =\bar{\beta}_{q}^{\frac{1}{2}}\left( \alpha\right) \sqrt{\pi}\frac{2+3{\alpha}}{1+\alpha}.\label{LB0}
\end{equation}
The result depends again on the VSL parameter $\alpha$ and on the reference scale $a_{0}$. To this purpose, we assume, without loss of generality, that $a_{0}=kl_{P}$.
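The stationary point $\bar{\beta}_{q}(\alpha)$ and the value of $\Lambda$ there follow from elementary calculus on a profile of the form $\Lambda(\beta)=P\,(C\beta^{3(1+\alpha)/2}+D\sqrt{\pi\beta})$, for which the stationary point obeys $\beta^{(2+3\alpha)/2}=-D\sqrt{\pi}/(3(1+\alpha)C)$ and $\Lambda(\bar{\beta})=P\,D\sqrt{\pi\bar{\beta}}\,(2+3\alpha)/(3(1+\alpha))$. The sketch below checks both relations numerically for the sample values $\alpha=-2$ and $C=D=P=1$ (an assumption of this illustration; here $C>0$ and $1+\alpha<0$, so $\bar{\beta}$ is real and positive):

```python
# Numerical sketch of the stationarity relations behind beta_bar and
# Lambda(beta_bar).  The sample values alpha = -2, C = D = P = 1 are
# assumptions of this illustration, not taken from the paper.
import numpy as np

alpha, C, D, P = -2.0, 1.0, 1.0, 1.0

def Lam(b):
    return P * (C * b**(3 * (1 + alpha) / 2) + D * np.sqrt(np.pi * b))

# claimed stationary point
beta_bar = (-D * np.sqrt(np.pi) / (3 * (1 + alpha) * C))**(2 / (2 + 3 * alpha))

# finite-difference derivative at beta_bar, and the closed-form value there
h = 1e-6
dLam = (Lam(beta_bar + h) - Lam(beta_bar - h)) / (2 * h)
closed = P * D * np.sqrt(np.pi * beta_bar) * (2 + 3 * alpha) / (3 * (1 + alpha))
print(dLam, Lam(beta_bar) - closed)   # both ~0
```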
Then one gets
\begin{equation}
\Lambda_{q}\left( \alpha\right) =l_{P}^{-2}\left( -\left( \frac{3\pi}{2k^{3\alpha}}\right) ^{2}\frac{\sqrt{\pi}}{3\left( 1+\alpha\right) C_{q}\left( \alpha\right) }\right) ^{\frac{1}{2+3\alpha}}\sqrt{\pi}\frac{2+3{\alpha}}{1+\alpha}.\label{Lq}
\end{equation}
To have one and only one solution, we look for a stationary point of $\Lambda_{q}\left( \alpha\right) $ and we impose that
\begin{equation}
\frac{d\Lambda_{q}\left( \alpha\right) }{d\alpha}=0\Longleftrightarrow k=A\left( q,\alpha\right) \exp\left( B\left( q,\alpha\right) \right) ,\label{LambdaA}
\end{equation}
where
\begin{equation}
A\left( q,\alpha\right) =\frac{\pi^{\frac{5}{8}}}{3^{\frac{1}{4}}}\left( \left( 1+\alpha\right) \Gamma\left( -\frac{1+3\alpha}{2}\right) \left( -{q}^{2}+24\alpha-2q-7\right) \right) ^{\frac{1}{4}}
\end{equation}
and
\begin{equation}
B\left( q,\alpha\right) =\frac{(3\alpha{q}^{2}-72{\alpha}^{2}+6\alpha q+2{q}^{2}-27\alpha+4q+14)\,\Psi\left( -\frac{1+3\alpha}{2}\right) +48\alpha+32}{8({q}^{2}-24\alpha+2q+7)},
\end{equation}
with $\Psi\left( x\right) $ here denoting the digamma function. The table below shows the result of the procedure $\left( \ref{LambdaA}\right) $ for some specific choices of $q$:
\begin{equation}
\left\{
\begin{tabular}[c]{|c|c|c|}\hline
$q=1$ & $k_{0}=0.5779378002$ & $\bar{\alpha}=-2.007150679$\\\hline
$q=0$ & $k_{0}=0.5843673484$ & $\bar{\alpha}=-1.988596177$\\\hline
$q=-1$ & $k_{0}=0.6030705325$ & $\bar{\alpha}=-1.940190188$\\\hline
\end{tabular}
\right. .
\end{equation}
\begin{figure}[h]
\centering\includegraphics[width=2.8in]{Lambdaq.eps}\caption{Plot of $\Lambda_{q}\left( \alpha\right) $ as a function of $\alpha$, depicted for $q=1$. The local minimum and the local maximum appear below the critical scale $k_{0}$. For $k=k_{0}$ there is only a stationary point, which disappears for $k>k_{0}$.
\label{Lambda}}
\end{figure}
As depicted in Fig.$\left( \ref{Lambda}\right) $, the pair $\left( k_{0},\bar{\alpha}\right) $ does not represent the solution of the problem, because the point is stationary and not a local minimum. Rather, we can interpret the pair $\left( k_{0},\bar{\alpha}\right) $ as a critical value below which a minimum and a maximum appear. In particular, as shown in Fig.$\left( \ref{Lambda}\right) $, for $k<k_{0}$ and $\alpha<\bar{\alpha}$ we have a minimum, and for $k<k_{0}$ and $\alpha>\bar{\alpha}$ we have a maximum. In the spirit of the variational procedure, only the minimum can be considered as the solution of the problem. Note that the lower the value of $k_{0}$, the higher the value of $\Lambda_{q}\left( \bar{\alpha}\right) $. Note also that the value of $k_{0}$ is transplanckian. From the expression of $\Lambda_{q}\left( \alpha\right) $, this is true when ${\alpha>0}$ or when ${\alpha<-2/3}$; otherwise, when ${-2/3<\alpha<0}$, the behavior in $k$ reverses. From the assumption $\left( \ref{c(t)}\right) $, we see that for ${\alpha>0}$ we have $c\left( t\right) \gg c_{0}$ when the scale factor $a\left( t\right) \gg a_{0}$, and this is ruled out by observation. Therefore, the correct range of solutions is ${\alpha<-2/3}$. Furthermore, to have also positive solutions we need ${\alpha<-1}$. The physical reason why we obtain solutions in the negative range of ${\alpha}$ is that in the early universe one expects to measure strong quantum effects when the scale factor $a\left( t\right) $ is very small. In this framework, this is realized with a speed of light which is very large compared to $c_{0}$. It is interesting to note that for $k>k_{0}$ there is no solution at all, while for $k\gg k_{0}$, $\Lambda_{q}\left( \alpha\right) \ll1$ in Planck units and $\alpha\in\left( \bar{\alpha}-\varepsilon,\bar{\alpha}\right) $ with $\varepsilon>0$ arbitrarily small.
\section{Conclusions} \label{p3}Motivated by the results obtained in different contexts of distorted gravity, and especially in the HL theory \cite{RemoHL}, in this paper we have examined the possibility that the cosmological constant can be considered as an eigenvalue of an appropriate Sturm-Liouville problem of a VSL theory. This interpretation is not new and it has been explored in different contexts \cite{GMLM,GMLM1,RemoHL}. What is different in this paper is that the WDW equation on a FLRW background in a mini-superspace approach reveals a complete analogy with a Sturm-Liouville problem, and the cosmological constant has the natural interpretation of its related eigenvalue. The WDW equation has been examined taking into account the factor-ordering ambiguity. We have probed ordinary gravity without matter fields with different families of trial wave functions, and we have found no sign of a cosmological constant induced by quantum fluctuations. Of course, we have not exhausted all the possible choices of the trial wave functions. However, the form we have adopted has been chosen using the standard criteria for a variational approach. Therefore, we can conjecture that in a mini-superspace approach without matter fields, a cosmological constant cannot be generated. The introduction of a VSL \begin{equation} c\left( t\right) =c_{0}\left( \frac{a\left( t\right) }{a_{0}}\right) ^{\alpha}\end{equation} makes the situation different because of the power law in the scale factor. This modification is also supported by the following alternative definition of the speed of light \begin{equation} c\left( E/E_{\mathrm{Pl}}\right) =\frac{dE}{dp}=c_{0}\frac{g_{2}\left( E/E_{\mathrm{Pl}}\right) }{g_{1}\left( E/E_{\mathrm{Pl}}\right) },\label{c(E)}\end{equation} which can be easily extracted if one introduces Gravity's Rainbow into the FLRW metric.
In this formulation, the space-time geometry is described by the deformed metric \begin{equation} ds^{2}=-\frac{N^{2}\left( t\right) }{g_{1}^{2}\left( E/E_{\mathrm{Pl}}\right) }dt^{2}+\frac{a^{2}\left( t\right) }{g_{2}^{2}\left( E/E_{\mathrm{Pl}}\right) }d\Omega_{3}^{2}~,\label{FLRWMod}\end{equation} where $g_{1}(E/E_{\mathrm{Pl}})$ and $g_{2}(E/E_{\mathrm{Pl}})$ are functions of the energy which incorporate the deformation of the metric. In the low-energy limit one requires \begin{equation} \lim_{E/E_{\mathrm{Pl}}\rightarrow0}g_{1}\left( E/E_{\mathrm{Pl}}\right) =1\qquad\mathrm{and}\qquad\lim_{E/E_{\mathrm{Pl}}\rightarrow0}g_{2}\left( E/E_{\mathrm{Pl}}\right) =1, \end{equation} so as to recover the usual background $\left( \ref{FRW}\right) $. Hence, $E$ quantifies the energy scale at which quantum gravity effects become apparent. For instance, one of these effects would be that the graviton distorts the background metric as we approach the Planck scale. In a distorted FLRW metric the dispersion relation for a massless graviton is \begin{equation} E^{2}g_{1}^{2}\left( \frac{E}{E_{P}}\right) =p^{2}g_{2}^{2}\left( \frac {E}{E_{P}}\right) , \end{equation} leading to $\left( \ref{c(E)}\right) $. Setting for example \begin{align} g_{1}\left( E/E_{\mathrm{Pl}}\right) & =1\nonumber\\ g_{2}\left( E/E_{\mathrm{Pl}}\right) & =1+\left( \frac{a\left( t\right) }{a_{0}}\right) ^{\alpha},\label{VSLb}\end{align} one obtains a different but equivalent form of the VSL. This formulation has the advantage of avoiding technical complications such as those in Ref. \cite{RGES}. The choice in $\left( \ref{VSLb}\right) $ appears to be also connected to the following potential \begin{equation} a^{4}\left( t\right) \left[ \frac{6}{a^{2}\left( t\right) }-\frac{96\pi Gb}{a^{4}\left( t\right) }-\frac{3456\pi^{2}G^{2}c}{a^{6}\left( t\right) }-2\Lambda\right] ,\label{HL}\end{equation} coming from a HL theory without the detailed balance condition.
In this kind of potential, one finds positive eigenvalues depending on the choices of the various coupling constants. However, the potential $\left( \ref{HL}\right) $ appears to be more flexible in producing positive eigenvalues. It is for this reason that the structure of the trial wave function used in this paper is more elaborate than the simple Gaussian function that has been used in a HL theory \cite{RemoHL}. The procedure of finding a minimum for $\Lambda\left( \beta\right) $ of Eq.$\left( \ref{LB}\right) $ has produced a result depending on two other parameters, the power $\alpha$ and the reference scale $k$. A further minimization procedure allows one to select one value compatible with the procedure; this value, however, does not constitute the final answer, rather it has been interpreted as a critical value below which we have eigenvalues and above which we have none. Note that the appearance of eigenvalues compatible with the procedure occurs in the transplanckian regime and for negative values of $\alpha$. Negative values of $\alpha$ have also been found in Ref. \cite{harko}, even if the authors discuss the \textquotedblleft Creation from Nothing\textquotedblright\ problem. Note that for Planckian and cisplanckian values of the scale $a_{0}$ the eigenvalue does not appear, and for larger scales, like the inflationary one, the whole expression in $\left( \ref{Lq}\right) $ becomes very small for every value of $\alpha<-1$. At this stage of the calculation, we do not know whether this behavior is simply a failure of the approach or whether further information can be extracted.
\section{Introduction} When atomic spectral lines coalesce into broad unresolved patterns due to physical broadening mechanisms (Stark effect, auto-ionization, etc.), they can be handled by so-called statistical methods (Bauche et al., 1988). Global characteristics - average energy, variance, asymmetry and sharpness - of level-energy, absorption or emission spectra can be useful for their analysis and the investigation of their regularities (Bauche \& Bauche-Arnoult, 1987 \& 1990). Systematic studies of these average characteristics for transition arrays and applications to the interpretation of experimental spectra of high-temperature plasmas were initiated in (Moszkowski 1962, Bauche et al., 1979). The elaboration of the general group-diagrammatic summation method (Ginocchio, 1973) and its realization in computer codes (Kucas \& Karazija, 1993 \& 1995, Kucas et al., 2005, Karazija \& Kucas, 2013) opened up new possibilities for the use of global properties in atomic spectroscopy. On the other hand, some transition arrays exhibit a small number of lines that must be taken into account individually. Those lines are important for plasma diagnostics and the interpretation of spectroscopy experiments, as well as for calculating the Rosseland mean $\kappa_R$, which plays a key role in radiation transport and is defined as \begin{equation} \frac{1}{\kappa_R}=\int_0^{\infty}\frac{1}{\kappa(h\nu)}\frac{\partial B_T(h\nu)}{\partial T}d\left(h\nu\right)\Bigg/\int_0^{\infty}\frac{\partial B_T(h\nu)}{\partial T}d\left(h\nu\right), \end{equation} \noindent $h\nu$ being the incident photon energy, $\kappa(h\nu)$ the opacity including stimulated emission, $T$ the temperature and $B_T(h\nu)$ Planck's distribution function. The Rosseland mean is very sensitive to the gaps between lines in the spectrum. These are the reasons why we developed the hybrid opacity code SCO-RCG (Porcherot et al., 2011), which combines statistical methods and fine-structure calculations, assuming local thermodynamic equilibrium.
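The Rosseland mean defined above can be evaluated numerically as a weighted harmonic mean of the opacity. The following sketch (ours, not part of SCO-RCG; function and variable names are illustrative) exploits the fact that all temperature-only factors of $\partial B_T/\partial T$ cancel between numerator and denominator:

```python
import numpy as np

def rosseland_mean(hnu, kappa, T):
    """Weighted harmonic mean of kappa(h nu), as in the definition of
    kappa_R above.

    hnu   : photon-energy grid with uniform spacing, same units as T
    kappa : opacity on that grid (e.g. cm^2/g)
    T     : temperature in energy units

    Up to factors depending on T only (which cancel between numerator
    and denominator), the Rosseland weight dB_T/dT is proportional to
    x^4 e^x / (e^x - 1)^2 with x = h nu / T.
    """
    x = hnu / T
    w = x**4 * np.exp(x) / np.expm1(x)**2
    # Uniform grid: the spacing d(h nu) cancels in the ratio of sums.
    return w.sum() / (w / kappa).sum()
```

For a grey (constant) opacity the weight drops out and $\kappa_R$ reduces to that constant, which provides an immediate sanity check of any implementation.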
The main features of the code are described in section \ref{sec1}. The extension to the hybrid approach of the Partially Resolved Transition Array (PRTA) model (Iglesias \& Sonnad, 2012), which enables one to replace many statistical transition arrays by small-scale DLA (Detailed Line Accounting) calculations, is presented in section \ref{sec2}, and comparisons with experimental spectra are shown and discussed in section \ref{sec3}. In section \ref{sec4}, an approximate modeling of Zeeman effect is proposed, together with a fast numerical algorithm for the convolution of a Lorentzian function with Gram-Charlier expansion series, based on a cubic-spline representation of the Gaussian. \section{\label{sec1} Description of the code and effect of detailed lines} In order to decide, for each transition array, whether a detailed treatment of lines is necessary or not, and to determine the validity of statistical methods, the SCO-RCG code uses criteria to quantify the porosity (localized absence of lines) of transition arrays. The main quantity involved in the decision process is the ratio between the individual line width and the average energy gap between two lines in a transition array. Data required for the calculation of lines (Slater, spin-orbit and dipolar integrals) are provided by the SCO (Superconfiguration Code for Opacity) code (Blenski et al., 2000), which takes into account plasma screening and density effects on the wave-functions. Then, level energies and lines are calculated by an adapted routine (RCG) of Cowan's atomic-structure code (Cowan, 1981) performing the diagonalization of the Hamiltonian matrix. Transition arrays for which a DLA treatment is not required or not possible are described statistically, by the UTA (Unresolved Transition Array, Bauche-Arnoult et al., 1979), SOSA (Spin-Orbit Split Array, Bauche-Arnoult et al., 1985) or STA (Super Transition Array, Bar-Shalom et al., 1989) formalisms used in SCO.
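The porosity criterion described above can be sketched as follows (a minimal illustration under our own assumptions; the actual SCO-RCG criteria are more refined):

```python
def needs_detailed_treatment(line_width, array_span, n_lines, threshold=1.0):
    """Decide whether a transition array should be detailed line by line.

    Compares the individual line width with the average energy gap
    between adjacent lines, array_span / (n_lines - 1): when lines are
    narrower than their mean spacing, the array is 'porous' and a purely
    statistical shape would wash out real gaps. Illustrative sketch only;
    the threshold value is an assumption of ours.
    """
    mean_gap = array_span / max(n_lines - 1, 1)
    return line_width / mean_gap < threshold
```

For instance, 11 lines of width 0.1 eV spread over 10 eV are well separated (ratio 0.1), whereas 101 lines of width 5 eV over the same span coalesce (ratio 50) and can safely be treated statistically.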
In SCO-RCG, the orbitals are treated individually up to a certain limit, consistent with the Inglis-Teller limit (Inglis \& Teller, 1939), beyond which they are gathered in a single super-shell. The grouped orbitals are chosen so that they interact weakly with the inner orbitals (this is why we sometimes name that super-shell the ``Rydberg'' super-shell). The total opacity is the sum of the photo-ionization, inverse Bremsstrahlung and Thomson scattering spectra calculated by the SCO code and of a photo-excitation spectrum of the form \begin{equation} \kappa\left(h\nu\right)=\frac{1}{4\pi\epsilon_0}\frac{\mathcal{N}}{A}\frac{\pi e^2h}{mc}\sum_{X\rightarrow X'}f_{X\rightarrow X'}\mathcal{P}_X\Psi_{X\rightarrow X'}(h\nu), \end{equation} \noindent where $h$ is Planck's constant, $\mathcal{N}$ the Avogadro number, $\epsilon_0$ the vacuum permittivity, $m$ the electron mass, $A$ the molar mass and $c$ the speed of light. $\mathcal{P}$ is a probability, $f$ an oscillator strength, $\Psi(h\nu)$ a profile, and the sum $X\rightarrow X'$ runs over the lines, UTAs, SOSAs or STAs of all ion charge states present in the plasma. Special care is taken to calculate appropriately the probability of $X$ (which can be either a level $\alpha J$, a configuration $C$ or a superconfiguration $S$), because it can be the starting point for different transitions (DLA, UTA, SOSA, STA). In order to ensure the normalization of probabilities, we introduce three disjoint ensembles: $\mathcal{D}$ (detailed levels $\alpha J$), $\mathcal{C}$ (configurations $C$ too complex to be detailed) and $\mathcal{S}$ (superconfigurations $S$ that do not reduce to ordinary configurations).
The total partition function then reads \begin{equation} U_{\mathrm{tot}}=U\left(\mathcal{D}\right)+U\left(\mathcal{C}\right)+U\left(\mathcal{S}\right)\;\;\;\;\mathrm{with}\;\;\;\;\mathcal{D}\cap\mathcal{C}=\mathcal{D}\cap\mathcal{S}=\mathcal{C}\cap\mathcal{S}=\emptyset, \end{equation} \noindent where each term is a trace over quantum states of the form Tr$\left[e^{-\beta\left(\hat{H}-\mu\hat{N}\right)}\right]$, where $\hat{H}$ is the Hamiltonian, $\hat{N}$ the number operator, $\mu$ the chemical potential and $\beta=1/\left(k_BT\right)$. The probabilities of the different species of the $N$-electron ion are \begin{equation}\label{probs} \mathcal{P}_{\alpha J}=\frac{1}{U_{\mathrm{tot}}}\left(2J+1\right)e^{-\beta\left(E_{\alpha J}-\mu N\right)}, \end{equation} \noindent for a level belonging to $\mathcal{D}$, \begin{equation} \mathcal{P}_C=\frac{1}{U_{\mathrm{tot}}}\sum_{\alpha J\in C}\left(2J+1\right)e^{-\beta\left(E_{\alpha J}-\mu N\right)}, \end{equation} \noindent for a configuration that can be detailed, \begin{equation} \mathcal{P}_C=\frac{1}{U_{\mathrm{tot}}}g_C~e^{-\beta\left(E_C-\mu N\right)} \end{equation} \noindent for a configuration that cannot be detailed (\emph{i.e.} belonging to $\mathcal{C}$) and \begin{equation} \mathcal{P}_S=\frac{1}{U_{\mathrm{tot}}}\sum_{C\in S}g_C~e^{-\beta\left(E_C-\mu N\right)} \end{equation} \noindent for a superconfiguration. We can see in Fig. \ref{DLA_dilue} that fine-structure calculations can have a strong impact on the Rosseland mean. The physical broadening mechanisms are the same for both calculations (statistical: SCO and detailed: SCO-RCG). The modelling of the (impact) collisional broadening relies on the Baranger formulation (Baranger, 1958) and on expressions provided by Dimitrijevic and Konjevic (Dimitrijevic \& Konjevic, 1987), corrected by inelastic Gaunt factors similar, for high energies, to the ones proposed by Griem (1962 \& 1968).
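Returning to the probabilities $\mathcal{P}_{\alpha J}$, $\mathcal{P}_C$ and $\mathcal{P}_S$ above: since the three disjoint ensembles share the single partition function $U_{\mathrm{tot}}$, all probabilities sum to one by construction, as the following sketch (with an illustrative data layout of ours) makes explicit:

```python
import numpy as np

def boltzmann_weight(g, E, N, beta, mu):
    """Statistical weight g exp(-beta (E - mu N))."""
    return g * np.exp(-beta * (E - mu * N))

def species_probabilities(beta, mu, levels, configs, superconfigs):
    """Probabilities of detailed levels (ensemble D), coarse
    configurations (ensemble C) and superconfigurations (ensemble S).

    levels       : list of (2J+1, E_alphaJ, N) tuples
    configs      : list of (g_C, E_C, N) tuples
    superconfigs : list of lists of (g_C, E_C, N) tuples
    All species share the single partition function U_tot, so the
    returned probabilities sum to one.
    """
    wD = [boltzmann_weight(*lv, beta, mu) for lv in levels]
    wC = [boltzmann_weight(*c, beta, mu) for c in configs]
    wS = [sum(boltzmann_weight(*c, beta, mu) for c in S) for S in superconfigs]
    U_tot = sum(wD) + sum(wC) + sum(wS)
    return ([w / U_tot for w in wD],
            [w / U_tot for w in wC],
            [w / U_tot for w in wS])
```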
Ionic Stark effect is treated in the quasi-static approximation following an approach proposed by Rozsnyai (1977), corrected in order to reproduce the exact second-order moment of the electric micro-field distribution in the framework of the OCP (One Component Plasma) model (Iglesias et al., 1983). The code can be useful for astrophysical applications (Gilles et al., 2011, Turck-Chi\`eze et al., 2011). Figures \ref{Fe_BCZ_100_cut} and \ref{Fe_BCZ_800000_cut} represent the different contributions to the opacity (DLA, statistical and PRTA) for an iron plasma in conditions corresponding to the boundary of the convective zone of the Sun, with a maximum imposed number $N_{\mathrm{max}}$ of detailed lines per transition array equal to 100 and 800000 respectively, and Fig. \ref{Fe_BCZ_hybrid+sco_cut} compares the resulting spectrum with the full-statistical one. As expected, when $N_{\mathrm{max}}$ increases, the statistical part becomes smaller. In Fig. \ref{Fe_BCZ_100_cut}, the detailed part is clearly not large enough compared to the statistical part around $h\nu$=850 eV. As shown in figures \ref{fer_gilles_15}, \ref{fer_gilles_27} and \ref{fer_gilles_38}, the statistical calculation (SCO) may depart significantly from the detailed one (SCO-RCG), and the differences are essentially the signature of the porosity of transition arrays. The density being quite low ($\rho$=4$\times$10$^{-3}$ g.cm$^{-3}$), the lines emerge clearly in the spectrum. These conditions are accessible to laser spectroscopy experiments (see Sec. \ref{sec3}) and we can see that the opacity changes notably with temperature. Therefore, even if the quantity which is measured in absorption point-projection-spectroscopy experiments is not the opacity itself but the transmission (see Sec. \ref{sec3}), one may expect to have a reliable idea of the plasma temperature during the measurement.
\begin{figure} \vspace{10mm} \begin{center} \includegraphics[width=10cm]{fig1.eps} \end{center} \vspace{5mm} \caption{(Color online) Comparison between the SCO-RCG and full-statistical (SCO) calculations for an iron plasma at $T$=50 eV and $\rho$=10$^{-3}$ g/cm$^3$. The Rosseland mean is equal to 1691 cm$^2$/g for the SCO calculation and to 1261 cm$^2$/g for SCO-RCG.\label{DLA_dilue}} \end{figure} \begin{figure} \vspace{10mm} \begin{center} \includegraphics[width=10cm]{fig2.eps} \end{center} \vspace{5mm} \caption{(Color online) Different contributions to opacity calculated by SCO-RCG code for an iron plasma at $T$=193 eV and $\rho$=0.58 g.cm$^{-3}$ (boundary of the convective zone of the Sun). The maximum number of lines potentially detailed per transition array is chosen equal to 100.\label{Fe_BCZ_100_cut}} \end{figure} \begin{figure} \vspace{5mm} \begin{center} \includegraphics[width=10cm]{fig3.eps} \end{center} \vspace{5mm} \caption{(Color online) Different contributions to opacity calculated by SCO-RCG code for an iron plasma at $T$=193 eV and $\rho$=0.58 g.cm$^{-3}$ (boundary of the convective zone of the Sun). The maximum number of lines potentially detailed per transition array is chosen equal to 800000.\label{Fe_BCZ_800000_cut}} \end{figure} \begin{figure} \vspace{10mm} \begin{center} \includegraphics[width=10cm]{fig4.eps} \end{center} \vspace{5mm} \caption{(Color online) Comparison between the full-statistical (SCO) spectrum and SCO-RCG around the maximum of the opacity bump in the conditions of Fig. \ref{Fe_BCZ_100_cut} (boundary of the convective zone of the Sun). The maximum number of lines potentially detailed per transition array is chosen equal to 800000.\label{Fe_BCZ_hybrid+sco_cut}} \end{figure} \begin{figure} \vspace{10mm} \begin{center} \includegraphics[width=10cm]{fig5.eps} \end{center} \vspace{5mm} \caption{(Color online) Iron opacity at $T$=15 eV and $\rho$=4$\times$10$^{-3}$ g.cm$^{-3}$.
Comparison between the full-statistical (SCO) and SCO-RCG calculations. The maximum number of lines potentially detailed per transition array is chosen equal to 800000.\label{fer_gilles_15}} \end{figure} \begin{figure} \vspace{10mm} \begin{center} \includegraphics[width=10cm]{fig6.eps} \end{center} \vspace{5mm} \caption{(Color online) Iron opacity at $T$=27 eV and $\rho$=4$\times$10$^{-3}$ g.cm$^{-3}$. Comparison between the full-statistical (SCO) and SCO-RCG calculations. The maximum number of lines potentially detailed per transition array is chosen equal to 800000.\label{fer_gilles_27}} \end{figure} \begin{figure} \vspace{10mm} \begin{center} \includegraphics[width=10cm]{fig7.eps} \end{center} \vspace{5mm} \caption{(Color online) Iron opacity at $T$=38 eV and $\rho$=4$\times$10$^{-3}$ g.cm$^{-3}$. Comparison between the full-statistical (SCO) and SCO-RCG calculations. The maximum number of lines potentially detailed per transition array is chosen equal to 800000.\label{fer_gilles_38}} \end{figure} \begin{figure} \vspace{5mm} \begin{center} \includegraphics[width=10cm]{fig8.eps} \end{center} \vspace{5mm} \caption{(Color online) Comparison between two SCO-RCG calculations relying respectively on DLA and PRTA treatments of lines for transition arrays $3p_{3/2}\rightarrow 5s$ in a Hg plasma at $T$=600 eV and $\rho$=0.01 g/cm$^3$.
The DLA calculation contains 102 675 lines and the PRTA 26 903 lines.\label{Hg_prta3.1_bis}} \end{figure} \begin{figure} \vspace{5mm} \begin{center} \includegraphics[width=10cm]{fig9.eps} \end{center} \vspace{5mm} \caption{(Color online) The three independent contributions to photo-excitation calculated by SCO-RCG code for an iron plasma at $T$=193 eV and $\rho$=0.58 g.cm$^{-3}$ (boundary of the convective zone of the Sun).\label{figure2}} \end{figure} \begin{figure} \vspace{5mm} \begin{center} \includegraphics[width=10cm]{fig10.eps} \end{center} \vspace{5mm} \caption{(Color online) Number of detailed transition arrays (DLA), partially resolved transition arrays (PRTA) and unresolved transition arrays (UTA) for different values of the imposed maximum number of detailed lines: 10$^2$, 10$^3$, 10$^4$ and $10^6$. For each case, two histograms are displayed: in the first one, the detailed calculations are only pure DLA, and in the second one they can be either DLA or PRTA.\label{dla-prta_2}} \end{figure} \section{\label{sec2} Adaptation of the PRTA model to the hybrid approach} In order to complement DLA efforts, the code was recently improved (Pain et al., 2013a \& 2015) with the PRTA (Partially Resolved Transition Array) model (Iglesias \& Sonnad, 2012), which may replace the single feature of a UTA by a small-scale detailed transition array that conserves the known transition-array properties (energy and variance) and yields improved higher-order moments. In the PRTA approach, open subshells are split into two groups. The main group includes the active electrons and those electrons that couple strongly with the active ones. The other subshells are relegated to the secondary group. A small-scale DLA calculation is performed for the main group (assuming therefore that the subshells in the secondary group are closed) and a statistical approach for the secondary group assigns the missing UTA variance to the lines.
In the case where the transition $C\rightarrow C'$ is a UTA that can be replaced by a PRTA (see Fig. \ref{Hg_prta3.1_bis}), its contribution to the opacity is modified according to \begin{equation} f_{C\rightarrow C'}~\mathcal{P}_C~\Psi_{C\rightarrow C'}(h\nu)\approx\sum_{\bar{\alpha}\bar{J}\rightarrow\bar{\alpha'}\bar{J'}}f_{\bar{\alpha}\bar{J}\rightarrow\bar{\alpha'}\bar{J'}}~\mathcal{P}_{\bar{\alpha}\bar{J}}~\Psi_{\bar{\alpha}\bar{J}\rightarrow\bar{\alpha'}\bar{J'}}(h\nu), \end{equation} \noindent where the sum runs over PRTA lines $\bar{\alpha}\bar{J}\rightarrow\bar{\alpha'}\bar{J'}$ between pseudo-levels of the reduced configurations, $f_{\bar{\alpha}\bar{J}\rightarrow\bar{\alpha'}\bar{J'}}$ is the corresponding oscillator strength and $\Psi_{\bar{\alpha}\bar{J}\rightarrow\bar{\alpha'}\bar{J'}}$ is the line profile augmented with the statistical width due to the other (non-included) spectator subshells. The probability of the pseudo-level $\bar{\alpha}\bar{J}$ of configuration $\bar{C}$ reads \begin{equation} \mathcal{P}_{\bar{\alpha}\bar{J}}=\frac{\left(2\bar{J}+1\right)e^{-\beta\left(E_{\bar{\alpha}\bar{J}}-\mu N\right)}}{\sum_{\bar{\alpha}\bar{J}\in\bar{C}}\left(2\bar{J}+1\right)e^{-\beta \left(E_{\bar{\alpha}\bar{J}}-\mu N\right)}}\times\mathcal{P}_C, \end{equation} \noindent which ensures that $\sum_{\bar{\alpha}\bar{J}\in\bar{C}}\mathcal{P}_{\bar{\alpha}\bar{J}}=\mathcal{P}_C$, where $\mathcal{P}_C$ is the probability of the genuine configuration given in Eq. (\ref{probs}). Figure \ref{figure2} represents the different contributions to opacity (DLA, statistical and PRTA) for an iron plasma in conditions corresponding to the boundary of the convective zone of the Sun. We can see that the PRTA contribution is here of the same order of magnitude as the statistical one. The calculation was performed with an imposed maximum of 10 000 detailed lines per transition array. We can see in Fig.
\ref{dla-prta_2} that for each value of the maximum number of lines that can be detailed ($N_{\mathrm{max}}$), some UTAs are replaced by PRTA transition arrays. Of course, the number of remaining UTAs decreases as $N_{\mathrm{max}}$ increases. \section{\label{sec3} Interpretation of experimental spectra} The SCO-RCG code has been successfully compared to several absorption and emission experimental spectra, measured in experiments at several laser (Fig. \ref{figure5}) or Z-pinch facilities (Fig. \ref{figure4}). The comparisons show the relevance of the hybrid model and the necessity to carry out detailed calculations instead of fully statistical ones. As mentioned in Sec. \ref{sec1}, the quantity which is measured experimentally is the transmission, related to the opacity by the Beer-Lambert-Bouguer law: \begin{equation}\label{blb} T(h\nu)=e^{-\rho L \kappa(h\nu)}, \end{equation} \noindent where $L$ is the thickness of the sample. The relation (\ref{blb}) between transmission and opacity is valid under the assumption that the material is optically thin and that re-absorption processes can be neglected. In SCO-RCG, configuration interaction is limited to the electrostatic interaction between relativistic sub-configurations ($n\ell j$ orbitals) belonging to a non-relativistic configuration ($n\ell$ orbitals), namely ``relativistic configuration interaction''. That effect has a strong impact on the ratio of the two relativistic substructures of the $2p\rightarrow 3d$ transition in Fig. \ref{figure5}. \begin{figure} \vspace{5mm} \begin{center} \includegraphics[width=10cm]{fig11.eps} \end{center} \vspace{5mm} \caption{(Color online) Interpretation with the SCO-RCG code of the copper spectrum ($2p\rightarrow 3d$ transitions) measured by Loisel et al. (Loisel et al., 2009, Blenski et al., 2011a \& 2011b).
The temperature is $T$=16 eV and the density $\rho$=5$\times$10$^{-3}$ g.cm$^{-3}$.\label{figure5}} \end{figure} \begin{figure} \vspace{5mm} \begin{center} \includegraphics[width=14cm]{fig12.eps} \end{center} \vspace{5mm} \caption{(Color online) Interpretation with the SCO-RCG code of the iron spectrum ($2p\rightarrow 3d$ transitions) measured by Bailey et al. (2009). The temperature is $T$=150 eV and the density $\rho$=0.058 g.cm$^{-3}$.\label{figure4}} \end{figure} \clearpage \section{\label{sec4} Statistical modeling of Zeeman effect} \subsection{Determination of the moments} \noindent Quantifying the impact of a magnetic field on spectral line shapes is important in astrophysics, in inertial confinement fusion (ICF) or for Z-pinch experiments. Because the line computation becomes even more tedious in that case, we propose, in order to avoid the diagonalization of the Zeeman Hamiltonian, to describe Zeeman patterns in a statistical way. This is also justified by the fact that in a hot plasma the number of lines is huge, and therefore the number of Zeeman transitions, arising from the splitting of spectral lines, is even greater, which makes the coalescence of the spectral features more important. Due to the other physical broadening mechanisms, the Zeeman components cannot be resolved (Doron et al., 2014). In the presence of a magnetic field $B$, a level $\alpha J_1$ (energy $E_1$) splits into $2J_1+1$ states $M_1$ ($-J_1\leq M_1\leq J_1$) of energy $E_1+\mu_BBg_1M_1$, $\mu_B$ being the Bohr magneton and $g_1$ the Land\'e factor in intermediate coupling (provided by the RCG routine). Each line splits into three components associated with the selection rule $\Delta M$=$q$, where $q$=0 for a $\pi$ component and $\pm 1$ for a $\sigma_{\pm}$ component. The intensity of a component can be characterized by the strength-weighted moments of the energy distribution.
The $n^{\mathrm{th}}$-order moment reads \begin{equation} \mu_n\left[q\right]=3\sum_{M_1,M_2}\threejm{J_1}{1}{J_2}{-M_1}{-q}{M_2}^2\left(E_2-E_1+\mu_BB\left[g_2M_2-g_1M_1\right]\right)^n, \end{equation} \noindent which can be evaluated analytically (Pain \& Gilleron, 2012a \& 2012b), using graphical representation of Racah algebra or Bernoulli polynomials (Mathys \& Stenflo, 1987). \begin{figure}[ht] \vspace{17mm} \begin{center} \includegraphics[width=10cm]{fig13.eps} \end{center} \vspace{7mm} \caption{Effect of a 1 MG magnetic field on the triplet transition $1s2s~^3S\rightarrow 1s2p~^3P$ of the carbon ion C$^{4+}$ with a convolution width (FWHM) of 0.005 eV. The observation angle $\theta$ is such that $\cos^2(\theta)=1/3$.\label{C}} \end{figure} \begin{figure} \vspace{7mm} \begin{center} \includegraphics[width=10cm]{fig14.eps} \end{center} \vspace{7mm} \caption{(Color online) Convolution of Gram-Charlier expansion series with a Lorentzian. The parameters are $a$=0.1, $\sigma$=1, $\alpha_3$=1 and $\alpha_4$=5.
The present approach (red curve) relying on a cubic-spline representation of the Gaussian and the direct numerical convolution (dashed black curve) are superimposed.\label{gcv_log}} \end{figure} \begin{table*}[t] \begin{center} \begin{tabular}{|c|c|c|c|}\hline \multicolumn{2}{|c|}{} & $J_2=J_1$ & $J_2=J_1\pm 1$ \\\hline $\sigma_q$ & $\alpha_3$ & \multicolumn{2}{c|}{$(-1)^qq\left(J_1-J_2\right)\mathrm{sgn}\left[g_1-g_2\right]\frac{2\sqrt{5}}{3\sqrt{3}}\frac{J_>}{\sqrt{J_<\left(J_>+1\right)}}$} \\\cline{2-4} & $\alpha_4$ & $\frac{5}{7}\left(\frac{12J_1\left(J_1+1\right)-17}{4J_1\left(J_1+1\right)-3}\right)$ & $\frac{5}{21}\left(13-\frac{4}{J_<\left(J_>+1\right)}\right)$ \\ \hline $\pi$ & $\alpha_3$ & \multicolumn{2}{c|}{0} \\\cline{2-4} & $\alpha_4$ & $\frac{25}{7}\left(\frac{3\left[\left(J_1+2\right)J_1^2-1\right]J_1+1}{\left[1-3J_1\left(J_1+1\right)\right]^2}\right)$ & $\frac{5}{7}\left(3-\frac{2}{J_<\left(J_>+1\right)}\right)$ \\\hline \end{tabular} \caption{Values of $\alpha_3$ and $\alpha_4$ of the Zeeman components. $J_<=\min(J_1,J_2)$ and $J_>=\max(J_1,J_2)$. $\mathrm{sgn}\left[x\right]$ is the sign of $x$.} \label{tab:a} \end{center} \end{table*} \begin{table*}[t] \begin{center} \begin{tabular}{|c|c|}\hline $a_k$ & $e^{-(k+1)^2/2}\left[k^2(k+2)^2+e^{k+1/2}\left(k^2-1\right)^2\right]$ \\ \hline $b_k$ & $e^{-(k+1)^2/2}k(k+1)\left[8+3k+e^{k+1/2}(3k-5)\right]$ \\\hline $c_k$ & $e^{-(k+1)^2/2}\left[4+10k+3k^2+e^{k+1/2}\left(3k^2-4k-3\right)\right]$ \\\hline $d_k$ & $-e^{-(k+1)^2/2}\left[3+e^{k+1/2}(k-2)+k\right]$ \\\hline \end{tabular} \caption{Expression of the coefficients $a_k$, $b_k$, $c_k$ and $d_k$ involved in the cubic-spline representation of the Gaussian (Eq. 
(\ref{cubic})) for $k\le u\le k+1$.}\label{tab:c} \end{center} \end{table*} \begin{table*}[t] \begin{center} \begin{tabular}{|c|c|}\hline $i$ & $\gamma_{p,i}$ \\ \hline \hline 0 & $\left[1+\frac{\alpha_4-3}{8}\right]a_p$ \\\hline 1 & $-\frac{\alpha_3}{2}a_p+\left[1+\frac{\left(\alpha_4-3\right)}{8}\right]b_p$ \\\hline 2 & $-\frac{\left(\alpha_4-3\right)}{4}a_p-\frac{\alpha_3}{2}b_p+\left[1+\frac{\left(\alpha_4-3\right)}{8}\right]c_p$ \\\hline 3 & $\frac{\alpha_3}{6}a_p-\frac{\left(\alpha_4-3\right)}{4}b_p-\frac{\alpha_3}{2}c_p+\left[1+\frac{\left(\alpha_4-3\right)}{8}\right]d_p$ \\\hline 4 & $\frac{\left(\alpha_4-3\right)}{24}a_p+\frac{\alpha_3}{6}b_p-\frac{\left(\alpha_4-3\right)}{4}c_p-\frac{\alpha_3}{2}d_p$ \\\hline 5 & $\frac{\left(\alpha_4-3\right)}{24}b_p+\frac{\alpha_3}{6}c_p-\frac{\left(\alpha_4-3\right)}{4}d_p$ \\\hline 6 & $\frac{\left(\alpha_4-3\right)}{24}c_p+\frac{\alpha_3}{6}d_p$ \\\hline 7 & $\frac{\left(\alpha_4-3\right)}{24}d_p$ \\\hline \end{tabular} \caption{Coefficients $\gamma_{p,i}$ involved in Eq. (\ref{gam}).}\label{tab:b} \end{center} \end{table*} \subsection{Gram-Charlier distribution} Gram-Charlier expansions are useful to model densities which are deviations from the normal one. The expansion is named after the Danish mathematician Jorgen P. Gram (1850-1916) and the Swedish astronomer Carl V. L. Charlier (1862-1934). Historical accounts of the origin of the Gram-Charlier expansion are given in Hald (2000) and Davies (2005). This expansion, which finds applications in many areas including finance (Jondeau \& Rockinger, 2001), analytical chemistry (Di Marco \& Bombi, 2001), spectroscopy (O'Brien, 1992) and astrophysics and cosmology (Blinnikov, 1998), reads \begin{equation} GC(u)=\frac{1}{\sqrt{2\pi v}}e^{-u^2/2}\left[\sum_{k=0}^{\infty}c_k\he_k\left(\frac{u}{\sqrt{2}}\right)2^{-k/2}\right], \end{equation} \noindent where $u=\left(h\nu-\mu_1\right)/\sqrt{v}$, $v=\mu_2-\left(\mu_1\right)^2$ being the variance.
The polynomials $\he_k(x)$ can be expressed as \begin{equation} \he_k(x)=\frac{1}{2^{k/2}}\h_k\left(\frac{x}{\sqrt{2}}\right), \end{equation} \noindent where $\h_k(x)$ are the usual Hermite polynomials obeying the recurrence relation (Szego, 1939): \begin{equation} \h_{k+1}(x)=2x\h_k(x)-2k\h_{k-1}(x), \end{equation} \noindent initialized with $\h_0(x)$=1 and $\h_1(x)=2x$. The coefficients $c_k$ are given by \begin{equation} c_k=\sum_{j=0}^{\left[k/2\right]}\frac{(-1)^j}{j!(k-2j)!2^j}\alpha_{k-2j}, \end{equation} \noindent where $[.]$ denotes the integer part and $\alpha_k$ is the dimensionless centered $k^{\mathrm{th}}$-order moment of the distribution \begin{equation} \alpha_k=\left(\sum_{p=0}^k\bin{k}{p}\mu_p\left(-\mu_1\right)^{k-p}\right)/v^{k/2}. \end{equation} \noindent A good representation of the Zeeman profile is obtained using, for each component, the fourth-order Gram-Charlier expansion series: \begin{equation}\label{gc4} \Psi_Z(u)=\frac{1}{\sqrt{2\pi v}}\exp\left(-\frac{u^2}{2}\right)\left[1-\frac{\alpha_3}{2}\left(u-\frac{u^3}{3}\right)+\frac{\left(\alpha_4-3\right)}{24}\left(3-6u^2+u^4\right)\right], \end{equation} \noindent where $\alpha_3$ (skewness) and $\alpha_4$ (kurtosis) quantify respectively the asymmetry and the sharpness of the component (see Table \ref{tab:a}) (Kendall \& Stuart, 1969). This approximate method was shown to provide quite a good description (see Fig. \ref{C}) of the effect of a strong magnetic field on spectral lines (Pain \& Gilleron, 2012a \& 2012b). The contribution of a magnetic field to a UTA can be taken into account roughly by adding a contribution $2/3\;(\mu_BB)^2\approx 3.35\times 10^{-5}$ [$B$(MG)]$^2$ eV$^2$ to the statistical variance. When all the other broadening mechanisms (statistical, Doppler, ionic Stark) are described by a Gaussian, the resulting profile (convolution of a Gaussian with a Gram-Charlier function) remains a Gram-Charlier function with modified moments.
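The fourth-order profile (\ref{gc4}) is straightforward to evaluate; the sketch below (ours, written with $v=1$, i.e. in the reduced variable $u$) also makes explicit that the $\alpha_3$ and $\alpha_4$ corrections preserve the norm, the mean and the variance of the Gaussian:

```python
import numpy as np

def gc4_profile(u, alpha3, alpha4):
    """Fourth-order Gram-Charlier profile with v = 1 (reduced variable u).

    The alpha_3 (skewness) and alpha_4 (kurtosis) corrections leave the
    norm, the mean and the variance of the Gaussian unchanged, while the
    third reduced moment of the profile equals alpha_3.
    """
    gauss = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    corr = (1.0
            - 0.5 * alpha3 * (u - u**3 / 3.0)
            + (alpha4 - 3.0) / 24.0 * (3.0 - 6.0 * u**2 + u**4))
    return gauss * corr
```

For $\alpha_3=0$ and $\alpha_4=3$ the correction factor is identically one and the pure Gaussian is recovered.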
However, electron collisional broadening is usually modeled by a Lorentzian function \begin{equation} L(h\nu,a)=\frac{a}{\pi}\frac{1}{a^2+\left(h\nu\right)^2}, \end{equation} \noindent as is the natural width. The convolution of a Gaussian by a Lorentzian leads to a Voigt profile (Voigt, 1912, Matveev, 1972 \& 1981, Ida et al., 2000), but in the presence of a magnetic field the problem is more complicated, since the numerical cost of the direct numerical convolution of a Lorentzian with a Gram-Charlier function is prohibitive, due to the huge number of lines involved in the computation. The convolution reads \begin{equation}\label{conv} C(t)=\left(GC\otimes L\right)(t)=\int_{-\infty}^{\infty}GC\left(u\right)L(t-u,\lambda)du \end{equation} \noindent where $t=h\nu/\sigma$, $\sigma=\sqrt{v}$ being the standard deviation of the distribution and $\lambda=a/\sigma$. \subsection{Convolution of a Lorentzian function with Gram-Charlier expansion series} The convolution product (\ref{conv}) requires the evaluation of a cumbersome integral which reads \begin{equation}\label{dep} C(t)=\frac{1}{\pi\sqrt{2\pi}}\frac{\lambda}{\sigma}\int_{-\infty}^{\infty}\frac{e^{-u^2/2}}{\lambda^2+(t-u)^2}\times\left[\sum_{k=0}^{\infty}c_k\he_k\left(\frac{u}{\sqrt{2}}\right)2^{-k/2}\right]du. \end{equation} \noindent In order to speed up the calculation, the Gaussian is sampled at the points $u=-m,-m+1,\cdots,0,\cdots,m-1,m$ (in practice we use $m$=6) and interpolated using cubic splines (de Boor, 1978) on each interval $\left[k,k+1\right]$ by the formula \begin{equation}\label{cubic} e^{-u^2/2}=a_k+b_k~u+c_k~u^2+d_k~u^3. \end{equation} \noindent The coefficients $a_k$, $b_k$, $c_k$ and $d_k$ in the interval $\left[k,k+1\right]$ are determined by the continuity of the function and its derivative at the points $u=k$ and $u=k+1$. The resulting expressions are given in Table \ref{tab:c}. The Gaussian is assumed to be zero for $|u|>m$. Limiting the Gram-Charlier expansion series to fourth order (see Eq.
(\ref{gc4})), one now has to deal with the convolution of a Lorentzian by a polynomial of order 7. This can be written\footnote{In the general case, the upper bound of the sum over $k$ is equal to $N$+3, where $N$ is the order of the Gram-Charlier expansion series.} \begin{equation}\label{gam} C(t)=\frac{1}{\pi\sqrt{2\pi}\sigma}\sum_{p=-m}^{m-1}\sum_{k=0}^7\gamma_{p,k}~S_{p,k}(t), \end{equation} \noindent where the coefficients $\gamma_{p,k}$ of the polynomial are given in Table \ref{tab:b}, and \begin{eqnarray} S_{p,k}(t)&=&\int_{p}^{p+1}\frac{u^k}{\lambda^2+\left(t-u\right)^2}du\nonumber\\ &=&\frac{1}{\lambda}\sum_{\ell=0}^k\bin{k}{\ell}\lambda^{\ell}t^{k-\ell}\left[\phi_{\ell}\left(\tfrac{p+1-t}{\lambda}\right)-\phi_{\ell}\left(\tfrac{p-t}{\lambda}\right)\right]. \end{eqnarray} \noindent The function $\phi_{\ell}(w)$ is equal to \begin{equation} \phi_{\ell}(w)=\frac{w^{\ell+1}}{\ell+1}~_2F_1 \left(\begin{array}{l} 1,\frac{\ell+1}{2}\\ \frac{\ell+3}{2} \end{array};-w^2\right), \end{equation} \noindent where $_2F_1$ is a hypergeometric function, but can be efficiently obtained using the recurrence relation \begin{equation} \phi_{\ell}(w)=\frac{w^{\ell-1}}{\ell-1}-\phi_{\ell-2}(w) \end{equation} \noindent with \begin{equation} \phi_0(w)=\arctan(w)\;\;\;\mathrm{and}\;\;\;\phi_1(w)=\frac{1}{2}\ln\left[1+w^2\right]. \end{equation} \noindent Such a method provides fast and accurate results, even for very asymmetrical and sharp Gram-Charlier distributions (see Fig. \ref{gcv_log}). The total line profile results from the convolution of $\Psi_Z$ with other broadening mechanisms. If $\sigma\leq a/10$, we keep only the Lorentzian $L(h\nu,a)$. On the other hand, if $a\leq \sigma/150$, we keep only the Gaussian.
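The analytic building blocks of this algorithm can be sketched in a few lines of Python (an illustrative sketch with our own function names; the per-interval cubic is obtained here by matching $e^{-u^2/2}$ and its derivative at the interval endpoints, as stated above, rather than by a full spline fit):

```python
import numpy as np
from math import comb

def phi(l, w):
    """phi_l(w) = integral_0^w t^l/(1+t^2) dt, via the two-term recurrence
    phi_l = w**(l-1)/(l-1) - phi_{l-2}, seeded with phi_0 = arctan(w)
    and phi_1 = (1/2) log(1+w^2)."""
    if l == 0:
        return np.arctan(w)
    if l == 1:
        return 0.5 * np.log1p(w * w)
    return w**(l - 1) / (l - 1) - phi(l - 2, w)

def S(p, k, t, lam):
    """S_{p,k}(t) = integral_p^{p+1} u^k / (lam^2 + (t-u)^2) du,
    evaluated analytically with the phi_l functions (binomial expansion
    of u^k = (t + lam*w)^k after the substitution u = t + lam*w)."""
    wa = (p + 1 - t) / lam
    wb = (p - t) / lam
    return sum(comb(k, l) * lam**(l - 1) * t**(k - l)
               * (phi(l, wa) - phi(l, wb)) for l in range(k + 1))

def cubic_coeffs(p):
    """Monomial coefficients (a_p, b_p, c_p, d_p) of the cubic of
    Eq. (cubic), matching exp(-u^2/2) and its derivative at u=p, p+1."""
    g = lambda u: np.exp(-u * u / 2.0)
    gp = lambda u: -u * np.exp(-u * u / 2.0)
    M = np.array([[1.0, p, p**2, p**3],
                  [1.0, p + 1, (p + 1)**2, (p + 1)**3],
                  [0.0, 1.0, 2.0 * p, 3.0 * p**2],
                  [0.0, 1.0, 2.0 * (p + 1), 3.0 * (p + 1)**2]])
    rhs = np.array([g(p), g(p + 1), gp(p), gp(p + 1)])
    return np.linalg.solve(M, rhs)
```

The full profile is then assembled by combining these integrals with the coefficients $\gamma_{p,k}$ of Table \ref{tab:b}, which are not reproduced here.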
If one has to convolve $C(t)$ by an additional Gaussian of standard deviation $\sigma'$ (representing Doppler broadening for instance), $\sigma$, $\alpha_3$ and $\alpha_4$ must be replaced respectively by: \begin{equation} \left\{\begin{array}{l} \tilde{\sigma}=\sqrt{\sigma^2+\sigma'^2}\\ \tilde{\alpha}_3=\alpha_3\left(\frac{\sigma}{\tilde{\sigma}}\right)^3\\ \tilde{\alpha}_4=3+\left(\alpha_4-3\right)\left(\frac{\sigma}{\tilde{\sigma}}\right)^4 \end{array}\right. \end{equation} \noindent (the Gaussian convolution leaves the third- and fourth-order cumulants unchanged while increasing the variance). Figs. \ref{fig1_FCI} and \ref{fig2_FCI} illustrate the impact of a 10 MG magnetic field (typical of ICF) in the XUV range for a carbon plasma at $T$=50 eV and $\rho$=0.01 g/cm$^3$. \begin{figure} \vspace{7mm} \begin{center} \includegraphics[width=10cm]{fig15.eps} \end{center} \vspace{7mm} \caption{(Color online) SCO-RCG calculations (transitions $1s\rightarrow 2p$) with and without magnetic field for a carbon plasma at $T$=50 eV and $\rho$=10$^{-2}$ g.cm$^{-3}$ (conditions typical of ICF).\label{fig1_FCI}} \end{figure} \begin{figure} \vspace{7mm} \begin{center} \includegraphics[width=10cm]{fig16.eps} \end{center} \vspace{7mm} \caption{(Color online) SCO-RCG calculations (transitions $1s\rightarrow 2p$) with and without magnetic field for a carbon plasma at $T$= 50 eV and $\rho$=10$^{-2}$ g.cm$^{-3}$ (conditions typical of ICF) in a spectral range close to the one of Fig. \ref{fig1_FCI}.\label{fig2_FCI}} \end{figure} \section{Conclusion} By combining different degrees of approximation of the atomic structure (levels, configurations and superconfigurations), the SCO-RCG code allows us to explore a wide range of applications, such as the calculation of Rosseland means, the generation of opacity tables, or the spectroscopic interpretation of high-resolution spectra. The Partially Resolved Transition Array model was recently adapted to the hybrid statistical / detailed approach in order to reduce the statistical part and speed up the calculations.
An approximate approach providing a fast and quite accurate estimate of the effect of an intense magnetic field on opacity was also implemented. The formalism requires the moments of the Zeeman components of a line $\alpha J\rightarrow\alpha'J'$, which can be obtained analytically in terms of the quantum numbers and Land\'{e} factors, and the profile is modeled by the fourth-order A-type Gram-Charlier expansion series. We also proposed a fast and accurate method to perform the convolution of this Gram-Charlier series with a Lorentzian function. Such an algorithm is useful in order to account for distortions of the Voigt profile, since the direct numerical evaluation of the integral rapidly becomes prohibitive. More generally, it can be helpful for models relying on the theory of moments (Bancewicz \& Karwowski, 1987), used in most opacity and emissivity codes. In the future, we plan to extend the statistical modeling of the Zeeman effect using temperature-dependent moments (see Appendix) and to improve the treatment of Stark broadening in order to increase the capability of the code regarding K-shell spectroscopy. \clearpage
\section{Introduction} The treatment of linear perturbations has become a prolific field in modern cosmology. The advent of inflationary universe models \cite{inflationmodels} generated great interest in cosmological perturbations and gave rise to a lot of activity in this area. Beyond its early success, the study of perturbed systems in general relativity requires great care because one must deal with the gauge freedom inherent to the theory, which affects the description of the perturbations. Once this problem was realized, most efforts focused on obtaining a manifestly gauge-invariant formalism. Nowadays, cosmology is entering a golden age owing to the recent observational progress, which has opened new windows to test the predictions of theoretical models \cite{precision}. The latest observations are providing us with increasingly accurate data on cosmological phenomena and, for the first time, it seems possible that astrophysical observations may reveal traces of the quantum geometric structure of the early history of the universe \cite{gravpert}. For this reason, the ultimate hope of the community of physicists working on the quantization of gravity is to develop a quantum theory capable of leading to testable predictions. In order to fully capture the quantum nature of spacetime, this theory must involve simultaneously both the geometry and the perturbations, with interplay between them. Over the last decades, the theory of cosmological perturbations \cite{theorypertu}, combined with the inflationary paradigm \cite{inflationmodels}, has emerged as the framework that reconciles the theoretical models of the early universe with observations, since it provides a good approximation to the anisotropies of the cosmic microwave background (CMB) and explains quite satisfactorily the formation of structures at large scales \cite{inflation}. The study of the CMB is a powerful tool for understanding the universe in its origins.
It supports the approximation that the observed region is homogeneous and isotropic in a suitable average sense (demonstrated under certain theoretical assumptions \cite{EGS}). However, this leads to questions about how the anisotropies and cosmological structures formed and developed. The pioneering work in the analysis of perturbations around classical Friedmann-Robertson-Walker (FRW) cosmologies\footnote{These cosmologies are also called Friedmann-Lema\^itre-Robertson-Walker cosmologies by many authors.} is due to Lifshitz \cite{Lpert}, as a first large-scale model of the universe. Nonetheless, it was relatively soon noticed that this analysis had been carried out with a specific gauge choice, and hence it did not address the gauge freedom satisfactorily. This gauge dependence in the description of the perturbations has caused many controversies, because keeping track of the gauge modes can get cumbersome, which makes it difficult to extract the physically meaningful degrees of freedom. An attempt to provide a covariant treatment of the perturbations was made by Hawking \cite{Hpert}, but this work did not totally resolve the gauge ambiguities. It was completed later by Olson \cite{Olson} for the case of an isentropic perfect fluid in a spatially flat spacetime. However, it was Bardeen who first constructed a truly gauge-invariant formalism (originally for a perfect fluid), which mixes the perturbations of the matter with the perturbations of the four-dimensional metric \cite{bardeen}. This work was followed by many other contributions \cite{gaugeinvariant,sasa}. Likewise, Mukhanov, based on Sasaki's investigations \cite{sasa}, proposed some gauge-invariant field-like variables for the case of a scalar field on a spatially flat FRW background, directly related to the co-moving curvature perturbations \cite{mukhanov}.
Mukhanov expanded the action for the gravitational and scalar fields up to second order in the perturbations and introduced a gauge-invariant field that completely characterized those perturbations and allowed one to rewrite their action exclusively in terms of it when the background equations were employed. In a standard analysis of primordial fluctuations, one studies the perturbations within the scheme of Quantum Field Theory (QFT) in a classical and fixed curved spacetime. From this viewpoint, the nature of these primordial perturbations is in fact quantum, and it is fairly generic to assume that they started out at a very early time as fluctuations with a small amplitude and gradually grew in time as a consequence of gravitational instabilities. To do this, one essentially represents the perturbations by quantum fields and considers that they were initially propagating on a given geometry, such as a de Sitter spacetime, which describes rather well the inflationary stage of the universe \cite{bunchdavies}. Actually, we already know from the CMB data that fluctuations with amplitudes of $10^{-5}$ are sufficient to reproduce the cosmic structures observed today. Despite how well this treatment is able to explain the present observations, the challenge for quantum cosmology is to build a formalism which includes the homogeneous background and the inhomogeneous perturbations and proves to be potentially predictive, in order to elucidate whether the relics of the quantum fluctuations of the early universe may encode information about the quantum character of the spacetime geometry itself. A canonical approach to cosmological perturbations in which the background and the inhomogeneities (both in the geometry and in a matter scalar field) were treated together quantum mechanically, although fixing the gauge, was developed by Halliwell and Hawking \cite{HH} in the context of closed FRW universes.
They expanded the action up to second order in the perturbations and then built the corresponding Hamiltonian. Shortly after, Shirai and Wada \cite{shiwa} reformulated this formalism, isolating the quantum dependence on gauge invariants. Actually, they introduced not only a canonical transformation for the perturbations, but also modified the variables of the background by terms quadratic in those perturbations. Even so, physical states still depended on perturbative variables that were not gauge invariants (although in a way that was completely determined), and the introduced canonical transformations were only defined in a semiclassical limit. Therefore, they depended on the solution to the effective equations that was considered in each case. Later, Langlois \cite{Langlois} tried to refine and clarify this procedure by using techniques inspired by Hamilton-Jacobi theory in order to obtain the gauge-invariant perturbations. Nevertheless, he did not include transformations of the background and, besides, disregarded the explicit form in which the Hamiltonian constraint depends on non-gauge-invariant variables, on the grounds that this should be physically irrelevant. More recently, Pinto-Neto and collaborators \cite{PinhoPinto} proposed another approach by means of canonical transformations which involve the perturbations and the background and, in particular, include the Mukhanov-Sasaki (MS) variables. More specifically, they considered the case of a perfect fluid and of a scalar field, and reformulated the system so that the global Hamiltonian constraint depends only on the gauge invariants and the new background variables. In this reformulation, the equations of motion of the background were not used, and the lapse function was redefined.
Nonetheless, second-class constraints appeared in this reformulation procedure, which had to be eliminated by reduction and the introduction of Dirac brackets, a step that obscures the full gauge invariance of the ultimate system and the role played in it by the perturbative constraints. In recent years, the canonical quantization of cosmological perturbations has received as well a lot of attention within the framework of Loop Quantum Cosmology (LQC). LQC \cite{lqc} addresses the quantization of cosmological systems using the ideas and techniques of Loop Quantum Gravity (LQG) \cite{lqg}, a non-perturbative and background-independent program for the quantization of general relativity that provides one of the most promising approaches for a quantum theory of the gravitational interaction. Initially, LQC was applied successfully to homogeneous models in cosmology, leading to a consistent quantization of the FRW spacetime in which the Big Bang singularity is replaced with a quantum bounce, namely the Big Bounce \cite{bounce1,bounce2}. Clearly, the limitation of homogeneity is a restriction that must be surpassed when one is interested in studying more realistic scenarios. Therefore, it is natural to try and go beyond this simplification and incorporate small fluctuations in the quantum treatment of the geometry and the matter content. The loop quantization of the FRW background has been combined with a Fock quantization of the perturbations in a so-called {\it{hybrid}} approach, in which the whole system that describes the perturbed cosmologies is quantized with canonical methods \cite{hybridFRW,hybridFRWflat,hybridFRWflatMS}. In particular, in this formalism the perturbed spacetime geometry can be treated as a fully quantum entity. This hybrid approach was originally developed in the linearly polarized Gowdy models \cite{hybridgowdy}. In this case, the inhomogeneities can be dealt with exactly, rather than perturbatively.
An alternative proposal, put forward by Ashtekar {\it et al.} \cite{dressed}, investigates the propagation of the perturbations in the {\it{dressed}} FRW geometry obtained by incorporating the quantum effects of LQC on the background. Consequently, in this analysis, known as the {\it{dressed metric}} formalism, one loses a full quantum description of the geometry in the perturbed system. In both the {\it{hybrid}} and the {\it{dressed metric}} approaches in LQC, the perturbations have been expressed in terms of gauge invariants, though only after eliminating degrees of freedom in a reduction that casts doubt on the covariance of the system. Finally, a third route to analyze the effects of LQG in cosmological perturbations consists in assuming corrections in the quantum constraints arising from the use of holonomies and the regularization of the inverse of the volume operator \cite{effective,BBC}. Demanding the closure of the modified algebra of constraints, one can deduce the form of those quantum corrections and the corresponding effective field equations for the propagation of the perturbations. This strategy is intrinsically covariant, although it still rests on a series of hypotheses about, e.g., the form of the possible loop quantum corrections, the use of local expansions for non-local quantities, or the independence of the results on the existence of superselection sectors. In this work, we will provide a canonical formalism for perturbed flat FRW spacetimes with a matter scalar field which is specially designed to preserve covariance at the considered perturbative level. In particular, the perturbations will be described by the MS gauge invariants, by combinations of the perturbative constraints, and by variables canonically conjugate to them.
For completeness and self-consistency in the presentation, we will detail the change to this description from a classical formulation similar to that of Halliwell and Hawking \cite{HH} for the case of flat spatial topology. This change will be obtained by means of a canonical transformation which will be completed in order to include as well the background variables. In addition, we will discuss how to quantize the resulting model using generalized hybrid techniques, namely, combining any given quantization of the background with a quantum theory for the perturbations (assuming the compatibility of both quantizations in the whole system). In this framework, we will show how to extract quantum field equations for the MS variables without any gauge fixing. Furthermore, this will be done without appealing to semiclassical approximations, as in Refs. \cite{HH,shiwa}, or to a Bohm-de Broglie scheme, as in Ref. \cite{PinhoPinto}. Instead, we will employ a kind of Born-Oppenheimer ansatz and discuss the validity of its application. In this derivation, we will not make use of any background equation of motion or constraint, whether at the classical, effective, or quantum level. Finally, we will particularize our discussion to the specific case of a hybrid quantization in loop cosmology. In this sense, our analysis extends other treatments like that of Ref. \cite{hybridFRWflatMS}, where the gauge freedom associated with the perturbative constraints was fixed, while we do not make any such classical gauge fixing here. For that, we will need to abelianize the algebra of constraints in our perturbed FRW system. Let us finally mention that, in the context of LQG as well, there have been some attempts to develop a manifestly gauge-invariant perturbation theory in the canonical framework by constructing approximate complete observables \cite{bianca}, adopting an approach different from ours. The plan of the rest of the paper is as follows. Sec.
\ref{sec:system} contains the notation and the description of the classical perturbed system. In Sec. \ref{sec:inv} we will introduce the set of transformations for the inhomogeneous modes that leads to gauge invariants, perturbative constraints, and their conjugate variables. This will be extended in Sec. \ref{sec:complete} to a transformation for the whole perturbed system that preserves the canonical structure. In doing this, we will have to include quadratic perturbative corrections to the background variables. In addition, in this section we will study how these changes modify the zero-mode of the Hamiltonian constraint. In Sec. \ref{sec:hybrid} we will explain how to proceed to a (generalized) hybrid quantization of the total system formed by the background and the perturbations. Adopting then a Born-Oppenheimer ansatz, we will deduce a quantum equation for the propagation of the MS variables, discussing its range of validity. Besides, we will derive an effective equation governing the evolution of the corresponding classical variables. This formalism will be applied to LQC in Sec. \ref{sec:lqc}. We will conclude and summarize in Sec. \ref{sec:conclusions}. Finally, we will include two appendices with extra details of the calculations. \section{The perturbed flat FRW model} \label{sec:system} In this section we review the classical description of the FRW model with a minimally coupled scalar field and with inhomogeneous perturbations (see e.g. Ref. \cite{HH}). We consider the case of flat spatial topology which (in order to avoid technical problems in the ultraviolet regime, as we will comment later on) we assume to be compact, namely that of a three-torus. This flat case describes the physically relevant situation in cosmology, since it is in agreement with observations, as long as the compactification scales of the three-torus are large compared to the radius of our observable universe. 
The matter content of the model is provided by a scalar field $\Phi$ minimally coupled to the geometry and subject to a potential term. Although the discussion can be carried out for generic potentials, we will particularize our analysis to the case of a mass contribution for simplicity, explaining just how the main formulas of our study generalize for any other potential of the scalar field. On the other hand, as is well known, scalar, vector, and tensor perturbations decouple from each other at leading order in the perturbative treatment, and may be considered independently. Actually, for scalar fields, vector perturbations are pure gauge and therefore do not contain any physical degree of freedom. On the contrary, tensor perturbations are in fact gauge invariant, and hence their treatment is relatively simple, at least with respect to the issues of covariance of the perturbative formulation that we want to address. Besides, from an observational point of view, scalar perturbations are the most interesting ones, since they are ultimately responsible for the anisotropies measured in the temperature of the CMB. We will thus focus our discussion on scalar perturbations. This model was also considered in Refs. \cite{hybridFRWflat, hybridFRWflatMS} but, unlike in this work, the system was reduced by partially fixing the gauge freedom in those references. Here, we follow conventions and notation similar to the ones of those works, and we refer the reader to them for further details. Adopting a canonical 3+1 decomposition, we parameterize the four-metric in terms of the three-metric $h_{ij}$ induced on the sections of constant time $t$, a lapse function $N$, and a shift vector $N^i$ (or co-vector $N_i$). Spatial indices $i,j$ run from 1 to 3.
For the unperturbed FRW spacetime, these metric functions are completely characterized by a homogeneous lapse $N_0(t)$, and by the scale factor of the spatial metric, its square multiplying a static auxiliary three-metric $^0h_{ij}$. Instead of using the (positive) scale factor we will consider its (real) logarithm, denoted by $\alpha(t)$. On the other hand, we take $^0h_{ij}$ as the standard Euclidean metric on the three-torus $T^3$, appropriate for the considered case of a compact flat universe. We denote the corresponding angular coordinates by $\theta_i$, such that $2\pi \theta_i/ l_0 \in S^1$, so that the period is $l_0$ in each of the orthonormal directions. Any function defined on the spatial sections, such as those describing the inhomogeneous perturbations, can be expanded in the basis formed by the eigenmodes of the Laplace-Beltrami operator compatible with the metric $^0h_{ij}$. In this way, we transform the study of the spatial dependence into a spectral analysis in terms of those modes, whose dynamics decouple (in fixed backgrounds) at leading perturbative order. Then, as in Ref. \cite{hybridFRWflatMS}, we adopt a basis of real Fourier modes, formed by the sine and cosine functions \begin{eqnarray} \tilde Q_{\vec n,+} (\vec\theta)&=& \sqrt 2\cos\left(\frac{2\pi}{l_0}\vec n\cdot\vec\theta\right),\\ \tilde Q_{\vec n,-} (\vec\theta)&=& \sqrt 2\sin\left(\frac{2\pi}{l_0}\vec n\cdot\vec\theta\right). \end{eqnarray} Here, $\vec n\cdot\vec\theta=\sum_in_i\theta_i$, and $\vec n=(n_1,n_2,n_3)\in\mathbb Z^3$ is any tuple of integers such that its first non-vanishing component is strictly positive (this restriction is introduced to avoid repetition of modes). These scalar modes have a norm equal to the square root of the auxiliary volume $l_0^3$ of the three-torus, and a Laplace-Beltrami eigenvalue equal to $-\omega_n^2=-4\pi^2\vec n\cdot\vec n/l_0^{2}$. In the expansion of the inhomogeneities, the vanishing tuple $\vec n$ is not included. 
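For concreteness, the real mode functions and their Laplace-Beltrami eigenvalue can be coded directly (an illustrative sketch; the function names are ours):

```python
import numpy as np

def torus_modes(n, l0):
    """Real Fourier modes Q_{n,+} (cosine) and Q_{n,-} (sine) on the
    three-torus of period l0, together with omega_n^2 = 4 pi^2 (n.n)/l0^2;
    the Laplace-Beltrami eigenvalue of both modes is -omega_n^2."""
    n = np.asarray(n, dtype=float)
    omega2 = 4.0 * np.pi**2 * np.dot(n, n) / l0**2
    phase = lambda th: 2.0 * np.pi * np.dot(n, np.asarray(th, dtype=float)) / l0
    Qplus = lambda th: np.sqrt(2.0) * np.cos(phase(th))
    Qminus = lambda th: np.sqrt(2.0) * np.sin(phase(th))
    return Qplus, Qminus, omega2
```

A finite-difference Laplacian applied to these modes reproduces $-\omega_n^2 Q$, and the average of $Q^2$ over the torus equals 1, consistent with the stated norm $\sqrt{l_0^3}$.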
The zero-modes account for the background homogeneous geometry, parameterized by $N_0(t)$ and $\alpha(t)$, and for the homogeneous part of the matter field $\Phi$, that we denote by $\varphi(t)$. These degrees of freedom are treated exactly at the perturbative order of our truncations in the description of the system.\footnote{See e.g. Refs. \cite{HH,shiwa,PinhoPinto,kk}, and the discussion on this issue in Refs. \cite{BBC, hybridFRWflatMS}, as well as the full treatment beyond perturbation theory adopted in the inhomogeneous Gowdy cosmologies \cite{gow}, that confirms that when the inhomogeneities are regarded as perturbations, this procedure to deal with the zero-modes is correct.} Using this Fourier expansion, the spacetime metric and the scalar field can be expressed as \begin{subequations}\label{eqs:expansions} \begin{eqnarray}\label{3metric} h_{ij}(t,\vec\theta) &=& \sigma^2 e^{2\alpha(t)}\;{}^0h_{ij}(\vec\theta )\left[1+2\sum_{\vec n,\epsilon}a_{\vec n,\epsilon} (t)\tilde Q_{\vec n,\epsilon}(\vec\theta)\right] \nonumber\\ &+&6 \sigma^2 e^{2\alpha(t)}\sum_{\vec n,\epsilon}b_{\vec n,\epsilon}(t)\left[\frac1{\omega_n ^2}(\tilde Q_{\vec n,\epsilon})_{|ij}(\vec\theta)+\frac13{}^0h_{ij}(\vec\theta)\tilde Q_{\vec n,\epsilon}(\vec\theta)\right],\\ \label{lapse} N(t,\vec\theta) &=& \sigma \left[N_0(t)+e^{3\alpha(t)} \sum_{\vec n,\epsilon}g_{\vec n,\epsilon}(t)\tilde Q_{\vec n,\epsilon}(\vec\theta)\right], \\ \label{shift} N_i(t,\vec\theta) &=& \sigma^2e^{2\alpha(t)}\sum_{\vec n,\epsilon}\frac1{\omega_n^2}k_{\vec n,\epsilon}(t)(\tilde Q_{\vec n,\epsilon})_{|i}(\vec\theta),\\ \label{field} \Phi(t,\vec\theta) &=& \frac{1}{\sigma\sqrt{l_0^{3}}}\left[\varphi(t)+\sum_{\vec n,\epsilon}f_{\vec n,\epsilon}(t)\tilde Q_{\vec n,\epsilon} (\vec\theta)\right]. 
\end{eqnarray} \end{subequations} In these equations, we have defined the constant $\sigma^2=4\pi G/(3l_0^3)$, $G$ is the Newton constant, the vertical bar stands for the covariant derivative with respect to the auxiliary metric ${}^0h_{ij}$, and $\epsilon=+,-$ for cosine and sine modes, respectively. Besides, we have scaled for convenience the shift vector and the inhomogeneous part of the lapse function by a power of the scale factor\footnote{We note that ${k}_{\vec n,\epsilon}(t)$ and ${g}_{\vec n,\epsilon}(t)$ are not exactly those of Ref. \cite{hybridFRWflatMS}, owing to the commented scaling.} (and of the mode frequency in the case of the shift). The time-dependence of the geometry and matter scalar perturbations is parameterized by the functions \begin{equation}\label{vari} \{a_{\vec n,\epsilon} (t), b_{\vec n,\epsilon} (t), g_{\vec n,\epsilon} (t), k_{\vec n,\epsilon} (t), f_{\vec n,\epsilon} (t)\}. \end{equation} In what follows we will omit the time-dependence in these quantities to simplify the notation. Following an approach parallel to that in Ref. \cite{HH}, we can now substitute these expressions in the Hamiltonian form of the gravitational action with a minimally coupled scalar field and truncate the result at quadratic order in the inhomogeneous perturbations. In this way, one obtains the total Hamiltonian $H$ of the perturbed system at this order of approximation. As expected, this Hamiltonian is given by a linear combination of constraints, reflecting the freedom inherited from general relativity to perform spatial diffeomorphisms and time reparameterizations. Specifically, we get \cite{hybridFRWflat,HH} \begin{equation}\label{eq:Hamiltonian} H ={N}_0\Big[H_{|0}+\sum_{\vec n,\epsilon} H^{\vec n,\epsilon}_{|2}\Big]+\sum_{\vec n,\epsilon} g_{\vec n,\epsilon} \tilde{H}^{\vec n,\epsilon}_{|1}+\sum_{\vec n,\epsilon}k_{\vec n,\epsilon} \tilde{H}^{\vec n,\epsilon}_{\_1}. 
\end{equation} Here, $H_{|0}$ denotes the scalar or Hamiltonian constraint of the unperturbed flat FRW model, that generates homogeneous time reparameterizations in that system: \begin{equation}\label{eq:H_0} H_{|0} = \frac{e^{-3\alpha}}{2}\big(-\pi_\alpha^2+\pi_\varphi^2+e^{6\alpha}\tilde{m}^2\varphi^2). \end{equation} The constant $\tilde{m}$ is the mass $m$ of the scalar field conveniently re-expressed as $\tilde{m}= m \sigma$. Besides, we employ the notation $\pi_q$ to denote the momentum conjugate to any variable $q$. Notice that, in the perturbed case under study, the zero-mode of the Hamiltonian constraint (or global Hamiltonian constraint) gets contributions from the inhomogeneities. At our truncation order, these contributions are quadratic in the perturbations. We have included them in the sum of terms $ H^{\vec n,\epsilon}_{|2}$. For each mode of the perturbations, the contribution is\footnote{This formula corrects a misprint in Ref. \cite{hybridFRWflat}, that was not relevant for the results discussed in that work.} \begin{align}\label{eq:H2} H^{\vec n,\epsilon}_{|2} &= \frac{e^{-3\alpha}}{2}\Big\{ -\pi_{a_{\vec n,\epsilon}}^2+\pi_{b_{\vec n,\epsilon}}^2+\pi_{f_{\vec n,\epsilon}}^2+2\pi_\alpha(a_{\vec n,\epsilon}\pi_{a_{\vec n,\epsilon}}+4b_{\vec n,\epsilon}\pi_{b_{\vec n,\epsilon}})-6\pi_\varphi a_{\vec n,\epsilon}\pi_{f_{\vec n,\epsilon}} \nonumber\\ &\phantom{=\frac12e^{-3\alpha}\Big\{} +\pi_\alpha^2\Big(\tfrac12a_{\vec n,\epsilon}^2+10b_{\vec n,\epsilon}^2\Big)+\pi_\varphi^2\Big(\tfrac{15}2a_{\vec n,\epsilon}^2+6b_{\vec n,\epsilon}^2\Big) \nonumber\\ &\phantom{=\frac12e^{-3\alpha}\Big\{} -e^{4\alpha}\Big(\tfrac13\omega_n^2a_{\vec n,\epsilon}^2+\tfrac13\omega_n^2b_{\vec n,\epsilon}^2+\tfrac23\omega_n^2a_{\vec n,\epsilon}b_{\vec n,\epsilon}-\omega_n^2f_{\vec n,\epsilon}^2\Big) \nonumber\\ &\phantom{=\frac12e^{-3\alpha}\Big\{} +e^{6\alpha}\tilde m^2\Big[3\varphi^2\Big(\tfrac12a_{\vec n,\epsilon}^2-2b_{\vec n,\epsilon}^2\Big)+6\varphi a_{\vec 
n,\epsilon}f_{\vec n,\epsilon}+f_{\vec n,\epsilon}^2\Big]\Big\}. \end{align} On the other hand, $\tilde H^{\vec n,\epsilon}_{|1}$ and $ \tilde H^{\vec n,\epsilon}_{\_1}$ are linear in the inhomogeneous perturbations. $\tilde H^{\vec n,\epsilon}_{|1}$ arises from the perturbation of the Hamiltonian constraint around the FRW model in full general relativity, while $ \tilde H^{\vec n,\epsilon}_{\_1}$ comes from the perturbation of the momentum (or diffeomorphisms) constraint. These linear perturbative constraints are given by \begin{align} \tilde{H}^{\vec n,\epsilon}_{|1} &= \frac{1}{2}\Big[-2\pi_\alpha\pi_{a_{\vec n,\epsilon}}+2\pi_\varphi\pi_{f_{\vec n,\epsilon}}-\big(\pi_\alpha^2+3\pi_\varphi^2\big)a_{\vec n,\epsilon}-\tfrac23\omega_n^2e^{4\alpha}(a_{\vec n,\epsilon}+b_{\vec n,\epsilon}) \nonumber\\ &\phantom{= \frac12e^{-3\alpha}\Big[} +e^{6\alpha}\tilde m^2\varphi(3\varphi a_{\vec n,\epsilon}+2f_{\vec n,\epsilon})\Big], \label{eq:H^n_|1}\\ \tilde{H}^{\vec n,\epsilon}_{\_1} &= \frac{1}{3}\big[-\pi_{a_{\vec n,\epsilon}}+\pi_{b_{\vec n,\epsilon}}+\pi_\alpha(a_{\vec n,\epsilon}+4b_{\vec n,\epsilon})+3\pi_\varphi f_{\vec n,\epsilon}\big]. \label{eq:H^n__1} \end{align} We note that $g_{\vec n,\epsilon}$ and $k_{\vec n,\epsilon}$ do not parameterize physical degrees of freedom, but they are instead the Lagrange multipliers associated with these linear perturbative constraints. Finally, it is worth remarking that, at the order of truncation adopted in the action, the perturbed system is symplectic, with canonical variables given by the zero-modes $\alpha$ and $\varphi$, the Fourier coefficients $\{X^{\vec n,\epsilon}_{q_l}\}\equiv\{a_{\vec n,\epsilon}, b_{\vec n,\epsilon}, f_{\vec n,\epsilon}\}$ (with $l=1,2,3$), and their corresponding momenta \cite{HH,hybridFRWflat}. 
\section{Perturbations in terms of gauge-invariant variables} \label{sec:inv} As explained in the introduction, our first goal is to describe our perturbative system in terms of gauge-invariant variables without fixing any gauge freedom, preserving covariance at the level of the perturbative description. With this aim, we first focus our attention on the inhomogeneous sector of the phase space, which contains the degrees of freedom of the perturbations. In the previous section, these degrees of freedom were parameterized by the variables $\{X^{\vec n,\epsilon}_{q_l}\}$, together with their canonically conjugate momenta $\{X^{\vec n,\epsilon}_{p_l}\}\equiv\{\pi_{a_{\vec n,\epsilon}}, \pi_{b_{\vec n,\epsilon}}, \pi_{f_{\vec n,\epsilon}}\}$, with $l$ running from 1 to 3. We will introduce a canonical transformation to describe these perturbations in terms of the MS variables, two suitable combinations of the linear perturbative constraints, and appropriate conjugate pairs of all of them. Since the intention is to respect the canonical structure of the set of elementary variables used for the perturbations, it is clear that we need to find, in particular, combinations of the linear perturbative constraints which commute, thereby abelianizing the perturbative constraint algebra. Once this abelianization is introduced, the fact that the MS variables are gauge invariant, and hence commute with the perturbative constraints, makes it straightforward to find a complete set of compatible elementary variables for the inhomogeneous sector. The desired canonical transformation is then attained with a convenient choice of conjugate variables. Later on, in Sec. \ref{sec:complete}, we will complete this transformation into a canonical one for the entire system, that is, considering not only the inhomogeneities but also the homogeneous sector of the phase space, parameterized by the canonical variables for the zero-modes $\{\alpha,\pi_\alpha, \varphi,\pi_\varphi\}$.
In total, the resulting Hamiltonian will be a linear combination of constraints that are not only first-class (as usual) but, moreover, form an abelian algebra. Since in this section we are only interested in the canonical transformations of the inhomogeneous sector of our system and in the symplectic structure induced on it, we will momentarily treat the homogeneous sector as a fixed background. With this purpose, and only in that sense, we can ignore for the moment the Poisson brackets of the zero-mode variables, both between themselves and with the perturbations. Hence, in this section we will operate with a Poisson bracket in the inhomogeneous sector defined accordingly as \begin{align} \{F,G\}_{(P)}\equiv\sum_{l,\vec n,\epsilon} \left(\frac{\partial F}{\partial X^{\vec n,\epsilon}_{q_l}}\frac{\partial G}{\partial X^{\vec n,\epsilon}_{p_l}}- \frac{\partial F}{\partial X^{\vec n,\epsilon}_{p_l}}\frac{\partial G}{\partial X^{\vec n,\epsilon}_{q_l}}\right), \end{align} where $F$ and $G$ are functions of the perturbations, and might also depend on the background variables. \subsection{Canonical transformation for the perturbations} Let us start by introducing the MS variables. In terms of the configuration modes $X^{\vec n,\epsilon}_{q_l}$, the modes of the MS field are given by \cite{mukhanov,hybridFRWflat,hybridFRWflatMS} \begin{equation}\label{Mmode} v_{\vec n,\epsilon} = e^\alpha\left[f_{\vec n,\epsilon}+\frac{\pi_{\varphi}}{\pi_\alpha}(a_{\vec n,\epsilon}+b_{\vec n,\epsilon})\right]. \end{equation} It is straightforward to check that these modes are indeed gauge invariant, as they Poisson commute with the linear perturbative constraints: \begin{equation} \{v_{\vec n,\epsilon}, \tilde H^{\vec n',\epsilon'}_{|1}\}_{(P)}=0=\{v_{\vec n,\epsilon}, \tilde{H}^{\vec n',\epsilon'}_{\_1}\}_{(P)}. \end{equation} We would like to complete these modes to a set of compatible elementary variables for the description of the perturbations.
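The vanishing of these brackets can be cross-checked symbolically. The following SymPy sketch (a consistency check of ours, not part of the derivation) implements the bracket $\{\cdot,\cdot\}_{(P)}$ for a single mode, with the zero-modes entering as fixed parameters, and confirms that the mode \eqref{Mmode} Poisson commutes with the linear perturbative constraints \eqref{eq:H^n_|1} and \eqref{eq:H^n__1}.

```python
import sympy as sp

# Background zero-modes, treated here as fixed external parameters
al, pal, ph, pph, mt, wn = sp.symbols(
    'alpha pi_alpha varphi pi_varphi mtilde omega_n')

# Single-mode perturbative variables (a, b, f) and their momenta
a, b, f, pia, pib, pif = sp.symbols('a b f pi_a pi_b pi_f')
qs, ps = (a, b, f), (pia, pib, pif)

def pbracket(F, G):
    """Perturbative Poisson bracket {F, G}_(P), restricted to one mode."""
    return sum(sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)
               for q, p in zip(qs, ps))

# Linear perturbative constraints of the previous section
H1 = sp.Rational(1, 2)*(-2*pal*pia + 2*pph*pif - (pal**2 + 3*pph**2)*a
                        - sp.Rational(2, 3)*wn**2*sp.exp(4*al)*(a + b)
                        + sp.exp(6*al)*mt**2*ph*(3*ph*a + 2*f))
Hm1 = sp.Rational(1, 3)*(-pia + pib + pal*(a + 4*b) + 3*pph*f)

# Mukhanov-Sasaki mode
v = sp.exp(al)*(f + (pph/pal)*(a + b))

# Gauge invariance: v commutes with both linear constraints
inv1 = sp.simplify(pbracket(v, H1))
inv2 = sp.simplify(pbracket(v, Hm1))
print(inv1, inv2)
```

The same check, mode by mode, extends trivially to the full sums over $\vec n$ and $\epsilon$, since the transformation does not mix different modes.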
Since the MS variables are gauge invariant, it is natural to try this completion by considering combinations of the linear perturbative constraints, with which they commute. Besides, in this way, the information about the perturbative constraints will be straightforwardly incorporated in our system in terms of elementary variables. In particular, imposing them quantum mechanically will be a direct task. The fact that we want to extract two compatible variables from the perturbative constraints means, as we have already pointed out, that we have to abelianize the algebra of those constraints (and a posteriori, the entire constraint algebra of the perturbed FRW system). With this aim in mind, we notice that the only non-vanishing Poisson bracket among them is $\{\tilde H^{\vec n,\epsilon}_{\_1}, \tilde H^{\vec n,\epsilon}_{|1}\}_{(P)}=e^{3\alpha} H_{|0}$. It is then easy to abelianize their algebra: We only need to replace $\tilde H^{\vec n,\epsilon}_{|1}$ with the combination \begin{align} \breve{H}^{\vec n,\epsilon}_{|1}&=\tilde H^{\vec n,\epsilon}_{|1}-3e^{3\alpha} H_{|0} a_{\vec n,\epsilon}\nonumber\\ &=-\pi_\alpha\pi_{a_{\vec n,\epsilon}}+\pi_\varphi\pi_{f_{\vec n,\epsilon}}+\big(\pi_\alpha^2-3\pi_\varphi^2-\tfrac13\omega_n^2e^{4\alpha}\big)a_{\vec n,\epsilon}-\tfrac13\omega_n^2e^{4\alpha}b_{\vec n,\epsilon} +e^{6\alpha}\tilde m^2\varphi f_{\vec n,\epsilon}. \end{align} Actually, this change amounts to a redefinition of the zero-mode of the lapse function.
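Both the bracket $\{\tilde H^{\vec n,\epsilon}_{\_1}, \tilde H^{\vec n,\epsilon}_{|1}\}_{(P)}=e^{3\alpha} H_{|0}$ and the abelianization achieved by $\breve{H}^{\vec n,\epsilon}_{|1}$ can be confirmed with a short symbolic computation. The sketch below (our own single-mode cross-check, with the zero-modes as fixed parameters) also verifies the expanded expression displayed above for $\breve{H}^{\vec n,\epsilon}_{|1}$.

```python
import sympy as sp

# Zero-modes as fixed parameters; single-mode perturbations (a, b, f)
al, pal, ph, pph, mt, wn = sp.symbols(
    'alpha pi_alpha varphi pi_varphi mtilde omega_n')
a, b, f, pia, pib, pif = sp.symbols('a b f pi_a pi_b pi_f')
qs, ps = (a, b, f), (pia, pib, pif)

def pbracket(F, G):
    return sum(sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)
               for q, p in zip(qs, ps))

e = sp.exp
H0  = e(-3*al)/2*(-pal**2 + pph**2 + e(6*al)*mt**2*ph**2)
H1  = sp.Rational(1, 2)*(-2*pal*pia + 2*pph*pif - (pal**2 + 3*pph**2)*a
                         - sp.Rational(2, 3)*wn**2*e(4*al)*(a + b)
                         + e(6*al)*mt**2*ph*(3*ph*a + 2*f))
Hm1 = sp.Rational(1, 3)*(-pia + pib + pal*(a + 4*b) + 3*pph*f)

# {H_-1, H_|1}_(P) = exp(3 alpha) H_|0
closes = sp.simplify(pbracket(Hm1, H1) - e(3*al)*H0)

# Abelianized constraint and the expanded form quoted in the text
Hb1 = H1 - 3*e(3*al)*H0*a
Hb1_text = (-pal*pia + pph*pif + (pal**2 - 3*pph**2 - wn**2*e(4*al)/3)*a
            - wn**2*e(4*al)/3*b + e(6*al)*mt**2*ph*f)
match   = sp.simplify(Hb1 - Hb1_text)
abelian = sp.simplify(pbracket(Hm1, Hb1))
print(closes, match, abelian)
```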
Indeed, in the action of the system and up to the quadratic order in perturbations that we are working with, we can rewrite the Hamiltonian \eqref{eq:Hamiltonian} as \begin{equation}\label{eq:Hamiltonian2} H =\breve N_0\Big[ H_{|0}+\sum_{\vec n,\epsilon} H^{\vec n,\epsilon}_{|2}\Big]+\sum_{\vec n,\epsilon} g_{\vec n,\epsilon}\breve{H}^{\vec n,\epsilon}_{|1}+\sum_{\vec n,\epsilon}k_{\vec n,\epsilon}\tilde H^{\vec n,\epsilon}_{\_1}, \end{equation} with the new zero-mode of the lapse function acquiring contributions (of quadratic order) from the inhomogeneities: \begin{equation}\label{N0re} \breve N_0 =N_0+3 e^{3\alpha}\sum_{\vec n,\epsilon}g_{\vec n,\epsilon}a_{\vec n,\epsilon}. \end{equation} Note that the MS variables $v_{\vec n,\epsilon}$ remain invariant with respect to the new set of constraints. In summary, the set of variables $\{v_{\vec n,\epsilon},\breve H^{\vec n,\epsilon}_{|1},\tilde{H}^{\vec n,\epsilon}_{\_1}\}$ provides a parameterization of the inhomogeneous configuration space (inasmuch as all these variables are compatible) in terms of constraints and gauge invariants, as we wanted. We now complete the canonical transformation in the inhomogeneous sector by determining variables canonically conjugate to our new set. It is straightforward to check that \begin{align} C^{\vec n,\epsilon}_{\_1}=3 b_{\vec n,\epsilon} \end{align} is canonically conjugate to $\tilde H^{\vec n,\epsilon}_{\_1}$, namely $\{C^{\vec n,\epsilon}_{\_1},\tilde H^{\vec n,\epsilon}_{\_1}\}_{(P)}=1$, while it Poisson commutes with $v_{\vec n',\epsilon'}$ and $\breve{H}^{\vec n',\epsilon'}_{|1}$. Finding the other canonical variables requires a bit more work.
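The stated properties of $C^{\vec n,\epsilon}_{\_1}$ can be verified directly. The following sketch (our own consistency check, under the single-mode conventions used above) evaluates the three relevant brackets:

```python
import sympy as sp

al, pal, ph, pph, mt, wn = sp.symbols(
    'alpha pi_alpha varphi pi_varphi mtilde omega_n')
a, b, f, pia, pib, pif = sp.symbols('a b f pi_a pi_b pi_f')
qs, ps = (a, b, f), (pia, pib, pif)

def pbracket(F, G):
    # Single-mode version of the perturbative Poisson bracket {., .}_(P)
    return sum(sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)
               for q, p in zip(qs, ps))

e = sp.exp
v   = e(al)*(f + (pph/pal)*(a + b))                 # MS mode
Hm1 = (-pia + pib + pal*(a + 4*b) + 3*pph*f)/3      # momentum constraint
Hb1 = (-pal*pia + pph*pif                            # abelianized constraint
       + (pal**2 - 3*pph**2 - wn**2*e(4*al)/3)*a
       - wn**2*e(4*al)/3*b + e(6*al)*mt**2*ph*f)
C3  = 3*b                                            # candidate conjugate variable

checks = [sp.simplify(pbracket(C3, Hm1)),   # should be 1
          sp.simplify(pbracket(C3, v)),     # should be 0
          sp.simplify(pbracket(C3, Hb1))]   # should be 0
print(checks)
```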
We skip the details of the calculation and encourage the reader to check that one can choose \begin{align} C^{\vec n,\epsilon}_{|1}=-\frac1{\pi_\alpha}(a_{\vec n,\epsilon}+ b_{\vec n,\epsilon}), \end{align} as the variable conjugate to the constraint $\breve{H}^{\vec n,\epsilon}_{|1}$, in the sense that $\{C^{\vec n,\epsilon}_{|1},\breve{H}^{\vec n,\epsilon}_{|1}\}_{(P)}=1$, whereas \begin{align} \pi_{v_{\vec n,\epsilon}}=e^{-\alpha}\bigg[\pi_{f_{\vec n,\epsilon}}+\frac1{\pi_\varphi}\Big(e^{6\alpha}\tilde{m}^2 \varphi f_{\vec n,\epsilon}+3\pi_\varphi^2 b_{\vec n,\epsilon} \Big)\bigg] \end{align} is a momentum conjugate to the MS variable $v_{\vec n,\epsilon}$, that is $\{v_{\vec n,\epsilon}, \pi_{v_{\vec n,\epsilon}} \}_{(P)}=1$. For convenience, we will assign the role of configuration variables to $C^{\vec n,\epsilon}_{\_1}$ and $C^{\vec n,\epsilon}_{|1}$, and view the constraints $\tilde H^{\vec n,\epsilon}_{\_1}$ and $\breve H^{\vec n,\epsilon}_{|1}$ as momenta (this will simplify the discussion about the imposition of the perturbative constraints à la Dirac in the quantization of the system). Furthermore, we will use a compact notation for the new set of canonical variables for the perturbations, namely we define \begin{eqnarray} \{V^{\vec n,\epsilon}_{q_l}\}&\equiv&\{v_{\vec n,\epsilon}, C^{\vec n,\epsilon}_{|1},C^{\vec n,\epsilon}_{\_1}\},\\ \{V^{\vec n,\epsilon}_{p_l}\}&\equiv&\{\pi_{v_{\vec n,\epsilon}}, \breve H^{\vec n,\epsilon}_{|1}, \tilde H^{\vec n,\epsilon}_{\_1}\}. \end{eqnarray} In this way, our previous configuration variables $X^{\vec n,\epsilon}_{q_l}$ and the new ones $V^{\vec n,\epsilon}_{q_l}$ are related by means of a contact transformation. \subsection{Redefinition of the MS momenta} \label{MSmomentum} In Sec. \ref{sec:hybrid} we will carry out a Fock quantization of the perturbations, and in particular of the MS gauge-invariant field. 
If one reduces the system classically so that it gets described by QFT in a curved background, the resulting MS field satisfies the Klein-Gordon equation of a massive scalar field (with time-dependent mass) propagating in an ultrastatic spacetime with compact spatial topology. Remarkably, a series of studies on the uniqueness of the Fock representation for Klein-Gordon fields of this type \cite{uniqueness1,uniqueness2,uniquenessother} proves that the use of the MS field to describe the perturbations allows for a unitary implementation of the quantum dynamics of the field, and that any other parameterization for the perturbations that includes a time-dependent rescaling of the field (as the one employed in Ref. \cite{dressed}) prevents the dynamics from being unitarily implementable in the quantum theory \cite{uniqueperturb}. The unitary implementation of the dynamics is possible in a class of (unitarily) equivalent Fock representations with vacua that are invariant under the isometries of the three-torus. Moreover, these results require a specific choice of momentum for the MS variable among all the canonical pairs, namely the evolution can be implemented unitarily as long as the MS modes satisfy the equation of motion \begin{align}\label{eqhamipi} \dot v_{\vec n,\epsilon}=\pi_{v_{\vec n,\epsilon}} \end{align} at the considered perturbative order. Here the dot denotes, as usual, the derivative with respect to the time coordinate $t$.
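The role of condition \eqref{eqhamipi} can be illustrated with a toy quadratic Hamiltonian for a single mode (a schematic example of ours, not taken from the cited references): the Hamilton equation $\dot v=\partial H/\partial\pi_v$ reduces to \eqref{eqhamipi} precisely when $H$ contains no term linear in $\pi_v$ and the quadratic term is suitably normalized, and any linear cross term can be absorbed by shifting the momentum by a multiple of the configuration variable.

```python
import sympy as sp

# Toy single-mode Hamiltonian (schematic, not from the references):
# H = (A*pi^2 + 2*C*v*pi + B*v^2)/2, with background-dependent A, B, C.
v, piv, pib, A, B, C = sp.symbols('v pi_v pi_breve A B C')
H = (A*piv**2 + 2*C*v*piv + B*v**2)/2

# Hamilton equation: vdot = {v, H} = dH/dpi
vdot = sp.diff(H, piv)

# vdot == pi requires A = 1 and C = 0, i.e. no term linear in the momentum
cond = sp.simplify(vdot.subs({A: 1, C: 0}) - piv)

# A linear cross term is absorbed by the shift pi -> pib = pi + C*v
Hshift = sp.expand(H.subs({A: 1}).subs({piv: pib - C*v}))
residual = sp.simplify(Hshift - (pib**2 + (B - C**2)*v**2)/2)
print(cond, residual)
```

This shift by a function of the configuration variable is exactly the kind of momentum redefinition discussed next for the MS field.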
Since the evolution equations are generated by the Hamiltonian of the system and the MS variables commute with the perturbative constraints, it is not difficult to convince oneself that the above condition on the choice of momenta is achieved only if the zero-mode of the Hamiltonian constraint, which contains quadratic contributions of the perturbations, does not include any linear term in the momenta conjugate to the MS variables.\footnote{Actually, the condition happens to be not only necessary, but also sufficient.} We want to adhere to the above parameterization of the MS field, and therefore choose its conjugate variable so as to eliminate all terms that are linear in the MS field momentum from the Hamiltonian constraint. With that aim, we need to modify the momentum mode $\pi_{v_{\vec n,\epsilon}}$ by taking advantage of the freedom to add to it a linear contribution of the MS configuration variable $v_{\vec n,\epsilon}$. Thus, we introduce a change of the form \begin{align} \pi_{v_{\vec n,\epsilon}}\; \rightarrow \; \breve \pi_{v_{\vec n,\epsilon}}=\pi_{v_{\vec n,\epsilon}}+ Fv_{\vec n,\epsilon}, \end{align} where $F$ is a function of the homogeneous variables $(\alpha,\pi_\alpha, \varphi,\pi_\varphi)$. We now have to determine this function by appealing to condition \eqref{eqhamipi}. Notice that, by construction, the new set $\{\breve V^{\vec n,\epsilon}_{p_l}\}\equiv\{\breve\pi_{v_{\vec n,\epsilon}}, \breve H^{\vec n,\epsilon}_{|1}, \tilde H^{\vec n,\epsilon}_{\_1}\}$ still provides a canonical set together with $\{V^{\vec n,\epsilon}_{q_l}\}$. In order to find the explicit expression of $F$ we can proceed in a simple way as follows. For a moment, we go to the longitudinal gauge, which is picked out by the pair of conditions $ b_{\vec n,\epsilon}=0$ and $\pi_{a_{\vec n,\epsilon}}=\pi_\alpha a_{\vec n,\epsilon}+3\pi_\varphi f_{\vec n,\epsilon}$.
In this gauge, we match the resulting MS momentum with the result that was obtained (precisely in that gauge) in Ref. \cite{hybridFRWflatMS}. This procedure turns out to provide a unique and well-determined answer, showing the consistency of our calculations. Moreover, the function $F$ is actually mode independent (hence our notation for it). Its form is \begin{align} F=-e^{-2\alpha}\bigg(\frac1{\pi_\varphi} e^{6\alpha}\tilde{m}^2 \varphi+\pi_\alpha+3\frac{\pi_\varphi^2}{\pi_\alpha} \bigg). \end{align} Thus, the modes of the new MS momentum are \begin{eqnarray} \breve \pi_{v_{\vec n,\epsilon}}&=&e^{-\alpha}\bigg[\pi_{f_{\vec n,\epsilon}}+\frac1{\pi_\varphi}\Big(e^{6\alpha}\tilde{m}^2 \varphi f_{\vec n,\epsilon}+3\pi_\varphi^2 b_{\vec n,\epsilon} \Big)\bigg] \nonumber\\ &-&e^{-2\alpha}\bigg(\frac1{\pi_\varphi} e^{6\alpha}\tilde{m}^2 \varphi+\pi_\alpha+3\frac{\pi_\varphi^2}{\pi_\alpha} \bigg)v_{\vec n,\epsilon}. \end{eqnarray} In order to simplify the notation, in the following we will again use the symbol $\pi_{v_{\vec n,\epsilon}}$ to denote these redefined momentum modes and $ V^{\vec n,\epsilon}_{p_l}$ for the corresponding set of new momenta. \subsection{Inversion of the canonical transformation} For completeness, we finish this section by giving explicitly the expression of the original perturbative variables $\{X^{\vec n,\epsilon}_{l}\}\equiv \{X^{\vec n,\epsilon}_{q_l}, X^{\vec n,\epsilon}_{p_l}\}$ in terms of the new ones $\{V^{\vec n,\epsilon}_{l}\}\equiv \{V^{\vec n,\epsilon}_{q_l}, V^{\vec n,\epsilon}_{p_l}\}$.
The result is \begin{subequations}\label{eq:transf} \begin{align} a_{\vec n,\epsilon}&=-\pi_\alpha V^{\vec n,\epsilon}_{q_2}-\frac13V^{\vec n,\epsilon}_{q_3},\label{a}\\ b_{\vec n,\epsilon}&=\frac13V^{\vec n,\epsilon}_{q_3},\\ f_{\vec n,\epsilon}&=e^{-\alpha} V^{\vec n,\epsilon}_{q_1}+\pi_\varphi V^{\vec n,\epsilon}_{q_2},\\ \pi_{a_{\vec n,\epsilon}}&=-\frac1{\pi_\alpha} V^{\vec n,\epsilon}_{p_2}+\frac{\pi_\varphi}{\pi_\alpha} e^\alpha V^{\vec n,\epsilon}_{p_1}+\frac{e^{-\alpha} }{\pi_\alpha}\bigg( e^{6\alpha} \tilde{m}^2 \varphi +\pi_\varphi \pi_\alpha+ 3 \frac{\pi_\varphi^3}{\pi_\alpha} \bigg) V^{\vec n,\epsilon}_{q_1}\nonumber\\ &+\bigg(3\pi_\varphi^2+\frac13 e^{4\alpha} \omega_n^2 -\pi_\alpha^2\bigg) V^{\vec n,\epsilon}_{q_2}-\frac{1}{3} \pi_\alpha V^{\vec n,\epsilon}_{q_3},\\ \pi_{b_{\vec n,\epsilon}}&=3V^{\vec n,\epsilon}_{p_3}-\frac1{\pi_\alpha} V^{\vec n,\epsilon}_{p_2}+\frac{\pi_\varphi}{\pi_\alpha} e^\alpha V^{\vec n,\epsilon}_{p_1}+\frac{e^{-\alpha} }{\pi_\alpha}\bigg( e^{6\alpha} \tilde{m}^2 \varphi -2\pi_\varphi \pi_\alpha+ 3 \frac{\pi_\varphi^3}{\pi_\alpha} \bigg) V^{\vec n,\epsilon}_{q_1}\nonumber\\ &+\frac13 e^{4\alpha} \omega_n^2 V^{\vec n,\epsilon}_{q_2}-\frac{4}{3} \pi_\alpha V^{\vec n,\epsilon}_{q_3},\\ \pi_{f_{\vec n,\epsilon}}&=e^{\alpha} V^{\vec n,\epsilon}_{p_1}+e^{-\alpha}\bigg( \pi_\alpha+3 \frac{\pi_\varphi^2}{\pi_\alpha} \bigg) V^{\vec n,\epsilon}_{q_1}-e^{6\alpha} \tilde{m}^2 \varphi V^{\vec n,\epsilon}_{q_2}-\pi_\varphi V^{\vec n,\epsilon}_{q_3}. \end{align} \end{subequations} \section{Full canonical set in terms of gauge-invariant variables} \label{sec:complete} Recall that in the previous section we regarded the zero-modes, described by the variables of the unperturbed FRW model $\{w^a_q\}\equiv \{\alpha, \varphi\}$ and $\{w^a_p\}\equiv \{\pi_\alpha,\pi_\varphi\}$ ($a=1,2$), as corresponding to a fixed background. Now, we proceed to complete our canonical transformation in the entire system, including these zero-mode variables. 
In this way we will succeed in describing our perturbed system in terms of gauge invariants, without any gauge fixing, while keeping its full canonical structure. \subsection{Canonical transformation of the zero-modes} The action of our system (truncated at quadratic order in perturbations) in terms of the original parameterization of the inhomogeneous sector has the form \begin{align} S=\int \text{d}t \bigg[\sum_{a} \dot w^a_q w^a_p+\sum_{l,\vec n,\epsilon} \dot X^{\vec n,\epsilon}_{q_l} X^{\vec n,\epsilon}_{p_l} + H(w^a, X^{\vec n,\epsilon}_l)\bigg], \end{align} possibly up to surface terms which are not relevant for our discussion. Here, $H(w^a, X^{\vec n,\epsilon}_l)$ is the total Hamiltonian expressed in terms of the original variables employed in Sec. \ref{sec:system}, $\{w^a\}\equiv \{w^a_q,w^a_p\}$ and $\{X^{\vec n,\epsilon}_{l}\}\equiv \{X^{\vec n,\epsilon}_{q_l}, X^{\vec n,\epsilon}_{p_l}\}$, as in Eq. \eqref{eq:Hamiltonian2}. Let us focus on the Legendre term that contains the information about the symplectic structure of the full system, that we call $W$: \begin{align}\label{eq:W1} W=\int \text{d}t \bigg[ \sum_{a} \dot w^a_q w^a_p+\sum_{l,\vec n,\epsilon} \dot X^{\vec n,\epsilon}_{q_l} X^{\vec n,\epsilon}_{p_l}\bigg]. \end{align} Our goal is to find a canonical transformation relating the previous variables with new zero-modes $\tilde w^a\equiv (\tilde w^a_q,\tilde w^a_p)$ such that $W$ retains its canonical form when expressed in terms of the gauge invariants for the perturbations, namely \begin{align}\label{eq:W2} W=\int \text{d}t \bigg[ \sum_{a} \dot {\tilde w}^a_q \tilde w^a_p+\sum_{l,\vec n,\epsilon} \dot V^{\vec n,\epsilon}_{q_l} V^{\vec n,\epsilon}_{p_l} \bigg]. 
\end{align} Instead of reproducing all the details of the lengthy calculation that allows one to deduce the form of the desired transformation (and which is essentially based on the consideration of our change of perturbative variables as a canonical transformation in the inhomogeneous sector that depends on a series of time-dependent {\it{external}} variables, describing the homogeneous degrees of freedom), we will only sketch the main steps in the derivation. First, we notice that the relations $ X^{\vec n,\epsilon}_{l}= X^{\vec n,\epsilon}_{l}( V^{\vec n,\epsilon}_{m})$ given in Eqs. \eqref{eq:transf} do not mix different modes and are linear. Therefore, the original variables $X^{\vec n,\epsilon}_{l}$ can be expressed in the following way \begin{align}\label{eq:chain} X^{\vec n,\epsilon}_{l}=\sum_m \bigg(\frac{\partial X^{\vec n,\epsilon}_{l}}{\partial V^{\vec n,\epsilon}_{q_m}}V^{\vec n,\epsilon}_{q_m}+\frac{\partial X^{\vec n,\epsilon}_{l}}{\partial V^{\vec n,\epsilon}_{p_m}}V^{\vec n,\epsilon}_{p_m}\bigg), \end{align} where the partial derivatives are functions of the zero-modes $w^a$ and the frequency $\omega_n$ only (and $m=1,2,3$). Taking this relation into account in the derivatives with respect to time that appear in $W$, performing several time integrations by parts, disregarding total time derivatives which contribute at most surface terms at initial and final times (assuming a well-posed variational principle for the system), and truncating up to quadratic order in perturbations, as well as recalling that $\{V^{\vec n,\epsilon}_{l}\}$ is a canonical set as long as one ignores the homogeneous sector, one can obtain the expression of the new canonical homogeneous variables $\tilde w ^a$ as functions of the old ones $ w ^a$ and of the new perturbative variables $V^{\vec n,\epsilon}_{l}$ (or of $X^{\vec n,\epsilon}_{l}$ if preferred).
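Both properties invoked here, the linearity of the mode transformation \eqref{eq:transf} with coefficients that depend only on the zero-modes and $\omega_n$, and its canonicity with respect to the perturbative Poisson bracket, can be verified symbolically. The following SymPy sketch (our own cross-check, written for a single mode and using the redefined MS momentum of the previous section) also confirms that the expressions of Eqs. \eqref{eq:transf} invert the transformation.

```python
import sympy as sp

al, pal, ph, pph, mt, wn = sp.symbols(
    'alpha pi_alpha varphi pi_varphi mtilde omega_n')
a, b, f, pia, pib, pif = sp.symbols('a b f pi_a pi_b pi_f')
X = sp.Matrix([a, b, f, pia, pib, pif])
e = sp.exp

# Forward map for one mode: MS pair, abelianized constraints, their conjugates
v   = e(al)*(f + (pph/pal)*(a + b))
C1  = -(a + b)/pal
C3  = 3*b
F   = -e(-2*al)*(e(6*al)*mt**2*ph/pph + pal + 3*pph**2/pal)
piv = e(-al)*(pif + (e(6*al)*mt**2*ph*f + 3*pph**2*b)/pph) + F*v  # redefined
Hb1 = (-pal*pia + pph*pif + (pal**2 - 3*pph**2 - wn**2*e(4*al)/3)*a
       - wn**2*e(4*al)/3*b + e(6*al)*mt**2*ph*f)
Hm1 = (-pia + pib + pal*(a + 4*b) + 3*pph*f)/3
V = sp.Matrix([v, C1, C3, piv, Hb1, Hm1])

# (i) Linearity: the Jacobian involves only zero-modes and omega_n
M = V.jacobian(X)
lin_ok = not (M.free_symbols & set(X))

# (ii) Canonicity: M is a symplectic matrix for the perturbative bracket
Omega = sp.zeros(6)
for i in range(3):
    Omega[i, i + 3], Omega[i + 3, i] = 1, -1
symp_ok = (M*Omega*M.T - Omega).applyfunc(sp.simplify) == sp.zeros(6)

# (iii) Inversion: substituting the inverse expressions recovers the V's
Vq1, Vq2, Vq3, Vp1, Vp2, Vp3 = sp.symbols('Vq1 Vq2 Vq3 Vp1 Vp2 Vp3')
inverse = [
    -pal*Vq2 - Vq3/3,                                   # a
    Vq3/3,                                              # b
    e(-al)*Vq1 + pph*Vq2,                               # f
    (-Vp2/pal + (pph/pal)*e(al)*Vp1                     # pi_a
     + (e(-al)/pal)*(e(6*al)*mt**2*ph + pph*pal + 3*pph**3/pal)*Vq1
     + (3*pph**2 + e(4*al)*wn**2/3 - pal**2)*Vq2 - pal*Vq3/3),
    (3*Vp3 - Vp2/pal + (pph/pal)*e(al)*Vp1              # pi_b
     + (e(-al)/pal)*(e(6*al)*mt**2*ph - 2*pph*pal + 3*pph**3/pal)*Vq1
     + e(4*al)*wn**2/3*Vq2 - sp.Rational(4, 3)*pal*Vq3),
    (e(al)*Vp1 + e(-al)*(pal + 3*pph**2/pal)*Vq1        # pi_f
     - e(6*al)*mt**2*ph*Vq2 - pph*Vq3),
]
sub = dict(zip(list(X), inverse))
Vnew = sp.Matrix([Vq1, Vq2, Vq3, Vp1, Vp2, Vp3])
inv_ok = (V.subs(sub) - Vnew).applyfunc(sp.simplify) == sp.zeros(6, 1)
print(lin_ok, symp_ok, inv_ok)
```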
In this way, one gets new zero-mode variables which are given by the original ones plus some additional contributions that are quadratic in the perturbations. This result can be inverted at the considered perturbative order, allowing one to express the original zero-mode variables as functions of the new phase-space variables for the entire system (zero-modes plus inhomogeneities). With this procedure, one arrives at \begin{subequations}\label{eq:transf-background} \begin{align}\label{homochange1} w^a_q&= \tilde w^a_q - \frac12 \sum_{l,\vec n,\epsilon} \bigg[X^{\vec n,\epsilon}_{q_l} \frac{\partial X^{\vec n,\epsilon}_{p_l}}{\partial \tilde w^a_p}- \frac{\partial X^{\vec n,\epsilon}_{q_l}}{\partial \tilde w^a_p} X^{\vec n,\epsilon}_{p_l}\bigg],\\ \label{homochange2} w^a_p&= \tilde w^a_p + \frac12 \sum_{l,\vec n,\epsilon} \bigg[X^{\vec n,\epsilon}_{q_l} \frac{\partial X^{\vec n,\epsilon}_{p_l}}{\partial \tilde w^a_q}- \frac{\partial X^{\vec n,\epsilon}_{q_l}}{\partial \tilde w^a_q}X^{\vec n,\epsilon}_{p_l}\bigg]. \end{align} \end{subequations} In these expressions we have to understand the original variables $X^{\vec n,\epsilon}_{l}$ for the perturbations as functions of the new ones $V^{\vec n,\epsilon}_{l}$, as given by Eqs. \eqref{eq:transf}, with $\{w^a\}=\{\alpha,\pi_\alpha, \varphi,\pi_\varphi\}$ replaced with $\{\tilde w^a\}=\{\tilde\alpha,\tilde\pi_\alpha, \tilde\varphi,\tilde\pi_\varphi\}$ in those equations. Note that this replacement is consistent within our truncations, since old and new zero-modes differ in terms that are quadratic in perturbations. In this sense, let us also clarify that the partial derivatives in the above identities must be taken keeping $V^{\vec n,\epsilon}_{l}$ constant. In Appendix \ref{app} we give the explicit expressions of the zero-modes $w^a$ in terms of the new phase-space variables $\tilde w^a$ and $V^{\vec n,\epsilon}_{l}$.
\subsection{Hamiltonian in terms of gauge-invariant variables} Once we know the canonical transformation relating the original phase-space parameterization, $\{w^a, X^{\vec n,\epsilon}_{l}\}$, with the new one that uses gauge invariants for the perturbations, $\{\tilde w^a, V^{\vec n,\epsilon}_{l}\}$, we can obtain the form of the Hamiltonian in the new formulation. In particular, let us consider the zero-mode of the Hamiltonian constraint. In the original variables, this zero-mode is given in Eq. \eqref{eq:Hamiltonian2} by the sum \begin{equation} H_{|0}(w^a)+ \sum_{\vec n,\epsilon} H^{\vec n,\epsilon}_{|2} \big(w^a, X^{\vec n,\epsilon}_{l}\big), \end{equation} which contains the contribution of the homogeneous FRW model and the correction quadratic in perturbations. Since the difference between the original and new variables for the homogeneous sector is precisely of quadratic order in the perturbations, the substitution of the relations between the old and new variables for our system leads to the following new expression for the zero-mode of the Hamiltonian constraint: \begin{equation}\label{newzeroham} H_{|0}(\tilde{w}^a)+ \sum_b \big(w^b-\tilde{w}^b\big) \frac{\partial H_{|0}}{\partial \tilde{w}^b}(\tilde{w}^a) + \sum_{\vec n,\epsilon} H^{\vec n,\epsilon}_{|2} \big[\tilde{w}^a, X^{\vec n,\epsilon}_{l}(\tilde{w}^a, V^{\vec n,\epsilon}_{l}) \big], \end{equation} where the difference $w^a-\tilde{w}^a$ must be regarded as a function of the new phase-space variables, given by the last terms in the two expressions \eqref{homochange1} and \eqref{homochange2}. Note that this difference is a sum of independent contributions of each of the inhomogeneous modes, so that we can write \begin{equation} w^a-\tilde{w}^a \equiv \sum_{\vec n,\epsilon} \Delta \tilde{w}^a_{\vec n,\epsilon}. \end{equation} Besides, evaluation of $H_{|0}$ and $H^{\vec n,\epsilon}_{|2} $ at the new zero-mode variables $\tilde{w}^a$ is done in Eq. 
\eqref{newzeroham} by simply replacing the original phase-space coordinates $w^a$ directly with these new ones in the functional dependence of the considered contributions to the constraint. To derive the above expression, it suffices to expand $H_{|0}$ in series around the new zero-mode variables and recall that we are truncating the total Hamiltonian at quadratic order in the perturbations. From these considerations, we see that the quadratic contribution of the perturbations to the zero-mode of the Hamiltonian constraint in our new variables takes the form $\sum_{\vec n,\epsilon} \tilde{H}^{\vec n,\epsilon}_{|2}$, with \begin{equation} \tilde{H}^{\vec n,\epsilon}_{|2} = H^{\vec n,\epsilon}_{|2} + \sum_a \Delta \tilde{w}^a_{\vec n,\epsilon}\frac{ \partial H_{|0} }{ \partial \tilde{w}^a } . \end{equation} Indeed, this is the form that one would expect if one regarded the change of variables for the perturbations as a canonical transformation that depends on a series of time-dependent {\it{external}} variables (namely, the zero-modes) with dynamics governed by the Hamiltonian $H_{|0}$ \cite{Langlois}. If one carries out the calculation explicitly, one gets the following quadratic contribution to the constraint of our system: \begin{align}\label{eq:newH2} \tilde{H}^{\vec n,\epsilon}_{|2}=\breve H^{\vec n,\epsilon}_{|2} + F_{|2}^{\vec n,\epsilon} H_{|0}+F_{|1}^{\vec n,\epsilon} V^{\vec n,\epsilon}_{p_2}+\bigg(F_{\_1}^{\vec n,\epsilon}-3 \frac{e^{-3{\tilde\alpha}}}{\pi_{\tilde\alpha}} V^{\vec n,\epsilon}_{p_2} + \frac92e^{-3{\tilde\alpha}} V^{\vec n,\epsilon}_{p_3}\bigg)V^{\vec n,\epsilon}_{p_3}. 
\end{align} In this expression we have defined \begin{subequations} \begin{align}\label{eq:H2MS} \breve H^{\vec n,\epsilon}_{|2}&=\frac{e^{-{\tilde\alpha}}}{2}\bigg[ \omega_n^2 + e^{-4{\tilde\alpha}}\pi_{\tilde\alpha}^2+\tilde{m}^2 e^{2{\tilde\alpha}}\left(1+15{\tilde\varphi}^2-12{\tilde\varphi} \frac{\pi_{\tilde\varphi}}{\pi_{\tilde\alpha}}-18 e^{6{\tilde\alpha}}\tilde{m}^2 \frac{{\tilde\varphi}^4}{\pi_{\tilde\alpha}^2} \right)\bigg] (V^{\vec n,\epsilon}_{q_1})^2 \nonumber\\ &+\frac{e^{-{\tilde\alpha}}}{2} (V^{\vec n,\epsilon}_{p_1})^2,\\ F_{|2}^{\vec n,\epsilon}&= \frac32 e^{-2{\tilde\alpha}}\bigg( 1- 9 \frac{\pi_{\tilde\varphi}^2}{\pi_{\tilde\alpha}^2}+12 e^{6{\tilde\alpha}}\tilde{m}^2 \frac{{\tilde\varphi}^2}{\pi_{\tilde\alpha}^2}\bigg) (V^{\vec n,\epsilon}_{q_1})^2 + \frac12 e^{-2{\tilde\alpha}}\left( e^{4{\tilde\alpha}}\omega_n^2 -9\pi_{\tilde\varphi}^2 \right)(V^{\vec n,\epsilon}_{q_2})^2\nonumber\\ &-3 \frac{e^{-2{\tilde\alpha}}}{\pi_{\tilde\alpha}^2}\bigg[\left(\pi_{\tilde\alpha}^2 +3\pi_{\tilde\varphi}^2\right)\pi_{\tilde\varphi}+e^{6{\tilde\alpha}}\tilde{m}^2 {\tilde\varphi} \pi_{\tilde\alpha}\bigg]V^{\vec n,\epsilon}_{q_1}V^{\vec n,\epsilon}_{q_2}-3\frac{\pi_{\tilde\varphi}}{\pi_{\tilde\alpha}}V^{\vec n,\epsilon}_{p_1}V^{\vec n,\epsilon}_{q_2},\\ F_{|1}^{\vec n,\epsilon}&=\frac12 \frac{e^{-4{\tilde\alpha}}}{\pi_{\tilde\alpha}}\bigg[6 \pi_{\tilde\varphi} V^{\vec n,\epsilon}_{q_1}+\left(6 e^{3{\tilde\alpha}} H_{|0} +5 \pi_{\tilde\alpha}^2\right)V^{\vec n,\epsilon}_{q_2}\bigg],\\ F_{\_1}^{\vec n,\epsilon}&=3 \frac{e^{-4{\tilde\alpha}}}{\pi_{\tilde\alpha}^2}\left( e^{6{\tilde\alpha}}\tilde{m}^2 {\tilde\varphi} \pi_{\tilde\alpha} +3\pi_{\tilde\varphi}^3 -2 \pi_{\tilde\alpha}^2\pi_{\tilde\varphi} \right)V^{\vec n,\epsilon}_{q_1}+ \omega_n^2 V^{\vec n,\epsilon}_{q_2}-\frac32 e^{-6{\tilde\alpha}}\pi_{\tilde\alpha} V^{\vec n,\epsilon}_{q_3}\nonumber\\ &-3 e^{-2{\tilde\alpha}}\frac{\pi_{\tilde\varphi}}{\pi_{\tilde\alpha}}V^{\vec n,\epsilon}_{p_1}. 
\end{align} \end{subequations} In the total Hamiltonian, we can account for the term $\sum_{\vec n, \epsilon} F_{|2}^{\vec n,\epsilon} H_{|0}$ at the considered perturbative order by means of a new redefinition of the zero-mode of the lapse function, with the new one given by $\bar N_0=\breve N_0 (1 +\sum_{\vec n,\epsilon} F_{|2}^{\vec n,\epsilon})$ [see Eq. \eqref{N0re}]. The other terms in Eq. \eqref{eq:newH2} different from $\breve H^{\vec n,\epsilon}_{|2}$ contribute to the perturbative linear constraints, and their presence amounts to a redefinition of the corresponding Lagrange multipliers, which we now denote as $G_{\vec n,\epsilon}$ and $K_{\vec n,\epsilon}$. In summary, we obtain at our quadratic order a total Hamiltonian of the form \begin{equation}\label{eq:Hamiltonian3} H =\bar N_0\Big[ H_{|0}+\sum_{\vec n,\epsilon} \breve H^{\vec n,\epsilon}_{|2}\Big]+\sum_{\vec n,\epsilon} G_{\vec n,\epsilon} V^{\vec n,\epsilon}_{p_2}+ \sum_{\vec n,\epsilon} K_{\vec n,\epsilon} V^{\vec n,\epsilon}_{p_3} . \end{equation} In Appendix \ref{app} we give the explicit expressions of the original lapse function $N$ and shift co-vector $N_i$ in terms of the new phase-space variables $\{\tilde w^a, V^{\vec n,\epsilon}_{l}\}$ and Lagrange multipliers $\{\bar N_0, G_{\vec n,\epsilon}, K_{\vec n,\epsilon}\}$. As we see, the terms $\breve H^{\vec n,\epsilon}_{|2}$ finally provide the contributions quadratic in perturbations to the zero-mode of the Hamiltonian constraint in the constructed gauge-invariant formulation. As expected, these terms are precisely those that depend exclusively on the MS variables and their momenta, and hence they are gauge-invariant quantities. For obvious reasons, we call their sum the MS Hamiltonian. Moreover, as we had anticipated, these terms contain no linear contribution of the momenta of the MS variables.
In fact, the only contribution from these momenta is quadratic, and its mode-independent coefficient is constant up to a power of the scale factor. We notice as well that the expression given for $\breve H^{\vec n,\epsilon}_{|2}$ is linear in the momentum of the zero-mode of the scalar field, $\pi_{\tilde\varphi}$. This linear expression has been obtained by taking advantage of the identity $\pi_{\varphi}^2=2 H_{|0} e^{3{\alpha}} +\pi_{\alpha}^2-e^{6{\alpha}}\tilde{m}^2 {\varphi}^2$, which can be used in Eq. \eqref{eq:newH2} at the cost of redefining the zero-mode of the lapse function at the considered order in perturbation theory. Thanks to this, in Sec. \ref{sec:hybrid} we will be able to interpret the zero-mode of the Hamiltonian constraint in a certain approximation as a Schr\"odinger-like equation that dictates the quantum evolution of the inhomogeneities in some family of states. \section{Hybrid quantization and Born-Oppenheimer ansatz} \label{sec:hybrid} We can now proceed to quantize the symplectic manifold which describes our cosmological system, and to impose the classical constraints à la Dirac, i.e., as operators that annihilate physical states in the quantum theory. We recall that, in our classical formulation, we have split the phase space into homogeneous and inhomogeneous sectors, starting with the expansion in modes given by the eigenfunctions of the Laplace-Beltrami operator on the spatial sections. The homogeneous sector describes the zero-modes, and its degrees of freedom can be parameterized by the canonical variables $\{\tilde{w}^a\}=\{\tilde\alpha,\pi_{\tilde\alpha},\tilde\varphi,\pi_{\tilde\varphi}\}$. On the other hand, the inhomogeneous sector describes all the non-zero modes of the perturbations, and can be parameterized by $\{V^{\vec n,\epsilon}_l\}=\{ V^{\vec n,\epsilon}_{q_l}, V^{\vec n,\epsilon}_{p_l}\}$, a set specifically selected to make manifest the covariance of the formulation at the considered perturbative level.
In this way we have prepared the classical system to now carry out a (generalized) hybrid quantization, in which one adopts specific quantizations for both the homogeneous and inhomogeneous parts. On the tensor product of the corresponding representation spaces, one imposes the constraints quantum mechanically, so that the treatment of the total system is not at all trivial. In fact, it seems natural to assume that there exists some regime of the quantum dynamics, in between a fully quantum gravity regime and the regime where QFT in fixed curved backgrounds makes sense, in which the most relevant quantum effects of the geometry are those affecting the zero-modes, and that perturbations, while still having a quantum description, can be represented in more standard ways, e.g., as in QFT in curved spacetimes. Hence, we can adhere to a hybrid quantization based on this hypothesis, adopting a Fock quantization for the MS gauge invariants (and possibly for the rest of variables for the perturbations) and, in principle, a different type of quantization, of quantum gravity nature, for the homogeneous sector. Note that this hybrid approach is different from treating the perturbations as a test field propagating in a quantum background, which is the viewpoint adopted in the {\it{dressed metric}} proposal in Ref. \cite{dressed}. Rather, the inhomogeneities and the homogeneous sector, even if represented with different quantization methods, form a symplectic manifold that is quantized as a whole. The Hamiltonian constraint of the system affects both sectors, encoding the backreaction of the perturbations on the homogeneous background insofar as it is kept in the quadratic truncation of the action. In this section we will maintain a general discussion without specifying the concrete quantization of the homogeneous sector. Moreover, most of the details are easily extensible to other quantizations of our MS variables different from the Fock (or QFT-like) quantization that we adopt here.
We will simply assume that the quantization of the zero-mode variables provides a representation for the canonical commutation relations such that the operators for the homogeneous FRW geometry commute with those representing the homogeneous sector of the matter field, and in turn all of them commute with the elementary operators representing the variables for the inhomogeneities. We will denote by $\mathcal{H}_\text{kin}^\text{grav}\otimes \mathcal{H}_\text{kin}^\text{matt}$ the kinematical Hilbert space for the homogeneous sector, such that the operators representing the homogeneous FRW geometry are defined on $\mathcal{H}_\text{kin}^\text{grav}$, while the operators for the homogeneous matter sector are defined on $\mathcal{H}_\text{kin}^\text{matt}$. Finally, concerning the quantization of the homogeneous matter sector, we will also assume for simplicity that the operators that represent functions of $\tilde\varphi$ act by mere multiplication, while the operator representing $\pi_{\tilde\varphi}$ acts as a generalized derivative. In previous applications of this hybrid quantization strategy \cite{hybridFRW,hybridFRWflat,hybridFRWflatMS,hybridgowdy}, the polymeric representation of LQC was adopted for the background geometry, together with a standard Schr\"odinger representation for the zero-mode $\tilde\varphi$ of the matter field. Although we will briefly explain the particularization of our study to such a representation in the next section, we intend to present here a much more general discussion, by no means restricted to the framework of LQC. Therefore, our analysis generalizes the quantum treatment presented in Ref. \cite{hybridFRWflatMS} in two important ways: a) instead of reducing the system classically, we carry the linear perturbative constraints to the quantum theory, with no gauge fixing; and b) we do not adopt here any specific representation for the homogeneous sector, and in particular for the FRW geometry.
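The assumed representation of the homogeneous matter sector (functions of $\tilde\varphi$ acting by multiplication and $\pi_{\tilde\varphi}$ acting as a generalized derivative) reproduces, in its simplest Schr\"odinger-type realization, the canonical commutation relation with $\hbar=1$. A minimal symbolic check of this statement (illustrative only; the variable names are ours):

```python
import sympy as sp

phi = sp.Symbol('phi', real=True)
f = sp.Function('f')(phi)  # arbitrary test wave function

# pi acts as the derivative -i d/dphi; functions of phi act by multiplication
pi_op = lambda psi: -sp.I * sp.diff(psi, phi)
phi_op = lambda psi: phi * psi

# [phi, pi] f = i f, the canonical commutation relation with hbar = 1
commutator = phi_op(pi_op(f)) - pi_op(phi_op(f))
assert sp.simplify(commutator - sp.I * f) == 0
```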
In this sense, the formulation can be adapted to any quantization proposal in FRW cosmology. The physics resulting from each specific proposal might be discussed and compared afterwards. \subsection{Fock representation for the perturbations} For the inhomogeneous modes describing the MS gauge invariants, we adopt here a Fock representation similar to that of Refs. \cite{hybridFRWflat,hybridFRWflatMS}. Remarkably, this quantization is selected by the criteria of: a) invariance of the vacuum under the spatial isometries, and b) unitary implementability of the dynamics in the regime in which one recovers a QFT in a curved background for the MS field (in any finite time interval) \cite{uniqueness1,uniqueness2}. As mentioned in Subsec. \ref{MSmomentum}, these criteria remove the ambiguity in a possible rescaling of the MS modes by functions of the homogeneous variables, and they assure the unitary implementability of the evolution and a standard quantum mechanical interpretation in the regions where a QFT in a (generally effective) background is recovered. Actually, these results require the spatial sections to be compact; that is why we choose our flat model to have three-torus topology. More concretely, the above criteria pick out not just a Fock representation, but a family of them, all unitarily equivalent \cite{uniqueness2}. In particular, this family contains the representation in which the annihilation-like variables, $a_{\vec n,\epsilon}$, and the creation-like variables, $a^{\dagger}_{\vec n,\epsilon}$, are those naturally associated with harmonic oscillators of frequency $\omega_n$. A more specific choice of Fock representation within the privileged family selected by those criteria can be made on the basis of further requirements, imposed not just on operators constructed from exponentials of the MS variables and their momenta (namely, the Weyl algebra), including linear operators, but also on other physically relevant operators, such as
the field Hamiltonian. It seems natural to demand that the MS Hamiltonian be well defined quantum mechanically (and be essentially self-adjoint). Additional conditions related to regularization may be important, although one would expect that regularization schemes should be a direct consequence of the quantization of the system, since this includes now the background, rather than having to import them from QFT, where one usually appeals to them claiming the absence of a quantization of the curved spacetime. Assume then that we take a particular Fock representation for the MS modes in the family picked out by the aforementioned criteria of vacuum invariance and unitary dynamics in the standard QFT regime. Let us call $\hat{a}_{\vec n,\epsilon}$ and $\hat{a}^\dagger_{\vec n,\epsilon}$ the corresponding annihilation and creation operators, acting on the Fock space ${\mathcal F}$, such that $[\hat{a}_{\vec n,\epsilon},\hat{a}^\dagger_{\vec n',\epsilon'}]=\delta_{\vec n,\epsilon}^{\vec n',\epsilon'}$. A basis of states is provided by the occupancy-number states $|{\mathcal N} \rangle$, where ${\mathcal N}$ denotes an array of (non-negative integer) occupancy numbers, one for each mode. The creation operator acting on these states excites the mode labeled by $\vec n$ and $\epsilon$, increasing the corresponding occupancy number by one unit, as usual. \subsection{Quantum representation of the constraints} In our gauge-invariant formulation the classical constraints Poisson commute, and therefore they can be imposed quantum mechanically without obstruction as long as their quantum counterparts commute as well. We assume that our quantization satisfies this property and impose them independently. Let us start with the linear perturbative constraints, which classically are $V^{\vec n,\epsilon}_{p_{\tilde l}}=0$ for $\tilde{l}=2,3$.
These constraints are straightforward to impose by adopting a quantization, for the part of the inhomogeneous sector parameterized by $\{V^{\vec n,\epsilon}_{{\tilde l}}\}$ (again with $\tilde{l}=2,3$), such that the operators representing the momenta act as derivatives (or as translations, in integrated or discrete versions) with respect to the configuration variables $V^{\vec n,\epsilon}_{q_{\tilde l}}$. Then, the constraints amount to requiring that physical states are independent of those variables. Hence, after imposing these constraints, we can just restrict the discussion to a representation space of the form $\mathcal{H}=\mathcal{H}_\text{kin}^\text{grav}\otimes \mathcal{H}_\text{kin}^\text{matt}\otimes {\mathcal F}$ to study all physically relevant quantum states. Let us emphasize that the restriction to these states is quantum mechanical, and not a classical reduction. Besides, let us note that these states are not yet fully physical, since we must still impose the zero-mode of the Hamiltonian constraint. More challenging is the imposition of this global Hamiltonian constraint, for which we will only be able to provide approximate solutions. Let us first focus on its quantum representation. Classically this constraint is given by \begin{align}\label{dens} H_{|0}+ \breve H_{|2} \equiv e^{-3 {\tilde \alpha} }\tilde H=0, \end{align} with $H_{|0}$ defined as in Eq. \eqref{eq:H_0} but in terms of the background variables $(\tilde\alpha,\pi_{\tilde\alpha},\tilde\varphi,\pi_{\tilde\varphi})$, and $\breve H_{|2}=\sum_{\vec n,\epsilon} \breve H^{\vec n,\epsilon}_{|2}$ defined in Eq. \eqref{eq:H2MS}. In what follows we will focus on the densitized Hamiltonian constraint\footnote{We could also choose to work with the non-densitized constraint and carry out the densitization at the quantum level (see e.g. Ref. \cite{hybridFRWflatMS}).} $\tilde H=0$. Before continuing, let us introduce some convenient notation.
We first define \begin{equation} \mathcal H_0^{(2)}\equiv \pi_{\tilde\alpha}^2-e^{6\tilde\alpha}\tilde m^2 \tilde\varphi^2,\label{densH0} \end{equation} so that the contribution of the homogeneous sector to $\tilde H$ is $ e^{3 {\tilde \alpha} }H_{|0}= \big(\pi_{\tilde\varphi}^2-\mathcal H_0^{(2)}\big)/2$. In addition, we introduce the following functions of the zero-modes: \begin{subequations}\label{eq:geometry-operators} \begin{align} \vartheta_o&\equiv - 12 e^{4\tilde\alpha} \tilde m^2 \frac{\tilde\varphi}{\pi_{\tilde\alpha}}, \label{eq:geometry-operators1} \\ \!\vartheta_e &\equiv e^{2\tilde\alpha},\\ \!\vartheta_e^{q} &\equiv e^{-2\tilde\alpha} \pi_{\tilde\alpha}^2+\tilde{m}^2 e^{4{\tilde\alpha}}\left(1+15{\tilde\varphi}^2-18 e^{6{\tilde\alpha}}\tilde{m}^2 \frac{{\tilde\varphi}^4}{\pi_{\tilde\alpha}^2} \right) \nonumber\\ & = e^{-2\tilde\alpha} \mathcal H_0^{(2)} \left(19- 18 \frac{\mathcal H_0^{(2)}}{\pi_{\tilde\alpha}^2} \right)+\tilde{m}^2e^{4{\tilde\alpha}}\left(1-2{\tilde\varphi}^2 \right), \label{eq:geometry-operators3} \end{align} \end{subequations} and we call \begin{subequations}\label{eq:perturbation-operators} \begin{align} \Theta_o^{\vec n,\epsilon} &\equiv - \vartheta_o (V^{\vec n,\epsilon}_{q_1})^2,\\ \Theta_e^{\vec n,\epsilon} &\equiv -\left[(\vartheta_e \omega_n^2+\vartheta_e^q)(V^{\vec n,\epsilon}_{q_1})^2+ \vartheta_e (V^{\vec n,\epsilon}_{p_1})^2\right],\\ \Theta_o&\equiv \sum_{\vec n, \epsilon}\Theta_o^{\vec n,\epsilon},\qquad \Theta_e\equiv \sum_{\vec n, \epsilon}\Theta_e^{\vec n,\epsilon}. \end{align} \end{subequations} Then, $2 e^{3\tilde{\alpha}}\breve H_{|2} = -( \Theta_e+ \Theta_o \pi_{\tilde\varphi}).$ Let us assume that we can represent the quantities introduced in Eqs.
\eqref{densH0} and \eqref{eq:geometry-operators} as densely defined operators, $\hat{{\mathcal H}}_0^{(2)}$, $ \hat{\vartheta}_{o}$, $\hat{\vartheta}_{e}$, and $\hat{\vartheta}_{e}^q$, on $ \mathcal H_\mathrm{kin}^\mathrm{grav}\otimes\mathcal H_\mathrm{kin}^\mathrm{matt}$ (while acting as the identity on $\mathcal F$), and the objects given in Eqs. \eqref{eq:perturbation-operators} as densely defined operators, $\hat \Theta_o$ and $\hat \Theta_e$, on $ \mathcal H_\mathrm{kin}^\mathrm{grav}\otimes\mathcal H_\mathrm{kin}^\mathrm{matt}\otimes \mathcal F$. Then, we obtain the following operator representing $\tilde H$: \begin{align}\label{dens-constraint} \hat {\tilde H}=\frac12 \left[\hat\pi_{\tilde\varphi}^2-\hat{\mathcal H}_0^{(2)}-\hat{\Theta}_e-\left(\hat\Theta_o\hat\pi_{\tilde\varphi}\right)_\text{S} \right], \end{align} where we have adopted the symmetrization \begin{align} \left(\hat\Theta_o\hat\pi_{\tilde\varphi}\right)_\text{S} =\frac12\left(\hat\Theta_o\hat\pi_{\tilde\varphi}+\hat\pi_{\tilde\varphi}\hat\Theta_o\right)=\frac12 [\hat\pi_{\tilde\varphi},\hat\Theta_o]+\hat\Theta_o\hat\pi_{\tilde\varphi}. 
\end{align} \subsection{Born-Oppenheimer ansatz}\label{BOa} We will now analyze solutions to the (zero-mode of the) Hamiltonian constraint of the system by adopting the ansatz \begin{equation}\label{BOans} \Psi=\Gamma(\tilde\alpha,\tilde\varphi) \psi({\mathcal N},\tilde\varphi), \end{equation} where the dependence on the MS variables is denoted by the label ${\mathcal N}$ of the occupancy-number states for the inhomogeneous subsector $\mathcal F$, and $\tilde\alpha$ denotes the dependence on the geometry of the homogeneous FRW sector.\footnote{This notation does not imply that we adopt a representation for the homogeneous geometry in which we have an operator representing $\tilde\alpha$ that acts by multiplication, but it is rather a symbolic way to indicate functional dependence on the homogeneous geometry sector.} Thus, the ansatz for the quantum states is that their wave functions $\Psi$ can be separated into two factors, one which depends on the homogeneous degrees of freedom of the FRW geometry, and the other on the MS gauge-invariant modes. The ansatz allows for states that present different rates of variation in these two factors with respect to the zero-mode $\tilde\varphi$ of the matter scalar field, and when this happens the Hamiltonian constraint can be approximated and simplified. Hence the name of Born-Oppenheimer for the introduced ansatz. From this perspective, it is convenient to interpret $\tilde\varphi$ as an internal time for the system (at least in some intervals of the evolution). We recall that we are assuming a representation of the homogeneous matter sector such that the functions of $\tilde{\varphi}$ act as multiplicative operators on $\Psi$. Let us for a moment consider the unperturbed FRW model. Its Hamiltonian constraint operator is proportional to $\hat\pi_{\tilde\varphi}^2-\hat{\mathcal H}_0^{(2)}$. 
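As a side remark, the second equality in Eq. \eqref{eq:geometry-operators3}, which rewrites $\vartheta_e^q$ in terms of $\mathcal H_0^{(2)}$, is straightforward but tedious to check by hand. It can be verified symbolically with the following sketch (illustrative only; the symbol \texttt{X} stands for $e^{2\tilde\alpha}$, and all variable names are ours):

```python
import sympy as sp

pia, m, phi, X = sp.symbols('pi_alpha m phi X', positive=True)
# X stands for exp(2*alpha), so exp(6*alpha) = X**3, exp(4*alpha) = X**2

# H_0^(2) as defined in Eq. (densH0)
H02 = pia**2 - X**3 * m**2 * phi**2

# first expression for vartheta_e^q in Eq. (eq:geometry-operators3)
expr1 = pia**2 / X + m**2 * X**2 * (1 + 15*phi**2 - 18*X**3*m**2*phi**4/pia**2)
# second expression, written in terms of H_0^(2)
expr2 = (H02 / X) * (19 - 18*H02/pia**2) + m**2 * X**2 * (1 - 2*phi**2)

assert sp.expand(expr1 - expr2) == 0  # the two expressions agree identically
```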
In the evolution picture that we are employing, where $\tilde\varphi$ plays the role of time, positive-frequency solutions of this constraint are given by \begin{align}\label{gamma} \Gamma(\tilde\alpha,\tilde\varphi)= \hat U(\tilde\alpha;\tilde\varphi)\chi(\tilde\alpha), \end{align} $\hat U$ being the corresponding unitary evolution operator. If $\hat{\mathcal H}_0^{(2)}$ is self-adjoint, then one can project on its positive part (P.P.) and take $\sqrt{\text{P.P.}(\hat{\mathcal H}_0^{(2)})}$ as the $\tilde\varphi$-dependent self-adjoint operator that generates the time evolution in Eq. \eqref{gamma}. To keep the discussion more general, and since the self-adjointness of $\hat{\mathcal H}_0^{(2)}$ might not be guaranteed, or a straightforward definition of a square root for it might not be available, we simply assume that the FRW part of the quantum states has the form \eqref{gamma}, where the family of evolution operators $\hat{U}$ is such that there exists a self-adjoint operator $\hat{\tilde{\mathcal H}}_0$ satisfying $[\hat\pi_{\tilde\varphi},\hat U] {\hat U^{-1} }=\hat{\tilde{\mathcal H}}_0$. Note that the concepts of self-adjointness and unitarity that we use here are those corresponding to the FRW-geometry part of our Hilbert space,\footnote{In a conventional Schr\"odinger representation in which $\hat\pi_{\tilde\varphi}=-i \partial_{\tilde\varphi}$ we have the standard unitary evolution operator \begin{equation}\label{chievolu} \hat U(\tilde\alpha;\tilde\varphi)={\mathcal P}\left[\exp{\left(i\int^{\tilde\varphi}_{\tilde\varphi_0} d\tilde{\varphi}\,\hat{\tilde{\mathcal H}}_0(\tilde\alpha, \tilde \varphi)\right)}\right]\nonumber. \end{equation} The symbol ${\mathcal P}$ denotes {\it time} ordering with respect to $\tilde\varphi$. For simplicity, we have fixed the reduced Planck constant $\hbar$ equal to one.} $\mathcal{H}_\text{kin}^\text{grav}$.
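The time-ordered exponential in the preceding footnote can be approximated numerically by a product of short-step evolution operators, with factors corresponding to later values of $\tilde\varphi$ acting on the left. The following sketch (with a toy two-level Hermitian generator of our own choosing, unrelated to any specific FRW quantization) illustrates the construction and checks the unitarity of the result:

```python
import numpy as np

def expm_herm(H, dt):
    """exp(i*H*dt) for a Hermitian matrix H, via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w * dt)) @ V.conj().T

# toy phi-dependent Hermitian generator (purely illustrative)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
H = lambda phi: sz + phi * sx

phis, dt = np.linspace(0.0, 1.0, 2001), 1.0 / 2000
U = np.eye(2, dtype=complex)
for phi in phis[:-1]:
    # later "times" multiply on the left: this implements the time ordering
    U = expm_herm(H(phi + dt / 2), dt) @ U

assert np.allclose(U @ U.conj().T, np.eye(2))  # the product is unitary
```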
Actually, within our perturbative framework, it is natural to consider $\Gamma$ as an approximate solution to the Hamiltonian constraint of the homogeneous sector, and not necessarily an exact one. In fact, as we will see, $\Gamma$ will supply an approximate solution of the FRW model inasmuch as the square of $\hat{\tilde{\mathcal H}}_0$ is {\it{close}} to $\hat{\mathcal H}_0^{(2)}$ in the sense that $ (\hat{\tilde{\mathcal H}}_0)^2 -\hat{\mathcal H}_0^{(2)} + [\hat{\pi}_{\tilde\varphi}, \hat{\tilde{\mathcal H}}_0]$ is negligibly small on $\Gamma$. On the other hand, in Eq. \eqref{gamma} we normalize the state $\chi(\tilde\alpha)$ to unity in ${\mathcal H}_\mathrm{kin}^\mathrm{grav}$. This part of the quantum state can be understood as the state of the FRW geometry at a fixed initial time $\tilde{\varphi}_0$ (and hence it is independent of $\tilde{\varphi}$). A natural choice would be a highly semiclassical state $\chi$, peaked on a certain homogeneous geometry (and, if possible, remaining peaked under the evolution dictated by $\hat{U}$). Let us plug the ansatz \eqref{BOans} into the constraint equation\footnote{Since one expects that solutions do not belong to the kinematical Hilbert space, one should rather impose the constraint on some kind of generalized states $(\psi|$ with the adjoint action, in the form $(\psi| \hat {\tilde H}^{\dagger}=0$. We refrain from doing so to avoid complicating the notation further.} $\hat {\tilde H} \Psi=0$.
Taking into account that \begin{subequations} \begin{align} \hat{\pi}_{\tilde\varphi}\Psi&=\Gamma (\hat{\pi}_{\tilde\varphi}\psi)+(\hat{\tilde{\mathcal H}}_0\Gamma)\psi,\\ \hat{\pi}_{\tilde\varphi}^2\Psi&=\Gamma (\hat{\pi}_{\tilde\varphi}^2\psi)+2 (\hat{\tilde{\mathcal H}}_0\Gamma) (\hat{\pi}_{\tilde\varphi}\psi)+([\hat{\pi}_{\tilde\varphi},\hat{\tilde{\mathcal H}}_0]\Gamma)\psi+ \big\{(\hat{\tilde{\mathcal H}}_0)^2\Gamma\big\}\psi, \end{align} \end{subequations} the constraint can be rewritten as \begin{align}\label{const} &\Big\{\Big( (\hat{\tilde{\mathcal H}}_0)^2- \hat{\mathcal H}_0^{(2)} + [\hat{\pi}_{\tilde\varphi},\hat{\tilde{\mathcal H}}_0]\Big)\Gamma\Big\}\psi+2 (\hat{\tilde{\mathcal H}}_0\Gamma) (\hat{\pi}_{\tilde\varphi}\psi)+\Gamma (\hat{\pi}_{\tilde\varphi}^2\psi)-\frac12[\hat{\pi}_{\tilde\varphi}-\hat{\tilde{\mathcal H}}_0,\hat{\Theta}_o](\Gamma\psi)\nonumber\\ &-\big\{\hat{\Theta}_e+(\hat{\Theta}_o\hat{\tilde{\mathcal H}}_0)_\text{S}\big\}(\Gamma\psi)-\hat{\Theta}_o\big\{\Gamma(\hat{\pi}_{\tilde\varphi}\psi) \big\}=0. \end{align} We note that, with our assumptions on the representation of the zero-mode of the scalar field, the operators $[\hat{\pi}_{\tilde\varphi},\hat{\tilde{\mathcal H}}_0]$ and $[\hat{\pi}_{\tilde\varphi},\hat{\Theta}_o]$ depend on $\tilde\varphi$, which acts by multiplication as an operator, but are independent of $\hat{\pi}_{\tilde\varphi}$. Besides, notice that the first term of the above equation is the correction coming from the consideration of wave functions $\Gamma$ that are not exact solutions to the constraint of the homogeneous sector (to see this, it suffices to set $\psi$ constant and the operators $\hat{\Theta}_o$ and $\hat{\Theta}_e$ of the MS modes equal to zero). Now, let us consider the approximation that, regarding the homogeneous FRW geometry, only the terms corresponding to the expectation values on $\Gamma$ are relevant. In other words, when taking the inner product of the left-hand side of Eq.
\eqref{const} with $\Gamma$ with respect to the FRW geometry (namely in $\mathcal{H}_\text{kin}^\text{grav}$), we disregard possible quantum transitions from $\Gamma$ to other states mediated by the action of the constraint. From Eq. \eqref{const} it is not difficult to see that, for this approximation to hold, we only need the following operators to have small relative dispersions on the state $\Gamma$ for all values of $\tilde\varphi$ (or, in other words, the following operators must be highly peaked on $\Gamma$ along the evolution in $\tilde\varphi$): i) $\hat{\tilde{\mathcal H}}_0$, ii) $\hat{\vartheta}_{e}$, iii) $-\frac{i}{2}{\mathrm d}_{\tilde\varphi}\hat{\vartheta}_{o}+(\hat{\vartheta}_{o}\hat{\tilde{\mathcal H}}_0)_S+\hat{\vartheta}_{e}^q$, and iv) $-i {\mathrm d}_{\tilde\varphi}\hat{\tilde{\mathcal H}}_0 +(\hat{\tilde{\mathcal H}}_0)^2-\hat{\mathcal H}_0^{(2)}$, where we have defined\footnote{In the conventional case $\hat{\pi}_{\tilde\varphi}=-i \partial_{\tilde\varphi}$, ${\mathrm d}_{\tilde\varphi}$ is the total derivative in the Heisenberg picture corresponding to a time evolution in $\tilde\varphi$ generated by $\hat{\tilde{\mathcal H}}_0$.} \begin{equation} -i{\mathrm d}_{\tilde\varphi}\hat O\equiv [\hat{\pi}_{\tilde\varphi}-\hat{\tilde{\mathcal H}}_0,\hat O], \end{equation} for any operator $\hat O$. In principle, the condition on the operator iv) may be satisfied with a suitable choice of $\hat{\tilde{\mathcal H}}_0$, in agreement with our comments above. 
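The requirement of small relative dispersions used above can be made concrete as the condition $\Delta O/|\langle \hat O\rangle_\Gamma|\ll 1$. A minimal numerical illustration (with a toy operator and toy states of our own choosing, unrelated to any specific FRW quantization) comparing a peaked state with a spread one:

```python
import numpy as np

def relative_dispersion(O, state):
    """Delta O / |<O>| for a Hermitian operator O and a normalized state."""
    mean = np.vdot(state, O @ state).real
    mean_sq = np.vdot(state, O @ (O @ state)).real
    return np.sqrt(max(mean_sq - mean**2, 0.0)) / abs(mean)

n = 501
x = np.linspace(0.0, 20.0, n)
O = np.diag(x)                        # toy multiplicative operator

peaked = np.exp(-(x - 14.0)**2)       # Gaussian sharply peaked at x = 14
peaked /= np.linalg.norm(peaked)
broad = np.ones(n) / np.sqrt(n)       # maximally spread state

assert relative_dispersion(O, peaked) < 0.1   # "highly peaked"
assert relative_dispersion(O, broad) > 0.5    # large relative dispersion
```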
Assuming that the considered approximation is valid, we then get \begin{align}\label{constrainBO} \hat{\pi}_{\tilde\varphi}^2 \psi &+ \left(2 \langle \hat{\tilde{\mathcal H}}_0 \rangle_{\Gamma} - \langle \hat{\Theta}_{o} \rangle_{\Gamma}\right) \hat{\pi}_{\tilde\varphi}\psi\nonumber\\ &=\bigg[\langle \hat{\Theta}_{e}+ \big(\hat{\Theta}_{o} \hat{\tilde{\mathcal H}}_0\big)_S\rangle_{\Gamma} +i \langle {\mathrm d}_{\tilde\varphi}\hat{\tilde{\mathcal H}}_0 - \frac{1}{2}{\mathrm d}_{\tilde\varphi}\hat{\Theta}_{o} \rangle_{\Gamma} +\langle \hat{\mathcal H}_0^{(2)} - (\hat{\tilde{\mathcal H}}_0)^2 \rangle_\Gamma\bigg] \psi. \end{align} Here we have introduced the symbol $\langle \hat O \rangle_\Gamma$ to denote the expectation value of the operator $\hat O$ on $\Gamma$ in the Hilbert space $\mathcal H_\mathrm{kin}^\mathrm{grav}$, namely, only with respect to the inner product of the FRW geometry. Note that, in general, the result is an operator acting on $\mathcal H_\mathrm{kin}^\mathrm{matt}\otimes \mathcal{F}$. The above equation leads to a generalized Schr\"odinger equation for the evolution (in $\tilde\varphi$) of the inhomogeneities provided that the following conditions are satisfied: \begin{itemize} \item[a)] $\langle \hat{\Theta}_{o} \rangle_{\Gamma}$ has to be negligible as compared to $\langle \hat{\tilde{\mathcal H}}_0 \rangle_{\Gamma}$. This is valid as long as the quadratic contribution of the MS modes given by $\hat{\Theta}_{o}$ remains small when compared to the introduced generator of the $\tilde\varphi$-evolution in the FRW case, something that certainly fits in the perturbative scheme that we have adopted for the treatment of the inhomogeneities. \item[b)] $\hat{\pi}_{\tilde\varphi}^2 \psi$ has to be negligible as well. The self-consistency of this assumption is checked in Appendix \ref{appb}. 
\item[c)] $i \langle{\mathrm d}_{\tilde\varphi}\hat{\tilde{\mathcal H}}_0 - \frac{1}{2}{\mathrm d}_{\tilde\varphi}\hat{\Theta}_{o}\rangle_\Gamma$ has to be negligible in comparison with $\langle \hat{\Theta}_{e}+ \big(\hat{\Theta}_{o} \hat{\tilde{\mathcal H}}_0\big)_S\rangle_\Gamma$. Otherwise, the Schr\"odinger equation would contain a term destroying the unitary evolution of the MS modes and which is not present in the classical field equations of these gauge invariants. \end{itemize} Indeed, if these three conditions are satisfied, we get the generalized Schr\"odinger equation \begin{align}\label{schroMS1} \hat{\pi}_{\tilde\varphi}\psi=\frac{\langle \hat{\Theta}_{e}+ \big(\hat{\Theta}_{o} \hat{\tilde{\mathcal H}}_0\big)_S\rangle_{\Gamma}+\langle \hat{\mathcal H}_0^{(2)} - (\hat{\tilde{\mathcal H}}_0)^2 \rangle_\Gamma}{2 \langle \hat{\tilde{\mathcal H}}_0 \rangle_{\Gamma}}\psi. \end{align} On top of the previous conditions, let us consider as well the requirement \begin{itemize} \item[d)] $ \langle \hat{\mathcal H}_0^{(2)} - (\hat{\tilde{\mathcal H}}_0)^2 \rangle_\Gamma$ has to be negligible in comparison with $\langle \hat{\Theta}_{e}+ \big(\hat{\Theta}_{o} \hat{\tilde{\mathcal H}}_0\big)_S\rangle_\Gamma$. This is natural to assume (once $ \langle{\mathrm d}_{\tilde\varphi}\hat{\tilde{\mathcal H}}_0\rangle_{\Gamma}$ has been ignored) if $\Gamma$ is an approximate solution of the unperturbed FRW model, as argued before. \end{itemize} If the four conditions a)--d) hold, then the Schr\"odinger equation simplifies to \begin{align}\label{schroMS} \hat{\pi}_{\tilde\varphi}\psi=\frac{\langle \hat{\Theta}_{e}+ \big(\hat{\Theta}_{o} \hat{\tilde{\mathcal H}}_0\big)_S\rangle_{\Gamma}}{2 \langle \hat{\tilde{\mathcal H}}_0 \rangle_{\Gamma}}\psi. 
\end{align} Correspondingly, the Hamiltonian generating the dynamics of the perturbations in the internal time $\tilde\varphi$ is given by ${\langle \hat{\Theta}_{e}+ \big(\hat{\Theta}_{o} \hat{\tilde{\mathcal H}}_0\big)_S\rangle_{\Gamma}}/{2 \langle\hat{\tilde{\mathcal H}}_0 \rangle_{\Gamma}}$. This Hamiltonian is just the MS Hamiltonian, with its dependence on the variables of the homogeneous geometry evaluated at the expectation values corresponding to the quantum state $\Gamma$, and divided by the expectation value of $\hat{\tilde{\mathcal H}}_0$ on it. We see that assuming the Born-Oppenheimer ansatz and introducing a controlled number of approximations allows us to recover standard QFT for the gauge-invariant perturbations, which propagate on an effective homogeneous geometry that can be regarded as {\it{dressed}}. The term {\it{dressed}} refers here to the fact that this geometry is not a classical one, but it retains the main quantum corrections to the geometry that the MS variables feel \cite{dressed}. \subsection{Effective equations for the Mukhanov-Sasaki variables} \label{MSequations} Employing the Born-Oppenheimer ansatz and the approximation that the state $\Gamma$ remains highly peaked on the operators that encode the effect of the FRW geometry in the zero-mode of the Hamiltonian constraint (and hence, strictly speaking, without the need to introduce the rest of approximations that lead to a Schr\"odinger equation for the wave function of the MS modes), one can further derive effective classical equations for the MS variables. These effective dynamics treat the perturbations as classical, namely they replace the annihilation and creation operators of the gauge-invariant inhomogeneities by their classical counterparts. The assumption that this replacement is acceptable in order to get effective equations does not seem too stringent, because the zero-mode of the Hamiltonian constraint is quadratic in the MS gauge invariants.
Since the functions of $\tilde\varphi$ act by multiplication, we can view the quantum dynamics of the MS modes generated by that Hamiltonian constraint as that of time-dependent harmonic oscillators, with $\tilde\varphi$ playing the role of internal time. In the previous subsection we saw that the evolution of the MS variables is governed by Eq. \eqref{constrainBO}, which can be interpreted as the result of imposing a constraint of the form: \begin{align} \hat{\mathcal C}_{\mathrm{per}}=\hat \pi_{\tilde\varphi}^2+ D_{\Gamma}(\tilde\varphi) \hat \pi_{\tilde\varphi} + E_{\Gamma}(\tilde\varphi) - \Big\langle \hat{\Theta}_{e}+ \big(\hat{\Theta}_{o} \hat{\tilde{\mathcal H}}_{0}\big)_S- \frac{i}{2}{\mathrm d}_{\tilde\varphi}\hat{\Theta}_{o} \Big\rangle_{\Gamma} . \end{align} Here, $D_{\Gamma}$ and $E_{\Gamma}$ are two functions of $\tilde\varphi$ which depend on the state $\Gamma$ of the homogeneous geometry, and which we do not specify because they are irrelevant for our calculations. As for the momentum $\hat \pi_{\tilde\varphi}$, it is supposed to act as a generalized derivative with respect to $\tilde\varphi$, and we need not specify it either at this level of the discussion. The constraint operator $\hat{\mathcal C}_{\mathrm{per}}$ is imposed on the sector of the model composed of $\mathcal H_\mathrm{kin}^\mathrm{matt}\otimes\mathcal F$. The corresponding classical evolution generator, denoted by ${\mathcal C}_{\mathrm{per}}$, is then obtained by replacing $\hat \pi_{\tilde\varphi}$ by its explicit form as a generalized derivative, and the MS operators $\hat V^{\vec n,\epsilon}_{q_1}$ and $\hat V^{\vec n,\epsilon}_{p_1}$ by their classical counterparts, according to our comments above. Taking into account the densitization of the constraint [set in Eq.
\eqref{dens}] and the definition of the homogeneous part of the lapse function, one can check that ${\mathcal C}_{\mathrm{per}}/2$ generates reparameterizations in a time $\bar T$ that, at leading perturbative order, is related to the proper time by $dt=\sigma e^{3\alpha} d\bar T$. Moreover, we can change to a conformal time $\eta_{\Gamma}$, adapted to the {\it{dressed}} FRW geometry associated with the state $\Gamma$. Observing expressions \eqref{eq:perturbation-operators}, we see that all the dependence of ${\mathcal C}_{\mathrm{per}}/2$ on the MS momenta $V^{\vec n,\epsilon}_{p_1}$ is given by the term $\langle\hat \vartheta_e\rangle_{\Gamma}(V^{\vec n,\epsilon}_{p_1})^2/2$ coming from $\langle \hat{\Theta}_{e}\rangle_{\Gamma}$. Then, it is natural to define\footnote{It is possible to see that, with our choice of numerical factors, this definition of conformal time is not sensitive to the choice of $l_0$.} $l_0\, d{\eta_{\Gamma}}=\langle\hat \vartheta_e\rangle_{\Gamma} d\bar T$. This change of time is well defined because $\langle\hat \vartheta_e\rangle_{\Gamma}$ is a c-number (depending on $\tilde\varphi$ through $\Gamma$), and it is monotonic inasmuch as the operator $\hat \vartheta_e$ representing $\vartheta_e=e^{2\tilde\alpha}$ should be positive. It is worth emphasizing that we would have failed to define a change of time parameter had the time derivative $d{\eta_{\Gamma}}/d\bar T$ been an operator. Hence, the expectation value on $\Gamma$ is essential in order to introduce the above change of time. We also point out that the change is state dependent, and hence the properties of the evolution in the times $\bar T$ and $\eta_{\Gamma}$ can be quite different when considered in the physical Hilbert space of the system.
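In practice, the definition $l_0\,d\eta_{\Gamma}=\langle\hat \vartheta_e\rangle_{\Gamma}\,d\bar T$ amounts to a quadrature of a positive c-number function of $\bar T$. A minimal numerical sketch, with an illustrative positive profile for $\langle\hat\vartheta_e\rangle_{\Gamma}$ of our own choosing, showing that the resulting time variable is monotonic:

```python
import numpy as np

l0 = 1.0
Tbar = np.linspace(0.0, 5.0, 1001)
# illustrative positive expectation value <vartheta_e>_Gamma as a function of Tbar
vtheta_e = 1.0 + 0.5 * np.sin(Tbar)**2

# eta_Gamma(Tbar) = (1/l0) * integral of <vartheta_e>_Gamma dTbar (trapezoid rule)
deta = 0.5 * (vtheta_e[1:] + vtheta_e[:-1]) * np.diff(Tbar) / l0
eta = np.concatenate(([0.0], np.cumsum(deta)))

# positivity of <vartheta_e>_Gamma guarantees a well-defined, monotonic time
assert np.all(np.diff(eta) > 0)
```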
In summary, the evolution of $V^{\vec n,\epsilon}_{1}$ in the time $\eta_{\Gamma}$ is given (under Poisson brackets) by \begin{align} d_{\eta_{\Gamma}} V^{\vec n,\epsilon}_{1}=\frac{l_0}{2\langle\hat \vartheta_e\rangle_{\Gamma}}\{V^{\vec n,\epsilon}_{1},{\mathcal C}_{\mathrm{per}}\}, \end{align} where $d_{\eta_{\Gamma}}$ denotes the derivative with respect to $\eta_{\Gamma}$. The evolution of $V^{\vec n,\epsilon}_{q_1}$ is simply given by $d_{\eta_{\Gamma}} V^{\vec n,\epsilon}_{q_1}=l_0 V^{\vec n,\epsilon}_{p_1}$. Using this result and taking again the time derivative we get the following effective MS equations: \begin{align}\label{MSeqhybrid} d^2_{\eta_{\Gamma}}V^{\vec n,\epsilon}_{q_1}=- V^{\vec n,\epsilon}_{q_1} \left[\tilde \omega_n^2 + \langle \hat{\theta}_{e}+ \hat{\theta}_{o}\rangle_{\Gamma}\right], \end{align} where we have defined $\tilde \omega_n^2=l_0^2 \omega_n^2$ and \begin{subequations} \begin{align} \langle \hat{\theta}_{e}\rangle_{\Gamma}\equiv l_0^2 \frac{ \langle \hat{\vartheta}_{e}^q\rangle_{\Gamma}}{ \langle \hat{\vartheta}_{e}\rangle_{\Gamma}},\qquad \langle \hat{\theta}_{o}\rangle_{\Gamma}\equiv l_0^2 \frac{ \langle (\hat{\vartheta}_{o}\hat{\tilde{\mathcal H}}_0)_S-\frac{i}{2}\mathrm d_{\tilde\varphi}\hat{\vartheta}_{o}\rangle_{\Gamma}}{ \langle \hat{\vartheta}_{e}\rangle_{\Gamma}}. \end{align} \end{subequations} We note that the last term in the square brackets of this MS equation is a function of only $\tilde\varphi$, and hence of time, when the scalar field is evaluated on the solutions to the effective equations. This factor contains quantum modifications with respect to the standard MS equation, which encode the most relevant quantum geometry effects of the homogeneous background. The derived equations are still of harmonic oscillator type with time-dependent frequencies. Besides, no dissipation term appears and the equations are hyperbolic in the ultraviolet regime, where $\tilde \omega_n^2$ dominates in the square brackets. 
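Each mode of the effective equation \eqref{MSeqhybrid} is a harmonic oscillator with a time-dependent frequency and can be integrated by standard methods. The following sketch (all names and values are illustrative) uses a velocity-Verlet integrator; for the purpose of a check, the quantum-correction term is set to zero, so that the exact solution $\cos(\tilde\omega_n\eta_{\Gamma})$ is available for comparison:

```python
import numpy as np

def integrate_ms_mode(omega2, theta, eta, v0=1.0, dv0=0.0):
    """Velocity-Verlet integration of v'' = -(omega2 + theta(eta)) v."""
    v, dv = v0, dv0
    out = [v]
    for i in range(len(eta) - 1):
        h = eta[i + 1] - eta[i]
        a = -(omega2 + theta(eta[i])) * v
        v_new = v + h * dv + 0.5 * h * h * a
        a_new = -(omega2 + theta(eta[i + 1])) * v_new
        dv = dv + 0.5 * h * (a + a_new)
        v = v_new
        out.append(v)
    return np.array(out)

eta = np.linspace(0.0, 10.0, 20001)
# vanishing correction term: the exact solution is cos(omega * eta)
v = integrate_ms_mode(omega2=4.0, theta=lambda e: 0.0, eta=eta)
assert np.max(np.abs(v - np.cos(2.0 * eta))) < 1e-4
```

A nontrivial time-dependent correction is incorporated by simply passing a different function for `theta`.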
We have included the contribution of $\big\langle \hat{\theta}_{o} \big\rangle_{\Gamma}$, although, in view of our discussion in Appendix \ref{appb}, we expect it to be negligible in practice, since, in the case studied here of a mass term as the potential, it is proportional to $\tilde m^2$. We note that, although in our analysis we have assumed for simplicity that the potential of the scalar field is given by a mass contribution, actually we can easily extend the discussion to a general potential $W(\tilde\varphi)$. For that, we simply need to replace in Eq. \eqref{eq:geometry-operators3} the two occurrences of $\tilde m^2\tilde\varphi^2$ with $2W(\tilde\varphi)$, and the term $\tilde m^2$ with $W''(\tilde\varphi)$, whereas the factor $\tilde m^2\tilde\varphi$ in Eq. \eqref{eq:geometry-operators1} has to be replaced with $W'(\tilde\varphi)$ (here the prime denotes derivative with respect to $\tilde\varphi$). Then, for the approximations related to the Born-Oppenheimer ansatz to hold, instead of requiring a small mass, we need to require small variations of the potential. \section{LQC representation for the homogeneous sector} \label{sec:lqc} So far in our discussion we have left unspecified the concrete quantization adopted in the FRW-geometry part of the homogeneous sector. For the sake of illustrating our analysis with an example of particular interest, in this section we will adopt the polymeric quantization employed in LQC. There are plenty of references where the details of the polymeric quantization of the unperturbed FRW model can be found (see e.g. \cite{lqc,bounce1,bounce2,mmo}). More specifically, we will adhere to the so-called {\it improved dynamics} prescription of LQC \cite{bounce2} and, among the possible symmetric orderings for the Hamiltonian constraint operator, we will adopt the prescription put forward in Ref. \cite{mmo}, as it proves to be most convenient both for theoretical and practical purposes \cite{pres}.
In addition, for the homogeneous scalar field, we will use the standard Schr\"odinger quantization, as is usually done in LQC. Besides, in the perturbed model, when representing the operators of the homogeneous sector that are coupled to perturbations, we will follow Ref. \cite{hybridFRWflatMS}, which adopts the factor ordering of Ref. \cite{mmo} as well. In the loop formalism, the geometry is described by the Ashtekar-Barbero $su(2)$ connection and by the densitized triad \cite{lqg}, which form a canonical pair. In FRW cosmologies, owing to homogeneity and isotropy, they are respectively determined simply by two dynamical variables, $c$ and $p$, with Poisson bracket equal to $8\pi G\gamma/3$, where $\gamma$ is the Immirzi parameter \cite{Immirzi}. The mentioned improved dynamics scheme accounts for the existence of a minimum non-vanishing eigenvalue $\Delta$ of the area operator in LQG. This scheme involves a transformation to the new variables \begin{align}\label{vp} v=\sgn{(p)} \frac{|p|^{3/2}}{2\pi G \gamma \sqrt{\Delta}},\quad b=\sqrt{\frac{\Delta}{|p|}}c\equiv \bar\mu c. \end{align} The sign of $p$ determines the orientation of the triad. The new variables satisfy $\{b, v\}=2$. The variable $v$ is proportional to the volume of the homogeneous model, which is finite for the three-torus spatial topology under study, and given by $V=2\pi G \gamma \sqrt{\Delta}|v|$. These variables for the homogeneous geometry are related to those of previous sections, $(\tilde\alpha,\pi_{\tilde\alpha})$, via the canonical transformation \begin{align} e^{\tilde\alpha}=\left( \frac{3\gamma \sqrt{\Delta}}{2\sigma}|v|\right)^{1/3}, \qquad \pi_{\tilde\alpha}=-\frac32 v b.
\end{align} On the other hand, for the zero-mode of the matter scalar field, it is convenient to introduce the following scaling by a constant, $\phi = \tilde \varphi/(l_0^{3/2}\sigma)$ and $\pi_\phi = l_0^{3/2}\sigma\pi_{\tilde \varphi}$, since this is the usual parameterization employed in the LQC literature. In LQC one {\it{polymerizes}} the connection, meaning that the connection coefficient $c$ has no well-defined operator in the quantum theory; instead, one represents its holonomy elements, given by the exponentials of $b$. As a consequence, the FRW-geometry sector of the kinematical Hilbert space, $\mathcal{H}_\text{kin}^\text{grav}$, is the span of basis states $|v\rangle$, with $v\in\mathbb{R}$, normalized with respect to the discrete inner product $\langle v'|v\rangle=\delta_{v}^{v'}$. The operator $\hat{v}$, which acts by multiplication, $\hat{v} |v\rangle=v |v\rangle$, has a discrete spectrum. We are taking the reduced Planck constant $\hbar$ equal to one (as in Sec. \ref{BOa}) to simplify the notation. The holonomy operators $N_{\pm\bar\mu}\equiv e^{\pm i\bar\mu c/2}= e^{\pm i b/2}$ produce a constant shift in the label of these states, $\hat N_{\pm\bar\mu} |v\rangle=|v\pm1\rangle$, as one can deduce from the commutation relations $[\hat N_{\bar\mu},\hat{v}]=i\widehat{\{N_{\bar\mu},v\}}$. As a result of this loop representation, the classical expression $cp=2\pi G \gamma v b$ gets promoted in the quantum theory to a symmetric version of $\hat{v}\,\widehat{\sin(b)}$ multiplied by $2\pi G \gamma$, where $\widehat{\sin(b)}=i( \hat N_{-2\bar\mu}- \hat N_{2\bar\mu})/2$. We choose the symmetric ordering proposed in Ref. \cite{mmo}, after which the operator representing $cp$ becomes \begin{align}\label{eq:Omega} \hat{\Omega}_0\equiv\frac1{2\sqrt{\Delta}}{\hat V}^{1/2}\left[\widehat{\text{sgn}(v)}\widehat{\sin(b)}+\widehat{\sin(b)}\widehat{\text{sgn}(v)}\right]{\hat V}^{1/2}.
\end{align} As we have said, for the zero-mode of the scalar field, we adopt a standard Schr\"odinger representation, with $\hat{\pi}_\phi=i\partial_\phi$ and $\hat\phi$ acting by multiplication, so that $\mathcal H_\mathrm{kin}^\mathrm{matt}=L^2(\mathbb{R},d\phi)$. In total, this LQC representation yields the following expression for the operator $\hat{\mathcal H}_0^{(2)}$ [see Eqs. \eqref{densH0} and \eqref{dens-constraint}]: \begin{align}\label{eq:calH_0} \hat{\mathcal H}_0^{(2)} &=\frac{3}{4\pi G}\left( \frac{3}{4\pi G \gamma^2}\hat\Omega_0^2- \hat V^2 m^2\hat\phi^2\right). \end{align} The operator $\hat\Omega_0^2$ annihilates the zero-volume state $| v=0 \rangle$ and leaves invariant its orthogonal complement. Moreover, it leaves invariant the subspaces $\mathcal H_\mathrm{\varepsilon}^\pm$ formed by states with support on the semilattices $\mathcal L_\mathrm{\varepsilon}^\pm=\{\pm(\varepsilon+4n)|n\in\mathbb N\}$, where $\varepsilon\in(0,4]$. The subspaces $\mathcal H_\mathrm{\varepsilon}^\pm$ are separable, in contrast to the original $\mathcal H_{\mathrm{kin}}^{\mathrm{grav}}$. Notice also that, in each of these sectors, the homogeneous volume $v$ has a strictly positive minimum $\varepsilon$ (or negative maximum $-\varepsilon$). Below we will represent the quadratic contributions of the inhomogeneities to the zero-mode of the Hamiltonian constraint by operators that also leave these semilattices invariant, which therefore get superselected. In the following, we will restrict the discussion, e.g., to $\mathcal H_\mathrm{\varepsilon}^+$, spanned by states with positive $v \in \mathcal L_\mathrm{\varepsilon}^+$. Let us consider now the representation of the homogeneous terms entering the quadratic contribution of the inhomogeneities to the zero-mode of the Hamiltonian constraint, namely the operators $ \hat{\vartheta}_{o}$, $\hat{\vartheta}_{e}$, and $\hat{\vartheta}_{e}^q$ representing the $\vartheta$-terms in Eqs. \eqref{eq:geometry-operators}.
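As an aside, the structure of this loop representation can be made concrete numerically. The sketch below is our own illustration (in units where $\Delta=1$ and $2\pi G\gamma\sqrt{\Delta}=1$, so that $\hat V$ acts as multiplication by $v$ on the positive sector, where $\mathrm{sgn}(v)=+1$): it builds a truncated matrix for $\hat\Omega_0$ and lets one verify that $\hat\Omega_0^2$ shifts $v$ only by zero or four units, consistently with the superselection of the semilattices.

```python
import numpy as np

def omega0_matrix(eps=1.0, n_states=40):
    """Truncated matrix of Omega_0 on the positive-v sector, in units
    Delta = 1 and 2*pi*G*gamma*sqrt(Delta) = 1, so V(v) = v and sgn = +1.
    sin(b)|v> = i(|v-2> - |v+2>)/2 couples v only to v -/+ 2."""
    v = eps + 2.0 * np.arange(n_states)   # step-2 lattice, closed under sin(b)
    omega = np.zeros((n_states, n_states), dtype=complex)
    for n in range(n_states - 1):
        # <v_n|Omega_0|v_{n+1}> = (i/2) sqrt(V(v_n) V(v_{n+1}))
        amp = 0.5j * np.sqrt(v[n] * v[n + 1])
        omega[n, n + 1] = amp
        omega[n + 1, n] = np.conj(amp)    # Hermiticity
    return omega, v
```

Squaring this matrix gives a real operator with non-zero entries only on the diagonal and four units of $v$ away from it, so the step-four semilattices decouple.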
These terms are affected in principle by some quantization ambiguities. As was done in Refs. \cite{hybridFRWflat,hybridFRWflatMS}, we will introduce a symmetric factor ordering that tries to respect, as far as possible, the representation choices made in the homogeneous sector of the system, as follows. i) We represent the inverse of the volume with the standard regularization of LQC, namely $\widehat{[1/V]}$ is the cube of the regularized operator \begin{align} \widehat{\left[\frac1{V}\right]}^{1/3}=\frac3{4\pi G \gamma \sqrt{\Delta}}\widehat{\text{sgn}(v)} \hat V^{1/3}\left[\hat N_{-\bar\mu}\hat V^{1/3}\hat N_{\bar\mu}-\hat N_{\bar\mu}\hat V^{1/3}\hat N_{-\bar\mu}\right]. \end{align} This operator annihilates the state $|v=0\rangle$ and commutes with $\hat V$, and hence it is well defined on the subspaces $\mathcal H_\mathrm{\varepsilon}^\pm$. ii) Products of the form $f(\phi)\pi_{\phi}$, where $f$ is an arbitrary function, are represented with the symmetric factor ordering $\big(f(\hat \phi) \hat\pi_\phi+\hat\pi_\phi f(\hat\phi)\big)/2$. iii) We adopt an algebraic symmetrization in factors of the form $V^rg(cp)$, which are promoted to the operators $\hat V^{r/2}\hat g\hat V^{r/2}$, where $g$ is any function and $r$ a real number. This algebraic symmetric factor ordering is also adopted for powers of the inverse volume. iv) Even powers of $\Omega_0\equiv -l_0^3\sigma^2 \gamma \pi_{\tilde\alpha}$ are represented by the same powers of the operator $\hat \Omega_0$, as in FRW. And v) in the case of odd powers of $\Omega_0$, say $\Omega_0^{2k+1}$ with integer $k$, we adopt the representation $|\hat\Omega_0|^k \hat\Lambda_0|\hat\Omega_0|^k$, where $|\hat\Omega_0|$ is the square root of the positive operator $\hat\Omega_0^2$ and $\hat{\Lambda}_0$ is defined exactly as $\hat \Omega_0$ in Eq. \eqref{eq:Omega}, but with holonomies of double length.
In this way, its action only shifts $v$ in multiples of four units, and hence preserves the superselection sectors of the homogeneous geometry. Following these prescriptions, we obtain the operators \begin{subequations} \begin{align} \hat\vartheta_o&= \frac{4}{l_0} \sqrt{12\pi G} \gamma m^2\hat\phi \hat V^{2/3} |\hat\Omega_0|^{-1} \hat\Lambda_0|\hat\Omega_0|^{-1}\hat V^{2/3} ,\\ \hat\vartheta_e&=\frac{3 l_0}{4\pi G}\hat V^{2/3},\\ \hat\vartheta_e^q&=\frac{4\pi G}{3 l_0}\widehat{\left[\frac1{V}\right]}^{1/3}\hat{\mathcal H}_0^{(2)}\left(19-32 \pi^2 G^2 \gamma^2 \hat\Omega_0^{-2}\hat{\mathcal H}_0^{(2)}\right) \widehat{\left[\frac1{V}\right]}^{1/3} \nonumber\\ & +\frac{3 m^2}{4 \pi G l_0 }\hat V^{4/3}\left( 1- \frac{8\pi G}{3} \hat\phi^2\right), \end{align} \end{subequations} which are densely defined on $\mathcal H_\mathrm{\varepsilon}^+\otimes L^2(\mathbb{R},d\phi)$. These results coincide with those of Ref. \cite{hybridFRWflatMS} except for some factors of the inverse volume, which now appear as inverse powers of the volume operator itself. This difference arises from our different choice of densitization for the zero-mode of the Hamiltonian constraint at the classical level (the densitization was done at the quantum level in that previous work). As we have already said, the validity of the Born-Oppenheimer approximation depends both on the particular representation chosen for the above homogeneous operators and on the properties of the homogeneous states $\Gamma$. We refer the reader to Ref. \cite{hybridFRWflatMS} for further comments in this respect regarding the loop quantization. \section{Conclusions} \label{sec:conclusions} In this work we have developed a covariant formulation of the gravitational system that describes a perturbed flat FRW spacetime with compact three-torus topology, minimally coupled to a scalar field.
In our perturbative scheme, we truncate the action of the model at quadratic order in the perturbations, and focus our attention on scalar perturbations. We expand the spatial dependence in Fourier modes, using a basis of eigenfunctions of the Laplace-Beltrami operator of the spatial sections. Besides, we treat the degrees of freedom of the zero-modes exactly, up to the order of our truncation. The total Hamiltonian of the resulting model is a sum of constraints, formed by a linear combination of the zero-mode of the scalar (or Hamiltonian) constraint, and of all the inhomogeneous modes of two local constraints which are linear in the perturbations. One of them comes from the perturbations of the Hamiltonian constraint and the other corresponds to the perturbation of the momentum (or diffeomorphisms) constraint. The zero-mode of the Hamiltonian constraint is, in turn, formed by the constraint of the unperturbed model plus contributions that are quadratic in the perturbations. The first important aspect of our treatment is that we do not fix the gauge freedom associated with the linear perturbative constraints. In order to deal with them, we abelianize their algebra. We achieve this by replacing the linear perturbative Hamiltonian constraint with a suitable linear combination of it and of the constraint of the unperturbed model. At the perturbative order of our truncation of the action, this replacement amounts to a redefinition of the zero-mode of the lapse function. As a result of this process, we are able to parameterize the perturbations of the model with these Abelian linear perturbative constraints and the modes of the MS field, together with their corresponding canonically conjugate variables. Since the MS modes Poisson commute with the constraints, we indeed attain a parameterization for the perturbations fully adapted to gauge invariance. 
Another goal of our work consists in extending the canonical transformation from the perturbations to the entire system, namely considering not only the inhomogeneities, but also the homogeneous sector of the model (the zero-modes describing the degrees of freedom present in the unperturbed FRW model). In this way, at our level of truncation in the perturbations, we obtain a fully covariant description of the entire symplectic manifold, and not just of its inhomogeneous sector formed by the perturbations, a restriction that would have required treating the zero-modes as fixed, rather than as genuine degrees of freedom. This is an important step towards the quantization of the system, inasmuch as without this completion of the canonical transformation, the system would have lost its symplectic structure. As a result of this canonical transformation, the zero-mode of the lapse gets redefined again, absorbing a quadratic contribution of the perturbations, but keeping the original freedom as an unspecified Lagrange multiplier. In addition, the Lagrange multipliers accompanying the perturbative linear constraints get redefined similarly (though in this case with linear contributions of the perturbations), so that the final Hamiltonian is a linear combination of (first-class) commuting constraints. We have derived the explicit expression of the spacetime metric in terms of our gauge-invariant phase-space variables and new Lagrange multipliers. In our construction, we have started with a form of the linear perturbative constraints that has been suitably scaled by powers of the scale factor. It is worth noting that we might equally well have started without this scaling of the perturbative constraints: the difference would simply have led to another definition of the momentum of the zero-mode of the FRW geometry in the final canonical set of variables for the entire system. Once the classical description has been completed, we have proceeded to quantize the system.
For this, we have proposed a hybrid approach that combines any quantization of the homogeneous FRW sector of the model with a more standard field description of the inhomogeneous degrees of freedom. The utility of this approach rests on the assumption that the most relevant quantum geometry effects are those affecting the zero-modes of the system, which therefore may require a more careful analysis, while the perturbations, even though quantum, can be described in a more conventional way. In other words, the philosophy behind the hybrid quantization consists in adopting a representation for the homogeneous sector of the model capable of capturing its quantum gravity nature, while the perturbations are treated essentially along the lines of QFT in curved spacetimes. We have followed the Dirac approach of imposing the constraints as quantum operators in order to find the physical states of the system. As the classical constraints Poisson commute, it is natural to pass to quantum counterparts that commute as well, so that we can impose them independently. The linear perturbative constraints generate translations in their canonically conjugate variables. Their quantum imposition then means that physical states do not depend at all on those gauge degrees of freedom. Therefore, thanks to our abelianization of the constraint algebra, we see that the process of imposing the perturbative linear constraints in the quantum theory leads to quantum states that depend only on the MS modes and the homogeneous sector. Remarkably, this is precisely the kind of states obtained with the approach adopted e.g. in Ref. \cite{hybridFRWflatMS}, even though the avenue followed in that case included a gauge fixing of the perturbations, with the description of the remaining physical degrees of freedom given only in terms of gauge invariants.
In this sense, our results in the present work support those obtained with gauge fixing and show the covariance of the conclusions attained previously, at the considered perturbative level. After the imposition of the linear perturbative constraints, the only constraint remaining on the model is the zero-mode of the Hamiltonian constraint. To solve this Hamiltonian constraint, we have adopted an ansatz for physical quantum states of Born-Oppenheimer type, and discussed the validity of such an approximation. This Born-Oppenheimer ansatz has a motivation similar to the one proposed in Ref. \cite{vidotto}. It regards the zero-mode of the scalar field as an internal clock and assumes that the MS modes do not significantly affect the motion of the zero-modes of the geometry. The ansatz assumes a separation of the quantum dependence on the FRW geometry and on the MS gauge invariants. If the state of the FRW geometry remains highly peaked with respect to three operators [the operators listed as i)-iii) in the paragraph above Eq. \eqref{constrainBO}] as it changes in the internal time, and this evolution suitably fits that of the unperturbed system, the zero-mode of the Hamiltonian constraint provides a quantum dynamical equation for the MS variables in terms of this time. Furthermore, under certain conditions, the ansatz allows one to introduce a series of approximations that lead to a Schr\"odinger equation for the MS modes, characterized by a Hamiltonian that is given by the MS Hamiltonian corrected in such a way that its dependence on the homogeneous geometry gets {\it dressed} with quantum corrections. In this manner, one recovers a QFT for the gauge-invariant perturbations, which propagate in a fixed (and in general non-classical) spacetime.
Generically, when one is dealing with quantum fields in curved spacetimes, there is ambiguity both in the way that one parameterizes the field (since one can introduce scalings by background functions) and in the Fock representation chosen for that field. Remarkably, for the system under study, there exist uniqueness theorems that pick out a preferred parameterization (in fact, a canonical pair for the field) and a class of unitarily equivalent Fock representations for it \cite{uniqueness1,uniqueness2,uniquenessother,uniqueperturb}. These choices are selected by the natural criteria of having a vacuum invariant under the symmetries of the background and a unitary implementation of the dynamics. Our parameterization for the MS modes matches precisely this description.\footnote{Nonetheless, we emphasize that our strategy of a hybrid approach to the quantization can be easily adapted to other parameterizations.} No regime with a unitary QFT would have been accessible if we had chosen a different parameterization for the MS field (by means of a time-dependent scaling). Another important conclusion of our analysis is the robustness of the class of dynamical equations that govern the evolution of the MS gauge invariants. We have shown that the dynamics of these invariants is always given by a harmonic oscillator equation, with a frequency that depends on the internal time, as long as the Born-Oppenheimer states are highly peaked with respect to the operators that feel the FRW geometry in the zero-mode of the Hamiltonian constraint, and irrespective of whether this constraint equation can be approximated by one of Schr\"odinger type. To arrive at this conclusion, essentially our only additional assumption has been that the effective dynamics of the MS modes can be derived by substituting the annihilation and creation operators with their counterparts as classical variables.
The MS equation obtained in this manner contains quantum modifications to the time-dependent part of the frequency, but the modification is the same for all modes, and in particular it does not affect the ultraviolet behavior of the classical equations for these perturbations in general relativity. Let us also note that, in order to obtain the quantum evolution of the perturbations, we have not employed a semiclassical approximation for the homogeneous geometry. We do not even need to solve the unperturbed model exactly to determine the quantum state of the background geometry. Indeed, within our perturbative scheme, it is enough to consider approximate solutions whose difference with respect to the exact ones can be neglected perturbatively. We finish by emphasizing that, if we accept the validity of QFT in curved spacetimes, it is natural to expect a deeper quantum regime, prior to a complete quantum gravity description, in which our hybrid quantization makes sense. This hybrid approach encodes the main quantum gravity effects and is well suited to predicting whether there exist modifications of quantum gravity origin in cosmological observables, for instance in the power spectrum of the CMB. It therefore offers a framework to extract physical consequences of quantum gravity in cosmology. \acknowledgments The authors are very grateful to J. Olmedo for enlightening conversations, assistance in computations with {\it Mathematica}, and pointing out some important references. In addition, they are grateful to B. Elizaga Navascu\'es, T. Pereira, and S. Tsujikawa for discussions. This work was partially supported by the Spanish MICINN/MINECO Project No. FIS2011-30145-C03-02 and its continuation FIS2014-54800-C2-2-P. M. M-B acknowledges financial support from the Netherlands Organisation for Scientific Research (NWO) (Project No. 62001772).
\section{Introduction}\label{sec:intro} Comparing model predictions to observations drawn from an astronomical catalogue requires knowledge of the selection effects and incompleteness affecting the observed list of objects. Knowing what we could not observe can be as essential as knowing what we did observe, even for simple endeavours. For instance, mapping the stellar distribution around the Sun to reconstruct our Galaxy's overall shape requires knowing the sample's limiting magnitude and whether this limit varies across the sky. A catalogue {\it selection function} $S_\ensuremath{\mathcal{C}}$ describes the probability of an object being included in an astronomical catalogue. Such a function represents the combined effects of the data collection (such as detection efficiency decreasing with apparent magnitude) and data processing (such as removing sources with noisy observations). To avoid biases caused by incomplete data, astronomers commonly restrict their studies to regions of the parameter space where the sample is assumed to be complete ($S_\ensuremath{\mathcal{C}} \sim 1$). This approach is generally quite restrictive and can lead to a poor representation of the problem one wants to address. Instead, one needs to fold in the selection function of a catalogue. In \citet{Rix21}, we presented a general approach to evaluating and \textit{accounting} for known selection functions in modelling astronomical data. {\it Gaia}\xspace observes the sky continuously according to a complex scanning law: a six-hour rotation around its spin axis, a 63-day precession of the spin axis, and the annual motion of the Earth (and its Lagrange 2 point) around the Sun \citep{gaiaMission}. This results in an intricate pattern, covering the entire celestial sphere with on average $\sim15$ visits per year, but with significant variations. The {\it Gaia}\xspace catalogue only includes sources with at least five observations \citep{Lindegren18}.
The probability that a transit across the {\it Gaia}\xspace field of view leads to an observation is lower for fainter sources, mainly for two reasons. First, the onboard source detection algorithm has a nominal faint-end threshold of $G=20.7$\,mag beyond which sources are not selected for observation, but the onboard magnitude estimate has a precision of a few tenths of a magnitude \citep{deBruijne15}, giving sources fainter than $G=20.7$\,mag a non-zero observation probability. Second, in crowded areas where the density exceeds $\sim$1,050,000 sources per square degree, {\it Gaia}\xspace cannot follow all transiting sources and prioritises bright objects over fainter ones \citep{gaiaMission}. The probability that a source benefits from five observations is therefore a complex function of sky position (via both crowding and the scanning law) and magnitude. Using the notation introduced in \citet{Rix21}, this paper models the catalogue selection function $S^{\rm parent}(\ensuremath{\mathbf{q}})$ of {\it Gaia}\xspace~DR3, where in the present case the catalogue properties $\ensuremath{\mathbf{q}}=(\ell,b,G)$ are the sky position $(\ell, b)$ and $G$ magnitude of a source. The most common and straightforward approach to estimating the selection function of a sample is to compare the dataset with a more complete catalogue, which often means a deeper one in terms of magnitude limit. This comparison is generally made by binning both catalogues by magnitude, colour, or sky position and computing the ratio of source counts in each bin. For instance, \citet{Rybizki21rvs} followed this procedure to characterise the selection function of the {\it Gaia}\xspace~DR2 Radial Velocity sample \citep{Katz19}, and \citet{Everall20seestar} improved upon this version using a smooth Gaussian Mixture Model to solve the issue of sparsely populated bins. However, this technique is empirical and relies on an external reference sample assumed to be complete.
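The count-ratio technique amounts to only a few lines of code. The sketch below is our own illustration of the idea (not the pipeline of any of the cited works): it bins two magnitude lists and returns the per-bin ratio as an estimate of the selection function.

```python
import numpy as np

def binned_completeness(g_catalogue, g_reference, bins):
    """Empirical selection function from source counts: ratio of catalogue
    to deeper-reference counts per magnitude bin, clipped to [0, 1].
    Bins with no reference sources are returned as NaN."""
    n_cat, _ = np.histogram(g_catalogue, bins=bins)
    n_ref, _ = np.histogram(g_reference, bins=bins)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(n_ref > 0, n_cat / n_ref, np.nan)
    return np.clip(ratio, 0.0, 1.0)
```

The same ratio can be computed per sky region by first grouping both catalogues spatially, at the cost of sparsely populated bins.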
There are ongoing efforts to reconstruct the {\it Gaia}\xspace selection function from a forward-modelling approach \citep[][and subsequent papers in their \textit{Completeness of the Gaiaverse} series]{Boubert20gaiaverse1, Boubert20gaiaverse2}. This approach requires modelling each step of the {\it Gaia}\xspace processing, from the scanning law and onboard filtering to the astrometric processing. In an upcoming paper (Castro-Ginard et al., in prep.), we will update their model, using transit data of non-variable stars to identify data-taking gaps and time-variable detection efficiencies, and extending the model to bright sources ($G\sim$ 1 to 6\,mag). This study is part of the larger GaiaUnlimited project\footnote{\url{https://gaia-unlimited.org/}}, which aims to determine the {\it Gaia}\xspace selection function and provide tools to the astronomical community to account for the selection effects in the {\it Gaia}\xspace catalogue. The present paper empirically builds an analytical model of the {\it Gaia}\xspace DR3 source catalogue selection function, i.e., the probability that the final catalogue contains a given source as a function of its sky position and apparent $G$ magnitude. We use the deep Dark Energy Camera Plane Survey \citep[DECaPS;][]{Schlafly18,Saydjari22} of the southern Galactic plane as our ``complete'' reference to calibrate our model. We identified a simple quantity derived from the {\it Gaia}\xspace catalogue itself to use as a predictor of the {\it Gaia}\xspace completeness as a function of magnitude at any location over the sky, even outside the DECaPS footprint. The approach and modelling are presented in Sect.~\ref{sec:approach}. In Sect.~\ref{sec:testing} we verify our predictions against data which were not used in the model calibration. We discuss our model and its limitations in Sect.~\ref{sec:discussion}, and close with concluding remarks in Sect.~\ref{sec:conclusion}.
\section{Approach} \label{sec:approach} \subsection{Choice of reference datasets} \label{sec:data} We use the Dark Energy Camera Plane Survey (DECaPS) DR1 catalogue \citep[][]{Schlafly18} as ground truth to calibrate our model of the {\it Gaia}\xspace selection function. DECaPS is a ground-based optical and near-infrared survey of the Galactic plane using the Dark Energy Camera (DECam, \citealt{Flaugher15}) mounted on the 4m Victor M. Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO). The 2.2$^{\circ}$\ diameter field of view, 0.26$"$/pixel plate scale and arcsecond seeing make these observations well-suited to resolving even the extremely crowded inner Galaxy. DECaPS DR1 covers the Galactic plane with $|b| \lesssim 4^{\circ}$ and $5^{\circ} > \ell > -120^{\circ}$. The survey reaches typical exposure depths of $\sim23$\,mag in the $g$ and $r$ bands, and uses the \texttt{crowdsource} photometric pipeline, which is specifically designed to deal with crowded fields. In Sect.~\ref{sec:HST} we verify the predictions of our model in high-density regions, using Hubble Space Telescope observations of the inner $3.5 \times 3.5$\,arcmin of 26 globular clusters collected by \citet{Sarajedini07}. The data were acquired with the Wide Field Channel of the Advanced Camera for Surveys, with photometry in the F606W and F814W filters, and are essentially complete down to magnitude 25. This data set was used in \citet{Arenou18validation} to visualise the completeness of {\it Gaia}\xspace~DR2 and the influence of crowding, but no quantitative model of completeness was proposed. \subsection{Choice of initial dependencies} \label{sec:dependencies} This work aims to identify observable quantities that can be computed from the {\it Gaia}\xspace data itself, to be used as a proxy to constrain the selection function in any given field.
We explored possible choices of observables by computing source count ratios between {\it Gaia}\xspace and DECaPS in magnitude bins in various areas on the sky. Due to the onboard resource allocation strategy prioritising bright sources, a naive expectation would be that completeness correlates with observed source density, with more populated fields being less complete. The observed source density is, in fact, a poor indicator of the true density (two areas with the same number of {\it Gaia}\xspace sources can differ by a factor of four in DECaPS) and thus a poor predictor of completeness. This is illustrated in the left panel of Fig.~\ref{fig:c_vs_M10_magRanges}, and in Fig.~\ref{fig:gaia_density_saturation}. We also tested the following as possible indicators of completeness in a given field of view: the magnitude at which the observed luminosity function differs from an expected power law, the mode of the magnitude distribution, the magnitude of the faintest star in a given area, and the 90th percentile of magnitude. This last quantity provides a reasonable estimate of completeness in the most crowded regions but does not perform well in sparser fields. Constructing an all-sky map of the aforementioned quantities (except local source density) is also computationally very expensive, as it requires going through the entire data set of $\sim$1.8\,billion DR3 sources. The best indicator of completeness we could identify is the $G$ magnitude of the sources with the smallest number of observations. The number of observations used to compute the astrometric solution of a given source is given in the {\it Gaia}\xspace~DR3 catalogue as \texttt{astrometric\_matched\_transits}\footnote{The DR3 catalogue also contains the column \texttt{matched\_transits}, which counts all transits matched to a certain source even if they were not used in the construction of the catalogue. The quantity used in this study is \texttt{astrometric\_matched\_transits}.}. 
By construction, its minimum value is five because sources with fewer observations were not included. In the remainder of this paper, we denote by $M_{10}$ the median magnitude of the sources with \texttt{astrometric\_matched\_transits}\,\,$\leq$ 10 in a given patch of sky. Its value is generally between 19 and 21.5 and strongly depends on how many times a given region was seen by {\it Gaia}\xspace and how crowded the region is. This is illustrated in Fig.~\ref{fig:mapM10}, where the patterns introduced by stellar density and the {\it Gaia}\xspace scanning law are clearly visible. We choose the median value rather than the mean because some bright sources might occasionally have a small number of matched transits, and the median is a more robust summary statistic. The model could also be calibrated on the median magnitude of sources with exactly five astrometric matched transits, but the chosen value of ten conveniently allows for a sufficient number of tracers even in sparse regions (the sparsest HEALPix level-7 pixel contains 19 such sources), while keeping the total number of tracers manageable when building all-sky maps (210\,million in {\it Gaia}\xspace~DR3). The variation of completeness with the $M_{10}$ value of each investigated patch of sky is shown in Fig.~\ref{fig:c_vs_M10_magRanges}. The completeness in any magnitude range is overall a tight function of $M_{10}$, although a dispersion of up to $\sim0.1$ can be seen for some magnitude ranges. This dispersion effectively sets the limit of the precision one can achieve by using $M_{10}$ as the sole predictor. The same effect is illustrated in the bottom-left panel of Fig.~\ref{fig:obs_and_model}, where it can be seen that a given value of $M_{10}$ can correspond to slightly different completeness profiles.
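The computation of $M_{10}$ from catalogue columns can be sketched as follows (a schematic of our own; in practice \texttt{patch\_id} would be a HEALPix pixel index and the columns would come from an archive query):

```python
import numpy as np

def m10_per_patch(patch_id, g_mag, matched_transits, max_transits=10):
    """M10 per sky patch: median G magnitude of the sources with
    astrometric_matched_transits <= max_transits. The median (rather than
    the mean) is robust to the occasional bright source with few transits."""
    tracer = matched_transits <= max_transits
    return {pid: np.median(g_mag[tracer & (patch_id == pid)])
            for pid in np.unique(patch_id[tracer])}
```

Patches with no tracer sources simply do not appear in the returned mapping, mirroring the fact that $M_{10}$ is undefined there.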
The effect of crowding likely depends not only on the true source density but also on the magnitude distribution of the sources, and a different parameter derived from the distributions shown in the top panel of Fig.~\ref{fig:amt_vs_Gmag} might be able to provide a second-order correction to the simple model presented in this study. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{figures/c_vs_dens_M10_magRanges.pdf} \caption{ Completeness of {\it Gaia}\xspace relative to the DECaPS survey, which is taken as ``ground truth'', in four magnitude ranges, computed in 3000 distinct patches across the DECaPS footprint. This completeness is shown as a function of {\it Gaia}\xspace source density (left) or $M_{10}$ (right): {\it Gaia}\xspace source density is a poor predictor of completeness, while the $M_{10}$ parameter -- the median magnitude of catalogued sources with \texttt{astrometric\_matched\_transits}$\leq 10$ in a surrounding patch of the sky -- is an excellent completeness predictor. $M_{10}$ combines the impact of source density and scanning law, as demonstrated in Fig.~\ref{fig:mapM10}. } \label{fig:c_vs_M10_magRanges} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{figures/mapM10_hpx10_labels.pdf} \caption{Map of the parameter $M_{10}$ in the direction of the Galactic centre; $M_{10}$ is the median $G$ magnitude (here in HEALPix regions of level 10) of {\it Gaia}\xspace sources with \texttt{astrometric\_matched\_transits} $\leq 10$, reflecting the outcome of the {\it Gaia}\xspace pipeline's completeness decisions for faint sources. The complex pattern results from the combination of the {\it Gaia}\xspace scanning law and stellar density, which in turn depends on Galactic structure and dust distribution. Baade's window is prominent as a patch near $(\ell,b) = (2,-2)$, with a bright $M_{10}$ presumably owing to the exceptionally high (true) source density.
Several globular clusters are also visible, for instance the prominent M~22 near $(\ell,b)=(10,-7.5)$. } \label{fig:mapM10} \end{figure*} \subsection{Source count ratios relative to DECaPS} We study the completeness in 1,085 patches of size $18 \times 7.2$\,arcmin across the DECaPS DR1 footprint, sampling a wide range of source densities. The size of the patches was chosen to allow us to avoid gaps in the coverage of the DECaPS DR1 data, mainly present near the Galactic centre. The distribution of these patches is shown in Fig.~\ref{fig:map_decaps_fields}. Patches containing at least 10,000 {\it Gaia}\xspace sources are further divided into up to eight bins to provide a finer spatial resolution in the densest areas, for a total of 2,906 individual regions. We spatially match the {\it Gaia}\xspace data to DECaPS with a 1\,arcsecond radius. When several DECaPS sources are present within this radius, we consider the best match to be the source whose $r$ magnitude is closest to the {\it Gaia}\xspace source's $G$. Matches with a magnitude difference larger than 1\,mag are discarded. We find that less than 0.5\% of {\it Gaia}\xspace sources have no DECaPS counterpart. In colour and magnitude, these missing sources appear to be a random subset of the {\it Gaia}\xspace data. They appear to follow lines of constant declination on the sky, which suggests that they correspond to an instrumental effect of the Dark Energy Camera (e.g. bleeding trails caused by the presence of bright stars) rather than spurious {\it Gaia}\xspace detections. A small fraction of the {\it Gaia}\xspace sources ($\sim$0.3\%, see Fig.~\ref{fig:map_noG}) lack a $G$-band magnitude. Since the present study investigates the completeness of the {\it Gaia}\xspace catalogue as a function of magnitude, our procedure treats these sources as if they were missing from the {\it Gaia}\xspace catalogue. We estimate the $G$ magnitude of these missing {\it Gaia}\xspace sources from their DECaPS $(r,r-i)$ photometry.
The conversion is performed by fitting a linear relation of the form $G = a\,r + b\,(r-i) + c$ in each patch of sky separately, to account for the fact that photometric transformations are extinction-dependent. We then compute the fraction of DECaPS sources with a {\it Gaia}\xspace counterpart in bins of $G$ magnitude of width 0.2\,mag, from $G=15$ to 23. The completeness as a function of magnitude is shown in Fig.~\ref{fig:obs_and_model}, colour-coded by the value of $M_{10}$ for each region. In the densest regions, the completeness reaches 50\% at $G\sim19$, while in sparse regions that benefited from large numbers of observations, the {\it Gaia}\xspace catalogue appears essentially complete down to $G\sim20.5$. \subsection{Fitting the model} \label{sec:fitting} We model the completeness curve computed in each region with a sigmoid function. To capture the change of slope from dense to sparse regions as well as the slight asymmetry of the curve, we define a \emph{generalised sigmoid} with the baroque but flexible analytic form: \begin{equation} S(G~|~M_{10}) = 1 - \left[ \frac{1}{2} \left( \tanh \left( \frac{G - x(M_{10})}{y(M_{10})} \right) + 1 \right) \right] ^{~z(M_{10})} , \label{eq:sigmoid} \end{equation} \noindent where $x$ is the magnitude of the inflection point, $y$ controls how steeply the completeness drops at the inflection point (smaller values correspond to a steeper decrease), and $z$ describes the skewness of the function ($z<1$ means it is flatter at bright magnitudes). The effect of varying these three parameters is illustrated in Fig.~\ref{fig:sigmoid_model}. We initially fit the triplet ($x,y,z$) independently in each of the 2,906 patches. The generalised sigmoid defined in equation (\ref{eq:sigmoid}) allows the fit to reproduce the observed completeness of all patches with residuals smaller than 2\%. Unfortunately, we cannot compute and provide an all-sky map of these three parameters because they can only be derived directly where DECaPS data is available.
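The generalised sigmoid is straightforward to transcribe into code. The sketch below (plain Python, not the authors' implementation) adopts the sign convention under which the completeness falls from 1 at the bright end to 0 at the faint end, with the drop centred near $G = x$:

```python
import math

def completeness(G, x, y, z):
    """Generalised sigmoid S(G | M10): ~1 for G << x, ~0 for G >> x.
    y is the inverse slope (smaller = steeper drop), z the skewness
    (z < 1 gives a flatter decline on the bright side)."""
    return 1.0 - (0.5 * (math.tanh((G - x) / y) + 1.0)) ** z

# With z = 1 the curve is a plain sigmoid and crosses 50% exactly at G = x:
print(completeness(20.0, x=20.0, y=0.5, z=1.0))  # 0.5
```

For $z \neq 1$ the 50\% crossing shifts slightly away from $x$, which is why $x$ only roughly sets the magnitude of 50\% completeness.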
Instead, we investigate and model the relation between the three parameters and $M_{10}$. The parameter $x$ (which roughly sets the magnitude of 50\% completeness) scales almost linearly with $M_{10}$. The parameters $y$ and $z$ (describing the slope and skewness of the curve) mostly follow two regimes, remaining roughly constant when $M_{10} < 20.5$, then $y$ decreasing and $z$ increasing, which results in a steeper, less asymmetric shape of the completeness curve at higher values of $M_{10}$. The high-$M_{10}$ regime corresponds to the flux-limited selection function, where transiting sources are only granted a detection window if the {\it Gaia}\xspace sky mapper measures a magnitude $G_\mathrm{onboard}<20.7$ for that particular transit \citep{gaiaMission}. The low-$M_{10}$ regime corresponds to areas of the sky where crowding plays a major role in the selection function. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/obs_and_model_four_panels.pdf} \caption{{\it Gaia}\xspace's catalogue completeness $S(G~|~M_{10})$ as a function of $G$ magnitude for a given completeness parameter $M_{10}$. Top left: empirically determined {\it Gaia}\xspace $G$ completeness derived from the comparison with DECaPS photometry in thirteen DECaPS patches that are each colour-coded by their $M_{10}$. Top right: model completeness, from Eq.~\ref{eq:sigmoid}, for the corresponding $M_{10}$ values. Bottom left: empirically determined completeness derived from the comparison with DECaPS photometry in 105 DECaPS patches with $M_{10}$ = 19.4, 19.8, 20.2, 20.6, and 21 (within 0.01\,mag). The dispersion at a given $M_{10}$ corresponds to the scatter seen in Fig.~\ref{fig:c_vs_M10_magRanges} (right panel) and the residuals in Fig.~\ref{fig:residuals_tworanges}. Bottom right: model completeness for these five values of $M_{10}$.
} \label{fig:obs_and_model} \end{figure*} To capture the variation of $x$, $y$, and $z$ with $M_{10}$ through these two regimes, we model them with a broken-slope relation, with the same location $M_{break}$ of the break for all three: \begin{equation} x(M_{10})=\begin{cases} a_x M_{10} + b_x & \text{if $M_{10} < M_{break}$}\\ c_x M_{10} + (a_x - c_x ) M_{break} + b_x & \text{otherwise} \end{cases} \end{equation} \begin{equation} y(M_{10})=\begin{cases} a_y M_{10} + b_y & \text{if $M_{10} < M_{break}$}\\ c_y M_{10} + (a_y - c_y ) M_{break} + b_y & \text{otherwise} \end{cases} \end{equation} \begin{equation} z(M_{10})=\begin{cases} a_z M_{10} + b_z & \text{if $M_{10} < M_{break}$}\\ c_z M_{10} + (a_z - c_z ) M_{break} + b_z & \text{otherwise} \end{cases} \end{equation} The resulting hierarchical model has ten free hyperparameters (three for each of $x$, $y$, and $z$, plus the location of the break). We add a final free parameter $\sigma$ representing the noise on the observed completeness profiles. The noise is assumed to be Gaussian and constant with magnitude. This is a rough approximation, and here $\sigma$ acts as a nuisance parameter rather than a model of the noise. The corresponding log-likelihood is, up to an additive constant: \begin{equation} -n \log \sigma - \frac{1}{2 \sigma^2} \sum_{i=1}^{n} (\mathrm{obs}_i - \mathrm{pred}_i)^2 \end{equation} \noindent where $n$ is the total number of data points: the source count ratios in 40 magnitude bins $\times$ 2,906 patches. We maximise the log-likelihood with the Markov chain Monte Carlo sampler \texttt{emcee} \citep{ForemanMackey13} and explore the parameter space with 32 walkers for 10,000 steps each. We impose that $a_x$, $c_x$, and $c_z$ must be positive, $c_y$ must be negative, $\sigma$ must be between 0 and 1, and $M_{break}$ between 19 and 21. The priors on the other parameters are left unbounded. The sampling takes about two hours on an 8-core laptop.
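The three broken-slope relations share one functional form, continuous at the break by construction of the second intercept. A minimal sketch (plain Python), shown here with the median fitted values for the $x$ parameter from Table~\ref{tab:mcmc}:

```python
def broken_slope(m10, a, b, c, m_break):
    """Two-regime linear relation in M10; the intercept of the second
    branch, (a - c) * m_break + b, enforces continuity at the break."""
    if m10 < m_break:
        return a * m10 + b
    return c * m10 + (a - c) * m_break + b

# Median fitted values for x(M10) from Table 1: a_x, b_x, c_x, M_break.
a_x, b_x, c_x, m_break = 0.985, 0.649, 0.693, 20.519

# Both branches agree at the break:
assert abs(broken_slope(m_break - 1e-9, a_x, b_x, c_x, m_break)
           - broken_slope(m_break, a_x, b_x, c_x, m_break)) < 1e-6

# The inflection point closely tracks M10 itself:
print(broken_slope(21.0, a_x, b_x, c_x, m_break))  # ~21.19
```

The same helper applies to $y(M_{10})$ and $z(M_{10})$ with their respective coefficients.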
We discard the first 1,000 iterations as burn-in\footnote{The resulting chain is 70 to 90 times longer (depending on the parameter) than the autocorrelation time estimated by \texttt{emcee}.}. We provide the median of the posterior samples for each parameter in Table~\ref{tab:mcmc}. The final relations between the parameters of the sigmoid and $M_{10}$ are shown in Fig.~\ref{fig:emcee_hyperParams_vs_M10}. \begin{table} \begin{center} \caption{ \label{tab:mcmc} Parameters for the selection function $S(G~|~M_{10})$ in Eq.~\ref{eq:sigmoid}.} \begin{tabular}{ c c c c } \hline \hline parameter & median & \multicolumn{2}{c}{uncertainty} \\ \hline $a_x$ & 0.985 & +0.002 & -0.001 \\ $b_x$ & 0.649 & +0.030 & -0.031 \\ $c_x$ & 0.693 & +0.001 & -0.002 \\ \hline $a_y$ & -0.004 & +0.003 & -0.003 \\ $b_y$ & 0.223 & +0.060 & -0.051 \\ $c_y$ & -0.093 & +0.002 & -0.002 \\ \hline $a_z$ & 0.006 & +0.003 & -0.003 \\ $b_z$ & 0.034 & +0.068 & -0.060 \\ $c_z$ & 0.351 & +0.004 & -0.004 \\ \hline $M_{break}$ & 20.519 & +0.002 & -0.001 \\ \hline $\sigma$ & 0.02060 & +0.00004 & -0.00004 \\ \hline \hline \end{tabular} \tablefoot{The parameters for Eq.~\ref{eq:sigmoid} are defined via Eqs.~2--4, and were obtained via MCMC. We adopt the median of the posterior chain as the best value, and state the 16$^{th}$ to 84$^{th}$ percentile confidence interval. Here, $\sigma$ is a nuisance parameter in the fitting procedure. The corner plot of the full posterior chain is shown in Fig.~\ref{fig:cornerplot_11}. } \end{center} \end{table} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/params_vs_M10.pdf} \caption{ Relation between the parameters of the sigmoid and $M_{10}$ in Eq.~\ref{eq:sigmoid}, for 500 samples from the MCMC chain: the inflection point, $x(M_{10})$ -- Eq.~2, on the left; the inverse slope, $y(M_{10})$ -- Eq.~3, in the centre; and the skewness, $z(M_{10})$ -- Eq.~4, on the right. The magnitude of 50\% completeness, $x(M_{10})$, is well approximated by $M_{10}$.
Fields with fainter $M_{10}$ have a steeper (smaller $y(M_{10})$) and more symmetrical ($z(M_{10})$ towards 1) selection function. Examples of resulting sigmoids are shown in Fig.~\ref{fig:obs_and_model} for a range of $M_{10}$ values. } \label{fig:emcee_hyperParams_vs_M10} \end{figure*} Figure~\ref{fig:residuals_tworanges} shows the mean and dispersion of the residuals, for two ranges of $M_{10}$. The prediction is most precise where the model predicts completeness of 0 or 100\%. Where the prediction is least precise, the dispersion of the residuals reaches about 5\%. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/residuals_tworanges.pdf} \caption{ Mean completeness residuals, i.e. model-predicted minus observed completeness, as a function of magnitude, for patches in two different ranges of $M_{10}$. The shaded areas correspond to the 16th to 84th percentile and 5th to 95th percentile intervals. As expected, the residuals are largest near 50\% completeness and smaller in the highly complete or dramatically incomplete regimes. } \label{fig:residuals_tworanges} \end{figure*} Figure~\ref{fig:workflow} summarises the workflow and how we use the hyperparameters to predict the completeness as a function of magnitude at any sky position. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figures/workflow.pdf} \caption{ Summary of the workflow used to build the selection function $S(G~|~M_{10})$ (Eq.~\ref{eq:sigmoid}) as a function of sky position and magnitude $G$. } \label{fig:workflow} \end{figure} \section{Testing the model} \label{sec:testing} \subsection{With more DECaPS data} We verify the predictions of our model by applying it to regions of the DECaPS footprint that were not used for the fitting step. The $2^{\circ} \times 2^{\circ}$ field of view shown in Fig.~\ref{fig:map_prediction_330} was chosen to straddle the boundary between a stripe that received a large number of visits and an adjacent area with far fewer transits.
The $M_{10}$ parameter was mapped by computing the median $G$ magnitude of stars with ten or fewer \texttt{astrometric\_matched\_transits} in spatial bins of $2.4 \times 2.4$\,arcmin. The main diagonal feature, splitting the field of view in two, is due to the {\it Gaia}\xspace scanning law. The finer structure is shaped by patchy dust extinction. \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{figures/map_prediction_330_31.pdf} \caption{ Comparison of the direct, empirical and model-predicted completeness maps, illustrated at $G\sim 21$. Top left: map of the direct completeness estimate, i.e. the ratio of source densities in {\it Gaia}\xspace and DECaPS in the magnitude range $20.9 < G < 21.1$. Top right: map of the quantity $M_{10}$ used to predict the model completeness. Bottom left: completeness at $G=21$ predicted from the $M_{10}$ map and the model of Eq.~\ref{eq:sigmoid}. Bottom right: map of the difference between the predicted and observed completenesses. Note that, without using any external information, the model-predicted completeness map (bottom left) is effectively a de-noised version of the empirical completeness map (top left). } \label{fig:map_prediction_330} \end{figure*} The model-predicted completeness map (bottom left of Fig.~\ref{fig:map_prediction_330}) obtained from the $M_{10}$ map (top right) is less noisy than the map obtained directly from source count ratios (top left). However, as discussed in Sect.~\ref{sec:improvements}, the use of $M_{10}$ as the sole predictor of completeness can lead to local biases of a few per cent (within the amplitude of the residuals shown in Fig.~\ref{fig:residuals_tworanges} and the bottom-left panel of Fig.~\ref{fig:obs_and_model}). In Fig.~\ref{fig:map_prediction_330} this leads to slightly overestimating the completeness of the most complete area (with $M_{10}>21.1$).
\subsection{HST data of globular clusters} \label{sec:HST} The cores of globular clusters (GCs) are among the most challenging regions for {\it Gaia}\xspace due to their high densities. In some particularly dense clusters, the completeness at $G$=18 is close to zero. The data (presented in Sect.~\ref{sec:data}) contains observations of the inner 3\,arcmin of 26 GCs. We split each field of view into a core (inner 1.5\,arcmin) and a surrounding area. Since these objects are very dense and the field of view is smaller than the spatial binning we used in our model calibration, fifteen cluster regions have values of $M_{10}$ below the range (19.11 to 21.23) covered by the calibration set. We show in Fig.~\ref{fig:c_vs_M10_GCs} that our model can predict the completeness even when extrapolated to these crowded, low-$M_{10}$ fields. The extrapolation only seems to fail, by $\sim$20\%, in the most extreme case of crowding, which corresponds to the inner 1.5\,arcmin of Omega~Centauri. As an example, we map the completeness prediction of the globular cluster NGC~1261 in Fig.~\ref{fig:map_ngc_1261}. Our model correctly identifies the regions of 100\% and 0\% completeness. The intermediate regions appear as a ring of noise in the residuals map (bottom-right panel of Fig.~\ref{fig:map_ngc_1261}), due to the small spatial binning and the narrow magnitude range. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{figures/c_vs_M10_GCs.pdf} \caption{ Completeness of {\it Gaia}\xspace relative to HST as a function of $M_{10}$ in five chosen magnitude ranges, for an extreme crowding regime: the cores and outskirts of 26 globular clusters. The lines show the expected completeness according to our model. Remarkably, our model (Eq.~\ref{eq:sigmoid}) provides unbiased completeness predictions based on {\it Gaia}\xspace information alone.
} \label{fig:c_vs_M10_GCs} \end{figure} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{figures/map_ngc_1261.pdf} \caption{ Comparison of the direct, empirical and model-predicted completeness maps, illustrated for the globular cluster NGC~1261. Top left: map of the ratio of the number of sources in {\it Gaia}\xspace and HST in the magnitude range $20.9 < G < 21.1$. Top right: map of $M_{10}$ used to predict the completeness. Bottom left: predicted completeness at $G=21$. Bottom right: map of the difference between the predicted and observed completeness: the variance is largest in the intermediate completeness regime (see Fig.~\ref{fig:residuals_tworanges}), producing a ring-like structure in the residuals map. } \label{fig:map_ngc_1261} \end{figure*} \subsection{Comparison to the Gaiaverse model} \label{sec:gv} We compare our predictions with the model of \citet{Everall22gaiaverse5} (hereafter EB22), which itself is the {\it Gaia}\xspace DR3 update of the model developed by \citet{Boubert20gaiaverse2} for DR2. Their predictions do not rely on comparisons to reference data but on a model of the {\it Gaia}\xspace scanning law and of the detection efficiency as a function of magnitude. We show the all-sky map of completeness at $G$=21 predicted by both models in Fig.~\ref{fig:comparisons_GV_21}. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{figures/comparisons_GV_21_large.pdf} \caption{ Global comparison of the completeness maps predicted by the \emph{ab initio} completeness model \citep[][top panel]{Everall22gaiaverse5}, and our empirically derived $M_{10}$-based completeness model (bottom). The overall morphology of the two maps is similar, but our empirical completeness model implies far greater incompleteness (at $G$=21), especially in the regions of high source densities.
} \label{fig:comparisons_GV_21} \end{figure*} \citet{Boubert20gaiaverse2} point out that modelling the effects of crowding is a complex task, as one can only know the observed density of sources, while crowding depends on the true density (including the sources missing from the catalogue). For this reason, the effect of globular clusters or high-density regions near the Galactic centre is more clearly visible in our model, which naturally accounts for crowding via the $M_{10}$ parameter. The most striking difference between the two models is that EB22 predict much higher completeness even in non-crowded regions, with an essentially 100\% complete catalogue at $G$=21 across most of the sky, while our model predicts that 100\% completeness at this magnitude is only achieved in the regions most favoured by the scanning law. This is supported by comparisons to the DECaPS data (including those used to calibrate the model), which show that most of the Galactic plane is only 60--80\% complete at this magnitude. A likely explanation for this discrepancy is that EB22 overestimate faint-source detection probabilities. Figure~7 of EB22 shows that for sources with $G$=21, the reconstructed detection probability is $\sim$30\%, which translates to a 90\% probability of obtaining at least 5 detections after 25 scans (the number for the least visited regions in DR3) and a 99.5\% probability after 40 scans (the median number for the whole sky). The EB22 model estimates detection efficiencies from the photometric time series published in the {\it Gaia}\xspace archive for variable stars. This sample is likely to be biased towards stars with high-quality measurements and low photometric errors. A more realistic estimate of the range of detection efficiencies in a given region can be obtained from the individual transit data of all {\it Gaia}\xspace sources. This data is not publicly available and will be used in a future publication (Castro-Ginard et al., in prep.).
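The detection-count arithmetic quoted above is a simple binomial tail. The sketch below (plain Python) closely reproduces the quoted probabilities, assuming independent transits, each with the $\sim$30\% per-transit detection probability read off Figure~7 of EB22:

```python
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): probability of at least k
    detections out of n independent transits."""
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

p_detect = 0.30  # per-transit detection probability at G = 21 (EB22)
print(p_at_least(5, 25, p_detect))  # ~0.91: least visited regions (25 scans)
print(p_at_least(5, 40, p_detect))  # ~0.997: median sky coverage (40 scans)
```

Since five astrometric matched transits are the minimum for catalogue inclusion, these tail probabilities directly translate into the near-complete catalogue that the EB22 model predicts at $G=21$.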
\section{Discussion} \label{sec:discussion} \subsection{Limits and potential improvements} \label{sec:improvements} With its three parameters, the generalised sigmoid functional form defined in Equation~\ref{eq:sigmoid} turns out to be sufficiently flexible to approximate the observed {\it Gaia}\xspace-to-DECaPS count ratio as a function of magnitude in any region to within two per cent. Given the limited sky coverage of our ground-truth catalogue (DECaPS, which only covers $\sim$7\% of the celestial sphere), we need to predict the values of Eq.~\ref{eq:sigmoid}'s three parameters using the quantity $M_{10}$ defined in Sect.~\ref{sec:dependencies} and computed from the {\it Gaia}\xspace data itself. The scatter observed in the right panel of Fig.~\ref{fig:c_vs_M10_magRanges} varies with $M_{10}$ and with $G$ magnitude, and illustrates the limitations of predicting the selection function $S(G~|~M_{10})$ with $M_{10}$ as the sole predictor of completeness. We could not identify a single quantity providing better precision than $M_{10}$, which encodes the combined effect of the scanning law and crowding. Nonetheless, it may be possible to establish second-order corrections based on other {\it Gaia}\xspace-derived quantities. The detection probability as a function of magnitude likely depends on the magnitude distribution of sources in a given field of view, not just on their total number. We investigated the residuals of our model but only found hints of additional correlations between completeness and the total number of scans, or completeness and observed source density, in some restricted ranges of $M_{10}$ and $G$ magnitude. Establishing the right functional form and choice of dependency for such ad hoc corrections would be a difficult task. One might be tempted to follow a machine-learning approach and let the machine determine the most relevant predictors of the {\it Gaia}\xspace-to-DECaPS count ratio.
This would, however, carry a substantial risk of overfitting, as many of the correlations in the reference data cannot reliably be generalised to the entire sky unless they are supported by some understanding of the instrumental pipeline. A potential improvement to the $M_{10}$ proxy could be to characterise the entire distribution of \texttt{astrometric\_matched\_transits} with magnitude $G$ (illustrated in Fig.~\ref{fig:amt_vs_Gmag} for five chosen patches), rather than just its value at the faint end. The slope and shape of the drop in the number of matched transits may contain information on the level and type of crowding affecting the observations. Another possible improvement is to investigate whether sky areas covered by a broader range of scanning angles are more likely to be complete, since the sources missed by {\it Gaia}\xspace are more likely to be different at each visit \citep{gaiaMission, Pancino17}. The dispersion in scanning angles over a given area could therefore be an additional parameter in the empirical description of the selection function. This quantity is available for each source in the {\it Gaia}\xspace catalogue as \texttt{scan\_direction\_strength\_k2}, but testing its validity as a secondary predictor of completeness would be difficult in the present context because the regions with the densest clustering of scanning angles are located outside the DECaPS footprint \citep[see e.g. Fig. 1a in][]{Everall21gaiaverse4}. Using the DECaPS DR2 release \citep{Saydjari22}, which more than doubles the survey area, could mitigate this problem in future work. Finally, the model constructed in this study assumes that the ($x$,$y$,$z$) parameters of the sigmoid are related to $M_{10}$ via a broken-slope relation, with a total of ten free hyperparameters.
A more complex model (for instance, with more breaks) would decrease the residuals shown in Fig.~\ref{fig:residuals_tworanges} (and smooth out the kink near $G\sim20.2$ in its right panel), but unless the increase in complexity can be justified by some knowledge of the instrumental behaviour of {\it Gaia}\xspace, a simpler model is more likely to be valid outside the DECaPS footprint. \subsection{Dust extinction makes {\it Gaia}\xspace more complete} A natural but perhaps counter-intuitive effect of interstellar extinction is to increase the completeness at a given apparent magnitude. Foreground sources of a given apparent magnitude $G$ are more easily detected when projected against a ``dark'' background. Of course, dust extinction still reduces the probability of {\it Gaia}\xspace catalogue membership for sources of a given set of \emph{physical} properties and distance. Given that the selection function must be phrased in terms of \emph{observables}, modelling sources as a function of distance and absolute magnitude requires a 3D extinction map. \subsection{Arguments of the selection function} In this study, we expressed the {\it Gaia}\xspace source catalogue selection function as a function of magnitude and position $(G,\ell,b)$. We find no evidence that this fundamental {\it Gaia}\xspace selection function depends significantly on source colour: in a given part of the sky, two sources with the same $G$ magnitude appear to have equal probabilities of being included in {\it Gaia}\xspace, regardless of their colour. This result is not surprising, because the {\it Gaia}\xspace sky mapper and the astrometric instruments on board the spacecraft operate in the $G$ band. We point out that, due to strong correlations between the observables, investigating the chromaticity of the selection function is a much more complex task than simply expressing detection rates as a function of colour.
For astrophysical reasons, red stars tend to be intrinsically fainter than blue stars. Interstellar extinction acts in the same direction, making sources appear both fainter and redder. On the other hand, areas of the sky heavily obscured by dust are redder but also more complete, due to the background being less crowded in the magnitude range where {\it Gaia}\xspace operates. \subsection{Selection function for subsets of the {\it Gaia}\xspace catalogue} This paper only addresses the completeness of the sample of {\it Gaia}\xspace catalogue entries with a published position and $G$ magnitude, establishing the selection function denoted $S_{\ensuremath{\mathcal{C}}}^{\mathrm{parent}}(G,\ell,b)$ in the notation of \citet{Rix21}. In practice, most users will be interested in comparing other {\it Gaia}\xspace quantities with theoretical models, such as observed $G_{BP}$ and $G_{RP}$ fluxes, parallaxes, proper motions, or more advanced data products provided by the {\it Gaia}\xspace pipelines such as astrophysical parameters \citep{Creevey22cu8}. It is not clear whether the approach used in this study is suitable for selecting further, more restricted subsets of the {\it Gaia}\xspace data. First, $M_{10}$ might not be a good predictor of the completeness of {\it Gaia}\xspace subsets, say stars with spectra from the radial velocity spectrometer \citep[RVS;][]{gaiaMission}, because different instruments on board the spacecraft have different crowding limits: 1,050,000 sources per square degree for the astrometric instrument, 750,000 for the BP/RP spectrographs, and 35,000 for the RVS. It may, however, be possible to construct equivalent quantities to characterise particular subsets, e.g. an equivalent of $M_{10}$ for {\it Gaia}\xspace sources with BP/RP photometry. Second, it is not clear that the generalised sigmoid function (Eq.~\ref{eq:sigmoid}) is a good functional form for the selection functions of various {\it Gaia}\xspace subsets.
Third, the selection function of some subsets will depend on more than just $G$ magnitude and sky position. For instance, \citet{Everall20seestar} and \citet{Rybizki21rvs} express the RVS and $\texttt{ruwe}<1.4$ completeness as functions of ($G-G_{RP}$). For the construction of more complex selection functions, we refer the reader to \citet{Rix21}, who provide recommendations on how to construct the sample function $S_{\ensuremath{\mathcal{C}}}^{\mathrm{sample}}(\ensuremath{\mathbf{q}})$ of a given subset of the {\it Gaia}\xspace data selected on attributes $\ensuremath{\mathbf{q}}$. In general, an overall selection function can be approximated as a product reflecting the different Boolean steps in the sample selection: \begin{equation} S_{\ensuremath{\mathcal{C}}}(\ensuremath{\mathbf{q}},G,\ell,b) = S_{\ensuremath{\mathcal{C}}}^{\mathrm{sample}}(\ensuremath{\mathbf{q}}) \times S_{\ensuremath{\mathcal{C}}}^{\mathrm{parent}}(G,\ell,b) \end{equation} \section{Summary and conclusion} \label{sec:conclusion} This study is part of a paper series by the GaiaUnlimited project that aims to characterise the {\it Gaia}\xspace selection function and provide the astronomical community with corresponding data and tools. This paper presents an analytical model of the {\it Gaia}\xspace DR3 completeness as a function of observed $G$ magnitude and position on the sky. Our model depends on a single quantity derived from the {\it Gaia}\xspace data itself: the median magnitude $M_{10}$, in a patch of sky, of catalogued sources with \texttt{astrometric\_matched\_transits}\,\,$\leq 10$. $M_{10}$ reflects the elementary processes and decisions made by the {\it Gaia}\xspace pipeline to turn observations into the published {\it Gaia}\xspace catalogue, and naturally accounts for the effects of crowding and the {\it Gaia}\xspace scanning law.
As ground truth to calibrate our model, we rely on the DECaPS survey, which is deeper than {\it Gaia}\xspace and whose pipeline is optimised for high-density fields. We test our predictions against DECaPS and Hubble Space Telescope observations of globular clusters. Our model predicts the observed completeness to within a few per cent. We make the model available as a Python package through the GaiaUnlimited web page, along with documentation and tutorials. The present model only provides a selection function for the {\it Gaia}\xspace DR3 entries with a published $G$ magnitude and sky position. Sub-samples of this catalogue will be characterised in upcoming GaiaUnlimited publications. \section*{Acknowledgments} This work is a result of the GaiaUnlimited project, which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 101004110. The GaiaUnlimited project was started at the 2019 Santa Barbara Gaia Sprint, hosted by the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the {\it Gaia} Multilateral Agreement. A.~R.~C. is supported in part by the Australian Research Council through a Discovery Early Career Researcher Award (DE190100656) and through Discovery Project DP210100018. DECaPS data were retrieved with the \texttt{astro-datalab}\footnote{\url{https://github.com/astro-datalab/datalab/}} Python package, and the {\it Gaia}\xspace data with \texttt{astroquery} \citep{2019AJ....157...98G}.
This work also made use of the Python packages \texttt{astropy} \citep{2018AJ....156..123A}, \texttt{scipy} \citep{2020SciPy-NMeth}, \texttt{astroML} \citep{astroML}, \texttt{MWplot}\footnote{\url{https://pypi.org/project/mw-plot/}}, \texttt{numpy} \citep{harris2020array}, \texttt{plotly} \citep{plotly}, \texttt{healpy}\footnote{\url{http://healpix.sourceforge.net}} \citep{2005ApJ...622..759G,Zonca2019}, \texttt{pandas} \citep{mckinney2010data}, and \texttt{matplotlib} \citep{Hunter:2007}. TCG acknowledges an extensive use of TOPCAT \citep{Taylor05} and Jupyter notebooks \citep{Kluyver2016jupyter}. \bibliographystyle{aa}
2006.11707
\section{Introduction} Coronal mass ejections (CMEs) are clouds of magnetized plasma that erupt from the Sun's atmosphere and propagate into interplanetary space. They are often accompanied by a large amount of magnetic energy release and can cause extreme space weather events when arriving at the Earth. Studies have revealed that CMEs usually undergo three stages of dynamic evolution: a slow rise, a fast acceleration, and a propagation phase \citep{ZhangJ2001,ZhangJ2004}. The final speed of a CME varies in a wide range of about 100--3500~km~s$^{-1}$\xspace \citep[{\it e.g.},][]{Gopalswamy2009,Lamy2019}, while its main acceleration usually takes place within a few to tens of minutes at low coronal heights \citep[{\it e.g.},][]{ZhangJ2001,Vrsnak2007,Temmer2008,Bein2011,Veronig2018}, where the Lorentz force that accounts for the liftoff of a CME is strong. The launch of a CME is often accompanied by a rapid release of magnetic free energy in the solar atmosphere, in the form of flares that emit radiation across the entire electromagnetic spectrum. The two phenomena are closely coupled through magnetic reconnection. By reconfiguring magnetic field lines, reconnection on the one hand provides the impulsive and vast energy release that is used for plasma heating and particle acceleration in flares \citep[see the review by][and references therein]{Shibata2011}, and on the other hand facilitates the CME acceleration by reducing the tension of the overlying arcade and supplying additional poloidal magnetic flux to the erupting structure \citep[{\it e.g.},][]{Lin2000,Vrsnak2008}. The close relationship between flares and CMEs has been demonstrated in various studies revealing a close temporal correlation between the flare soft X-ray (SXR) flux and the CME velocity profile \citep{ZhangJ2001,ZhangJ2004,Vrsnak2004}, and even a synchronization between the flare hard X-ray (HXR) emission and the CME acceleration \citep{Temmer2008,Temmer2010,BS2012}.
A number of statistical studies \citep[{\it e.g.},][]{Maricic2007,Bein2011,BS2012,Cheng2020} provide further evidence for the close relation between the flare energy release and the CME impulsive acceleration, suggestive of a close link and feedback relationship between the CME dynamics and flare reconnection \citep{Temmer2010,Vrsnak2004,Vrsnak2008,Veronig2018}. The positive feedback between the CME dynamics and the associated flare is established via magnetic reconnection in the current sheet (CS), which is most intense in the flare impulsive phase \citep[for observational signatures of the hot elongated CS, see, {\it e.g.},][]{Cheng2018,Warren2018,ChenB2020}. The vertical CS becomes observationally more prominent in the flare decay phase after the peak of the {\it GOES}\xspace SXR flux \citep[{\it e.g.},][]{LiuR2013}, and can usually be observed above the candle-flame-shaped flare loops that are considered direct evidence of magnetic reconnection \citep[{\it e.g.},][]{Tsuneta1996,Lin2005,Gou2015,Gou2016}. Due to the magnetic tension force, the newly reconnected cusp-shaped field lines quickly retract to form more relaxed round-shaped ones, a process widely known as field-line shrinkage \citep{Forbes1996,Vrsnak2006}. Meanwhile, above the flare arcade, tadpole-like supra-arcade downflows (SADs) and bright supra-arcade downflow loops (SADLs) quickly descend from the reconnection site and merge into the dense flare loop region \citep[{\it e.g.}][]{McKenzie1999,Savage2011,LiuW2013,Innes2014,ChenX2017}. Although the exact physical process is still unclear, it is believed that these features are closely related to the downward outflows of magnetic reconnection. We study the dynamics of a fast CME and its relation to the associated X2.8 flare on 2013 May 13. This event is the second X-class flare from NOAA active region 11748 on that day and has been studied before.
\citet{MO2014} and \citet{SH2014} presented the unusual loop-prominence system in polarized white light that formed after the flare, indicative of high coronal densities. \citet{Gou2019} presented detailed observations of the buildup of the magnetic flux rope and large-scale CME from the coalescence of multiple small-scale plasmoids during the early stage of the flare. \citet{Gou2017} focused on the two distinct episodes of the flare energy release associated with two-step reconnection. The first episode is characterized by the ``standard'' flux rope eruption, and the second one is initiated by the reconnection of a loop leg behind the eruption, which leads to even stronger particle acceleration observed in emissions of high-energy HXRs and $\gamma$-rays. In this letter, we concentrate on the dynamic evolution of the associated CME and find that the two strong episodes of the flare energy release are associated with two distinct phases of CME acceleration. We also find that the distribution of the CME acceleration and the flare nonthermal emission differs between the two phases, in contrast to the standard flare model in which they are supposed to be synchronized. To our knowledge, this is the first time that two distinct episodes of impulsive acceleration have been identified in a fast CME, suggestive of a varying energy partition with its associated flare. \section{Data and Instruments} We use the high spatial and temporal resolution EUV imagery (0.$''$6 and 12~s) of the Atmospheric Imaging Assembly \citep[AIA;][]{Lemen2012} on board the Solar Dynamics Observatory \citep[{\it SDO}\xspace;][]{Pesnell2012} to study the dynamic evolution of the eruption in the inner corona, and mainly focus on the 131~\r{A}\xspace channel (primarily the Fe~XXI emission line, with a peak response temperature of $\log T=7.05$).
We use X-ray observations from the Reuven Ramaty High-Energy Solar Spectroscopic Imager \citep[{\it RHESSI}\xspace;][]{LinRP2002} and the Gamma-ray Burst Monitor (GBM) on board the Fermi Gamma-ray Space Telescope (Fermi hereafter) to study the energy release of the associated flare. The subsequent white-light CME is observed by the Solar and Heliospheric Observatory/Large Angle and Spectrometric Coronagraph \citep[{\it SOHO}\xspace/LASCO, C2: 1.5--6~$R_\odot$\xspace, C3: 3.7--30~$R_\odot$\xspace;][]{Brueckner1995}, and the coronagraphs \citep[COR1: 1.5--4~$R_\odot$\xspace and COR2: 2.5--15~$R_\odot$\xspace;][]{Howard2008} on both the ``Ahead'' and ``Behind'' satellites of the Solar Terrestrial Relations Observatory \citep[{\it STEREO};][{\it STA}\xspace\ and {\it STB}\xspace\ hereafter]{Kaiser2008}, which were separated from the Earth by about 136.3$\degree$ and 141.6$\degree$, respectively, on 2013 May 13. In addition, radio observations obtained by {\it STB}\xspace/WAVES \citep{Bougeret2008} are also included. \section{Results}\label{sec:obs} \subsection{Event Overview} \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{f1.eps} \caption{\small Solar eruption on 2013 May 13 observed by {\it SOHO}\xspace/LASCO and {\it SDO}\xspace/AIA 131~\r{A}\xspace. In panel (a), the white circle indicates the solar limb, and the rectangle shows the field of view (FOV) of panels (b,c).} \label{fig:ov} \end{figure} The event under study originates from NOAA active region 11748 near the northeast solar limb on 2013 May 13 (Figure~\ref{fig:ov}).
It manifests as the eruption of a magnetic flux rope as observed in the {\it SDO}\xspace/AIA 131~\r{A}\xspace filter (see Figure~\ref{fig:ov}, also \citealp{Gou2019} for details), the bottom of which is connected to the cusp-shaped flare loops underneath by the vertical CS, in good accordance with the standard model of solar eruptions ({\it e.g.}, \citealp{Carmichael1964,Sturrock1966,Hirayama1974,KP1976}; see also reviews by \citealp{Shibata2011,Holman2012}). The eruption produces a fast halo CME with a velocity of $\sim$1850~km~s$^{-1}$\xspace (according to the {\it SOHO}\xspace/LASCO CME catalog, \url{http://cdaw.gsfc.nasa.gov/CME_list/}), and an intense long-duration X2.8 flare that starts at 15:48~UT and peaks at 16:05~UT. This flare is associated with strong particle acceleration as observed in emissions of high-energy HXRs and $\gamma$-rays (see details in \citealp{Gou2017}). Here we concentrate on the dynamic evolution of the CME in the solar corona, especially on the impulsive acceleration process and its relation to the flare energy release and high-energy particle acceleration. \subsection{CME Dynamics} \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{f2.eps} \caption{\small GCS reconstruction of the CME (green) and the shock (yellow) at $\sim$16:54 UT using simultaneous observations by {\it STB}\xspace/COR2 (left), LASCO/C3 (middle), and {\it STA}\xspace/COR2 (right). The inset in the top middle panel shows the positions of {\it STA}\xspace (A) and {\it STB}\xspace (B) relative to the Sun (S) and the Earth (E) on 2013 May 13.} \label{fig:gcs} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\textwidth]{f3.eps} \caption{\small CME kinematics (scaled by the left y-axes) compared with flare X-ray fluxes (scaled by the right y-axes). The green cross symbols in panel (a) indicate the height of the flux rope front observed in AIA 131~\r{A}\xspace images.
The colored diamonds in panel (a) indicate the heights of the CME apex (dark green) and the lower boundary of the CME (purple) measured in the GCS model. All heights are with respect to the solar center. The dark green dots in panels (b,c) indicate direct numerical derivatives of the measured data points in panel (a). The dark green curves in (a--c) are smoothed kinematic profiles, with errors indicated by light green shadows. {\it GOES}\xspace, {\it RHESSI}\xspace and Fermi X-ray fluxes are plotted in panels (b,c) in different colors as indicated by the legends. Fermi GBM observations from 16:04--16:30~UT are added to fill in the {\it RHESSI}\xspace data gap between 16:07--16:16~UT when it crosses the South Atlantic Anomaly (SAA). The two vertical dashed lines mark the two acceleration phases of the CME. } \label{fig:kine} \end{figure} {\it SDO}\xspace/AIA observes the eruption in the inner corona at the northeast limb, which allows us to study its dynamics with the least projection effects. We measure the leading front of the magnetic ejecta in AIA 131~\r{A}\xspace images to obtain its height-time evolution. The white-light CME in the outer corona is observed after 15:55~UT by the coronagraphs on board {\it STA}\xspace, {\it STB}\xspace\ and {\it SOHO}\xspace from three different viewpoints (Figure \ref{fig:gcs}). This fast CME drives a shock in front of it. Based on the stereoscopic observations, we use the graduated cylindrical shell \citep[GCS;][]{Thernisien2006,Thernisien2009} model to reconstruct the three-dimensional morphology of the eruption (Figure \ref{fig:gcs}). The model assumes a flux rope structure of the CME, and it is determined by three geometric and three positional parameters: the aspect ratio $\kappa$, the half-angle $\alpha$ and the tilt of the croissant representing the CME, the longitude and the latitude of the source region, and the height of the CME apex.
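For bookkeeping, the six GCS parameters just listed can be collected in a small container. This is a hypothetical helper for illustration, not part of any released GCS code, and the example values are invented:

```python
from dataclasses import dataclass

@dataclass
class GCSParams:
    """Parameters of the graduated cylindrical shell (GCS) model."""
    kappa: float        # aspect ratio of the shell
    alpha_deg: float    # half-angle of the croissant, degrees
    tilt_deg: float     # tilt angle of the croissant, degrees
    lon_deg: float      # longitude of the source region, degrees
    lat_deg: float      # latitude of the source region, degrees
    height_rsun: float  # height of the CME apex, in solar radii

# Invented example values, purely to show how a fit result might be stored.
example_cme = GCSParams(kappa=0.3, alpha_deg=30.0, tilt_deg=0.0,
                        lon_deg=100.0, lat_deg=15.0, height_rsun=10.0)
```

Keeping the six numbers in one typed record makes it straightforward to track a time series of fits, one record per coronagraph frame.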
We also model the shock in front of the CME by geometrically reproducing a sphere ($\alpha$ = 0, $\kappa$ = 1). Figure \ref{fig:kine}(a) shows the obtained height-time evolution of the CME (measured along the apex of the GCS fitting), as well as the lower boundary of the flux rope in the model, which basically corresponds to the upper tip of the CS underneath the erupting CME. We combine the height-time measurements to study the complete kinematics of the CME in the corona (Figure \ref{fig:kine}). To derive the velocity and acceleration profiles, we first smooth the height-time data and derive the first and second time derivatives. The smoothing algorithm is based on the method described in \citet{Podl2017} and applied in \citet{Dissauer2019}, extended toward non-equidistant data. From the obtained acceleration profiles, we further interpolate to equidistant data points based on the minimization of the second derivatives and reconstruct the corresponding velocity and height profiles by integration. We also obtain the errors of the kinematic profiles by assuming that the measurement errors of the heights amount to 6 AIA pixels in the inner corona and 3\% of the GCS heights in the outer corona. Figure \ref{fig:kine}(b,c) shows the obtained velocity and acceleration profiles (dark green curves with errors in light green), which are well aligned with the direct numerical derivatives of the data points (dark green dots). The magnetic ejecta starts to accelerate at $\sim$15:41~UT, several minutes earlier than the {\it GOES}\xspace flare start (15:48~UT), suggestive of an ideal instability triggering the eruption. The CME achieves its highest velocity of 2190($\pm$158)~km~s$^{-1}$\xspace at 16:25~UT ({\it i.e.}, 20~min after the peak of the {\it GOES}\xspace SXR flux) at a height of 6.07($\pm$0.26)~$R_\odot$\xspace (with respect to the solar center).
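The derivative step for non-equidistant height-time points can be illustrated with \texttt{numpy.gradient}, which supports non-uniform sample spacing. This is only a minimal stand-in for the regularised smoothing of \citet{Podl2017} used here, applied to a synthetic toy profile with a known constant acceleration:

```python
import numpy as np

# Synthetic, non-equidistant height-time points (toy profile with a constant
# acceleration of 1.5 km/s^2); real AIA + GCS data would replace these.
t = np.array([0., 120., 300., 420., 660., 900., 1260.])  # time, s
h = 7.0e5 + 0.5 * 1.5 * t**2                             # height, km

# np.gradient handles non-uniform spacing, giving second-order-accurate
# first and second time derivatives at the interior points.
v = np.gradient(h, t)   # velocity, km/s
a = np.gradient(v, t)   # acceleration, km/s^2
```

For the toy quadratic profile the interior points recover the input acceleration of 1.5~km~s$^{-2}$ essentially exactly; the edge points use one-sided differences and are less accurate, which is one reason a regularised smoothing scheme is preferable for real, noisy measurements.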
For comparison, the CME velocity at the flare peak time ({\it i.e.}, 16:05~UT) is 1607($\pm$68)~km~s$^{-1}$\xspace at a height of 2.58($\pm$0.09)~$R_\odot$\xspace. The acceleration of the CME exhibits two distinct phases, an impulsive peak (15:41--16:00~UT) followed by an enhanced gradual phase (16:00--16:25~UT), as marked by the vertical dashed lines in Figure \ref{fig:kine}. The first phase of impulsive acceleration achieves a peak value of 4.88($\pm$0.52)~km~s$^{-2}$\xspace at 15:52~UT, when the CME is at a height of 1.15($\pm$0.01)~$R_\odot$\xspace. The second phase undergoes an extended acceleration of several hundred m~s$^{-2}$ (up to 0.73($\pm$0.31)~km~s$^{-2}$\xspace at 16:11~UT), and it raises the CME velocity from 1387($\pm$100)~km~s$^{-1}$\xspace at a height of 1.93($\pm$0.07)~$R_\odot$\xspace to 2190($\pm$158)~km~s$^{-1}$\xspace at 6.07($\pm$0.26)~$R_\odot$\xspace, which is considerable compared to the velocity increment during the first phase. We study the mechanism of the CME acceleration and give interpretations in terms of the flare reconnection in the following sections. \subsection{Flare Energy Release} \subsubsection{Two-Step Reconnection} \begin{figure}[htbp] \centering \includegraphics[width=0.95\textwidth]{f4.eps} \caption{\small Two acceleration phases of the CME. a--c) {\it SDO}\xspace/AIA 131 and 304~\r{A}\xspace images showing the flux rope eruption, loop-leg inflow and upward outflow of the second-step reconnection. d,e) Stack plots derived from {\it SDO}\xspace/AIA 131 and 304~\r{A}\xspace imagery along slits S1 and S2 as shown in panels (a,b) (the starts are labeled as ``0''). f,g) Dynamic radio spectra observed by {\it STB}\xspace/WAVES. The yellow and light green curves in panels (d--f) (scaled by the right y-axes) represent the {\it GOES}\xspace plasma temperature, {\it GOES}\xspace 1--8~\r{A}\xspace flux, CME velocity and acceleration profiles, respectively.
{\it RHESSI}\xspace photon fluxes are plotted in panel (e) on arbitrary y-axes. The white arrows in panel (f) mark several injections of type III radio bursts. The vertical dashed line in panels (d--g) marks the onset of the second-step reconnection. } \label{fig:rec} \end{figure} The {\it GOES}\xspace X2.8 flare associated with the CME experiences two distinct episodes of energy release attributed to a two-step magnetic reconnection process, as reported by \citet{Gou2017}. Here we summarize some of the observational evidence in Figure \ref{fig:rec}(a--e) so as to compare with the CME dynamics. One can see that after the first step of flux rope eruption, which exhibits typical characteristics of an eruptive flare--CME event, the second-step reconnection is directly imaged by {\it SDO}\xspace/AIA at $\sim$16:00~UT, manifesting as the disappearance of the cool loop-leg inflow in 304~\r{A}\xspace and simultaneous fast outflows of hot plasma indicative of reconnection outflow jets. The second episode of energy release is associated with even stronger particle acceleration than the first one, evidenced by stronger bursts of HXR emission and even $\gamma$-rays (Figure~\ref{fig:rec}(e)). In addition, {\it STB}\xspace/WAVES observes significant injections and increases of type III radio emission at $\sim$3~MHz (Figure \ref{fig:rec}(f,g)) that coincide with the {\it RHESSI}\xspace HXR bursts after 16:00~UT, indicative of an increased number of accelerated electrons escaping from the Sun. The timing confirms that the second-step reconnection is associated with strong acceleration of electrons, which propagate both downward to the chromosphere to emit HXRs via the nonthermal bremsstrahlung mechanism and upward into interplanetary space to excite fast-drift type III radio bursts.
{\it STB}\xspace/WAVES also observes decameter-hectometric type II radio bursts (Figure~\ref{fig:rec}(f,g); fundamental and harmonic components at frequencies between 16~MHz and 3~MHz from 16:10 to 16:30~UT, and later one component at about 0.9--0.3~MHz from 17:50 to 19:20~UT), which provide evidence for the propagation of the shock driven by the fast CME. We note that the two strong episodes of flare energy release are temporally related to the two phases of CME acceleration. The resultant two-phase evolution of the CME velocity is also associated with a two-episode enhancement of the {\it GOES}\xspace SXR flux and of the flare temperature (Figure~\ref{fig:rec}(d--e)). It implies that the second-step reconnection not only gives rise to another, stronger episode of energy release in the flare, but also contributes an additional phase of CME acceleration, impulsively increasing its speed beyond the main phase. \subsubsection{Ongoing Reconnection in Flare Decay Phase} \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{f5.eps} \caption{\small Flare dynamics in the decay phase. a--f) Snapshots of {\it SDO}\xspace/AIA 131 \r{A} images. {\it RHESSI}\xspace\ HXR sources observed at $\sim$16:18~UT and $\sim$16:32~UT are overlaid in panels (a,b). The plus signs in panel (f) indicate the location of 6--12~keV LT sources in three {\it RHESSI}\xspace observation windows during the flare decay phase. g) {\it GOES}\xspace, {\it RHESSI}\xspace, and Fermi GBM X-ray fluxes. The curves are vertically shifted to avoid overlap. h,i) Stack plots of slit S3 in panel (a); the y-axes indicate the height above the solar limb. The blue plus symbols in panel (h) indicate the heights of the {\it RHESSI}\xspace LT source, labeled with rising speeds of linear fits for each time period. The dotted lines in panels (h,i) indicate the linear fits of various tracks left by the moving loops, labeled with the resultant speed in km~s$^{-1}$\xspace.
The speeds for individual retracting loops are also plotted in panel (g) with blue and purple diamonds, scaled by the right y-axis. An animation is available for the {\it SDO}\xspace/AIA 131~\r{A}\xspace observations and its running difference images from 16:00~UT to 21:00~UT on 2013 May 13.} \label{fig:decay} \end{figure} After peaking at 16:05~UT, the flare experiences a long decay phase that lasts over 4 hours (see Figure \ref{fig:decay} and its animation). During this stage, the vertical CS and the cusp-shaped flare loops underneath become prominent, and the whole post-flare loop system grows higher as observed in {\it SDO}\xspace/AIA 131~\r{A}\xspace images. The growth is also evidenced by the temporal evolution of the {\it RHESSI}\xspace loop-top (LT) source location in Figure~\ref{fig:decay}(f,h), which shows that the post-flare loop system rises with a speed of several~km~s$^{-1}$\xspace throughout the flare decay phase. In contrast to the apparent rise of the loop system, a multitude of individual post-flare loops are observed to contract downward toward the solar surface (see Figure \ref{fig:decay}(d,e) and its animation), indicative of the shrinkage of newly reconnected field lines \citep{Forbes1996,Priest2002,Vrsnak2006}. In the upper CS region, the SADLs are observed to move downward rapidly and merge into the dense flare LT underneath. We place a virtual slit across the flare LT along the CS (Figure~\ref{fig:decay}(a); with a width of 8 AIA pixels) to study in detail the dynamics during the flare decay phase. In the generated stack plots in Figure~\ref{fig:decay}~(h,i), one can see the downward motion of the high-altitude SADLs and the shrinkage of low-altitude flare loops, both of which can be clearly identified until 20:00~UT.
For the former, the speed in the early decay phase is about 1000~km~s$^{-1}$\xspace, higher than many earlier reports \citep[{\it e.g.},][]{Savage2011,Innes2014} and comparable to the typical Alfv\'en speed in the active region corona that determines the local reconnection outflow speed. After 19:00~UT, the speed decreases to $\sim$100~km~s$^{-1}$\xspace, and its temporal evolution (see the blue diamonds in Figure~\ref{fig:decay}(g)) is similar to that of the Alfv\'en speed distribution above solar active regions, which starts to decrease at a height of $\sim$4~$R_\odot$\xspace \citep[{\it e.g.}, see Figure~6 in][]{Mann2003}. For the latter, the speeds are about 20--95~km~s$^{-1}$\xspace, much smaller than those of the SADLs and consistent with earlier measurements \citep[{\it e.g.},][]{LiuW2013}. One can see that these two kinds of motions almost merge together with similar speeds in the late decay phase (Figure~\ref{fig:decay}~(h,i)) as the reconnection site rises high enough. This finding supports the idea that they may correspond to different stages of the contraction of newly reconnected loops, which always shrink fastest at the moment they are formed and released from the reconnection region, and thereafter decelerate when approaching the rising flare loop system \citep[see also][]{LinJ2004,LiuW2013}. All these characteristic features, {\it i.e.}, the dynamic CS, the growth of the post-flare loop system, the fast retraction of SADLs, and the intermittent shrinkage of post-flare loops, provide clear indications that very efficient magnetic reconnection is still in progress during the extended decay phase of the flare. Further strong evidence for ongoing reconnection is the significant energy release in the form of strong HXR bursts that are recorded by both {\it RHESSI}\xspace and Fermi GBM (Figure~\ref{fig:decay}(g)).
One can see that as the SXR flux decreases after the flare peak, Fermi GBM records two groups of HXR bursts at 16:09--16:11~UT and 16:18--16:22~UT, at energies up to 100--300~keV. Similarly, {\it RHESSI}\xspace records a group of several HXR bursts at 16:18--16:22~UT up to 100--300~keV. In addition, {\it RHESSI}\xspace shows HXR bursts occurring until 16:45~UT up to energies of 50--100~keV. By reconstructing X-ray images, we find that the high-energy HXR emissions (50--300~keV) mainly originate from two footpoints of flare loops (Figure \ref{fig:decay}(a,b)), indicative of accelerated electrons moving downward to the chromosphere. Nearly simultaneously with the HXR bursts detected around 16:18~UT, one can see a large increase in the type III radio emission at around 3~MHz (Figure~\ref{fig:rec}(e--g)). It suggests an increased number of accelerated electrons escaping from the Sun along open field lines, which finally merge into a large branch of an interplanetary type III burst (Figure~\ref{fig:rec}(g)). The HXR and radio emissions further indicate that there is still efficient particle acceleration even beyond the flare rise phase, most probably caused by the ongoing magnetic reconnection process. This can nicely explain the second CME acceleration period during the flare decay phase, which is well in line with the extended HXR bursts (Figure \ref{fig:kine}). Namely, the ongoing magnetic reconnection beyond the flare rise phase is still coupled to the impulsive acceleration of the associated fast CME up to a height of $\sim$6 $R_\odot$. \section{Discussion and Conclusion}\label{sec:dis} We investigate the dynamic evolution of a fast CME, which experiences two distinct phases of enhanced acceleration, {\it i.e.}, an impulsive phase with a peak value of around 5~km~s$^{-2}$\xspace and an additional gradual phase with extended acceleration up to $\sim$0.7~km~s$^{-2}$\xspace.
The associated X2.8 flare exhibits two strong episodes of energy release associated with two-step reconnection, which coincide with the two phases of the CME dynamic evolution. Notably, the second phase of flare energy release and high-energy particle acceleration is substantially stronger than the first one and shows nonthermal emissions even in the $\gamma$-ray range. In addition, this long-duration flare reveals clear signs of ongoing magnetic reconnection during its long decay phase, evidenced by efficient particle acceleration in the form of high-energy (up to 100--300~keV) HXR emission, and by prolonged ($>$4~hours) downflows of reconnected loops (SADLs), shrinkage of post-flare loops, and continuous growth of the post-flare loop system observed by both {\it RHESSI}\xspace and {\it SDO}\xspace/AIA. We note that the CME is accelerated as fast as 5~km~s$^{-2}$\xspace during the first phase. This acceleration is among the highest values of CME accelerations ever reported \citep[{\it e.g.}, see statistical results in][]{ZhangJ2006,Vrsnak2007,Bein2011}. Considering that the intense CME acceleration is facilitated by the flare reconnection that converts the magnetic flux confining the eruptive flux rope into the rope's own flux \citep{Temmer2010,Vrsnak2008,Veronig2018}, we suggest that a significant part of the magnetic ejecta in this event is formed during the eruption by reconnection of the overlying magnetic arcade. This generally agrees with observations in \citet{Gou2019} that the large-scale CME builds up from a small-scale seed during its impulsive rising. Also, we note that this is consistent with the recent findings for the X8.2 flare/CME on 2017 September 10, where it was clearly shown that the CME core observed in the white-light coronagraph is due to frozen-in plasma added to the rising flux rope by magnetic reconnection in the current sheet below and not due to the erupting prominence material \citep{Veronig2018}. 
Moreover, the second phase of the CME acceleration in this event is substantial, and to our knowledge this is the first case that features two distinct phases of significant CME acceleration. The peak value during the second phase is $>$0.7~km~s$^{-2}$\xspace, which still falls into the top $\sim$30\% of the main peak accelerations of impulsive flare-associated CMEs in statistical studies \citep[{\it e.g.},][]{Bein2011}. Such an extended acceleration phase contributes $\sim$36\% of the peak CME velocity, even though it is substantially weaker than the first impulsive peak, because the coronal magnetic field magnitude decreases rapidly as the CME rises into the outer corona. Considering that the full-fledged CME carries much more mass than during the first phase, the rate of change of momentum may be comparable. Thus, this second-phase acceleration with high values and a duration comparable to the first phase is distinct from the residual CME acceleration following the main phase \citep{ZhangJ2006,Cheng2010}, which generally exhibits much lower values (these may be positive or negative, with maximum values only up to several tens of m~s$^{-2}$). As a result of the two-phase acceleration, the CME finally reaches its peak velocity at a height of $>$6~$R_\odot$\xspace from the solar center. This lies at the upper end of the range of coronal heights where the main acceleration of impulsive flare-associated CMEs typically ends \citep[{\it e.g.},][]{Bein2011}. We observe that the second-step magnetic reconnection with rapid curve-in of the loop leg not only initiates a stronger episode of the flare energy release than the first one, but also contributes to a second phase of the CME acceleration.
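The quoted $\sim$36\% follows directly from the velocities measured earlier (1387~km~s$^{-1}$ at the onset of the second phase, 2190~km~s$^{-1}$ at its end):

```python
# Velocity increment of the second acceleration phase as a fraction of the
# peak CME velocity, using the measurements quoted in the text.
v_start = 1387.0   # km/s at the onset of the second phase
v_peak = 2190.0    # km/s peak CME velocity
fraction = (v_peak - v_start) / v_peak
print(f"{fraction:.1%}")   # -> 36.7%
```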
Magnetic reconnection beneath the CME enhances its acceleration by reducing the downward tension of the overlying field and at the same time increasing the upward magnetic pressure gradient by supplying additional poloidal magnetic flux to the CME \citep[{\it e.g.},][]{LinJ2004,Vrsnak2008}. We can see a time lag between the second episode of flare energy release shown as high-energy HXR emission (peaking at 16:04~UT) and the second phase of the CME acceleration (centered around 16:10~UT; Figure~\ref{fig:kine}(c)). Considering the role of magnetic reconnection in the CME acceleration by feeding magnetic flux, this would generally correspond to the time that the reconnection outflow jets need to reach the lower part of the erupting flux rope on Alfv\'enic time scales. According to the observation, the reconnection site at the time of the second-step reconnection is located low in the corona, about 20~Mm above the solar limb (see Figure~\ref{fig:rec}, where the loop-leg inflow is swept in and the outflow plasma originates; also details in \citealp{Gou2017}), and the height of the flux rope's lower boundary is measured as $\sim$0.6~$R_\odot$\xspace by the GCS model (Figure~\ref{fig:kine}(a)). If we assume that the distance between these two is 0.5~$R_\odot$\xspace, and that the Alfv\'en speed is of the order of 1000~km~s$^{-1}$\xspace (as inferred from the speeds of SADLs in Figure~\ref{fig:decay}), the time delay is about 6~minutes. This is generally consistent with the observation. Thus, for the second acceleration phase, when the CME runs far out and keeps moving fast, the accelerating effect of feeding new flux to the CME is reduced and delayed. This can also explain why the second phase of the CME acceleration is generally an extended phase of enhanced acceleration, rather than a sharp peak as during the first phase, when the flux rope is still located low in the corona.
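The $\sim$6-minute Alfv\'enic travel-time estimate follows from the assumed numbers (0.5~$R_\odot$\xspace separation, $\sim$1000~km~s$^{-1}$\xspace Alfv\'en speed):

```python
R_SUN_KM = 6.957e5           # solar radius in km
distance = 0.5 * R_SUN_KM    # assumed reconnection-site to flux-rope distance, km
v_alfven = 1000.0            # km/s, of the order of the observed SADL speeds
delay_min = distance / v_alfven / 60.0
print(f"{delay_min:.1f} min")   # -> 5.8 min
```

The result, just under 6 minutes, matches the observed lag between the HXR peak and the second acceleration phase.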
In particular, one can see a different distribution of the CME acceleration and the flare nonthermal emission during the two phases ({\it e.g.}, Figure~\ref{fig:kine}). While the impulsive CME acceleration occurs in the first phase, the flare is associated with much more efficient acceleration of high-energy particles in the second phase. We note that in the second phase, although the CME runs far out, the reconnection still occurs at low altitudes, which is capable of accelerating the large numbers of electrons needed for intense HXR emission. On the other hand, the CME acceleration during the second phase is weaker than what would be expected for such strong HXR emission, which could be attributed either to the weaker Lorentz force at larger coronal heights or to the larger CME inertia that increases with time. The observation thus shows a different scenario from the synchronization between the flare HXR emission and the CME acceleration expected in the standard model, and suggests a different energy distribution between the flare and the CME in the two phases. In conclusion, the two strong episodes of energy release in this flare are associated with two distinct phases of the CME acceleration, and the impulsive CME acceleration occurs at an earlier stage than the peak of the flare nonthermal emission. This unusual two-phase evolution finally produces a very fast CME and an intense long-duration X-class flare with $\gamma$-ray emission, and is suggestive of a coupling between the flare energy release and the CME acceleration during the two phases, but with different energy distributions between the two phenomena. \acknowledgments We thank the {\it SDO}\xspace, {\it RHESSI}\xspace, {\it SOHO}\xspace and \textit{STEREO} teams for the data. T.G., R.L., and Y.W. acknowledge the support by NSFC (Grant No. 11903032, 41761134088, 41774150, and 11925302), CAS Key Research Program (Grant No.
KZZD-EW-01-4), the fundamental research funds for the central universities, and the Strategic Priority Program of the Chinese Academy of Sciences (Grant No. XDB41000000). A.M.V. acknowledges the support by the Austrian Science Fund (FWF): P27292-N20. B.V. and M.D. acknowledge funding from the EU H2020 grant agreement No. 824135 (SOLARNET) and support by the Croatian Science Foundation under the project 7549 (MSOC). H.A.S.R. was supported by the STFC grant ST/P000533/1.
\section{Introduction} In the last two decades we have witnessed an impressive surge in experimental advances pushing the frontier of realising and controlling (effectively) closed non-equilibrium quantum many-body systems.\cite{Bloch-08,ReichelVuletic11,Polkovnikov-11} This is of fundamental importance, as the consequences of quantum many-body physics become directly observable in these systems, and of practical relevance, as such systems might pave the way to future quantum technologies. Two quantities that have recently attracted a tremendous amount of attention are the return amplitude\cite{Silva08} \begin{equation} G(t)=\left\langle\Psi_0\right|e^{-\text{i} Ht}\left|\Psi_0\right\rangle, \label{eq:loschmidt} \end{equation} and its related rate function \begin{equation} l(t)=-\frac{1}{L}{\rm ln}\left|G(t)\right|^2. \label{eq:ratefunc} \end{equation} Here $\left|\Psi_0\right\rangle$ is some initial state, usually taken to be the ground state of some Hamiltonian $H_0$, while the subsequent time evolution is governed by a different Hamiltonian $H$; a setup that is referred to as a quantum quench.\cite{CalabreseCardy06} Intuitively, one might think of the square of the return amplitude as the probability of the ground state to return to itself under the time evolution with $H$. As has been pointed out by Heyl \emph{et al.},\cite{Heyl-13} the rate function \eqref{eq:ratefunc} in the transverse-field Ising chain shows non-analytic behaviour at ``critical times'' $t_n^*$ provided the quantum quench has crossed the quantum critical point, i.e., if the ground states of the Hamiltonians $H_0$ and $H$ belong to different zero-temperature phases. The appearance of these critical times signals the breakdown of a Taylor expansion in time. 
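For intuition, the return amplitude \eqref{eq:loschmidt} and rate function \eqref{eq:ratefunc} can be evaluated by brute-force exact diagonalisation for a small transverse-field Ising chain (a numerical sketch, not the method used below; the chain length $L=8$ and the quench $g_0=1.5\to g_1=0.5$ across the critical point are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, L):
    """Embed a single-site operator at `site` into an L-spin chain."""
    mats = [np.eye(2, dtype=complex)] * L
    mats[site] = single
    full = mats[0]
    for m in mats[1:]:
        full = np.kron(full, m)
    return full

def ising(g, L, J=1.0):
    """Transverse-field Ising chain with periodic boundary conditions."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L):
        H -= J * op(sz, i, L) @ op(sz, (i + 1) % L, L)
        H -= J * g * op(sx, i, L)
    return H

L = 8                                    # illustrative small chain
H0, H1 = ising(1.5, L), ising(0.5, L)    # quench from g0=1.5 to g1=0.5
psi0 = np.linalg.eigh(H0)[1][:, 0]       # ground state of H0

def rate(t):
    G = psi0.conj() @ expm(-1j * H1 * t) @ psi0  # return amplitude G(t)
    return -np.log(np.abs(G)**2) / L             # rate function l(t)
```

Since $|G(0)|=1$, `rate(0.0)` vanishes, and `rate(t)` becomes strictly positive once the post-quench dynamics sets in; sharp kinks, however, only emerge in the thermodynamic limit.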
Heyl \emph{et al.} also pointed out the mathematical analogy between the non-analyticities in the rate function and the manifestation of an equilibrium phase transition in the usual free energy, thereby motivating the introduction of the term dynamical quantum phase transition (DQPT) for the former. One of the hallmarks of equilibrium quantum phase transitions is the inability to adiabatically connect the ground state of one phase to the ground state of the other phase (of different symmetry). Therefore, a non-analyticity in the ground state energy is routinely encountered when passing between the phases, irrespective of the path chosen to achieve this crossing. In contrast, the robustness of DQPTs is much less clear. The appearance of DQPTs often\cite{KS13,Heyl14,Canovi-14} but not always\cite{Hickey-14,AndraschkoSirker14,VajnaDora14} coincides with whether or not a quantum critical point separates $H$ and $H_0$. Nevertheless, the further study of DQPTs has attracted a lot of theoretical interest\cite{Kriel-14,Heyl15,VajnaDora15,SchmittKehrein15,Sharma-15, JamesKonik15,HuangBalatsky16,Divakaran-16,BudichHeyl16, AbelingKehrein16,Sharma-16,PS16,Bhattacharya17new,Zunkovic-16,Heyl17,KS17, Weidinger-17,HeylBudich17,Piroli-17,Halimeh17new,Zauner17new,Homrighausen17new,Lang17new,Heyl-18,TrapinHeyl18, Piroli-18} as well as successful efforts to realise DQPTs in ionic\cite{Jurcevic-17} and atomic\cite{Flaeschner-18} systems in optical lattices. In this paper, we add to this debate a surprising flexibility in controlling DQPTs by performing double quenches (within the free transverse-field Ising chain and a non-integrable generalisation thereof). We elaborate on how the appearance of DQPTs can be tuned simply by increasing the time between the first and second quench. 
In particular, we show that the system can exhibit all four combinations of absence or presence of non-analyticities before and after the second quench, respectively, as is illustrated for double quenches in the transverse-field Ising chain in Fig.~\ref{fig:fancyplot}. This not only suggests that the appearance of DQPTs is very fragile but also indicates an intriguing long-term memory of the system. With this fragility in mind and motivated by recent experiments,\cite{Jurcevic-17} we comment on the relation between non-analyticities in the rate function and the time evolution of the magnetisation. We find that the correspondence of zeros in the magnetisation to the critical times $t_n^*$, observed earlier for the transverse-field Ising model,\cite{Heyl-13} does not survive in the double quench setup (similarly to when integrability-breaking terms are included\cite{KS13} in a single quench setup). This provides further evidence that the correspondence found for a single quench in the free case seems to be accidental. The rest of this paper is organised as follows: Section~\ref{sec:setup} gives a general introduction to the physical systems studied, the observables calculated, and the methods used. Section~\ref{sec:results} summarises our main results about the controllability of DQPTs in both the transverse-field Ising model and the axial next-nearest-neighbour Ising (ANNNI) chain. In Section~\ref{sec:magnetization} we analyse the connection between non-analyticities in the rate function and the magnetisation. Finally, in Section~\ref{sec:outlook} we close with a concluding summary. \section{Setup, model, and methods}\label{sec:setup} \subsection{Setup} We compute the return amplitude \eqref{eq:loschmidt} and its corresponding rate function (\ref{eq:ratefunc}) for a time-dependent Hamiltonian $H(t)$ that models a double quantum quench \begin{equation} H(t) = \begin{cases} H_0, & t<0,\\ H_1, & 0\leq t\leq T, \\ H_2, & T < t. 
\end{cases} \label{eq:Ht} \end{equation} As before, $\ket{\Psi_0}$ is the ground state of an initial Hamiltonian $H_0$. \subsection{Model} Specifically, we consider the following one-dimensional Hamiltonian \begin{equation} H(\Delta,g)=-J\sum_i\bigl[\sigma_i^z\sigma_{i+1}^z+\Delta\sigma_i^z\sigma_{i+2}^z+g\sigma_i^x\bigr], \label{eq:h} \end{equation} where $\sigma_i^{x,y,z}$ denote Pauli matrices acting at site $i$. We assume $J>0$ and $g\ge 0$, while $\Delta$ can be positive or negative. For $\Delta=0$, one recovers the transverse-field Ising chain, which can be mapped to a system of free fermions and hence be solved exactly. The Ising chain exhibits a quantum phase transition\cite{Sachdev99} at $g_c=1$, which separates a ferromagnetic (FM) phase for $g<1$ from a paramagnetic (PM) phase for $g>1$. In the thermodynamic limit, the FM possesses two degenerate ground states $\ket{\pm}$ with $\langle\sigma_i^z\rangle\neq 0$, while the PM ground state with $\langle\sigma_i^z\rangle=0$ is unique. For finite next-nearest neighbour interactions $\Delta\neq0$, one obtains the ANNNI chain.\cite{Selke88,SuzukiInoueChakrabarti13} The model can be mapped to a system of interacting fermions with interaction strength $\propto\Delta$, which can no longer be solved exactly. The phase diagram of this model has been studied by several methods.\cite{Rujan81,PeschelEmery81,Allen-01,Beccaria-06,Beccaria-07,SelaPereira11} In addition to the FM and PM phases, it possesses two further phases at large, repulsive values of the interaction $\Delta>1$. For the rest of this paper, we will keep $J$ fixed; our double quench is thus entirely determined by the three pairs of values $(\Delta_m,g_m)$, $m=0,1,2$, together with $H_m=H(\Delta_m,g_m)$. \subsection{Analytical approach}\label{sec:analytics} For the analytical approach we consider a chain of length $L$ with periodic boundary conditions on the spin variables, $\sigma_{L+1}^a=\sigma_1^a$. 
Furthermore we restrict ourselves to double quenches in the transverse-field Ising model ($\Delta_m=0$) for which exact results can be obtained. To this end, we map the model to non-interacting fermions via a Jordan--Wigner transformation (see, e.g., Ref.~\onlinecite{Calabrese-12jsm1}, which we follow in our notation). In the fermionic language, the Hamiltonian can be diagonalised straightforwardly, \begin{equation} H(\Delta=0,g)=\sum_k\epsilon_k(g)\,\left(\eta_k^\dagger(g)\eta_k(g)-\frac{1}{2}\right), \label{eq:Isingfermion} \end{equation} where \begin{equation} \epsilon_k(g)=2J\sqrt{1+g^2-2g\cos k}, \label{eq:Isingenergy} \end{equation} and $\eta_k^\dagger(g)$ and $\eta_k(g)$ are fermionic creation and annihilation operators. Depending on the parity of the fermion number, the fermions fulfil either anti-periodic boundary conditions with the momenta quantised as half-integer multiples of $2\pi/L$, or periodic boundary conditions with the momenta quantised as integer multiples of $2\pi/L$. The anti-periodic case is usually referred to as the Neveu--Schwarz (NS) sector, while the periodic one is known as the Ramond (R) sector. The initial state for the double-quench protocol is given by the unique ground state of the fermionic model $\ket{0,g_0}$, which lies in the NS sector for any finite system. For $g_0>1$ this corresponds to the unique, PM ground state of the Ising model. 
We stress, however, that in the FM phase $0\le g<1$ the fermionic ground state corresponds to a superposition of the magnetic states $\ket{\pm}$.\cite{Calabrese-12jsm1} The fermionic modes which diagonalise the Hamiltonian at different values of the transverse field are related via \begin{equation} \begin{split} \eta_k(g_1)=&\cos\frac{\theta_k(g_2)-\theta_k(g_1)}{2}\,\eta_k(g_2)\\ &+\text{i}\sin\frac{\theta_k(g_2)-\theta_k(g_1)}{2}\,\eta_{-k}^\dagger(g_2),\label{eq:Bogoliubov} \end{split} \end{equation} where the Bogoliubov angle $\theta_k(g)$ is determined from \begin{equation} e^{\text{i}\theta_k(g)}=\frac{g-e^{\text{i} k}}{\sqrt{1+g^2-2g\cos k}}. \end{equation} Using the relations \eqref{eq:Bogoliubov} together with the fact that the initial state is the vacuum state, $\eta_k(g_0)\ket{0,g_0}=0$, the rate function for the return probability for times $t<T$ is found to be\cite{Silva08} \begin{widetext} \begin{equation} l(t)=-\frac{1}{\pi}\int_0^\pi\text{d} k\,\ln\left|\cos^2\frac{\theta_k(g_1)-\theta_k(g_0)}{2}+\sin^2\frac{\theta_k(g_1)-\theta_k(g_0)}{2}\,e^{-2\text{i}\epsilon_k(g_1)t}\right|, \label{eq:ratefunctionbefore} \end{equation} while for times $t>T$ we obtain \begin{equation} l(t)=-\frac{1}{\pi}\int_0^\pi\text{d} k\,\ln\left|A_k+B_k\,e^{-2\text{i}\epsilon_k(g_2)t}\right|+2\ln 2. 
\label{eq:ratefunctionafter} \end{equation} The coefficients $A_k$ and $B_k$ depend on all quench parameters and are explicitly given by \begin{eqnarray} A_k&=&1+\cos\bigl[\theta_k(g_0)-\theta_k(g_1)\bigr]+\cos\bigl[\theta_k(g_0)-\theta_k(g_2)\bigr]+\cos\bigl[\theta_k(g_1)-\theta_k(g_2)\bigr]\nonumber\\ & &+\Bigl(1-\cos\bigl[\theta_k(g_0)-\theta_k(g_1)\bigr]+\cos\bigl[\theta_k(g_0)-\theta_k(g_2)\bigr]-\cos\bigl[\theta_k(g_1)-\theta_k(g_2)\bigr]\Bigr)e^{-2\text{i}\epsilon_k(g_1)T},\\ B_k&=&\Bigl(1+\cos\bigl[\theta_k(g_0)-\theta_k(g_1)\bigr]-\cos\bigl[\theta_k(g_0)-\theta_k(g_2)\bigr]-\cos\bigl[\theta_k(g_1)-\theta_k(g_2)\bigr]\Bigr)e^{2\text{i}\epsilon_k(g_2)T}\nonumber\\ & &+\Bigl(1-\cos\bigl[\theta_k(g_0)-\theta_k(g_1)\bigr]-\cos\bigl[\theta_k(g_0)-\theta_k(g_2)\bigr]+\cos\bigl[\theta_k(g_1)-\theta_k(g_2)\bigr]\Bigr)e^{-2\text{i}[\epsilon_k(g_1)-\epsilon_k(g_2)]T}. \end{eqnarray} \end{widetext} \subsection{DMRG approach}\label{sec:dmrg} In addition to the analytical approach discussed above, we employ the density matrix renormalisation group\cite{White92,white93,Schollwoeck11} (DMRG) to study the double-quench setup. The reason for this is two-fold: (1) The DMRG allows us to study quenches within the Ising chain that start from a polarised state, which is not a ground state of the fermionic model. Such quenches feature non-trivial dynamics of the magnetisation and will be investigated in Sec.~\ref{sec:magnetization} in detail. (2) One can treat the ANNNI chain [$\Delta\neq0$ in Eq.~(\ref{eq:h})] which cannot be solved analytically; we will demonstrate that the picture described in Sec.~\ref{sec:generalresults} persists in such a non-integrable model. At the technical level, we employ an infinite-system DMRG algorithm that is set up directly in the thermodynamic limit. We first determine the ground state using an evolution in imaginary time and then carry out a real-time evolution to compute the rate function $l(t)$. 
The discarded weight is kept constant during the latter, which leads to a dynamic increase of the bond dimension. We performed every calculation using several different values of the discarded weight in order to ensure convergence. Further details of the numerical implementation can be found in Ref.~\onlinecite{KS13}. \section{Results}\label{sec:results} \subsection{General observations}\label{sec:generalresults} Let us first recall\cite{Heyl-13} how DQPTs manifest for single quenches (i.e., $T=\infty$) within the Ising chain ($\Delta_m=0$). If the quench crosses the critical point $g=1$, the rate function (\ref{eq:ratefunc}) exhibits kinks in its time evolution, while such non-analytic behaviour is not observed if both $g_0$ and $g_1$ belong to the same phase (note, however, that for other models the appearance of DQPTs is no longer tied to whether or not the quench crosses a critical point; see Ref.~\onlinecite{AndraschkoSirker14}). For the double quench setup, we will demonstrate below that the appearance or absence of DQPTs does not only depend on the values of the quench parameters but also dramatically on the time $T$ between the first and the second quench. This entails a remarkable degree of controllability of the DQPTs. In fact, all four possible combinations for the absence or presence of kinks for times $t<T$ (after the first quench) and $t>T$ (after the second quench) can be realised. Strikingly, the existence of non-analytic behaviour in the rate function after the second quench can be tuned in a highly non-monotonic fashion, where in a recurring manner the DQPTs can be suppressed and re-instantiated by increasing $T$. For future reference, we label the four cases mentioned above as follows: The rate function shows (i) no non-analyticities at all, (ii) no non-analyticities for $t<T$ but kinks for $t>T$, (iii) non-analyticities for $t<T$ but not for $t>T$, and (iv) kinks both for $t<T$ and $t>T$. 
The general observation that the appearance and absence of kinks can be tuned by varying $T$ is condensed in Fig.~\ref{fig:fancyplot}, which shows the critical times $t_n^*$ at which the rate function is non-analytic as a function of the time $T$ for a typical set of parameters $g_0=1.5$, $g_1=0.5$, and $g_2=5.0$ in the Ising model (see Sec.~\ref{sec:analyticresults} for more details). Increasing $T$, we find recurring, discrete sets of lines of $t_n^*$ (solid lines in Fig.~\ref{fig:fancyplot}) which extend into the regime $t>T$. This illustrates that the appearance and vanishing of DQPTs after the second quench can be tuned by changing $T$. For a given value of $T$, the critical times $t_n^*$ at which the rate function $l(t)$ shows kinks are determined by the crossing points of vertical lines in Fig.~\ref{fig:fancyplot} with the solid ones. Representative examples of the classes (i)--(iv) defined above are thus, e.g., $TJ=0.5,1,2,3.5$, respectively. The recurring appearance and suppression of DQPTs after the second quench suggests an intriguing fragility of the concept (in contrast to the quite robust equilibrium quantum phase transitions) and gives rise to the high susceptibility to tuning outlined above. \subsection{Analytic results for the Ising chain}\label{sec:analyticresults} The analytic results presented in Sec.~\ref{sec:analytics} allow us to obtain a complete understanding of the appearance of DQPTs for the transverse-field Ising chain. For times $t<T$, our setup is equivalent to the sudden quench protocol which was originally studied by Heyl~\emph{et al.},\cite{Heyl-13} who realised that the rate function $l(t)$ shows non-analyticities at specific times $t_n^*$ which are determined by a vanishing argument of the logarithm in Eq.~(\ref{eq:ratefunctionbefore}). This happens if the quench crosses the quantum critical point, and the times $t_n^*$ are located at\cite{Heyl-13} \begin{equation} t_n^*=\frac{\pi}{2\epsilon_{k^*}(g_1)}(2n+1),\quad n\in\mathbb{N}_0. 
\label{eq:tstarbefore} \end{equation} Here, the critical momentum $k^*$ is obtained from the condition \begin{equation} \cot^2\frac{\theta_{k^*}(g_1)-\theta_{k^*}(g_0)}{2}=1 \end{equation} and explicitly given by $k^*=\arccos[(1+g_0g_1)/(g_0+g_1)]$. Depending on the value of $T$, one may thus observe a finite number of kinks before the second quench, as also shown in Fig.~\ref{fig:fancyplot}. \begin{figure}[t] \includegraphics[width=0.9\linewidth,clip]{fancyplot.eps} \caption{(Colour online) Location of the critical times $t_n^*$ in the $t$-$T$-plane for a double quench between the PM$\to$FM$\to$PM phases of the transverse-field Ising chain (quench parameters $g_0=1.5$, $g_1=0.5$ and $g_2=5.0$). The dashed line marks the time $t=T$ at which the second quench is performed; the rate function $l(t)$ is shown explicitly in Fig.~\ref{fig:ising1}(a) for the quench times $TJ=0.5,1,2$ marked by dotted lines. We stress that kinks at times $t>T$ only occur for specific quench durations.} \label{fig:fancyplot} \end{figure} \begin{figure}[t] \includegraphics[width=0.95\linewidth,clip]{rkplot.eps} \caption{(Colour online) Modulus $\rho_k=|A_k/B_k|$ for a double quench in the transverse-field Ising chain with quench parameters $g_0=1.5$, $g_1=0.5$ and $g_2=5.0$, and different quench times $T$. Generically we find either (a) no solution to \eqref{eq:condition2}, (b) one solution at $k=k^*$, or (c) two solutions $k=k_{1,2}^*$. 
Only the latter case results in critical times given by \eqref{eq:tstarlarget} at which the rate function shows non-analytic behaviour.} \label{fig:rkplot} \end{figure} \begin{figure*}[ht] \includegraphics[width=0.45\linewidth,clip]{ising_pm.eps}\hspace*{0.05\linewidth} \includegraphics[width=0.45\linewidth,clip]{ising_fm1.eps} \caption{(Colour online) Rate function $l(t)$ for a double quantum quench within the transverse-field Ising chain ($\Delta_m=0$) for different quench times $T$ and (a) quenches between the PM$\rightarrow$FM$\rightarrow$PM phases, (b) quenches between the FM$\rightarrow$PM$\rightarrow$FM phases starting from a mixed FM state. We compare the exact analytical results derived in Sec.~\ref{sec:analyticresults} with those obtained from a DMRG calculation. By varying $T$, one can systematically tune the appearance and suppression of DQPTs after the second quench. The different possible cases are discussed in Sec.~\ref{sec:generalresults}. } \label{fig:ising1} \end{figure*} The situation becomes considerably more involved for times $t>T$ since Eq.~(\ref{eq:ratefunctionafter}) depends on all quench parameters $g_0$, $g_1$, and $g_2$ as well as the quench time $T$. To make our analysis more transparent, we will fix the values of the transverse field and discuss the dependence on $T$, also in light of the fact that this parameter can be directly controlled in experiments. As discussed above, non-analyticities in the rate function \eqref{eq:ratefunctionafter} will appear whenever the argument of the logarithm vanishes, \begin{equation} \frac{A_k}{B_k}+e^{-2\text{i}\epsilon_k(g_2)}=\rho_ke^{\text{i}\varphi_k}+e^{-2\text{i}\epsilon_k(g_2)}=0. \label{eq:condition} \end{equation} The main difference to the case $t<T$ is that due to the time evolution until $t=T$, the coefficient $A_k/B_k=\rho_ke^{\text{i}\varphi_k}$ is now no longer real but in general complex. 
It is thus reasonable to introduce the modulus $\rho_k$ and the phase $\varphi_k$, for which the condition \eqref{eq:condition} implies \begin{equation} \rho_{k}=1. \label{eq:condition2} \end{equation} For double quenches starting and ending in the same phase (e.g., $g_0,g_2>1$, $g_1<1$), we generically find one of the three cases shown in Fig.~\ref{fig:rkplot}: (a) There is no momentum for which Eq.~\eqref{eq:condition2} is satisfied; this is, for example, the case for quench times $TJ\lesssim 0.9$ or $2.97\lesssim TJ\lesssim 3.92$ for the parameters shown in Fig.~\ref{fig:fancyplot}. The time evolution of the rate function is then completely analytic for all $t>T$. (b) There is one critical momentum $k^*$ with $\rho_{k^*}=1$, while for all other momenta we have $\rho_k> 1$. As we will discuss below, there are also no non-analyticities in the time evolution in this case. (c) There are two critical momenta $k_1^*$ and $k_2^*$ at which Eq.~\eqref{eq:condition2} is satisfied. Close to these momenta, the function $\rho_k$ is linear, which implies non-analytic behaviour of the rate function at the times \begin{equation} t_{i,n}^*=\frac{\pi}{2\epsilon_{k^*_i}(g_2)}(2n+1)-\frac{\varphi_{k^*_i}}{2\epsilon_{k^*_i}(g_2)}, \label{eq:tstarlarget} \end{equation} where $i=1,2$ and $n\in\mathbb{N}_0$. The phase shifts $\varphi_{k_i^*}$ originate from the time evolution for $t<T$. We note that while the individual sets $\{t_{i,n}^*\}$, $i=1,2$, are periodic in time, due to the differing values of the prefactor $\pi/[2\epsilon_{k^*_i}(g_2)]$ the complete set of critical times $\{t_{1,n}^*\}\cup\{t_{2,n}^*\}$ is not periodic. In principle there may be more than two momenta at which \eqref{eq:condition2} is satisfied, each of them giving rise to a set of critical times determined by \eqref{eq:tstarlarget}. 
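The condition \eqref{eq:condition2} is easy to scan numerically. The following sketch (illustrative, with the quench parameters of Fig.~\ref{fig:fancyplot} and an arbitrarily chosen grid resolution) evaluates $\rho_k=|A_k/B_k|$ from the expressions for $A_k$, $B_k$, and the Bogoliubov angle given above, and checks whether $\rho_k$ crosses unity:

```python
import numpy as np

def theta(k, g):
    # Bogoliubov angle: exp(i*theta_k) = (g - exp(ik)) / sqrt(1 + g^2 - 2g cos k)
    return np.angle(g - np.exp(1j * k))

def eps(k, g, J=1.0):
    # single-particle dispersion eps_k(g) = 2J sqrt(1 + g^2 - 2g cos k)
    return 2.0 * J * np.sqrt(1.0 + g**2 - 2.0 * g * np.cos(k))

def rho(k, g0, g1, g2, T):
    # modulus |A_k/B_k| whose unit crossings give the critical momenta
    t0, t1, t2 = theta(k, g0), theta(k, g1), theta(k, g2)
    c01, c02, c12 = np.cos(t0 - t1), np.cos(t0 - t2), np.cos(t1 - t2)
    e1, e2 = eps(k, g1), eps(k, g2)
    A = (1 + c01 + c02 + c12
         + (1 - c01 + c02 - c12) * np.exp(-2j * e1 * T))
    B = ((1 + c01 - c02 - c12) * np.exp(2j * e2 * T)
         + (1 - c01 - c02 + c12) * np.exp(-2j * (e1 - e2) * T))
    return np.abs(A / B)

def has_critical_momenta(T, g0=1.5, g1=0.5, g2=5.0, n=4001):
    k = np.linspace(1e-3, np.pi - 1e-3, n)
    r = rho(k, g0, g1, g2, T)
    return r.min() < 1.0 < r.max()   # case (c): rho_k crosses unity
```

Consistent with the discussion above, such a scan reports no crossing for $TJ=0.5$ [case (a), no kinks for $t>T$], while crossings, and hence critical times \eqref{eq:tstarlarget}, appear for $TJ=1$.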
The link between the cases (a)--(c) discussed here and the general cases (i)--(iv) introduced in Sec.~\ref{sec:generalresults} is as follows: Depending on whether or not the critical times \eqref{eq:tstarbefore} appear up to $T$, the cases (a) and (b) result in the general cases (i) or (iii). Similarly, case (c) leads to the cases (ii) or (iv). Furthermore, the analytic result \eqref{eq:ratefunctionafter} allows us to analyse the behaviour of the rate function close to the critical times \eqref{eq:tstarlarget}. We expand the integrand around $k=k_i^*$ and $t=t_{i,n}^*$. As illustrated in Fig.~\ref{fig:rkplot}(c), $\rho_k$ is linear near $k_i^*$, $\rho_k\approx 1+a(k-k_i^*)$, $a\in\mathbb{R}$, thus we can approximate the rate function as follows: \begin{eqnarray} l(t)&\sim&-\frac{1}{2\pi}\int_0^\pi\text{d} k\,\ln\Bigl[a^2(k-k_i^*)^2+(2\epsilon_{k_i^*}(g_2)\delta t)^2\Bigr]\nonumber\\ &\sim&\delta t=|t-t_{i,n}^*|. \label{eq:lineart} \end{eqnarray} This linear behaviour seems to be a general feature of DQPTs; it was previously observed after quenches across the quantum critical point in the transverse-field Ising model\cite{Heyl15} as well as quantum Potts chain.\cite{KS17} Finally, let us comment on the case where there is precisely one critical momentum $k^*$ as shown in Fig.~\ref{fig:rkplot}(b). At this value, the modulus $\rho_k$ is no longer linear in $k-k^*$. Instead, we observe that when approaching the critical quench duration $T_\text{c}$ (given by $T_\text{c}J\approx 0.9005$ for the parameters of Fig.~\ref{fig:rkplot}) from above, the critical momenta $k_{1,2}^*$ and thus the times $t_{1,n}^*$ and $t_{2,n}^*$ approach each other, and eventually the kinks in the rate function simply disappear. \subsection{Numerical results for the ANNNI chain} We start by benchmarking our DMRG data for the Ising chain against the analytic results of Sec.~\ref{sec:analytics}. 
Fig.~\ref{fig:ising1} shows the rate function for two quenches starting from (a) the PM ground state, and (b) the mixed FM ground state that corresponds to the NS state in the fermionic language. By varying the quench time $T$, one can realise each of the different cases discussed in Sec.~\ref{sec:generalresults}. For example, for a quench starting in the mixed FM ground state [Fig.~\ref{fig:ising1}(b)], there are no DQPTs at all for $TJ=0.1$ (case i), for $TJ=0.19$, kinks appear only for $t>T$ (case ii; not shown in the figure), for $TJ=0.3$, there are kinks only for $t<T$ (case iii), and for $TJ=0.21$, kinks appear for both $t<T$ and $t>T$ (case iv). In all cases, the DMRG data agree perfectly with the exact result. Our general results, which we discussed in Sec.~\ref{sec:generalresults}, were mainly based on the analytic solution of the transverse-field Ising model. It is important to show that the main conclusions are robust against breaking the integrability of this model and are therefore expected to hold in generic quantum many-body systems. To this end, in Fig.~\ref{fig:annni} we report results on the ANNNI model with finite $\Delta$, which, to the best of our knowledge, is not integrable. We show that in complete analogy to the free (integrable) case, the behaviour of the rate function $l(t)$ can be flexibly controlled by changing $T$; we explicitly demonstrate the appearance of three cases: (i) no non-analyticities ($TJ=0.4$, red solid curve online), (ii) no non-analyticities for $t<T$ but kinks for $t>T$ ($TJ=0.8$, blue solid curve online), and (iii) non-analyticities for $t<T$ but no kinks for $t>T$ ($TJ=1.1$, orange solid curve online). While we cannot rule out that a different phenomenology emerges at larger times inaccessible to the DMRG, the data of Fig.~\ref{fig:annni} indicate that flexible control of the appearance of DQPTs is possible even in non-integrable models; one can suppress and re-establish DQPTs at will. 
\begin{figure}[t] \includegraphics[width=0.9\linewidth,clip]{annni.eps} \caption{(Colour online) Rate function for a double quench PM$\rightarrow$FM$\rightarrow$PM with different quench times $T$ for the ANNNI model. The results were obtained using DMRG. } \label{fig:annni} \end{figure} \section{Relation between DQPTs and magnetisation}\label{sec:magnetization} By a remarkable experimental effort, the authors of Ref.~\onlinecite{Jurcevic-17} succeeded in directly measuring the rate function in a string of up to 10 calcium ions, which are used to simulate long-ranged Ising models. Although a system of 10 ions is admittedly small, the work established a connection between the theory of DQPTs and the non-equilibrium physics expected in real quantum simulators. Ref.~\onlinecite{Jurcevic-17} also addressed the magnetisation, which is arguably a more natural quantity than the rate function, for a quench starting from a FM polarised state. It was shown that the times at which the magnetisation vanishes are tied to the critical times $t_n^*$ where kinks in the rate function show up. In Ref.~\onlinecite{Heyl-13}, a similar connection was observed for the transverse-field Ising chain in the thermodynamic limit. In contrast, it was previously demonstrated that such a direct relation does \emph{not} carry over to non-integrable cases such as the ANNNI model.\cite{KS13} This raises the question of whether such a relationship between zeros in the magnetisation and the critical times in the rate function exists for more general setups. For the double quench, we observe that this is \emph{not} the case (similarly to what is found for single quenches in non-integrable models), suggesting that the correspondence is not robust (even for free models). In Fig.~\ref{fig:ising2}, we explicitly compare the rate function and the magnetisation for a double quench starting from a FM polarised state of the transverse-field Ising model. The data was obtained using the DMRG method. 
One can explicitly see that the kinks in $l(t)$ at times $t>T$ after the second quench are in general \emph{not} related to the zeros in the magnetisation. This is most prominent in the $TJ=0.58$ curve (dashed blue curve online), where the rate function for times $t>T$ shows repeated kinks, while the magnetisation vanishes on a completely different time scale. \begin{figure}[t] \includegraphics[width=0.9\linewidth,clip]{ising_fm2a.eps} \includegraphics[width=0.9\linewidth,clip]{ising_fm2b.eps} \caption{(Colour online) (a) The same as in Fig.~\ref{fig:ising1}(b), but starting from a polarised FM state. (b) Behaviour of the magnetisation $|\langle\sigma^z(t)\rangle|$ during this type of quench. The data was computed using DMRG. The times at which the magnetisation vanishes are in general not tied to the critical times $t_n^*$ where kinks in the rate function show up (in contrast to the single-quench case).} \label{fig:ising2} \end{figure} \section{Conclusion}\label{sec:outlook} In this paper, we have studied the phenomenon of DQPTs after double quantum quenches A$\to$B$\to$A between two equilibrium phases A and B. We have calculated the rate function analytically for the free transverse-field Ising chain. By varying the time $T$ spent in phase B, one can control at will and in a recurring manner whether or not DQPTs occur after the second quench. All four possible combinations of the appearance and absence of non-analyticities before and/or after the second quench can be realised if $T$ is tuned. A similar picture emerges using finite-time DMRG numerics for the ANNNI model, which is a non-integrable generalisation of the Ising chain. Moreover, we demonstrated that even for the free transverse-field Ising chain there is no relationship between the critical times after the second quench and the zeros of the magnetisation. In conclusion, our results show that the appearance of DQPTs is very fragile against the details of the quench setup. 
\section*{Acknowledgments} C.K. and D.K. are supported by the DFG via the Emmy-Noether program under KA 3360/2-1. D.S. is member of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). D.S. was supported by the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO), under 14PR3168.
\section{Introduction\label{intro} } Gravitational lensing is a powerful astrophysical and cosmological probe of the lens object, the source, and the background geometry, and it indirectly serves as a probe of gravitational physics. Gravitational lensing can help us to detect or infer the existence of exotic objects in the universe. When the lens is a very compact object (e.g. a black hole or one of the alternative end states of collapse), light rays can probe the strong gravitational field it produces and gravitational lensing becomes a powerful probe of physics in the strong gravity regime. If the lens is compact enough and light rays can explore spacetime close enough to it, then the deflection angle can become more than $2\pi$ \cite{dar59,dar61}. Moreover, the propagation of light in the vicinity of black holes or sufficiently compact objects leads to interesting signatures in the apparent position and flux of images for sources near (and orbiting) such objects \cite{Campbell:1973ys,cb72,cb73}. These signatures constitute important strong field tests of General Relativity. For faraway sources lensed by a compact object, when the deflection angle exceeds $2\pi$, two sequences of images are formed on different sides of the lens due to photons which undergo one or more turns around the lens (the deflection angles are close to $2\pi,4\pi$ etc.) in addition to two (far-field) images of the source formed due to photons which undergo a small deflection (the so-called primary and secondary images). These additional images are called relativistic images, and the phenomenon is called relativistic lensing \cite{Virbhadra:2000ju,Virbhadra:2002ju}. One of the exciting questions that the phenomenon of relativistic lensing can help us address is the cosmic censorship conjecture \cite{pen69} and the related questions of the end state of gravitational collapse and the final fate of massive stars \cite{IAU:8682057}. The cosmic censorship conjecture, viz. 
the proposal that naked singularities do not occur in nature, still remains an open issue. On the other hand, it has been shown that both black holes and naked singularities can form in the gravitational collapse of a matter cloud (obeying certain energy conditions for physical reasonability) starting from regular initial data \cite{Joshi1,Joshi2,Harada:1999jf}. For example, while homogeneous dust collapse always results in a black hole, inhomogeneous dust collapse with a density profile decreasing away from the center can result in both black holes and naked singularities as end states \cite{Joshi1}. A similar conclusion follows from the study of more complex scenarios with the inclusion of pressure. There still remains the question of whether one needs finely tuned initial data to form naked singularities. Strictly restricting to spherically symmetric collapse, one does have a concrete realization of naked singularity formation without the need of fine-tuning initial data. The general relativistic Larson-Penston solution, which is obtained without fine-tuning in the spherically symmetric collapse of a perfect fluid with a soft equation of state ($p = k \rho$ for $0< k \leq 0.03$) \cite{Harada:2001nh}, is stable against spherical linear perturbations and describes the formation of a naked singularity for $0 < k <0.0105$, i.e., for an extremely soft equation of state \cite{Ori:1987hg,Ori:1989ps}. Furthermore, it was shown in \cite{Joshi:2011qq} that for spherically symmetric gravitational collapse of a general Type I matter field, when the effects of small pressure perturbations in an otherwise pressure-free collapse scenario are taken into account, both black holes and naked singularities are generic (for a suitable definition of genericity) outcomes of a complete collapse. 
Non-spherical collapse has also been studied (see \cite{Joshi2} and references therein), but it is fair to say that much remains to be done before a conclusion can be reached on fine-tuning issues in non-spherical collapse scenarios. However, one can adopt the point of view alluded to in \cite{Joshi2} that `if cosmic censorship is to hold as a basic principle of nature, it better holds in spherical class too' and consider it worthwhile to investigate astrophysical implications of naked singularities. We must, all the same, mention that the examples studied here are toy models, and we hope that they capture the basic physics qualitatively so as to guide future explorations. A number of black hole candidates (compact, dark, heavy objects) have been discovered observationally, and most likely they are indeed black holes \cite{Visser:2009pw}. However, in the absence of concrete theoretical reasons to rule out naked singularities, and in the face of our relative ignorance of the behavior of matter at the extremely high densities reached towards the end of the gravitational collapse of a massive star, one can take a phenomenological approach, i.e., compute the signatures of naked singularities and confront them with observations. Gravitational lensing signatures of naked singularities will then be very useful in this regard. With this philosophy, Virbhadra, Narasimha and Chitre \cite{Virbhadra:1998dy}, and later Virbhadra and Ellis \cite{Virbhadra:2002ju} and Virbhadra and Keeton \cite{Virbhadra:2007kw}, studied and compared gravitational lensing by Schwarzschild black holes and by the Janis-Newman-Winicour (JNW) naked singularity. Gravitational lensing by a rotating version of the JNW naked singularity has been studied in \cite{Gyulchev:2008ff}.
Extending the same line of work, we have recently studied a class of naked singularity solutions obtained as the end state of certain dynamical collapse scenarios for a fluid with only tangential pressure \cite{spnj12}. In this work, we take this program further by studying a generalization of this model, i.e., an analogous scenario for a fluid with non-zero radial as well as tangential pressure, to examine whether the earlier conclusions obtained for the tangential pressure case also generalize qualitatively. Furthermore, we focus on the role of time delay in relativistic lensing by studying the time delay between relativistic Einstein rings for all the above mentioned solutions. It is worth noting here that apart from lensing, accretion disk properties have also been explored recently as a probe of cosmic censorship \cite{Kovacs:2010xm,jmn13}. As we have discussed earlier, in the strong gravity regime of a compact lens, just as for a black hole, the bending angle can exceed $2\pi$ and multiple relativistic images may be formed. When multiple images form, the light-travel time along the paths corresponding to different images differs. Therefore, if the background source in the lens system were variable, this variability would not appear simultaneously in the images. So intrinsic luminosity variations of the source manifest in the images as a relative temporal phase, which is expected to depend on the lens geometry and lensing configuration. The time lag for the appearance of the intrinsic variability between the multiple images is called the time delay. Time delay has a privileged status among lensing observables in the sense that it is the only dimensional observable, and as such it is the only observable sensitive to the overall scaling of the distance scales in the problem. The observed configuration in the sky imposes dimensionless constraints only, but time delay introduces a physical scale into the system.
That makes it a useful probe of the mass of the lens as well as the distances between the lens, source and observer. In cosmological contexts, it has been argued to be a probe of the Hubble parameter \cite{Refsdal:1964nw,Blandford:1991xc}. In the conventional post-Newtonian approximation, the difference in the light-travel time can be decomposed into a geometrical component due to the difference in path length and a potential component due to the different Newtonian gravitational potential felt by the photon. Time delay between far-field images is a probe of the so-called `effective distance' of the system and can be a probe of cosmological parameters \cite{2002BASI...30..723N}. Time delay between relativistic images for black holes has been suggested as a distance estimator \cite{bom04}. In this paper we focus on time delay as a discriminator between black holes and naked singularities. For this we study the Schwarzschild black hole, the JNW naked singularity, the JMN solution and the Tolman-VI solution. The last two solutions have been shown to be obtainable as collapse end states \cite{jmn,jmn13}. Time delay between relativistic images has previously been studied for the Schwarzschild black hole in \cite{Virbhadra:2000ju} and in general for spacetimes with photon spheres in \cite{bom04} (which includes black holes and weakly naked singularities, as we will see later). The present study complements these by computing the time delay between relativistic images for strongly naked singularities, i.e., naked singularities not covered by a photon sphere. To give a feel for the numbers, we consider both the case of supermassive objects at galactic centers (most likely supermassive black holes \cite{2010RvMP...82.3121G}) as well as $100M_{\odot}$ objects in our galaxy. This paper is organized as follows. In section \ref{lf} we briefly discuss the lensing formalism and in section \ref{metric} we introduce the spacetimes we are going to study.
In section \ref{ps} we discuss the photon spheres in these spacetimes and in section \ref{rd} we discuss the possibility of relativistic lensing and the basic features of relativistic images. Then in section \ref{er} we discuss Einstein rings, and in section \ref{tdel} we present a discussion of the time delay between the Einstein rings for the various scenarios under consideration and highlight the main differences between black holes and strongly naked singularities. Finally, we conclude with a discussion of the main results in section \ref{dis}. We work in units of $ c=1 $ and $ G=1 $ throughout the paper. \section{Lensing Formalism\label{lf}} In this section we briefly review the gravitational lensing formalism \cite{Virbhadra:1998dy, Virbhadra:2002ju}. We consider a spherically symmetric, static and asymptotically flat spacetime as the gravitational lens. The source and the observer are assumed to be sufficiently far away from the central region. There are two important parts to the lensing formalism. The first is the lens equation, which relates the image location to the source location given the deflection angle; this is essentially a geometrical relation written down taking advantage of asymptotic flatness. For a very nice discussion of the various lens equations in the literature, see \cite{Bozza:2008ev}. The second part is the calculation of the deflection angle, which is computed by integrating the null geodesics. It is through the deflection angle that General Relativity enters the picture in the calculation of lensing observables such as image position and magnification. We use the Virbhadra-Ellis lens equation \cite{Virbhadra:2002ju}, which is given by \begin{equation} \tan{\beta}=\tan{\theta}-\alpha \end{equation} with \begin{equation} \alpha=\frac{D_{ds}}{D_s}\left[\tan{\theta}+\tan{\left(\hat{\alpha}-\theta\right)}\right] \end{equation} where $\theta,\beta$ denote the image location and source location respectively (see Fig \ref{lens}).
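For the numerically inclined reader, the lens equation above can be solved for the image position $\theta$ by simple root bracketing once a deflection law $\hat{\alpha}(\theta)$ is supplied. The Python sketch below is purely illustrative and not part of the formalism itself: it assumes the weak-field Schwarzschild deflection $\hat{\alpha}\simeq 4M/J$ with $J=D_d\sin\theta$, and the mass-to-distance ratio is a hypothetical number.

```python
import math

def solve_image(beta, alpha_hat, dds_over_ds, theta_lo, theta_hi, tol=1e-15):
    """Solve the Virbhadra-Ellis lens equation tan(beta) = tan(theta) - alpha
    for the image position theta by bisection; alpha_hat(theta) is the
    deflection angle of the ray whose image appears at theta."""
    def resid(theta):
        alpha = dds_over_ds * (math.tan(theta)
                               + math.tan(alpha_hat(theta) - theta))
        return math.tan(beta) - math.tan(theta) + alpha

    lo, hi = theta_lo, theta_hi
    r_lo = resid(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        r_mid = resid(mid)
        if r_mid * r_lo > 0.0:
            lo, r_lo = mid, r_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustration only: weak-field deflection alpha_hat = 4M/J, J = D_d*sin(theta),
# with a hypothetical mass-to-distance ratio M/D_d.
M_over_Dd = 1.0e-11
weak_field = lambda theta: 4.0 * M_over_Dd / math.sin(theta)

theta_img = solve_image(0.0, weak_field, 0.5, 1e-7, 1e-3)
theta_E = math.sqrt(4.0 * M_over_Dd * 0.5)   # small-angle Einstein radius
```

For the aligned case $\beta=0$ and small angles the root reduces to the familiar Einstein angle $\theta_E=\sqrt{4M D_{ds}/(D_d D_s)}$, which the bisection reproduces.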
We also have $\sin{\theta}=\frac{J}{D_d}$ where $J$ is the impact parameter. \begin{figure} \includegraphics[scale=0.8]{lens1.eps} \caption{\textit{Lens Diagram}: Positions of the source, lens, observer and the image are given by $S$, $L$, $O$ and $I$. The distances between lens-source, lens-observer and source-observer are given by $D_{ds}$, $D_{d}$ and $D_{s}$. The angular locations of the source and the image with respect to the optic axis are given by $\beta$ and $\theta$. The impact parameter is given by $J$.\label{lens}} \end{figure} The general form of the metric is given by \begin{equation} ds^2=-g(r)dt^2+\frac{1}{f(r)}dr^2+h(r)r^2d\Omega^2 \label{met} \end{equation} The total deflection suffered by the light ray as it travels from the source to the observer is given by \begin{equation} \hat{\alpha}\lt(r_0\rt) = 2 {\int_{r_0}}^{\infty} \lt(\frac{1}{f(r)h(r)}\rt)^{1/2} \lt[ \lt(\frac{r}{r_0}\rt)^2 \frac{h(r)}{h(r_0)} \frac{g(r_0)}{g(r)} -1 \rt]^{-1/2} \frac{dr}{r} - \pi , \label{defl} \end{equation} where $r_0$ is the distance of closest approach. The relation between the impact parameter and the distance of closest approach is $J=r_0\sqrt{\frac{h(r_0)}{g(r_0)}}$. The image location $\theta$ is obtained by solving the lens equation for a fixed value of the source location $\beta$. The magnification is defined as \begin{equation} \mu \equiv \lt( \frac{\sin{\beta}}{\sin{\theta}} \ \frac{d\beta}{d\theta} \rt)^{-1}, \end{equation} which can be broken down into tangential and radial magnifications as \begin{equation} \mu_t \equiv \lt(\frac{\sin{\beta}}{\sin{\theta}}\rt)^{-1}, ~ ~ ~ \mu_r \equiv \lt(\frac{d\beta}{d\theta}\rt)^{-1} \end{equation} Singularities of the tangential and radial magnifications give the tangential and radial critical curves and caustics. We discuss time delay separately in section \ref{tdel}. \section{Spacetimes under consideration\label{metric}} We will be considering static spherically symmetric spacetimes for simplicity.
Under these assumptions the simplest black hole solution is the Schwarzschild solution, given by \begin{equation} ds^{2}=-\left(1-\frac{2M}{r}\right)dt^{2}+\left(1-\frac{2M}{r}\right)^{-1}dr^{2}+r^{2}d\Omega^{2} \label{mets} \end{equation} where $M$ is the Schwarzschild mass. A naked singularity solution that has been studied in the literature is the JNW solution \cite{Virbhadra:1998dy,Virbhadra:2002ju}. It is a solution of the Einstein equations with a minimally coupled massless canonical scalar field and is parametrized by two parameters, the mass $M$ and the scalar charge $q$. The solution can be written as \begin{equation} ds^{2}=-\left(1-\frac{b}{r}\right)^{\nu}dt^{2}+\left(1-\frac{b}{r}\right)^{-\nu}dr^{2}+\left(1-\frac{b}{r}\right)^{1-\nu}r^{2}d\Omega^{2} \end{equation} where $b=2\sqrt{M^{2}+q^{2}}$ and $\nu=\frac{2M}{b}$. Because of the absence of analogues of the black hole uniqueness theorems for naked singularities, it becomes necessary to study other possible naked singularity solutions in order to properly gauge the prospects of any observational probe (like gravitational lensing) for shedding light on the cosmic censorship question. Below we describe two naked singularity solutions which, unlike the JNW solution described above, have been shown to be obtainable as collapse end states \cite{jmn,jmn13}. Admittedly these too are toy models, but their study can be thought of as a step towards the study of more realistic models. By realistic we mean a naked singularity solution generated by a well motivated source, possibly taking into account deviations from spherical symmetry in the form of higher multipoles (which become important because of the absence of analogues of the black hole uniqueness theorems for naked singularities), and shown to be stable under linear (and non-linear) perturbations. Unfortunately, such analytic solutions are hard to find or construct.
One can then investigate various toy model scenarios to gather some idea of what is to be expected in a `realistic' scenario. This is expected to provide valuable insights as far as qualitative features are concerned. The JNW solution mentioned above, which is well studied in the literature from the lensing perspective \cite{Virbhadra:1998dy,Virbhadra:2002ju}, is one such toy model, as are the two solutions we describe below. One can hope that these examples capture most of the essential features of a realistic model, at least in the spherically symmetric situation, and that the observational features generalize qualitatively. The first example is the JMN naked singularity, which was shown to be obtainable as the end state of dynamical collapse from regular initial conditions for a fluid with zero radial pressure but non-vanishing tangential pressure \cite{jmn}, and which was studied in \cite{spnj12} from the lensing perspective. The interior region in this spacetime is described by the metric \begin{equation} ds_{e}^{2}=-(1-M_{0})\left(\frac{r}{R_{b}}\right)^{\frac{M_{0}}{1-M_{0}}}dt^{2}+\frac{dr^{2}}{1-M_{0}}+r^{2}d\Omega^{2}. \end{equation} The solution has a naked singularity at the center and matches to a Schwarzschild spacetime across the boundary $r=R_{b}$, with the Schwarzschild mass given by $M=\frac{M_{0}R_{b}}{2}$. There are two parameters in the solution: $M_{0}$, which is dimensionless, and the Schwarzschild mass $M$. We must have $0<M_{0}<1$ for the sound speed inside the cloud to be positive and less than the speed of light \cite{jmn}. As mentioned before, the radial pressure of the interior fluid is zero and the tangential pressure $p_t$ is related to the energy density $\rho$ as \begin{equation} p_t=\frac{M_0}{4(1-M_0)}\rho \end{equation} Thus the tangential pressure is linearly related to the energy density. Both the energy density and the tangential pressure fall off as $1/r^2$.
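The matching quoted above is easy to verify numerically. The sketch below (an illustrative check, using only the interior metric and $M=M_{0}R_{b}/2$ as given in the text) confirms that the interior $g_{tt}$ meets the Schwarzschild value at the boundary $x_{b}=R_{b}/2M=1/M_{0}$.

```python
def jmn_gtt(x, M0):
    """g_tt of the JMN interior in units x = r/2M; the boundary sits at
    x_b = R_b/2M = 1/M0 because M = M0*R_b/2."""
    xb = 1.0 / M0
    return (1.0 - M0) * (x / xb) ** (M0 / (1.0 - M0))

def schw_gtt(x):
    """g_tt of the Schwarzschild exterior in the same units."""
    return 1.0 - 1.0 / x

M0 = 0.63                    # one of the parameter values used later
xb = 1.0 / M0
mismatch = abs(jmn_gtt(xb, M0) - schw_gtt(xb))   # should vanish
```

Both sides equal $1-M_{0}$ at the boundary, as expected from $1-2M/R_{b}=1-M_{0}$.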
In addition to these, the fourth example that we consider in this paper is a static perfect fluid sphere solution given by Tolman \cite{Tolman:1939jz}, which also happens to be obtainable as a collapse end state \cite{jmn13}. We note here that basic features of accretion disks in the JMN and the above Tolman solution were studied and contrasted with the black hole case in \cite{jmn} and \cite{jmn13} respectively. Below we describe the Tolman solution (henceforth referred to as the Tolman-VI solution) briefly. The metric in the interior is given by \begin{eqnarray} ds_{e}^{2} & = & -(Ar^{1-\lambda}-Br^{1+\lambda})^{2}dt^{2}+(2-\lambda^{2})dr^{2}+r^{2}d\Omega^{2} \end{eqnarray} which has a central singularity and is matched to a Schwarzschild spacetime via $C^2$ matching. The number of parameters describing the solution as written above is five, viz. $A$, $B$, $\lambda$ (which characterize the fluid via its pressure and energy density), the Schwarzschild mass $M$, and the boundary of the cloud $R_{b}$, which is the matching radius. However, three matching conditions, viz. the matching of $g_{rr}$, $g_{tt}$ and the pressure across the boundary, reduce the number of independent parameters to two, and we take $\lambda$ and $M$ as the independent parameters. Then we have \begin{equation} R_{b}=\frac{2M(2-\lambda^{2})}{1-\lambda^{2}} \end{equation} \begin{equation} A=\frac{(1+\lambda)^{2}}{4\lambda\sqrt{2-\lambda^{2}}}\left(\frac{2M(2-\lambda^{2})}{1-\lambda^{2}}\right)^{\lambda-1} \end{equation} \begin{equation} B=\frac{(1+\lambda)^{2}}{4\lambda\sqrt{2-\lambda^{2}}}\left(\frac{1-\lambda}{1+\lambda}\right)^{2}\left(\frac{2M(2-\lambda^{2})}{1-\lambda^{2}}\right)^{-\lambda-1} \end{equation} By imposing the condition that the sound speed inside the cloud is always positive and less than the speed of light, i.e., $c_{s}<1$, one gets a bound on the physically admissible range of $\lambda$, viz. $\lambda\in(0,1)$ \cite{jmn13}.
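The relations for $R_{b}$, $A$ and $B$ can likewise be checked numerically. The sketch below (illustrative; $\lambda=0.13$ is chosen to match a case used later, and $M$ is set to unity) verifies that the interior and exterior $g_{tt}$ agree at the boundary, where both reduce to $1/(2-\lambda^{2})$, and that $g_{rr}$ matches as well.

```python
import math

def tolman_params(lam, M):
    """R_b, A, B of the Tolman-VI interior matched to a Schwarzschild
    exterior of mass M, using the expressions quoted in the text."""
    Rb = 2.0 * M * (2.0 - lam**2) / (1.0 - lam**2)
    pref = (1.0 + lam)**2 / (4.0 * lam * math.sqrt(2.0 - lam**2))
    A = pref * Rb**(lam - 1.0)
    B = pref * ((1.0 - lam) / (1.0 + lam))**2 * Rb**(-lam - 1.0)
    return Rb, A, B

lam, M = 0.13, 1.0
Rb, A, B = tolman_params(lam, M)

# continuity of g_tt across the boundary: both sides equal 1/(2 - lam^2)
g_in = (A * Rb**(1.0 - lam) - B * Rb**(1.0 + lam))**2
g_out = 1.0 - 2.0 * M / Rb
# continuity of g_rr: interior (2 - lam^2) equals exterior 1/(1 - 2M/R_b)
```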
The interior fluid has pressure $p$ and energy density $\rho$ related as \begin{equation} p=\frac{1}{1-\lambda^2}\frac{(1-\lambda)^2-(1+\lambda)^2r^{2\lambda}(B/A )}{1-r^{2\lambda}(B/A )}\rho \end{equation} which upon using the matching conditions becomes \begin{equation} p=\frac{(1-\lambda)^{2}}{1-\lambda^{2}}\frac{1-\left(\frac{r}{R_{b}}\right)^{2\lambda}}{1-\left(\frac{r}{R_{b}}\right)^{2\lambda}\left(\frac{1-\lambda}{1+\lambda}\right)^{2}}\rho \end{equation} Both the energy density and the pressure fall off as $1/r^2$ near the center and at large radii. For convenience we introduce dimensionless variables (scaling all quantities by $2M$) $x\equiv\frac{r}{2M}$, $a\equiv\frac{A}{(2M)^{-1+\lambda}},$ $b\equiv\frac{B}{(2M)^{-1-\lambda}}$. The dimensionless boundary radius is then given by \begin{equation} x_{b}\equiv\frac{R_{b}}{2M}=\frac{2-\lambda^{2}}{1-\lambda^{2}}\label{tcomp} \end{equation} which means that the compactness $\frac{2M}{R_{b}}$ of the solution is determined by $\lambda$ alone. Similarly, $a$ and $b$ are also determined by $\lambda$. Again for convenience we define $\kappa\equiv\frac{b}{a}$. One then gets \begin{eqnarray} a=\frac{(1+\lambda)^{2}}{4\lambda\sqrt{2-\lambda^{2}}}\left(\frac{2-\lambda^{2}}{1-\lambda^{2}}\right)^{\lambda-1}\\ \kappa=\left(\frac{1-\lambda}{1+\lambda}\right)^{2}\left(\frac{1-\lambda^{2}}{2-\lambda^{2}}\right)^{2\lambda}\label{ak} \end{eqnarray} \section{Photon sphere\label{ps}} One important question for relativistic lensing is the presence or absence of a photon sphere. The deflection angle diverges in the limit where the distance of closest approach tends to the radius of the photon sphere; in fact it diverges logarithmically, as demonstrated by Bozza \cite{boz02}. A photon sphere is a time-like hypersurface generated by circular closed null geodesics in the spacetime.
It can therefore be obtained by solving for $r=\mathrm{constant}$ null geodesics, or in other words for the extrema of the effective potential for photons; this leads to equation \ref{pseq} below. Alternatively, a photon sphere can be defined as a time-like hypersurface $r=r_{ph}$ such that the deflection angle diverges as the distance of closest approach $r_{0}$ tends to $r_{ph}$ \cite{Virbhadra:2002ju}. The equation of the photon sphere for the metric given by \ref{met} can be written as \begin{equation} \frac{g'(r)}{g(r)}=\frac{h'(r)}{h(r)}+\frac{2}{r} \label{pseq} \end{equation} where $ ' $ denotes the derivative with respect to $ r $. For a generalization of the concept of photon sphere to an arbitrary spacetime see \cite{Claudel:2000yi}. It can be argued that any spherically symmetric and static spacetime that has a horizon and is asymptotically flat for $r\rightarrow\infty$ must contain a photon sphere \cite{Hasse:2001by}, and hence any black hole will always give rise to relativistic deflection and relativistic images. On the other hand, a naked singularity may or may not have a photon sphere surrounding it, and although a photon sphere guarantees relativistic images, relativistic images may or may not form in the absence of a photon sphere. Virbhadra and Ellis \cite{Virbhadra:2002ju} classified naked singularities according to whether or not they are surrounded by a photon sphere: those surrounded by a photon sphere are called weakly naked singularities (WNS), while those not surrounded by one are called strongly naked singularities (SNS). The spacetimes we study in this paper include examples of black holes, of SNS, and of WNS giving rise to relativistic images.
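Equation \ref{pseq} can be solved numerically for any metric of the form \ref{met}; a minimal sketch (finite differences plus bisection, with bracketing intervals chosen by hand) recovers the Schwarzschild photon sphere $x_{ph}=3/2$ in units $x=r/2M$, and, for the JNW metric, the value $x_{ph}=(1+2\nu)/(2\nu)$ that follows from substituting $g=(1-\frac{1}{\nu x})^{\nu}$, $h=(1-\frac{1}{\nu x})^{1-\nu}$ into \ref{pseq}.

```python
import math

def photon_sphere(g, h, x_lo, x_hi, eps=1e-7, tol=1e-10):
    """Bisect for a root of g'/g - h'/h - 2/x = 0 (the photon sphere
    equation), with central finite differences for the derivatives;
    the bracket [x_lo, x_hi] must contain a sign change."""
    def F(x):
        dg = (g(x + eps) - g(x - eps)) / (2.0 * eps)
        dh = (h(x + eps) - h(x - eps)) / (2.0 * eps)
        return dg / g(x) - dh / h(x) - 2.0 / x

    lo, hi = x_lo, x_hi
    F_lo = F(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        F_mid = F(mid)
        if F_mid * F_lo > 0.0:
            lo, F_lo = mid, F_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Schwarzschild, x = r/2M: photon sphere at x = 3/2
x_schw = photon_sphere(lambda x: 1.0 - 1.0 / x, lambda x: 1.0, 1.1, 3.0)

# JNW with nu = 0.8 (q/M < sqrt(3)): expect x = (1 + 2*nu)/(2*nu) = 1.625
nu = 0.8
x_jnw = photon_sphere(lambda x: (1.0 - 1.0 / (nu * x)) ** nu,
                      lambda x: (1.0 - 1.0 / (nu * x)) ** (1.0 - nu),
                      1.3, 3.0)
```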
We wish to clarify here that this terminology should not be confused with the classification of singularities into `strong curvature' and `weak curvature' singularities on the basis of the extendibility of the spacetime through the singularity \cite{Joshi1}. We also note here that with the definition presented in \cite{Virbhadra:2002ju} it is, in principle, possible to have a WNS with multiple photon spheres. For example, one could have a stable photon sphere interior to the outer unstable photon sphere. In such cases one can have null geodesics coming from infinity, going inside the outer photon sphere and coming back to infinity, as is easy to see from the effective potential for photons in such spacetimes. Spacetimes with multiple photon spheres are indeed known \cite{Karlovini:2000xd}. For such cases the lensing signature is expected to differ from the single photon sphere case, and we briefly comment on this point in section \ref{rd}. The WNS spacetimes studied in this paper do not come in this category and always have a single photon sphere, and the statements we make for the WNS case henceforth shall refer to such cases only. For the Schwarzschild geometry the equation of the photon sphere is satisfied at $x=3/2$, where $x=r/2M$. In any metric matched to a Schwarzschild exterior (which is the case for both the JMN and the Tolman-VI metrics that we are studying), the presence of a photon sphere in the Schwarzschild exterior depends on the dimensionless matching radius $x_{b}$: clearly, the exterior photon sphere is present if and only if $x_{b}<3/2$. For the JMN metric this condition translates to $M_{0}>2/3$. Also, as was discussed in \cite{spnj12}, the interior solution has no photon sphere when $M_{0}\neq2/3$.
Thus the JMN metric has a photon sphere in the Schwarzschild exterior if $M_{0}>2/3$ and no photon sphere otherwise.\footnote{We neglect the slightly unusual case $M_{0}=2/3$, for which there is a (neutral) photon sphere at \emph{every} radius inside the cloud.} On the other hand, for the Tolman-VI metric, using $\lambda\in(0,1)$ and \ref{tcomp} one gets $R_{b}>4M$, which implies $x_{b}>2>3/2$. So in no regime of parameter space can this geometry have a photon sphere in the exterior Schwarzschild spacetime. The condition for the existence of a photon sphere inside the cloud reduces to $x^{2\lambda}=-\frac{a}{b}$, and owing to the positivity of $a$ and $b$ there is no photon sphere in the interior geometry either. Thus there is no photon sphere in this geometry for \emph{any} value in the allowed parameter range. In the JNW solution, for low scalar charge $q/M\le\sqrt{3}$ one has a photon sphere, and hence it is an example of a weakly naked singularity \cite{Virbhadra:1998dy}. In this paper we will not be concerned with the case $q/M>\sqrt{3}$, as we want to use the JNW solution to illustrate the case of a naked singularity with a photon sphere. As discussed in \cite{spnj12}, the lensing signature of the JMN solution for $M_{0}>2/3$ is exactly identical to that of Schwarzschild, so this case won't be considered here. Both JMN for $M_{0}<2/3$ and Tolman-VI serve as examples of strongly naked singularities, i.e., naked singularities without a photon sphere. In figure \ref{f sch} we show schematically the trajectory of photons traveling in loops in a spacetime with a photon sphere. As one can see, the successive loops are anchored on the photon sphere. In contrast, in figure \ref{f tol} we show the trajectory of photons traveling in loops in a spacetime without a photon sphere. This picture will help in an intuitive understanding of the differences in the time delay results we discuss later.
\begin{figure} \includegraphics[scale=0.8]{scht.eps} \caption{Schematic photon trajectory for relativistic deflection in a spacetime with a photon sphere, with both axes in Schwarzschild units (the photon sphere is the thick orange circle; in the two-loop case the second loop is virtually indistinguishable from the photon sphere)\label{f sch}} \end{figure} \begin{figure} \includegraphics[scale=0.8]{tolt.eps}\caption{Schematic photon trajectory for relativistic deflection in a spacetime without a photon sphere, with both axes in Schwarzschild units \label{f tol} } \end{figure} \section{Relativistic Deflection and Images\label{rd}} Using formula \ref{defl}, the deflection angle in the Schwarzschild spacetime in terms of the dimensionless variables is given by \begin{equation} \hat{\alpha}\lt(x_0\rt) = 2 {\int_{x_0}}^{\infty} \lt(\frac{1}{1-\frac{1}{x}}\rt)^{1/2} \lt[ \lt(\frac{x}{x_0}\rt)^2 \lt(\fr{1-\frac{1}{x_0}}{1-\frac{1}{x}}\rt) -1 \rt]^{-1/2} \ \frac{dx}{x} -\pi \label{defls} \end{equation} and \begin{equation} \sin\theta=\frac{2M}{D_{d}}\frac{x_{0}}{\sqrt{1-\frac{1}{x_{0}}}} \end{equation} allows us to express the deflection angle as a function of the image location, $\hat{\alpha}(\theta)$. For the corresponding expressions in JMN refer to \cite{spnj12} and for JNW to \cite{Virbhadra:2002ju}.
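The integral \ref{defls} can be evaluated numerically once the inverse square root singularity at the turning point is absorbed by a change of variables. The sketch below (midpoint rule; resolution chosen by hand) is illustrative rather than the method used for the tables in this paper. It reproduces the weak-field limit $\hat{\alpha}\simeq2/x_{0}$ and the growth of the deflection past $2\pi$ as $x_{0}$ approaches the photon sphere at $x=3/2$.

```python
import math

def schw_deflection(x0, n=4000):
    """Deflection angle alpha_hat(x0) for Schwarzschild, x0 = r0/2M > 3/2.
    Substituting x = x0/u and then u = 1 - w**2 removes the inverse
    square root singularity at the turning point; the midpoint rule
    avoids the endpoints."""
    g0 = 1.0 - 1.0 / x0
    step = 1.0 / n
    total = 0.0
    for i in range(n):
        w = (i + 0.5) * step
        u = 1.0 - w * w              # u = x0/x runs over (0, 1)
        x = x0 / u
        gx = 1.0 - 1.0 / x
        bracket = (g0 / gx) / (u * u) - 1.0   # (x/x0)^2 g(x0)/g(x) - 1
        # dx/x = du/u and du = 2w dw
        total += 2.0 * w / (u * math.sqrt(gx * bracket))
    return 2.0 * total * step - math.pi
```

For $x_{0}=1000$ this returns the weak-field value close to $2/x_{0}=0.002$, while for $x_{0}$ close to $3/2$ the result exceeds $2\pi$, signalling relativistic images.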
For Tolman-VI we have, for $x_{0}<x_{b}$, \begin{eqnarray} \hat{\alpha}\lt(x_0\rt)= \nonumber 2\int_{x_{0}}^{x_{b}}\left(2-\lambda^{2}\rt)^{1/2}\left[\lt(\frac{x}{x_{0}}\rt)^{2} \lt(\frac{x_{0}^{1-\lambda}- \kappa x_{0}^{1+\lambda}}{x^{1-\lambda}- \kappa x^{1+\lambda}}\rt)^{2}-1\rt]^{-1/2} \frac{dx}{x}\\ +2\int_{x_{b}}^{\infty}\lt(\frac{1}{1-\frac{1}{x}}\rt)^{1/2}\lt[\lt(\frac{x}{x_{0}}\rt)^{2}\frac{a^{2}(x_{0}^{1-\lambda}-\kappa x_{0}^{1+\lambda})^{2}}{1-\frac{1}{x}}-1\rt]^{-1/2}\frac{dx}{x}-\pi\label{deflt} \end{eqnarray} and \begin{equation} \sin\theta = \frac{2M}{D_d}\frac{x_0}{a(x_{0}^{1-\lambda}-\kappa x_{0}^{1+\lambda})} \label{thx0t} \end{equation} where $a$ and $\kappa$ are given by \ref{ak}. It was shown in \cite{spnj12} that in the JMN metric relativistic deflection occurs in the parameter range $M_{0}>0.475$; indeed for $M_{0}>2/3$ there is a photon sphere and the deflection angle diverges. In the Tolman-VI solution, however, there is no photon sphere, but it turns out that the maximal deflection exceeds $2\pi$ for $\lambda\lesssim0.44$, or equivalently $x_{b}\lesssim2.25$. The relativistic images for a metric with a photon sphere are clumped together around the angular location of the photon sphere, and successive images are exponentially demagnified \cite{boz02}. This holds true for both the Schwarzschild and the JNW metric (in the parameter range where there is a photon sphere). So there is a forbidden angular region in which no images can form, and the critical angle corresponds to the angular location of the photon sphere. This is true only if there is a single (unstable) photon sphere in the spacetime. For multiple photon spheres one can have additional images corresponding to light rays that go inside the outer photon sphere and turn back, and so the lensing signature is expected to be qualitatively different.
While this is an interesting avenue for further study, we do not consider such cases in this paper. In the absence of a photon sphere, for example in the JMN metric when $M_{0}<2/3$, the images are better separated and, although highly demagnified, the magnification is of the same order of magnitude for successive images, as demonstrated in \cite{spnj12}. We simply note here that both these features generalize qualitatively to the Tolman-VI metric despite quantitative differences, and the numbers involved are in the same ballpark. As it is not very enlightening, we forgo the presentation of image positions and magnifications for the Tolman-VI metric. Also, as with the JMN metric, the deflection angle monotonically increases with decreasing impact parameter, and consequently, for the reasons presented in \cite{spnj12}, there are no radial critical curves in the Tolman-VI solution. \section{Einstein Ring\label{er}} If the source, the lens, and the observer lie on a single straight line, i.e., in the so-called aligned configuration, a circular image pattern known as the Einstein Ring (ER) is formed. The rings formed by photons which have been deflected by $2\pi,4\pi$, etc. are referred to as relativistic rings. For the Schwarzschild and weakly naked JNW cases, there are in principle an infinite number of relativistic ERs, and they are located very close to the angular location of the photon sphere. This angle is called the critical angle $\theta_{crit}$, and the angular location of all images has to be greater than $\theta_{crit}$. For a Schwarzschild black hole of $4\times10^{6}M_{\odot}$ at $8.5$ kpc one gets $\theta_{crit}\sim24.1\text{ microarcseconds}$, and for $100M_{\odot}$ at $1$ kpc one gets $\theta_{crit}\sim5.1\text{ nanoarcseconds}$. In contrast, for SNS, ERs and images are formed below the critical angle of the corresponding Schwarzschild black hole, and are reasonably well separated.
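The quoted values of $\theta_{crit}$ follow from $\theta_{crit}\simeq J_{ph}/D_{d}$ with the critical impact parameter $J_{ph}=3\sqrt{3}M$; a small arithmetic sketch (with approximate physical constants) reproduces them.

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2 (approximate)
c = 2.998e8              # m/s
M_SUN = 1.989e30         # kg
PC = 3.086e16            # m
RAD_TO_ARCSEC = 180.0 * 3600.0 / math.pi

def theta_crit_schw(mass_solar, dist_pc):
    """Critical angle (arcsec) for a Schwarzschild lens: the angular size
    of the critical impact parameter J_ph = 3*sqrt(3)*GM/c^2 at distance
    D_d, in the small-angle approximation."""
    m_geo = G * mass_solar * M_SUN / c**2      # GM/c^2 in metres
    j_ph = 3.0 * math.sqrt(3.0) * m_geo
    return (j_ph / (dist_pc * PC)) * RAD_TO_ARCSEC

th_gal = theta_crit_schw(4.0e6, 8.5e3) * 1.0e6    # microarcseconds, ~24.1
th_star = theta_crit_schw(100.0, 1.0e3) * 1.0e9   # nanoarcseconds, ~5.1
```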
In tables \ref{ertg} and \ref{erts} we show the angular locations of the relativistic ERs for $4\times10^{6}M_{\odot}$ at $8.5$ kpc and for $100M_{\odot}$ at $1$ kpc respectively, for both SNS metrics studied in the paper. \begin{longtable}[l]{|l|c|c|c|c|} \caption{Angular location of relativistic ERs for $4\times10^{6}M_{\odot}$ at $8.5$ kpc for both SNS metrics studied in the paper; $\theta$ is in microarcseconds } \label{ertg} \endfirsthead \hline ER & JMN$(M_{0}=0.63)$ & JMN$(M_{0}=0.615)$ & Tolman-VI $(\lambda=0.13)$ & Tolman-VI $(\lambda=0.14)$\tabularnewline \hline I & 23.96 & 23.38 & 14.90 & 14.83\tabularnewline \hline II & 20.76 & 16.50 & 7.87 & 7.59\tabularnewline \hline III & 15.26 & 5.96 & 4.33 & 3.89\tabularnewline \hline IV & 8.17 & {*} & 1.90 & {*}\tabularnewline \hline \end{longtable} \begin{longtable}[l]{|l|c|c|c|c|} \caption{Angular location of relativistic ERs for $100M_{\odot}$ at $1$ kpc for both SNS metrics studied in the paper; $\theta$ is in nanoarcseconds} \label{erts} \endfirsthead \hline ER & JMN$(M_{0}=0.63)$ & JMN$(M_{0}=0.615)$ & Tolman-VI $(\lambda=0.13)$ & Tolman-VI $(\lambda=0.14)$\tabularnewline \hline I & 5.09 & 4.96 & 3.16 & 3.15\tabularnewline \hline II & 4.41 & 3.51 & 1.67 & 1.61\tabularnewline \hline III & 3.24 & 1.26 & 0.92 & 0.82\tabularnewline \hline IV & 1.73 & {*} & 0.40 & {*}\tabularnewline \hline \end{longtable} In the next section we compute and analyze the time delay between the relativistic ERs for these spacetimes. \section{Time Delay\label{tdel}} The coordinate time taken by the photon to travel from $r_{0}$ to $r$ is given by \begin{equation} t(r,r_{0})=\int_{r_{0}}^{r}\sqrt{\frac{1}{f(r)g(r)}}\frac{1}{\sqrt{1-\frac{h(r_{0})r_{0}^{2}}{h(r)r^{2}}\frac{g(r)}{g(r_{0})}}}dr\label{td} \end{equation} Let ${\cal R}_{s}$ and ${\cal R}_{o}$ be the radial coordinates of the source and the observer measured from the center of the lens.
In dimensionless units these are \begin{equation} \chi_{s}=\frac{{\cal R}_{s}}{2M}\text{ and }{\cal \chi}_{o}=\frac{{\cal R}_{o}}{2M}\text{.}\label{XtoR} \end{equation} From the geometry of the configuration we can write $\chi_{s}$ and $\chi_{o}$ in terms of the distances and angles involved as \cite{Virbhadra:2007kw} \begin{eqnarray} \chi_{s} & = & \frac{D_{s}}{2M}\sqrt{\left(\frac{D_{ds}}{D_{s}}\right)^{2}+\tan^{2}\beta}\text{,}\nonumber \\ \chi_{o} & = & \frac{D_{d}}{2M}\text{.} \end{eqnarray} As we are concerned with ERs only, we take $\beta=0$. Then, since we take $\frac{D_{ds}}{D_{s}}=\frac{1}{2}$, we have $\chi_{s}={\cal \chi}_{o}$ and equivalently ${\cal R}_{o}={\cal R}_{s}$. Now the time difference between the $m$th and the $n$th relativistic ERs is given by \begin{equation} \Delta t_{m,n}=2t({\cal R}_{o},r_{0_{m}})-2t({\cal R}_{o},r_{0_{n}})\label{tdd} \end{equation} where $r_{0_{m}}$ is the distance of closest approach corresponding to the $m$th relativistic ER; the factor of $2$ arises because ${\cal R}_{s}={\cal R}_{o}$, so the travel times on the two sides of the point of closest approach are equal.
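The integral \ref{td} has the same inverse square root turning-point singularity as the deflection integral and can be handled by the substitution $x=x_{0}+y^{2}$. The sketch below (midpoint rule; illustrative, not the production method behind the tables) includes a flat-space sanity check, $t=\sqrt{x_{obs}^{2}-x_{0}^{2}}$, and exhibits the expected Shapiro-type excess for Schwarzschild.

```python
import math

def travel_time(g, f, h, x0, x_obs, n=4000):
    """Coordinate travel time (eq. td, in units of 2M) from closest
    approach x0 out to x_obs; the substitution x = x0 + y**2 removes the
    turning-point singularity, and the midpoint rule avoids y = 0."""
    g0, h0 = g(x0), h(x0)
    y_max = math.sqrt(x_obs - x0)
    step = y_max / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * step
        x = x0 + y * y
        arg = 1.0 - (h0 * x0**2 * g(x)) / (h(x) * x**2 * g0)
        total += 2.0 * y / math.sqrt(f(x) * g(x) * arg)   # dx = 2y dy
    return total * step

one = lambda x: 1.0
schw = lambda x: 1.0 - 1.0 / x

t_flat = travel_time(one, one, one, 3.0, 100.0)   # flat space: sqrt(x_obs^2 - x0^2)
t_schw = travel_time(schw, schw, one, 3.0, 100.0) # Shapiro-type excess over t_flat
```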
Scaling by the Schwarzschild time, writing in dimensionless units $\tau=\frac{t}{2M}$, and for notational simplicity writing $\tau(\chi_{o},x_{0})$ as $\tau(x_{0})$, we list the time delay formulas for the cases under study in table \ref{tdtab}, where $\gamma\equiv\frac{M_{0}}{1-M_{0}}$ and we have used $x_{b}=1/M_{0}$ for the JMN case. \begin{table} \caption{Time delay formulas (in Schwarzschild units) for the spacetimes under study} \begin{tabular}{|c|c|} \hline SpaceTime & Time Delay Formulae (Dimensionless)\tabularnewline \hline \hline Schwarzschild & $\tau(x_{0})=2\int_{x_{0}}^{\chi_{o}}\frac{1}{1-\frac{1}{x}}\frac{1}{\sqrt{1-\frac{x_{0}^{2}}{x^{2}}\frac{\left(1-\frac{1}{x}\right)}{\left(1-\frac{1}{x_{0}}\right)}}}dx$\tabularnewline \hline \hline JMN & $\tau(x_{0})=2\int_{x_{0}}^{x_{b}}\frac{1}{(1-M_{0})\sqrt{(xM_{0})^{\gamma}}}\frac{1}{\sqrt{1-\frac{x_{0}^{2}}{x^{2}}\frac{x^{\gamma}}{x_{0}^{\gamma}}}}dx+2\int_{x_{b}}^{\chi_{o}}\frac{1}{1-\frac{1}{x}}\frac{1}{\sqrt{1-\frac{x_{0}^{2}}{x^{2}}\frac{\left(1-\frac{1}{x}\right)}{(1-M_{0})(x_{0}M_{0})^{\gamma}}}}dx$\tabularnewline \hline \hline Tolman-VI & $\tau(x_{0})=2\int_{x_{0}}^{x_{b}}\frac{\sqrt{2-\lambda^{2}}}{a\left(x^{1-\lambda}-\kappa x^{1+\lambda}\right)}\frac{1}{\sqrt{1-\frac{x_{0}^{2}}{x^{2}}\frac{\left(x^{1-\lambda}-\kappa x^{1+\lambda}\right)^{2}}{\left(x_{0}^{1-\lambda}-\kappa x_{0}^{1+\lambda}\right)^{2}}}}dx+2\int_{x_{b}}^{\chi_{o}}\frac{1}{1-\frac{1}{x}}\frac{1}{\sqrt{1-\frac{x_{0}^{2}}{x^{2}}\frac{\left(1-\frac{1}{x}\right)}{a^{2}\left(x_{0}^{1-\lambda}-\kappa x_{0}^{1+\lambda}\right)^{2}}}}dx$\tabularnewline \hline \hline JNW & $\tau(x_{0})=2\int_{x_{0}}^{\chi_{o}}\frac{1}{\left(1-\frac{1}{\nu x}\right)^{\nu}}\frac{1}{\sqrt{1-\frac{x_{0}^{2}}{x^{2}}\left(\frac{1-\frac{1}{\nu x}}{1-\frac{1}{\nu x_{0}}}\right)^{2\nu-1}}}dx$\tabularnewline \hline \end{tabular} \label{tdtab} \end{table} The time delay in units of Schwarzschild time is then given by $\Delta\tau_{m,n}=\tau(x_{0_{m}})-\tau(x_{0_{n}})$. Below we highlight the qualitative differences in the behavior of the relativistic time delay between spacetimes with a photon sphere and those without one.
\subsection{Singularities with a photon sphere (Black Holes and WNS)\label{tdps}} As discussed earlier, if the spacetime has a photon sphere then there is a forbidden angular region, and all relativistic images are located close to one another, near the angular location of the photon sphere. In this case the time delay between successive relativistic rings is more or less constant and is neatly related to the light-travel time around a circle on the photon sphere. This is intuitive: in the presence of a photon sphere, the light trajectories corresponding to successive rings differ by one extra, nearly circular loop at the photon sphere radius, as can be seen from figure \ref{f sch}. Indeed, the time delay between ERs is well approximated by \begin{equation} \Delta t_{m,n}=\frac{2\pi(m-n)r_{ph}}{\sqrt{g(r_{ph})}}=2\pi(m-n)J_{ph} \end{equation} where $r_{ph}$ is the radius of the photon sphere and $J_{ph}$ is the corresponding impact parameter \cite{Bozza:2003cp}. For the Schwarzschild black hole the time delay between successive relativistic ERs then becomes roughly $2\pi\times3\sqrt{3}M$, and for JNW it becomes $2\pi\times\frac{1+2\nu}{\nu}\left(1-\frac{2}{1+2\nu}\right)^{\frac{1-2\nu}{2}}M$. For the same Schwarzschild mass, increasing the scalar charge decreases the time delay. Thus, as with the position and magnification of relativistic images, in the presence of a photon sphere the time delay is determined by the metric near the photon sphere and displays certain universal features irrespective of the presence or absence of an event horizon. Also, as far as observations are concerned, since in such a situation the higher order relativistic images are clumped together (and excessively demagnified) \cite{boz02}, the relevant observational quantity is the time delay between the first and second images. The scenario becomes different in the absence of a photon sphere, as we discuss below.
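The approximation above is easy to evaluate numerically. The following Python sketch (our own illustration, not code from the paper) computes $\Delta t_{m,n}=2\pi(m-n)J_{ph}$, in units of $M$ with $G=c=1$, for the Schwarzschild and JNW critical impact parameters:

```python
import math

def j_schwarzschild():
    """Critical impact parameter 3*sqrt(3), in units of M, for Schwarzschild."""
    return 3 * math.sqrt(3)

def j_jnw(nu):
    """Critical impact parameter for the JNW metric, in units of M.

    nu = 1 corresponds to Schwarzschild; decreasing nu corresponds to
    increasing scalar charge (an assumption of our illustration).
    """
    return (1 + 2 * nu) / nu * (1 - 2 / (1 + 2 * nu)) ** ((1 - 2 * nu) / 2)

def delta_t(m, n, j_ph):
    """Approximate delay between the m-th and n-th rings: 2*pi*(m - n)*J_ph."""
    return 2 * math.pi * (m - n) * j_ph
```

One can check that $J_{ph}$ for JNW reduces to the Schwarzschild value $3\sqrt{3}$ at $\nu=1$ and decreases monotonically as $\nu$ decreases, matching the statement that a larger scalar charge gives a smaller delay.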
\subsection{Singularities without photon sphere (SNS)\label{tdnps}} \begin{longtable}[l]{|l|c|c|c|c|} \caption{Time delay for a $4\times10^{6}M_{\odot}$ object at $8.5$ kpc for both SNS metrics studied in the paper; $\Delta\tau$ is in seconds} \label{tdtg} \endfirsthead \hline Time Delay & JMN$(M_{0}=0.63)$ & JMN$(M_{0}=0.615)$ & Tolman-VI $(\lambda=0.13)$ & Tolman-VI $(\lambda=0.14)$\tabularnewline \hline $\Delta\tau_{2,1}$ & 299.6 & 272.0 & 145.8 & 141.9\tabularnewline \hline $\Delta\tau_{3,2}$ & 242.4 & 151.8 & 78.8 & 74.9\tabularnewline \hline $\Delta\tau_{4,3}$ & 157.7 & {*} & 41.4 & {*}\tabularnewline \hline \end{longtable} \begin{longtable}[l]{|l|c|c|c|c|} \caption{Time delay for a $100M_{\odot}$ object at $1$ kpc for both SNS metrics studied in the paper; $\Delta\tau$ is in milliseconds} \label{tdts} \endfirsthead \hline Time Delay & JMN$(M_{0}=0.63)$ & JMN$(M_{0}=0.615)$ & Tolman-VI $(\lambda=0.13)$ & Tolman-VI $(\lambda=0.14)$\tabularnewline \hline $\Delta\tau_{2,1}$ & 7.5 & 6.8 & 3.6 & 3.5\tabularnewline \hline $\Delta\tau_{3,2}$ & 6.1 & 3.8 & 2.0 & 1.9\tabularnewline \hline $\Delta\tau_{4,3}$ & 3.9 & {*} & 1.0 & {*}\tabularnewline \hline \end{longtable} In the absence of a photon sphere, the time delay between successive images is no longer roughly constant: explicit calculation for various cases shows that the successive time delays keep decreasing. In tables \ref{tdtg} and \ref{tdts} we show the time delay between relativistic ERs for a $4\times10^{6}M_{\odot}$ object at $8.5$ kpc and for a $100M_{\odot}$ object at $1$ kpc, respectively, for both SNS metrics studied in the paper. It remains to be seen whether this is a feature generic to SNS with a monotonic deflection angle. In passing we note that, for the particular cases studied here, the successive time delays are smaller for the Tolman-VI case than for the corresponding JMN case, where by corresponding we mean spacetimes admitting the same maximal deflection and hence the same number of relativistic images and rings.
The time delays are also generally smaller than for a black hole of the corresponding Schwarzschild mass. In case the images can be resolved (as for the supermassive black hole case), the time delay between the first and last images also becomes an observational quantity of interest. One might expect it to be fairly large when there are many images, since the successive time delays add up; however, the decreasing trend in the successive delays compensates for this to some extent. For the supermassive black hole case the time delay is of the order of seconds, while for $100 M_{\odot}$ objects at kpc distances it becomes of the order of milliseconds. In this case the images are also unlikely to be resolved (their separation, as discussed earlier, being of the order of nanoarcseconds). But if the source has a characteristic variability, one might hope to find its signatures in the unresolved image. Relativistic microlensing studies in such cases might be an interesting idea to pursue. \section{ Discussion and Conclusion \label{dis}} In this paper we have studied the time delay properties of gravitational lenses in the strong deflection regime, when relativistic images can be formed, and its role as an in-principle probe of the cosmic censorship question. Three of the four metrics considered in this paper have been previously explored from the relativistic lensing perspective. For the fourth one, viz.\ the Tolman-VI solution, we find that the basic features of image positions and magnifications are similar to the JMN case without a photon sphere. Thus the lensing signatures of the JMN case, which happens to be a toy example of a naked singularity solution obtained as the collapse end state for a fluid with zero radial pressure, qualitatively generalize to an analogous scenario with the inclusion of radial pressure, i.e., the Tolman-VI case.
We have also presented in this paper the time delay between relativistic ERs for the JMN and Tolman-VI cases, both of which serve as examples of SNS, and contrasted them with the previously studied examples of relativistic time delay in the literature, which, to the best of our knowledge, have all been either black holes or WNS \cite{Virbhadra:2000ju,bom04,LarranagaRubio:2003vp}. We have confirmed that there are important differences between SNS and WNS in the time delays between relativistic images, which we discuss below. We have used the Schwarzschild black hole, which has a photon sphere at the radial coordinate $r=3M$, as a standard against which the time delay differences between successive Einstein rings for our cases of interest are compared. For the differential time delay between relativistic images formed by successive loops made by the photon, there is practically no difference between sources along the optic axis and slightly misaligned ones; moreover, only strongly aligned configurations are of importance for lensing. Hence, in this work we have presented the relativistic time delay only between successive Einstein rings. This time delay is of the order of a few seconds for the galactic-center supermassive object, whereas it is of the order of milliseconds for $100$ solar mass objects in our galaxy at kpc distance scales. For Schwarzschild black holes, the time delay difference is very close to $2\pi\times 3 \sqrt{3}M$, which is $2\pi$ times the critical impact parameter, as is to be expected. Other spacetimes with a photon sphere emulate this feature, though the time delay difference will differ since it depends on the other parameters that characterize the spacetime apart from the mass. As an example, consider the JNW metric discussed in section \ref{tdps}, where this time delay difference decreases with increasing scalar charge.
The contrasting scenario is the formation of multiple relativistic Einstein rings in the presence of naked singularities not covered by a photon sphere. The successive Einstein rings formed for a source close to the optic axis of the lens then have successively smaller impact parameters, and the differential time delay between these rings progressively decreases, as was shown in section \ref{tdnps} for two prototype examples. Were we to detect a time delay pattern of this nature, we would possibly have a scenario in which cosmic censorship is violated. This requires the images to be resolved. However, even when the images are not resolved, one can possibly discriminate SNS from WNS and black holes. This is because an intrinsic source variability will manifest itself in a characteristic way in the unresolved composite Einstein ring for black holes and WNS, owing to the (almost) periodic nature of the differential time delays (and the exponential suppression of magnification for higher order rings), which can be distinguished from the unresolved composite Einstein ring for SNS, where no such signature of periodicity is expected. Thus there are important differences in the ratios of time delays between successive relativistic images for spacetimes with a photon sphere and those without one, and these might help in observationally distinguishing SNS from WNS and black holes. As we have remarked earlier, we have not considered spacetimes with multiple photon spheres in this study; WNS with multiple photon spheres will have signatures different from the WNS studied in this paper. The observation of relativistic images would be a wonderful test of gravitational physics in the strong field regime.
Practical difficulties and technological challenges notwithstanding, very-long-baseline interferometry (VLBI) is a promising technique which might, in the near future, achieve the angular and time resolution needed to bring at least some of the questions that relativistic lensing can probe, such as the one studied here, under observational purview, at least for the galactic central supermassive object \cite{VLBI,Fish:2009va}. With this in mind, it will be useful to study more realistic examples of naked singularities under more realistic astrophysical conditions. \section*{Acknowledgments} We would like to thank Dr.~K.~S.~Virbhadra for valuable correspondence and the anonymous referee for valuable comments and suggestions.
\section{Introduction} This paper studies, as a reflection group, the full general linear group $\mathrm{GL}(V)\cong \mathrm{GL}_n(\F)$, where $V$ is an $n$-dimensional vector space over a field $\F$. An element $g$ in $\mathrm{GL}(V)$ is called a \defn{reflection} if its \defn{fixed subspace} $V^g:=\{v\in V:\ gv=v\} = \ker(g - 1)$ has codimension $1$. A \defn{reflection group} is a subgroup of $\mathrm{GL}(V)$ generated by reflections.\footnote{Our definitions here deviate slightly from the literature, where one often insists that a reflection have \emph{finite order}. In particular, by our definition, the determinant of a reflection $g$ need not be a root of unity in $\F^\times$, and $\mathrm{GL}(V)=\mathrm{GL}_n(\F)$ is still generated by reflections even when $\F$ is infinite.} It is not hard to show that $\mathrm{GL}(V)$ itself is generated by its subset $T$ of reflections, and hence is a reflection group. {\it Finite, real} reflection groups $W$ inside $\mathrm{GL}_n(\mathbb{R})\cong\mathrm{GL}(V)$ are well-studied classically via their {\it Coxeter presentations} $(W,S)$. Here $S$ is a choice of $n$ generating {\it simple reflections}, which are the orthogonal reflections across hyperplanes that bound a fixed choice of {\it Weyl chamber} for $W$. Recent work by Brady and Watt \cite{BradyWatt2} and Bessis \cite{Bessis} has focused attention on an alternate presentation, generating real reflection groups $W$ by their subset $T$ of {\it all} reflections.
Their work makes use of the coincidence, first proven by Carter \cite{Carter}, between two natural functions $W \rightarrow \{0,1,2,\ldots\}$ defined as follows for $w \in W$: \begin{compactitem} \item the {\it reflection length}\footnote{Warning: this is {\it not} the usual Coxeter group length $\ell_S(w)$ coming from the Coxeter system $(W,S)$.} given by $ \ell_T(w):=\min\{\ell: w=t_1 t_2 \cdots t_\ell \text{ with }t_i \in T\}, $ and \item the {\it fixed space codimension} given by $ \operatorname{codim}(V^w):= n-\dim(V^w). $ \end{compactitem} While both of these functions can be defined for all reflection groups, it has been observed (see, e.g., Foster-Greenwood \cite{FosterGreenwood}) that for non-real reflection groups, and even for most finite complex reflection groups, these two functions differ. This leads to two partial orders, \begin{compactitem} \item the {\it $T$-prefix order}: $g \leq h$ if $\ell_T(g)+\ell_T(g^{-1}h) =\ell_T(h)$, and \item the {\it fixed space codimension order}: $g \leq h$ if $\operatorname{codim}(V^g)+\operatorname{codim}(V^{g^{-1}h})=\operatorname{codim}(V^h)$. \end{compactitem} We discuss some general properties of these orders in Section~\ref{section:generalities}. One of our first results, Proposition~\ref{length=codim}, is the observation that when considering \emph{as a reflection group} the full general linear group $\mathrm{GL}(V)$ for a finite-dimensional vector space $V$ over a field, one again has the coincidence $\ell_T(g)=\operatorname{codim}(V^g)$, and hence the two partial orders above give the same order on $\mathrm{GL}(V)$, which we call the {\it absolute order}. We proceed to prove two basic enumerative results about this absolute order on $\mathrm{GL}(V)$ when the field $\F=\F_q$ is finite. First, Section~\ref{ranks-in-whole-order} uses M\"obius inversion to count the elements in $\mathrm{GL}_n(\F_q)$ of a fixed poset rank, that is, those with $\ell_T(g)=\operatorname{codim}(V^g)$ fixed. 
Second, in Section~\ref{section:chains}, we examine the interval $[e,c]$ in the absolute order on $\mathrm{GL}_n(\F_q)$ from the identity element $e$ to a \emph{Singer cycle} $c$. There has been established in recent years a close analogy between the Singer cycles in $\mathrm{GL}_n(\F_q)$ and {\it Coxeter elements} in real reflection groups; see \cite[\S8--\S9]{CSP}, \cite[\S7]{RStantonWebb}, \cite{LRS}. The interval from the identity to a Coxeter element in the absolute order on a real reflection group $W$ is a very important and well-behaved poset, called the poset of {\it noncrossing partitions} for $W$. Our main result, Theorem~\ref{thm:flag-f-vector}, gives a strikingly simple formula for the {\it flag $f$-vector} of $[e,c]$ in $\mathrm{GL}_n(\F_q)$: fixing an ordered composition $\alpha=(\alpha_1,\ldots,\alpha_{m})$ of $n=\sum_i \alpha_i$, the number of chains $e=g_0 < g_1 < \cdots <g_{m-1} < g_m=c$ in absolute order having $\ell_T(g_i)-\ell_T(g_{i-1})=\alpha_i$ is \[ q^{\varepsilon(\alpha)} \cdot (q^n-1)^{m-1} \quad \text{ where } \quad \varepsilon(\alpha):=\sum_{i = 1}^m (\alpha_i-1)(n-\alpha_i). \] The analogous flag $f$-vector formulas in real reflection groups are not as simple. The proof of Theorem~\ref{thm:flag-f-vector} is involved, using a character-theoretic enumeration technique due to Frobenius, along with information about the complex characters of $\mathrm{GL}_n(\F_q)$ that goes back to Green \cite{Green} and Steinberg \cite{Steinberg}. The proof has the virtue of applying not only to Singer cycles in $\mathrm{GL}_n(\F_q)$, but also to elements which are {\it regular elliptic}; see Section~\ref{section:chains} for the definition. Section~\ref{reformulation-section} reformulates the flag $f$-vector in terms of certain subspace arrangements. We hope that this may lead to a more direct approach to Theorem~\ref{thm:flag-f-vector}. Section~\ref{remarks} collects further questions and remarks. 
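As a sanity check on the flag $f$-vector formula in the smallest cases, one can enumerate $\mathrm{GL}_2(\F_q)$ by brute force and count the reflections below a Singer cycle: for $n=2$ and $\alpha=(1,1)$ the formula predicts $q^{0}\cdot(q^{2}-1)$ chains $e<t<c$. The following Python sketch (our own verification script, not code from the paper) does this for $q=2,3$, taking $c$ to be the companion matrix of a primitive quadratic polynomial and using $\ell(g)=\operatorname{codim}(V^{g})=\operatorname{rank}(g-1)$:

```python
from itertools import product

def rank2(m, q):
    """Rank of a 2x2 matrix over F_q (q prime), entries already reduced mod q."""
    if all(x == 0 for row in m for x in row):
        return 0
    return 2 if (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % q else 1

def length(g, q):
    """Absolute length l(g) = codim V^g = rank(g - 1)."""
    return rank2([[(g[i][j] - (i == j)) % q for j in range(2)] for i in range(2)], q)

def mul(a, b, q):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % q
                       for j in range(2)) for i in range(2))

def inverse(g, q):
    d = pow((g[0][0] * g[1][1] - g[0][1] * g[1][0]) % q, -1, q)
    return (((g[1][1] * d) % q, (-g[0][1] * d) % q),
            (((-g[1][0]) * d) % q, (g[0][0] * d) % q))

def reflections_below(c, q):
    """Count reflections t with l(t) + l(t^{-1} c) = l(c) = 2, i.e. t <= c."""
    count = 0
    for e in product(range(q), repeat=4):
        g = (e[0:2], e[2:4])
        if (g[0][0] * g[1][1] - g[0][1] * g[1][0]) % q == 0:
            continue  # singular: not an element of GL_2(F_q)
        if length(g, q) == 1 and length(mul(inverse(g, q), c, q), q) == 1:
            count += 1
    return count

# Singer cycles: companion matrices of the primitive polynomials
# x^2 + x + 1 over F_2 and x^2 + x + 2 over F_3.
c2 = ((0, 1), (1, 1))
c3 = ((0, 1), (1, 2))
```

Both counts agree with $q^{n-2}(q^{n}-1)=q^{2}-1$: three reflections below $c$ in $\mathrm{GL}_2(\F_2)$ and eight in $\mathrm{GL}_2(\F_3)$.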
\subsection*{Acknowledgements} The authors thank Christos Athanasiadis, Valentin F\'eray, Alejandro Morales, Kyle Petersen, and Dennis Stanton for helpful remarks, suggestions, and references. \section{Length and prefix order} \label{section:generalities} The next few subsections collect some easy general properties of the length function with respect to a choice of generators for a group, and the resulting partial order defined in terms of prefixes of reduced expressions. We borrow heavily from work of Armstrong \cite[\S 2.4]{Armstrong}, Bessis \cite[\S 0.4]{Bessis}, Brady and Watt \cite{BradyWatt}, and Foster-Greenwood \cite{FosterGreenwood}, while attempting to clarify the hypotheses responsible for various properties. \subsection{Generated groups} \begin{definition} A {\it generated group} is a pair $(G,T)$ where $G$ is a group and $T \subseteq G$ a subset that generates $G$ as a monoid: every $g$ in $G$ has at least one {\it $T$-word for $g$}, meaning a sequence $(t_1,t_2,\ldots,t_\ell)$ with $g=t_1 t_2 \cdots t_\ell$. The length function $\ell=\ell_T: G \rightarrow \mathbb{N}$ is defined by \[ \ell(g):=\min\{\ell: g=t_1 t_2 \cdots t_\ell \text{ with }t_i \in T\}. \] That is, $\ell(g)$ is the minimum length of a $T$-word for $g$. Words for $g$ achieving this minimum length are called {\it $T$-reduced}. Equivalently, $\ell(g)$ is the length of the shortest directed path from the identity $e$ to $g$ in the Cayley graph of $(G, T)$. \end{definition} It should be clear from this definition that $\ell$ is {\it subadditive}, meaning that \begin{equation} \label{length-subadditivity} \ell(gh) \leq \ell(g) +\ell(h). \end{equation} Understanding the case where equality occurs in \eqref{length-subadditivity} motivates the next definition. \begin{definition}[Prefix order] \label{order-definition} Given a generated group $(G,T)$, define a binary relation $g \leq h$ on $G$ by any of the following three equivalent conditions. 
\begin{compactenum}[(i)] \item Any $T$-reduced word $(t_1,\ldots,t_{\ell(g)})$ for $g$ extends to a $T$-reduced word $(t_1,\ldots,t_{\ell(h)})$ for $h$. \item There is a shortest directed path $e$ to $h$ in the Cayley graph for $(G,T)$ going via $g$. \item $\ell(g)+\ell(g^{-1}h)=\ell(h).$ \end{compactenum} \end{definition} \noindent Condition (i) makes the following proposition a straightforward exercise, left to the reader. \begin{proposition} For $(G, T)$ a generated group, the binary relation $\leq$ is a partial order on $G$, with the identity $e$ as minimum element. It is graded by the function $\ell(-)$, in the sense that for any $g < h$, one has $\ell(h)=\ell(g)+1$ if and only if there is no $g'$ with $g < g' < h.$ \end{proposition} \begin{example} Taking $G=\mathrm{GL}_2(\F_2)$ and $T$ the set of all reflections in $G$, the Hasse diagram for $\leq$ on $G$ is as follows: \[ \xymatrix{ \left[\substack{11 \\ 10}\right]& &\left[\substack{01 \\ 11}\right]\\ \left[\substack{11 \\ 01}\right] \ar@{-}[u] \ar@{-}[urr] &\left[\substack{01 \\ 10}\right] \ar@{-}[ul] \ar@{-}[ur] &\left[\substack{10 \\ 11}\right] \ar@{-}[u] \ar@{-}[ull]\\ &\left[\substack{10 \\ 01}\right]\ar@{-}[ul] \ar@{-}[u] \ar@{-}[ur] & } \] Coincidentally, this is isomorphic to the absolute order on the symmetric group $\mathfrak{S}_3$, since the irreducible reflection representation for $\mathfrak{S}_3$ over $\F_2$ is isomorphic to $\mathrm{GL}_2(\F_2)$. \end{example} \subsection{Conjugacy-closed generators} When $(G,T)$ is a generated group in which $T$ is closed under conjugation by elements of $G$, one has $ \ell(ghg^{-1})=\ell(h) $ for all $g, h$ in $G$. This implies, for example, that $ \ell(gh)=\ell(g^{-1} \cdot gh \cdot g)=\ell(hg). $ The next proposition asserts an interesting consequence for the order $\leq$ on $G$, namely that it is \defn{locally self-dual}: each interval is isomorphic to its own opposite as a poset. 
\begin{proposition} \label{prop:self-duality} Let $(G, T)$ be a generated group, with $T$ closed under $G$-conjugacy. Then for any $x \leq z$, the bijection $G \rightarrow G$ defined by $y \mapsto x y^{-1} z$ restricts to a poset anti-automorphism $[x, z] \rightarrow [x,z]$. \end{proposition} \begin{proof} We first check the bijection restricts to $[x,z]$. By definition, $y \in [x,z]$ if and only if \begin{equation}\label{y1} \left\{ \begin{aligned} \ell(y) &= \ell(x) + \ell(x^{-1}y),\\ \ell(z) &= \ell(y) + \ell(y^{-1}z), \end{aligned} \right. \end{equation} while $xy^{-1}z \in [x,z]$ if and only if \begin{equation} \label{y2} \left\{ \begin{aligned} \ell(xy^{-1}z) &= \ell(x)+\ell(y^{-1}z), \\ \ell(z) &= \ell(xy^{-1}z)+\ell(z^{-1}yx^{-1}z) = \ell(xy^{-1}z)+\ell(yx^{-1}), \end{aligned} \right. \end{equation} where the last equality in \eqref{y2} uses the conjugacy hypothesis. To see that \eqref{y1} implies \eqref{y2}, note that, assuming \eqref{y1}, one has \begin{align*} \ell(z)&\le \ell(yx^{-1}) + \ell(xy^{-1}z) \\ &\le \ell(x^{-1}y) + \ell(x)+\ell(y^{-1}z) \\ &= (\ell(y)-\ell(x)) + \ell(x)+(\ell(z)-\ell(y)) = \ell(z), \end{align*} using the conjugacy hypothesis to say $\ell(yx^{-1})=\ell(x^{-1}y)$. The fact that one has equality at each inequality above implies \eqref{y2}. Conversely, assuming \eqref{y2}, one has \begin{align*} \ell(z) &= \ell(x)+\ell(y^{-1}z)+\ell(yx^{-1})\\ &\ge \ell(x)+(\ell(z)-\ell(y))+(\ell(y)-\ell(x)) = \ell(z) \end{align*} with equality at the inequality implying \eqref{y1}. It remains to show the restricted bijection $[x,z] \rightarrow [x,z]$ reverses order. Assume $y_1\le y_2$ in $[x,z]$. The preceding calculations show that $\ell(xy_i^{-1}z) = \ell(x) - \ell(y_i) + \ell(z)$. 
Thus \begin{align*} \ell(xy_1^{-1}z) & = \ell(x) - \ell(y_1) + \ell(z) \\ & = (\ell(x) - \ell(y_2) + \ell(z)) + (\ell(y_2) - \ell(y_1)) \\ & = \ell(xy_2^{-1}z) + \ell(y_1^{-1}y_2) \\ & = \ell(xy_2^{-1}z) + \ell(z^{-1} y_2 x^{-1} x y_1^{-1} z), \end{align*} using the conjugacy hypothesis in this last equality. Hence $xy_2^{-1}z \leq xy_1^{-1}z$, as desired. \end{proof} The following is another important feature of $G$-conjugacy-closed generators $T$. Given $g,h$ in $G$, let $g^h:=h^{-1}gh$ and ${}^hg:=hgh^{-1}$, and note that \begin{equation} \label{basis-for-Hurwitz} g \cdot h = h \cdot g^h = {}^gh \cdot g. \end{equation} \begin{definition}[Hurwitz operators] Given a generated group $(G,T)$ with $T$ closed under $G$-conjugacy and any $T$-word \[ \ttt:=(t_1,\ldots,t_{i-1},t_i,t_{i+1},t_{i+2},\ldots, t_m) \] for $g=t_1 \cdots t_m$, for $1 \leq i \leq m-1$ define the \defn{Hurwitz operator} $\sigma_i$ and its inverse $\sigma_i^{-1}$ by \begin{align*} \sigma_i(\ttt) &:=(t_1,\ldots,t_{i-1}, t_{i+1}, t_i^{t_{i+1}}, t_{i+2},\ldots, t_m), \\ \sigma_i^{-1}(\ttt) &:=(t_1,\ldots,t_{i-1}, {}^{t_i}t_{i+1},t_i, t_{i+2},\ldots, t_m). \end{align*} Equation \eqref{basis-for-Hurwitz} shows that $\sigma_i(\ttt)$ and $\sigma_i^{-1}(\ttt)$ are both $T$-words for $g$. \end{definition} \begin{remark} Although it is not needed in the sequel, note that $\{\sigma_1,\ldots,\sigma_{m-1}\}$ satisfy the {\it braid relations} $\sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}$ and $\sigma_i \sigma_j = \sigma_j \sigma_i $ for $|i-j|\geq 2$, defining an action of the {\it braid group} $B_m$ on $m$ strands on the set of all length-$m$ factorizations of $g$. \end{remark} Note that the operator $\sigma_i$ (resp.\ $\sigma_i^{-1}$) can be used to swap any letter in a word for $g$ one position to the left (resp.\ right) \emph{unchanged} at the expense of conjugating the letter with which it swapped; this creates a new word for $g$ of the same length. 
Armstrong calls this the \emph{shifting property} \cite[Lem.~2.5.1]{Armstrong}. It has the following immediate consequence. \begin{proposition}[Subword property] \label{prop:subword} Let $(G, T)$ be a generated group with $T$ closed under $G$-conjugacy. Then $g \leq h$ if and only if there exists a $T$-reduced word \[ \ttt:=(t_1,t_2,\ldots,t_{\ell(h)}) \] for $h$ containing as a subword (not necessarily a prefix, nor contiguous) a word \[ \hat{\ttt}=(t_{i_1},t_{i_2},\ldots,t_{i_{\ell(g)}}) \text{ with } 1 \leq i_1 < \cdots < i_{\ell(g)} \leq \ell(h) \] that is $T$-reduced for $g$. \end{proposition} \begin{proof} The ``only if" direction is direct from condition (i) in Definition~\ref{order-definition} of $g \leq h$. For the ``if" direction, given the $T$-reduced word $\ttt$ for $h$ containing the $T$-reduced subword $\hat{\ttt}$ for $g$, one obtains another $T$-reduced word for $h$ having $\hat{\ttt}$ as a prefix by repeatedly using Hurwitz operators to first move the letter $t_{i_1}$ leftward (unchanged) to the first position, then moving $t_{i_2}$ leftward (unchanged) to the second position, etc. \end{proof} \subsection{Fixed space codimension and reflection groups} Suppose that the group $G$ is given via a faithful representation, that is, $G$ is a subgroup of $\mathrm{GL}_n(\F)=\mathrm{GL}(V)$ where $V=\F^n$ for some field $\F$. This gives rise to another subadditive function $G \rightarrow \mathbb{N}$, namely the fixed space codimension \[ g \mapsto \operatorname{codim}(V^g)=n-\dim(V^g). \] \begin{proposition} \label{codim-subadditivity-prop} One has the subadditivity \begin{equation} \label{codim-subadditivity} \operatorname{codim}(V^{gh}) \leq \operatorname{codim}(V^g) + \operatorname{codim}(V^h) \end{equation} with equality occurring if and only if both of the following hold: \begin{align} \label{spanning-condition} V^g+V^h&= V \qquad \textrm{ and }\\ \label{intersection-condition} V^g \cap V^h&= V^{gh}.
\end{align} \end{proposition} \begin{proof} One has \[ \dim(V^g)+\dim(V^h) = \dim(V^g+V^h)+\dim(V^g \cap V^h) \leq n + \dim(V^g \cap V^h) \] and hence \[ \operatorname{codim}(V^g)+\operatorname{codim}(V^h) \geq n-\dim(V^g \cap V^h) = \operatorname{codim}(V^g \cap V^h), \] with equality if and only if \eqref{spanning-condition} holds. Also, $V^g \cap V^h \subseteq V^{gh}$ and so \[ \operatorname{codim}(V^g \cap V^h) \geq \operatorname{codim}(V^{gh}), \] with equality if and only if \eqref{intersection-condition} holds. Hence \[ \operatorname{codim}(V^g)+\operatorname{codim}(V^h) \geq \operatorname{codim}(V^g \cap V^h) \geq \operatorname{codim}(V^{gh}), \] with equality if and only if both conditions hold. \end{proof} It is natural to compare $\operatorname{codim}(V^g)$ with the length function $\ell(g)=\ell_T(g)$ from before. \begin{definition}[Absolute length, absolute order] When a subgroup $G$ of $\mathrm{GL}(V)$ has a subset $T$ generating $G$ as a monoid, so that $(G,T)$ is a generated group, say that $\ell(g)=\ell_T(g)$ is an {\it absolute length function} if \begin{equation} \label{length-equals-codim} \operatorname{codim}(V^g) = \ell(g) \text{ for all }g\text{ in }G. \end{equation} In this situation, call the prefix order $\leq$ for $(G,T)$ of Definition~\ref{order-definition} the {\it absolute order} on $G$. \end{definition} \begin{proposition} \label{length-bounds-codim} Let $(G,T)$ be a generated group with $G$ a subgroup of $\mathrm{GL}(V)$. \begin{compactenum}[(i)] \item If $\ell(g)$ is an absolute length function, then $G$ must be a reflection group and $T$ must be the set of all reflections in $G$. \item Conversely, if $G$ is a reflection group and $T$ its set of all reflections, one at least has \[ \operatorname{codim}(V^g) \leq \ell(g) \text{ for all } g \text{ in } G. 
\] \end{compactenum} \end{proposition} \begin{proof} Assertion (i) follows as $\operatorname{codim}(V^g)=1$ if and only if $g$ is a reflection, and $\ell_T(g)=1$ if and only if $g$ lies in $T$. For (ii), write $g=t_1 t_2 \cdots t_{\ell(g)}$ and use the subadditivity \eqref{codim-subadditivity}. \end{proof} \begin{example} Carter showed \cite[Lem.~2]{Carter} that one has equality in \eqref{length-equals-codim} for any finite real reflection group $G \subset \mathrm{GL}_n(\mathbb{R})$. \end{example} \begin{example} On the other hand, motivated by considerations from the theory of deformation of skew group rings, Foster-Greenwood \cite{FosterGreenwood} analyzed the situation for finite \emph{complex} reflection groups $G \subset \mathrm{GL}_n(\mathbb{C})$ that cannot be realized as real reflection groups, and showed that in this case it is relatively rare to have equality in \eqref{length-equals-codim}. For example, the complex reflection group $G = G(4, 2, 2)$ is the set of monomial matrices in $\mathbb{C}^{2 \times 2}$ whose two nonzero entries lie in $\{\pm 1, \pm i\}$ and have product $\pm 1$. It has reflections \[ T= \left\{ \left[ \begin{matrix} 0 & 1 \\ 1 & 0\end{matrix} \right], \left[ \begin{matrix} 0 & i \\ -i & 0\end{matrix} \right], \left[ \begin{matrix} 0 & -1 \\ -1 & 0\end{matrix} \right], \left[ \begin{matrix} 0 & -i \\ i & 0\end{matrix} \right], \left[ \begin{matrix} 1 & 0 \\ 0 & -1\end{matrix} \right], \left[ \begin{matrix}-1 & 0 \\ 0 & 1\end{matrix} \right] \right\} \] and different distributions for the functions $\operatorname{codim}(V^g)$ and $\ell(g)$: \[ \sum_{g \in G} t^{\operatorname{codim}(V^g)} = 1 + 6t + 9t^2 \qquad \textrm{ and } \qquad \sum_{g \in G} t^{\ell_T(g)} = 1 + 6t + 7t^2 + 2t^3 . \] The two scalar matrices $ \pm \left[ \begin{smallmatrix} i & 0\\ 0 &i \end{smallmatrix} \right] $ have reflection length $3$; neither is a product of two reflections. 
\end{example} \begin{remark} \label{alternate-characterization-remark} Note that whenever $G$ is a reflection group with an absolute length function, so $\ell(g)=\operatorname{codim}(V^g)$, the absolute order relation $\leq$ acquires yet another characterization via Proposition~\ref{codim-subadditivity-prop} (in addition to those in Definition~\ref{order-definition} and Proposition~\ref{prop:subword}). Specifically, $g \leq h$ if and only if one has both equalities \begin{align} \label{codim-order-spanning-equality} V^g+V^{g^{-1}h}&= V \qquad \textrm{ and } \\ \label{codim-order-intersection-equality} V^g \cap V^{g^{-1}h}&= V^{h}. \end{align} \end{remark} \begin{example} Brady and Watt \cite{BradyWatt} considered the order $\leq$ defined via Remark~\ref{alternate-characterization-remark} on real orthogonal groups and complex unitary groups acting on finite-dimensional spaces. They showed \cite[Cor.~5]{BradyWatt} that such groups have an absolute length function when considered as reflection groups generated by their subset of reflections. \end{example} We come to our first main result, showing that the full general linear group $G=\mathrm{GL}(V)$ always has an absolute length function. \begin{proposition} \label{length=codim} Let $G=\mathrm{GL}_n(\F)=\mathrm{GL}(V)$ with $V=\F^n$ for some field $\F$, and consider the generated group $(G,T)$ where $T$ is the set of all reflections in $G$. Then every $g$ in $G$ has $$ \ell(g)=\operatorname{codim}(V^g). $$ \end{proposition} \begin{proof} By Proposition~\ref{length-bounds-codim}, it suffices to show that $\ell(g) \leq \operatorname{codim}(V^g)$. This follows by induction on $\operatorname{codim}(V^g)$ if one can show that for any $g$ in $G$ other than the identity, there exists some $t$ in $T$ having $V^{gt} \supsetneq V^g$. We construct such a $t$ explicitly. Choose an ordered basis $e_1,\ldots,e_n$ for $V=W \oplus W'$ so that $W':=V^g$ is spanned by $\{e_{m+1},e_{m+2},\ldots,e_n\}$. 
In this basis for $V$, we have \[ g= \begin{bmatrix} A & 0\\ B & \one_{n-m} \end{bmatrix} \] where $A$ in $\mathrm{GL}_m(\F)$ expresses the composite $ W \overset{i_W}{\hookrightarrow} V \overset{g}{\to} V \overset{\pi_W}{\twoheadrightarrow} W $ in the basis $e_1,\ldots,e_m$. We claim that by making a change of basis on $W$, one may assume that $e_m^\top A^{-1} e_m \neq 0$. To see this claim, fix any matrix $Q$ in $\mathrm{GL}_m(\F)$ (such as $Q=A^{-1}$) having $e_m^\top Q e_m =0$. Since $Q e_m \neq \mathbf{0}$, there must exist some $j$ in $\{1,2,\ldots,m-1\}$ for which $e_j^\top Q e_m \neq 0$. Thus one may define an invertible change of basis $P$ by $P(e_i) = e_i$ for $i \neq j$ and $P(e_j)=e_j+e_m$. Consequently, $P^{-1}(e_m)=e_m$ and $P^\top e_m=e_j+e_m$, so one can calculate that $PQP^{-1}$ satisfies \[ e_m^\top PQP^{-1} e_m = (P^\top e_m)^\top Q e_m = (e_j+e_m)^\top Q e_m =e_j^\top Qe_m + e_m^\top Qe_m =e_j^\top Qe_m \neq 0. \] Once one has $e_m^\top A^{-1}e_m \neq 0$, define the desired reflection $t$ to fix the hyperplane spanned by $\{e_1,\ldots,e_n\} \setminus \{e_m\}$ and send $e_m$ to $ A^{-1} e_m \oplus (-B A^{-1} e_m) $ in $W \oplus W'=V$. One can check that $\det(t)=e_m^\top A^{-1}e_m \neq 0$, so that $t$ does define a reflection in $\mathrm{GL}(V)$. Furthermore, both $g$ and $t$ fix $W'=V^g$ pointwise, so $gt$ also fixes $W'$ pointwise. However, the following shows that $gt$ additionally fixes $e_m$, and hence $V^{gt} \supsetneq W' =V^g$, as desired: \[ gt(e_m)= g \begin{bmatrix} A^{-1} e_m \\ -B A^{-1} e_m \end{bmatrix} = \begin{bmatrix} A & 0\\ B & \one_{n-m} \end{bmatrix} \cdot \begin{bmatrix} A^{-1} e_m \\ -B A^{-1} e_m \end{bmatrix} = \begin{bmatrix} A \cdot A^{-1} e_m \\ B \cdot A^{-1}e_m - BA^{-1}e_m \end{bmatrix} =e_m. 
\qedhere \] \end{proof} \subsection{Surjection onto subspace lattices} Consider the lattice $L(V)$ of all $\F$-subspaces of $V=\F^n$ ordered by \emph{reverse inclusion}.\footnote{This matches, e.g., the convention common in the theory of geometric lattices.} For any subgroup $G$ of $\mathrm{GL}(V)$, one has a map \begin{equation} \label{fixed-space-map} \begin{array}{rcl} G &\overset{\pi}{\longrightarrow}& L(V)\\ g &\longmapsto& V^g \end{array}. \end{equation} If $G$ is a reflection group with an absolute length, then Remark~\ref{alternate-characterization-remark} shows that this map $\pi$ is order-preserving for the absolute order. Orlik and Solomon \cite[Lem.~4.4]{OrlikSolomon} showed that if $G$ is a finite complex reflection group in $\mathrm{GL}_n(\mathbb{C})=\mathrm{GL}(V)$, then $\pi$ is a surjection onto the subposet of $L(V)$ consisting of all subspaces that are intersections of reflection hyperplanes. Hence for finite real reflection groups, which have an absolute length, $\pi$ is an order-preserving surjection onto this subposet. The next observation shows that the same holds for the full general linear groups. The proof is an easy exercise, left to the reader. \begin{proposition} \label{surjection onto lattice of subspaces} For $G=\mathrm{GL}(V)$ itself, the map \eqref{fixed-space-map} is an order-preserving surjection. \end{proposition} \begin{remark} Brady and Watt~\cite[Thm.~1]{BradyWatt} showed that the map \eqref{fixed-space-map} is also surjective, and in fact becomes a bijective order-isomorphism, when one restricts to a lower interval $[e,c]$ between the identity $e$ and an element $c$ having $V^c=\{0\}$ in real orthogonal or complex unitary groups. However, this bijectivity fails for general linear groups, since typically there are many elements below $c$ having the same fixed space.
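To make the failure of bijectivity concrete, here is a small brute-force computation (an illustrative script, not part of the paper; all function names are ours) in $\mathrm{GL}_2(\mathbb{F}_3)$: below a regular elliptic element $c$ there are $q^{n-2}(q^n-1)=8$ elements of absolute length $1$, while $L(V)$ has only $(q^n-1)/(q-1)=4$ hyperplanes.

```python
from itertools import product

p, n = 3, 2                      # GL_2(F_3): 48 elements, small enough to enumerate
I2 = ((1, 0), (0, 1))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) % p
                       for j in range(n)) for i in range(n))

def fixed_codim(a):
    # codim of the fixed space V^a, found by counting solutions of a*v = v
    cnt = sum(1 for v in product(range(p), repeat=n)
              if all(sum(a[i][j] * v[j] for j in range(n)) % p == v[i]
                     for i in range(n)))
    return n - {1: 0, p: 1, p * p: 2}[cnt]

G = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)
     if (a * d - b * c) % p != 0]
inv = {g: next(h for h in G if mul(g, h) == I2) for g in G}

def regular_elliptic(a):
    # no eigenvector in F_p^n: for every lam in F_p^*, lam^{-1} * a fixes only 0
    return all(fixed_codim(tuple(tuple(pow(lam, p - 2, p) * x % p for x in row)
                                 for row in a)) == n
               for lam in range(1, p))

c = next(g for g in G if regular_elliptic(g))
# t <= c at rank 1 iff codim(V^t) = 1 and codim(V^{t^{-1}c}) = 1
below = [t for t in G
         if fixed_codim(t) == 1 and fixed_codim(mul(inv[t], c)) == 1]
hyperplanes = (p**n - 1) // (p - 1)
print(len(below), hyperplanes)   # 8 length-one elements map onto only 4 hyperplanes
```

The count $8$ is the case $q=3$, $n=2$, $k=1$ of \eqref{interval-rank-sizes}, so each of the $4$ hyperplanes is the fixed space of two distinct reflections below $c$.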
For example, it is a special case of Theorem~\ref{thm:flag-f-vector} below that there are $q^{n - 2}(q^n - 1)$ reflections in $[e, c] \subset \mathrm{GL}_n(\F_q)$, while there are only $(q^n - 1)/(q - 1)$ hyperplanes in $L(V)$. \end{remark} \begin{remark} For finite real reflection groups, orthogonal/unitary groups, and general linear groups, the absolute orders $\leq$ are not lattices because they have many incomparable maximal elements. However, when one restricts to lower intervals $[e,c]$, absolute orders are sometimes lattices. For example, in the case of orthogonal/unitary groups, Brady and Watt's order-isomorphism $[e,c] \cong L(V)$ shows that every lower interval is a lattice. For irreducible finite real reflection groups in the case that $c$ is chosen to be a {\it Coxeter element}, the fact that $[e,c]$ is a lattice was shown originally via a case-by-case check by Bessis \cite[Fact~2.3.1]{Bessis} and later with a uniform proof by Reading \cite[Cor.~8.6]{Reading}. For the general linear groups $\mathrm{GL}(V)=\mathrm{GL}_n(\F)$ with $n \geq 3$, the intervals $[e,c]$ are not lattices in general. For example, the interval $[e, c]$ in $\mathrm{GL}_3(\mathbb{F}_3)$ below the Singer cycle \[ c = \begin{bmatrix} 0 & 0 & 2 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \] contains the two reflections \[ \begin{bmatrix} 1 & 2 & 2 \\ 0 & 1 & 0 \\ 0 & 1 & 2 \end{bmatrix} \quad \textrm{ and } \quad \begin{bmatrix} 1 & 2 & 2 \\ 0 & 2 & 1 \\ 0 & 2 & 0 \end{bmatrix}, \] both of which are covered by three elements \[ \begin{bmatrix} 1 & 2 & 2 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & 2 & 2 \\ 1 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & 2 & 2 \\ 2 & 2 & 1 \\ 2 & 2 & 0 \end{bmatrix} \] of absolute length $2$. \end{remark} \subsection{Length functions when $T=T^{-1}$} We close this section on $\ell(-)$ for a generated group $(G,T)$ with two general facts that hold when $T=T^{-1}$, that is, when $T$ is closed under taking inverses.
They are reminiscent of properties of Coxeter group length functions. \begin{proposition}\label{prop:lengthpm} For $(G, T)$ a generated group with $T=T^{-1}$, any $t$ in $T$ and $g$ in $G$ have $$ \ell(g)-1 \leq \ell(tg), \ell(gt) \leq \ell(g)+1. $$ \end{proposition} \begin{proof} Subadditivity immediately gives $\ell(gt), \ell(tg) \leq \ell(g)+1$. Meanwhile \begin{align*} \ell(g) =\ell(gt \cdot t^{-1}) & \leq \ell(gt)+1,\\ \ell(g) =\ell(t^{-1} \cdot tg) & \leq \ell(tg)+1. \qedhere \end{align*} \end{proof} \noindent Note that $\ell(tg)=\ell(g)=\ell(gt)$ is possible, e.g., whenever $(G,T)$ is a reflection group whose set of all reflections $T$ contains reflections $t$ of order $3$ or more: taking $g=t$, one has $\ell(t \cdot t)=\ell(t)=1$. \begin{proposition}[Exchange property] \label{prop:exchange} Let $(G, T)$ be a generated group with $T=T^{-1}$ and $T$ closed under $G$-conjugation. If $\ell(tg)<\ell(g)$ for some $t\in T$ and $g$ in $G$, then there are a $T$-reduced word $g=t_1\cdots t_k$ and an index $i$ such that $tg={}^tt_1\cdots {}^tt_{i-1} \cdot t_{i+1}\cdots t_k$, where ${}^th:=tht^{-1}$. \end{proposition} \begin{proof} If $\ell(tg)<\ell(g)$ for some $t\in T$ then Proposition~\ref{prop:lengthpm} implies $\ell(tg)=\ell(g)-1$. Hence $t^{-1}\leq g$ and the subword property (Proposition~\ref{prop:subword}) implies that $t^{-1}$ is a subword of $(t_1,\ldots,t_k)$ for some $T$-reduced expression $g=t_1\cdots t_k$. If $t_i=t^{-1}$, then \[ tg=tt_1\cdots t_{i-1}t^{-1}t_{i+1}\cdots t_k ={}^tt_1\cdots {}^tt_{i-1} \cdot t_{i+1}\cdots t_k. \qedhere \] \end{proof} \section{Counting ranks in the absolute order on $\mathrm{GL}_n(\F_q)$} \label{ranks-in-whole-order} When the field $\F=\F_q$ is finite, so that $\mathrm{GL}_n:=\mathrm{GL}_n(\F_q)$ is finite, it is easy to give an explicit formula and generating function counting elements at rank $k$ in the absolute order on $\mathrm{GL}_n$, that is, those having fixed space codimension $k$.
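For a quick numerical illustration of these rank counts (an illustrative script, not from the paper; the helper names are ours), one can tabulate fixed-space codimensions in $\mathrm{GL}_2(\mathbb{F}_3)$ by brute force and compare with the formula \eqref{formula for r(n, k)} evaluated at $q=3$:

```python
from collections import Counter
from itertools import product

p, n = 3, 2                      # GL_2(F_3)

def fixed_codim(a):
    # codim of V^a, found by counting solutions of a*v = v over F_p
    cnt = sum(1 for v in product(range(p), repeat=n)
              if all(sum(a[i][j] * v[j] for j in range(n)) % p == v[i]
                     for i in range(n)))
    return n - {1: 0, p: 1, p * p: 2}[cnt]

G = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)
     if (a * d - b * c) % p != 0]
by_rank = Counter(fixed_codim(g) for g in G)

def qbin(a, b, q):               # Gaussian binomial coefficient at integer q
    num = den = 1
    for i in range(b):
        num *= q ** (a - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

def r(nn, k, q):                 # the rank-size formula r_q(nn, k) at integer q
    poch = 1                     # running value of (q;q)_j
    total = 0
    for j in range(k + 1):
        total += qbin(k, j, q) * q ** (j * (nn - k)) * poch
        poch *= 1 - q ** (j + 1)
    return (-1) ** k * q ** (k * (k - 1) // 2) * qbin(nn, k, q) * total

print(dict(by_rank), [r(n, k, p) for k in range(n + 1)])
```

Both computations give $1$ element of rank $0$ (the identity), $20$ of rank $1$ (the reflections), and $27$ of rank $2$, accounting for all $|\mathrm{GL}_2(\mathbb{F}_3)|=48$ elements.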
Such a formula, equivalent to \eqref{Fulman-formula} below, was derived\footnote{Fulman credits its first proof to unpublished work of Rudvalis and Shinoda \cite{RudvalisShinoda}.} in work of Fulman \cite[Thm.~6(1)]{Fulman} in a probabilistic context. In the formula and elsewhere, we will use some standard $q$-analogues: \begin{align*} (x;q)_n&:=(1-x)(1-xq)(1-xq^2) \cdots (1-xq^{n-1}),\\ [n]_q & :=1+q+q^2+\cdots+q^{n-1},\\ [n]!_q & :=[1]_q [2]_q \cdots [n]_q=\frac{(q;q)_n}{(1-q)^n},\\ \qbin{n}{k}{q} & :=\frac{[n]!_q}{[k]!_q [n-k]!_q} =\frac{(q;q)_n}{(q;q)_k (q;q)_{n-k}} =\#\{k\text{-dimensional }\F_q\text{-subspaces of }V=\F_q^n\}. \end{align*} We mention for future use the fact that \begin{equation} \label{GL-cardinality} |\mathrm{GL}_n(\F_q)| =(q^n-1)(q^n-q)(q^n-q^2)\cdots (q^n-q^{n-1}) =(-1)^n q^{\binom{n}{2}} (q;q)_n \end{equation} as well as the $q$-binomial theorem \cite[(1.87)]{EC1}: \begin{equation} \label{q-binomial-theorem} (x;q)_n = \sum_{k=0}^n (-1)^k q^{\binom{k}{2}} \qbin{n}{k}{q} x^k. \end{equation} \begin{proposition} \label{abs-order-rank-sizes} The number of $g$ in $\mathrm{GL}_n:=\mathrm{GL}_n(\mathbb{F}_q)$ having rank $k$ in absolute order is \begin{align} \label{formula for r(n, k)} r_q(n, k) &:= (-1)^k q^{{k \choose 2}} \qbin{n}{k}{q} \sum_{j=0}^k \qbin{k}{j}{q} q^{j(n-k)} (q;q)_j \\ \label{Fulman-formula} &= \frac{|\mathrm{GL}_n|}{|\mathrm{GL}_{n-k}|} \sum_{j=0}^k \frac{ (-1)^j q^{\binom{j}{2}-j(n-k)} }{|\mathrm{GL}_j|}, \end{align} with generating function \begin{equation} \label{gf for r(n, k)} 1+\sum_{n \geq 1} \left( \sum_{0 \leq k \leq n} r_q(n,k) x^{n - k} \right) \frac{y^n}{|\mathrm{GL}_n|} = \frac{1}{1 - y} \sum_{n \geq 0} \frac{(x; q^{-1})_n}{(q; q)_n} y^n. \end{equation} \end{proposition} \begin{proof} The equivalence of formulas \eqref{formula for r(n, k)} and \eqref{Fulman-formula} is a straightforward exercise using \eqref{GL-cardinality}. 
Thus we will derive \eqref{formula for r(n, k)}, and then check that it agrees with \eqref{gf for r(n, k)}. By Proposition~\ref{length=codim}, we need to count elements in $\mathrm{GL}_n$ whose fixed subspace has codimension $k$. For a subspace $W$ of $V=\mathbb{F}_q^n$, let \begin{align*} g(W)&:=|\{g \in G: V^g = W\}|,\\ f(W)&:=|\{g \in G: V^g \supseteq W\}|=\sum_{U\supseteq W}g(U), \end{align*} so that if $\operatorname{codim}(W)=k$ one has \begin{align} \label{q-binomial-inflation} r_q(n,k) &= \qbin{n}{k}{q} g(W),\\ \notag f(W) &= q^{k(n-k)} |\mathrm{GL}_k|= q^{k(n-k)} \cdot (-1)^k q^{\binom{k}{2}} (q;q)_k. \end{align} M\"obius inversion \cite[Ex.~3.10.2]{EC1} in the lattice of subspaces of $\F_q^n$ gives, for $\operatorname{codim}(W)=k$, \begin{align*} g(W) &= \sum_{U \supseteq W} \mu(W,U) f(U)\\ &=\sum_{j=0}^k \qbin{k}{j}{q} (-1)^{k-j} q^{\binom{k-j}{2}} \cdot (-1)^j q^{j(n-j)+\binom{j}{2}}(q;q)_j, \end{align*} where the second equality groups the subspaces $U \supseteq W$ according to $j:=\operatorname{codim}(U)$ and uses $\mu(W,U)=(-1)^{k-j}q^{\binom{k-j}{2}}$; formula \eqref{formula for r(n, k)} then follows via \eqref{q-binomial-inflation}. To check \eqref{gf for r(n, k)}, use \eqref{q-binomial-theorem} to see that the coefficient of $y^n$ on its right is \[ \sum_{m=0}^n \frac{(x;q^{-1})_m}{(q;q)_m} = \sum_{m=0}^n \frac{1}{(q;q)_m} \sum_{i=0}^m (-1)^i q^{-\binom{i}{2}} \qbin{m}{i}{q^{-1}} x^i. \] Therefore the coefficient of $y^n x^{n-k}$ on the right of \eqref{gf for r(n, k)} equals \[ (-1)^{n-k} q^{-\binom{n-k}{2}} \sum_{m=n-k}^n \frac{ 1 }{(q;q)_m} \qbin{m}{n-k}{q^{-1}}. \] Reindexing $j:=n-m$ in the summation, and using the fact that $$ \qbin{a+b}{a}{q^{-1}} = q^{-ab} \qbin{a+b}{a}{q}, $$ one then finds that the coefficient of $y^n x^{n-k} / |\mathrm{GL}_n|$ on the right of \eqref{gf for r(n, k)} equals $$ |\mathrm{GL}_n| \cdot (-1)^{n-k} q^{-\binom{n-k}{2}} \sum_{j=0}^k \frac{ q^{-(n-k)(k-j)} }{(q;q)_{n-j}} \qbin{n-j}{n-k}{q} = (-1)^{k} q^{\binom{k}{2}} \qbin{n}{k}{q} \sum_{j=0}^k \qbin{k}{j}{q} q^{j(n-k)} (q;q)_{j}, $$ which agrees with the formula \eqref{formula for r(n, k)} for $r_q(n,k)$.
\end{proof} \begin{remark} \label{r(n, k) vs. Stirling} The formula~\eqref{formula for r(n, k)} for $r_q(n, k)$ is reminiscent of the inclusion-exclusion formula \[ \binom{n}{k} \sum_{j=0}^k (-1)^{j} \binom{k}{j} (k-j)! \] counting permutations with $n - k$ fixed points. On the other hand, it seems more natural to think of $r_q(n,k)$ as a $q$-analogue of $c(n,n-k)$, the \emph{signless Stirling number of the first kind}, counting the permutations in the symmetric group $\mathfrak{S}_n$ having $n-k$ cycles: when $\mathfrak{S}_n$ acts as a real reflection group permuting coordinates in $V=\mathbb{R}^n$, a permutation $\sigma$ with $n-k$ cycles has $\operatorname{codim}(V^\sigma)=k$. In this sense, Equation~\eqref{gf for r(n, k)} gives a $q$-analogue of the formula \[ 1+\sum_{n \geq 1} \sum_{0 \leq k \leq n} c(n,n-k) x^{n - k} \frac{y^n}{n!} = (1 - y)^{-x} = \sum_{k = 0}^\infty (-1)^k \binom{-x}{k} y^k, \] particularly when one observes that $\dfrac{(x; q^{-1})_k}{(q; q)_k} = \qbin{N}{k}{q}$ if $x = q^N$. \end{remark} \section{Counting chains below a Singer cycle in $\mathrm{GL}_n(\mathbb{F}_q)$} \label{section:chains} In the theory of finite irreducible real reflection groups, the interval $[e,c]$ in absolute order below a \emph{Coxeter element} $c$ is sometimes called the poset $NC(W)$ of \defn{$W$-noncrossing partitions}. It is extremely well-behaved from several enumerative points of view, including pleasant formulas for its cardinality, its \emph{M\"{o}bius function}, and its \emph{zeta polynomial}. In the classical types $A,B/C,D$ one additionally has formulas for the following more refined counts; see Edelman \cite[Thm.~3.2]{Edelman} for type $A$, Reiner \cite[Prop.~7]{Reiner} for types $B/C$, and Athanasiadis--Reiner \cite[Thm.~1.2(ii)]{AthanasiadisReiner} for type $D$. \begin{definition} \label{flag-f-definition} Fix a reflection group $G$ having an absolute order, and an element $c$ of $G$ with $\ell(c)=n$. 
The {\it flag $f$-vector} $(f_\alpha)$ of the interval $[e,c]$ has entries $f_\alpha:=f_\alpha[e,c]$ indexed by {\it compositions} $\alpha=(\alpha_1,\ldots,\alpha_m)$ of $n=\sum_i \alpha_i$ with $\alpha_i > 0$. The entry $f_\alpha[e,c]$ is the number of chains \[ e=c_0 < c_1 < c_2 < \cdots < c_{m-1} < c_m=c \] in which $c_i$ has rank $\alpha_1+\alpha_2+\cdots+\alpha_i$ for each $i$. Since $c_{i-1} < c_{i}$ if and only if $g_i:=c_{i-1}^{-1} c_i$ has $\ell(g_i)=\ell(c_i)-\ell(c_{i-1})$, one can rephrase the definition as \[ f_\alpha[e,c]=\Big|\{(g_1,\ldots,g_m) \in G^m \colon c=g_1 \cdots g_m, \text{ and }\ell(g_i)=\alpha_i \text{ for each }i\}\Big|. \] \end{definition} As mentioned in the introduction, when viewing $\mathrm{GL}_n(\F_q)$ as a finite reflection group, the role analogous to that of a Coxeter element is played by a \defn{Singer cycle} $c$, which is the image of a multiplicative generator for $\F_{q^n}^\times$ after one embeds $\F_{q^n}^\times$ into $\mathrm{GL}_n(\F_q) \cong \mathrm{GL}_{\F_q}(\F_{q^n})$; see \cite[\S 9]{CSP}, \cite[Thm.~19]{RStantonWebb}, \cite{LRS}. Our goal in this section is to prove an unexpectedly simple formula for the flag $f$-vector $f_\alpha[e,c]$ when $c$ is a Singer cycle; see Theorem~\ref{thm:flag-f-vector} below. The special case where $\alpha=(1,1,\ldots,1)$ appeared in Lewis--Reiner--Stanton~\cite{LRS}, where it was shown that there are exactly $(q^n-1)^{n-1}$ maximal chains in $[e,c]$ (equivalently, minimal factorizations of a Singer cycle into reflections). In fact, the theorem also confirms a special case\footnote{Theorem~\ref{thm:flag-f-vector} confirms the special case \cite[Conj.~6.3 at $\ell=n$]{LRS}. 
In forthcoming work \cite{LewisMorales}, the second author and Morales use the same techniques to confirm \cite[Conj.~6.3]{LRS} in full generality.} of \cite[Conj.~6.3]{LRS}: it applies not only to a Singer cycle $c$, but to any element $c$ in $\mathrm{GL}_n(\F_q)$ which is \defn{regular elliptic}, meaning that $c$ stabilizes no nonzero proper subspaces in $\F_q^n$. (Equivalently, regular elliptic elements are those that act on $V=\F_q^n$ with characteristic polynomial irreducible in $\F_q[x]$; see \cite[Prop.~4.4]{LRS} for other equivalent definitions.) To state the theorem, define the quantity $$ \varepsilon(\alpha) := \sum_{i=1}^{m} (\alpha_i-1)(n-\alpha_i). $$ \begin{theorem} \label{thm:flag-f-vector} For any regular elliptic element $c$ in $\mathrm{GL}_n(\F_q)$ and any composition $\alpha = (\alpha_1, \ldots, \alpha_m)$ of $n$, one has \begin{equation} \label{q-flag-count} f_\alpha[e,c] = q^{\varepsilon(\alpha)} \cdot (q^n - 1)^{m - 1}. \end{equation} In particular, the number of elements of $[e,c]$ of rank $k$ for $1 \leq k \leq n-1$ is \begin{equation} \label{interval-rank-sizes} f_{(k,n-k)}[e,c] = q^{2k(n-k)-n} \cdot (q^n - 1). \end{equation} \end{theorem} We remark that Theorem~\ref{thm:flag-f-vector} appears very reminiscent of a special case of Goulden and Jackson's \emph{cactus formula}, counting the \defn{genus zero} factorizations $\sigma=\sigma_1 \cdots \sigma_m$ of an $n$-cycle~$\sigma$; these are the factorizations which are additive, i.e., $\sum_{i=1}^m \ell(\sigma_i) = \ell(\sigma)$, for the absolute length function given by $ \ell(\tau)=\sum_j({\lambda_j-1}) $ if $\tau$ has cycle sizes $(\lambda_1,\lambda_2,\ldots)$. (This is the same length function discussed in Remark~\ref{r(n, k) vs. Stirling}.) To state it, we need the following notation: given a partition $\lambda=1^{m_1} 2^{m_2} 3^{m_3} \cdots$ having $m_i$ parts of size $i$ and $m:=\sum_i m_i$ parts total, define \[ N(\lambda)=\frac{1}{m}\binom{m}{m_1,m_2,\ldots}.
\] If $\lambda=(\lambda_1,1^{n-\lambda_1})$ corresponds to a permutation with only one nontrivial cycle then $N(\lambda)=1$. \begin{theorem}[Cactus formula {\cite[Thm.~3.2]{GouldenJackson}}] \label{cactus-theorem} For an $n$-cycle $\sigma$ in the symmetric group $\mathfrak{S}_n$, the number of factorizations $\sigma=\sigma_1 \cdots \sigma_m$ that \begin{compactitem} \item are additive, i.e., $\sum_i \ell(\sigma_i)=n-1(=\ell(\sigma))$, and \item have $\sigma_i$ with cycle sizes $(\lambda^{(i)}_1,\lambda^{(i)}_2,\ldots)= \lambda^{(i)}$ \end{compactitem} is given by \[ n^{m-1} \prod_{i=1}^m {N(\lambda^{(i)})}. \] In particular, in the special case where each $\sigma_i$ has only one nontrivial cycle, the number of factorizations is given by \begin{equation} \label{cactus-special-case} n^{m-1}. \end{equation} \end{theorem} We currently lack a combinatorial proof of Theorem~\ref{thm:flag-f-vector}; see Question~\ref{Biane-type-proof-question}. Instead, prompted by the similarity between \eqref{q-flag-count} and \eqref{cactus-special-case}, we prove the former by following a $q$-analogue of a proof of the latter due to Zagier; see \cite[\S A.2.4]{LandoZvonkin}. We sketch here the steps in Zagier's proof and give the $q$-analogous steps in the subsections below. The first step is the same for both proofs, namely a representation-theoretic approach to counting factorizations that goes back to Frobenius; see, e.g., \cite[\S A.1.3]{LandoZvonkin} for a proof. \begin{definition} Given a finite group $G$, let $\Irr(G)$ be the set of its irreducible ordinary (finite-dimensional, complex) representations $U$. For each $U$ in $\Irr(G)$, define its \defn{character} $\chi_U(-)$, \defn{degree} $\chi_U(e)$, and \defn{normalized character} $\widetilde{\chi}_U(-)$ by \begin{align*} \chi_{U}(g)&:=\Tr(g: U \rightarrow U), \\ \chi_U(e)&=\dim_\mathbb{C} U,\\ \widetilde{\chi}_{U}(g)&:=\frac{\chi_U(g)}{\chi_U(e)}. 
\end{align*} Both functions $\chi_{U}(-), \widetilde{\chi}_{U}(-)$ on $G$ extend $\mathbb{C}$-linearly to functions on the \defn{group algebra} $\mathbb{C}[G]$. \end{definition} In the sequel, we will frequently conflate a representation $U$ with its character $\chi_U$ without comment. \begin{proposition}[Frobenius \cite{Frobenius}] \label{Frobenius-prop} Let $G$ be a finite group and let $A_1,\ldots,A_m \subset G$ be unions of conjugacy classes in $G$. Let $z_i:=\sum_{g_i \in A_i} g_i$ in $\mathbb{C}[G]$. Then for each $g$ in $G$, \begin{equation} \label{Chapuy-Stump-varying-class-answer} |\{(g_1,\ldots,g_m) \in A_1 \times \cdots \times A_m \colon g=g_1 \cdots g_m\}| =\frac{1}{|G|} \sum_{\chi \in \Irr(G)} \chi(e) \chi(g^{-1}) \prod_{i=1}^m \widetilde{\chi}(z_i). \end{equation} \end{proposition} Zagier's proof of Theorem~\ref{cactus-theorem} applies Proposition~\ref{Frobenius-prop} by following these four steps. \subsection*{Step 1} One observes that, when applying \eqref{Chapuy-Stump-varying-class-answer} to count factorizations of an $n$-cycle in $G=\mathfrak{S}_n$, the summation is much \emph{sparser} than it looks initially. Irreducible $\mathfrak{S}_n$-characters $\chi^\lambda$ are indexed by partitions $\lambda$ of $n$, but the only $\chi^\lambda$ which do not vanish on an $n$-cycle $\sigma$ are the \defn{hook shapes}, i.e., those of the form $\lambda=(n-d,1^d)$ for $d=0,1,\dots,n-1$. These satisfy \[ \chi^{(n-d,1^d)}(\sigma) =(-1)^d \qquad \textrm{ and } \qquad \chi^{(n-d,1^d)}(e) =\binom{n-1}{d}. 
\] Hence Proposition~\ref{Frobenius-prop} shows that the number of additive factorizations $\sigma=\sigma_1 \cdots \sigma_m$ in which each $\sigma_i$ has cycle type $\lambda^{(i)}$ is \begin{equation} \label{Zagier-sparse-expression} \frac{1}{n!} \sum_{d=0}^{n-1} (-1)^d \binom{n-1}{d} P(d), \qquad\text{ where }\quad P(d):=\prod_{i=1}^m \widetilde{\chi}^{(n-d,1^d)}(z_i) \end{equation} and each $z_i$ is the sum in $\mathbb{C}[\mathfrak{S}_n]$ of all permutations of cycle type $\lambda^{(i)}$. \subsection*{Step 2} One shows that each normalized character value $\widetilde{\chi}^{(n-d,1^d)}(z_i)$ appearing as a factor in \eqref{Zagier-sparse-expression} is the specialization at $x=d$ of a polynomial $P_{\lambda^{(i)}}(x)$ in $\mathbb{Q}[x]$. This polynomial has degree $\sum_j (\lambda^{(i)}_j-1)$ and a predictable, explicit leading coefficient. Thus the product $P(d)$ is also the specialization of a polynomial $P(x)$ in $\mathbb{Q}[x]$, having degree $n - 1$ and a predictable, explicit leading coefficient. \subsection*{Step 3} Note that the $N$th iterate $\Delta^N:=\Delta \circ \cdots \circ \Delta$ of the forward difference operator \begin{equation} \label{forward-difference} \Delta(f)(x):=f(x+1)-f(x) \end{equation} satisfies \begin{equation} \label{forward-difference-iterate} (\Delta^N f)(x)=\sum_{d=0}^N (-1)^{N-d} \binom{N}{d} f(x+d). \end{equation} Hence the expression \eqref{Zagier-sparse-expression} equals $\frac{(-1)^{n-1}}{n!} (\Delta^{n-1}P)(0)$. \subsection*{Step 4} For each integer $m \geq 0$ one has \[ \Delta(x^m)=(x+1)^m-x^m=mx^{m-1}+O(x^{m-2}), \] and so the operator $\Delta$ lowers degree by $1$ and scales by $m$ the leading coefficient of a degree-$m$ polynomial. Hence the polynomial $P(x)$ from Step 2 has $\Delta^{n-1}P$ equal to a constant, namely $(n-1)!$ times the leading coefficient of $P(x)$.
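As a quick empirical check of the special case \eqref{cactus-special-case} that Steps 1--4 are designed to produce (an illustrative script, not part of the argument): for $n=4$ and $m=n-1=3$ transposition factors, a fixed $4$-cycle should have $n^{m-1}=4^{2}=16$ factorizations.

```python
from functools import reduce
from itertools import combinations, product

n = 4
ident = tuple(range(n))

def compose(s, t):                         # (s o t)(i) = s[t[i]], one-line notation
    return tuple(s[t[i]] for i in range(n))

transpositions = []
for i, j in combinations(range(n), 2):
    w = list(ident)
    w[i], w[j] = w[j], w[i]
    transpositions.append(tuple(w))

sigma = tuple((i + 1) % n for i in range(n))   # an n-cycle
count = sum(1 for fac in product(transpositions, repeat=n - 1)
            if reduce(compose, fac) == sigma)
print(count)                                   # n^(n-2) = 16
```

The brute force over $6^3=216$ triples of transpositions confirms the classical count $n^{n-2}$ of minimal transposition factorizations of an $n$-cycle.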
Thus our answer \eqref{Zagier-sparse-expression}, which is equal to $\frac{(-1)^{n-1}}{n!} (\Delta^{n-1}P)(0)$ by Step 3, is $\frac{(-1)^{n-1}(n-1)!}{n!}=\frac{(-1)^{n-1}}{n}$ times the leading coefficient of $P(x)$ computed in Step 2. \medskip In the next four subsections, we describe what we view as $q$-analogues of Steps 1, 2, 3, 4, in order to prove Theorem~\ref{thm:flag-f-vector}. As a preliminary step, take $\mathrm{GL}_n:=\mathrm{GL}_n(\F_q)$, acting on $V = \F_q^n$, and define for $k=0,1,\ldots,n$ the element $z_k$ in $\mathbb{C}[\mathrm{GL}_n]$ to be the sum of all elements $g$ for which $\operatorname{codim}(V^g)=k$. Then Definition~\ref{flag-f-definition} and Proposition~\ref{Frobenius-prop} show that \begin{equation} \label{chain counting equation} f_\alpha[e,c] = \frac{1}{|\mathrm{GL}_n|} \sum_{\chi \in \Irr(\mathrm{GL}_n)} \chi(e) \chi(c^{-1}) \prod_{i=1}^m \widetilde{\chi}(z_{\alpha_i}). \end{equation} \subsection{A $q$-analogue of Step 1.} Just as in Step 1 above, one observes that for a regular elliptic element $c$ in $\mathrm{GL}_n$, the summation \eqref{chain counting equation} is much {\it sparser} than it looks initially, as many $\mathrm{GL}_n$-irreducibles have $\chi(c^{-1})=0$. To explain this, we begin with a brief outline of some of the theory of complex characters of $\mathrm{GL}_n(\F_q)$. The theory was first developed by J.A. Green \cite{Green}, building on R. Steinberg's work \cite{Steinberg} constructing the unipotent characters $\chi^{\one,\lambda}$. It has been reworked several times, e.g., by Macdonald \cite[Chs.~III, IV]{Macdonald} and Zelevinsky \cite[\S 11]{Zelevinsky}. \begin{definition} A key notion is the {\it parabolic} or {\it Harish-Chandra induction} $\chi_1 \hcprod \chi_2$ of two characters $\chi_1, \chi_2$ for $\mathrm{GL}_{n_1}, \mathrm{GL}_{n_2}$ to give a character of $\mathrm{GL}_n$ where $n=n_1+n_2$.
To define it, introduce the parabolic subgroup \begin{equation} \label{parabolic-definition} P_{n_1,n_2}:=\left\{ \left[ \begin{matrix} A_1 & B\\ 0 & A_2 \end{matrix} \right] \text{ in }\mathrm{GL}_n \right\} \end{equation} so that $A_i$ lies in $\mathrm{GL}_{n_i}$ for $i=1,2$, and $B$ is arbitrary in $\F_q^{n_1 \times n_2}$. Then \begin{equation} \label{parabolic-induction-formula} (\chi_1 \hcprod \chi_2)(g):= \frac{1}{|P_{n_1,n_2}| } \sum_{ \substack{h \in G\colon\\ hgh^{-1} \in P_{n_1,n_2}}} \chi_1(A_1) \chi_2(A_2), \end{equation} where the element $hgh^{-1}$ of $P_{n_1,n_2}$ has diagonal blocks labeled $A_1,A_2$ as above. Said differently, $ \chi_1 \hcprod \chi_2 :=\left( \chi_1 \otimes \chi_2 \right) \Uparrow^{P_{n_1,n_2}}_{\mathrm{GL}_{n_1} \times \mathrm{GL}_{n_2}} \uparrow_{P_{n_1,n_2}}^{\mathrm{GL}_n} $ where \begin{itemize} \item $(-)\Uparrow_{\mathrm{GL}_{n_1} \times \mathrm{GL}_{n_2}}^{P_{n_1,n_2}}$ is {\it inflation} of representations of $\mathrm{GL}_{n_1} \times \mathrm{GL}_{n_2}$ into those of $P_{n_1,n_2}$, by precomposing with the surjection $P \twoheadrightarrow \mathrm{GL}_{n_1} \times \mathrm{GL}_{n_2}$, and \item $(-)\uparrow_{P_{n_1,n_2}}^{\mathrm{GL}_n}$ is {\it induction} of representations. \end{itemize} \end{definition} The binary operation $(\chi_1,\chi_2) \longmapsto \chi_1 \hcprod \chi_2$ turns out \cite[Ch.~III]{Zelevinsky} to define an associative, commutative (!), graded $\mathbb{C}$-algebra structure on $\bigoplus_{n \geq 0} \Class(\mathrm{GL}_n)$, where $\Class(\mathrm{GL}_n)$ denotes the $\mathbb{C}$-vector space of class functions on $\mathrm{GL}_n$, with $\Class(\mathrm{GL}_0):=\mathbb{C}$. \begin{definition} An irreducible $U$ in $\Irr(\mathrm{GL}_n)$ is called \defn{cuspidal}, with \defn{weight} $\wt(U)=n$, if $U$ is not a constituent of any proper induction $\chi_1 \hcprod \chi_2$ for characters $\chi_i$ of $\mathrm{GL}_{n_i}$ with $n=n_1+n_2$ and $n_1, n_2 \geq 1$. 
Denote by $\textrm{Cusp}_n$ the set of weight-$n$ cuspidal characters, and $\textrm{Cusp}:=\sqcup_{n \geq 0} \textrm{Cusp}_n$. \end{definition} \begin{definition} An irreducible $\mathrm{GL}_n$-character $\chi$ is called \defn{primary} to the cuspidal $U$ if $\chi$ \emph{does} occur as an irreducible constituent of some product $U^{\hcprod{\frac{n}{s}}}=U \hcprod U \hcprod \cdots \hcprod U$, where $\wt(U)=s$. \end{definition} \noindent It turns out that one can parametrize the irreducible $\mathrm{GL}_n$-characters primary to the cuspidal $U$ as $\{ \chi^{U,\lambda}: |\lambda|=\frac{n}{s} \}$, parallel to the parametrization of the irreducible $\mathfrak{S}_n$-characters as $\{ \chi^\lambda: |\lambda| = n \}$. In fact, two irreducibles $\chi^{U,\mu}, \chi^{U,\nu}$ for $\mathrm{GL}_{n_1},\mathrm{GL}_{n_2}$ primary to the same cuspidal $U$ have product controlled by the usual \emph{Littlewood--Richardson coefficients}: \[ \chi^{U,\mu} \hcprod \chi^{U,\nu} = \sum_\lambda c_{\mu,\nu}^\lambda \chi^{U,\lambda} \quad \text{ where }\quad (\chi^{\mu} \otimes \chi^{\nu} ) \uparrow_{ \mathfrak{S}_{|\mu|} \times \mathfrak{S}_{|\nu|} }^{\mathfrak{S}_{|\mu|+|\nu|}} = \sum_\lambda c_{\mu,\nu}^\lambda \chi^{\lambda}. \] Furthermore, the set of \emph{all} irreducibles $\Irr(\mathrm{GL}_n)$ can be indexed as $\{ \chi^{\llambda} \}$ in which $\llambda$ runs through the functions $\llambda:U \longmapsto \lambda(U)$ from $\textrm{Cusp}$ to all integer partitions, subject to the restriction $\sum_U \wt(U) \cdot |\lambda(U)|=n$. In this parametrization, \[ \chi^{\llambda} = \chi^{U_1,\lambda(U_1)}\hcprod \cdots \hcprod\chi^{U_m,\lambda(U_m)} \] if $\{U_1,\ldots,U_m\}$ are the cuspidals having $\lambda(U_i) \neq \varnothing$. We next recall from \cite{LRS} the sparsity statement analogous to that of Step 1, showing that most irreducible $\mathrm{GL}_n$-characters $\chi$ vanish on a regular elliptic element.
We also include the character values and a degree formula for certain irreducibles that arise in our computation. \begin{proposition}[{\cite[Prop.~4.7]{LRS}}] \label{Singer-cycle-character-values} Let $c$ in $\mathrm{GL}_n$ be regular elliptic, e.g., a Singer cycle. \begin{compactenum}[(i)] \item The irreducible character $\chi^{\llambda}$ has vanishing value $\chi^{\llambda}(c) = 0$ unless $\chi^{\llambda}$ is a primary irreducible $\chi^{U,\lambda}$ for some cuspidal $U$ with $\wt(U)=s$ dividing $n$, and $\lambda = \hook{d}{\frac{n}{s}}$ is a hook-shaped partition of $\frac{n}{s}$. \item If $U = \one=\one_{\mathrm{GL}_1}$ is the trivial cuspidal with $s=\wt(U)=1$, then \[ \chi^{\one, \hook{d}{n}}(c) = (-1)^d \qquad \textrm{ and } \qquad \chi^{\one, \hook{d}{n}}(e) = q^{\binom{d+1}{2}} \qbin{n-1}{d}{q}. \] \end{compactenum} \end{proposition} \subsection{A $q$-analogue of Step 2.} Of course, to use \eqref{chain counting equation} we also need some character values on the elements $z_k$. These are provided by the following remarkable result, which was suggested by computations in GAP \cite{GAP}. Its proof is deferred to Appendix~\ref{technical-proof}. \begin{proposition} \label{character values prop} One has the following normalized character values $\widetilde{\chi}^{U,\lambda}(z_k)$. \begin{compactenum}[(i)] \item For any primary irreducible $\mathrm{GL}_n$-character $\chi^{U,\lambda}$ with the cuspidal $U \neq \one$ nontrivial, \[ \widetilde{\chi}^{U, \lambda}(z_k) = (-1)^k q^{\binom{k}{2}} \qbin{n}{k}{q}. \] \item For $U = \one$ and $\lambda = \hook{d}{n}$ a hook, we have \[ \widetilde{\chi}^{\one, \hook{d}{n}}(z_k) =\P_k(q^{-d}) \] where $\P_k(x)$ is the following polynomial in $x$ of degree $k$: \begin{equation} \label{definition-of-P} \P_k(x) := (-1)^k q^{\binom{k}{2}} \left( \qbin{n}{k}{q} + \frac{1 - q^n}{[n - k]!_q} \sum_{j = 1}^k \frac{[n - j]!_q}{[k - j]!_q} q^{j(n - k)} x \cdot (x q^{n - j + 1}; q)_{j - 1} \right).
\end{equation} \end{compactenum} \end{proposition} \subsection{A $q$-analogue of Step 3.} We are now well-equipped to analyze the summation in \eqref{chain counting equation} by breaking it into two pieces: \begin{equation} \label{A+B-expression} f_\alpha[e,c] = \frac{1}{|\mathrm{GL}_n|} \sum_{\substack{\chi \in \Irr(\mathrm{GL}_n)\colon\\ \chi(c^{-1}) \neq 0}} \chi(e) \chi(c^{-1}) \prod_{i=1}^m \widetilde{\chi}(z_{\alpha_i}) = \frac{1}{|\mathrm{GL}_n|} (A + B) \end{equation} where $A$ is the sum over primary irreducibles $\chi^{U,\lambda}$ with $U \neq \one =\one_{\mathrm{GL}_1}$ and $B$ is the sum over primary irreducibles of the form $\chi^{\one,\lambda}$. By Proposition~\ref{character values prop}(i), one has \[ A=\prod_{i=1}^{m} (-1)^{\alpha_i} q^{\binom{\alpha_i}{2}} \qbin{n}{\alpha_i}{q} \; \sum_{\substack{\chi^{U,\lambda} \in \Irr(\mathrm{GL}_n)\colon\\ U \neq \one}} \chi^{U,\lambda}(e) \chi^{U,\lambda}(c^{-1}). \] However, Proposition~\ref{Singer-cycle-character-values}(i) lets one rewrite this last summation as \[ \sum_{\substack{\chi^{U,\lambda} \in \Irr(\mathrm{GL}_n)\colon\\ U \neq \one}} \chi^{U,\lambda}(e) \chi^{U,\lambda}(c^{-1}) = \sum_{\substack{\chi \in \Irr(\mathrm{GL}_n)\colon\\ \chi(c^{-1}) \neq 0}} \chi(e) \chi(c^{-1}) - \sum_{d=0}^{n-1} \chi^{\one,(n-d,1^d)}(e) \chi^{\one,(n-d,1^d)}(c^{-1}). \] Since the omitted characters with $\chi(c^{-1})=0$ contribute nothing, the first sum on the right side is the character of the \emph{regular representation} for $\mathrm{GL}_n$ evaluated at $c^{-1} \neq e$, and hence is equal to $0$. By Proposition~\ref{Singer-cycle-character-values}(ii) and the $q$-binomial theorem \eqref{q-binomial-theorem}, the second sum on the right side is \[ \sum_{d=0}^{n-1} (-1)^d q^{\binom{d+1}{2}} \qbin{n-1}{d}{q} = (q;q)_{n-1}. \] Thus one concludes that \begin{equation} \label{final-A-expression} A= -(q;q)_{n-1} \prod_{i=1}^{m} (-1)^{\alpha_i} q^{\binom{\alpha_i}{2}} \qbin{n}{\alpha_i}{q}. \end{equation} Next we analyze the sum $B$ in \eqref{A+B-expression}.
For a composition $\alpha$, define $ \P_\alpha(x) = \prod_i \P_{\alpha_i}(x). $ By Propositions~\ref{Singer-cycle-character-values} and~\ref{character values prop} and the definition of $B$, we may rewrite \begin{equation} \label{first-expression-for-B} B= \sum_{d = 0}^{n - 1} (-1)^d q^{\binom{d + 1}{2}} \qbin{n - 1}{d}{q} \P_{\alpha}(q^{-d}). \end{equation} We identify $B$ in terms of the $(n-1)$st iterate of a $q$-difference operator $\Delta_q$. This operator is the $q$-analogue of \eqref{forward-difference} defined by \[ \Delta_q(f)(x) = \frac{f(qx)-f(x)}{qx - x} =\frac{f(qx)-f(x)}{(q-1) x}. \] One can check via the $q$-Pascal recurrence \[ \qbin{N}{d}{q} = \qbin{N-1}{d}{q}+q^{N-d}\qbin{N-1}{d-1}{q} \] and induction that for $N \geq 0$, the $N$th iterate $\Delta^N_q=\Delta_q \circ \cdots \circ \Delta_q$ has the following expression: \begin{equation} \label{iterate-of-q-difference} \Delta_q^{N}(f)(x) = q^{-\binom{N}{2}} (q - 1)^{-N} \sum_{d= 0}^{N}(-1)^{d} q^{\binom{d}{2}} \qbin{N}{d}{q} \frac{ f(q^{N - d}x) }{x^N}. \end{equation} (This is $q$-analogous to \eqref{forward-difference-iterate}.) Taking $N = n - 1$ in \eqref{iterate-of-q-difference} and applying the operator to $\P_{\alpha}(x)/x$ gives \begin{align*} \left[ \Delta_q^{n - 1}\left( \frac{ \P_{\alpha}(x)}{x} \right) \right]_{x = q^{1 -n}} &= q^{-\binom{n-1}{2}} (q - 1)^{1-n} \sum_{d= 0}^{n-1}(-1)^{d} q^{\binom{d}{2}} \qbin{n-1}{d}{q} \left[ \frac{1}{x^{n-1}} \frac{ \P_\alpha(q^{n-1 - d}x)}{(q^{n-1-d}x) } \right]_{x = q^{1 -n}} \\ &= q^{-\binom{n-1}{2}+(n-1)^2} (q - 1)^{1-n} \sum_{d= 0}^{n-1}(-1)^{d} q^{\binom{d+1}{2}} \qbin{n-1}{d}{q} \P_\alpha(q^{- d}). \end{align*} Combining with \eqref{first-expression-for-B} gives \begin{equation} \label{second-expression-for-B} B =(q - 1)^{n - 1} q^{-\binom{n}{2}} \left[ \Delta_q^{n - 1}\left( \frac{\P_{\alpha}(x)}{x} \right) \right]_{x = q^{1 -n}}. 
\end{equation} \subsection{A $q$-analogue of Step 4.} We process the expression \eqref{second-expression-for-B} for $B$ further. It is easily verified by induction on $N \geq 0$ that for any $m$, \[ \Delta_q^N(x^m) = \frac{(q^m-1)(q^{m-1} -1) \cdots (q^{m-N+1}-1)}{(q-1)^N} \cdot x^{m - N} = \frac{(q^{m - N+1} ; q)_N}{(1-q)^N} \cdot x^{m - N}. \] In particular, for integer $m$ one has \begin{equation} \label{relevant-q-diffs} \Delta_q^N(x^m)= \begin{cases} 0 & \text{ if } N > m \geq 0,\\ [m]!_q & \text{ if } N = m \geq 0,\\ (-1)^N q^{\binom{N+1}{2}} [N]!_q \cdot x^{-N-1} & \text{ if } m =-1. \end{cases} \end{equation} \begin{proposition} \label{boundary-terms-of-P} For any composition $\alpha=(\alpha_1,\ldots,\alpha_m)$ of $n$, the function $\P_\alpha(x)=\prod_{i} \P_{\alpha_i}(x)$ \begin{itemize} \item is a polynomial in $x$ of degree $n$, \item has leading coefficient equal to $\displaystyle q^{\varepsilon(\alpha)+n(n-1)} \cdot (q^n-1)^m$, and \item has constant coefficient equal to $-A/ (q;q)_{n-1}$. \end{itemize} \end{proposition} \begin{proof} Note from the definition \eqref{definition-of-P} of $\P_k(x)$ that it is a polynomial in $x$ of degree $k$, with constant coefficient $(-1)^k q^{\binom{k}{2}} \qbin{n}{k}{q}$. Hence $\P_\alpha(x)$ is polynomial in $x$ of degree $\sum_i \alpha_i=n$ with constant coefficient \[ \prod_{i=1}^m (-1)^{\alpha_i} q^{\binom{\alpha_i}{2}} \qbin{n}{\alpha_i}{q} = \frac{-A}{(q;q)_{n-1}}, \] where the last equality uses \eqref{final-A-expression}. One sees that in \eqref{definition-of-P}, the $x^k$ coefficient in $\P_k(x)$ is entirely accounted for by the $j=k$ summand, and is equal to \[ (-1)^k q^{ \binom{k}{2} + k(n-k) + \sum_{j=n-k+1}^{n-1} j} \cdot(q^n-1) = q^{k(n-k)+n(k-1)} \cdot (q^n-1). \] Therefore the product $\P_\alpha(x) = \prod_{i} \P_{\alpha_i}(x)$ has leading coefficient \[ q^{\sum_i \alpha_i(n-\alpha_i)+n(\alpha_i-1) } \cdot (q^n-1)^m =q^{\varepsilon(\alpha)+n(n-1)} \cdot (q^n-1)^m . 
\qedhere \] \end{proof} As $\P_{\alpha}(x)$ has degree $n$ in $x$, the quotient $\frac{\P_\alpha(x)}{x}$ is a Laurent polynomial with top degree $n-1$ and bottom degree $-1$. Therefore, combining Proposition~\ref{boundary-terms-of-P} with \eqref{relevant-q-diffs} gives \begin{align*} \Delta_q^{n-1}\left( \frac{\P_\alpha(x)}{x} \right) &= (-1)^{n-1} q^{-\binom{n}{2}} [n-1]!_q \cdot x^{-n} \cdot \frac{-A}{(q;q)_{n-1}} \quad + \quad [n-1]!_q q^{\varepsilon(\alpha)+n(n-1)} \cdot (q^n-1)^m\\ &=[n-1]!_q \left( (-1)^{n-1} q^{-\binom{n}{2}} \frac{-A}{x^n \cdot (q;q)_{n-1}} \quad + \quad q^{\varepsilon(\alpha)+n(n-1)} \cdot (q^n-1)^m \right). \end{align*} Plugging this into \eqref{second-expression-for-B} and using $(q-1)^{n-1}[n-1]!_q = (-1)^{n-1}(q;q)_{n-1}$ gives \begin{align*} B &= (-1)^{n-1} q^{-\binom{n}{2}} (q;q)_{n-1} \left( (-1)^{n-1} q^{-\binom{n}{2}} \frac{-A}{q^{-n(n-1)} (q;q)_{n-1}} \quad + \quad q^{\varepsilon(\alpha)+n(n-1)} \cdot (q^n-1)^m \right)\\ &=-A \quad + \quad (-1)^{n-1} (q;q)_{n-1} q^{\varepsilon(\alpha)+\binom{n}{2}}\cdot (q^n-1)^m. \end{align*} Using \eqref{GL-cardinality}, one can finally compute from \eqref{A+B-expression} that \[ f_\alpha[e,c] =\frac{1}{|\mathrm{GL}_n|} (A + B) =\frac{(-1)^{n-1} (q;q)_{n-1} q^{\varepsilon(\alpha)+\binom{n}{2}}} {(-1)^n (q;q)_{n} q^{\binom{n}{2}} } \cdot (q^n-1)^m = q^{\varepsilon(\alpha)} \cdot (q^n-1)^{m-1}. \] This concludes the proof of Theorem~\ref{thm:flag-f-vector}. \hfill$\qed$ \vskip.1in The preceding proof is computational and unenlightening. This prompts the following question. \begin{question} \label{Biane-type-proof-question} Biane \cite{Biane} has given a short, inductive proof of \eqref{cactus-special-case} not relying on any auxiliary objects (trees, maps, etc.). Is there an analogous proof of Theorem~\ref{thm:flag-f-vector}? 
\end{question} \begin{question} Is there a reasonable $q$-analogue of the cactus formula (Theorem~\ref{cactus-theorem}) in full generality, not just in the special case \eqref{cactus-special-case}? \end{question} \noindent We currently have no conjectural candidate for such a $q$-analogue. \section{Reformulating the flag $f$-vector} \label{reformulation-section} The goal of this section is to prove Proposition~\ref{chain-subspace proposition}, a linear algebraic reformulation of $f_{\alpha}[e,c]$ when $V^c=0$. We hope that this reformulation may be more amenable to combinatorial counting methods. In particular, we show below that it helps recover somewhat more directly the rank sizes for $[e,c]$ given in \eqref{interval-rank-sizes}. \begin{definition} Fix a field $\F$, and let $V$ be an $n$-dimensional $\F$-vector space. Given a sequence $g_\bullet:=(g_0, g_1, \ldots,g_{m-1},g_m)$ with $g_i$ in $\mathrm{GL}(V)$, define a sequence of subspaces $ \varphi(g_\bullet):=(V_1,\ldots,V_m) $ via \[ V_i:=V^{g_{i-1}} \cap V^{g_i^{-1}g_{m}}. \] Fix $c$ in $\mathrm{GL}(V)$. Given an ordered vector space decomposition $V_\bullet=(V_i)_{i=1}^m$ of $V$, so that \[ V=\underbrace{V_1 \oplus V_2 \oplus \cdots \oplus V_i}_{=:V_{\leq i}} \oplus \underbrace{V_{i+1} \oplus V_{i+2} \oplus \cdots \oplus V_m}_{=:V_{> i}}, \] define a sequence $ \psi(V_\bullet):=(g_0,g_1,\ldots,g_{m-1},g_m) $ of $\F$-linear maps $g_i: V \rightarrow V$ by \[ g_i(x + y)=c(x) +y \qquad \text{ for } x, y \text{ in } V_{\leq i}, V_{> i}, \text{ respectively}. \] \end{definition} \begin{proposition} \label{chain-subspace proposition} Let $V=\F^n$ for a field $\F$, and let $c$ lie in $G := \mathrm{GL}(V)$ with $V^c=0$.
Then the maps $\varphi, \psi$ restrict to inverse bijections between these two sets: \begin{compactenum}[(a)] \item multichains $g_\bullet:=(e=g_0 \leq g_1 \leq \cdots \leq g_{m-1} \leq g_m=c)$ in absolute order on $G$, and \item decompositions $V_\bullet=(V_i)_{i=1}^m$ satisfying $V=c(V_{\leq i}) \oplus V_{> i}$ for every $i=0,1,\ldots,m$. \end{compactenum} Moreover, they satisfy $\dim V_i = \ell(g_i)-\ell(g_{i-1})$. In particular, when $\F=\F_q$ is finite, for any composition $\alpha = (\alpha_1, \ldots, \alpha_m)$ of $n$, the flag number $f_\alpha[e,c]$ counts decompositions $V_\bullet$ as in (b) having $\dim V_i= \alpha_i$ for $i=1,2,\ldots,m$. \end{proposition} \begin{proof} Given $g_\bullet$ as in (a), we wish to show that $\phi(g_\bullet)=(V_1,\ldots,V_m)$ is as in (b). First note that Proposition~\ref{prop:self-duality} and $e \le g_{i-1}\leq g_i\le c$ imply $g_i^{-1}c\le g_{i-1}^{-1} c$. Thus, from Remark~\ref{alternate-characterization-remark} we have \begin{equation} \label{fixed subspaces} \begin{array}{rcccl} V&=&V^{g_i}&\oplus&V^{g_i^{-1}c}\\ & &\cap & &\cup\\ V&=&V^{g_{i-1}}&\oplus& V^{g_{i-1}^{-1}c}. \end{array} \end{equation} As a first goal, we show $V=\bigoplus_{i=1}^m V_i$ via induction on $m$, with the base case $m=1$ being trivial. In the inductive step, remove $g_1$ from $g_\bullet$ to give $g'_\bullet=(e\leq g_2 \leq \cdots \leq g_{m-1} \leq c)$. Then $\varphi(g'_\bullet)=(U_2,V_3,V_4,\ldots,V_m)$ satisfies $V=U_2 \oplus \left(\bigoplus_{i=3}^m V_i\right)$ by induction. Moreover, note \[ U_2 = V \cap V^{g_2^{-1}c} = (V^{g_1^{-1}c} \oplus V^{g_1}) \cap V^{g_2^{-1}c} = V^{g_1^{-1}c}\oplus (V^{g_1} \cap V^{g_2^{-1}c}) = V_1 \oplus V_2, \] where the second-to-last equality uses $V^{g_1^{-1}c} \subset V^{g_2^{-1}c}$ from \eqref{fixed subspaces}. Hence $V=\bigoplus_{i=1}^m V_i$. We also claim $V_{\leq i}=V^{g_i^{-1}c}$ and $V_{> i}=V^{g_i}$. 
To see this, note that for each $j \leq i$ one has \[ V_j = (V^{g_{j-1}} \cap V^{g_j^{-1}c}) \subset V^{g_j^{-1}c} \subset V^{g_i^{-1}c} \] by \eqref{fixed subspaces}, and hence $V_{\leq i} \subseteq V^{g_i^{-1}c}$; a similar argument shows that $V_{> i} \subseteq V^{g_i}$. But then \[ V=V_{\leq i} \oplus V_{> i}=V^{g_i^{-1}c} \oplus V^{g_i} \] forces the claimed equalities, as well as $\dim(V_i)=\ell(g_i)-\ell(g_{i-1})$. Lastly, applying $g_i$ to the decomposition in \eqref{fixed subspaces}, one obtains the final desired property for (b): \[ V=g_iV =g_i(V^{g_i^{-1}c} \oplus V^{g_i}) =g_iV^{g_i^{-1}c} \oplus g_iV^{g_i} =cV^{g_i^{-1}c} \oplus V^{g_i} =cV_{\leq i} \oplus V_{> i}. \] Conversely, given $V_\bullet$ as in (b), we must show that $\psi(V_\bullet)=g_\bullet$ is as in (a). The assumption that $V=cV_{\leq i} \oplus V_{> i}$ shows that $g_iV = V$, and hence each $g_i$ is invertible. We claim that $V^c=0$ forces $V^{g_i}=V_{> i}$: expressing $v=x+y$ uniquely with $x,y$ in $V_{\leq i}, V_{> i}$, one has $v$ in $V^{g_i}$ if and only if $c(x)+y=x+y$ if and only if $c(x)=x$ if and only if $x=0$. Similarly, $V^{g_{i-1}^{-1}g_i}=V_{\leq i-1} \oplus V_{> i}$. Hence \[ \ell(g_{i-1})+\ell(g_{i-1}^{-1}g_i)=\dim V_{\leq i-1}+\dim V_i = \dim(V_{\leq i})=\ell(g_i). \] Thus $g_{i-1} < g_i$ and so $g_\bullet$ satisfies (a). Finally, one can check that $\varphi$ and $\psi$ are inverse to each other. \end{proof} \begin{proof}[Alternate proof of Equation \eqref{interval-rank-sizes}, via Proposition~\ref{chain-subspace proposition}] Choose $c$ in $\mathrm{GL}(V)$ regular elliptic. By Proposition~\ref{prop:self-duality}, it is enough to show that for $1\leq k\leq n/2$, there are \[ f_{(k, n - k)}[e,c] = q^{\varepsilon((k, n - k))} \cdot (q^n-1) = q^{ 2k(n-k) - n } \cdot (q^n-1) \] elements $g$ in $[e,c]$ of rank $k$. By Proposition~\ref{chain-subspace proposition}, these elements are in bijection with direct sum decompositions \[ V = \F_q^n= U\oplus W = cU\oplus W \] where $\dim U = k$.
Count such decompositions by first choosing $U$, and then choosing $W$ complementary to both $U$ and $cU$. The number of choices of $W$ depends only on $k=\dim U$ and $d:=\dim (U \cap cU)$, and thus it helps to have the following very special case of a general formula due to Chen and Tseng \cite[p.~28]{ChenTseng}: for a regular elliptic element $c$ in $\mathrm{GL}_n(\F_q)$, there are \[ g(n,k,d) := \frac{ [n]_q} { [k]_q} \qbin{n-k-1}{k-d-1}{q} \qbin{k}{d}{q} q^{(k-d)(k-d-1)} \] subspaces $U$ of $\F_q^n$ for which $\dim U=k$ and $\dim (U \cap cU) = d$, assuming $0 \leq d < k < n$. Given two $k$-dimensional subspaces $U_1, U_2$ of $V$ with $\dim (U_1 \cap U_2) = d$ (such as $U_1=U$ and $U_2=cU$ above), it is a straightforward exercise to check that when $0 \leq d \leq k \leq n/2$ there are \begin{equation} \label{counting co-complements} f(n, k, d) := q^{ k(n-k) - {k-d+1\choose 2} } (-1)^{k-d}(q;q)_{k-d} \end{equation} subspaces $W$ with $V=U_1 \oplus W=U_2 \oplus W$. Thus \begin{align*} f_{\alpha}[e,c]&=\sum_{d = 0}^{k-1} g(n, k, d) f(n,k,d) \\ &=\sum_{d=0}^{k-1} \frac{ [n]_q} { [k]_q} \qbin{n-k-1}{k-d-1}{q} \qbin{k}{d}{q} q^{(k-d)(k-d-1)} \cdot q^{ k(n-k) - {k-d+1\choose 2} } (-1)^{k-d}(q;q)_{k-d}\\ &=(q^n-1) q^{k(n-k)-1} \sum_{d=0}^{k-1} \qbin{k-1}{d}{q} (q^{n-k-1}-1)(q^{n-k-1}-q)\cdots (q^{n-k-1}-q^{k-d-2}). \end{align*} Finally, we apply the special case \[ q^{ab} = \sum_{d=0}^a \qbin{a}{d}{q} (q^b-1)(q^b-q) \cdots (q^b-q^{a-d-1}) \] of the $q$-Chu--Vandermonde identity \cite[II.6]{GasperRahman} with $(a,b)=(k-1, n-k-1)$ to conclude. \end{proof} \begin{remark} Both the Chen--Tseng result and the needed case of the $q$-Chu--Vandermonde identity have elementary proofs: in the former case by a complicated recursive argument, and in the latter case by counting matrices in $\F_q^{a \times b}$ by their row spaces (see, e.g., \cite{Landsberg}). 
\end{remark} \section{Final remarks and questions} \label{remarks} It was shown by Athanasiadis, Brady and Watt \cite{AthanasiadisBradyWatt} that the noncrossing partition posets $[e,c]$ for Coxeter elements $c$ in real reflection groups are EL-shellable; this was extended to well-generated complex reflection groups by M\"uhle \cite{muhle}. In particular, the open intervals $(e,c)$ are homotopy Cohen--Macaulay. They also have predictable Euler characteristics, that is, M\"obius functions $\mu(e,c)$. Analogously, Theorem~\ref{thm:flag-f-vector} allows one to compute for regular elliptic elements $c$ in $\mathrm{GL}_n(\F_q)$ that the interval $[e,c]$ in the absolute order on $\mathrm{GL}_n(\F_q)$ has \[ \mu(e,c)=\sum_{\alpha=(\alpha_1,\ldots,\alpha_m)} (-1)^{m} f_\alpha[e,c] =\sum_{\alpha=(\alpha_1,\ldots,\alpha_m)} (-1)^{m} q^{\varepsilon(\alpha)} \cdot (q^n-1)^{m-1}. \] We do not suggest any simplifications for this last expression. \begin{question} Is the open interval $(e,c)$ in the absolute order on $\mathrm{GL}_n(\F_q)$ homotopy Cohen--Macaulay? Is it furthermore shellable? \end{question} \noindent Homotopy Cohen--Macaulayness would imply two weaker conditions: \begin{compactenum}[(i)] \item $(-1)^{\ell(y) - \ell(x)} \mu(x, y) \geq 0$ for all $x \leq y$ in $[e,c]$, and \item for $i < n-2$, one has vanishing reduced homology $\tilde{H}_i((e,c),\mathbb{Z})=0$. \end{compactenum} Condition (i) is easily seen to hold for $n = 2$ or $n = 3$ and any $q$; in addition, we have checked by direct computation that it holds for $n = 4$ if $q = 2$ or $3$. Condition (ii) is trivial for $n = 2$. For $n=3$, it amounts to connectivity of the bipartite graph which is the Hasse diagram for $(e,c)$, and one can give a direct proof (using Proposition~\ref{chain-subspace proposition}) that this graph is connected. 
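The alternating sum for $\mu(e,c)$ above is easy to evaluate by machine. A sketch in Python, under the assumption that $\varepsilon(\alpha)=\sum_i \alpha_i(n-\alpha_i) - n(m-1)$, which is consistent with the value $\varepsilon((k,n-k)) = 2k(n-k)-n$ used in Section~\ref{reformulation-section}:

```python
def compositions(n):
    """All compositions of n, generated from subsets of the n-1 gap positions."""
    comps = []
    for mask in range(2 ** max(n - 1, 0)):
        parts, run = [], 1
        for j in range(n - 1):
            if mask >> j & 1:
                parts.append(run)
                run = 1
            else:
                run += 1
        parts.append(run)
        comps.append(tuple(parts))
    return comps

def mobius_interval(n, q):
    """mu(e,c) = sum over compositions alpha of n of
    (-1)^m q^{eps(alpha)} (q^n - 1)^{m-1}, with m = len(alpha) and
    (our assumption) eps(alpha) = sum_i a_i(n - a_i) - n(m - 1)."""
    total = 0
    for alpha in compositions(n):
        m = len(alpha)
        eps = sum(a * (n - a) for a in alpha) - n * (m - 1)
        total += (-1) ** m * q ** eps * (q ** n - 1) ** (m - 1)
    return total
```

For $n=2$ this recovers $\mu(e,c)=q^2-2$ (the number of atoms minus one in a rank-two interval), and for $n=4$, $q=2$ it returns $1034$.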
For $n=4$ and $q=2$ we have checked in Sage \cite{sage} that $\tilde{H}_i((e,c),\mathbb{Z})=0$ for $i=0,1$ and $\tilde{H}_2((e,c),\mathbb{Z})=\mathbb{Z}^{|\mu(e,c)|}=\mathbb{Z}^{1034}.$ Similarly, it was shown by Athanasiadis and Kallipoliti \cite{AthanasiadisKallipoliti} that, after removing the bottom element $e$, the absolute order on all of $\mathfrak{S}_n$ gives a \emph{constructible} simplicial complex, and hence also this poset is homotopy Cohen--Macaulay. In type $B_n$, it is open whether removing the bottom element from the absolute order gives a homotopy Cohen--Macaulay complex; however, Kallipoliti \cite{Kallipoliti} showed that when one restricts to the order ideal which is the union of all intervals below Coxeter elements, one obtains a homotopy Cohen--Macaulay complex. \begin{question} \label{abs-order-homotopy-question} After removing the bottom element from the absolute order on all of $\mathrm{GL}(V)$, say for $V=\F_q^n$, does one obtain a homotopy Cohen--Macaulay simplicial complex? What about the order ideal which is the union of all intervals below Singer cycles? \end{question} \noindent For example, for $\mathrm{GL}_3(\F_2)$, every maximal element in the absolute order is already a Singer cycle, so that the two simplicial complexes in Question~\ref{abs-order-homotopy-question} are the same. Both have reduced simplicial homology vanishing in dimensions $0,1$, and isomorphic to $\mathbb{Z}^{838}$ in dimension $2$. In terms of Sperner theory, the poset $[e,c]$ is rank-symmetric and rank-unimodal by \eqref{interval-rank-sizes}, and is self-dual by Proposition~\ref{prop:self-duality}. This raises a question, suggested by Kyle Petersen. \begin{question} For every Singer cycle $c$ in $\mathrm{GL}_n(\F_q)$, does the absolute order interval $[e,c]$ have a symmetric chain decomposition? 
\end{question} \noindent The local self-duality proven in Proposition~\ref{prop:self-duality} also implies that, for any $c$ in $\mathrm{GL}_n(\F_q)$, the {\it Ehrenborg quasisymmetric function} encoding the flag $f$-vector of the ranked poset $[e,c]$ will actually be a symmetric function; see \cite[Thm.\ 1.4]{Stanley-flag}. When $c$ is regular elliptic, Theorem~\ref{thm:flag-f-vector} lets one compute this symmetric function explicitly, but we did not find the results suggestive. Lastly, we ask how the poset $[e,c]$ in $\mathrm{GL}_n(\F_q)$ depends upon the choice of Singer cycle $c$. \begin{question} Do all Singer cycles $c$ in $\mathrm{GL}_n(\F_q)$ have isomorphic posets~$[e,c]$? \end{question} Certainly $[e,c]$ and $[e,c']$ are poset-isomorphic whenever $c, c'$ are conjugate, and whenever $c'=c^{-1}$. However, not all Singer cycles can be related by conjugacy and taking inverses. A similar issue arises for Coxeter elements $c$ in finite reflection groups $W$. For real reflection groups, all Coxeter elements {\it are} $W$-conjugate. For well-generated complex reflection groups, they are all related by what Marin and Michel \cite{MM} call {\it reflection automorphisms}, and these give rise to the desired poset isomorphisms $[e,c] \cong [e,c']$; see Reiner--Ripoll--Stump \cite{RRS}. \begin{remark} In spite of Theorem~\ref{thm:flag-f-vector}, within some $\mathrm{GL}_n(\F_q)$ there exist \emph{regular elliptic} elements $c'$ and Singer cycles $c$ for which $[e,c'] \not\cong [e,c]$. For example, the Singer cycles in $\mathrm{GL}_4(\F_2)$ are the elements $c$ with characteristic polynomial $t^4+t+1$ or $t^4+t^3+1$, while the elements $c'$ having characteristic polynomial $1 + t + t^2 + t^3 + t^4$ are regular elliptic but not Singer cycles; such $c'$ have multiplicative order $5 \neq 15=2^4-1=|\F_{2^4}^\times|$. 
One can check that $[e,c] \not\cong [e,c']$, for example by computing the determinants of the $\{0,1\}$-incidence matrices between ranks $1$ and $3$ for the two intervals. \end{remark}
\section{Introduction} The importance of low surface brightness (LSB) galaxies to galaxy studies is not that they dominate the total galaxy population of the Universe (they do not, Hayward, Irwin \& Bregman 2005; Rosenbaum \& Bomans 2004; Schombert, Pildis \& Eder 1997) nor that they represent a special form of star formation (they do not, Schombert \& McGaugh 2014). Instead, their importance lies in the need to explore the full range of galaxy characteristics in order to derive formation and evolutionary scenarios unbiased by size, mass or density. A clear picture of galaxy formation requires the inclusion of the LSB realm both globally, those galaxies that are LSB in their mean stellar densities (Pildis, Schombert \& Eder 1997), and locally, the LSB regions of a galaxy (Boissier \it et al. \rm 2008). This series of papers (Schombert, Maciel \& McGaugh 2011, Schombert, McGaugh \& Maciel 2013, Schombert \& McGaugh 2014a, Schombert \& McGaugh 2014b) has explored the class of LSB galaxies selected from a visual survey only by their mean surface brightness (in this sense, a class of objects that occupy the faintest end of the central surface brightness distribution). The LSB class contains a full range of galaxy types (irregulars to disks) and a full range of size and luminosity (dwarfs to giants). They are found in all types of galaxy environments (Schombert \it et al. \rm 1992; van Dokkum \it et al. \rm 2014), but tend to avoid the dense, rich environments, such as cluster cores (Galaz \it et al. \rm 2011). While LSB dwarf ellipticals (dE's) and dSph are gas-poor and often found in clusters, the typical LSB galaxy is gas-rich (Huang \it et al. \rm 2014, and references therein), but with low H$\alpha$ fluxes indicating low current star formation rates (SFR; Schombert, Maciel \& McGaugh 2011). Over the years, through numerous studies by many observing teams, the following characteristics were found to be common among galaxies at the low end of the surface brightness spectrum.
There are, of course, exceptions to all of the following generalizations, but a majority of LSB galaxies maintain these trends. First, the ratio of gas to stellar masses increases dramatically with decreasing mean surface brightness, such that the galaxies with the highest gas fractions ($M_{gas}/(M_{gas}+M_*)$) are typically LSB, as shown in previous studies (McGaugh \& de Blok 1997, Geha \it et al. \rm 2006). This is not too surprising in that the baryonic content of galaxies is primarily stars and gas, so the decreased importance of stellar mass will naturally result in an increasing fractional dominance of gas. Second, the optical and near-infrared (near-IR) colors of LSB galaxies are atypically blue (Pildis, Schombert \& Eder 1997; Schombert \& McGaugh 2014b). This is only considered atypical in the sense that their current SFRs are extremely low, with values in common with red spirals and S0's. However, galaxy colors have a trend of becoming bluer with more irregular morphology and higher gas fractions, so the expectation, just based on morphology and gas content, is that LSB galaxies should have similar colors to other irregular galaxies. Third, unsurprisingly, LSB galaxies do have extremely low current SFRs (on average a factor of ten lower than other irregular galaxies of similar stellar mass; Schombert, McGaugh \& Maciel 2013). The style of star formation is similar to other irregular galaxies, i.e., most of the star formation is concentrated in HII regions. This dispels a scenario where LSB galaxies only form stars outside molecular complexes (Schombert \it et al. \rm 1990). LSB galaxies also have a normal range of HII region sizes from massive $10^5$ $M_{\sun}$ complexes to individual OB star regions (Schombert, McGaugh \& Maciel 2013), i.e., massive complexes do occur despite their low mean stellar densities. The key dilemma from LSB galaxy studies concerns their stellar populations.
As the mean surface brightness decreases, the optical and near-IR colors become bluer and the SFR decreases. This is opposite to the expectation that a galaxy whose current SFRs are low should be dominated by the older, redder population, particularly in the lowest surface brightness regions. To the contrary, Schombert, McGaugh \& Maciel (2013) found that the optically dimmest regions were, in fact, the bluest regions. Near-IR imaging revealed that the colors for LSB galaxies could be explained by a combination of metallicity and the pattern of recent star formation (low level bursts separated by quiescent epochs; Schombert \& McGaugh 2014a). Models of roughly constant mean star formation rate punctuated by stochastic variations in current SFR also agree well with constraints from kinematic studies, where LSB irregulars display solid body rotation rather than the differential rotation that drives most spiral patterns (Kuzio de Naray, McGaugh \& Mihos 2009, Eder \& Schombert 2000). Constant star formation, in turn, provides a natural explanation for the observed range of stellar mass-to-light ratios in LSB galaxies (Schombert \& McGaugh 2014a). A more direct route to understanding the star formation history of LSB galaxies is, of course, to resolve the stellar population using deep space imaging (Dalcanton \it et al. \rm 2009). Even the top of a stellar population's color-magnitude diagram (CMD) reveals detailed information on the evolution of the underlying stars, the SFR over the last Gyr and the chemical evolution of those stars. Interpretation is assisted by numerous synthetic CMD simulators that use the most recent stellar isochrones and take into account short-lived, but highly luminous, phases of stellar evolution (e.g., asymptotic giant branch stars, blue stragglers, etc., see Gallart \it et al. \rm 1996; Mighell 1997; Holtzman \it et al.
\rm 1999; Dolphin 2002; Aparicio \& Hidalgo 2009). The analysis of CMD's in numerous Local Volume (LV, galaxies within 10 Mpc of the Milky Way) primary and dwarf galaxies has filled the literature in the last ten years (see Tully \it et al. \rm 2009; McQuinn \it et al. \rm 2012). These studies have found that the star formation history of most LV dwarfs is a complicated progression of bursts of star formation; however, they all also have significant old stellar populations ($\tau >$ 8 Gyrs) that dominate their global colors and the lower portion of the CMD. Resolved populations also give spatial information on the star formation history (SFH) of LV dwarfs (McQuinn \it et al. \rm 2012); usually solid body rotation prevents significant mixing on 50 Myr timescales (Bastian \it et al. \rm 2009), potentially allowing the spatial chemical history of the stars to be mapped, rather than relying on gas metallicity from emission lines (Gallart \it et al. \rm 2005). Of course, the closer a galaxy is to the Milky Way, the deeper into its CMD one can resolve. Unfortunately, none of the LSB dwarfs from our samples are closer than 8 Mpc, the outer edge of successful HST stellar photometry, and thus our interpretation will be limited to the bright portion of the CMD. For this study, we have selected three dwarfs from our LSB catalogs that are the closest to the Milky Way (F415-3, F608-1 and F750-V1) for WFC3 imaging in three filters (F336W, F555W and F814W). Our objective is to obtain the first observations of the top of the CMD in any LSB galaxy for comparison to other LV dwarfs. These galaxies also have matching ground-based optical, near-IR, H$\alpha$ and HI imaging for comparison to the resolved stellar populations. Our goal is a first look at the details of the star formation history of these low density galaxies and their relevance to galaxy formation and evolution scenarios.
\section{Analysis} \subsection{Sample} Three LSB galaxies were selected from the dwarf LSB catalog of Schombert, Pildis \& Eder (1997), an optically selected sample with Arecibo HI observations. The criteria for inclusion in the Schombert dwarf LSB sample were 1) LSB nature, 2) one arcmin or greater angular size and 3) irregular morphology. The intent of the sample was to extend the local dwarf galaxy sample as a test of biased galaxy formation scenarios. There was no attempt to be luminosity or mass complete; size, morphology and LSB appearance were the main criteria. The three galaxies selected were all Fall objects (F415-3, F608-1 and F750-V1) and all located less than 11 Mpc away. Their distances are from HI observations (Eder \& Schombert 2000), corrected to the CMB reference frame by NED, and are 10.4, 8.9 and 7.9 Mpc respectively, with an accuracy of 0.1 Mpc. These were the closest LSB dwarfs in the Schombert LSB catalog, chosen to maximize the detection of stellar populations and the depth of the resulting CMD. F608-1 is also UGC159, where the original coordinates were misprinted on POSS SAO overlays, resulting in separate designations for several years. We maintain the LSB catalog labels for clarity. Ground-based $V$ images from the KPNO 2.1m for all three galaxies are shown in Figure \ref{ground} (each image is 900 secs of exposure, 0.61 arcsecs/pixel plate scale). Their optical and HI properties are summarized in Table 1. Compared to other LSB galaxies studied in H$\alpha$ (Schombert, McGaugh \& Maciel 2013), F608-1 and F750-V1 are on the small side (less than 0.6 kpc in radius at the 26 $V$ mag arcsec$^{-2}$ isophote), while F415-3 is 1.8 kpc in radius, larger but still dwarf-like in size. Their central surface brightnesses are average with respect to the LSB sample as a whole, ranging around 23 $V$ mag arcsec$^{-2}$. F608-1 and F750-V1 have the lowest baryon masses (gas plus stellar mass) in the LSB sample (10$^{7.8}$ and 10$^{7.3} M_{\sun}$).
F415-3 has an average LSB mass and luminosity (10$^{8.8} M_{\sun}$). All three have high gas fractions ($M_{gas}/(M_*+M_{gas})$) greater than 70\%. With respect to star formation, all three galaxies have very low current star formation rates based on H$\alpha$ measurements. They lie in the lower 10\% of the star formation rates (SFR) for LSB galaxies (110 objects) studied by Schombert, McGaugh \& Maciel (2013) and in the bottom 5\% of star forming dwarf galaxies (168 objects, van Zee 2001; Hunter \& Elmegreen 2004). Despite the low SFR's, their optical colors are very blue, typical of values for LSB galaxies and star-forming spirals, although the blue colors in star-forming galaxies are presumably due to a large high-mass stellar population. The origin of blue colors in LSB galaxies is unclear as the current SFR is low. Possible explanations for the blue colors of LSB galaxies have ranged from extremely low metallicities (Schombert \it et al. \rm 1990) to unusual stellar types (i.e., an overabundance of bHB or blue straggler stars; Rakos \& Schombert 2004), but no particular idea has gained support from observations. \begin{figure}[!ht] \centering \includegraphics[scale=0.85,angle=0]{ground.pdf} \caption{\small The LSB galaxies F415-3 (left), F608-1 (middle) and F750-V1 (right) from 900 sec KPNO 2.1m $V$ images. The 3x3 arcmin WFC3 field is shown as an outline on the ground-based images. All three galaxies are between 8 and 11 Mpc in distance, so each frame is approximately 20 kpc on a side. The typical irregular morphology of LSB dwarfs is obvious; H$\alpha$ maps of the same galaxies are found in Schombert, McGaugh \& Maciel (2013). } \label{ground} \end{figure} \subsection{Observations} The CMD's presented herein are produced from HST stellar photometry taken with the Wide Field Camera 3 (WFC3), the fourth-generation UV/IR imager. WFC3 uses two backside-illuminated 2Kx4K CCDs with a combined field of view of 162x162 arcsecs. The plate scale is 0.04 arcsecs/pixel.
Observations were taken in three filters, F336W, F555W and F814W, approximately Johnson $U$, $V$ and $I$. These filters were selected to isolate a CMD in $V-I$, to measure the mean metallicity of the older population, and to obtain $U-V$ observations that identify the UV sources of the limited H$\alpha$ emission seen in these LSB dwarf galaxies. Ten orbits from cycle 20 were assigned to each object, split as five orbits for F336W, three orbits for F555W and two orbits for F814W. Due to their low surface brightness nature, the observations were made in LOW-SKY mode where the zodiacal light is less than 30\% of the minimum. Each orbit was broken into four exposures using UVIS-DITHER-BOX for cosmic-ray subtraction and to minimize pixel-to-pixel sensitivity variations. A pre-flash option was used for the F336W exposures to avoid a known WFC3 charge transfer problem (Anderson \& Baggett 2014). Orbital variations resulted in total F336W exposure times of 8,800 secs for F415-3 and 10,000 secs for F608-1 and F750-V1. F555W and F814W received total exposures of 7,088 and 4,688 secs for all three galaxies. Reduced images were taken directly from the STScI pipeline where bias, flat-fielding and image distortion were automatically corrected. Calibration was performed using standard WFC3 header values. CMD's for all three colors, in the HST filter system, are shown in Figure \ref{cmd}. Conversion from the WFC3/UVIS filter passbands to the Johnson/Cousins $UVI$ passbands was accomplished using the photometric transformation to $AB$ magnitudes, then from $AB$ to $UVI$. The $AB$ conversions for $V$ and $I$ are well known, but the $AB$ conversion to $U$ is less well established. After some investigation, we used the $AB$ and $U$ magnitude of the Sun (6.35 and 5.61 at 336 nm, Blanton \it et al. \rm 2007) for the conversion. We confirmed the $V$ and $I$ calibration by comparison to ground-based images calibrated with Landolt standards.
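The solar-anchored $U$ conversion and the extinction correction described above amount to simple magnitude offsets. A minimal sketch (hypothetical helper names; the sign convention, $m_U = m_{AB} + (U_{\sun} - AB_{\sun})$, is our assumption that both systems must assign the Sun its quoted magnitude at 336 nm):

```python
# Solar anchor at 336 nm from Blanton et al. (2007), as quoted above:
AB_SUN_336 = 6.35
U_SUN_336 = 5.61

def ab_to_johnson_u(m_ab):
    """Johnson U from an AB magnitude at 336 nm, anchored to the Sun;
    the offset is U_sun - AB_sun = -0.74 mag (assumed sign convention)."""
    return m_ab + (U_SUN_336 - AB_SUN_336)

def deredden(m, a_mag):
    """Remove a Galactic extinction correction A (mag), e.g. A = 0.27,
    0.15 or 0.25 for the three program galaxies (Schlafly & Finkbeiner 2011)."""
    return m - a_mag
```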
The mean difference between HST and ground-based magnitudes was 0.01$\pm$0.05 in $V$ and 0.02$\pm$0.07 in $I$, effectively zero. Galactic extinction was applied to the final photometry following the prescription of Schlafly \& Finkbeiner (2011). These values were 0.27, 0.15 and 0.25 for our three galaxies. Stellar sources were identified using a threshold filter that used local sky for discrimination. Crowded fields in the galaxies' cores were isolated by visual inspection; however, the crowding was by no means as intense as it often appears in high surface brightness galaxies. Star clusters were identified by visual inspection and assigned a letter designation. Many of these corresponded to regions of enhanced surface brightness in ground-based images ($V$ knots and HII regions). Comparison between frames made use of the internal WCS for the WFC3 frames. Comparison to ground-based optical and H$\alpha$ images used a coordinate system based on a dozen stars in common with the WFC3 fields. Stellar photometry was performed using the version of DAOPHOT found in the recent PyRAF platform. Although more sophisticated photometry programs were available, the low number of sources and wide spacing characteristic of LSB galaxies made for uncomplicated PSF fitting and local sky determination. Targets were identified by a combination of threshold filtering and visual inspection. A series of 10 to 15 stars were selected as PSF standards. The FWHM was consistent at 0.092 arcsecs in each filter. Non-stellar objects, based on profile sharpness, were rejected as presumed background galaxies (although the possibility that these objects are planetary nebulae is not excluded). A few interesting (i.e., very bright) stars were too crowded to process automatically and were reduced by interactive tools. Blending is a serious problem at the distances of the three galaxies in our sample. However, there are a couple of factors that work to minimize the effects of blending on our results.
First, the stellar densities of our LSB galaxies are, by definition, much lower than other LV dwarfs with resolved stellar populations. Aside from a few very dense regions associated with the high H$\alpha$ signature of strong star formation, most of the stellar sources are spatially well defined. Second, blending by binary pairs is less of a concern than in other CMD studies, for we only sample the top of the luminosity function. The odds are that the companion star in a binary system will be much less luminous than the primary and, thus, will make only a small contribution to the measured color. A large occulted region passes through the center of the WFC3 frame as an artifact of the interface between the two CCD's. This region, while manually removed from the image, unfortunately also passes through the center of each galaxy. Offsetting the pointing to avoid it, however, would have moved interesting outer star-forming regions off the field. As the occulting strip varies in sky position from filter to filter (i.e., from visit to visit), some clusters in the galaxy cores were only observed through one or two filters. A map of all the sources with photometry in at least two colors is found in Figures \ref{map1}, \ref{map2} and \ref{map3}. \begin{figure}[!ht] \centering \includegraphics[scale=0.45,angle=0]{map1_grey_full.pdf} \caption{\small The right frame displays the WFC3 F555W image (7,088 sec exposure) for the LSB galaxy F415-3. North is roughly towards the upper right corner, East is 90 degrees counter-clockwise. The left frame is a map of 1,869 sources with S/N $>$ 5 in the F555W frame. Pixel units in X and Y are shown on the right, kpc's on the left. Symbol size correlates with luminosity, and symbols are colored blue or red based on a $V-I=0.5$ cut. The number counts for the stellar sources trace the ground-based optical surface brightness.
Blue stars tend to be concentrated in clusters; however, a significant fraction are distributed in low surface brightness regions, explaining the long-standing dilemma of the lack of sharp 2D color discrimination in LSB galaxies. } \label{map1} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=0.45,angle=0]{map2_grey_full.pdf} \caption{\small The right frame displays the WFC3 F555W image (7,088 sec exposure) for the galaxy F608-1. North is roughly towards the upper right corner, East is 90 degrees counter-clockwise. The left frame is a map of 465 sources with S/N $>$ 5 in the F555W frame. Pixel units in X and Y are shown on the right, kpc's on the left. Symbol size correlates with luminosity, and symbols are colored blue or red based on a $V-I=0.5$ cut. Although smaller in size than F415-3, the stellar distribution is as extended as F415-3. Again, the bright blue stars are associated with clusters. } \label{map2} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=0.45,angle=0]{map3_grey_full.pdf} \caption{\small The right frame displays the WFC3 F555W image (7,088 sec exposure) for the galaxy F750-V1. North is roughly towards the upper right corner, East is 90 degrees counter-clockwise. The left frame is a map of 501 sources with S/N $>$ 5 in the F555W frame. Pixel units in X and Y are shown on the right, kpc's on the left. Symbol size correlates with luminosity, and symbols are colored blue or red based on a $V-I=0.5$ cut. F750-V1 is smaller than F415-3 or F608-1 as the adjusted X/Y scale indicates. } \label{map3} \end{figure} The limiting magnitudes and photometric errors were similar from galaxy to galaxy since the exposure times and instrument set-up were identical. The limiting magnitudes were 27.1 in F336W ($U$), 27.4 in F555W ($V$) and 27.5 in F814W ($I$), which corresponds to approximately $M_V=-2.5$ at the distances of the sample.
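The conversion from apparent limiting magnitude to this absolute limit is just the distance modulus; a minimal sketch (the $\sim$9.5 Mpc distance below is illustrative, chosen to be consistent with a distance modulus near 29.9, and is not a value taken from Table 1):

```python
import math

def absolute_mag(m_apparent, d_pc):
    """Absolute magnitude from apparent magnitude and distance in parsecs:
    M = m - 5 log10(d) + 5."""
    return m_apparent - 5.0 * math.log10(d_pc) + 5.0

# F555W limit of 27.4 at an assumed ~9.5 Mpc gives M_V near -2.5.
print(round(absolute_mag(27.4, 9.5e6), 1))   # -> -2.5
```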
These were exactly the expected limiting magnitudes based on pre-observation calculations from HST's APT for the requested orbits and filters. The photometric errors as a function of F555W magnitude are shown in Figure \ref{limiting_mag}. At the limiting magnitude, errors reach 0.12, 0.07 and 0.20 in F336W, F555W and F814W. Stars with errors greater than these values were inspected visually to confirm that they were real detections. Experiments with artificial stars demonstrated that our sample is 80\% complete at the limiting magnitude in F555W. Thus, we use the catalog of F555W detections as the primary catalog and search for stars detected only in F336W or F814W as extreme blue or red objects. A total of 2,835 sources are found for all three galaxies with S/N $>$ 5 (1,869 in F415-3, 465 in F608-1 and 501 in F750-V1). All the photometry and images can be found at our LSB website (http://abyss.uoregon.edu/$\sim$js/lsb). \begin{figure}[!ht] \centering \includegraphics[scale=0.75,angle=0]{limiting_mag.pdf} \caption{\small Photometric errors for stellar sources in the F555W frame of F415-3. Stars with anomalous errors are typically associated with crowded regions (poor local sky) or asymmetric PSF shapes. The typical error for a limiting magnitude of 26.8 is 0.07. } \label{limiting_mag} \end{figure} \subsection{Surface Brightness Mapping} One of the many paradoxes of LSB galaxies is the origin of their LSB nature. There is the original problem of why they are so low in surface brightness as a class of objects, compounded by the dilemma that even their lowest surface brightness regions are atypically blue in color (Schombert, McGaugh \& Maciel 2013). A faded (i.e., old) stellar population would exhibit a LSB nature, but would be quite red (Rakos \& Schombert 2005). Alternatively, the spacing between stars might be much larger than in star-forming spirals, resulting in less stellar luminosity per pc$^2$, or the galaxies might be much thinner than other dwarf galaxies.
Both scenarios require very different star formation mechanisms than the usual molecular cloud collapse into star clusters once a critical gas density is reached (i.e., Schmidt's law, Lada 2014). The distribution of stellar sources is the first look we have into the tip of the underlying stellar population in LSB galaxies. There are 1,869 detected stellar objects in F415-3. They range in absolute $I$ magnitude from $-$10 to $-$0.5. The total luminosity of the stellar sources is log $L/L_{\sun}$ = 6.86, compared to log $L/L_{\sun}$ = 7.85 for the galaxy's total $V$ luminosity (using 4.83 as the absolute magnitude of the Sun). This means that the observed WFC3 stellar population contributes only 1/10th of the total luminosity of the galaxy, with the rest contributed by an unresolved, underlying stellar population. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,angle=0]{sfb_map.pdf} \caption{\small Mean surface brightness versus stellar luminosity for F415-3. The local surface brightness in three arcsec boxes is plotted against the total luminosity of stellar sources in the same area. The mean surface brightness is taken from ground-based $V$ images, correlated against WFC3 stellar counts. Stellar counts in the same regions are converted to luminosities per pc$^2$. A moving average is shown as blue symbols. The red dotted line is the canonical relationship between $V$ surface brightness and solar luminosities per pc$^2$ shifted by a factor of 10 to account for the difference between the galaxy total luminosity and the sum of the stellar counts (i.e. the unresolved stellar population).} \label{sfb_map} \end{figure} We can compare the distribution of surface brightness from ground-based images, which measures the contribution from all the stars, with the luminosity distribution of the WFC3 sources.
The mean surface brightness is taken from re-registered ground-based images where a 3x3 arcsec box was used to smooth the cleaned image (foreground and background objects removed). This is compared to the sum of the luminosity of all the stellar sources in the same region, converted to luminosity per pc$^2$. The resulting correlation between the source distribution and surface brightness is shown in Figure \ref{sfb_map}, scaled by a factor of 1/10 for the luminosity of the stellar sources. There are several interesting points to extract from this Figure. First, as was expected from visual inspection of Figure \ref{map1}, there is a correlation between stellar counts and the underlying surface brightness of the galaxy. Regions of high surface brightness (knots from Paper II) are clearly associated with stellar associations. Regions of densely packed stellar sources are also higher in mean surface brightness. Note that this correlation does not necessarily have to exist, for while the higher surface brightness regions would be associated with new star formation, the lower surface brightness regions could be faded populations devoid of bright stars. This increases the confidence that conclusions based on the top of the stellar luminosity function can be extended to the underlying stellar population. The correlation between stellar sources and mean surface brightness also follows the trend expected for converting surface brightness into stellar luminosity per pc$^2$ (dotted line in Figure \ref{sfb_map}, corrected for the missing luminosity of undetected stars). A majority of the data is consistent with a linear correlation between surface brightness and stellar counts. This implies that for every square parsec the relationship between the bright stars and the faint, unresolved stars is constant. That is, there are no hidden populations in LSB galaxies.
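The canonical conversion behind the dotted line in Figure \ref{sfb_map} can be sketched as follows. This is the standard relation between surface brightness in mag arcsec$^{-2}$ and solar luminosities per pc$^2$ (not code from our reduction), using $M_{V,\sun}=4.83$ as in the text; the example input value of $\mu_V$ is illustrative:

```python
# V-band surface brightness (mag/arcsec^2) -> L_sun per pc^2.
# The constant 21.572 = 5 log10(180*3600 / (10 pi)) converts an arcsec^2
# at 10 pc into a pc^2, so mu_V = M_sun + 21.572 - 2.5 log10(S).
M_SUN_V = 4.83   # absolute V magnitude of the Sun, as adopted in the text

def sb_to_lum_per_pc2(mu_v):
    """Invert the canonical relation to get S in L_sun per pc^2."""
    return 10.0 ** ((M_SUN_V + 21.572 - mu_v) / 2.5)

# Example: mu_V = 25.0 mag/arcsec^2 (a typical LSB value) is ~3.6 L_sun/pc^2.
print(round(sb_to_lum_per_pc2(25.0), 1))
```

Comparing this expectation (shifted by the 1/10 resolved fraction) to the summed source luminosities is what produces the dotted line in the figure.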
Aside from the very brightest, short-lived blue stars, the other stellar sources trace the faintest, unresolved stars as well. The broad distribution of blue stars indicates a great deal of uniformity in age across the stellar populations in LSB galaxies. There appear to be no regions that are strictly old (greater than 3 Gyrs) stars. In most galaxies, an old population is associated with some central concentration of light. The irregular morphology of LSB galaxies means the older populations, if they exist, are intermixed with the new stars. \subsection{Star Cluster Identification} Numerous stellar associations are identified in all three galaxies, again indicating that star formation in LSB galaxies proceeds in a fashion similar to HSB star-forming spirals and irregulars, i.e. molecular clouds collapsing to form stellar clusters. Thirty-nine groupings were identified in all three galaxies (23 in F415-3, 11 in F608-1 and 5 in F750-V1). Assuming a standard IMF, these clusters range in mass from a few times $10^4$ to $10^6$ $M_{\sun}$, see Figure \ref{clusters}. However, as a cautionary note, most of these groupings are 25 to 50 pc in size and are probably collections of several smaller clusters in the same region. Open clusters found in M31 (Williams \& Hodge 2001) display a similar appearance to the clusters identified in our LSB galaxies (see their Figure 5), but the regions associated with H$\alpha$ emission are 100's of pc in diameter and can accommodate several groups of ionizing OB stars. The star clusters' mean colors are typically blue ($V-I < 0.5$), but the inclusion of one or two bright red giant branch (RGB) or asymptotic giant branch (AGB) stars makes the mean color less sensitive to the age of the cluster (see Asa'd \& Hanson 2012). Our limiting magnitude (see \S2.2) means that the stellar sources that identify the clusters are composed solely of OB stars or stars above the tip of the RGB.
The clusters in F415-3 are bluer, on average (mean $V-I = 0.0$), than the clusters in F608-1 and F750-V1 (mean $V-I = 0.5$). These colors map consistently into the $B-V$ of LSB knots from ground-based images (Schombert, McGaugh \& Maciel 2013), implying that the ground-based colors of the enhanced surface brightness knots are also driven by the brightest stars. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,angle=0]{clusters.pdf} \caption{\small Histograms of the total $V$ luminosities and weighted $V-I$ colors of the 39 star groupings identified in all three galaxies. These values are consistent with the luminosities and colors of LSB knots from Schombert, McGaugh \& Maciel 2013 which were tentatively identified as star-forming regions based on H$\alpha$ images. } \label{clusters} \end{figure} Figures \ref{F415-3_clusters}, \ref{F608-1_clusters} and \ref{F750_clusters} display a mosaic of some clearer examples of the star associations in each galaxy. Associations are identified by a letter and H$\alpha$ contours from ground-based imaging are also shown in each Figure. The H$\alpha$ peaks are always associated with a bright, blue star or small grouping of blue stars. However, there are several associations not identified with H$\alpha$ emission (e.g., cluster C in F608-1). Normally, these would be identified as older associations (ages greater than 10 Myrs) in which the ionizing stars have died off; however, most have at least one centrally located bright, blue star. Presumably, these are clusters where the leftover gas has been blown away by galactic winds or made too diffuse to be detected in ground-based H$\alpha$ imaging. \begin{figure}[!ht] \centering \includegraphics[scale=1.5,angle=0]{F415-3_clusters.pdf} \caption{\small Selected star clusters from F415-3 F555W images. Colored contours display H$\alpha$ emission from ground-based images (FWHM = 1.2 arcsecs). Clusters A through E are located in a LSB knot at the SW corner of the galaxy.
The knot resolves into four distinct clusters, all associated with H$\alpha$ emission. Cluster O displays a bright HII region powered by a pair of massive O stars ($M_I < -5$). Clusters Q through T are associated with a LSB knot to the SE. Clusters M and N display the more typical shape and size of an LSB star forming region. } \label{F415-3_clusters} \end{figure} F415-3 is the largest galaxy in our sample, with the largest number of detected stellar sources. Twenty-three associations were identified by visual inspection, all associated with a distinct HII region. The irregular morphology of LSB galaxies is often driven by the presence of one or two knots, now identified with a single cluster or group of clusters. For example, the LSB features to the SE and SW (see Figure \ref{ground}) are cluster groups A through D (SW) and groups Q through S (SE). However, their spacing is sufficiently wide to prevent a distinct HSB knot as would be visible in a star-forming irregular galaxy. Cluster O demonstrates the sharp correlation between ionizing star luminosity and H$\alpha$ emission. The ionizing pair of OB stars in cluster O are the brightest in the sample. Clusters M and N (bottom left) are good examples of two H$\alpha$ knots associated with two distinct knots in $V$ ground images. The two clusters are displayed in Figure \ref{F336_F555_cluster} with the F336W frame next to the F555W frame. The ionizing blue stars are obvious in the F336W frame, with 4 to 5 OB stars per cluster. This maps nicely into the H$\alpha$ fluxes of those regions (log $L_{H\alpha} = 37.13$ and 37.15, respectively) which corresponds to approximately five OB stars (see Figure 13; Schombert, McGaugh \& Maciel 2013). \begin{figure}[!ht] \centering \includegraphics[scale=1.5,angle=0]{F336_F555_cluster.pdf} \caption{\small A comparison of the F336W and F555W images for clusters M and N in F415-3.
The H$\alpha$ emission in Figure \ref{F415-3_clusters} centers around the bluest handful of stars in each cluster. The F336W images are decisive in identifying the bluest stars in each galaxy. } \label{F336_F555_cluster} \end{figure} The central HII region in F415-3 divides into at least three groupings (J, K and L). The H$\alpha$ flux of this region (log $L_{H\alpha}$ = 38.10) corresponds to several Orion sized HII regions, but was not resolved into the distinct clusters. We suspect that many of the bright HII regions in LSB galaxies are unresolved combinations of several Orion-sized clusters, as displayed by clusters J, K and L, as their combined surface brightness is only slightly less than a 30 Dor sized complex. Clusters H and I (middle right panel) display a more common grouping where cluster H is powered by a massive O star, while cluster I has only two faint stars visible in the F336W frame and would produce very few ionizing photons. \begin{figure}[!ht] \centering \includegraphics[scale=1.5,angle=0]{F608-1_clusters.pdf} \caption{\small Selected star clusters from F608-1 F555W images. Colored contours display H$\alpha$ emission from ground-based images (FWHM = 1.2 arcsecs). Unlike F415-3, several clusters in F608-1 lack H$\alpha$ emission (clusters C, F and L). Inspection of their localized CMD's reveals a lack of bright OB stars with a strong young RGB population (rHeB stars, see \S2.6), indicating an older population than those that comprise the H$\alpha$ regions. } \label{F608-1_clusters} \end{figure} F608-1 also displays a number of distinct associations (see Figure \ref{F608-1_clusters}), ranging from $10^4$ to a few times $10^5$ $M_{\sun}$. Of the thirteen identified groupings, only two are not associated with H$\alpha$ emission. One grouping (L) is widely dispersed, having the appearance of an old, open cluster, but being much too dispersed to be a gravitational unit (having a scale size of 400 pcs).
Most likely, this is a region where several complexes were born, evolved and were dispersed by kinematic effects. The G cluster displays very faint H$\alpha$ under closer inspection of the original ground-based images. The remaining cluster (C, middle left) shows no H$\alpha$ despite having several OB stars to ionize any nearby gas. It is one of the reddest clusters in the sample ($V-I=1.5$), perhaps indicating an older age where the leftover gas has been blown away by stellar winds. \begin{figure}[!ht] \centering \includegraphics[scale=1.5,angle=0]{F750-V1_clusters.pdf} \caption{\small F750-V1 is the smallest galaxy in the sample with only five HII regions. While each HII region is identified with an ionizing OB star, no large associations of stars are visible like those in F415-3 or F608-1. } \label{F750_clusters} \end{figure} Our smallest galaxy, F750-V1 (see Figure \ref{F750_clusters}), has five distinct HII regions, but with H$\alpha$ luminosities near log $L_{H\alpha}$ = 36.0, i.e. the flux expected from one OB star. Cluster E is a distinct cluster of a handful of blue stars, but the HII regions to the north are widely dispersed over an area of 500 pc, with only a few OB stars. Clusters A and B are each powered by a single O star. \subsection{Stellar Distribution} Without a global dynamic structure, such as that supplied by spiral density waves, the star formation history in LSB galaxies should be dominated by stochastic processes and, thus, the spatial distribution of young stars is a measure of this process. As can be seen in Figure \ref{map1}, the blue stars are clearly more clustered than the redder population, and this segregation is even stronger given that many of the $V-I>0.5$ stars are on the red helium-burning branch and, thus, are only 100 Myrs old. This differs from the stellar distribution in LV dwarfs such as NGC 1705 (Tosi \it et al. \rm 2001) where the young stars are concentrated in the central regions with an older population found in the halo.
However, the distribution of young stars in our irregular LSB galaxies is mostly a statement of how the stellar mass is distributed. Most irregular LSB galaxies have no well-defined central location and it is rare to find the highest surface brightness region associated with the geometric center defined by the outer isophotes. Star formation, and thus the youngest stars, are clearly associated with the higher surface brightness knots seen in the ground-based images, and the uniform stellar distributions, such as those in NGC 1705, are accidents of the uniformity of the isophotes in some LV dwarfs. To be more precise, 75\% of the stars in identified stellar associations or groupings have colors less than $V-I=0.5$ while only 40\% of the field population are that blue. Although we refer to the groupings in Figures \ref{F415-3_clusters}, \ref{F608-1_clusters} and \ref{F750_clusters} as star clusters, this is a misnomer as gravitationally bound open clusters have sizes from a few to 10 pc's in diameter. In fact, these associations should be referred to as star complexes, for they probably contain several cluster-sized units and their luminosities plus H$\alpha$ fluxes are more in agreement with a grouping of young open clusters. The star formation pattern in our LSB sample is similar to the pattern found in Sextans A (a Local Group dwarf of similar size and luminosity as F415-3, see Dohm-Palmer \it et al. \rm 2002) where the brightest bMS stars are found in the stellar groupings and the highest percentage of older AGB stars are found in the regions between the H$\alpha$ knots. Also, the 100 Myr population (rHeB stars) is located primarily in the stellar associations, but with less density than the very young bMS stars (50\% versus 75\%). None of this is particularly surprising as it appears that local star formation in LSB galaxies proceeds in the same fashion as in their HSB irregular cousins, i.e., from compact clusters to dispersed associations.
The only peculiar aspect of the stellar distribution (especially for F415-3) is the existence of any bright O stars outside of an association or an HII region. The lack of H$\alpha$ emission may relate to the low gas density in LSB galaxies, for about 50\% of the H$\alpha$ emission in LSB galaxies is not associated with a particular HII region, but exists in a low surface brightness diffuse form. Thus, the isolated O stars may be generating this H$\alpha$ emission that is not visible against the sky background. O stars outside of a cluster are not improbable considering the internal kinematics of LSB galaxies. The typical gas velocity dispersion is 8 km/sec (Kuzio de Naray \it et al. \rm 2006), which corresponds to approximately 8 pc/Myr, or sufficient velocity to scatter older O stars from their regions of intense star formation, or to isolate single O star HII regions. \subsection{Color Magnitude Diagrams} The color-magnitude diagrams for all three galaxies, in HST filters F336W, F555W and F814W, are shown in Figure \ref{cmd}. The left panels are $M_{F814W}$ versus $F555W-F814W$, the right panels display $M_{F555W}$ versus $F336W-F555W$, where the absolute magnitudes are determined from the distances in Table 1. Shown for reference are stellar isochrones for a 50 Myr population of [Fe/H]=$-$0.6 and a 12 Gyr population with [Fe/H]=$-$1.5. Similar features are seen in all three galaxies, with the clearest CMD morphology visible in F415-3 due to its larger sample. The brightest stars in F415-3 ($M_{F814W} < -8$) are potentially foreground stars. According to Gould, Bahcall \& Maoz (1993), a total of 3$\times$10$^5$ stars per square degree are expected at this galactic latitude and limiting magnitude of $M_V=-7$ ($m_V=23$). For the size of F415-3, this results in 15 possible contaminating stars, whereas there are 16 stars brighter than $M_{F814W} = -8$ in F415-3. This makes all of them potentially non-members of F415-3's stellar population.
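The contamination estimate above is a simple surface-density times solid-angle calculation; a sketch (the angular area of F415-3 below is an assumed, illustrative value chosen to be consistent with the quoted result of $\sim$15 stars, not a measurement from this paper):

```python
# Foreground-star contamination estimate, following Gould, Bahcall & Maoz
# (1993): ~3e5 foreground stars per square degree at this latitude and depth.
SIGMA = 3.0e5            # foreground stars per square degree (from the text)
AREA_ARCMIN2 = 0.18      # ASSUMED angular area of F415-3 in arcmin^2
AREA_DEG2 = AREA_ARCMIN2 / 3600.0   # 1 deg^2 = 3600 arcmin^2

n_foreground = SIGMA * AREA_DEG2
print(round(n_foreground))   # expected number of contaminating stars
```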
Neither F608-1 nor F750-V1 appear to have foreground stars contaminating their samples, probably due to their smaller samples and angular sizes. \begin{figure}[!ht] \centering \includegraphics[scale=0.9,angle=0]{cmd.pdf} \caption{\small The two color CMD's for all three LSB galaxies are displayed with no extinction corrections, but using the distance moduli in Table 1. Similar CMD features are seen in each galaxy with prominent blue main sequences and young RGB's (rHeB populations, see discussion in text). Low metallicity isochrones for a 50 Myr and 12 Gyr population are shown. Typical error bars are shown on the right side of the top panels. } \label{cmd} \end{figure} Our analysis of the CMD's in LSB galaxies is guided by comparison to other dwarf CMD's (e.g., IC 2574, see below) and stellar population simulations. One of the highest quality simulators is the synthetic CMD generator from IAC-STAR (Aparicio \& Gallart 2004). Numerous variables control a synthetic CMD generator, i.e., the IMF, star formation rates and chemical evolution scenarios. Our experiments used the isochrones of Bertelli \it et al. \rm (1994) with the default mass loss and IMF settings from the IAC-STAR simulations. We adopt a chemical evolution scenario similar to our own stellar population models (Schombert \& McGaugh 2014a) where the initial metallicity was chosen to be [Fe/H] = $-$1.5, with ending values varying from $-$0.9 to +0.1. \begin{figure}[!ht] \centering \includegraphics[scale=1.7,angle=0]{iac_regions.pdf} \caption{\small An IAC-STAR synthetic CMD for a stellar population of 13 Gyrs in age with a constant star formation rate. The starting population has a metallicity of Z=0.0006 ([Fe/H]=$-$1.5) and a final metallicity of Z=0.004 ([Fe/H]=$-$0.6). Various colors correspond to different ages; blue = less than 150 Myrs, green = 150 Myrs to 1 Gyr, yellow = 1 to 3 Gyrs, cyan = 3 to 8 Gyrs, magenta = 8 to 10 Gyrs, red = greater than 10 Gyrs.
Also shown are the CMD morphology regions used in Figure \ref{ic2574}. The youngest population ($\tau < 150$ Myrs) is displayed in greater detail in Figure \ref{age_color_mag}. } \label{iac_regions} \end{figure} One such synthetic CMD, for a final metallicity of [Fe/H]$=-0.6$ and constant star formation over 13 Gyrs, is shown in Figure \ref{iac_regions}. Only the top of the CMD is shown ($M_I < -1$) and stars of different ages are represented with different symbol colors (blue for less than 150 Myrs, green for between 150 Myrs and 1 Gyr, yellow for 1 to 3 Gyrs, cyan for 3 to 8 Gyrs, magenta for 8 to 10 Gyrs and red for older than 10 Gyrs). Immediately obvious are the very young features of a stellar population, the bMS, bHeB and rHeB branches, plus a fainter 'juvenile' population of stars with ages between 150 Myrs and one Gyr (this would include the brightest portion of the red clump). Beyond one Gyr, the stellar population quickly develops into a classic RGB with the oldest stars forming the blue edge of the RGB. Intermediate aged stars (between two and eight Gyrs) dominate the AGB region of the synthetic CMD. We have designated specific regions in the $M_I$ vs $V-I$ CMD to compare the star formation history of LSB galaxies with the other HST dwarf galaxy samples. This will allow us to compare data from dwarf galaxies with large numbers of stellar sources to our smaller samples, plus comparison with synthetic CMDs. The six regions are shown in Figure \ref{iac_regions} and comprise the areas of the CMD that are sensitive to specific age and metallicity effects. The region to the far blue encompasses the youngest stars, those making up the tip of the main sequence (bMS), defined as all stars bluer than $V-I=0.0$. Slightly redder is the region containing the blue branch of the helium-burning phase (bHeB), defined by a wedge parallel to the red (rHeB) branch.
The rHeB branch is defined as a parallelogram with a width that would capture a range of metallicities strictly above the $M_I = -4$ line, i.e. the tip of the RGB. Stars in the bMS region are less than 15 Myrs in age, while stars in the bHeB and rHeB are between 15 and 150 Myrs. The rHeB feature spans $M_I < -4$ and $1 < V-I < 2$ and, while this region also contains very young stars, the age of the stars decreases as their luminosity increases (see Figure \ref{age_color_mag}). Therefore, the number of stars from the base of the rHeB to the tip is a measure of star formation over the last 100 Myrs (see \S2.8). A combination of studying the bluest stars, and the brightest red stars, resolves the most recent star formation epoch outside of H$\alpha$ emission ($\tau < 15$ Myrs). \begin{figure}[!ht] \centering \includegraphics[scale=1.9,angle=0]{iac.pdf} \caption{\small Four IAC-STAR simulations for ending values of [Fe/H] of $-$0.60, $-$0.45, $-$0.30 and +0.10. The rHeB branch is well defined for low metallicities and degrades with higher values, as well as drifting to the red. The population of AGB stars is dominated by intermediate age stars for low metallicities, but decreases in number at higher metallicities. The population regions outlined in Figure \ref{iac_regions} are marked. } \label{iac} \end{figure} The region below the rHeB contains stars with ages between 150 Myrs and 1 Gyr, a young population, but not the stars involved in HII regions or any emission line signatures of star formation. As they are younger than the typical intermediate age population (e.g., AGB stars), we have labeled this region 'juvenile' stars. Stars older than one Gyr will occupy the AGB and RGB sections of the CMD. The stars with ages between one and 8 Gyrs dominate the AGB region, thus the ratio of AGB to RGB stars is a measure of this epoch of star formation.
However, this region is also highly dependent on the metallicity, where lower metallicity populations contain more stars in the AGB region. The effect of metallicity can be seen in Figure \ref{iac}, where four simulations of varying ending [Fe/H] are shown. Metallicity effects are most prominent for the rHeB and AGB populations. Lastly, there is the region below $M_I = -4$ (the tip of the RGB) that is the classic old RGB. The blueward side of the old RGB is fixed by the metallicity of the initial stellar population. As the metallicity, and age, increases for later generations, those stars occupy the redder portion of the CMD. Low metallicity, old stars can occupy the AGB region, but the number of AGB stars decreases with increasing metallicity. Thus, the ratio of the AGB region to the old RGB region, combined with the position of the rHeB, is a measure of the rate of chemical evolution of a galaxy and its current metallicity. Interpretation of the various regions is dependent on the chemical history of the galaxy. To demonstrate this effect, we varied the final metallicities between [Fe/H]$=-0.6$ and +0.1 for a population with a history of constant star formation and an initial metallicity of [Fe/H]=$-$1.5. The results are shown in Figure \ref{iac} for the four different ending [Fe/H] values. The first characteristic to note is that the position of very young stars (less than 15 Myrs, the bMS) is independent of metallicity. On the other hand, the colors of the bHeB and rHeB branches are mildly dependent on metallicity. In fact, the rHeB is a feature that only exists for metallicities less than $-$0.3 and its mean color is sharply defined by the current metallicity. Other features to note are the broadening of the RGB with increasing metallicity, and the decreasing importance of AGB stars with metallicity.
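The region boundaries described above can be encoded as simple cuts in the $M_I$ versus $V-I$ plane. The sketch below uses only the boundaries stated explicitly in the text (bMS: $V-I < 0.0$; rHeB: $M_I < -4$ and $1 < V-I < 2$); the bHeB wedge and the AGB/juvenile/RGB boundaries are polygons in practice, so the remaining branches are lumped together here for illustration:

```python
# Crude CMD-region classifier based on the cuts quoted in the text.
# The true bHeB/AGB/RGB boundaries are wedges and parallelograms; this
# rectangle-cut version is a rough illustrative approximation only.
def classify_cmd_region(m_i, v_i):
    """Assign a star to a crude CMD region from its M_I and V-I color."""
    if v_i < 0.0:
        return "bMS"            # tip of the blue main sequence, < 15 Myrs
    if m_i < -4.0 and 1.0 < v_i < 2.0:
        return "rHeB"           # red helium-burning branch, 15-150 Myrs
    if m_i < -4.0:
        return "other bright"   # bHeB wedge / luminous AGB, not separated here
    return "RGB/juvenile"       # below the tip of the RGB

print(classify_cmd_region(-5.0, 1.5))   # -> rHeB
```

Counting stars per region in this way is what underlies the fractional population comparisons discussed below.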
For the range of expected metallicities (LSBs range from [Fe/H] = $-$1.0 to $-$0.3 based on oxygen abundance analysis, McGaugh 1994; Kuzio de Naray, McGaugh \& de Blok 2004), the morphology of the RGB and AGB is fairly constant. The juvenile population also remains well defined over these metallicities, with the dividing line between the juvenile and RGB populations unchanged by variation in metallicity. Changes in star formation history will work to increase (or decrease) the numerical proportions of the various ages, but will not alter the position of the various regions in color-luminosity space. A complicating factor for the simulations is the ratio of so-called light or $\alpha$ elements, typically expressed as $\alpha$/Fe, in a stellar population. Supernovae are the main contributors of metallicity to a stellar population; however, Type Ia and Type II SN contribute differing amounts of Fe, where Type Ia SN overproduce Fe with respect to $\alpha$ elements. Thus, the ratio of $\alpha$ elements to Fe is a function of the number of Type II versus Type Ia supernovae in a galaxy's past, with Type Ia SN producing extra amounts of Fe and driving the ratio downward. Since the main source of free electrons in stellar atmospheres is Fe, stars with high $\alpha$/Fe ratios will have hotter (i.e., bluer) colors. Ellipticals tend to have high $\alpha$/Fe ratios (typically near 0.3, compared to the solar value of 0.0, S{\'a}nchez-Bl{\'a}zquez \it et al. \rm 2006), primarily due to their evolution being dominated by a rapid burst of star formation at early epochs. As Type Ia SN require at least a Gyr to develop their white dwarf companions, a rapid burst of star formation will leave a stellar population deficient in Fe (i.e., with a high $\alpha$/Fe ratio); high fractions of $\alpha$/Fe therefore indicate shorter duration times for star formation.
Constant star formation, such as found for the Milky Way, allows for the build-up of Type Ia SN and, therefore, lower $\alpha$/Fe values. We investigated the effects of variation in $\alpha$/Fe in Paper III (Schombert \& McGaugh 2014a) using $\alpha$-enhanced isochrones from the BaSTI group (Pietrinferni \it et al. \rm 2004). Increasing the $\alpha$/Fe ratio by 0.3 from solar resulted in an integrated $B-V$ color 0.03 bluer. The $V-I$ isochrones are approximately 0.02 bluer around the red clump, decreasing to zero at the turn-off point. Similar changes were observed for the bHeB and rHeB positions in the $\alpha$-enhanced isochrones. The expectation for the star formation history of LV and LSB dwarfs is that their past SFR's are more similar to the Milky Way than to ellipticals, so expected values for $\alpha$/Fe should range from solar (i.e., zero) to slightly less than zero. For example, Lapenna \it et al. \rm (2012) found the youngest stars in the LMC to be slightly under solar ($\alpha$/Fe=$-$0.1). If the stars in the LSB galaxies in our sample have similar ratios, then the effect of $\alpha$/Fe on the synthetic CMD's will be quite small (less than 0.01 in $V-I$) and comparison with LV dwarfs (with near solar values) is appropriate. In order to compare the deeper CMD's of nearby dwarfs to our three LSB galaxies, we have outlined a completeness region (see Figure \ref{ic2574}) that includes 95\% of the stars in our target galaxies. The same completeness region is applied to the CMD's taken from the Extragalactic Distance Database (EDD, Jacobs \it et al. \rm 2009) and to the synthetic CMD's generated from IAC-STAR. Thus, it is important to remember that the fractional values quoted in the following discussions do not represent percentages with respect to the entire galaxy stellar population. They refer only to the completeness region, although an extrapolation to the total population could be made with some simple assumptions about the distribution of faint stars. 
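As a concrete illustration of this bookkeeping, the sketch below computes the fraction of a star catalog inside a simple completeness cut. The magnitude limit and the mock catalog are hypothetical placeholders; the actual completeness boundary is a color--magnitude region, not a single magnitude limit.

```python
# Illustrative sketch only: the true completeness region is a
# color-magnitude boundary; a plain magnitude cut stands in for it here.
def completeness_fraction(abs_mags, mag_limit=-2.5):
    """Fraction of catalog stars brighter than an assumed completeness limit."""
    inside = [m for m in abs_mags if m < mag_limit]
    return len(inside) / len(abs_mags)

# Mock catalog of absolute I magnitudes (hypothetical values)
catalog = [-6.0, -5.2, -4.1, -3.0, -2.6, -2.0, -1.5, -1.0]
frac = completeness_fraction(catalog)  # fraction inside the cut
```

Any fractions quoted relative to this region could then be extrapolated to the full population only by assuming a form for the faint-end luminosity function.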
While the depth of the CMD's for our three LSB galaxies does not approach the depth of other CMD studies in galaxies (i.e., a main sequence and turnoff, Jacobs \it et al. \rm 2009), a clear red helium-burning sequence (rHeB) is visible as well as the top of the blue main sequence (bMS, also called the blue plume), along with a significant intermediate age AGB population (see Figure \ref{ic2574}). The rHeB branch is of particular interest for it only develops in metal-poor populations ([Fe/H] $< -0.3$) and the number of stars along the branch is a direct measure of the star formation rate for the last 100 Myrs (see \S2.8). The stars in the bMS are consistent with a young, high mass OB star population, the stars responsible for the low level H$\alpha$ emission in LSB galaxies. All the stars found in F814W ($I$) are detected in F555W ($V$) with the exception of five stars. These five stars are detected in F814W between $M_I=-4$ and $-$6, but are not visible in F555W, implying that their colors are greater than 2, typical of extreme AGB stars. The bHeB population is poorly defined in $V-I$, but is clearer in $U-V$ (see \S2.9). The LSB CMD's most closely resemble the CMD's of LV starburst dwarfs from McQuinn \it et al. \rm (2010) and the ANGST survey (Dalcanton \it et al. \rm 2009) (comparison CMD's can be found at the Extragalactic Distance Database, Jacobs \it et al. \rm 2009). In particular, the morphology of the CMD in our LSB sample closely resembles the morphology of the CMD from IC 2574 (McQuinn \it et al. \rm 2010), one of the faintest (and lowest metallicity) of their sample. A comparison between the F415-3 and IC 2574 CMD's is seen in Figure \ref{ic2574}, where the IC 2574 data contains 158,000 stars and, thus, is displayed using a logarithmic Hess diagram overlaid with individual datapoints in the CMD regions of low density. Regions of particular interest in the star formation history are marked. 
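The color bound for the five $I$-only detections mentioned above follows from a one-line inequality. Taking an illustrative F555W completeness limit of $M_{V,{\rm lim}} \approx -2$ (an assumed value, not a measured quantity for these frames), a star detected at $M_I = -4$ but missing in $V$ must satisfy
\[
V - I \;=\; M_V - M_I \;>\; M_{V,{\rm lim}} - M_I \;\approx\; (-2) - (-4) \;=\; 2 ,
\]
with the bound growing even stronger toward $M_I = -6$.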
As can be seen in Figure \ref{ic2574}, the dwarfs in the EDD catalog display all the CMD features for stellar populations with a range of ages, such as a blue main sequence, an old red giant branch (divided by the tip of the RGB at $M_I=-4$), blue and red helium-burning sequences (bHeB and rHeB), a red clump (RC) and an asymptotic giant branch (AGB). Features in common with F415-3 are a distinct rHeB branch and AGB population. A bMS is evident in both galaxies, but that feature in F415-3 is broader due to increased photometric errors near the completeness limit. We note that the LSB galaxies in our sample have very little resolution of the old RGB populations. Several bMS tracks are visible in IC 2574, indicating bursts of star formation on timescales of 10 Myrs. Similar features are not seen in F415-3, probably due to small number statistics. \begin{figure}[!ht] \centering \includegraphics[scale=1.0,angle=0]{ic2574.pdf} \caption{\small A comparison of the CMD's for F415-3 and IC 2574 from Dalcanton \it et al. \rm (2009). Several CMD morphology features are evident in both galaxies. Particular regions of interest are marked: the blue main sequence (bMS), the blue and red helium-burning branches (bHeB and rHeB), a ``juvenile'' population (between 100 Myrs and 1 Gyr), the AGB and RGB populations ($\tau > 3$ Gyrs). These regions are defined by comparison to synthetic CMD simulations (see Figure \ref{iac_regions}). The dotted line displays the completeness limit used when comparing fractions of the various CMD features with other CMD's. IC 2574 is displayed with a logarithmic Hess diagram and red symbols for stars in regions of the CMD with few stars. } \label{ic2574} \end{figure} \subsection{CMD Morphology} Using the CMD regions defined in Figure \ref{iac_regions}, we can classify the CMD morphologies of existing HST samples from the EDD (Jacobs \it et al. \rm 2009) for comparison with our LSB galaxies. 
We have divided the existing CMD samples from EDD into young (CMD's with a clear rHeB branch and strong AGB populations, such as IC 2574) and old (ones lacking a rHeB branch, but possibly having a weak bMS population). Examples of old morphologies are DDO 44, DDO 71 and ESO 294-010 from the ANGST survey. In all, 57 CMD's were extracted from the HST archives and the EDD website, 45 classified as young and 12 classified as old. Each CMD of the three LSB galaxies in our sample is analyzed by calculating the number of stars in the six population regions outlined in Figure \ref{iac_regions}. The population percentages are displayed as histograms in Figure \ref{fraction_hist}. The percentage of bMS stars varied from only a few percent for old dwarfs (e.g., DDO 44) to over 40\% for dwarfs such as NGC 3077 and UGC 5336. The galaxies with strong rHeB branches have bMS populations that vary between 10 and 40\%, indicating a connection between the two features, with the bMS being more sensitive to very recent SF. For example, the fraction of stars in the rHeB branch ranges between 10 and 20\% of the population, regardless of the fraction of bMS stars, due to the fact that the bMS fraction varies on very short timescales. The old dwarfs in the EDD sample display strong RGB fractions and weak bMS and rHeB branches. Both young and old EDD dwarfs have similar AGB fractions (suggesting their primary difference is due to their star formation rates over the last Gyr). The galaxies without prominent rHeB branches display the highest concentrations of RGB stars in the completeness region, reinforcing the interpretation that old dwarfs, while often having some current star formation, produced most of their stars over 5 Gyrs ago. Young dwarfs typically have strong bMS, bHeB and rHeB populations (McQuinn \it et al. 
\rm 2010), which agrees with the fact that their current SFR (based on H$\alpha$ values) exceeds the mean past SFR based on dividing their stellar mass by 12 Gyrs (i.e., $<$SFR$>$, see \S3.2). The increasing importance of bMS stars in young dwarf CMD's is reflected in decreasing percentages of RGB stars, i.e., star formation has continued to recent epochs. The constant fraction of rHeB stars indicates that the bursts of star formation responsible for the bMS population are fairly evenly spaced on timescales of 100 to 200 Myrs. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,angle=0]{fraction_hist.pdf} \caption{\small A comparison of the fraction of stars in the six population regions defined in Figure \ref{iac_regions}. In general, young (blue) and old (red) CMD's separate based on the dominance of blue plume features (bMS, bHeB and rHeB) versus RGB fractions. Our three LSB galaxies are indicated and differ from young dwarfs by having stronger bMS and weaker AGB fractions. Incompleteness prevents any strong statements on the RGB population in the LSB galaxies. } \label{fraction_hist} \end{figure} The three LSB galaxies in our sample distinguish themselves from the young dwarfs by having weaker AGB fractions and stronger bMS fractions. The LSB galaxies also have weaker juvenile and RGB fractions; however, the RGB region is undersampled and, even when we apply the same completeness boundaries to all the CMDs, we are hesitant to draw strong conclusions from this trend. The LSB galaxies have bHeB and rHeB branch fractions similar to the star-forming EDD dwarfs, which sample the Gyr timescale of star formation. The difference between young and old populations can be seen more clearly in Figure \ref{fractions}, a comparison of the fraction of bMS, rHeB, juvenile and AGB stars. Old dwarfs were selected by the absence of a distinct rHeB branch, so their low values are unsurprising. 
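The population bookkeeping behind Figure \ref{fraction_hist} can be sketched as follows. The color and magnitude cuts here are crude hypothetical placeholders for the actual region polygons of Figure \ref{iac_regions}, and the four-way split is a simplification of the six regions used in the text:

```python
# Illustrative sketch: classify stars into crude CMD regions and report
# each region's share of the sample. The cuts below are hypothetical
# placeholders, not the polygons actually used in the analysis.
def classify(v_i, m_i):
    """Assign a star to a crude CMD region from its (V-I, M_I) position."""
    if v_i < 0.0:
        return "bMS/bHeB"        # blue plume
    if v_i < 1.0 and m_i < -4.0:
        return "rHeB"            # red helium-burning branch
    if m_i < -4.0:
        return "AGB"             # redder and above the tip of the RGB
    return "RGB/juvenile"        # everything redder and fainter

def region_fractions(stars):
    """Fraction of stars falling in each region, over a (V-I, M_I) list."""
    counts = {}
    for v_i, m_i in stars:
        key = classify(v_i, m_i)
        counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Mock catalog: one star per region
stars = [(-0.2, -5.0), (0.5, -4.5), (1.5, -5.5), (1.2, -3.0)]
fracs = region_fractions(stars)
```

Comparing such fraction dictionaries across galaxies is, in essence, what Figure \ref{fraction_hist} displays.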
Old dwarfs also typically have very weak bMS, bHeB and rHeB populations, signaling very low rates of star formation over the last Gyr. Old dwarfs display a range of juvenile and AGB fractions (anti-correlated), suggesting a continuum driven by a star formation history of increasing SFR from the young to old dwarfs. Even though AGB's are a measure of intermediate age stars, there is a strong anti-correlation between the bMS and the AGB fraction (bottom left panel of Figure \ref{fractions}). We see the dominance of AGB stars in the older CMD's, but the trend of decreasing AGB populations with increasing bMS populations is also evident in the young CMD's. As the AGB stars sample intermediate timescales (3 to 8 Gyrs), we again see a trend of increasing star formation from intermediate ages (although strict interpretation requires comparison to synthetic CMDs, see \S2.6). The ratio of AGB to RGB stars (not shown) increases with larger AGB populations to a maximum of approximately 20\%. The linear behavior of the AGB to RGB relation for young dwarfs may signal a late initial star formation epoch, in agreement with their higher current SFRs compared to their past mean rates. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,angle=0]{fractions.pdf} \caption{\small The relationships between population fractions for the bMS, rHeB, juvenile and AGB portions of the CMD. Low metallicity young dwarfs are shown as open symbols, high metallicity dwarfs as solid symbols. LSB galaxies are indicated by their names. Old dwarfs are identified by their absence of a distinct rHeB branch. Also shown are synthetic CMD simulations with increasing SFR's (dashed green line). } \label{fractions} \end{figure} All three LSB galaxies in our sample have CMD morphologies at the extreme edges of other dwarf CMD's. Our LSB galaxy CMD's typically have stronger very young components (i.e., bMS), with a mean of 45\% compared to the LV dwarf mean value of 20\%. 
The dominance of the bMS population comes as no surprise given the extremely blue colors of most LSB galaxies. And while their HSB cousins also have blue star-forming colors, those colors are typically restricted to the bright star-forming regions. LSB galaxies are unique in that even the regions between the few higher surface brightness knots are blue in optical colors (Schombert, Maciel \& McGaugh 2011). The widely dispersed, and predominantly blue, stellar population is responsible for this effect. LSB galaxies also have weaker intermediate aged components, with an AGB fraction of 5\% compared to LV dwarf values between 10 and 30\%. The interpretation here is that the current SFR is much higher than the SFR over the last 5 Gyrs, although this is not supported by the mean past SFR ($<$SFR$>$) as estimated by the current stellar mass divided by a Hubble time. The AGB fraction is metallicity dependent (see Figure \ref{iac}); however, for the estimated [Fe/H] of our LSB sample (between $-$1.0 and $-$0.6), the AGB fraction should be greater than that of the higher metallicity LV dwarfs. The RGB population in LSB galaxies is also deficient compared to old LV dwarfs, but young LV dwarfs have RGB fractions below 10\%, so this comparison is difficult. Also, the conclusions concerning the $\tau > 8$ Gyrs populations in our LSB galaxies are less secure due to the lack of complete resolution of the old RGB in our CMD's. We deduce the crude characteristics of old stars in LSB's based on this limited resolution, plus the fact that the chemical evolution requires some older population in order to produce even the low [Fe/H] values measured with the rHeB populations (see \S2.8 and Villegas \it et al. \rm 2008). In addition, the pixel-by-pixel surface brightness characteristics (see \S2.3) also match the expectations for an underlying normal older population. Given these limitations, the three LSB galaxies still have very low AGB fractions. 
Their rHeB branch fractions are similar to other young dwarfs (although all star-forming dwarfs have rHeB fractions near 10\%). A more telling diagnostic is the ratio of AGB to bMS for LSB galaxies. While most young, blue dwarfs have an AGB population in proportion to their bMS populations, the three LSB galaxies have significantly higher bMS and bHeB populations compared to their AGB populations. This is particularly significant since the fraction of AGB's increases with decreasing metallicity (based on comparison to synthetic CMD's), yet both LSB and young dwarfs display the opposite trend. Either the star formation rate in LSB's has suddenly increased in the last Gyr (i.e., the current epoch is a special time) or the past star formation rates of LSB's have long been inhibited, perhaps an explanation for their LSB nature as a whole. \subsection{Recent SF and [Fe/H] from the rHeB Branch} The mapping of the most recent star formation has the advantage that the youngest stars have positions on the CMD that are most easily distinguished from other aged populations. In particular, stars with ages less than 100 Myrs are the domain of the bMS, bHeB and rHeB branches (McQuinn \it et al. \rm 2012). Figure \ref{age_color_mag} displays a breakdown of these three young phases of the CMD from an IAC-STAR simulation by age, color and luminosity. Note that a cut of $V-I < 0$ will isolate all stars younger than $10^7$ years old. The timescale between $10^7$ and $10^8$ years is represented by the bHeB and rHeB branches. A star will oscillate between the two branches, with the younger stars at higher luminosities. However, the rHeB branch is much easier to distinguish than the bHeB branch due to confusion in the CMD between bMS and bHeB stars (these two branches are, however, separated in $U-V$ color, see \S2.9). In addition, stars on the rHeB branch display a strong correlation between age and luminosity (see right panel, Figure \ref{age_color_mag}). 
This provides a simple, and direct, measurement of star formation between $10^7$ and $10^8$ years. \begin{figure}[!ht] \centering \includegraphics[scale=0.8,angle=0]{age_color_mag.pdf} \caption{\small A synthetic CMD (IAC-STAR) for a metal-poor stellar population ([Fe/H] = $-$0.4) displaying the color and luminosity for the youngest stars (ages less than 100 Myrs). The bMS contains only stars less than 15 Myrs and can be easily distinguished by a simple cut in color above $M_I = -4$. Older stars occupy the bHeB and rHeB branches (oscillating between the branches). The rHeB branch is easier to identify in the CMD, and the relationship between age and luminosity is linear for stars in the rHeB region of a CMD (dotted line). } \label{age_color_mag} \end{figure} The linear relation between absolute luminosity and age for rHeB branch stars (see Figure \ref{age_color_mag}) allows the distribution of rHeB branch stars on the CMD to be compared with various star formation histories. For example, in Figure \ref{rheb}, the averaged distribution of blue LV dwarfs is compared to an IAC-STAR synthetic CMD using a constant SFR (the results were independent of metallicity). The mean star formation history for young dwarfs is slightly higher for ages greater than 50 Myrs and slightly lower for younger populations, although the range for all LV dwarfs is consistent with approximately constant SF for the last 200 Myrs. \begin{figure}[!ht] \centering \includegraphics[scale=1.0,angle=0]{rheb.pdf} \caption{\small The distribution of stars along the rHeB branch as a function of luminosity. Luminosity along the rHeB branch correlates with age (the youngest stars being the brightest). Thus, the number of stars per luminosity bin is a measure of the star formation history over the last 100 Myrs. The mean distribution for all young EDD dwarfs is shown as the blue curve. A model of constant star formation is shown as the red curve. 
Nearby dwarfs appear to have slightly higher than constant SF at 100 Myrs, dropping below the constant curve in recent epochs. The LSB F415-3 is also shown (green curve), which displays the opposite trend from the EDD dwarfs. } \label{rheb} \end{figure} The SF history of F415-3 (our LSB with the best sampling of the rHeB branch) is similar to other young LV dwarfs. F415-3 has a slightly lower SFR at 100 Myrs, rising to constant SF by 50 Myrs. Indeed, F415-3 is well within the distribution of SF histories of other young dwarfs, which is again surprising considering the very different appearance of LSB dwarfs and LV dwarfs in terms of mean stellar density. If both types of galaxies have similar current SFRs, then their differences lie in their intermediate and older populations, i.e., the mean past SFRs. The current metallicity can also be extracted from the mean position of the rHeB branch. As can be seen in Figure \ref{iac}, the rHeB branch moves redward with increasing [Fe/H]. Calibrating the position of the rHeB branch using synthetic CMD's, we can assign a current metallicity to each CMD ([Fe/H]$_{rHeB}$). The results are shown in Figure \ref{metal}, where the histogram displays the deduced [Fe/H]$_{rHeB}$ values for 45 LV dwarfs with strong rHeB populations. The three LSB galaxies in our sample are also marked in Figure \ref{metal}, with [Fe/H] values of $-$1.0, $-$0.6 and $-$0.7, respectively. This places all three on the low end of the distribution, in line with a history of inhibited star formation and, therefore, a suppressed chemical evolutionary path. This is also in agreement with the typical oxygen abundances deduced from LSB emission lines (McGaugh 1994). \begin{figure}[!ht] \centering \includegraphics[scale=1.0,angle=0]{metal.pdf} \caption{\small The distribution of metallicity (parameterized as [Fe/H]) deduced from the position of the rHeB branches. The histogram displays the [Fe/H] values for 45 EDD dwarfs. The LSB galaxies are labeled by their names. 
All three galaxies display much lower [Fe/H] values (i.e., bluer rHeB branches) than other nearby dwarfs. } \label{metal} \end{figure} \subsection{$U-V$ CMDs} The bMS region of a $U-V$ CMD is relatively insensitive to metallicity effects, as increasing metallicity primarily lowers the peak luminosity of the brightest O stars and the range of blue star absolute luminosity. Age dominates the position of the isochrones on the blue side of the $M_V$ versus $U-V$ diagram (see Figure \ref{uv}). And, unsurprisingly, the bright portion of the blue branch of the CMD can only be explained by very young ($\tau < 5$ Myrs), metal-poor stars. A majority of the blue stars are concentrated along this young isochrone, with only 10\% having $U-V$ colors redder than 0. Where the bMS and bHeB branches are blurred in the $V-I$ CMD due to photometric errors, the bHeB branch separates nicely in $U-V$. Shown in the right panel of Figure \ref{uv} is an IAC-STAR simulation of a constant star formation, [Fe/H]=$-0.4$ population. The $U-V$ colors sampled by our survey explore the stellar populations with ages less than 10 Myrs (i.e., very recent), and the ratio of the bMS versus bHeB regions is a measure of the fraction of 2 Myrs to 10 Myrs stars. Increasing metallicity will lower the fraction of bHeB stars (they become fainter and drop below the completeness line). \begin{figure}[!ht] \centering \includegraphics[scale=0.9,angle=0]{uv.pdf} \caption{\small The $U-V$ CMD for F415-3 compared to an IAC-STAR simulation of enhanced recent star formation ([Fe/H]=$-0.4$). The completeness, bMS and bHeB regions are marked. The bMS and bHeB branches (measuring 2 Myrs and 10 Myrs stars, respectively) are clearer in the $U-V$ plane than in $V-I$, and the ratio of the bMS and bHeB stars will measure recent star formation on timescales of two to 10 Myrs. 
} \label{uv} \end{figure} Using this $U-V$ CMD diagnostic, we find that the three LSB galaxies in our sample have bMS and bHeB fractions of 70\% and 25\% on average. Enhanced recent star formation predicts fractions of 85\% and 10\% for metal-poor populations; constant star formation models predict 80\% and 20\%. This would suggest, as indicated in the previous section, that despite the large fraction of young stars (see Figure \ref{fractions}), there has not been a sharp increase in the SFR in the last 50 Myrs. Rather, this is an indication that the past SFR has been inhibited, such that the blue colors of LSB galaxies are due to a suppressed old population rather than a recent enhanced cycle of current SF. \section{Discussion} Although, due to the distance to our LSB galaxies, their CMD's do not reach absolute limiting magnitudes comparable to other LV dwarfs, the LSB CMD's have many of the same features that LV dwarf CMD's display. In particular, strong signatures of recent star formation with numerous OB stars, very low current [Fe/H] values as deduced from the position of the rHeB population, and a measurable deficiency of intermediate age AGB stars (compared to LV dwarfs). Our analysis can be divided into three sections: 1) pure observables from the spatial and color distribution of the LSB CMD's, 2) empirical comparison to CMD's of other dwarf galaxies and 3) examination of the results from comparison to synthetic CMD simulations. \subsection{H$\alpha$ emission and Mean Surface Brightness} The clearest result from HST stellar photometry of our three LSB galaxies is the significant one-to-one correspondence between the types and luminosity of the resolved stars and global features such as local surface brightness, local color and H$\alpha$ emission. While this was not unexpected, it is direct confirmation that the same star formation processes that dominate normal spirals and irregulars are also found in LSB galaxies (Helmboldt \it et al. \rm 2009). 
For every HII region identified in Schombert, McGaugh \& Maciel (2013) there exists at least one, often several, stars with $U-V$ colors less than $-$1. In addition, the brighter the HII region, the brighter the ionizing stars. Several groupings are identified without H$\alpha$ emission (also identified in ground-based imaging as surface brightness knots), and these regions have a higher fraction of rHeB stars, i.e., stars older than 10 Myrs and non-ionizing. The connection between bright OB stars and ionized gas confirms that star formation in LSB galaxies proceeds in the same fashion as in normal spirals and irregulars, i.e., collapse of a gas cloud, star cluster formation, massive star gas ionization followed by gas blowout. There is no support for earlier speculation that LSB galaxies form stars without massive stars (Meurer \it et al. \rm 2009; Dopcke \it et al. \rm 2013), nor that H$\alpha$ emission in LSB galaxies is due to an exotic ionizing population (e.g., blue HB stars or hot white dwarfs). In addition, the local colors (optical and near-IR) are in direct correspondence with the colors and luminosity of the local brightest stars. Regions that are blue in mean color are also rich in blue stars. High surface brightness regions are also dominated by the brightest stars (both blue and red). Blue regions with low mean surface brightness have an excess of faint, widely dispersed bMS and bHeB stars, suggesting strong kinematic mixing on short timescales in LSB galaxies. This is in agreement with typical gas velocity dispersion estimates of 8 km/sec (Kuzio de Naray \it et al. \rm 2006), which corresponds to stellar motions of 5 pc per Myr, more than sufficient to scatter O stars from their original regions of intense star formation. Lastly, the total luminosity of the resolved population is roughly 10\% of the total luminosity of the galaxy. 
This agrees with the estimates from simulations of synthetic CMD's, where the ratio between the completeness region and the fainter stars was between 5 and 20\%, highly variable due to small number statistics of the brightest stars. In addition, the stellar counts per pc$^2$ are in excellent agreement with the local mean surface brightnesses when scaled to the total luminosity of the galaxy (see Figure \ref{sfb_map}). This implies there are no hidden stellar populations in LSB galaxies; the resolved bright stars trace the same structure as the underlying stellar contribution. As suspected from the lack of CO and far-IR detections, LSB's have almost no extinction or significant absorption over the scale sizes of the large star-forming regions (Lynn \it et al. \rm 2005; Hinz \it et al. \rm 2007). The difference between the lowest surface brightness regions and the higher surface brightness knots is due primarily to the brightest blue stars. There are numerous B stars in low surface brightness regions, indicating that their ages differ only by a few tens of Myrs from the bright cluster regions. In other words, there are no distinct old regions in LSB galaxies; either strong mixing with the star forming regions is implied or there are simply no obvious regions with stars older than 5 Gyrs, in conflict with the observed chemical evolution. \subsection{Comparison to LV dwarfs CMDs} The $V-I$ CMD has been explored for dozens of LV dwarf galaxies, some as deep as the turn-off point, but all fainter than the limiting magnitude of our three LSB galaxies. The CMD features of LV dwarfs vary widely, as their star formation histories range from very little recent star formation (e.g., IC 3104 and DDO 88) to galaxies with a full range of main sequence, post-main sequence, RGB and post-RGB features. In terms of general features, our best CMD, that of F415-3, contains all the same CMD features as those of LV dwarfs such as IC 2574 (see Figure \ref{ic2574}). 
In particular, we observe the bMS, bHeB, rHeB and AGB populations. Our CMD's do not extend significantly below the tip of the RGB to fully sample the red clump or lower RGB populations. Compared to 57 LV dwarf CMD's, we find that the fraction of bMS stars is much higher in our LSB galaxies than in LV dwarfs. Star forming LV dwarfs have bMS fractions between 5 and 20\%, whereas LSB galaxies have bMS fractions greater than 30\% (see Figure \ref{fraction_hist}). Despite the high fraction of bright blue stars, the total numbers are in agreement with H$\alpha$ fluxes. For example, in F415-3, log $L_{H\alpha}$ is 38.5, which is equivalent to a cluster slightly larger than $10^5$ $M_{\sun}$. With a normal IMF, this population would have between 800 and 1,000 stars brighter than $M_I = -3$. In F415-3, there are 850 stars brighter than this luminosity, which we interpret to mean that there is nothing particularly unusual about the upper end of the IMF in LSB galaxies. This is in agreement with the one-to-one correspondence found between H$\alpha$ emission and the ionizing population in LV dwarfs (McQuinn \it et al. \rm 2010), but in contradiction with the observations of Meurer \it et al. \rm (2009), who found a deficiency in the upper mass of the IMF for LSB galaxies (see also Lee \it et al. \rm 2004). However, this high bMS fraction must be reconciled with the extremely low SFRs for LSB galaxies, typically less than $10^{-3}$ $M_{\sun}$ per yr. Since the current SFR is low, the only way to produce a high bMS fraction is to suppress the fraction of stars in the older populations. In other words, the stellar population in LSB galaxies appears to be predominantly very young, with an underpopulated stellar population older than 2 to 3 Gyrs. This is confirmed by the fraction of AGB stars in LSB galaxies, a measure of intermediate age populations. For LV dwarfs, the fraction of AGB stars ranges from 20 to 30\% for non-star forming dwarfs to 10\% for star forming dwarfs. 
The LSB galaxies have AGB fractions below 10\%, indicating a much lower SFR in the distant past, which, of course, is in agreement with their abnormally low stellar densities. This fraction is even more abnormal because the LSB galaxies in our study are particularly low in [Fe/H], which should strengthen the AGB population fraction. Thus, the abnormally blue colors of LSB galaxies are due as much to an absence of old red stars as to an overabundance of young blue stars. There is very little evidence of any stellar population older than 5 Gyrs; however, our data do not sample the RGB, where these stars would lie in the CMD. An unusually low fraction of older stars is deduced from the lack of their color signatures in broadband imaging (Pildis, Schombert \& Eder 1997) and the close correspondence between the resolved stars and the underlying colors. Some older population must exist, for the metallicity values, while low, imply the existence of some earlier enriching stars (see below; for example, old globular clusters are found in large LSB galaxies, Villegas \it et al. \rm 2009). While a high fraction of bMS and rHeB stars implies either a recent surge of SFR or a highly suppressed SFR in the past, these conclusions uncomfortably imply that we live in a special epoch with respect to the star formation history of LSB galaxies. That is, we are seeing their first epochs of increasing star formation from a large reservoir of gas reserves. This is possible, as our sample size is small and selected from a survey of blue PSS-II plates. However, more likely, either due to internal or external inhibitors, we are seeing a global history of steady, but very slowly increasing, star formation where surface brightness is an aftereffect of a very low past total star formation rate. Once a galaxy has achieved a certain value of SF per pc$^2$, the mean surface brightness of the galaxy exceeds a visibility threshold and the galaxy becomes detectable for our surveys and catalogs. 
Confirmation of this idea would be the detection of numerous pure-HI systems with very little past star formation and, therefore, extremely low surface brightness (Davies \it et al. \rm 2004). \subsection{Comparison to Synthetic CMDs} To extract the star formation and chemical evolution history from CMD's, one must make statistical comparisons to artificial CMD's generated with known metallicity and SFRs as a function of population age. Several examples are shown in Figure \ref{iac}. The two regions where comparison to synthetic CMD's is most informative are the rHeB (see \S2.8) and the $U-V$ CMD (as there are very few $U-V$ CMD's in the literature). The result we deduce from the rHeB region is that LSBs have very nearly constant SF for the past 10$^8$ years, slightly stronger than the typical LV dwarf, but well within the range of recent SFRs for LV dwarfs of a range of surface brightnesses. We note that although the star formation has been nearly constant, this constant rate is still at extremely low absolute levels. Proposing even lower SFRs in the past is in conflict with the deduced mean $<$SFR$>$ from the stellar mass of HSB and LSB galaxies. For comparison, Figure \ref{sfr} displays the current SFR in three samples of irregular galaxies (our LSB sample; van Zee 2001; Hunter \& Elmegreen 2004) versus the stellar mass of each galaxy divided by 12 Gyrs (a measure of the mean SFR, $<$SFR$>$, in a galaxy, where galaxy luminosity is converted to stellar mass with an M/L value). The M/L values used were deduced by McGaugh \& Schombert (2015), modified for color following the prescription given in de Blok \& McGaugh (1997), and varied, at most, from 0.4 to 0.6. Per unit mass, the LSB galaxies are typically a factor of ten lower in current SFR than other HSB irregulars. 
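The $<$SFR$>$ axis of Figure \ref{sfr} is simple arithmetic, sketched below. The luminosity and current SFR are illustrative numbers, not measurements of any galaxy in the sample; M/L $= 0.5$ sits in the 0.4--0.6 range quoted in the text.

```python
# Sketch of the mean past SFR comparison: <SFR> = stellar mass / 12 Gyr,
# with mass = M/L * luminosity. All input values below are illustrative.
def mean_past_sfr(luminosity_lsun, m_to_l=0.5, age_gyr=12.0):
    """Mean past SFR in M_sun/yr from a luminosity in L_sun."""
    stellar_mass = m_to_l * luminosity_lsun   # in solar masses
    return stellar_mass / (age_gyr * 1e9)     # M_sun per yr

mean_sfr = mean_past_sfr(1.2e8)   # hypothetical L = 1.2e8 L_sun
current_sfr = 1e-3                # typical LSB current SFR quoted in the text
ratio = current_sfr / mean_sfr    # < 1 places a galaxy below the unity line
```

A ratio below unity corresponds to the region of Figure \ref{sfr} occupied by the LSB galaxies.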
Moreover, their $<$SFR$>$ values are typically higher than their current SFR values, indicating past rates that are higher than the current value (the unity line is shown in Figure \ref{sfr}, where most HSB irregulars are above the line and LSB galaxies are below the line). \begin{figure}[!ht] \centering \includegraphics[scale=0.9,angle=0]{sfr.pdf} \caption{\small The mean past rate of star formation, $<$SFR$>$, the stellar mass divided by 12 Gyrs, versus the current SFR from H$\alpha$ luminosities (in $M_{\sun}$ per yr). The HSB samples of van Zee (2001) and Hunter \& Elmegreen (2004) are shown as red symbols, LSB galaxies are blue, and the three LSB galaxies in this study are magenta. HSB irregulars display higher SFRs compared to past rates (explaining their brighter surface brightnesses), whereas LSB galaxies typically have lower current SFRs compared to past rates, in conflict with their missing AGB populations. } \label{sfr} \end{figure} The global properties of LSB galaxies (compared to HSB irregulars) are difficult to reconcile with the deficiency of AGB stars in our three LSB galaxies. While we lack resolution of truly old stars on the RGB, a deficiency in AGB stars combined with the higher past SFRs deduced from Figure \ref{sfr} implies a contrived star formation history (one with an initial burst sufficient to produce most of the current stellar mass, then a long quiescent period, followed by a current epoch of slowly increasing SFR). While this may very well be the case, and is consistent with the low stellar density distribution, the mechanism for this type of star formation history, given the stochastic appearance of current star formation, would require some external process to moderate the quiescent phases (McQuinn \it et al. \rm 2015). In addition, the position of the rHeB sequence in the $V-I$ CMD is very sensitive to the metallicity of the younger stars.
Calibrating the position to [Fe/H] (using a standard enrichment scenario), we find the current [Fe/H] values for our three LSB galaxies to be $-$1.0, $-$0.6 and $-$0.7, respectively. This is on the low side for [Fe/H] of LV dwarfs by the same method (their mean value is $-$0.4). Given the assumption of lower past SFR's in LSB galaxies compared to LV dwarfs, this is an unsurprising result and reflects the abnormally low current [Fe/H] values, i.e., the chemical history of LSB galaxies is strongly suppressed. Lastly, the $U-V$ CMD allows for a comparison of the bMS and bHeB populations, which are blurred in the $V-I$ CMD. The ratio of these populations, compared to synthetic CMD's, confirms the result from the rHeB population, i.e., that the current SFR in LSBs has been roughly constant for the last few 100 Myrs. Constant star formation is not a new conclusion for gas-rich galaxies (West \it et al. \rm 2009; Hunter \it et al. \rm 2011), and the bluest gas-rich galaxies require a rising SFR to explain their global colors. Schombert \& McGaugh (2014a) found that recent weak bursts on timescales of 500 Myrs would satisfy the colors of LSB galaxies, rather than a uniformly rising SFR (see also Boissier \it et al. \rm 2003). It may be a coincidence that the three LSB galaxies in this sample display rHeB populations indicating the onset of a recent burst (thus making them more visible in a blue-oriented visual survey). \section{Summary} The results from the CMD's of Local Volume dwarf galaxies have historically been a shocking revelation of the stochastic and random nature of the star formation history in dwarf galaxies. The uniform nature of their global colors and H$\alpha$ luminosities with mass (Hunter \& Elmegreen 2004) is replaced with a highly variable history of brief, weak bursts.
While our study lacks the luminosity depth and high number of resolved stellar sources to match the detail of most LV dwarf CMD's, the similarity between the CMD's for LSB galaxies and LV dwarfs indicates that they have analogous recent star formation histories. The primary difference between LV dwarfs and our LSB galaxies is the underpopulated older population ($\tau >$ 3 Gyrs), implied by the overabundance of young stars and, yet, a low current SFR. Most studies of LSB galaxies in the past have tested and dismissed various explanations for their LSB nature based on stellar population variations (extinction by dust, unusual IMF's, exotic stellar populations). In contrast, this study indicates that LSB galaxies are low in surface brightness simply because they have lower stellar densities due to a widely dispersed stellar population. In other words, they have fewer stars per pc$^2$ than their HSB cousins, and this underpopulation occurs for both recent and older stars. Their current and past star formation rates are typically a factor of ten less than those of their HSB cousins, which is clearly reflected in their different mean surface brightnesses. However, a kinematic mechanism is required to disperse the younger stellar populations to maintain the uniform color mixing from high to low surface brightness regions within the LSB galaxies themselves. This resurrects the idea that LSB galaxies are "young" (Vorobyov \it et al. \rm 2009; Gao \it et al. \rm 2010), not necessarily young in their formation epoch, for their mean metallicities indicate some small amount of chemical evolution over the last 10 Gyrs. Rather, they are "young" in the sense that a majority of their stars formed after 5 Gyrs (McGaugh \& Bothun 1994; Jimenez \it et al. \rm 1998; Schombert, McGaugh \& Eder 2001), and their chemical history is the weakest of any galaxy type.
We also note that the analysis of the rHeB population in \S2.8 opens up a powerful technique to study galaxies outside the Local Volume, out to 20 Mpc or more. There is a great deal of information in the resolved stellar populations brighter than $M_I = -4$, the canonical value for the tip of the RGB. Ultimately, the conclusion from H$\alpha$ and CMD studies is that star formation is suppressed in LSB galaxies. However, the dilemma remains as to why the SFR should be so low in galaxies so rich in the neutral gas that is the fuel for star formation. The evidence points to the star formation efficiency as the difference between HSB and LSB galaxy types. While star formation has been directly linked to gas density (Kennicutt 1989), numerous secondary factors vary with surface brightness. For example, it has been shown that star formation efficiency decreases with surface brightness (Leroy \it et al. \rm 2008), driven in part by lower metallicities in the gas clouds (Shi \it et al. \rm 2014). This results in a fluctuating (bursty) and spatially irregular star formation history (McQuinn \it et al. \rm 2015), such as that seen in most LSB galaxies. \acknowledgements Software for this project was developed under NASA's AIRS and ADP Programs. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program 12859. This work has made use of the IAC-STAR Synthetic CMD computation code. IAC-STAR is supported and maintained by the computer division of the Instituto de Astrofísica de Canarias.
\section{Introduction} In this article, the fundamental gap of a domain is the difference between the first two eigenvalues of the Laplacian with zero Dirichlet boundary conditions. For convex domains in $\mathbb R^n$ or $\mathbb S^n$, $n\geq 2$, it is known from \cite{fundamental, seto2019sharp,dai2018fundamental,he2017fundamental} that $\lambda_2 - \lambda_1 \ge 3\pi^2/D^2$, where $D$ is the diameter of the domain. In hyperbolic space, this quantity behaves very differently from the Euclidean and spherical cases. Recently, the authors showed \cite{BCNSWW2} that for any fixed $D>0$, there are convex domains with diameter $D$ in $\mathbb H^n$, $n\geq 2$, such that $D^2(\lambda_2 - \lambda_1)$ is arbitrarily small. Since convexity does not provide a lower bound, one naturally asks if imposing a stronger notion of convexity, such as horoconvexity, would imply an estimate for $D^2(\lambda_2 - \lambda_1)$ from below. Recall that for a domain with smooth boundary, convexity corresponds to nonnegative principal curvatures of the boundary, while horoconvexity corresponds to principal curvatures greater or equal to 1. We show that the quantity $D^2 (\lambda_2 - \lambda_1)$ still tends to zero for all horoconvex domains in hyperbolic space when the diameter tends to infinity. \begin{theorem} \label{gap-horoconvex} For every $n \geq 2$, there exists a constant $C(n)$ such that the Dirichlet fundamental gap of every horoconvex domain $\Omega$ with diameter $D \geq 4 \ln 2$ satisfies \[ \lambda_2(\Omega)- \lambda_1(\Omega) \leq \frac{C(n)}{D^3}. \] In particular, as $D \to \infty$, the quantity $(\lambda_2 - \lambda_1) D^2$ tends to $0$. \end{theorem} We prove this by first obtaining the following estimate for the fundamental gap for special horoconvex domains, the geodesic balls in hyperbolic space. 
\begin{theorem} \label{thm-main} Let $B_R$ be the geodesic ball of radius $R$ in $\mathbb H^n$ and $\lambda_i(B_R)$ be the $i$-th eigenvalue of the Laplace operator $-\Delta$ in $B_R$ with Dirichlet boundary conditions. Then there is a constant $C(n)$ so that \begin{equation} \lambda_2 (B_R) - \lambda_1(B_R) \leq \frac{C(n)}{R^3}. \label{gap-l-u} \end{equation} In particular, as $R \to \infty$, the quantity $(\lambda_2 - \lambda_1) R^2$ tends to $0$. \end{theorem} In the authors' earlier work \cite{BCNSWW2}, it was shown that, for any fixed $D>0$, one can find a domain $\Omega$ for which $(\lambda_2 (\Omega) - \lambda_1 (\Omega)) D^2$ can be made arbitrarily small. The domains $\Omega \subset \mathbb H^n$ in \cite{BCNSWW2} are convex, but not horoconvex. Their first eigenfunction is not log-concave either. In contrast, note that the first eigenfunction of $B_R$ is log-concave (see \cite[Corollary 1.1]{IST} and Lemma~\ref{lem: log}). On the one hand, while the log-concavity of the first eigenfunction plays a very important role in estimating the fundamental gap of convex domains in the Euclidean space and sphere, Theorem~\ref{thm-main} shows that the log-concavity of the first eigenfunction in the hyperbolic case does not imply a lower bound estimate for $(\lambda_2 - \lambda_1) D^2$. On the other hand, we believe that $D^2$ is not the appropriate factor for domains in the hyperbolic space, and we conjecture that, for all horoconvex domains $\Omega \subset \mathbb H^n$, we have $\lambda_2 (\Omega) - \lambda_1 (\Omega) \ge c(n, D)$ for some function $c(n, D)$ depending on the dimension and diameter, leading to a lower bound on the fundamental gap with an appropriate dependence on the diameter. This is true for balls in $\mathbb H^n$, see \eqref{ball-gap-lower}. Theorem~\ref{thm-main} is proved by transforming the eigenvalue equation of balls to the eigenvalue equation of a Schr\"odinger operator.
As a result, we obtain some immediate upper and lower bound estimates on the first two eigenvalues of balls, which improve and simplify earlier estimates on the first eigenvalues of balls. See Sections 2, 3. To prove Theorem~\ref{gap-horoconvex}, we exploit the fact that all big horoconvex domains contain a large ball \cite{Borisenko-Miquel}, see Theorem~\ref{horoconvex-ball}. We then combine Theorem~\ref{thm-main} with Benguria and Linde's \cite{Benguria-Linde2007} comparison result for the fundamental gap to conclude the proof, see Section 4. \section{Basic Facts on Eigenvalues of Balls in $\mathbb H^n$} Here we review some basic facts about the first two Dirichlet eigenvalues of balls in the hyperbolic space. By transforming the eigenvalue equation of balls to its Schr\"odinger form, we obtain some immediate upper and lower bound estimates on the first two eigenvalues which improve and simplify earlier estimates. \subsection{The first eigenvalue} In this section, let $\lambda_i$ be the $i$-th eigenvalue of the Laplacian, with Dirichlet boundary conditions, of geodesic balls with radius $r$ in $\mathbb H^n$. By \cite{Chavel, Benguria-Linde2007}, the first eigenvalue $\lambda_1$ is the first eigenvalue of the $1$-dimensional problem on $[0, r]$ \begin{equation} u'' + \frac{n-1}{\tanh t} u' + \lambda u =0, \ \ u(r) =0, \ u'(0) =0. \label{1-eigenvalue-ball} \end{equation} With the change of variable $u(t) = (\sinh t)^{\frac{1-n}{2}} \bar u(t)$, we have the associated Schr\"odinger equation \begin{equation} \label{1-Sch-ball} -\frac{d^2}{dt^2}\bar u + \frac{n-1}{4} \left( n-1 +\frac{n-3}{\sinh^2 t} \right) \bar u= \lambda \bar u \end{equation} with Dirichlet boundary conditions at $0$ and $r$, and $\lambda_1$ is the first eigenvalue of \eqref{1-Sch-ball}. Note that the nonconstant potential term changes sign at $n =3$. We immediately notice that, when $n=3$, $\lambda_1 = 1+ \frac{\pi^2}{r^2}$.
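As a numerical sanity check of the $n=3$ observation (an illustrative aside, not part of the original argument): for $n=3$ the potential in \eqref{1-Sch-ball} is the constant $1$, so $\lambda_1$ is the first Dirichlet eigenvalue of $-\frac{d^2}{dt^2}+1$ on $[0,r]$, namely $1+\pi^2/r^2$. A minimal shooting-method sketch (step counts and bracketing values are ad hoc choices):

```python
import math

def shoot(lam, r, n_steps=2000):
    """Integrate u'' = (V - lam) u with the constant n = 3 potential V = 1,
    u(0) = 0, u'(0) = 1 (RK4), and return u(r)."""
    h = r / n_steps
    u, v = 0.0, 1.0
    def f(u, v):
        return v, (1.0 - lam) * u
    for _ in range(n_steps):
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = f(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = f(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2.0 * k2u + 2.0 * k3u + k4u) / 6.0
        v += h * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
    return u

def first_eigenvalue(r):
    """Bisect on lam: u(r) stays positive below the first eigenvalue and
    turns negative just above it, so the sign change locates lambda_1."""
    lo = 1.0 + 1e-9                       # just above the bottom of the potential
    hi = 1.0 + (1.5 * math.pi / r) ** 2   # safely beyond the first eigenvalue
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if shoot(mid, r) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r = 2.0
print(first_eigenvalue(r), 1.0 + math.pi ** 2 / r ** 2)  # both ~3.467
```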
Since $\sinh^{-2} t \ge \sinh^{-2} r$ on $(0, r]$, the ODE comparison theorem implies: \begin{lemma} For $n>3$, \begin{equation} \label{lambda1-lower-bound-4} \lambda_1 > \frac{(n-1)^2}{4}+ \frac{\pi^2}{r^2} + \frac{(n-1)(n-3)}{4\sinh^2 r}. \end{equation} For $n=2$, \[ \lambda_1 \le \frac 14+ \frac{\pi^2}{r^2} - \frac1{4 \sinh^{2} r}. \] \end{lemma} The lower bound is sharper than the estimate of \cite[(1.7)]{Artamoshin2016}, which followed the earlier estimate of McKean \cite{McKean1970}. It is also an improvement over \cite[Theorem 5.6]{Savo2009} and an earlier estimate in \cite[Theorem 5.2]{Gage1980} when $r$ is large and $n>3$. The upper bound in the case $n=2$ is the one found by Gage \cite[Theorem 5.2]{Gage1980}. The bounds in the other direction do not follow directly from the Schr\"odinger equation \eqref{1-Sch-ball}. In \cite[Theorem 5.6]{Savo2009}, the following uniform upper and lower bounds for the first eigenvalue $\lambda_1$ are obtained for all $n \ge 2$: \begin{equation} \frac{(n-1)^2}{4}+ \frac{\pi^2}{r^2} - \frac{4\pi^2}{(n-1)r^3} \le \lambda_1 \le \frac{(n-1)^2}{4}+ \frac{\pi^2}{r^2} + \frac{C}{r^3}, \label{lambda1-all-n} \end{equation} with $C = \frac{\pi^2 (n^2-1)}{2} \int_0^\infty \frac{t^2}{\sinh^2 t} dt = \frac{\pi^4 (n^2-1)}{12}$. We will use this lower bound and improve the upper bound in Section 3. \subsection{The second eigenvalue} The second eigenvalue $\lambda_2$ is studied in \cite[Lemma 3.1]{Benguria-Linde2007}, where it is shown that it is the first eigenvalue of the following equation (see also \eqref{eigenvalue-ball} with $k=1, l=1$): \begin{equation} u'' + \frac{n-1}{\tanh t} u' - \frac{n-1}{\sinh^2 t} u + \lambda u =0, \ \ \ u(r) =0, \ u (t) \sim t \ \mbox{as}\ t \rightarrow 0.
\label{2-eigenvalue-ball} \end{equation} Again with the change of variable $u(t) = (\sinh t)^{\frac{1-n}{2}} \bar u(t)$, we have the associated Schr\"odinger equation \begin{equation} \label{2-Sch-ball} -\frac{d^2}{dt^2} \bar u+ \frac{n-1}{4} \left( n-1 +\frac{n+1}{\sinh^2 t} \right) \bar u= \lambda \bar u \end{equation} with Dirichlet boundary conditions at $0$ and $r$, where the second eigenvalue $\lambda_2$ is the first eigenvalue of \eqref{2-Sch-ball}. Using once more the ODE comparison theorem, we obtain \begin{equation} \lambda_2 \ge \frac{(n-1)^2}{4}+ \frac{\pi^2}{r^2} + \frac{n^2-1}{4\sinh^2 r}. \label{lambda2-lower} \end{equation} To find an upper bound estimate for $\lambda_2$, we will seek in the next section an upper bound for the first eigenvalue of a more general Schr\"odinger equation and, in doing so, we will simultaneously obtain an upper bound for $\lambda_1$, slightly improving the one in \eqref{lambda1-all-n}. From \eqref{1-Sch-ball} and \eqref{2-Sch-ball} we immediately have the following lower bound on the fundamental gap of the ball $B_R \subset \mathbb H^n$ for all $n \ge 2$: \begin{equation} \label{ball-gap-lower} \lambda_2 - \lambda_1 \ge \frac{n-1}{\sinh^2 R}.\end{equation} \section{First Eigenvalue Upper Bound for Schr\"odinger Equation} \label{sec3} Let $\lambda_1^\alpha$ be the first eigenvalue of the following equation \begin{equation} -\frac{d^2}{dt^2} u+ \frac{n-1}{4} \left( n-1 +\frac{\alpha}{\sinh^2 t} \right) u= \lambda u \label{Sch-alpha} \end{equation} with Dirichlet boundary conditions at $0$ and $r$. \begin{prop} \label{prop-1} For $\alpha \ge 0$, we have \begin{equation} \lambda_1^\alpha \le \frac{(n-1)^2}{4}+ \frac{\pi^2}{r^2} + \frac{(n-1) \alpha}{12 r^3} \pi^4.
\label{lambda1-alpha} \end{equation} In particular, the first two eigenvalues of the geodesic ball of radius $r$ in $\mathbb{H}^n$ satisfy \begin{eqnarray} \lambda_1 & \le & \frac{(n-1)^2}{4}+ \frac{\pi^2}{r^2} + \frac{(n-1) (n-3)}{12r^3} \pi^4, \ \mbox{for} \ n \ge 3, \label{lambda1-ub}\\ \lambda_2 & \le & \frac{(n-1)^2}{4}+ \frac{\pi^2}{r^2} + \frac{(n-1) (n+1)}{12r^3} \pi^4, \ \mbox{for} \ n \ge 2. \label{lambda2-upper} \end{eqnarray} \end{prop} The upper bound \eqref{lambda1-ub} improves the upper bound in \cite[Theorem 5.6]{Savo2009}, see \eqref{lambda1-all-n}. \begin{proof} The first Dirichlet eigenvalue of a Schr\"odinger operator $-u''+ Vu$ is the minimizer of the Rayleigh quotient \[ R[u]=\frac{\int |u'|^2+ V u^2 }{\int u^2}, \] among all nonzero $u$ with $u(0) = u(r)=0$. The equation \eqref{Sch-alpha} with $\alpha =0$ has its first eigenfunction equal to $v = \sqrt{\frac{2}{r}} \sin (\pi t/r)$, normalized so that $\int_0^r v^2 dt = 1$. Therefore, by inserting $v$ into the Rayleigh quotient associated to \eqref{Sch-alpha}, we find \begin{align*} \lambda_1^\alpha &\leq \frac{(n-1)^2}{4}+ \int_0^r \left( \frac{dv}{dt}\right)^2 dt +\int_0^r \frac{(n-1)\alpha}{4(\sinh t)^2} v^2 \,dt \\ &= \frac{(n-1)^2}{4}+ \frac{\pi^2}{r^2} + \frac{(n-1) \alpha}{4} \int_0^r \frac{v^2}{(\sinh t)^2} \,dt. \end{align*} Using $|\sin x| \leq |x|$, we have \[ r^2 \int_0^r \left( \frac{\sin\left( \pi t/r \right)}{\sinh t} \right)^2 dt \leq \pi^2 \int_0^r \left( \frac{t}{\sinh t} \right)^2 dt < \pi^2 \int_0^\infty \left( \frac{t}{\sinh t} \right)^2 dt = \frac{ \pi^4}{6}. \] This gives $\int_0^r \frac{v^2}{(\sinh t)^2} \,dt < \frac{\pi^4}{3r^3}$, hence \eqref{lambda1-alpha}. \end{proof} Combining the lower bound in \eqref{lambda1-all-n} with \eqref{lambda2-upper} gives the estimate \eqref{gap-l-u} in Theorem~\ref{thm-main}.
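The closed-form value $\int_0^\infty (t/\sinh t)^2\, dt = \pi^2/6$ used in the proof (and in the constant $C$ of \eqref{lambda1-all-n}) can be checked numerically; a small self-contained sketch we add for illustration, where the grid size and cutoff are ad hoc choices:

```python
import math

def integrand(t):
    # (t / sinh t)^2 with the removable singularity at t = 0 filled in:
    # t / sinh t -> 1 as t -> 0.
    if t == 0.0:
        return 1.0
    return (t / math.sinh(t)) ** 2

def integral(upper=40.0, n=200000):
    """Composite trapezoidal rule for int_0^upper (t/sinh t)^2 dt.
    The integrand decays like t^2 e^{-2t}, so the tail beyond 40 is negligible."""
    h = upper / n
    total = 0.5 * (integrand(0.0) + integrand(upper))
    for i in range(1, n):
        total += integrand(i * h)
    return total * h

print(integral(), math.pi ** 2 / 6.0)  # both ~1.644934
```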
\section{Horoconvex domains in $\mathbb H^n$} A stronger definition of convexity in the hyperbolic space considers horospheres as natural analogues of Euclidean hyperplanes supporting a convex domain: \begin{definition} A set $\Omega \subset \mathbb H^n$ is called horoconvex if, for every point $p \in \partial \Omega$, there exists a horosphere ${\mathcal{H}}$ through $p$ such that $\Omega$ lies in the horoball bounded by ${\mathcal{H}}$. \end{definition} Recall that a \emph{horosphere} is a sphere with center on the ideal boundary of $\mathbb H^n$ and that a \emph{horoball} is a domain whose boundary is a horosphere. When $\Omega$ is a compact domain with smooth boundary in the hyperbolic space of constant negative curvature $-1$, the domain $\Omega$ is horoconvex if and only if all principal curvatures of the boundary hypersurface are greater than or equal to one. As a special case, the geodesic ball $B_R$ of radius $R$ is horoconvex, as each of the principal curvatures of its boundary is equal to $\coth R$, and $\coth R > 1$ for all $R>0$. Finally, for any compact domain, recall that its inradius is the radius of the largest ball contained in the domain, and that its circumradius is the radius of the smallest ball containing the domain. Part of a result of Borisenko-Miquel \cite[Theorem 1]{Borisenko-Miquel} states the following: \begin{theorem} \cite{Borisenko-Miquel} \label{horoconvex-ball} Let $\Omega$ be a compact horoconvex domain in $\mathbb{H}^n$ with inradius $r$ and circumradius $R$. Setting $\tau = \tanh \frac{r}{2}$, we have \begin{equation} \label{eq:BM} R-r \leq \ln \frac{(1+ \sqrt{\tau})^2}{1 + \tau} < \ln 2, \end{equation} and this bound is sharp. \end{theorem} An immediate consequence of (\ref{eq:BM}) is that the diameter of the domain satisfies $D\le 2R \le 2r+2\ln 2$. We are now ready to prove Theorem~\ref{gap-horoconvex}. \begin{proof} Let $\Omega \subset \mathbb H^n$ be a horoconvex domain of diameter $D$.
Choose $R_\Omega$ such that the ball of radius $R_\Omega$ satisfies $\lambda_1(B_{R_\Omega}) = \lambda_1(\Omega)$. Theorem~\ref{horoconvex-ball} implies that $\Omega$ contains a ball of radius $r$ with $r \geq \frac{D}{2} - \ln 2$. By domain monotonicity of the first eigenvalue, $ R_\Omega \ge \frac{D}{2} - \ln 2$, hence \begin{equation} R_\Omega \ge \frac{D}{4}, \label{R-Omega} \end{equation} when $D \ge 4 \ln 2$. Using Benguria-Linde's upper bound on the second eigenvalue \cite{Benguria-Linde2007}, we have $$\lambda_2(\Omega) - \lambda_1(\Omega) \leq \lambda_2(B_{R_\Omega}) - \lambda_1(B_{R_\Omega}).$$ Applying the estimates \eqref{gap-l-u} and \eqref{R-Omega} then gives \[ \lambda_2(\Omega) - \lambda_1(\Omega) \le \frac{C(n)}{R_\Omega^3} \le \frac{64\, C(n)}{D^3}, \] which, after renaming the constant, concludes the proof of Theorem~\ref{gap-horoconvex}. \end{proof} \section*{Appendix}{{\bf{Small balls and log-concavity of eigenfunction of geodesic balls in $\mathbb M^n_K$}}} \ To round out the discussion on the fundamental gap of balls in the hyperbolic space, we include here an observation on the fundamental gap of balls of small radii, as well as a simple argument proving the log-concavity of the first eigenfunction of geodesic balls in simply connected Riemannian manifolds with constant negative sectional curvature. \subsection{The gap of small balls in negatively curved manifolds} Let $\mathbb M^n_K$ be the simply connected Riemannian manifold with constant sectional curvature $K$. Here, we assume that $K$ is negative and write $K =- k^2,\, (k>0)$. Denote by $\lambda_i(n,k,r)$ the eigenvalues of the Laplacian for geodesic balls with radius $r$ in $\mathbb M^n_K$ with Dirichlet boundary condition.
By separation of variables, see \cite{Chavel, Benguria-Linde2007}, the eigenvalues $\lambda_i(n,k,r)$ are eigenvalues of \begin{equation} u'' + \frac{(n-1)k}{ \tanh (kt)} u' - \frac{l(l+n-2)k^2}{\sinh^2 (k t)} u + \lambda u =0, \label{eigenvalue-ball} \end{equation} where $l = 0, 1,2, \cdots$, with boundary condition $u'(0) =0$ for $l =0$, $u(t) \sim t^l$ as $t \to 0$ for $l>0$, and $u(r) =0$. By scaling, this immediately gives \cite[Lemma 4.1]{Benguria-Linde2007}, for $c>0$, \begin{equation} \lambda_i (n, \frac 1c k, cr ) = c^{-2}\lambda_i (n, k, r). \end{equation} Hence \begin{equation} \lambda_i (n, 1, r ) = r^{-2}\lambda_i (n, r, 1). \end{equation} Therefore, for small balls in $\mathbb H^n$, the value $r^2 \lambda_i (n, 1, r )$ is close to the corresponding one in the Euclidean space, as one would expect. Namely, \begin{lemma} \label{gap-small-ball} $$\lim_{r \to 0} r^2 \lambda_i (n, 1, r ) = \lambda_i (n, 0,1) = r^2 \lambda_i (n, 0,r),$$ and \begin{equation} \lim_{r \to 0} r^2 \left(\lambda_2(n,1, r) - \lambda_1(n,1, r) \right) = r^2(\lambda_2 (n, 0,r)- \lambda_1 (n, 0,r)) = j_{\frac n2, 1}^2 - j_{\frac n2 -1, 1}^2, \end{equation} where $j_{p,k}$ is the $k$-th positive zero of the Bessel function $J_p(x)$. \end{lemma} \subsection{The first eigenfunction for balls} The first eigenfunction of balls is purely radial, so it is straightforward to show that it is log-concave, as in the Euclidean and spherical cases. \begin{lemma} \label{lem: log} The first eigenfunction $u_1$ of \eqref{1-eigenvalue-ball} is strictly log-concave. \end{lemma} This is in \cite[Corollary 1.1]{IST}, where more general elliptic equations with power are considered. For convenience, we give a simple and direct proof here. \begin{proof} First we show that $u_1$ is strictly decreasing. Multiplying both sides of \eqref{1-eigenvalue-ball} by $\sinh^{n-1} t$, we have \[ (u_1' \sinh^{n-1}t)' = -\lambda_1 u_1 \sinh^{n-1}t <0. \] Since $u_1'(0) =0$, we have $u_1'(t) <0$ for $t \in (0, r)$.
Let $\varphi = (\log u_1)'$. Then $\varphi(0) =0$, $\varphi <0$ on $(0, r)$, and \[ \varphi' = \frac{u_1''}{u_1} - \left( \frac{u_1'}{u_1}\right)^2 = -\frac{n-1}{\tanh t} \varphi -\lambda_1 -\varphi^2. \] Taking the limit as $t \to 0$ gives $\varphi'(0) = -\lambda_1 - (n-1) \lim_{t \to 0} \frac{\varphi}{\tanh t} = -\lambda_1 - (n-1) \varphi'(0)$. Hence, $\varphi'(0) <0$. Now, we claim that $\varphi' (t) < 0$ on $[0, r)$. Otherwise, there exists $t_1 \in (0, r)$ such that $\varphi' <0$ on $[0, t_1)$, $\varphi' (t_1) = 0$ and $\varphi'' (t_1) \ge 0$. Note that $\varphi''$ satisfies \[ \varphi'' = \frac{n-1}{\sinh^2 t} \varphi -\frac{n-1}{\tanh t} \varphi' - 2\varphi \varphi'. \] Evaluating the two sides of the equation at $t_1$ gives \[ 0 \le \varphi'' (t_1) = \frac{n-1}{\sinh^2 t_1} \varphi (t_1) <0. \] This is a contradiction. \end{proof} \bibliographystyle{plain}
\section{Introduction} Clouds play a major role in regulating weather and climate on Earth, by modulating the incoming solar radiation. Despite substantial scientific advances in the last decades, clouds still represent the primary source of uncertainty in climate projections \citep{pincus2018shallow,IPCC}. A key challenge is to understand how entrainment of dry air at the edges of ice-free clouds affects the size distribution and number density of droplets \citep{Blyth1993}. This is important because the amount and distribution of liquid water determine cloud optical properties \citep{Kokhanovsky2004} and precipitation efficiency \citep{Burnet2007}. As a consequence, weather and climate models are sensitive to how entrainment at the cloud edge is \eng{parameterized}{parameterised} \citep{mauritsen2012tuning}. The optical properties of clouds are of crucial importance for the radiation balance of the Earth's climate system \citep{caldwell2016quantifying,dufresne2008assessment}. The size distribution and number density of droplets are key ingredients, because the light-extinction coefficient of the cloud is determined by the number of droplets it contains, times their average surface area \citep{Kokhanovsky2004}. Regarding precipitation efficiency, the mechanism of rain formation in ice-free clouds is a longstanding unresolved problem in atmospheric physics \citep{Grabowski2013}. A broad initial droplet-size distribution is needed to activate the collisions and coalescences of droplets that are necessary for the rapid onset of rain formation observed empirically in warm clouds \citep{devenish2012droplet,szumowski1997microphysical,szumowski1998microphysical}. Microscopic droplets grow by condensation of water \eng{vapor}{vapour}, or shrink by evaporation.
Yet when a droplet-containing parcel does not mix with its surroundings, condensation causes the droplet-size distribution to narrow, because the diffusional growth rate of a droplet is inversely proportional to its radius, so that small droplets grow faster than large ones \citep{rogers1989short}. Turbulence has a strong influence upon droplet condensation and evaporation in clouds \citep{bodenschatz2010can}. Turbulent mixing causes water-\eng{vapor}{vapour} and liquid-water content to fluctuate on different length and time scales \citep{Vaillancourt:2001}. As a consequence, nearby droplets may have experienced quite different growth histories. The droplets of a cloudy parcel that is mixed with dry air evaporate at different rates, so that the droplet-size distribution broadens \citep{Lanotte:2009,sardina2015continuous,sardina2018broadening,li2020condensational}. Cloud-resolving simulations \citep{hoffmann2019entrainment} show that this mechanism can have a strong effect on droplet-number densities in turbulent clouds. Entrainment of dry air at the cloud edge triggers rapid changes in the droplet-size distribution \citep{perrin2015lagrangian,abade2018broadening}. As turbulence mixes dry air into the cloud, it creates long-lasting regions of dry air where droplets can rapidly evaporate. Droplets in regions with higher water-\eng{vapor}{vapour} concentration, by contrast, may saturate the air and survive for a much longer time; see Figure~\figref{schematic}({\bf a}). While droplets evaporate, turbulence mixes the cloud at many length scales, ranging from the Kolmogorov length -- of the order of \eng{millimeters}{millimetres} \citep{devenish2012droplet} -- to a few \eng{kilometers}{kilometres} \citep{rogers1989short}. Evaporation and mixing on a spatial scale $\ell$ depend on the turbulent mixing time $\tau_\ell$ at that scale, and upon the relevant thermodynamic time scale $\tau$.
Their ratio forms a Damk\"ohler number ${\rm Da} = \tau_\ell/\tau$ \citep{dimotakis2005turbulent}. The thermodynamic process \eng{parameterized}{parameterised} by ${\rm Da}$ is limited by the rate of mixing if ${\rm Da}$ is large, and limited by thermodynamics if ${\rm Da}$ is small. The dynamics at large Damk\"ohler numbers is referred to as inhomogeneous mixing \citep{Baker1980}, where some droplets evaporate completely while others do not evaporate at all. Small-Da mixing is called homogeneous, where droplets evaporate at approximately the same rate, so that the droplet-size distribution remains narrow. The notion of homogeneous and inhomogeneous mixing remains debated \citep{tolle2014effects}, but it can be given a precise meaning in terms of the fraction $P_\text{e}(t)$ of droplets that have completely evaporated at time $t$. However, it is at present not understood which mechanisms and parameters determine the transition from homogeneous to inhomogeneous mixing. Several authors have attempted to describe turbulent mixing in terms of one Damk\"ohler number. \cite{lehmann2009homogeneous} used a combined microphysical response time $\tau_{\rm r}$, a function of the two thermodynamic time scales of the problem, $\tau_{\rm d}$ (droplet evaporation) and $\tau_{\rm s}$ (supersaturation relaxation). \cite{lu2018microphysical}, by contrast, suggest that $\tau_{\rm d}$ should be used to formulate a single-parameter criterion for inhomogeneous mixing. The direct numerical simulations (DNS) by \cite{kumar2018scale} explored how the nature of mixing changes as $\tau_{L}/\tau_{\rm r}$ increases with the linear size $L$ of the simulated domain. However, no clear sign of inhomogeneous mixing was found. The authors mention that this may be a consequence of the thermodynamic setup used. Another possibility is that the simulated system was just not large enough.
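To make the scale dependence of mixing concrete: with the inertial-range estimate $\tau_\ell \sim (\ell^2/\varepsilon)^{1/3}$ for the mixing time at scale $\ell$, each thermodynamic time scale yields its own Damk\"ohler number. A minimal sketch, where the dissipation rate and the time scales $\tau_{\rm d}$, $\tau_{\rm s}$ are illustrative assumptions rather than measured values:

```python
def mixing_time(ell, epsilon):
    """Inertial-range eddy turnover time at scale ell: (ell^2 / epsilon)^(1/3)."""
    return (ell ** 2 / epsilon) ** (1.0 / 3.0)

def damkoehler(ell, epsilon, tau_thermo):
    """Da = tau_ell / tau for a given thermodynamic time scale."""
    return mixing_time(ell, epsilon) / tau_thermo

# Illustrative values (assumptions, not measurements): dissipation rate
# epsilon = 1e-3 m^2/s^3, droplet evaporation time tau_d = 10 s,
# supersaturation relaxation time tau_s = 2 s.
epsilon, tau_d, tau_s = 1e-3, 10.0, 2.0
for ell in (0.01, 1.0, 100.0):  # metres: centimetre to 100 m scales
    print(ell, damkoehler(ell, epsilon, tau_d), damkoehler(ell, epsilon, tau_s))
```

Since $\tau_\ell \propto \ell^{2/3}$, both Damk\"ohler numbers grow with scale, so mixing that is homogeneous at \eng{centimeter}{centimetre} scales can become inhomogeneous at the largest scales of the same cloud.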
\cite{Jeffery2007} \eng{emphasized}{emphasised} that evaporation and mixing cannot be described by a single Da alone, because there are two thermodynamic time scales, leading to three key non-dimensional~parameters, ${\rm Da}_{\rm d}$, ${\rm Da}_{\rm s}$, and the volume fraction $\chi$ of cloudy air. However, Jeffery only studied the case ${\rm Da}_{\rm d}={\rm Da}_{\rm s}$ and did not discuss the implications of varying the Damk\"ohler numbers separately. \cite{pinsky2016theoretical3} and \cite{pinsky2018theoretical4} modified an equation for advected liquid water \citep{rogers1989short} into a diffusion-reaction equation for the droplet-size distribution, and \eng{emphasized}{emphasised} the significance of a non-dimensional~parameter, their potential evaporation parameter. \begin{figure}[t] \centering \includegraphics{Fries_Fig1.eps} \caption{\label{fig:schematic} ({\bf a}) Effect of mixing on local droplet populations (schematic). On large spatial scales ($\ell_1$ and $\ell_2$) mixing and evaporation are not yet complete, but some smaller regions (of size $\ell_3$) are in local steady states: saturated air with droplets, or subsaturated air without droplets. ({\bf b}) Initial cloud configuration in our model. Before mixing, moist air and droplets reside in a $w \times L\times L$ slab, contained in a cubic domain of side length $L$. Regions with dry air are dashed. The solid line is the initial profile of supersaturation $s$ (see text). } \end{figure} Here we derive a statistical model for evaporation and turbulent mixing at the cloud edge from first principles. The model takes into account the multi-scale turbulent dynamics, as turbulent clouds can have large Reynolds numbers; ${\rm Re}\sim 10^7$ is a conservative estimate for convective clouds \citep{devenish2012droplet}. The model quantitatively predicts the outcomes of the DNS by \cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale}.
Furthermore, the model shows, in accordance with \cite{Jeffery2007}, that the evolution of the droplet-size distribution is determined by ${\rm Da}_{\rm d}$, ${\rm Da}_{\rm s}$, and $\chi$. We find that the potential evaporation parameter of \cite{pinsky2016theoretical3} and \cite{pinsky2018theoretical4} is simply determined by the ratio of the Damk\"ohler numbers. Finally, the model allows one to interpret the results of in-situ measurements of clouds at the \eng{centimeter}{centimetre} scale \citep{beals2015holographic}.\\[0.1cm] \section{Method} We study mixing and evaporation of cloud droplets by mixing moist air, droplets, and dry air in a cubic domain of side length $L$ with periodic boundary conditions. Initially, the saturated or slightly supersaturated moist air with supersaturation $s_{\rm c} \geq 0$ is contained in a $w\times L \times L$ slab together with $N_0$ randomly distributed water droplets, Figure~\figref{schematic}({\bf b}). The dry air, initially outside the slab, has negative supersaturation $s_{\rm e} < 0$. The mixing is driven by statistically stationary homogeneous isotropic turbulence, with turbulent kinetic energy $\k$ and mean dissipation rate per unit mass $\varepsilon$ \citep{frisch_1995}. Essentially the same setup is used in the DNS of \cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale}, which allows us to understand their simulation results in terms of our model. \subsection{Microscopic equations} For the turbulent mixing, we start from the microscopic equations of \cite{kumar2018scale} and earlier studies \citep{vaillancourt2002microscopic,paoli2009turbulent,Lanotte:2009,kumar2014lagrangian,perrin2015lagrangian}. We neglect buoyancy, particle inertia and settling, temperature changes due to vertical motion, temperature and pressure dependencies of the thermodynamic coefficients, and subsume the joint effects of temperature and water \eng{vapor}{vapour} into a single supersaturation field. 
We denote fluid velocity and pressure by $\ve{u}(\ve{x},t)$ and $p(\ve{x},t)$, and supersaturation by $s(\ve{x},t)$. The spatial position of droplet $\alpha$ is $\ve x_\alpha(t)$, its radius equals $r_\alpha(t)$, and the index $\alpha$ ranges from $1$ to $N_0$. We non-dimensionalise~as follows: $\ve{u}' = \ve{u}/U$, $\ve{x}' = \ve{x}/(U\tau_L)$, $t' = t/\tau_L$, $p' = p/(\varrho_{\rm a} U^2)$, $\ve{x}_\alpha' = \ve{x}_\alpha/(U\tau_L)$, $s' = s/|s_{\rm e}|$, $r_\alpha' = r_\alpha/r_0$, and $s_{\rm c}' = s_{\rm c}/|s_{\rm e}|$. Here $U = \sqrt{2\,\k/3}$ is the turbulent r.m.s. velocity, $\tau_L = \k/\varepsilon$ is the large-eddy time [$\propto L/U$ if the size of the largest eddies is of the order $L$ \citep{pope2000turbulent}], $\varrho_{\rm a}$ is the reference mass density of air, $r_0 = [N_0^{-1}\sum_{\alpha = 1}^{N_0} r_\alpha(0)^3]^{1/3}$ is the initial volume-averaged droplet radius, and $|s_{\rm e}|$ is the (positive) subsaturation of the air outside the initial cloud slab. Dropping the primes, the microscopic equations take the non-dimensional~form: \begin{subequations} \label{eq:model} \begin{eqnarray} &&\hspace*{-5mm} \tfrac{{\rm D}}{{\rm D}t}\ve u = - \nabla p +\text{Re}_L^{-1} \nabla^2 \ve{u} \label{veldl} \,\,\,\mbox{with}\,\,\,\nabla\cdot \ve{u}\! = \!0\,, \label{eq:veldlred} \\ &&\hspace*{-5mm} \tfrac{{\rm D}}{{\rm D}t}s = ({\rm Re}_L {\rm Sc})^{-1}\nabla^2 s - \text{Da}_\text{s} \,\chi V \overline{r_\alpha(t) s(\ve x_\alpha,t)} \,,\label{eq:supsatdlred} \\ &&\hspace*{-5mm} \tfrac{{\rm d}}{{\rm d}t}{\ve x}_\alpha= \ve{u}(\ve x_\alpha,t) \,,\label{eq:droplmovdlred}\\ &&\hspace*{-5mm} \tfrac{{\rm d}}{{\rm d}t}{r}_\alpha= \text{Da}_\text{d} \,s(\ve x_\alpha(t),t) /(2r_\alpha) \quad\mbox{if $r_\alpha> 0 $}\,. \label{eq:droplraddlred} \end{eqnarray} \end{subequations} Equations~\eqnref{veldlred} are the incompressible Navier-Stokes equations, with Lagrangian time derivative $\tfrac{{\rm D}}{{\rm D}t}=\partial_t \!+\! (\ve{u}\cdot \nabla) $. 
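For concreteness, the non-dimensionalisation above can be written out as a small function. The following is a minimal Python sketch; the mapping is exactly the one defined in the text, while all numerical inputs in the usage example are hypothetical values chosen only for illustration:

```python
import math

def nondimensionalise(k, eps, rho_a, r0, s_e, u, x, t, p, s, r):
    """Map dimensional variables to the primed variables used in the text."""
    U = math.sqrt(2.0 * k / 3.0)   # turbulent r.m.s. velocity
    tau_L = k / eps                # large-eddy time
    return {
        "u": u / U,                # velocity
        "x": x / (U * tau_L),      # position
        "t": t / tau_L,            # time
        "p": p / (rho_a * U**2),   # pressure
        "s": s / abs(s_e),         # supersaturation
        "r": r / r0,               # droplet radius
    }

# hypothetical inputs: k = 1.5 m^2/s^2 gives U = 1 m/s and tau_L = 1 s
primed = nondimensionalise(1.5, 1.5, 1.2, 1e-5, -0.3, 2.0, 3.0, 4.0, 6.0, -0.15, 5e-6)
```

The same scalings apply to the droplet positions $\ve x_\alpha$ and to $s_{\rm c}$.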
In DNS of Equations~\eqnref{model}, a forcing is imposed to sustain stationary turbulence. This is not necessary in the model introduced below, and we therefore do not include a forcing term in Equation~\eqnref{veldlred}. Equation~\eqnref{supsatdlred} is the equation for supersaturation. The first term on its r.h.s. describes diffusion of the supersaturation $s(\ve x,t)$. The second term models the effect of condensation and evaporation through the average $\overline{r_\alpha (t)s(\ve x_\alpha,t)}$, taken over all droplets in the vicinity of~$\ve x$. Droplets are advected by the turbulent flow (Equation~\eqnref{droplmovdlred}), and Equation~\eqnref{droplraddlred} models how the droplet radius $r_\alpha$ changes due to evaporation and condensation. When a droplet has evaporated completely, we impose that it must remain at $r_\alpha=0$. In the derivation of Equation~\eqnref{droplraddlred}, $s(\ve x_\alpha(t),t)$ enters as the supersaturation at distances from the droplet that are much larger than the droplet's radius \citep{rogers1989short}. In other words, Equation~\eqnref{droplraddlred} relies on a scale separation between droplet sizes and lengths that characterise supersaturation fluctuations generated by turbulent mixing \citep{Vaillancourt:2001}. As a consequence, droplets interact locally with the supersaturation field over finite volumes through the average $\overline{r_\alpha (t)s(\ve x_\alpha,t)}$ in Equation~\eqnref{supsatdlred}. Further details regarding Equations~\eqnref{model} are given in the Supporting Information (SI), where we also show how to derive Equations~\eqnref{model} from the more detailed dynamical description of \cite{Vaillancourt:2001,vaillancourt2002microscopic}, \cite{kumar2014lagrangian,kumar2018scale}, and \cite{perrin2015lagrangian}. An advantage of writing the dynamics in non-dimensional~form is that this determines the independent non-dimensional~parameters. 
First, ${\rm Re}_L=\tfrac{2}{3} \k^2/(\varepsilon\nu)$ is the turbulence Reynolds number \citep{pope2000turbulent}, where $\nu$ is the kinematic viscosity of air. The Schmidt number is defined as $\text{Sc} = \nu/\kappa$, where $\kappa$ is the diffusivity of~$s$. The volume fraction of cloudy air is given by $\chi = w/L$, and $V = [L/(U \tau_L)]^3$ is the dimensionless domain volume. The Damk\"ohler number $\text{Da}_\text{s}$ is defined as $\text{Da}_\text{s} = \tau_L/\tau_{\rm s}$, where $\tau_\text{s} = (4 \pi A_2 A_3 \varrho_\text{w} n_0 r_0)^{-1}$ is the supersaturation relaxation time. This is the time scale at which the supersaturation decays towards saturation, assuming that all droplets have the same radius $r_0$, and for droplet number density $n_0= N_0/(wL^2)$. Further, $\varrho_\text{w}$ is the density of pure liquid water, and $A_2$ and $A_3$ are thermodynamic coefficients, specified in the SI. The Damk\"ohler number $\text{Da}_\text{d}$ is defined as $\text{Da}_\text{d} = \tau_L/\tau_{\rm d}$, where $\tau_\text{d} = r_0^2/(2 A_3 |s_{\rm e}|)$ is the droplet evaporation time, the time that it takes for a droplet of radius $r_0$ to evaporate completely in a constant ambient supersaturation $s_{\rm e} < 0$. The Damk\"ohler numbers determine the extent to which saturation and droplet evaporation are limited by the rate of mixing \citep{dimotakis2005turbulent}. Saturation is mixing limited at large $\text{Da}_\text{s}$, since regions with evaporating droplets -- created by mixing of cloudy and dry air at the time scale $\tau_L$ -- saturate faster than $\tau_L$. When $\text{Da}_\text{s}$ is small, by contrast, evaporating droplets saturate the air more slowly than it is mixed. In this case, saturation is not limited by the rate of mixing. Droplet evaporation is mixing limited at large $\text{Da}_\text{d}$, since droplets then evaporate more rapidly than the exposure to subsaturated air changes. 
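The time scales and non-dimensional numbers just defined can be collected in a short sketch. In the Python version below, the parameter values, including the thermodynamic coefficients $A_2$ and $A_3$, are placeholders rather than the values used in the cited DNS; the sketch also makes explicit that the ratio $\text{Da}_\text{d}/\text{Da}_\text{s} = \tau_{\rm s}/\tau_{\rm d}$ scales inversely with the liquid-water content $\propto \varrho_{\rm w} n_0 r_0^3$:

```python
import math

def time_scales(k, eps, nu, A2, A3, rho_w, n0, r0, s_e):
    """Time scales and non-dimensional numbers of the mixing problem."""
    tau_L = k / eps                                            # large-eddy time
    tau_s = 1.0 / (4.0 * math.pi * A2 * A3 * rho_w * n0 * r0)  # supersaturation relaxation
    tau_d = r0 ** 2 / (2.0 * A3 * abs(s_e))                    # droplet evaporation time
    Re_L = (2.0 / 3.0) * k ** 2 / (eps * nu)                   # turbulence Reynolds number
    return {"tau_L": tau_L, "tau_s": tau_s, "tau_d": tau_d,
            "Re_L": Re_L, "Da_s": tau_L / tau_s, "Da_d": tau_L / tau_d}

# placeholder parameter values in SI units (NOT those of the cited DNS)
pars = dict(k=1.0, eps=1e-3, nu=1.5e-5, A2=350.0, A3=1e-10,
            rho_w=1e3, n0=1e8, r0=1e-5, s_e=-0.3)
ts = time_scales(**pars)
R = ts["Da_d"] / ts["Da_s"]   # equals tau_s/tau_d; halves when n0*r0 doubles
```

Doubling the droplet-number density $n_0$ (or, more generally, the liquid-water content) halves the ratio $\mathscr{R}$, while $\text{Da}_\text{d}$ is unchanged.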
At small $\text{Da}_\text{d}$, mixing is faster than droplet evaporation, and droplets tend to evaporate mainly after the system has been mixed. The droplets then experience roughly the same supersaturation as they evaporate. In the limit ${\rm Re}_L \rightarrow \infty$, three key non-dimensional~parameters remain in Equations~\eqnref{model}: $\chi$, $\text{Da}_\text{d}$, and $\text{Da}_\text{s}$. The system can be parameterised by $\chi$, $\text{Da}_\text{d}$, and \begin{equation} \label{eq:ratio} \mathscr{R}=\text{Da}_\text{d}/\text{Da}_\text{s}\,. \end{equation} In this way, the scale dependence of the mixing process is contained in $\text{Da}_\text{d}$ only. The Damk\"ohler-number ratio $\mathscr{R}$ is inversely proportional to the density of liquid water in the cloud slab; it regulates the moisture of the mixing process (details in SI). The bifurcation between moist steady states, where droplets remain in saturated air, and dry steady states, where all droplets have evaporated completely \citep{Jeffery2007,Kumar:2013,pinsky2016theoretical2}, occurs at a critical value of $\mathscr{R}$, $\mathscr{R}_{\rm c}$. The critical ratio $\mathscr{R}_{\rm c}$ can be computed from the conserved quantity ${\theta = -\langle s(t)\rangle -\tfrac{2 \chi}{3 \mathscr{R}} [1-P_\text{e}(t) ]\langle r^3(t)\rangle}$, which is analogous to the liquid-water potential temperature at fixed altitude \citep{gerber2008entrainment,lamb2011physics,kumar2014lagrangian}. Here, $P_\text{e}(t)$ is the fraction of completely evaporated droplets, the fraction of droplets for which $r_\alpha(t)=0$ at time $t$. Furthermore, $\langle s(t)\rangle = V^{-1}\int_V s(\ve x,t)\,\mathrm d \ve x$ is the volume average of supersaturation, and $\left \langle r^3(t) \right \rangle = \{[1-P_{\rm e}(t)]N_0\}^{-1}\sum_{\alpha = 1}^{N_0} r_\alpha(t)^3$ is the mean cubed droplet radius conditioned on $r_\alpha(t) > 0$ by the factor $[1-P_{\rm e}(t)]^{-1}$. 
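Since $\theta$ is conserved, its initial value already decides between the two steady states: $\theta < 0$ leads to a moist and $\theta > 0$ to a dry steady state. A minimal numerical check (Python) for the dry and moist cases of Table \tabref{simulations}; note that backing out $\langle s(0)\rangle$ from the quoted $\mathscr{R}_{\rm c}$, instead of computing it from the initial profile, is an assumption on our part:

```python
def theta_init(chi, s0_mean, R):
    """Initial value of the conserved quantity theta."""
    return -s0_mean - 2.0 * chi / (3.0 * R)

def R_crit(chi, s0_mean):
    """Critical Damkohler-number ratio, obtained from theta = 0."""
    return -2.0 * chi / (3.0 * s0_mean)

chi, R_c = 0.428, 0.859                   # dry/moist cases of Table 1
s0 = -2.0 * chi / (3.0 * R_c)             # <s(0)> backed out from R_c (assumption)
theta_dry = theta_init(chi, s0, 2.52)     # R > R_c: theta > 0, dry steady state
theta_moist = theta_init(chi, s0, 0.76)   # R < R_c: theta < 0, moist steady state
```

The signs reproduce the dry and moist labels of Table \tabref{simulations}.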
The conservation of $\theta$ can be concluded by integrating Equation~\eqnref{supsatdlred} for supersaturation over the domain volume; see the SI for details. Moist steady states have $\left \langle r^3(t) \right \rangle > 0$ and $\langle s(t)\rangle = 0$; dry steady states have $P_{\rm e}(t) = 1$ and $\langle s(t)\rangle < 0$, so the sign of $\theta$ determines whether the steady state is moist or dry. The value of $\theta$ is determined by the initial conditions, $\theta = -\langle s(0)\rangle -{2 \chi}/({3 \mathscr{R}})$. Setting $\theta = 0$, we find the critical Damk\"ohler-number ratio $\mathscr{R}_{\rm c} = -\tfrac{2}{3}\chi/\langle s(0)\rangle$. DNS of Equations~\eqnref{model} for experimentally observed dissipation rates and droplet-number densities are feasible only for quite small systems \citep{kumar2018scale}. This restricts the range of scales that can be explored, and makes it difficult to detect inhomogeneous mixing in DNS. We therefore pursue an alternative approach and adapt a PDF model \citep{pope2000turbulent} -- commonly used to describe combustion processes \citep{haworth2010progress} -- to the inhomogeneous cloud edge. As opposed to the kinematic statistical models reviewed by \cite{Gus16}, we must also take thermodynamic processes into account. \vfill\eject \subsection{Statistical model} Statistical models have been used to describe droplets in a supersaturation field that fluctuates around zero, as in the cloud core \citep{paoli2009turbulent,sardina2015continuous,chandrakar2016aerosol,siewert2017statistical}. At the cloud edge, there are large deviations from this equilibrium. \cite{Jeffery2007}, \cite{pinsky2016theoretical3}, and \cite{pinsky2018theoretical4} formulated models for the cloud edge where droplets evaporate in direct response to a mean supersaturation field. This does not take into account that mixing is local, and that small droplets are advected together with the supersaturation field. 
For an accurate description of mixing and evaporation, it is essential to describe how each droplet carries its own local supersaturation \citep{siewert2017statistical}. Our model does just that. It is derived from first principles using the established framework of PDF models \citep{pope2000turbulent}. For the configuration shown in Figure~\figref{schematic}({\bf b}) we derive one-dimensional statistical-model equations from Equations~\eqnref{model} (details in the SI): \begin{subequations} \label{eq:statmod} \begin{eqnarray} &&\hspace*{-5mm}\mathrm d u = -\tfrac{3}{4}C_0 u \mathrm d t + \big( \tfrac{3}{2} C_0 \big)^{1/2} \mathrm d \eta \,, \label{eq:velmc}\\ &&\hspace*{-5mm}\tfrac{\rm d}{{\rm d}t} s = - \tfrac{1}{2}C_\phi [ s -\langle {s}(x,t)\rangle ] - \text{Da}_\text{s}\, \chi V \langle r(t) s( x,t)\rangle\,, \label{eq:supsatmc}\\ &&\hspace*{-5mm}\tfrac{\rm d}{{\rm d}t} {x} = u \,, \label{eq:posmc}\\ &&\hspace*{-5mm}\tfrac{\rm d}{{\rm d}t} r = \text{Da}_\text{d}\, s /(2r)\quad \mbox{if}\quad r > 0 \,. \label{eq:radmc} \end{eqnarray} \end{subequations} Equation~\eqnref{velmc} describes the fluctuating acceleration of Lagrangian fluid elements in turbulence. Here, $\mathrm d \eta$ are Brownian increments with zero mean and variance $\mathrm d t$, and $C_0$ is an empirical constant \citep{pope2011simple}. Each Lagrangian fluid element has a supersaturation $s$, and may contain a droplet (at position $x$, of size $r$). Equation~\eqnref{supsatmc} approximates the supersaturation dynamics as decay towards $\langle s(x,t)\rangle$, regulated by the empirical constant $C_\phi$~\citep{pope2000turbulent}. The second term on the r.h.s. of Equation~\eqnref{supsatmc} represents the effect of condensation and evaporation through $\langle r(t) s( x,t)\rangle$. The position-dependent averages $\langle \cdots \rangle$ in Equation~\eqnref{supsatmc} are taken over fluid elements located at $x$ at time $t$ (details in the SI). 
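A minimal Monte Carlo discretisation of Equations \eqnref{statmod} can be sketched as follows (Python; Euler--Maruyama time stepping, position-dependent averages approximated by binning the fluid elements in $x$, and a slab initial condition with $s_{\rm c}=0$). The values of $C_0$, $C_\phi$, and all numerical settings are illustrative, not those used for the figures:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative non-dimensional parameters (cf. the dry case of Table 1);
# C0, C_phi and all numerical settings are assumed values
Da_d, Da_s, chi = 2.44, 0.968, 0.428
C0, C_phi, V = 2.1, 2.0, 1.0
N, dt, nsteps, nbins = 10000, 1e-3, 1500, 32

x = rng.uniform(0.0, 1.0, N)               # periodic domain [0, 1)
u = rng.standard_normal(N)                 # stationary OU velocities, <u^2> = 1
in_slab = np.abs(x - 0.5) < chi / 2.0
s = np.where(in_slab, 0.0, -1.0)           # saturated slab (s_c = 0), dry outside
r = np.where(in_slab, 1.0, 0.0)            # monodisperse droplets in the slab
has_drop = in_slab.copy()

def local_mean(pos, w):
    """Average of w over all fluid elements in the x-bin of pos (the local <.>)."""
    idx = np.minimum((pos * nbins).astype(int), nbins - 1)
    sums = np.bincount(idx, weights=w, minlength=nbins)
    cnts = np.maximum(np.bincount(idx, minlength=nbins), 1)
    return (sums / cnts)[idx]

for _ in range(nsteps):
    s_bar = local_mean(x, s)               # <s(x, t)>
    rs_bar = local_mean(x, r * s)          # <r(t) s(x, t)>
    # velocity equation: Ornstein-Uhlenbeck process, Euler-Maruyama step
    u += -0.75 * C0 * u * dt + np.sqrt(1.5 * C0 * dt) * rng.standard_normal(N)
    # supersaturation equation: relaxation to local mean plus phase-change term
    s += (-0.5 * C_phi * (s - s_bar) - Da_s * chi * V * rs_bar) * dt
    # radius equation, integrated as d(r^2)/dt = Da_d * s; evaporated droplets stay at r = 0
    wet = has_drop & (r > 0.0)
    r[wet] = np.sqrt(np.maximum(r[wet] ** 2 + Da_d * s[wet] * dt, 0.0))
    # position equation: advection on the periodic domain
    x = (x + u * dt) % 1.0

P_e = np.mean(r[has_drop] == 0.0)          # fraction of completely evaporated droplets
```

With these settings $P_\text{e}$ grows from zero as slab-edge droplets are mixed into subsaturated air; quantitative comparisons such as those in our figures require the closures and settings detailed in the SI.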
The statistical-model Equations \eqnref{statmod} become independent of $\text{Re}_L$ at large Reynolds numbers, where $C_0$ approaches a definite limit \citep{pope2011simple}. They are independent of $\text{Sc}$, in accordance with the known behaviour of advected scalars in fully developed turbulence \citep{shraiman2000scalar}. Equations \eqnref{statmod} rest upon a probabilistic description of the dynamics of the two phases, droplets and air \citep{pope2000turbulent,jenny2012modeling}. The corresponding evolution equations, dictated by Equations \eqnref{model}, contain unclosed terms that must be approximated \citep{pope1985pdf,haworth2010progress} to obtain a closed model such as Equations~\eqnref{statmod}. Following \cite{pope2000turbulent}, we cast the model into the form of stochastic dynamical equations for Lagrangian fluid elements. Since the dynamics is statistically one-dimensional in our configuration, we can average over the $y$ and $z$ coordinates to obtain Equations \eqnref{statmod} in one-dimensional form. For the closures, we rely on standard approximations, common and justified in PDF modeling of single-phase flows \citep{pope1985pdf}, and in models for turbulent combustion \citep{haworth2010progress,jenny2012modeling,stollinger2013pdf}. The explicit mathematical approximations for the closures provided in the SI render the interpretation of the statistical model definite, and indicate how to improve the model when necessary. In the following, we briefly summarise the closures. Equation \eqnref{velmc} contains the closure for fluid-element accelerations. It reproduces the empirically observed effect of turbulent diffusion of passive-scalar averages \citep{pope2000turbulent}. Equation~\eqnref{supsatmc} contains two closure approximations. First, the decay towards $\left \langle s(x,t)\right \rangle$ approximates the diffusion of supersaturation. 
This closure ensures that the mean of a passive scalar is conserved, and that a passive scalar remains bounded between its minimal and maximal values \citep{pope2000turbulent}. Furthermore, it describes the decay of passive-scalar variance in statistically homogeneous turbulent mixing of two scalar concentrations \citep{pope2000turbulent}. However, this closure does not reproduce the relaxation of the single-point PDF of scalar concentration -- from a two-peaked distribution via a U-shaped distribution into a Gaussian \citep{eswaran1988direct,pope1991mapping}. For our initial conditions [Figure~\figref{schematic}{\bf (b)}], the decay towards $\left \langle s(x,t)\right \rangle$ captures the supersaturation fluctuations experienced by a fluid element as it moves towards or away from the most cloudy region. Note that this closure does not account for large saturated cloud structures that tend to relax slowly towards $\left \langle s(x,t)\right \rangle$. Such events are most relevant during the initial stages of the mixing-evaporation process, and their effect is expected to diminish with time, as large cloud structures are mixed into smaller and smaller structures. Consequently, the statistical model may describe short initial transients only qualitatively, not quantitatively. The second closure in Equation~\eqnref{supsatmc} approximates the effects of droplet phase change on the supersaturation field: the local average $\overline{r_\alpha (t)s(\ve x_\alpha,t)}$ in Equation \eqnref{supsatdlred} is replaced by the ensemble average $\left \langle r (t)s(x,t) \right \rangle$. This is the simplest closure that preserves the conservation of the parameter $\theta$, and it is therefore common in PDF models that describe combustion of particles in turbulence \citep{haworth2010progress,jenny2012modeling,stollinger2013pdf}. 
The average $\left \langle r (t)s(x,t) \right \rangle$ takes into account that droplet evaporation is delayed locally when nearby droplets saturate the surrounding air. Since we obtain closure by replacing the local average $\overline{r_\alpha (t)s(\ve x_\alpha,t)}$ by an average over fluid elements with one-dimensional dynamics, variations in the rate of supersaturation relaxation in the $y$- and $z$-coordinates are not described. It is expected that the large cloud structures mentioned above, and their three-dimensional forms, matter more at very large Damk\"ohler numbers. Therefore it cannot be excluded that the statistical model is only qualitative in this extreme limit. Below we show that the model works very well even for the largest Damk\"{o}hler numbers in DNS studies \citep{kumar2012extreme,kumar2014lagrangian}. Also, since the statistical model is derived using the established framework of PDF models \citep{pope2000turbulent}, it can be straightforwardly improved by incorporating additional variables in the probabilistic description \citep{pope1990velocity,pope2000turbulent,meyer2008improved}, or by using more refined approximations \citep{pope1991mapping,jenny2012modeling}. \\[0.1cm] \begin{table}[bt] \caption{\label{tab:simulations} Summary of statistical-model simulations, details in SI. Damk\"{o}hler numbers $\text{Da}_\text{d}$ and $\text{Da}_\text{s}$, Damk\"{o}hler-number ratio $\mathscr{R}$, critical ratio $\mathscr{R}_{\rm c}$, and volume fraction $\chi$ of cloudy air.} 
\begin{threeparttable} \begin{tabular}{llllll} \headrow \thead{Simulation} & $\text{Da}_\text{d}$ & $\text{Da}_\text{s}$ & $\mathscr{R}$ & $\mathscr{R}_{\rm c}$ & $\chi$\\ \hiderowcolors {Figure}~\figref{peoftime}~and~\figref{slab_comparison}({\bf a}) [dry]\mbox{}\hspace*{-3mm} & 2.44 & 0.968 & 2.52 & 0.859 & 0.428 \\ {Figure}~\figref{peoftime} [moist] & 1.09 & 1.43 & 0.76 & 0.859 & 0.428 \\ {Figure}~\figref{slab_comparison}({\bf b}) [very moist] & 0.754 & 8.20 & 0.092 & 0.683 & 0.4 \\ {Figure}~\figref{paramplane} & 5E$^{-3}$-4E$^{2}$ & 1E$^{-3}$-9E$^{3}$ & 4E$^{-2}$-4E$^{0}$ & 0.913 & 0.429 \\ {Figure}~\figref{mapping}({\bf a}) & 1E$^{-2}$-1E$^{3}$& 6E$^{-2}$-6E$^{3}$ & 0.17 & 0.18-2.7 & 0.2-0.8 \\ {Figure}~\figref{mapping}({\bf b}) & 1E$^{-2}$-8E$^{2}$ & 3E$^{-3}$-4E$^{4}$ & 2.4E$^{-2}$-2.9E$^{-2}$ & 0.38-0.41 & 0.369-0.374 \\ \hline \end{tabular} \end{threeparttable} \end{table} \begin{table}[bt] \setlength{\tabcolsep}{2.4pt} \caption{\label{tab:otherstudies} Parameters of DNS shown in Figure~\ref{fig:paramplane}: Damk\"{o}hler numbers $\text{Da}_\text{d}$ and $\text{Da}_\text{s}$, Damk\"{o}hler-number ratio $\mathscr{R}$, critical ratio $\mathscr{R}_{\rm c}$, and volume fraction $\chi$ of cloudy air. 
Some dimensional~parameters are also shown: domain size $L$, mean dissipation rate $\varepsilon$, and droplet-number density $n_0$ of the initially cloudy air.} \begin{threeparttable} \begin{tabular}{>{}l>{}l>{}l>{}l>{}l>{}l>{}l>{}l>{}l} \headrow & \multicolumn{5}{c}{\bf Non-dimensional~parameters} & \multicolumn{3}{c}{\bf Dimensional~parameters} \\[-1.0ex] \headrow & & & & & & $\,\,\,L$& $\,\,\,\,\,\,\,\varepsilon$ & $\,\,\,\,n_0$ \\[-0.9ex] \headrow \multirow{-1.5}{*}{\bf Reference}& \multirow{-1.5}{*}{$\text{Da}_\text{d}$} & \multirow{-1.5}{*}{$\text{Da}_\text{s}$} & \multirow{-1.5}{*}{$\mathscr{R}$}& \multirow{-1.5}{*}{$\mathscr{R}_{\rm c}$} & \multirow{-1.5}{*}{$\chi$} & [cm] & [cm$^2$/s$^3$] & [cm$^{-3}$] \\ \hiderowcolors \multicolumn{1}{l|}{ $\circ$ \cite{Andrejczuk:2006}}& \multicolumn{1}{|l}{ 8E$^{-1}$-1E$^2$} & 3E$^{0}$-3E$^2$ & 0.13-2.8 & 0.10-4.5 & \multicolumn{1}{l|}{ 0.13-0.87} & \multicolumn{1}{|l}{ 64} & 4E$^{-1}$-9E$^{2}$ & 1E$^{2}$-1E$^{3}$ \\ \multicolumn{1}{l|}{$\triangle$ \cite{kumar2012extreme}}& \multicolumn{1}{|l}{ 8E$^{-3}$-8E$^{-1}$} & 8E$^{-2}$-8E$^0$& 9.2E$^{-2}$ & 0.68 & \multicolumn{1}{l|}{ 0.4} & \multicolumn{1}{|l}{ 26} & 34 & 164 \\ \multicolumn{1}{l|}{ $\square$ \cite{Kumar:2013}}& \multicolumn{1}{|l}{0.14, 0.31} & 0.62, 0.41& 0.22, 0.73 & 0.68 & \multicolumn{1}{l|}{0.4} & \multicolumn{1}{|l}{26} & 34 & 164 \\ \multicolumn{1}{l|}{ $\diamond$ \cite{kumar2014lagrangian}} & \multicolumn{1}{|l}{ 0.61-2.4} & 0.97-1.9& 0.31-2.5 & 0.84 & \multicolumn{1}{l|}{0.42} & \multicolumn{1}{|l}{ 51} & 34 & 153 \\ \multicolumn{1}{l|}{ $\triangledown$ \cite{kumar2018scale}}& \multicolumn{1}{|l}{ 0.12-0.91} & 0.51-4.0 & 0.23 &0.90-0.95 & \multicolumn{1}{l|}{ 0.42-0.45} & \multicolumn{1}{|l}{1E$^{1}$-2E$^{2}$} & 32-35 & 120 \\ \hline \end{tabular} \end{threeparttable} \end{table} \section{Results} \subsection{Comparison with DNS} The statistical model can be used to understand DNS results of 
\cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale}. Figure~\figref{peoftime} shows good agreement for the time evolution of the fraction $P_\text{e}(t)$ of droplets that have completely evaporated, even though the statistical-model dynamics is slightly slower. Panels ({\bf a}) and ({\bf b}) in Figure~\figref{slab_comparison} show that the model reproduces the broadening of the droplet-size distribution. The slightly slower dynamics in Figure~\figref{peoftime} and the deviations in the tails in Figure~\figref{slab_comparison} suggest that the statistical model does not reproduce the most rapid evaporation rates. This could be due to turbulent fluctuations in the supersaturation diffusion, neglected in Equation~\eqnref{supsatmc}. \cite{kumar2012extreme,kumar2014lagrangian} compute droplet-size distributions with prominent exponential tails using DNS -- some of them are seen in Figure~\figref{slab_comparison} (black lines) -- and connect these tails to corresponding exponential tails in the PDF of supersaturation at droplet positions. In our statistical-model simulations we observe heavy tails in the PDF of supersaturation at the droplet position, but the tails are less pronounced than in the DNS (not shown). Heavy tails are consistent with the results of \cite{eswaran1988direct} mentioned above, who observed how an initially bimodal supersaturation relaxes. Despite these shortcomings, our model describes the time evolution of $P_\text{e}(t)$ very well (Figure~\figref{peoftime}). It is also a significant improvement over models in which the droplets interact with a mean supersaturation field \citep{Jeffery2007,pinsky2016theoretical3,pinsky2018theoretical4}. In reality, the droplets react to the local supersaturation, as mentioned above, and this may be particularly important at large values of $\text{Da}_\text{d}$, where locally saturated regions can persist for a long time. 
Figure~\figref{paramplane} shows the steady-state value $P_\text{e}^*$ of $P_\text{e}(t)$ computed from the statistical model as a function of $\text{Da}_\text{d}$ and $\mathscr{R}/\mathscr{R}_{\rm c}$. We see how $P_\text{e}^*$ increases with both $\text{Da}_\text{d}$ and $\mathscr{R}/\mathscr{R}_{\rm c}$. The DNS results of \cite{Andrejczuk:2006} and \cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale} form a pattern in Figure~\figref{paramplane} that verifies these dependencies: open symbols correspond to DNS with little or no complete evaporation in the steady state ($P_\text{e}^*<10\%$), and filled symbols to $P_\text{e}^*>10\%$. Figure~\figref{paramplane} also explains why the DNS of \cite{kumar2018scale} did not exhibit significant levels of inhomogeneous mixing: since their $\mathscr{R}$ was quite small, even small values of $P_\text{e}^\ast$ require values of $\text{Da}_\text{d}$ much larger than unity ($\text{Da}_\text{d} \sim 10^2$ for $P_\text{e}^\ast=10\%$). Furthermore, the substantially different outcomes of the DNS of \cite{kumar2012extreme} and \cite{kumar2014lagrangian} are explained: their parameters lie on opposite sides of the bifurcation line. Figure~\figref{paramplane} also explains, at least qualitatively, numerical results of DNS of transient turbulence with quite different initial conditions~\citep{Andrejczuk:2006}, namely how the amount of complete droplet evaporation increases with both $\mathscr{R}/\mathscr{R}_{\rm c}$ and $\text{Da}_\text{d}$. There is no parameter corresponding directly to $\text{Da}_\text{d}$ in the simulations of \cite{Andrejczuk:2006}, because they are for different initial conditions and flows. We therefore place these simulations in Figure~\figref{paramplane} by computing a time-scale ratio that, in a qualitative sense, incorporates the same physics as $\text{Da}_\text{d}$ (details in the SI). 
Key parameters of the DNS in Figure~\figref{paramplane} are summarised in Table~\tabref{otherstudies}, a complete description is provided in the SI. \begin{figure*}[t] \centering \includegraphics{Fries_Fig2.eps} \caption{\label{fig:peoftime} Fraction $P_\text{e}(t)$ of droplets that have completely evaporated as a function of non-dimensional~time $t$, parameters in Table \tabref{simulations}. Coloured lines are statistical-model simulations, black lines are DNS of \cite{kumar2014lagrangian}.} \end{figure*} \begin{figure*}[t] \centering \includegraphics{Fries_Fig3.eps} \vspace{-2mm} \caption{\label{fig:slab_comparison} ({\bf a}) Evolution of the droplet-size distribution (parameters in Table~\tabref{simulations}) at different times. The probability density of the non-dimensional~droplet radius $r$ is shown for the statistical-model (coloured lines) and DNS of \cite{kumar2014lagrangian} (black lines). The initial droplet-size distribution is monodisperse, and centred at $r = 1$. ({\bf b}) Same as ({\bf a}), but for a very moist case (parameters in Table~\tabref{simulations}) and DNS of \cite{kumar2012extreme}.} \end{figure*} \begin{figure}[h!] \centering \includegraphics{Fries_Fig4.eps} \vspace{-2mm} \caption{\label{fig:paramplane} Steady-state fraction $P_\text{e}^*$ of completely evaporated droplets, as a function of $\text{Da}_\text{d}$ and $\mathscr{R}/\mathscr{R}_{\rm c}$, details in SI. The fraction $P_\text{e}^\ast$ is \eng{color}{colour} coded. The solid red line is the contour $P_\text{e}^* = 10\%$, the black dashed line indicates the transition between moist and dry steady states, and symbols indicate DNS results from previous studies: $\triangle$~\citep{kumar2012extreme}, $\square$~\citep{Kumar:2013}, $\diamond$~\citep{kumar2014lagrangian}, $\triangledown$~\citep{kumar2018scale}, and $\circ$~\citep{Andrejczuk:2006}. Filled symbols indicate $P_\text{e}^* > 10\%$. 
The DNS of \cite{Andrejczuk:2006} should not be quantitatively compared to the statistical model results, since they are for different initial conditions and decaying turbulence (see text and SI). Red, blue and light blue circles correspond to the dry, moist, and very moist simulations in Figures~\ref{fig:peoftime}~and~\ref{fig:slab_comparison}.} \end{figure} \eject \subsection{Mixing histories from observations} \label{mixhist} \begin{figure} \centering \includegraphics{Fries_Fig5.eps} \caption{\label{fig:mapping} ({\bf a}) Mixing diagram. Empirical data from \cite{beals2015holographic} (black crosses). The homogeneous mixing line (black) from top panel of Figure~3 of \cite{beals2015holographic} corresponds to $\mathscr{R}=0.17$. The \eng{colored}{coloured} region shows where steady states are found in the mixing diagram for $\mathscr{R}=0.17$. The corresponding values of $\text{Da}_\text{d}$ are \eng{color}{colour} coded (legend). The red cross is the measurement shown in the middle panel of Figure~2 of \cite{beals2015holographic}. ({\bf b}) Values of $\mathscr{R}$ and $\text{Da}_\text{d}$ consistent with a steady state at the red cross in panel ({\bf a}) (red). We estimate $L\sim 9$ m for $\text{Da}_\text{d} = 1$ (blue circle). These local mixing processes have $\mathscr{R} = \mathscr{R}_\text{min}$ (black dashed line), so no droplets evaporated completely ($P_\text{e}^\ast=0$). The green circle corresponds to $\text{Da}_\text{d} = 13$ and $\mathscr{R} = 0.028$ with $P_\text{e}^* = 1$\% and $L\sim 300\,$m.} \end{figure} A common way of characterising the droplet content of a cloud is to plot the mean cubed radius $r^3$ and number density $n$ of droplets for observed cloud-droplet populations in a mixing diagram. Figure~\figref{mapping}({\bf a}) shows a mixing diagram with empirical data from \cite{beals2015holographic}. 
Black crosses are values of $r^3$ and $n$ extracted from snapshots (linear size $15\,$cm) of local droplet populations measured during an airplane flight through a convective cloud. Observational data in mixing diagrams are commonly discussed in relation to the homogeneous mixing line, a curve of global steady states ($r_\ast^3,n_\ast$) that result from homogeneous mixing (no complete evaporation) between different proportions of undiluted cloudy and dry environmental air \citep{gerber2008entrainment,kumar2014lagrangian,pinsky2016theoretical3}. \cite{beals2015holographic} calculated this line; it is also shown in Figure~\figref{mapping}({\bf a}). A fundamental problem, however, is that it is not clear how to interpret mixing diagrams such as Figure~\figref{mapping}({\bf a}), because the empirically observed droplet populations need not reflect global steady states \citep{pinsky2018theoretical4}. It is nevertheless likely that most data points in Figure~\figref{mapping}({\bf a}) sample local steady states, i.e. locally well-mixed droplet populations that reside in saturated air. To describe such droplet populations, one must refer to the multi-scale turbulent mixing process. We attempted this analysis assuming that the statistical model with the initial condition shown in Figure~\figref{schematic}({\bf b}) describes how a cloud structure at the spatial scale $L$ develops. Under this assumption, $n_\ast$ and $r^3_\ast$ are given by the droplet-number density (normalised by $n_0$) and the mean cubed droplet radius $\left \langle r(t)^3\right \rangle$ in the steady state, and we can conjecture the mixing histories that formed the measured droplet populations. We begin by noting that $\chi$ and $P_\text{e}^*$ are completely determined for any steady-state point ($r_\ast^3,n_\ast$) in a mixing diagram. 
To show this, we write the volume-averaged initial supersaturation as $\langle s(0)\rangle = (1+s_{\rm c})(\chi + \chi_0)-1$, where $\chi_0$ is a constant that depends on the initial supersaturation profile (details in the SI). Inserting \begin{equation} \label{eq:nast} \chi = n_*/(1-P_\text{e}^*) \end{equation} into $\theta = -\langle s(t)\rangle -\tfrac{2 \chi}{3 \mathscr{R}} [ 1-P_\text{e}(t) ]\langle r^3(t)\rangle$ we find: \begin{align}\label{eq:r3} P_\text{e}^* = 1 - \frac{n_\ast [1+\tfrac{3}{2} \mathscr{R} (1+s_{\rm c})]} {n_\ast r_\ast^3+\tfrac{3}{2} \mathscr{R} [ 1-\chi_0(1+s_{\rm c})]}\, . \end{align} Equations~\eqnref{nast} and \eqnref{r3} determine how to map ($r_\ast^3,n_\ast$) to ($\chi,P_\text{e}^*$). As a consistency check we note that one obtains the homogeneous mixing line \citep{pinsky2016theoretical3} from Equation~\eqnref{r3} by setting $P_\text{e}^* = 0$. This allows us to infer that $\mathscr{R} = 0.17$ and $s_{\rm c} = \chi_0 = 0$ from the homogeneous mixing line of \cite{beals2015holographic}. Any point in the mixing diagram must correspond to a local steady state with certain values of $P_\text{e}^*$ and $\chi$. Each statistical-model simulation for given $\text{Da}_\text{d}$, $\mathscr{R}$, and $\chi$ yields a certain value of $P_\text{e}^*$. This allows us to extract a value of $\text{Da}_\text{d}$ for each point in the mixing diagram from our statistical-model simulations. The result is shown in Figure~\figref{mapping}({\bf a}). We see that $\text{Da}_\text{d}$ increases rapidly above the homogeneous mixing line. Estimating $\tau_{\text{s}} \sim {1}\,$s from \cite{beals2015holographic} and conservatively estimating $\varepsilon \sim 1$ cm$^2$s$^{-3}$ for a convective cloud, a value of $\text{Da}_\text{d} = 1000$ implies that an observed droplet population was mixed at spatial scales of the order of $L \sim \sqrt{\varepsilon\tau_L^3} \sim 5\,$km, larger than the size of the cloud. 
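The mapping from a mixing-diagram point $(r_\ast^3, n_\ast)$ to $(\chi, P_\text{e}^*)$ is straightforward to implement. In the Python sketch below, the parametrisation $r_\ast^3 = 1 - \tfrac{3}{2}\mathscr{R}(1-n_\ast)/n_\ast$ of the homogeneous mixing line follows from the $P_\text{e}^*$ mapping with $P_\text{e}^* = 0$ and $s_{\rm c} = \chi_0 = 0$; the point above the line is hypothetical, chosen only for illustration:

```python
def pe_star(n_ast, r3_ast, R, s_c=0.0, chi0=0.0):
    """Steady-state fraction of completely evaporated droplets (P_e* mapping)."""
    num = n_ast * (1.0 + 1.5 * R * (1.0 + s_c))
    den = n_ast * r3_ast + 1.5 * R * (1.0 - chi0 * (1.0 + s_c))
    return 1.0 - num / den

def chi_of(n_ast, pe):
    """Volume fraction of cloudy air, chi = n*/(1 - P_e*)."""
    return n_ast / (1.0 - pe)

R = 0.17                                          # ratio inferred from the mixing line
n_hom = 0.6                                       # a point on the homogeneous mixing line,
r3_hom = 1.0 - 1.5 * R * (1.0 - n_hom) / n_hom    # where pe_star vanishes exactly
pe_above = pe_star(0.6, 0.95, R)                  # hypothetical point above the line
```

Points on the homogeneous mixing line give $P_\text{e}^* = 0$ and $\chi = n_\ast$, while the hypothetical point above it corresponds to roughly $9\%$ complete evaporation.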
In other words, the rapid increase of $\text{Da}_\text{d}$ in Figure~\figref{mapping}({\bf a}) suggests that most of the data in the mixing diagram cannot be in a global steady state of a mixing process \eng{parameterized}{parameterised} by $\mathscr{R}=0.17$. We concluded above that most measurements of \cite{beals2015holographic} are likely to correspond to local steady states. As undiluted cloudy air is mixed with premixed air, such steady states are formed locally and temporarily as local mixing processes equilibrate at small spatial scales [Figure~\figref{schematic}({\bf a})]. We now discuss how the analysis of local steady states may yield insight into possible local histories of the cloud. Air affected by earlier mixing events is not as dry as environmental air, so the mixing of undiluted cloud with premixed air is governed by smaller values of $\mathscr{R}$. We therefore ask: which values of $\mathscr{R}$ are consistent with the assumption that the experimentally observed droplet population in the middle panel of Figure~2 of \cite{beals2015holographic} -- red cross in Figure~\figref{mapping}({\bf a}) -- reflects a local steady state? Our model allows us to determine possible combinations of $\text{Da}_\text{d}$, $\mathscr{R}$, and $\chi$ consistent with a local steady state. We know that $\mathscr{R}$ must be smaller than 0.17, the upper limit dictated by the homogeneous mixing line of \cite{beals2015holographic}. Furthermore, since the data cannot lie below the homogeneous mixing line of the global mixing process, a lower bound for $\mathscr{R}$ is $\mathscr{R}_{\rm min} = \tfrac{2}{3} (n_*-r^3_*) n_*/\left [(1+s_{\rm c})(\chi_0+n_*)-1\right ] = 0.0236$. Figure~\figref{mapping}({\bf b}) shows values of $\mathscr{R}$ and $\text{Da}_\text{d}$ obtained from our statistical-model simulations that are consistent with these constraints. We see that the range of possible values of $\text{Da}_\text{d}$ covers several orders of magnitude. 
This means that local mixing processes consistent with the red cross in Figure~\figref{mapping}({\bf a}) may have occurred over a large range of spatial scales. We also see that $\mathscr{R}$ does not vary much in Figure~\figref{mapping}({\bf b}), only between 0.024 and 0.03. This allows us to conclude that some important aspects of the mixing dynamics are essentially independent of spatial scale. First, the fact that $\mathscr{R}$ is substantially smaller than 0.17 indicates that the non-cloudy air was premixed. Second, using Equation~\eqnref{r3}, we find that the reduction in droplet-number density was primarily caused by dilution even at the largest scales, since $P_\text{e}^*$ increases only up to $\sim$1.4\% for the largest value considered, $\text{Da}_\text{d} \sim 1000$, at $\mathscr{R} = 0.03$. Put differently, $\chi \sim n_\ast$ for all values of $\text{Da}_\text{d}$ we considered. How does the outcome of a local mixing process depend on its scale? Larger scales correspond to larger values of $\text{Da}_\text{d}$, and Figure~\figref{mapping}({\bf b}) shows that complete droplet evaporation begins to occur around $\text{Da}_\text{d} = 1$, where $\mathscr{R}$ starts to exceed $\mathscr{R}_\text{min}$ (blue circle). Estimating ${\varepsilon \sim 10}$~cm$^2$s$^{-3}$, a typical value for convective clouds \citep{devenish2012droplet}, we find that $\text{Da}_\text{d} = 1$, $\tau_\text{s} \sim 1\,$s, and $\mathscr{R} = \mathscr{R}_\text{min}$ correspond to a spatial scale of $9\,$m. Mixing processes leading to the red cross in Figure~\figref{mapping}({\bf a}) that occurred at scales smaller than $9\,$m were therefore perfectly homogeneous: none of the droplets evaporated completely as they were diluted by premixed air. At larger spatial scales, small but non-zero fractions of the droplets evaporated completely.
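The scale estimates quoted in this section can be reproduced with a short Python sketch. It assumes that both Damk\"ohler numbers are formed with the large-eddy time $\tau_L$, i.e. $\text{Da}_\text{d} = \tau_L/\tau_{\rm d}$ and $\text{Da}_\text{s} = \tau_L/\tau_{\rm s}$, so that $\tau_L = (\text{Da}_\text{d}/\mathscr{R})\,\tau_{\rm s}$, and uses $L \sim \sqrt{\varepsilon \tau_L^3}$:

```python
from math import sqrt

def mixing_scale(Da_d, R, tau_s=1.0, eps=1e-3):
    """Spatial scale L ~ sqrt(eps * tau_L^3) of a mixing process,
    assuming tau_L = (Da_d / R) * tau_s (see lead-in).
    tau_s in seconds, eps in m^2/s^3; returns metres."""
    tau_L = (Da_d / R) * tau_s
    return sqrt(eps * tau_L**3)

# Da_d = 1000, R = 0.17, eps = 1 cm^2 s^-3: L ~ 5 km (larger than the cloud)
L_global = mixing_scale(1000.0, 0.17, eps=1e-4)
# Da_d = 1, R = R_min = 0.0236, eps = 10 cm^2 s^-3: L ~ 9 m
L_hom = mixing_scale(1.0, 0.0236)
# Da_d = 13, R = 0.028, eps = 10 cm^2 s^-3: L ~ 300 m
L_mid = mixing_scale(13.0, 0.028)
```

With these inputs the sketch returns approximately $4.5\,$km, $9\,$m, and $320\,$m, in line with the estimates in the text.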
Equation~\eqnref{r3} gives $\mathscr{R} = 0.028$ for $P_\text{e}^* = 1\%$, and from Figure~\figref{mapping}({\bf b}) we read off $\text{Da}_\text{d} = 13$~(green~circle). For $\varepsilon \sim 10$~cm$^2$s$^{-3}$ these values correspond to a spatial scale of $300\,$m. This suggests that reductions in droplet number concentration are dominated by dilution rather than complete droplet evaporation, also for mixing processes that range over hundreds of meters. Furthermore, since most data points in Figure~\figref{mapping}({\bf a}) reside well above the region where equilibria are found for $\mathscr{R} = 0.17$, we conclude that they too resulted from mixing with premixed air. \section{Discussion} A general conclusion from our analysis is that both Damk\"{o}hler numbers are important for the transition to inhomogeneous mixing; $\text{Da}_\text{d}$ \eng{parameterizes}{parameterises} the mixing-limited nature of droplet evaporation, and the ratio $\mathscr{R}= \text{Da}_\text{d}/\text{Da}_\text{s}$ regulates the self-limiting effect of droplet evaporation, namely that droplets cease to evaporate when they have saturated the surrounding air or evaporated completely. \eng{Analyzing}{Analysing} the parameters of our microscopic equations \eqnref{model} we see that $ \mathscr{R}=-\tfrac{3}{2} R$ where $R$ is the potential evaporation parameter of \cite{pinsky2016theoretical3} and \cite{pinsky2018theoretical4}. So $R$ is in fact determined by the ratio of $\text{Da}_\text{d}$ and $\text{Da}_\text{s}$, consistent with our conclusion that both Damk\"ohler numbers matter. \cite{pinsky2018edge1,pinsky2019edge2} concluded that the Damk\"{o}hler number ratio $\mathscr{R}$ determines whether a cloud expands by dilution or shrinks by complete droplet evaporation.
A mixing process that mixes equal proportions of saturated cloudy and subsaturated non-cloudy air has $\mathscr{R}_{\rm c} = \tfrac{2}{3}$, so the symmetric configuration they adopted for the cloud edge implies that the cloud expands if $\mathscr{R} < \tfrac{2}{3}$, and shrinks otherwise. We note that whether the cloud expands or shrinks depends on the position and scale at which one perceives it. A local mixing process with small $\mathscr{R}/\mathscr{R}_{\rm c}$ tends towards a moist steady state, so the cloud dilutes locally. A local mixing process that tends towards a dry steady state, by contrast, consumes the cloud. It is possible for a cloud to expand locally for some time, even if this local expansion is part of a mixing process that consumes the cloud at larger length and time scales. Although global mixing processes are transient, they contain local steady states. Diluted and saturated local droplet populations [such as the red cross in Figure~\figref{mapping}({\bf a})] can be a part of such transients. However, such local steady states must eventually be abandoned as mixing proceeds globally. Adopting this multi-scale picture of mixing, in which a large-scale mixing process consists of many mixing processes at smaller scales, it is natural to expect that large ranges of the parameters $\chi$, $\mathscr{R}$ and $\text{Da}_\text{d}$ are relevant. If one moves the domain of a local mixing process from the interior of the cloud towards the cloud edge, the liquid water content decreases, so that $\mathscr{R}/\mathscr{R}_{\rm c}$ and $\text{Da}_\text{d}$ increase. \cite{lehmann2009homogeneous} point out that $\text{Da}_\text{d}$ increases as one perceives mixing processes at larger and larger spatial scales. This corresponds to moving to the right in Figure~\figref{paramplane}. We note that $\mathscr{R}/\mathscr{R}_{\rm c}$ and $\text{Da}_\text{d}$ tend to increase with distance from the interior of the cloud.
Moving the sampling volume towards the cloud edge then corresponds to a motion upwards and to the right in Figure~\figref{paramplane}. The amount of complete droplet evaporation increases in this direction, consistent with the fact that complete droplet evaporation takes place at the cloud edge. A number of assumptions may influence our interpretation of the empirical data in Section~\ref{mixhist}. First, the model configuration in Figure~\figref{schematic}({\bf b}) is simplified compared to real clouds, which have irregular shapes that deform during the mixing-evaporation process. Second, the observational method may not detect droplets with radii smaller than 3 $\mu$m ($r_\alpha^3 = 0.2$), as stated by \cite{beals2015holographic}. If many small droplets were not detected, the observations in Figure~\figref{mapping}({\bf a}) appear too far from the homogeneous mixing line. Third, at the upper end of the Damk\"{o}hler range in Figure~\figref{mapping}, the statistical model may not be quantitatively accurate, as stated above. We nevertheless expect that the statistical model reproduces the evolution of droplet-size distributions in DNS qualitatively in Figure~\figref{mapping}. This expectation is corroborated by the robust tendency for $P_\text{e}^*$ to increase with increasing values of $\text{Da}_\text{d}$ and $\mathscr{R}$ in Figure~\figref{paramplane}, and follows directly from the roles of the Damk\"{o}hler numbers in mixing-evaporation dynamics. The deviations in the tails of droplet-size distributions at moderate Damk\"{o}hler numbers in Figure~\figref{slab_comparison} suggest that the next step in improving the statistical model should aim at reproducing the fastest evaporation rates in the transient mixing-evaporation process.
A better agreement in the tails could be achieved by refining the closure for supersaturation diffusion by using a dynamic $C_\phi$ in Equation~\eqnref{supsatmc} \citep{jenny2012modeling}, or by introducing additional fluctuations \citep{pope1991mapping}. Another possibility is to refine the description of the spatial structure of the supersaturation field by improved closures \citep{vedula2001dynamics,pope1991mapping,meyer2008improved,jenny2012modeling}. \section{Conclusions} We derived a statistical model for evaporation and turbulent mixing at the cloud edge from first principles. The model explains results of earlier DNS studies of mixing \citep{Andrejczuk:2006,kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale}, and shows that two thermodynamic time scales are important for a mixing process: the droplet evaporation time and the supersaturation relaxation time. This means that one must consider two Damk\"ohler numbers in order to quantify the mixing-evaporation dynamics. We concluded that the simulations of \cite{kumar2018scale} did not exhibit a transition to inhomogeneous mixing with increasing spatial scale because the supersaturation relaxation time was too small compared to the droplet evaporation time. Our analysis supports several general conclusions regarding in-situ observations of droplets in turbulent clouds. First, most of the local and instantaneous snapshots of droplet configurations observed by \cite{beals2015holographic} cannot be in the steady states of a global mixing process that mixed undiluted cloud with dry environmental air. However, a local droplet population may still be in a local steady state, established as the droplets saturated the air locally. Such local steady states belong to the transient of a global mixing process. In order to understand the nature of this transient, it was necessary to consider the whole range of possible steady states at different length scales [Figure~\figref{schematic}({\bf a})].
In short, clouds are not equilibrated at large scales, yet local steady states occur at small scales. Our analysis also indicates that most of the droplet populations observed by \cite{beals2015holographic} are likely to have resulted from mixing with premixed air, and we concluded that the corresponding local steady states arose by dilution rather than complete evaporation. Our model indicates that only very few droplets evaporated completely. We found that the statistical-model dynamics is somewhat slower than the DNS of \cite{kumar2012extreme,kumar2014lagrangian}, and that the tails of our droplet-size distributions are somewhat lighter. We speculated that this may be because the supersaturation dynamics is oversimplified. Since our model belongs to the family of established PDF models \citep{pope2000turbulent}, it is clear how to address this question in the future \citep{vedula2001dynamics,pope1991mapping,jenny2012modeling}. Last but not least, our analysis highlights which additional observational data are needed for a more quantitative statistical-model analysis of in-situ cloud-droplet measurements. To determine the three key parameters, the volume fraction of cloudy air as well as the two Damk\"ohler numbers, one needs joint measurements of local droplet populations, supersaturation levels in their vicinity, and the sizes of the local cloud structures. This will allow us to characterise and understand the mechanisms underlying local mixing processes observed on different length and time scales, and at different distances from the cloud edge. A challenge for the future is to understand the global picture, how evaporation is distributed in the cloud, and where complete droplet evaporation takes place. This is necessary to improve the \eng{parameterization}{parameterisation} of mixing and evaporation at the cloud edge in sub-grid scale models, in order to better represent the radiative effects of clouds.
\section{Supporting Information} The SI contains Tables S1-S3 and Appendix S1. Table S1 lists all parameters of our statistical model simulations. Tables~S2 and S3 specify the DNS results from \cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale}, and \cite{Andrejczuk:2006} in Figure~\figref{paramplane}. Appendix S1 describes the relation between our microscopic Equations~\eqnref{model}, and those used in the DNS of others \citep{vaillancourt2002microscopic,paoli2009turbulent,Lanotte:2009,perrin2015lagrangian,kumar2014lagrangian,kumar2018scale}. We discuss the approximations made in deriving Equations~\eqnref{model}, and quantify their accuracy. We also describe how to derive the statistical model, Equations \eqnref{statmod}, from Equations~\eqnref{model}, using the framework of PDF methods \citep{pope2000turbulent}. We detail our computer simulations of the statistical model, and address their numerical convergence. Finally, Appendix S1 contains details concerning our interpretation of the data of \cite{beals2015holographic}, and of direct numerical simulations conducted by other authors. \section*{ACKNOWLEDGEMENTS} JF and BM thank B. Kumar and J. Schumacher for providing details of their simulations. We acknowledge support by Vetenskapsrådet (grant number 2017-368 3865), Formas (grant number 2014-585), and by the grant ‘Bottlenecks for particle growth in turbulent aerosols’ from the Knut and Alice Wallenberg Foundation, Dnr. KAW 2014.0048. Simulations were performed on resources at Chalmers Centre for Computational Science and Engineering (C3SE) provided by the Swedish National Infrastructure for Computing (SNIC). 
\vfill\eject \section{List of symbols} \begin{tabular}{ll} $t$ & time \\ $\ve x = (x,y,z)$ & spatial position \\ $N_0$ & number of droplets at initialisation \\ $P_\text{e}$ & fraction of completely evaporated droplets \\ $\ve u$ & fluid velocity \\ $p$ & pressure \\ $L$ & side length of cubic simulation domain \\ $w$ & width of initially cloudy region in simulation domain \\ $s_c$ & supersaturation within initial cloud slab \\ $s_e$ & supersaturation outside initial cloud slab \\ $\k$ & turbulent kinetic energy \\ $U$ & root-mean-square of fluid velocity \\ $\varepsilon$ & turbulent dissipation rate per unit mass \\ $\nu$ & kinematic viscosity \\ $\kappa$ & diffusivity of supersaturation \\ $s$ & supersaturation \\ $r$ & droplet radius \\ $r_0$ & initial volume radius of droplets \\ $n_0$ & droplet-number density of initially cloudy region \\ $\overline{r_\alpha (t)s(\ve x_\alpha,t)}$ & average of $r_\alpha (t)s(\ve x_\alpha,t)$ for droplets in the vicinity of $\ve x$ at time $t$\\ $\tfrac{{\rm D}}{{\rm D}t}$ & Lagrangian time derivative \\ $\tau_{\rm d}$ & droplet evaporation time \\ $\tau_{\rm s}$ & supersaturation relaxation time \\ $\tau_\ell$ & time scale for mixing at the length scale $\ell$\\ $\tau_L$ & large-eddy turnover time in simulation domain \\ ${\rm Re}_L$ & turbulence Reynolds number \\ Sc & Schmidt number \\ $V$ & non-dimensional volume of simulation domain \\ $\text{Da}_\text{d}$ & Damk\"{o}hler number based on droplet evaporation time \\ $\text{Da}_\text{s}$ & Damk\"{o}hler number based on supersaturation relaxation time \\ $\mathscr{R}$ & Damk\"{o}hler-number ratio \\ $\mathscr{R}_{\rm c}$ & critical Damk\"{o}hler-number ratio \\ $\mathscr{R}_{\rm min}$ & lower bound for Damk\"{o}hler-number ratio related to mixing diagrams \\ $\chi$ & volume fraction of cloudy air \\ $\chi_0$ & contribution to the initial volume average of supersaturation \\ $\theta$ & conserved quantity that reflects the conservation of water and energy \\ $C_0$, $C_\phi$ &
empirical constants \\ $\left \langle \dots \right \rangle$ & volume average or ensemble average in statistical model \\ \end{tabular} \section*{REFERENCES} \begingroup \renewcommand{\section}[2]{} \part*{Appendix S1} \part*{Tables S1 to S3} \part*{SI References} \vspace{1cm} \end{abstract} \newpage \part*{Appendix S1} \section{Microscopic equations}\label{otherfundamental} We denote fluid velocity and pressure by $\ve{u}(\ve{x},t)$ and $p(\ve{x},t)$. Additional fields are the supersaturation $s(\ve{x},t)$, and the condensation rate $C_{\rm d}(\ve{x},t)$. Initially, at $t = 0$, there are $N_0$ spherical droplets, labeled by $\alpha = 1, \ldots , N_0$, with positions $\ve{x}_\alpha(t)$, velocities $\ve{v}_\alpha(t)$, and radii $r_\alpha(t)$. Our microscopic equations read: \begin{subequations} \label{eq:modelsimplified} \begin{eqnarray} &&\hspace*{-5mm}\pder{\ve{u}}{t} + (\ve{u}\cdot \nabla) \ve{u} = - \frac{1}{\varrho_{\rm a}}\nabla p + \nu \nabla^2 \ve{u}, \label{eq:momdimfull} \quad \nabla\cdot \ve{u} = 0\,, \\ &&\hspace*{-5mm}\pder{s}{t} + (\ve{u}\cdot \nabla) s = \kappa \nabla^2 s - A_2 C_{\rm d}(\ve{x},t) \,\label{eq:qvdimfull} \\ &&\quad \text{with} \quad C_{\rm d}(\ve{x},t) = \sum_{\alpha = 1}^{N_0} \kernel (|\ve{x}-\ve{x}_\alpha(t)|) \frac{4\pi}{3} \rhow \der{r_\alpha(t)^3}{t} \label{eq:gammadimfull} \\ &&\hspace*{-5mm}\frac{{\rm d}\ve{x}_\alpha} {{\rm d}t}= \ve{u}(\ve{x}_\alpha(t),t), \label{eq:droplposdimfull}\\ &&\hspace*{-5mm}\der{r_\alpha^2}{t} = 2 A_3 \,s(\ve{x}_\alpha(t),t). \label{eq:droplraddimfull} \end{eqnarray} \end{subequations} Here, $\varrho_{\rm a}$ is the density of air and $\nu$ is its kinematic viscosity, $\rhow$ is the density of pure liquid water, $\kappa$ is the diffusivity of supersaturation, and $A_2$ and $A_3$ are thermodynamic coefficients. In Equation~\eqnref{gammadimfull}, $\kernel(|\ve{x}-\ve{x}_\alpha(t)|)$ is a spatial kernel, \eng{normalized}{normalised} to unity. 
It makes it possible to construct the continuous condensation-rate field from the dispersed droplets, and ensures that the condensation rate is computed locally \citep{jenny2012modeling}. The supersaturation is defined as \begin{align}\label{eq:supsatdef} s = e_{\rm v}/e_{\rm vs}(T)-1, \end{align} where $e_{\rm v}$ is the partial pressure of water \eng{vapor}{vapour} and $e_{\rm v s}(T)$ is the equilibrium \eng{water-vapor}{water-vapour} pressure at the temperature $T$ \citep{rogers1989short}. The partial pressure of water \eng{vapor}{vapour} depends on the density $\varrho_{\rm v}$ and gas constant $R_{\rm v}$ of water vapor, and is given by the ideal-gas law \citep{rogers1989short}: \begin{align}\label{eq:partpressdef} e_{\rm v} = \varrho_{\rm v} R_{\rm v} T. \end{align} Equations~\eqnref{modelsimplified} describe an incompressible flow with advected droplets that condense or evaporate in response to the supersaturation in their vicinity. Evaporation or condensation causes the droplet radii $r_\alpha$ to decrease or increase, at a rate regulated by the thermodynamic coefficient $A_3$. The supersaturation $s(\ve x,t)$ evolves according to an advection-diffusion equation, with a source term that describes the response of $s(\ve x,t)$ to droplet phase change. The strength of this response is regulated by the thermodynamic coefficient $A_2$. The parameters $\varrho_{\rm a}$, $\nu$, $\kappa$, $\rhow$, $A_2$, and $A_3$ are assumed constant in time. In addition, we impose that a droplet that has evaporated completely must remain at $r_\alpha = 0$. Equation~\eqnref{droplraddimfull} relies on a scale separation: spatial scales of supersaturation fluctuations induced by turbulent mixing are much larger than the radii of droplets \citep{Vaillancourt:2001}. This means that droplets do not impose boundary conditions on the supersaturation $s(\ve x,t)$ in Equation~\eqnref{qvdimfull}. 
They do interact locally with the supersaturation field, but over a finite volume determined by the spatial kernel in Equation~\eqnref{gammadimfull}. It also means that the supersaturation $s(\ve x,t)$ is not defined on length scales comparable to the radii of droplets, and is therefore not a microscopic field in the strictest sense. The microscopic equations \eqnref{modelsimplified} can be derived from more fundamental descriptions \citep{Vaillancourt:2001,vaillancourt2002microscopic,kumar2014lagrangian,kumar2018scale,perrin2015lagrangian}. In the following we discuss the assumptions underlying Equations~\eqnref{modelsimplified}. \subsection{Droplet inertia and settling} Equation~\eqnref{droplposdimfull} assumes that the droplets are small enough so that droplet inertia and settling are negligible \citep{vajedi2014clustering,Gus16}. To neglect droplet inertia is a reasonable approximation when the Stokes number $\text{St} = \tau_{\rm p}/\tau_\eta$ is small enough. Here $\tau_{\rm p} = 2 \rhow r_\alpha^2/(9\varrho_{\rm a} \nu)$ is the Stokes time for a small water droplet in air with $\rhow \gg \varrho_{\rm a}$, and $\tau_\eta = (\nu/\varepsilon)^\frac{1}{2}$ is the Kolmogorov time (of the order of the turnover time of the smallest eddies). Settling is negligible at small settling numbers $\text{Sv} = g \tau_{\rm p}/u_\eta$, where $u_\eta = (\nu\varepsilon)^\frac{1}{4}$ is the Kolmogorov velocity and $g > 0$ is the gravitational acceleration. For typical cloud conditions (dissipation rate per unit mass $\varepsilon \sim 10^{-2}$ m$^2$/s$^3$ and viscosity $\nu \sim 1.5\times 10^{-5}$ m$^2$/s), the Stokes number is of the order $\text{St} \sim 10^{-4} \, (r_\alpha/\mu\text{m})^2$, and $\text{Sv} \sim 10^{-2} \, (r_\alpha/\mu\text{m})^2$. In Figure~5 we \eng{analyze}{analyse} observational data from \cite{beals2015holographic}.
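These order-of-magnitude estimates are straightforward to check numerically. The following Python sketch assumes an air density $\varrho_{\rm a} \sim 1\,$kg/m$^3$ (a value not specified above) and evaluates St and Sv for a droplet of radius $6\,\mu$m:

```python
# Stokes and settling numbers for a cloud droplet, using the typical
# conditions quoted in the text; rho_a ~ 1 kg/m^3 is an assumed value.
rho_w = 1000.0   # kg/m^3, liquid water
rho_a = 1.0      # kg/m^3, air (assumed)
nu = 1.5e-5      # m^2/s, kinematic viscosity
eps = 1e-2       # m^2/s^3, dissipation rate per unit mass
g = 9.81         # m/s^2

def stokes_settling(r):
    tau_p = 2.0 * rho_w * r**2 / (9.0 * rho_a * nu)  # Stokes time
    tau_eta = (nu / eps)**0.5                        # Kolmogorov time
    u_eta = (nu * eps)**0.25                         # Kolmogorov velocity
    return tau_p / tau_eta, g * tau_p / u_eta        # (St, Sv)

St, Sv = stokes_settling(6e-6)   # St ~ 1.4e-2, Sv ~ 2.7e-1
```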
Droplet-size distributions in \cite{beals2015holographic} suggest that very few of the measured droplets exceed $r_\alpha = 6\, \mu$m, corresponding to $\text{St} =1.4\times 10^{-2}$ and $\text{Sv} = 2.7 \times 10^{-1}$. Droplet inertia and settling can therefore be neglected for most of these droplets. Out of the five DNS studies referred to in the main text, \cite{Kumar:2013,kumar2014lagrangian,kumar2018scale} incorporate droplet inertia and settling. Droplet inertia and settling can have minute effects on the droplet-size distribution, despite quite large settling numbers, $\text{Sv} \sim 3.9$ \citep{Kumar:2013}. \subsection{Buoyancy} Buoyancy is neglected in the momentum equation, Equation~\eqnref{momdimfull}. The neglected buoyancy terms read \citep{bannon1996anelastic}: \begin{align}\label{eq:buoydef} g B(\ve x,t) = g \Bigg{[} \frac{T-\tempstrat(z)}{\tempstrat(z)} + 0.608 \vapmixrat - \liqmixrat \Bigg{]}. \end{align} Here, $B = B(\ve x,t)$ is buoyancy, $T = T(\ve x,t)$ is temperature, and $\tempstrat(z)$ is a static base profile of temperature as a function of altitude $z$. Furthermore, $\vapmixrat = \vapmixrat(\ve x,t)$ is the \eng{water-vapor}{water-vapour} mixing ratio, and $\liqmixrat = \liqmixrat(\ve x,t)$ is the liquid-water mixing ratio. In a dry atmosphere ($\vapmixrat = \liqmixrat = 0$), buoyancy variations are caused only by vertical displacements. If $\tempstrat(z)$ depends linearly on $z$, then a parcel that is neutrally buoyant at $z = 0$ and is displaced adiabatically to an altitude $z$ has the buoyancy \begin{align}\label{eq:buoyancy} B = \frac{1}{\tempstrat(0)} \lp \Gamma - \der{\tempstrat}{z} \rp z + \mathcal{O}(z^2). \end{align} Here, $\Gamma$ is the dry adiabatic lapse rate, the vertical temperature gradient $\mathrm d T/\mathrm d z$ in a dry adiabatic atmosphere at hydrostatic equilibrium \citep{rogers1989short}. 
One finds an upper bound for the buoyancy variations within a system of spatial scale $\ell$ by substituting $z=\ell$ in Equation~\eqnref{buoyancy}. An analogous constraint is derived by \cite{dougherty1961anisotropy}. Comparing buoyancy acceleration $gB$ at a spatial scale $\ell$ to the inertial acceleration $(\varepsilon^2/\ell)^\frac{1}{3}$ at that scale, one obtains the Dougherty-Ozmidov length scale \citep{grachev2015similarity}: \begin{align}\label{eq:doughertyscale} \ell_{\rm B} = \varepsilon^\frac{1}{2} {\Big(g \frac{1}{\tempstrat(0)} \left |\Gamma - \der{\tempstrat}{z} \right | \Big)^{-\frac{3}{4}}}\,. \end{align} For typical atmospheric conditions [$\varepsilon \sim 10^{-2}$ m$^2$/s$^3$, $\Gamma \sim -10^{-2}$ K/m, $\mathrm d \tempstrat/\mathrm d z \sim -2\times 10^{-3}$ K/m \citep{dougherty1961anisotropy}], one finds $\ell_{\rm B} \sim$ 45~m. Under these conditions, buoyancy effects may matter on spatial scales larger than 45 m. In a moist atmosphere, buoyancy varies not only as a consequence of vertical displacements, but also because droplets condense and evaporate. The largest impact of droplet phase change occurs in regions where mixing between cloudy and subsaturated air takes place \citep{vaillancourt2000review}. In such regions, buoyancy is reduced as evaporating droplets absorb latent heat from the air. \cite{grabowski1993cumulus} show that sedimentation of droplets from 7~cm wide stationary filaments of cloudy air can amplify buoyancy reductions due to droplet evaporation by a factor $\sim 10$ within 5~s. We argue, however, that such amplifications cannot be expected in turbulent clouds, because buoyancy fluctuations at spatial scales $\ell \sim 7$~cm are smoothed by turbulent dissipation at time scales $\tau_\ell \sim (\ell^2/\varepsilon)^{1/3}$ that are smaller than 5~s, already at quite low dissipation rates.
We therefore analyse buoyancy reductions caused by evaporation using the mixing curve of buoyancy \citep{siems1990buoyancy}, which does not take into account droplet sedimentation. This mixing curve gives an upper bound on the buoyancy reduction $\Delta B$ that phase change induces when a volume fraction $\chi$ of cloudy air is mixed with a volume fraction $1-\chi$ of non-cloudy air. This upper bound is achieved when the mixture equilibrates at a moist or dry steady state, and it is a definite function of the volume fraction $\chi$, the temperatures $T_\cloudairsubscript$ and $T_\dryairsubscript$, the \eng{water-vapor}{water-vapour} mixing ratios $\vapmixratcloud$ and $\vapmixratdry$, and the liquid-water mixing ratio $\liqmixratcloud$ \citep{grabowski1993cumulus}. The subscripts $\cloudairsubscript$ and $\dryairsubscript$ indicate values for the cloudy and non-cloudy mixing substrates. We estimate $T_{\rm c} \sim 257.9$~K, $\vapmixratcloud \sim 3.1\cdot10^{-3}$, $\liqmixratcloud \sim 5.6\cdot10^{-4}$, $T_{\rm e} \sim 257.6$~K, and $\vapmixratdry \sim 2.7\cdot10^{-3}$ for the cloud and cloud environment observed by \cite{beals2015holographic} (see Section~\ref{sec:beals}). From these estimates we compute the upper bound $\Delta B \sim 10^{-3}$ for buoyancy reductions caused by droplet evaporation, using the mixing curve of buoyancy in \cite{grabowski1993cumulus}. Comparing $g\Delta B$ with a typical inertial acceleration $(\varepsilon^2/\ell)^\frac{1}{3}$ at the spatial scale $\ell$, we find that buoyancy accelerations due to evaporation are smaller than inertial accelerations for $\ell<100\,$m. One caveat is that buoyancy may cause large-scale anisotropies in the fluid motion. This can introduce additional complexity to droplet evaporation and mixing \citep{perrin2015lagrangian}. Nevertheless, buoyancy is not a necessary ingredient in a model for these processes.
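Both buoyancy estimates are easy to reproduce numerically. In the Python sketch below, the reference temperature $T_0 \sim 273\,$K is an assumed value (the text does not specify it):

```python
from math import sqrt

eps = 1e-2      # m^2/s^3, dissipation rate per unit mass
g = 9.81        # m/s^2
Gamma = -1e-2   # K/m, dry adiabatic lapse rate (as a gradient)
dTdz = -2e-3    # K/m, base temperature profile
T0 = 273.0      # K, reference temperature (assumed)

# Dougherty-Ozmidov scale, Equation (doughertyscale): ~45 m
N2 = g * abs(Gamma - dTdz) / T0
ell_B = sqrt(eps) * N2**(-0.75)

# Scale at which the inertial acceleration (eps^2/ell)^(1/3) drops to the
# buoyancy acceleration g*DeltaB caused by evaporation (DeltaB ~ 1e-3):
DeltaB = 1e-3
ell_cross = eps**2 / (g * DeltaB)**3   # ~100 m
```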
Buoyancy is neglected in two of the DNS studies that we compare to \citep{kumar2012extreme,Kumar:2013}, as well as in the studies of \cite{pinsky2018theoretical4} and \cite{Jeffery2007}. \subsection{Temperature changes due to vertical air motion} Neglecting temperature changes due to vertical air motion is justified for systems that remain at a constant altitude and that are not too large or too moist, as we now explain. Given the spatial scale $\ell$ of a system that remains at a fixed altitude, the maximal temperature changes caused by vertical air motion are comparable to $ \ell \, \Gamma$. For the temperature to change by 1 K due to vertical air motion, $\Gamma \sim 10^{-2}$ K/m dictates that we must have $\ell \sim 100$ m. Temperature regulates droplet evaporation through supersaturation. Supersaturation is determined by the temperature $T$ and the \eng{water-vapor}{water-vapour} density $\varrho_{\rm v}$ according to Equations~\eqnref{supsatdef} and \eqnref{partpressdef}. For the values of $\varrho_{\rm v}$ that imply saturation ($s = 0$) at temperatures above 273.15 K, we find that a temperature change of 1 K causes the supersaturation to change by 0.07 or less in magnitude. In a quite moist system, where the supersaturation is smaller than this in magnitude everywhere, temperature changes due to vertical air motion can be more important than mixing for droplet evaporation. Since temperature changes due to vertical air motion increase with the spatial scale of the system, results obtained using Equations~\eqnref{modelsimplified} may be inaccurate for systems at constant altitudes that are large and moist. Temperature changes due to vertical air motion are often neglected in studies of droplet evaporation \citep{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale,Andrejczuk:2006,pinsky2018theoretical4,siewert2017statistical,Jeffery2007}.
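The bound of 0.07 can be recovered from Equations~\eqnref{supsatdef} and \eqnref{partpressdef}: at fixed $\varrho_{\rm v}$ and at saturation, $|{\rm d}s/{\rm d}T| = |1/T - \Lambda/(R_{\rm v}T^2)|$, where $\Lambda$ is the latent heat and $R_{\rm v}$ the gas constant of water \eng{vapor}{vapour}, and where the Clausius--Clapeyron relation ${\rm d}e_{\rm vs}/{\rm d}T = \Lambda e_{\rm vs}/(R_{\rm v}T^2)$ is an additional assumption of this Python sketch:

```python
# |ds/dT| at saturation and fixed vapour density (see lead-in).
Lambda = 2.5e6   # J/kg, latent heat of vaporisation (assumed value)
R_v = 461.5      # J/(kg K), gas constant of water vapour

def ds_dT_mag(T):
    return abs(1.0 / T - Lambda / (R_v * T**2))

# ~0.069 per kelvin at T = 273.15 K, and smaller at higher temperatures,
# consistent with the bound of 0.07 quoted in the text.
```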
\subsection{Temperature and pressure dependence of the coefficient $A_3$} We use a constant thermodynamic coefficient $A_3$. This parameter is given by \begin{align}\label{eq:athreedef} A_3 = \Bigg{[} \left ( \frac{\Lambda}{R_{\rm v}T}-1\right ) \frac{\Lambda \rhow}{K_{\rm a} T} + \frac{\rhow R_{\rm v} T}{D_{\rm v} e_{\rm s}(T)} \Bigg{]}^{-1}, \end{align} where $\Lambda$ is the latent heat of water vapor, $K_{\rm a}$ is the thermal conductivity of air, and $D_{\rm v}$ is the diffusivity of water \eng{vapor}{vapour} \citep{rogers1989short}. The parameter $A_3$ decreases with pressure through $D_{\rm v}$, and increases with the temperature $T$ \citep{rogers1989short}. The values that we use for $A_3$ are computed from temperatures and pressures that are representative of the systems that we consider. To use a constant value for $A_3$ is usually a good approximation \citep{Lanotte:2009,sardina2015continuous,lehmann2009homogeneous, kumar2012extreme,andrejczuk2009numerical,siewert2017statistical, Jeffery2007,pinsky2018theoretical4,devenish2016analytical}. The reason is that the temperature and pressure dependencies are quite weak. From Figure~7.1 of \cite{rogers1989short} we see, for example, that $A_3$ increases by 26\% as the temperature increases from 10$^\circ\text{C}$ to 20$^\circ\text{C}$ and the pressure decreases from 100~kPa to 70~kPa. For observations of a convective cloud at 4000 m altitude \citep{beals2015holographic} we estimate the temperature variations to be smaller than $\sim1^\circ\text{C}$ (see Section~\ref{sec:beals}). These temperature variations cause $A_3$ to change by less than 10\%, which suggests that it is a good approximation to keep the coefficient constant in analyses of convective clouds at fixed altitude. \subsection{\eng{Linearization}{Linearisation} of the supersaturation profile}\label{sec:supsatlin} We treat the joint effect of temperature and water \eng{vapor}{vapour} at the level of a single field, the supersaturation field. 
This one-field treatment is obtained by \eng{linearizing}{linearising} the supersaturation's dependence on \eng{water-vapor}{water-vapour} mixing ratio $q_{\rm v}$ and temperature $T$, and by setting the diffusivities of water \eng{vapor}{vapour} and temperature equal to the same value $\kappa$. With the same diffusivity for temperature and water \eng{vapor}{vapour}, and with $s$ given by a linear combination of $\vapmixrat$ and $T$, the two separate advection-diffusion equations for temperature and \eng{water-vapor}{water-vapour} mixing ratio \citep{Vaillancourt:2001} can be combined into a single equation for $s$. We denote the heat capacity of dry air at constant pressure by $c_{\rm p}$ and estimate $\varrho_{\rm a} \heatcapacity \sim 1000\,$J/(m$^3$K) \citep{rogers1989short}. With this estimate, Table 7.1 of \cite{rogers1989short} implies that the diffusivity of water \eng{vapor}{vapour} roughly equals the diffusivity of temperature for atmospheric conditions, so that it is sufficient to consider just one value for the diffusivity. The supersaturation \eng{linearized}{linearised} around a reference state with temperature $T_\refsubscript$ and \eng{water-vapor}{water-vapour} mixing ratio $\vapmixratref$ reads: \begin{align}\label{eq:subsatlin} s(\vapmixrat,T) =& s(\vapmixratref,T_\refsubscript) + \frac{\partial s}{\partial \vapmixrat} \bigg{|}_{\substack{\vapmixratref\\T_\refsubscript}}(\vapmixrat-\vapmixratref) + \frac{\partial s}{\partial T} \bigg{|}_{\substack{\vapmixratref\\T_\refsubscript}}(T-T_{\refsubscript}).
\end{align} Using $\mathrm d q_{\rm v} = -\mathrm d q_{\rm \ell}$ and $\mathrm d T = (\Lambda/c_{\rm p})\mathrm d q_{\rm \ell}$, we find that the coefficient $A_2$ in Equation~\eqnref{qvdimfull} is given by \begin{align}\label{eq:a2other} A_2 = \frac{1}{\varrho_{\rm a}} \Big(\frac{\partial s}{\partial \vapmixrat} \bigg{|}_{\substack{\vapmixratref\\T_\refsubscript}} - \frac{\Lambda}{\heatcapacity} \frac{\partial s}{\partial T} \bigg{|}_{\substack{\vapmixratref\\T_\refsubscript}} \Big)\,. \end{align} In Figures~2~to~4 we compare our statistical-model results with DNS results \citep{kumar2014lagrangian,kumar2018scale,andrejczuk2009numerical} that are obtained using models where temperature and water \eng{vapor}{vapour} are treated as two separate fields. In order to match their non-linear supersaturation to our linear supersaturation given by Equation~\eqnref{subsatlin}, we \eng{linearize}{linearise} their supersaturation around a saturated reference state $(T_\refsubscript,\vapmixratref)$. We use $T_\refsubscript = 270.77\,$K \citep{kumar2014lagrangian}, $T_\refsubscript = 282.866\,$K \citep{kumar2018scale}, and $T_\refsubscript = 293\,$K \citep{andrejczuk2009numerical}, determining our values of $\vapmixratref$ through Equations~\eqnref{supsatdef}, \eqnref{partpressdef}, and $q_{\rm v} = \varrho_{\rm v}/\varrho_{\rm a}$. For a convective cloud observed by \cite{beals2015holographic} we estimate the temperatures $T_{\rm c} \sim 257.9\,$K and $T_{\rm e} \sim 257.6\,$K, and the \eng{water-vapor}{water-vapour} mixing ratios $q_{v\rm c} \sim 3.1\times10^{-3}$ and $q_{v\rm e} \sim 2.7\times10^{-3}$, within and outside the cloud (see Section~\ref{sec:beals}). Over these ranges, the absolute error of the supersaturation \eng{linearized}{linearised} at $(T_\refsubscript,\vapmixratref)=(T_{\rm c},q_{v\rm c})$ is less than 1\%. The relative error at $(T_{\rm e},q_{v\rm e})$ is around 2\%.
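As a rough consistency check on the orders of magnitude of the thermodynamic coefficients, the sketch below evaluates Equations~\eqnref{athreedef} and \eqnref{a2other} at a saturated reference state, using the linearised partial derivatives $\partial s/\partial q_{\rm v} = \varrho_{\rm a}R_{\rm v}T/e_{\rm s}$ and $\partial s/\partial T = 1/T - \Lambda/(R_{\rm v}T^2)$ at $s=0$. All constants, and the Bolton-type fit for $e_{\rm s}(T)$, are representative assumed values rather than parameters taken from the cited studies, so the results are indicative only:

```python
import math

# Representative assumed constants (not taken from the paper):
L_V, R_V, C_P = 2.5e6, 461.5, 1005.0   # J/kg, J/(kg K), J/(kg K)
K_A, D_V = 2.4e-2, 2.2e-5              # W/(m K), m^2/s
RHO_W, RHO_A = 1000.0, 1.2             # kg/m^3 (liquid water, air)

def e_s(T):
    """Saturation vapour pressure in Pa (Bolton-type fit), T in kelvin."""
    return 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))

def A3(T):
    """Thermodynamic coefficient A_3, m^2/s (Equation athreedef)."""
    term_heat = (L_V / (R_V * T) - 1.0) * L_V * RHO_W / (K_A * T)
    term_diff = RHO_W * R_V * T / (D_V * e_s(T))
    return 1.0 / (term_heat + term_diff)

def A2(T):
    """Thermodynamic coefficient A_2, m^3/kg (Equation a2other) at s = 0."""
    ds_dqv = RHO_A * R_V * T / e_s(T)        # ds/dq_v at saturation
    ds_dT = 1.0 / T - L_V / (R_V * T**2)     # ds/dT at saturation (negative)
    return (ds_dqv - (L_V / C_P) * ds_dT) / RHO_A

a3 = A3(283.15)   # of order 1e-10 m^2/s
a2 = A2(283.15)   # of order a few hundred m^3/kg
```

The sketch reproduces the qualitative behaviour stated above: $A_3$ increases with temperature, and $A_2$ comes out of the same order as the value $A_2 \sim 260\,$m$^3$/kg used in Section~\ref{sec:dimensionless}.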
\section{Initial conditions}\label{sec:cloudconfig} To allow for quantitative comparisons with the results of \cite{Kumar:2013,kumar2012extreme,kumar2014lagrangian,kumar2018scale}, we use the same (or almost identical) initial conditions: a slab of cloudy air inside a cubic domain of size $L$ with periodic boundary conditions. The cloudy air occupies a fraction $\chi$ of the domain, and it is surrounded by subsaturated air [Figure~1({\bf b})]. The initial droplet-size distribution is either monodisperse or Gaussian, with mean $r_\mu$ and standard deviation $\sigma_0$. When we compare to DNS from \cite{Kumar:2013,kumar2012extreme,kumar2014lagrangian,kumar2018scale} in Figures~2~to~4, we use their initial droplet-size distribution, which is monodisperse ($\sigma_0 = 0$). When we \eng{analyze}{analyse} measurements of \cite{beals2015holographic}, we use the Gaussian droplet-size distribution of the undiluted cloud that corresponds to the measurements ($\sigma_0 \neq 0$, see Section \ref{sec:params}). The simulations of \cite{Andrejczuk:2006} are for decaying turbulence with quite strong buoyancy effects, and a different initial geometry. Nevertheless, some of our results can be compared qualitatively to the results of \cite{Andrejczuk:2006}, as explained in Section \ref{sec:DNS}. Our analysis of the measurements of \cite{beals2015holographic} is based on our initial condition. This analysis rests upon the assumption that our system models a structure of size $L$ formed at the cloud edge, although its detailed composition and geometry can deviate from our slab-like initial conditions. Our initial conditions are homogeneous in two of the three coordinate axes. We take $x$ to denote the coordinate in the spatially inhomogeneous direction. Initially the droplets are distributed uniformly and randomly within a three-dimensional cloud slab, the volume for which $-\chi L/2 < x < \chi L/2$ [Figure~1({\bf b})]. 
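The slab initial condition described above can be sketched in a few lines. The parameter values here are illustrative choices, not those of the cited DNS:

```python
import numpy as np

# N0 droplets placed uniformly at random in the slab -chi*L/2 < x < chi*L/2
# of a periodic cube of side L, with a monodisperse (sigma_0 = 0) or Gaussian
# (sigma_0 > 0) initial size distribution. All values are illustrative.
rng = np.random.default_rng(1)
L, chi, N0 = 0.512, 0.25, 10_000
r_mu, sigma_0 = 10e-6, 1e-6           # mean radius and spread, in metres

x = rng.uniform(-chi * L / 2, chi * L / 2, N0)   # inhomogeneous direction
y, z = rng.uniform(-L / 2, L / 2, (2, N0))       # homogeneous directions
radii = rng.normal(r_mu, sigma_0, N0) if sigma_0 > 0 else np.full(N0, r_mu)
```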
The initial droplet-number density of the slab is $n_0 = N_0/(\chi V)$, where $V = L^3$ is the domain volume. To obtain the statistical-model results shown in Figures~2~to~4 we use the initial supersaturation profiles of \cite{Kumar:2013,kumar2012extreme,kumar2014lagrangian,kumar2018scale}: \begin{align}\label{eq:hs} s(x,0) = (s_\cloudairsubscript-s_\dryairsubscript) \exp \Bigg{[} -\profilefactor \lp \frac{x}{L}\rp^{\profileexponent} \Bigg{]}\, + s_\dryairsubscript. \end{align} Here, $s_\cloudairsubscript > 0$ and $s_\dryairsubscript < 0$ denote the initial supersaturation at the center of the slab and of the dry air [Figure~1({\bf b})]. The shape of the initial supersaturation profile is contained in the parameters $\profilefactor$ and $\profileexponent$. The homogeneous mixing line of \cite{beals2015holographic} passes through the top-right corner of the mixing diagram in Figure~5({\bf a}). To be able to compare with this homogeneous mixing line, we use a sharp initial supersaturation profile with $s_\cloudairsubscript =0$ when \eng{analyzing}{analysing} the measured data: \begin{align}\label{eq:hssimple} s(x,0) = \begin{cases} 0 \quad \,\,\text{for} \quad |x/L| < \chi/2\,, \\ s_\dryairsubscript \quad \text{otherwise}\,. \quad \end{cases} \end{align} \section{Non-dimensional~equations and parameters}\label{sec:dimensionless} We non-dimensionalise~Equations~\eqnref{modelsimplified} using $U = \sqrt{2\,\k/3}$ and the large-eddy time $\tau_L = \k/\varepsilon$, which is proportional to $L/U$ if the size of the largest eddies scales with $L$ \citep{pope2000turbulent}. In addition, we use the following quantities to non-dimensionalise: the droplet-number density of the slab [$n_0 = N_0/(\chi V)$], the (positive) supersaturation $|s_\dryairsubscript|$, and the initial volume radius \begin{align} r_0 = \left [ \frac{1}{N_0}\sum_{\alpha = 1}^{N_0} r_\alpha(0)^3 \right ]^\frac{1}{3}.
\end{align} We non-dimensionalise~as follows: $\ve{u}' = \ve{u}/U$, $\ve{x}' = \ve{x}/(U\tau_L)$, $t' = t/\tau_L$, $p' = p/(\varrho_{\rm a} U^2)$, $\ve{x}_\alpha' = \ve{x}_\alpha/(U\tau_L)$, $s' = s/|s_\dryairsubscript|$, $r_\alpha' = r_\alpha/r_0$, $\kernel' = \kernel (U\tau_L)^3$, $n' = n/n_0$, $s_\cloudairsubscript' = s_\cloudairsubscript/|s_\dryairsubscript|$, $\sigma_0' = \sigma_0/r_0$, $L' = L/(U\tau_L)$ and $V' = V/(U\tau_L)^3$. Dropping the primes, Equations~\eqnref{modelsimplified} take the non-dimensional~form: \begin{subequations} \label{eq:modelsimplifieddl} \begin{eqnarray} &&\hspace*{-5mm}\pder{\ve{u}}{t} + (\ve{u}\cdot \nabla) \ve{u} = - \nabla p + \frac{1}{\turbreynolds} \nabla^2 \ve{u}\,, \label{eq:veldl} \\ &&\hspace*{-5mm}\nabla\cdot \ve{u} = 0\,, \label{eq:incomprdl} \\ &&\hspace*{-5mm}\pder{s}{t} + (\ve{u}\cdot \nabla) s = \frac{1}{\turbreynolds {\rm Sc}} \nabla^2 s - \text{Da}_\text{s} \chi V \overline{r_\alpha(t) s(\ve{x}_\alpha,t)}\,, \label{eq:sdl}\\ &&\hspace*{-5mm}\frac{{\rm d}\ve{x}_\alpha} {{\rm d}t}= \ve{u}(\ve{x}_\alpha(t),t)\,, \label{eq:droplposdl}\\ &&\hspace*{-5mm}\der{r_\alpha^2}{t} = \text{Da}_\text{d} \,s(\ve{x}_\alpha(t),t) \label{eq:droplradl}\,. \end{eqnarray} \end{subequations} Here $\overline{r_\alpha(t) s(\ve{x}_\alpha,t)}$ denotes the average of $\kernel(|\ve{x}-\ve{x}_\alpha(t)|) r_\alpha(t) s(\ve{x}_\alpha,t)$ over the droplets: \begin{align}\label{eq:corrdl} \overline{r_\alpha(t) s(\ve{x}_\alpha,t)} = \frac{1}{N_0} \sum_{\alpha=1}^{N_0} \kernel(|\ve{x}-\ve{x}_\alpha(t)|) r_\alpha(t) s(\ve{x}_\alpha,t). \end{align} The average $\overline{r_\alpha(t) s(\ve{x}_\alpha,t)}$ is position dependent: it depends on the number and sizes of the local droplets, and on the local supersaturation $s(\ve{x}_\alpha,t)$ tied to these droplets. The dynamics in Equations~\eqnref{modelsimplifieddl} is identical to the dynamics in Equations~\eqnref{modelsimplified}, but non-dimensional.
Equations~\eqnref{modelsimplifieddl} have the following non-dimensional~parameters: the domain volume $V$, the initial cloud fraction $\chi$, the turbulence Reynolds number $\turbreynolds = \tfrac{2}{3}\k^2/(\varepsilon \nu)$ \citep{pope2000turbulent}, the Schmidt number $\text{Sc} = \nu/\kappa$ [of order unity since the diffusivities of both temperature and water \eng{vapor}{vapour} are roughly the same as the kinematic viscosity of air \citep{perrin2015lagrangian}], and the two Damk\"{o}hler numbers \begin{align}\label{eq:das} \text{Da}_\text{s} = \tau_L/\tau_\text{s} \quad \text{and} \quad \text{Da}_\text{d} = \tau_L/\tau_\text{d}, \end{align} where $\tau_\text{s}$ and $\tau_\text{d}$ denote the supersaturation relaxation time and the droplet evaporation time. These time scales are given by \begin{align}\label{eq:taus} \tau_\text{s} = (4 \pi A_2 A_3 \rho_\text{w} n_0 r_0)^{-1} \quad \text{and} \quad \tau_\text{d} = \frac{r_0^2}{2 A_3 |s_\dryairsubscript|}. \end{align} The time scales $\tau_\text{s}$ and $\tau_\text{d}$ can be very different. This can be understood by \eng{analyzing}{analysing} their ratio \begin{align}\label{eq:ratiorhol} \mathscr{R} = \frac{\tau_\text{s}}{\tau_\text{d}} = \frac{\text{Da}_\text{d}}{\text{Da}_\text{s}}=\frac{|s_\dryairsubscript|}{2 \pi A_2 \rho_\text{w} n_0 r_0^3} = \frac{2 |s_\dryairsubscript|}{3 A_2 \varrho_{\ell 0}}. \end{align} Here, we introduce $\varrho_{\ell 0} = 4 \pi r_0^3 n_0 \rhow/3$ as a scale for liquid water density (mass per volume), the liquid water density in a cloud with droplet-number density $n_0$ and mean volume radius $r_0$. The scale for liquid water density determines $\mathscr{R}$ together with the supersaturation $s_\dryairsubscript$ of the dry mixing substrate, and the thermodynamic coefficient $A_2$. For typical cloud conditions, we find $A_2 \sim 260\,$m$^3$/kg using Equation~\eqnref{a2other}. 
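A short sketch evaluating $\mathscr{R}$ from Equation~\eqnref{ratiorhol}, using the value $A_2 \sim 260\,$m$^3$/kg found above; the two parameter sets are illustrative of local mixing in a cloud bulk and at a dilute cloud edge:

```python
# Time-scale ratio R = tau_s / tau_d = 2|s_e| / (3 * A2 * rho_l0),
# Equation (ratiorhol), with A2 ~ 260 m^3/kg as quoted in the text.
A2 = 260.0                          # m^3/kg

def ratio(s_e, rho_l0):
    """tau_s/tau_d for dry-air supersaturation s_e and liquid water density rho_l0 (kg/m^3)."""
    return 2.0 * abs(s_e) / (3.0 * A2 * rho_l0)

r_bulk = ratio(-0.01, 0.3e-3)       # s_e = -0.01, rho_l0 = 0.3 g/m^3 -> ~0.09
r_edge = ratio(-0.1, 0.1e-3)        # s_e = -0.1,  rho_l0 = 0.1 g/m^3 -> ~2.6
```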
Consider, for example, a cumulus cloud with a typical \citep{rogers1989short} liquid water density $0.3$~g/m$^3$ in the bulk that contains premixed and slightly subsaturated air with supersaturation $-0.01$. Substituting $s_\dryairsubscript = -0.01$ and $\varrho_{\ell0} = 0.3$~g/m$^3$ into Equation~\eqnref{ratiorhol}, we find that $\mathscr{R} \sim 0.09$ for the local mixing processes in the bulk, so $\tau_\text{d}$ is more than ten times larger than $\tau_\text{s}$. At the cloud edge, by contrast, the liquid water density is lower, and mixing with dry air occurs. Assuming, say, $s_\dryairsubscript = -0.1$ and $\varrho_{\ell0} = 0.1$~g/m$^3$ for a local mixing process yields $\mathscr{R} = 2.6$, so $\tau_\text{d}$ is smaller than $\tau_\text{s}$. As we continue to move away from the cloud core, $\varrho_{\ell0}$ tends to zero, and $\tau_\text{s}/\tau_\text{d}$ tends to infinity. In conclusion, the supersaturation relaxation time and the droplet evaporation time for local mixing processes can differ by orders of magnitude.\\[0.1cm] \section{Statistical model} The statistical model, Equations~\maintextstatmodeqnnum, rests upon a probabilistic description of the microscopic Equations~1, in terms of two probability-density functions (PDF:s). The first PDF, denoted by $\mathscr{F}$, describes the droplets, and the second one, denoted by $f$, describes the air. The dynamics of Lagrangian fluid elements in Equations~\maintextstatmodeqnnum~constitutes a closed set of evolution equations for these PDF:s. The presence of droplets makes it necessary to consider two types of fluid elements, one for each PDF. Similar extensions of single-phase PDF models have already been made to describe combustion problems, where a gas interacts with droplets or other dispersed particles \citep{jenny2012modeling,stollinger2013pdf}.
In the following, we derive the statistical model in terms of $\mathscr{F}$ and $f$, since this allows us to highlight similarities to, and differences from, PDF models for single-phase flows \citep{pope1985pdf}. The first step is to define the two PDF:s. Their exact evolution equations are not closed, as they contain conditional averages that are not known in terms of the PDF:s. We employ standard closures, designed to approximate the effects of acceleration and diffusive scalar flux in single-phase flows \citep{pope1985pdf} and multiphase combustion \citep{jenny2012modeling,stollinger2013pdf,haworth2010progress}. The resulting closed set of equations constitutes a statistical model that describes the dynamics dictated by the microscopic Equations~1. As a final step, the model is cast into the form of Langevin equations \citep{pope2000turbulent}, invoking the concept of Lagrangian fluid elements. \subsection{Probabilistic description} We describe the droplets by the joint PDF of droplets and air, $\mathscr{F}(\ve U,S,R^2,\ve{x};t)$, which is the probability density of the event \begin{align}\label{eq:lagrdef} E_{\rm d}: \{ \ve x_\alpha(t) = \ve{x}, \quad r_\alpha(t)^2 = R^2 > 0, \quad \ve{u}(\ve{x}_\alpha(t),t) = \ve{U}, \quad \text{and} \quad s(\ve{x}_\alpha(t),t) = S\, \} \end{align} for a droplet $\alpha$. Our definition of probability densities of events is that of \cite{pope1985pdf}. The PDF $\mathscr{F}$ is {\em Lagrangian}, since it is a PDF evaluated along Lagrangian trajectories \citep{pope2000turbulent}, the trajectories of the advected droplets. By defining $\mathscr{F}$ as a density in squared droplet radius $r_\alpha^2(t)$, instead of droplet radius $r_\alpha(t)$, we avoid complications that arise because $\mathrm d r_\alpha/\mathrm d t$ diverges in the limit $r_\alpha(t) \rightarrow 0$ when we formulate the evolution equation of $\mathscr{F}$ below.
Droplets that evaporate according to Equation~\eqnref{droplradl} may adopt a zero radius, $r_\alpha^2(t)=0$. When $r_\alpha^2(t)$ becomes zero the droplet has evaporated completely and no longer plays any role for the dynamics. Therefore only droplets with $R^2>0$ are considered in $\mathscr{F}$. In this setup $\mathscr{F}$ is \eng{normalized}{normalised} to $1 - P_{\rm e}(t)$, reflecting that the total probability of $\mathscr{F}$ decreases as droplets completely evaporate. The observables that we solve for are contained in $\mathscr{F}$. We compute the droplet-size distribution as \begin{align}\label{eq:dsdstatmod} \mathscr{F}_r(R;t) = 2 R \int \mathscr{F}(\ve U,S,R^2,\ve{x};t) \mathrm d \ve U \; \mathrm d S \; \mathrm d \ve x, \end{align} and we compute the fraction of completely evaporated droplets as \begin{align}\label{eq:pestatmod} P_\esubscript(t) = 1 - \int \mathscr{F}(\ve U,S,R^2,\ve{x};t) \mathrm d \ve U \mathrm d S \mathrm d R^2 \mathrm d \ve x. \end{align} The air is described by the joint PDF of velocity and supersaturation, $f(\ve U,S;\ve x,t)$, which is the probability density of the event \begin{align}\label{eq:eulerdef} E_{\rm a}: \{\ve u(\ve x,t) = \ve{U} \quad \text{and} \quad s(\ve x,t) = S\, \}. \end{align} As opposed to $\mathscr{F}$, the PDF $f$ is \eng{normalized}{normalised} to unity at all times. Note also that $f$ is {\em Eulerian}, since it is a PDF of fluid properties at a fixed position \citep{pope2000turbulent}. PDF models for single-phase flows describe turbulent flows with one or several advected scalar fields \citep{pope2000turbulent}. They solve for an Eulerian joint PDF of velocity and scalars. This PDF is analogous to $f$, but may describe more than one advected scalar field. As in some combustion problems \citep{stollinger2013pdf,jenny2012modeling,haworth2010progress}, we must consider a dispersed phase (the droplets) that interacts with a fluid phase (the air). As a consequence, we need two PDF:s to describe the dynamics. 
Exact evolution equations of Lagrangian and Eulerian PDF:s are given in \cite{pope1985pdf}. To obtain the evolution equation for $\mathscr{F}$, we integrate a corresponding transition PDF in \cite{pope1985pdf} over $\mathscr{F}(\ve U,S,R^2,\ve{x};t=0)$, which is known from the initial conditions. The microscopic equations \eqnref{modelsimplifieddl} imply that the dynamics of $\mathscr{F}$ and $f$ read: \begin{adjustwidth}{-70pt}{-10pt} \begin{subequations} \label{eq:pdfmodel} \begin{eqnarray} &&\pder{\mathscr{F}}{t} + U_i\pder{\mathscr{F}}{x_i}+ \text{Da}_\text{d} S \pder{\mathscr{F}}{R^2} \notag \\ &&\hspace{10mm}= - \pder{}{U_i} \la \frac{1}{\turbreynolds } \nabla^2 u_i -\pder{p}{x_i}\vline \hspace{1pt}E_{\rm d} \ra \mathscr{F}- \pder{}{S} \la \frac{1}{\turbreynolds {\rm Sc}} \nabla^2 s\vline \hspace{1pt}E_{\rm d} \ra \mathscr{F} - \text{Da}_\text{s} \chi V \pder{}{S} \la \overline{r_\alpha(t)s(\ve x_\alpha,t)} \vline\hspace{1pt}E_{\rm d} \ra \mathscr{F}\, \label{eq:droplpdfuncl}, \\ \notag \\ &&\pder{f}{t} + U_i\pder{f}{x_i} = - \pder{}{U_i} \la \frac{1}{\turbreynolds } \nabla^2 u_i -\pder{p}{x_i}\vline \hspace{1pt}E_{\rm a} \ra f- \pder{}{S} \la \frac{1}{\turbreynolds {\rm Sc}} \nabla^2 s\vline \hspace{1pt}E_{\rm a} \ra f - \text{Da}_\text{s} \chi V \pder{}{S} \la \overline{r_\alpha(t)s(\ve x_\alpha,t)} \vline\hspace{1pt}E_{\rm a} \ra f\hspace{1pt}\, \label{eq:airpdfuncl}. \end{eqnarray} \end{subequations} \end{adjustwidth} Here, we follow \cite{pope1985pdf} and denote the components of positions and velocities by $(x_1,x_2,x_3)$, $(u_1,u_2,u_3)$ and $(U_1,U_2,U_3)$, and sum over repeated indices. Averages conditioned upon the events in Equations~\eqnref{lagrdef} and \eqnref{eulerdef} are denoted by $\la \ldots | E_{\rm d} \ra$ and $\la \ldots | E_{\rm a} \ra$. We have imposed that a droplet that has evaporated completely must remain at $r_\alpha^2(t) = 0$.
Since $\mathscr{F}$ describes droplets with positive radii, this dynamics implies a boundary condition for $\mathscr{F}$ at $R^2 = 0$, described by the probability current \citep{gardiner2009stochastic} \begin{align}\label{eq:currdef} \mathscr{J}_{\rm e}(\ve U,S,\ve{x};t) =\begin{cases} - \text{Da}_\text{d} S \Lim{R^2\rightarrow 0^+}\mathscr{F}(\ve U,S,R^2,\ve{x};t) \quad&\text{if}\quad{S < 0}\\ 0 & \text{otherwise} \end{cases} \end{align} in the negative $R^2$ direction. Here, the limit $R^2\rightarrow 0^+$ is required, since $R^2 > 0$ in $\mathscr{F}$. The probability current in Equation~\eqnref{currdef} is non-negative, $\mathscr{J}_{\rm e}(\ve U,S,\ve{x};t) \geq 0$, which reflects that droplets can reach $r_\alpha^2(t) = 0$ from above, but not from below. The rate of complete droplet evaporation can be written in terms of the probability current as \begin{align}\label{eq:decomp} \der{P_{\rm e}}{t} = \int \mathscr{J}_{\rm e}(\ve U,S,\ve{x};t) \mathrm d \ve U\mathrm d S \mathrm d\ve{x}. \end{align} In addition to the boundary condition at $R^2 = 0$, $\mathscr{F}$ and $f$ inherit the spatial periodicity of the slab configuration [Figure~1({\bf b})]. The variables $\ve U$ and $S$ are not bounded, and require no boundary conditions. The initial conditions for $\mathscr{F}$ and $f$ are determined by the initial supersaturation profile (Equation~\eqnref{hs}~or~\eqnref{hssimple}), the initial spatial distribution of droplets, the initial droplet-size distribution, and the PDF of velocity $\varphi(\ve U;\ve x,t)$, which is the probability density of the event $\ve u(\ve x,t) = \ve U$. We discuss the PDF $\varphi(\ve U;\ve x,t)$ below. The evolution equations \eqnref{pdfmodel}, together with their initial and boundary conditions, constitute an exact description of the dynamics of $\mathscr{F}$ and $f$. The terms on the left-hand sides of Equations~\eqnref{droplpdfuncl} and \eqnref{airpdfuncl} are in closed form, as they can be computed in terms of the PDF:s.
The terms on the right-hand sides are not in closed form, and must be approximated in order to solve for the joint evolution of $\mathscr{F}$ and $f$. They correspond to the fluctuating acceleration and diffusive flux of supersaturation that a fluid element experiences. \subsection{Closure} We follow \cite{pope2000turbulent} and model the fluctuating acceleration of fluid elements with velocities $\ve {u}(t)$ in statistically stationary, homogeneous and isotropic turbulence as a three-dimensional Ornstein-Uhlenbeck process: \begin{align} \mathrm d \ve u = -\frac{3}{4}C_0 \ve u \mathrm d t + \left ( \frac{3}{2} C_0 \right )^\frac{1}{2} \mathrm d \ve \eta \,. \label{eq:velmcsuppl} \end{align} In this Langevin equation, $C_0$ is a constant that depends on the Taylor-scale Reynolds number $\text{Re}_\lambda =\sqrt{10\turbreynolds }$ \citep{pope2011simple}, and $\mathrm d \ve \eta$ is the increment of a three-dimensional isotropic Wiener process \citep{pope2000turbulent}. We model the $\text{Re}_\lambda$-dependence of $C_0$ according to the empirical fit of \cite{pope2011simple}, $C_0 = 6.5/(1+140 \text{Re}_\lambda^{-4/3})^\frac{3}{4}$. Equation~\eqnref{velmcsuppl} dictates that each velocity component of a fluid element evolves stochastically according to an independent one-dimensional Ornstein-Uhlenbeck process with dimensional~variance $2\k/3$. Another consequence of Equation~\eqnref{velmcsuppl} is that the auto-correlation function $\rho(s) = \la u(t)u(t+s)\ra$ (dimensional~$t$ and $s$) of the velocity component $u(t)$ of a fluid element decays exponentially at the time-scale $4 \tau_L/(3 C_0)$. In our non-dimensional~units, the Ornstein-Uhlenbeck processes have variance unity and auto-correlation time $4 /(3 C_0)$. With Equation~\eqnref{velmcsuppl}, the PDF of fluid-element velocity relaxes to a joint normal distribution at this time scale \citep{pope2000turbulent}.
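In non-dimensional units, these properties are easy to verify numerically. The sketch below integrates one velocity component of Equation~\eqnref{velmcsuppl} with the Euler-Maruyama scheme; the Reynolds number, time step and ensemble size are illustrative choices, and the stationary variance should come out close to unity:

```python
import numpy as np

# Euler-Maruyama integration of du = -(3/4) C0 u dt + sqrt(3/2 C0) d(eta)
# for an ensemble of trajectories started in the stationary state.
rng = np.random.default_rng(0)
Re_lam = 100.0                                        # illustrative value
C0 = 6.5 / (1 + 140 * Re_lam ** (-4 / 3)) ** 0.75     # empirical fit quoted above

dt, n_steps, n_traj = 1e-3, 5000, 20_000
u = rng.standard_normal(n_traj)                       # stationary initial condition
for _ in range(n_steps):
    u += -0.75 * C0 * u * dt + np.sqrt(1.5 * C0 * dt) * rng.standard_normal(n_traj)

var = u.var()   # should remain close to the stationary value 1
```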
The joint normal PDF of fluid-element velocity components and the exponential decay of $\rho(s)$ dictated by Equation~\eqnref{velmcsuppl} is observed empirically \citep{pope2000turbulent,pope2011simple}. In the limit $\text{Re}_\lambda \rightarrow \infty$, $C_0$ approaches 6.5, a measured value for the Kolmogorov constant that specifies how the second order Lagrangian structure function depends on the Lagrangian auto-correlation time and the mean dissipation rate in high-Reynolds number turbulence \citep{pope2011simple}. Consistently with the Kolmogorov theory of turbulence \citep{kolmogorov1941local}, the statistics pertaining to the motion of a fluid element subject to Equation~\eqnref{velmcsuppl} are Reynolds-number independent at high Reynolds numbers. Equation~\eqnref{velmcsuppl} does not take into account that the dissipation rate experienced by a fluid element fluctuates intermittently around the mean dissipation rate $\varepsilon$, as predicted by Kolmogorov's refined theory of turbulence \citep{kolmogorov1962refinement}. This intermittency can be taken into account using the refined Langevin model of \cite{pope1990velocity}, which is an extension of Equation~\eqnref{velmcsuppl}. Since Equation~\eqnref{velmcsuppl} reproduces the observed variance and autocorrelation time of fluid-element velocities, it ensures that the empirically observed turbulent diffusion of passive scalars is correctly described \citep{pope2000turbulent}. Equation~\eqnref{velmcsuppl} for the velocity of fluid elements provides closure for the first terms on the right-hand sides of Equations~\eqnref{droplpdfuncl} and \eqnref{airpdfuncl}. The joint PDF of droplets and air $\mathscr{F}$ is a probability density in the velocity of a fluid element, a fluid element that moves together with one of the advected droplets. The closure for Equation~\eqnref{droplpdfuncl} therefore follows directly from Equation~\eqnref{velmcsuppl}.
The joint PDF of velocity and supersaturation $f$ is not a probability density in the velocity of a fluid element. The closure implied for Equation~\eqnref{airpdfuncl} is therefore somewhat more intricate. As explained by \cite{pope2000turbulent}, the crucial step in the derivation of this closure is to consider the relation between (the Eulerian) $f$ and a corresponding Lagrangian PDF that is conditioned upon the initial position of a fluid element. In an incompressible flow, $f$ is obtained by integrating this Lagrangian PDF over initial positions of fluid elements. Equation~\eqnref{velmcsuppl} prescribes a Fokker-Planck equation for the Lagrangian PDF. Since this Fokker-Planck equation does not depend upon the initial position of the fluid element, a Fokker-Planck equation of the same form follows for $f$. Following \cite{pope2000turbulent}, we conclude that Equation~\eqnref{velmcsuppl} implies that the first terms on the right-hand sides of Equations~\eqnref{droplpdfuncl} and \eqnref{airpdfuncl} are given by two Fokker-Planck equations: \begin{subequations} \label{eq:velpdfclosure} \begin{eqnarray} &&- \pder{}{U_i} \la \frac{1}{\turbreynolds } \nabla^2 u_i -\pder{p}{x_i}\vline \hspace{1pt}E_{\rm d} \ra \mathscr{F}=\frac{3}{4} C_0 \pder{}{U_i} U_i \mathscr{F} + \frac{3}{4} C_0 \pder{^2\mathscr{F}}{U_i \partial U_i}, \quad \text{and} \\ \notag \\ &&- \pder{}{U_i} \la \frac{1}{\turbreynolds } \nabla^2 u_i -\pder{p}{x_i}\vline \hspace{1pt}E_{\rm a} \ra f=\frac{3}{4} C_0 \pder{}{U_i} U_i f + \frac{3}{4} C_0 \pder{^2f}{U_i \partial U_i}\, \label{eq:fdsfsdf}. \end{eqnarray} \end{subequations} In addition to providing closure to the first terms on the right-hand sides of Equations~\eqnref{droplpdfuncl} and \eqnref{airpdfuncl}, Equation~\eqnref{velmcsuppl} sets the initial conditions of $\mathscr{F}$ and $f$. Statistical homogeneity and stationarity dictate that the PDF of velocity $\varphi (\ve U;\ve x,t)$ discussed above is independent of $\ve x$ and $t$.
For the \eng{modeled}{modelled} turbulence to be statistically stationary, Equation~\eqnref{velmcsuppl} requires that $\varphi (\ve U;\ve x,t)$ be joint normal in the velocity components. For the second terms on the right-hand sides of Equations~\eqnref{droplpdfuncl} and \eqnref{airpdfuncl}, we make the following closure: \begin{align} &&\la \frac{1}{\turbreynolds {\rm Sc}} \nabla^2 s\vline \hspace{1pt}E_{\rm d} \ra = \la \frac{1}{\turbreynolds {\rm Sc}} \nabla^2 s\vline \hspace{1pt}E_{\rm a} \ra = -\frac{1}{2} C_\phi (S-\la s(\ve x,t)\ra). \label{eq:diffclosure} \end{align} Here, $C_\phi$ is an empirical constant, and $\la s(\ve x,t) \ra$ is defined as the position-dependent average \begin{align}\label{eq:smeandef} \la s (\ve x,t)\ra = \int S f(\ve U,S;\ve x,t) \mathrm d \ve U\, \mathrm d S. \end{align} Equation~\eqnref{diffclosure} ensures that the combined effect of molecular diffusion and turbulent mixing results in the correct decay of the variance ${\rm Var} [ s(\ve x,t) ] = \la s (\ve x,t)^2\ra - \la s (\ve x,t)\ra^2$ of supersaturation, at least under homogeneous conditions ($\partial f/\partial x_i = 0$). Here, $\la s (\ve x,t)^2\ra$ is the mean-squared supersaturation, given by Equation~\eqnref{smeandef} with $S$ replaced by $S^2$ in the integrand. Experiments and DNS support that, under homogeneous conditions, the variance of supersaturation in fully developed turbulence decays as \begin{align}\label{eq:supsatvardecay} \frac{\mathrm d}{\mathrm d t} {\rm Var} [ s(\ve x,t) ] =- C_\phi {\rm Var} [ s(\ve x,t) ], \end{align} with $C_\phi \sim 2$, independently of the molecular diffusivity (independently of $\rm Sc$) \citep{pope2000turbulent}. This decay rate is reproduced by Equation~\eqnref{diffclosure}. Furthermore, Equation~\eqnref{diffclosure} respects the boundedness condition of a passive scalar \citep{pope2000turbulent}. This means that, in the absence of evaporation or condensation, the supersaturation remains bounded between its maximal and minimal values.
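Under homogeneous conditions and without the evaporation term, the closure in Equation~\eqnref{diffclosure} makes each fluid-element supersaturation relax as $\mathrm d s/\mathrm d t = -\tfrac{1}{2}C_\phi(s - \la s\ra)$, so the ensemble variance decays as $\exp(-C_\phi t)$, as in Equation~\eqnref{supsatvardecay}. A minimal numerical check (sample size, initial spread and time step are illustrative):

```python
import numpy as np

# IEM-type relaxation of an ensemble of supersaturation samples; the variance
# should follow Var(t) = Var(0) * exp(-C_phi * t), Equation (supsatvardecay).
C_phi, dt, n_steps = 2.0, 1e-3, 1000
rng = np.random.default_rng(2)
s = rng.normal(0.0, 0.1, 50_000)          # initial fluctuations
var0 = s.var()

for _ in range(n_steps):
    s += -0.5 * C_phi * (s - s.mean()) * dt   # relax toward the ensemble mean

expected = var0 * np.exp(-C_phi * n_steps * dt)
```

The relaxation preserves the ensemble mean exactly, while every deviation from the mean decays at the rate $C_\phi/2$, so the variance decays at the rate $C_\phi$.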
Other aspects of the decay of supersaturation fluctuations may not be correctly reproduced by Equation~\eqnref{diffclosure} \citep{pope2000turbulent}. For the third terms on the right-hand sides of Equations~\eqnref{droplpdfuncl} and \eqnref{airpdfuncl}, we make the following closure: \begin{align} \la \overline{r_\alpha(t)s(\ve x_\alpha,t)} \vline\hspace{1pt}E_{\rm d} \ra = \la \overline{r_\alpha(t)s(\ve x_\alpha,t)} \vline\hspace{1pt}E_{\rm a} \ra = \la r(t) s(\ve x,t)\ra\, \label{eq:evapclosure}, \end{align} where $\la r(t) s(\ve x,t) \ra$ is the position-dependent average \begin{align} \la r(t) s(\ve x,t) \ra = \int R S \mathscr{F}(\ve U,S,R^2,\ve{x};t) \mathrm d \ve U\, \mathrm d S \,\mathrm d R^2. \end{align} Equation~\eqnref{evapclosure} ensures that the liquid-water potential temperature analogue $\theta$ is conserved under the statistical model. With this closure we replace the local average $\overline{r_\alpha(t) s(\ve{x}_\alpha,t)}$, which describes local \eng{neighborhoods}{neighbourhoods} of droplets, by $\la r(t) s(\ve{x},t) \ra$, which is an expectation for a single droplet. As a consequence, the self-limiting nature of droplet evaporation may not be accurately described by the statistical model. To model local conditional averages as in Equation~\eqnref{evapclosure} is a standard way to obtain closure for PDF models that describe combustion of particles in turbulence \citep{jenny2012modeling,stollinger2013pdf,haworth2010progress}.
Inserting Equations~\eqnref{velpdfclosure}, \eqnref{diffclosure} and \eqnref{evapclosure} into Equations~\eqnref{pdfmodel}, we obtain a model for the joint evolution of $\mathscr{F}$ and $f$: \begin{adjustwidth}{-70pt}{-10pt} \begin{subequations} \label{eq:pdfmodelclosed} \begin{eqnarray} &&\pder{\mathscr{F}}{t} + U_i\pder{\mathscr{F}}{x_i} + \text{Da}_\text{d} S \pder{\mathscr{F}}{R^2} \notag \\ &&\hspace{10mm}= \frac{3}{4} C_0 \pder{}{U_i} U_i \mathscr{F} + \frac{3}{4} C_0 \pder{^2\mathscr{F}}{U_i \partial U_i} + \frac{1}{2} C_\phi \pder{}{S} (S-\la s(\ve x,t)\ra) \mathscr{F} - \text{Da}_\text{s} \chi V \pder{}{S} \la r(t) s(\ve x,t)\ra \mathscr{F}\, \label{eq:droplpdfunclclosed}, \\ \notag \\ &&\pder{f}{t} + U_i\pder{f}{x_i} = \frac{3}{4} C_0 \pder{}{U_i} U_i f + \frac{3}{4} C_0 \pder{^2f}{U_i \partial U_i} + \frac{1}{2} C_\phi \pder{}{S} (S-\la s(\ve x,t)\ra) f - \text{Da}_\text{s} \chi V \pder{}{S} \la r(t) s(\ve x,t)\ra f \, \label{eq:airpdfunclclosed}. \end{eqnarray} \end{subequations} \end{adjustwidth} These equations are the statistical model, described as a closed set of evolution equations for $\mathscr{F}$ and $f$. We note that $\mathscr{F}$ and $f$ couple to each other through the averages $\la s(\ve x,t) \ra$ and $\la r(t)s(\ve x,t) \ra$. This coupling is a consequence of the fact that droplets are affected by the supersaturation of the air, and that the supersaturation of the air is affected by evaporating droplets. \subsection{Lagrangian dynamics} The PDF dynamics in Equations~\eqnref{pdfmodelclosed} can be cast into a set of equations that describe the dynamics of Lagrangian fluid elements \citep{pope2011simple}.
The dynamics of Lagrangian fluid elements with positions $\ve x(t)$, velocities $\ve u(t)$, and supersaturations $s(t)$ that corresponds to Equation~\eqnref{airpdfunclclosed} is given by Equation~\eqnref{velmcsuppl}, together with: \begin{subequations} \label{eq:statmodsuppl} \begin{eqnarray} &&\hspace*{-5mm}\der{\ve x}{t} = \ve u \label{eq:posmcsuppl}\, ,\\ &&\hspace*{-5mm}\der{s}{t} = - \frac{1}{2}C_\phi \left ( s -\langle s(\ve x,t)\rangle \right ) - \text{Da}_\text{s}\, \chi V \langle r(t) s( \ve x,t)\rangle\, \label{eq:supsatmcsuppl}. \end{eqnarray} \end{subequations} Since droplets are advected, they move together with Lagrangian fluid elements. Therefore, some Lagrangian fluid elements coincide with droplets. We describe such fluid elements by the radii $r(t)$ of the droplets that they coincide with, in addition to their positions, velocities and supersaturations. Their dynamics corresponds to Equation~\eqnref{droplpdfunclclosed}, and is given by Equations~\eqnref{velmcsuppl} and \eqnref{statmodsuppl}, together with \begin{align} \der{r^2}{t} = \begin{cases} \text{Da}_\text{d} s \quad \text{if} \quad r^2(t) > 0\\ 0 \quad \hspace{6mm}\text{if} \quad r^2(t) = 0 \end{cases} \label{eq:radmcsuppl}. \end{align} The relations between fluid-element dynamics and evolution equations of Eulerian and Lagrangian PDF:s are derived by \cite{pope1985pdf}. To see why Equations~\eqnref{velmcsuppl}, \eqnref{statmodsuppl}, and \eqnref{radmcsuppl} correspond to Equations~\eqnref{pdfmodelclosed}, we give a brief summary of the derivations for our statistical-model dynamics. First, one \eng{recognizes}{recognises} that fluid elements with droplets sample $\mathscr{F}$, and that fluid elements without droplets sample $f$. Second, one formulates the Fokker-Planck equations for the two types of fluid elements. The Fokker-Planck equation for fluid elements with droplets is Equation~\eqnref{droplpdfunclclosed}, subject to the boundary condition \eqnref{currdef}.
To obtain Equation~\eqnref{airpdfunclclosed} from the fluid elements without droplets, one integrates their Fokker-Planck equation over initial positions. Equation~\eqnref{airpdfunclclosed} then follows, as a consequence of the relation between Lagrangian and Eulerian PDFs mentioned above. \subsection{Computation of observables and statistical one-dimensionality} Since the PDF dynamics of our statistical model is implied by the dynamics of Lagrangian fluid elements, we solve Equations~\eqnref{pdfmodelclosed} for the PDFs by evolving fluid elements according to Equations~\eqnref{velmcsuppl}, \eqnref{statmodsuppl} and \eqnref{radmcsuppl}. The averages $\la s(\ve x,t)\ra$ and $\la r(t)s(\ve x,t)\ra$ are known in terms of the fluid elements, since the fluid elements sample $\mathscr{F}$ and $f$. In practice, these averages are computed using kernel estimates, as described by \cite{pope2000turbulent}. Since $\la s(\ve x,t)\ra$ and $\la r(t)s(\ve x,t)\ra$ are known, Equations~\eqnref{velmcsuppl}, \eqnref{statmodsuppl} and \eqnref{radmcsuppl} form a closed set of equations that governs the simultaneous evolution of an ensemble of fluid elements. Since fluid elements with droplets sample $\mathscr{F}$, we can compute the droplet-size distribution and the fraction of completely evaporated droplets from them according to Equations~\eqnref{dsdstatmod} and \eqnref{pestatmod}. To compute our results, we use the one-dimensional form of the statistical model shown in the main text. This is possible, since the dynamics that we model is statistically one-dimensional. Statistical one-dimensionality follows from the fact that the turbulence is statistically homogeneous and isotropic, and that our initial conditions are one-dimensional. The direction of statistical inhomogeneity is $x$.
The PDFs $\mathscr{F}(\ve U,S,R^2,\ve{x};t)$ and $f(\ve U,S;\ve x,t)$, as well as the averages $\la r(t)s(\ve x,t) \ra$ and $\la s(\ve x,t) \ra$ that they form, depend on the three-dimensional position $\ve x$ only through $x$. As explained by \cite{pope1985pdf}, it is therefore sufficient to evolve only the $x$-components of fluid-element positions and velocities. In the main text, we have taken the statistical one-dimensionality into account and replaced $\la r(t)s(\ve x,t) \ra$ and $\la s(\ve x,t) \ra$ by $\la r(t)s( x,t) \ra$ and $\la s( x,t) \ra$. \subsection{Averaging} The mean cubed radius of droplets and the volume average of supersaturation are required to compute the liquid-water potential temperature analogue $\theta$. In the statistical model the mean cubed radius of droplets is given by: \begin{align}\label{eq:r3statmod} \big\langle r(t)^3 \big\rangle = \frac{1}{ 1-P_\esubscript(t) } \int R^3 \mathscr{F}(\ve U,S,R^2,\ve{x};t) \,\mathrm d \ve U \mathrm d S \mathrm d R^2 \mathrm d \ve x\,. \end{align} The volume average of supersaturation is given by: \begin{align} \la s(t) \ra = \frac{1}{V} \int S f(\ve U,S;\ve{x},t)\, \mathrm d \ve U \mathrm d S \mathrm d \ve x\,. \end{align} \section{Simulation parameters}\label{sec:params} The parameters of our simulations are listed in Table~\ref{tab:param}. The parameters for Figures~2~to~4 are taken from the DNS of \cite{kumar2012extreme,kumar2014lagrangian,kumar2018scale}. The results of \cite{kumar2014lagrangian,kumar2018scale} and \cite{Andrejczuk:2006} shown in these figures are not obtained using Equations~\maintextmicromodeqnnum, the microscopic equations that we model. The interpretation of the simulation setups of \cite{kumar2014lagrangian,kumar2018scale} and \cite{Andrejczuk:2006} in terms of our parameters is discussed in Sections \ref{otherfundamental}~and~\ref{sec:DNS}.
\cite{kumar2018scale} did not give their values of $\profilefactor$, $\profileexponent$, and $s_\cloudairsubscript$, but we obtained them through private communication \citep{privcommkumar1}. For Figure~4, we use the values of $C_0$ and $L$ of Run 5 of \cite{kumar2018scale}, since we want to compare with DNS and are interested in the large-Reynolds-number regime. All simulations use $C_\phi = 2$, a common estimate for this empirical parameter \citep{pope2000turbulent}. In Figure~5 we \eng{analyze}{analyse} measurements from the top panel of Figure~3 of \cite{beals2015holographic}. In the supplemental material of that paper, the authors show droplet-size distributions and droplet-number densities of the undiluted cloud air. The distribution of droplet radii that we use to analyse the top panel in Figure~3 of \cite{beals2015holographic} is very close to Gaussian, as explained in Section~\ref{sec:beals}. For our Figure~5, we fit a Gaussian to this distribution, and the fit is parameterised by the non-dimensional~standard deviation $\sigma_0 = 0.1386$. We also use $C_0 = 6.5$ and $L = 2.28$ for Figure~5. This value of $L$ is an estimate of the constant of proportionality between the dimensional~domain size and $U\tau_L$ at large Reynolds numbers. We compute this estimate in two steps. First, we estimate the turbulent kinetic energy associated with a spatial scale $\ell$ by integrating the energy-spectrum function \citep{pope2000turbulent} from the wave number $k = 2 \pi/\ell$: \begin{align}\label{eq:kestimate} \k = \int_{2 \pi/\ell}^\infty C \varepsilon^{\frac{2}{3}} k^{-\frac{5}{3}} \mathrm d k = \frac{3}{2} C \lp \frac{\varepsilon \ell}{2 \pi} \rp^{\frac{2}{3}}. \end{align} Here, $C$ is a Kolmogorov constant that has been measured to be $C = 1.5$ \citep{pope2000turbulent}. Second, we use this value of the Kolmogorov constant and find \begin{align}\label{eq:lestimate} \frac{\ell}{U \tau_L} = \frac{4\pi}{3} C^{-\frac{3}{2}} = 2.28.
\end{align} Since 2.28 is the constant of proportionality between an arbitrary spatial scale and $U\tau_L$, it is also the constant of proportionality between the dimensional~domain size and $U\tau_L$. We therefore conclude that our estimate in Equation~\eqnref{kestimate} gives $L = 2.28$ and $V = L^3 = 11.85$. Table~\ref{tab:param} provides a set of independent parameters that, together with $C_\phi = 2$, specify our simulations. Also shown are a few dependent parameters. These include the Damk\"{o}hler number ratio $\mathscr{R} = \text{Da}_\text{d}/\text{Da}_\text{s}$ and the critical Damk\"{o}hler number ratio \begin{align} \mathscr{R}_{\rm c} = -\frac{2}{3} \frac{\chi}{\left \langle s (0) \right \rangle}, \end{align} where $\left \langle s (0) \right \rangle$ is the initial volume average of the supersaturation (defined below). Also included is the coefficient $\chi_0$ that determines $\left \langle s (0) \right \rangle$ together with $\chi$ and $s_\cloudairsubscript$. The initial volume average of the supersaturation can be written \begin{align}\label{eq:chizero} \left \langle s (0) \right \rangle = (1+s_\cloudairsubscript)(\chi + \chi_0)-1, \end{align} a replacement that is made in the main text. For Figure~5, we have $\chi_0 = s_\cloudairsubscript = 0$, so $\left \langle s (0) \right \rangle = \chi - 1$. For Figures~2~to~4, we have a non-zero $\chi_0$, since the initial supersaturation profile is smooth. Here, the parameters $\profilefactor$ and $\profileexponent$ that determine the initial supersaturation profile are tied to specific values of $\chi$. It is only for these values of $\chi$ that the initial supersaturation is approximately zero at $x = \pm \chi L/2$, the edges of the cloud slab [Figure~1({\bf b})]. For a general $\chi$, the volume average of the initial supersaturation is given by Equation~\eqnref{chizero}, with a value of $\chi_0$ that can be computed from $\profilefactor$, $\profileexponent$, and $s_\cloudairsubscript$.
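As a numerical check, the final step of the estimate for $L$ can be reproduced in a few lines of Python. We take the stated result $\ell/(U\tau_L) = (4\pi/3)\,C^{-3/2}$ at face value; the quoted values of $L$ and $V$ follow directly:

```python
import math

C = 1.5                                   # Kolmogorov constant
L = (4.0 * math.pi / 3.0) * C ** (-1.5)   # ell / (U * tau_L)
V = L ** 3                                # non-dimensional domain volume
print(round(L, 2), round(V, 2))           # prints: 2.28 11.85
```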
\section{Numerics}\label{sec:numparam} We solve the statistical model using computer simulations that evolve an ensemble of Lagrangian fluid elements according to Equations~\maintextstatmodeqnnum. Fluid elements with droplets are \eng{initialized}{initialised} uniformly over the interval $-\chi L/2 < x < \chi L/2$. To ensure that $\la s (x,t)\ra$ can be computed in the whole simulation domain, we \eng{initialize}{initialise} fluid elements without droplets uniformly over $-L/2 < x < L/2$ \citep{pope2000turbulent}. We solve for the supersaturation $s(t)$ and squared radius $r^2(t)$ of fluid elements using Euler's method \citep{rade2013mathematics}. We use a dynamical time step that ensures that the absolute errors in $s(t)$ and $r^2(t)$ at each time step are smaller than a given threshold. At each time step, the position $x(t)$ and velocity $u(t)$ of a fluid element are drawn from their exact joint distribution, which is conditioned on the position and velocity before the time step \citep{gillespie1996exact}. The averages $\la s(x,t) \ra$ and $\la r(t) s(x,t) \ra$ are computed from the fluid elements on a regular mesh. We obtain the mean-field values at the positions of fluid elements by linear interpolation. The spatial variations of these averages reduce with time due to mixing, and we increase the computational efficiency of our simulations by neglecting these spatial variations when they become very small. That $\la s(x,t) \ra$ and $\la r(t) s(x,t) \ra$ become essentially independent of $x$ does not mean that the simulated system becomes spatially uniform. In particular, individual \eng{realizations}{realisations} of the simulated system may still exhibit supersaturation fluctuations that affect the evaporation of droplets for some time. Figures~2~and~3 show droplet-size distributions and time series of $P_e(t)$ for three simulations. 
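The exact joint update of position and velocity over one time step is the standard result for the Ornstein--Uhlenbeck process \citep{gillespie1996exact}. A sketch of the required first and second moments, written for a generic OU velocity $\mathrm{d}u = -(u/\tau)\,\mathrm{d}t + \sqrt{\sigma^2}\,\mathrm{d}W$; here $\tau$ and $\sigma^2$ are generic placeholders for illustration, not the specific coefficients of Equation~\eqnref{velmcsuppl}:

```python
import math

def ou_update_moments(u0, dt, tau, sigma2):
    """Moments of the exact joint distribution of (x - x0, u) after a time
    step dt, for du = -(u/tau) dt + sqrt(sigma2) dW (Gillespie 1996).
    tau and sigma2 are generic OU parameters, used here for illustration."""
    mu = math.exp(-dt / tau)
    mean_dx = u0 * tau * (1.0 - mu)   # mean position increment
    mean_u = u0 * mu                  # mean velocity
    var_u = 0.5 * sigma2 * tau * (1.0 - mu * mu)
    var_x = 0.5 * sigma2 * tau**3 * (2.0 * dt / tau - 3.0 + 4.0 * mu - mu * mu)
    cov_xu = 0.5 * sigma2 * tau**2 * (1.0 - mu) ** 2
    return mean_dx, mean_u, var_x, var_u, cov_xu
```

One time step then draws $(\Delta x, u)$ from the bivariate normal with these moments, for example via a $2\times 2$ Cholesky factor of the covariance matrix.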
To establish convergence for these simulations, we simply check that the results do not change as we vary the parameters and thresholds that control the numerics. Figures~4~and~5 show results from hundreds of simulations. To address the convergence of these simulations, we test the accuracy of our simulations for representative initial conditions and representative combinations of $\text{Da}_\text{d}$, $\mathscr{R}$ and $\chi$. In each test, we vary the numerical parameters separately to exclude systematic errors. To estimate the statistical errors, we compute several independent \eng{realizations}{realisations} for each combination of the numerical parameters. These tests allow us to conclude that no systematic errors pertain to the results in Figures~4~and~5, and that the values of $P_\esubscript^*$ that we compute have relative errors that are less than 5\%, and/or absolute errors that are smaller than $10^{-3}$. The simulation results in Figures~4~and~5 are either $P_\esubscript^*$ or functions of $P_\esubscript^*$, so none of our conclusions in the main text are affected by numerical errors. \section{Parameters of DNS in Figure 4}\label{sec:DNS} To place DNS of \cite{Andrejczuk:2006,kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale} in our phase diagram (Figure~4), we extract their dimensional~parameters that determine $\text{Da}_\text{d}$ and $\mathscr{R}/\mathscr{R}_{\rm c}$. The computed values are listed in Table~\tabref{dnskumar} for \cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale}, and in Table~\tabref{dnsandr} for \cite{Andrejczuk:2006}.
\subsection{DNS of \cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale}} It is straightforward to compute $\text{Da}_\text{d}$, $\mathscr{R}$, and $\mathscr{R}_{\rm c}$ for the simulations in \cite{kumar2012extreme,Kumar:2013,kumar2018scale} and three simulations in \cite{kumar2014lagrangian}, since they are for stationary turbulence with Lagrangian droplets, just like Equations~1. First, one casts the dynamics into the form of Equations~\eqnref{modelsimplified}, using the simplifications described in Section~\ref{otherfundamental}. After that, one non-dimensionalises~as described in Section~\ref{sec:dimensionless}. The simulations of \cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale} shown in Figure~4 are the simulations for which one can conclude whether $P_\text{e}^* > 10 \%$ or $P_\text{e}^* < 10 \%$. \subsection{DNS of \cite{Andrejczuk:2006}} Figure~4 shows 42 simulations of \cite{Andrejczuk:2006}. \cite{Andrejczuk:2006} report 58 simulations, but we can only read off whether $P_\text{e}^* > 10 \%$ or not for 50 of them. Out of these 50 simulations, eight fall outside the plot range of Figure~4. We therefore show 42 simulations of \cite{Andrejczuk:2006} in Figure~4. The simulations of \cite{Andrejczuk:2006} are for decaying turbulence and initial conditions in the form of randomly distributed cloudy filaments in dry air. The parameter $\text{Da}_\text{d}$ in Equation~\eqnref{das} is defined for the stationary turbulence and the initial conditions of our simulations, so it has no direct counterpart in \cite{Andrejczuk:2006}. To nevertheless discuss the simulations of \cite{Andrejczuk:2006}, we compute a time-scale ratio that incorporates the same physics as $\text{Da}_\text{d}$. We then place a simulation in Figure~4 by assuming that this time-scale ratio is equal to $\text{Da}_\text{d}$.
The time-scale ratio is computed as $\text{Da}_\text{d} = \tau_L/\tau_{\rm d}$, but with $\tau_L$ taken as $\k/\varepsilon$ at the time when $\k$ decayed fastest in \cite{Andrejczuk:2006}. We estimate these values of $\k$ and $\varepsilon$ from plots in \cite{Andrejczuk:2006}. The coefficient $A_3$ enters the dynamics multiplied by an unnamed factor $\gamma_{A_3}$ in \cite{Andrejczuk:2006}, and we read off $A_3/\gamma_{A_3} = 10^{-10}$ m$^2$/s. We extract the density $\varrho_a$ of air and the density $\varrho_{\rm w}$ of pure liquid water \cite{Andrejczuk:2006,andrejczuk2004numerical}. The expression for the saturation pressure of water vapour is not given in \cite{Andrejczuk:2006}, so we cannot compute their value of $A_2$ using Equation~\eqnref{a2other}. We therefore use the value $A_2 = 413$~m$^3$/kg that we compute for \cite{kumar2014lagrangian}. Analysing the DNS results of \cite{Andrejczuk:2006} qualitatively is nevertheless justified, since the temperatures and water-vapour densities in \cite{Andrejczuk:2006} and \cite{kumar2014lagrangian} imply that their values of $A_2$ are similar. We compute $s_e$ as the relative humidity of the non-cloudy initial filaments minus one, where the relative humidity is extracted from tables in \cite{Andrejczuk:2006}. We also extract the volume fraction $\chi$ of cloudy air, and the liquid-water mixing ratio $q_\ell$ of the initially cloudy filaments from these tables. We interpret the initially cloudy filaments as saturated and possessing sharp edges, even though this is not stated explicitly in \cite{Andrejczuk:2006}, so that $\mathscr{R}_c = -2 \chi/[3(\chi -1)]$. \cite{Andrejczuk:2006} do not use Lagrangian droplets. Instead, they employ a field description in which droplets are binned into 16 size categories. The three initially populated bins are centered at droplet radii $8$, $8.75$, and $9.5$ $\mu$m, and they contained 25\%, 50\%, and 25\% of the liquid-water mixing ratio $q_\ell$.
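The conversion from these bin fractions to a droplet-number density and an initial volume radius, detailed next, can be sketched numerically. The values of $q_\ell$ and $\varrho_a$ below are hypothetical placeholders; the actual values are extracted from \cite{Andrejczuk:2006}:

```python
import math

# Hypothetical placeholder values; the actual q_l and rho_a are extracted
# from the tables of Andrejczuk et al. (2006).
q_l, rho_a, rho_w = 3.0e-4, 1.0, 1000.0             # kg/kg, kg/m^3, kg/m^3
bins = {8.0e-6: 0.25, 8.75e-6: 0.50, 9.5e-6: 0.25}  # radius [m] -> fraction of q_l

# n_r = 3 xi_r q_l rho_a / (4 pi r^3 rho_w), summed over the three bins
n0 = sum(3.0 * xi * q_l * rho_a / (4.0 * math.pi * r**3 * rho_w)
         for r, xi in bins.items())

# initial volume radius r_0 = [3 q_l rho_a / (4 pi n_0 rho_w)]^{1/3}
r0 = (3.0 * q_l * rho_a / (4.0 * math.pi * n0 * rho_w)) ** (1.0 / 3.0)
```

With the 25/50/25 weighting, $r_0$ lands between the smallest and largest bin radii, as it must for a volume-weighted mean radius.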
We compute the number density of droplets within a bin centered at the droplet radius $r$ that contains a fraction $\xi_r$ of the liquid-water mixing ratio as $n_r = 3 \xi_r q_\ell \varrho_a/(4 \pi r^3 \varrho_{\rm w})$. By summing up the number densities for $r=8$, $8.75$, and $9.5$ $\mu$m, we find the droplet-number density $n_0$ of the initially cloudy filaments. We compute the initial volume radii of droplets as $r_0 = [ 3 q_\ell \varrho_a/(4 \pi n_0 \varrho_{\rm w})]^{1/3}$. \section{Estimates used to analyse data from \cite{beals2015holographic}}\label{sec:beals} The black crosses in Figure~5{\bf(a)} are from the top panel of Figure~3 of \cite{beals2015holographic}, and represent droplets measured during a research flight through a convective cloud. Droplet-size distributions and thermodynamic conditions are not given for this flight, so we estimate them using data from a typical flight detailed in the supplementary materials of \cite{beals2015holographic}, namely pass 2 of the research flight RF05. To estimate the parameters of our model, we need actual values for physical coefficients. We use $\Lambda = 2.5\cdot 10^6$ J/kg, $c_p = 1005$ J/(kg$\cdot$K), and $R_{\rm v} = 461.5$ J/(kg$\cdot$K) from \cite{rogers1989short}. We also set the density of pure liquid water to $\varrho_{\rm w} = 1000$~kg/m$^3$. Figure~S3 of \cite{beals2015holographic} shows that pass 2 of RF05 traverses a cloud at a constant altitude of 4000 m. We therefore assume the pressure of the International Standard Atmosphere \citep{cavcar2000international} at this altitude, $p \sim 62000$~Pa. The precise value of the pressure is not important, since conclusions that rely on the estimates in this Section are unchanged if we assume a pressure that is 10\% lower or higher. We extract $K_{\rm a}$ and $D_{\rm v}$ at our assumed pressure from Table 7.1 of \cite{rogers1989short}. Furthermore, we assume that the saturation pressure of water vapour is given by Equation~(2.12) in \cite{rogers1989short}.
We also use the relation between potential temperature $\Theta$, temperature $T$, and pressure $p$ of \cite{rogers1989short}, $\Theta = T(10^5\,{\rm Pa}/p)^{0.286}$. The droplet-size distribution of undiluted cloud air traversed during pass 2 of RF05 is shown in Figure~S7 of \cite{beals2015holographic}. The distribution of droplet radii is very close to Gaussian, and is normalised to the number density of droplets. We make a Gaussian fit with mean $4.42$ $\mu$m, and standard deviation $0.626$ $\mu$m. The corresponding volume radius is $r_0 = 4.51$ $\mu$m, which gives the non-dimensional~standard deviation $\sigma_0 = 0.1386$. We use this value of $\sigma_0$ to parameterise the initial droplet-size distribution in the statistical-model simulations shown in Figure~5. The normalisation of the Gaussian fit gives the droplet-number density $n_0 = 764$ cm$^{-3}$. We compute the liquid-water density of the undiluted cloud as $\varrho_{\ell \rm c}= (4 \pi r_0^3/3)n_0 \varrho_{\rm w} = 2.94\cdot10^{-4}$~kg/m$^3$. This value of $\varrho_{\ell \rm c}$ is consistent with the liquid-water density plot in Figure~S4 of \cite{beals2015holographic}. We estimate temperatures from the liquid-water potential temperature shown in Figure~S4 of \cite{beals2015holographic}. Here, the liquid-water potential temperature varies between $\Theta_{\ell \rm c} = 294.8$~K within the cloud and $\Theta_{\ell \rm e} = 295.8$~K outside the cloud. The liquid-water potential temperature is the same as the potential temperature $\Theta_{\rm e}$ outside the cloud, $\Theta_{\rm e} = 295.8$~K. We compute the temperature $T_{\rm e} \sim 257.6$~K outside the cloud, using $\Theta_{\rm e}$ and the pressure of the International Standard Atmosphere \citep{cavcar2000international}. We estimate the density of dry air as $\varrho_{\rm a} = p/(R_{\rm v} T_{\rm e}) \sim 0.52$ kg/m$^3$. This density gives the liquid-water mixing ratio $q_{\ell \rm c} = \varrho_{\ell \rm c}/\varrho_{\rm a} \sim 5.6\cdot10^{-4}$ within the cloud.
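The chain of estimates in this section can be reproduced numerically. The sketch below assumes the ISA pressure at 4000~m to be $p \approx 61660$~Pa (consistent with the quoted $p \sim 62000$~Pa), and uses the liquid-water potential temperature relation $\Theta_{\rm c} = \Theta_{\ell \rm c} + (\Lambda/c_p)q_{\ell \rm c}$ stated below:

```python
import math

Lam, c_p, R_v, rho_w = 2.5e6, 1005.0, 461.5, 1000.0
p = 61660.0   # Pa; assumed ISA pressure at 4000 m (quoted as p ~ 62000 Pa)

# Gaussian fit of the droplet-size distribution (mean and standard deviation)
mean_r, sd_r = 4.42e-6, 0.626e-6
r0 = (mean_r**3 + 3.0 * mean_r * sd_r**2) ** (1.0 / 3.0)  # volume radius
sigma0 = sd_r / r0                                        # non-dimensional sd

n0 = 764.0e6                                         # droplet-number density, m^-3
rho_lc = (4.0 * math.pi / 3.0) * r0**3 * n0 * rho_w  # liquid-water density

Theta_e = 295.8                        # potential temperature outside the cloud
T_e = Theta_e * (p / 1.0e5) ** 0.286   # temperature outside the cloud
rho_a = p / (R_v * T_e)                # density of dry air
q_lc = rho_lc / rho_a                  # liquid-water mixing ratio in the cloud
Theta_c = 294.8 + (Lam / c_p) * q_lc   # potential temperature within the cloud
```

Running this reproduces the quoted values $\sigma_0 \approx 0.1386$, $\varrho_{\ell \rm c} \approx 2.94\cdot10^{-4}$~kg/m$^3$, $T_{\rm e} \approx 257.6$~K, $\varrho_{\rm a} \approx 0.52$~kg/m$^3$, $q_{\ell \rm c} \approx 5.6\cdot10^{-4}$, and $\Theta_{\rm c} \approx 296.2$~K to within rounding.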
The potential temperature within the cloud can now be estimated as $\Theta_{\rm c} = \Theta_{\ell \rm c} + (\Lambda/c_p)q_{\ell \rm c} \sim 296.2$~K \citep{lamb2011physics}, and the corresponding temperature estimate is $T_{\rm c} \sim 257.9$~K. It is seen in Figure~S4 of \cite{beals2015holographic} that the supersaturation within the cloud is smaller than $\sim$2\% in magnitude. Assuming saturation within the cloud, the estimate $T_{\rm c} \sim 257.9$~K gives the water-vapour mixing ratio $q_{\rm vc} = e_{\rm vs}(T_{\rm c})/(\varrho_{\rm a} R_{\rm v}T_{\rm c})\sim 3.1\cdot10^{-3}$. We extract the (negative) supersaturation $s_{\rm e} \sim -0.08$ from Figure~S4 of \cite{beals2015holographic}, and estimate the water-vapour mixing ratio outside the cloud, $q_{\rm ve} = e_{\rm vs}(T_{\rm e})(s_{\rm e}+1)/(\varrho_{\rm a} R_{\rm v}T_{\rm e})\sim 2.7\cdot10^{-3}$. In our analysis of empirical data from \cite{beals2015holographic} we estimate $\tau_{\rm s} \sim 1$~s based on our estimates of $T_{\rm c}$, $q_{\rm vc}$, and $p$. Equation~\eqnref{a2other} gives $A_2 \sim 1000$~m$^3$/kg when linearising the supersaturation around the temperature $T_{\rm c}\sim 257.9$~K and the water-vapour mixing ratio $q_{\rm vc} \sim 3.1\cdot10^{-3}$. We compute $A_3\sim 2\cdot10^{-11}$~m$^2$/s at the pressure $p = 62000$~Pa and temperature $T_{\rm c}\sim 257.9$~K using Equation~\eqnref{athreedef}. Using the volume radius $r_0$ and droplet-number density $n_0$ estimated above, we find $\tau_{\rm s} \sim 1$~s using Equation~\eqnref{taus}. \newpage \begin{landscape} \part*{Table S1} \begin{table}[h] \centering \captionsetup{width=160mm} \footnotesize \caption{\label{tab:param} Simulation parameters.
Our statistical-model simulations are completely specified by the two Damk\"{o}hler numbers $ \normalfont \text{Da}_\text{d}$ and $\normalfont \text{Da}_\text{s}$, the constant $C_0$ that regulates the auto-correlation time of fluid elements, the domain size $L$, the volume fraction $\chi$ of cloudy air, the constants $\profilefactor$ and $\profileexponent$ that determine the shape of the initial supersaturation profile in Figures~2~to~4, and the supersaturation $s_\cloudairsubscript$ at the center of the initial cloud slab, together with the empirical constant $C_\phi = 2$. From these parameters, we compute the Damk\"{o}hler number ratio $\mathscr{R}$, the critical Damk\"{o}hler number ratio $\mathscr{R}_{\rm c}$, and the contribution $\chi_0$ to the initial volume average of the supersaturation from the shape of the initial supersaturation profile. } \begin{tabular}{l|lllllllll|lll} \hline \hline \multirow{2}{*}{Simulation} & \multicolumn{9}{c|}{ Independent parameters} & \multicolumn{3}{l}{ Dependent parameters} \\ & $\text{Da}_\text{d}$ & $\text{Da}_\text{s}$ & $C_0$ & $L$ & $\chi$ & $\sigma_0$ & $\profilefactor$ & $\profileexponent$ & $s_\cloudairsubscript$ & $\mathscr{R}$ & $\mathscr{R}_{\rm c}$ & $\chi_0$ \\ \hline Figures~2 and 3, dry & 2.44 & 0.968 & 5.22 & 2.96 & 0.428 & 0 & 4722 & 8 & 0.021 & 2.52 & 0.859 & 0.226 \\ Figure~2, moist & 1.09 & 1.433 & 5.22 & 2.96 & 0.428 & 0 & 4722 & 8 & 0.021 & 0.760 & 0.859 & 0.226 \\ Figure~3, very moist & 0.754 & 8.2 & 4.50 & 2.99 & 0.4 & 0 & 1410 & 6 & 0.1 & 0.092 & 0.683 & 0.154 \\ Figure~4 & 5E$^{\text{-3}}$-4E$^{\text{2}}$ & 1E$^{\text{-3}}$-9E$^{\text{3}}$ & 6.09 & 2.66 & 0.429 & 0 & 690 & 6 & 0.1 & 5E$^{\text{-2}}$-4E$^{\text{0}}$ & 0.913 & 0.195 \\ Figure~5({\bf a}) & 1E$^{\text{-2}}$-1E$^{\text{3}}$ & 6E$^{\text{-2}}$-6E$^{\text{3}}$ & 6.5 & 2.28 &0.2-0.8 & 0.1386 & & & 0 & 0.17 & 0.17-2.7 & 0 \\ Figure~5({\bf b}) & 1E$^{\text{-2}}$-1E$^{\text{3}}$ & 3E$^{\text{-1}}$-4E$^{\text{4}}$ & 6.5 & 2.28 & 0.369-0.374 & 
0.1386 & & & 0 & 2E$^{\text{-2}}$-3E$^{\text{-2}}$ & 0.38-0.41 & 0 \\ \hline \end{tabular} \end{table} \end{landscape} \newpage {\huge \bf Table S2} \begin{table}[h!] \small \centering \setlength{\tabcolsep}{6pt} \caption{\label{tab:dnskumar} Same as Table~2, but separately for each DNS from \cite{kumar2012extreme,Kumar:2013,kumar2014lagrangian,kumar2018scale} in Figure~4. Non-dimensional~parameters: Damk\"{o}hler number $\text{Da}_\text{d}$, Damk\"{o}hler-number ratio $\mathscr{R}$, critical ratio $\mathscr{R}_{\rm c}$, and volume fraction $\chi$ of cloudy air. Dimensional~ parameters: domain size $L$, mean dissipation rate $\varepsilon$, and droplet-number density $n_0$ of the initially cloudy air. } \begin{threeparttable} \begin{tabular}{lllllllll} \headrow & \multicolumn{5}{c}{\bf Non-dimensional~parameters} & \multicolumn{3}{c}{\bf Dimensional~parameters} \\[-1.0ex] \headrow & & & & & & $\,\,\,L$& $\,\,\,\,\,\,\,\varepsilon$ & $\,\,\,\,\,n_0$ \\[-0.9ex] \headrow \multirow{-1.5}{*}{\bf Reference}& \multirow{-1.5}{*}{$\text{Da}_\text{d}$} & \multirow{-1.5}{*}{$\text{Da}_\text{s}$} & \multirow{-1.5}{*}{$\mathscr{R}$}& \multirow{-1.5}{*}{$\mathscr{R}_{\rm c}$} & \multirow{-1.5}{*}{$\chi$} & [cm] & [cm$^2$/s$^3$] & [cm$^{-3}$] \\ \hiderowcolors \multicolumn{1}{l|}{Table 2 in \cite{kumar2012extreme}, Row 1}&\multicolumn{1}{|l}{0.0075}&0.0820&0.0916&0.6829&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{25.6}&33.8&164\\ \multicolumn{1}{l|}{Table 2 in \cite{kumar2012extreme}, Row 2}&\multicolumn{1}{|l}{0.0751}&0.8204&0.0916&0.6829&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{25.6}&33.8&164\\ \multicolumn{1}{l|}{Table 2 in \cite{kumar2012extreme}, Row 3}&\multicolumn{1}{|l}{0.7511}&8.2041&0.0916&0.6829&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{25.6}&33.8&164\\ \multicolumn{1}{l|}{Table 2 in \cite{Kumar:2013}, Row 3}&\multicolumn{1}{|l}{0.3065}&0.4185&0.7324&0.6829&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{25.6}&33.8&164\\ \multicolumn{1}{l|}{Table 2 in \cite{Kumar:2013}, 
Row 7}&\multicolumn{1}{|l}{0.1362}&0.6277&0.2170&0.6829&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{25.6}&33.8&164\\ \multicolumn{1}{l|}{Simulation S1 in \cite{kumar2014lagrangian}}&\multicolumn{1}{|l}{2.4320}&0.9660&2.5177&0.8377&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{51.2}&33.8&153\\ \multicolumn{1}{l|}{Simulation S2 in \cite{kumar2014lagrangian}}&\multicolumn{1}{|l}{1.0809}&1.4490&0.7460&0.8377&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{51.2}&33.8&153\\ \multicolumn{1}{l|}{Simulation S3 in \cite{kumar2014lagrangian}}&\multicolumn{1}{|l}{0.6080}&1.9319&0.3147&0.8377&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{51.2}&33.8&153\\ \multicolumn{1}{l|}{Run 1 in \cite{kumar2018scale}}&\multicolumn{1}{|l}{0.1191}&0.5185&0.2296&0.9150&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{12.8}&31.9&118\\ \multicolumn{1}{l|}{Run 2 in \cite{kumar2018scale}}&\multicolumn{1}{|l}{0.1914}&0.8333&0.2297&0.9155&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{25.6}&34.6&118\\ \multicolumn{1}{l|}{Run 3 in \cite{kumar2018scale}}&\multicolumn{1}{|l}{0.3227}&1.4042&0.2298&0.9163&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{51.2}&34.7&118\\ \multicolumn{1}{l|}{Run 4 in \cite{kumar2018scale}}&\multicolumn{1}{|l}{0.5842}&2.4509&0.2384&0.9503&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{102.4}&32.1&113\\ \multicolumn{1}{l|}{Run 5 in \cite{kumar2018scale}}&\multicolumn{1}{|l}{0.9122}&4.0429&0.2256&0.9000&\multicolumn{1}{l|}{ }&\multicolumn{1}{|l}{204.8}&33.6&120\\ \hline \end{tabular} \end{threeparttable} \end{table} \clearpage {\huge \bf Table S3} \begin{table}[h!] \small \centering \setlength{\tabcolsep}{10pt} \caption{\label{tab:dnsandr} Same as Table~2, but separately for each DNS from \cite{Andrejczuk:2006} in Figure~4. Non-dimensional~parameters: Damk\"{o}hler number $\text{Da}_\text{d}$, Damk\"{o}hler-number ratio $\mathscr{R}$, critical ratio $\mathscr{R}_{\rm c}$, and volume fraction $\chi$ of cloudy air. 
Dimensional~parameters: domain size $L$, mean dissipation rate $\varepsilon$, and droplet-number density $n_0$ of the initially cloudy air. } \begin{threeparttable} \def\arraystretch{0.89} \begin{tabular}{lllllllll} \headrow & \multicolumn{5}{c}{\bf Non-dimensional~parameters} & \multicolumn{3}{c}{\bf Dimensional~parameters} \\[0ex] \headrow & & & & & & $\,\,\,L$& $\,\,\,\,\,\,\,\varepsilon$ & $\,\,\,\,\,n_0$ \\[0ex] \headrow \multirow{-1.5}{*}{\bf \#}& \multirow{-1.5}{*}{$\text{Da}_\text{d}$} & \multirow{-1.5}{*}{$\text{Da}_\text{s}$} & \multirow{-1.5}{*}{$\mathscr{R}$}& \multirow{-1.5}{*}{$\mathscr{R}_{\rm c}$} & \multirow{-1.5}{*}{$\chi$} & [cm] & [cm$^2$/s$^3$] & [cm$^{-3}$] \\ \hiderowcolors \multicolumn{1}{l|}{S2a$_1$}&\multicolumn{1}{|l}{13.7874}&78.1243&0.1765&0.0996&\multicolumn{1}{l|}{0.13}&\multicolumn{1}{|l}{64}&1.5&1166\\ \multicolumn{1}{l|}{S2a$_2$}&\multicolumn{1}{|l}{6.1722}&34.9739&0.1765&0.3284&\multicolumn{1}{l|}{0.33}&\multicolumn{1}{|l}{64}&3&1166\\ \multicolumn{1}{l|}{S2a$_3$}&\multicolumn{1}{|l}{12.1003}&68.5642&0.1765&0.5029&\multicolumn{1}{l|}{0.43}&\multicolumn{1}{|l}{64}&1.7&1166\\ \multicolumn{1}{l|}{S2a$_4$}&\multicolumn{1}{|l}{9.6240}&54.5331&0.1765&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&1.7&1166\\ \multicolumn{1}{l|}{S2a$_5$}&\multicolumn{1}{|l}{10.1375}&57.4425&0.1765&1.3535&\multicolumn{1}{l|}{0.67}&\multicolumn{1}{|l}{64}&1.5&1166\\ \multicolumn{1}{l|}{S2b$_1$}&\multicolumn{1}{|l}{4.3101}&24.4223&0.1765&0.0996&\multicolumn{1}{l|}{0.13}&\multicolumn{1}{|l}{64}&8.1&1166\\ \multicolumn{1}{l|}{S2b$_2$}&\multicolumn{1}{|l}{4.3101}&24.4223&0.1765&0.3284&\multicolumn{1}{l|}{0.33}&\multicolumn{1}{|l}{64}&8.1&1166\\ \multicolumn{1}{l|}{S2b$_3$}&\multicolumn{1}{|l}{4.3101}&24.4223&0.1765&0.5029&\multicolumn{1}{l|}{0.43}&\multicolumn{1}{|l}{64}&8.1&1166\\ \multicolumn{1}{l|}{S2b$_4$}&\multicolumn{1}{|l}{4.3101}&24.4223&0.1765&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&8.1&1166\\ 
\multicolumn{1}{l|}{S2b$_5$}&\multicolumn{1}{|l}{4.3101}&24.4223&0.1765&1.3535&\multicolumn{1}{l|}{0.67}&\multicolumn{1}{|l}{64}&8.1&1166\\ \multicolumn{1}{l|}{S2c$_1$}&\multicolumn{1}{|l}{0.8232}&4.6646&0.1765&0.0996&\multicolumn{1}{l|}{0.13}&\multicolumn{1}{|l}{64}&935.3&1166\\ \multicolumn{1}{l|}{S2c$_2$}&\multicolumn{1}{|l}{0.8232}&4.6646&0.1765&0.3284&\multicolumn{1}{l|}{0.33}&\multicolumn{1}{|l}{64}&935.3&1166\\ \multicolumn{1}{l|}{S2c$_3$}&\multicolumn{1}{|l}{0.8232}&4.6646&0.1765&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&935.3&1166\\ \multicolumn{1}{l|}{S3a$_2$}&\multicolumn{1}{|l}{8.2897}&7.3394&1.1295&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&1.3&182\\ \multicolumn{1}{l|}{S3a$_3$}&\multicolumn{1}{|l}{11.6711}&20.6664&0.5647&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&0.9&364\\ \multicolumn{1}{l|}{S3a$_4$}&\multicolumn{1}{|l}{9.7721}&25.9556&0.3765&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&1.4&547\\ \multicolumn{1}{l|}{S3a$_5$}&\multicolumn{1}{|l}{8.4295}&47.7646&0.1765&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&2&1166\\ \multicolumn{1}{l|}{S4a$_1$}&\multicolumn{1}{|l}{18.0709}&47.9980&0.3765&0.0996&\multicolumn{1}{l|}{0.13}&\multicolumn{1}{|l}{64}&0.5&547\\ \multicolumn{1}{l|}{S4a$_2$}&\multicolumn{1}{|l}{7.3957}&19.6436&0.3765&0.3284&\multicolumn{1}{l|}{0.33}&\multicolumn{1}{|l}{64}&1.6&547\\ \multicolumn{1}{l|}{S4a$_3$}&\multicolumn{1}{|l}{8.4955}&22.5650&0.3765&0.5029&\multicolumn{1}{l|}{0.43}&\multicolumn{1}{|l}{64}&1.5&547\\ \multicolumn{1}{l|}{S4a$_4$}&\multicolumn{1}{|l}{9.7754}&25.9643&0.3765&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&1.3&547\\ \multicolumn{1}{l|}{S4a$_5$}&\multicolumn{1}{|l}{12.6777}&33.6732&0.3765&1.3535&\multicolumn{1}{l|}{0.67}&\multicolumn{1}{|l}{64}&1.3&547\\ \multicolumn{1}{l|}{S4b$_2$}&\multicolumn{1}{|l}{14.1203}&12.5016&1.1295&0.3284&\multicolumn{1}{l|}{0.33}&\multicolumn{1}{|l}{64}&0.4&182\\ 
\multicolumn{1}{l|}{S4b$_3$}&\multicolumn{1}{|l}{13.1643}&11.6552&1.1295&0.5029&\multicolumn{1}{l|}{0.43}&\multicolumn{1}{|l}{64}&0.6&182\\ \multicolumn{1}{l|}{S4b$_4$}&\multicolumn{1}{|l}{8.1755}&7.2383&1.1295&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&1.1&182\\ \multicolumn{1}{l|}{S4b$_5$}&\multicolumn{1}{|l}{9.8085}&8.6841&1.1295&1.3535&\multicolumn{1}{l|}{0.67}&\multicolumn{1}{|l}{64}&1.2&182\\ \multicolumn{1}{l|}{S4b$_6$}&\multicolumn{1}{|l}{13.9764}&12.3742&1.1295&4.4615&\multicolumn{1}{l|}{0.87}&\multicolumn{1}{|l}{64}&0.9&182\\ \multicolumn{1}{l|}{S4c$_5$}&\multicolumn{1}{|l}{9.6947}&3.4333&2.8237&1.3535&\multicolumn{1}{l|}{0.67}&\multicolumn{1}{|l}{64}&1&73\\ \multicolumn{1}{l|}{S4c$_6$}&\multicolumn{1}{|l}{16.1282}&5.7117&2.8237&4.4615&\multicolumn{1}{l|}{0.87}&\multicolumn{1}{|l}{64}&0.6&73\\ \multicolumn{1}{l|}{S5a$_1$}&\multicolumn{1}{|l}{12.0940}&34.2643&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&3.7&1166\\ \multicolumn{1}{l|}{S5a$_2$}&\multicolumn{1}{|l}{11.6186}&41.8950&0.2773&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&3.1&1166\\ \multicolumn{1}{l|}{S5a$_3$}&\multicolumn{1}{|l}{9.3465}&52.9606&0.1765&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&1.7&1166\\ \multicolumn{1}{l|}{S5a$_4$}&\multicolumn{1}{|l}{7.7943}&61.8316&0.1261&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&1.1&1166\\ \multicolumn{1}{l|}{S6a$_1$}&\multicolumn{1}{|l}{13.3405}&37.7959&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&2.7&1166\\ \multicolumn{1}{l|}{S6a$_2$}&\multicolumn{1}{|l}{14.7497}&41.7886&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&3.4&1166\\ \multicolumn{1}{l|}{S6a$_3$}&\multicolumn{1}{|l}{12.8279}&36.3436&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&5.9&1166\\ \multicolumn{1}{l|}{S6a$_4$}&\multicolumn{1}{|l}{10.1435}&28.7383&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&10.2&1166\\ 
\multicolumn{1}{l|}{S6b$_1$}&\multicolumn{1}{|l}{1.6879}&4.7821&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&3&1166\\ \multicolumn{1}{l|}{S6b$_2$}&\multicolumn{1}{|l}{3.5983}&10.1947&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&3.1&1166\\ \multicolumn{1}{l|}{S6b$_3$}&\multicolumn{1}{|l}{13.1067}&37.1334&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&3.8&1166\\ \multicolumn{1}{l|}{S6b$_4$}&\multicolumn{1}{|l}{56.6502}&160.4998&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&3.4&1166\\ \multicolumn{1}{l|}{S6b$_5$}&\multicolumn{1}{|l}{97.6054}&276.5329&0.3530&0.6667&\multicolumn{1}{l|}{0.5}&\multicolumn{1}{|l}{64}&4.7&1166\\ \hline \end{tabular} \end{threeparttable} \end{table} \clearpage \part*{SI References} \begingroup \renewcommand{\section}[2]{}
\section{Introduction} Over the last decades, the so-called {\it cosmological constant problem} has been one of the major challenges in theoretical physics. The issue refers to the absence of gravitational effects, particularly at the cosmological level, of the vacuum energy density predicted by quantum field theories or, better said, the impossibility of properly fine-tuning its counterterms, which is known as radiative instability (for a review see Refs.~\cite{Weinberg:1988cp} and \cite{Padilla:2015aaa}). On the other hand, after the discovery in 1998 of deviations in the luminosity distances of Type Ia supernovae, which was interpreted as a consequence of the accelerating expansion of the universe (and later confirmed by independent observations), the best and most widely accepted model that explains such behaviour relies on the presence of a cosmological constant in the gravitational field equations, which in principle should be connected somehow to the vacuum energy density. However, the cosmological constant required for the acceleration of the expansion (of the order of the Hubble parameter today) is around 120 orders of magnitude smaller than the one predicted by quantum field theories. Hence the problem arises of how to suppress the huge value of the vacuum fluctuations. In this sense, there have been plenty of proposals, including a possible symmetry that protects the cosmological constant in the same way that chiral symmetry protects the electron mass, as well as supersymmetric attempts. In addition, an alternative route, which includes dark energy models and modified gravities, tries to suppress such a large value by means of additional fields or modifications of General Relativity (GR). Plenty of dark energy models have been proposed that may play that role, see \cite{dark_energy} and \cite{Nojiri:2010wj}.
However, rather than solving the problem, such proposals always require a precise fine-tuning as well (Weinberg's no-go theorem).\\ An alternative widely studied in the literature is so-called unimodular gravity (see Refs.~\cite{Weinberg:1988cp,Padilla:2015aaa,Finkelstein:2000pg,Alvarez:2006uu,Alvarez:2005iy,Alvarez:2007nn,Alvarez:2008zw,Alvarez:2009ga,Buchmuller:1988wx,Henneaux:1989zc,Padilla:2014yea}). The theory fixes the determinant of the metric, such that the field equations are given by the trace-free part of GR's field equations. From the classical point of view, fixing the determinant of the metric provides a cosmological constant that arises naturally as an integration constant after applying the corresponding geometrical identities, which at the cosmological level may be a way of understanding the problem of dark energy \cite{Finkelstein:2000pg}. The unimodular constraint can be implemented in several ways, all of them leading to the same classical theory, as shown in the literature \cite{Finkelstein:2000pg,Alvarez:2006uu,Alvarez:2005iy,Alvarez:2007nn,Alvarez:2008zw,Alvarez:2009ga,Buchmuller:1988wx,Henneaux:1989zc,Padilla:2014yea}. However, although the theory is equivalent to GR at the classical level, the equivalence is not clear at the quantum level, where a great effort has been made in order to reach a better understanding of the features of the theory \cite{Alvarez:2005iy,Padilla:2014yea}. At the quantum level, this effective cosmological constant turns out to be free of radiative instability, which is one of the most interesting features of the theory, since it can then suppress the large contribution of the vacuum energy density \cite{Alvarez:2007nn}.
The absence of radiative instability has been shown in the literature using different approaches, for instance through the existence of a shift symmetry in the classical field equations that removes the contributions from the quantum vacuum, but also by evaluating the renormalization group equation for the cosmological constant. In addition, unimodular gravity may be distinguished from GR by some observables, as it may lead to a different concept of mass \cite{Alvarez:2009ga}. Hence, unimodular gravity may provide not only a way of better understanding the cosmological constant problem but also a means of shedding some light on the dark energy issue. \\ In this sense, an extension of unimodular gravity has recently been proposed, where actions more general than the Hilbert-Einstein one are considered \cite{Eichhorn:2015bna,Nojiri:2015sfd}. Note that modified gravities, such as $f(R)$ gravity, have drawn a lot of attention over the last years, as they can reproduce the cosmological history, providing alternatives to dark energy and the inflaton. In addition, some of them are able to satisfy the latest observational constraints with great accuracy, such as Starobinsky inflation \cite{staro}, or the so-called Hu-Sawicki model for late-time acceleration \cite{Hu:2007nk}. Hence, the analysis of such extensions within the unimodular-like framework may provide interesting features. \\ With this aim, this paper is devoted to the analysis of some generalizations of unimodular gravity at the classical level. Firstly, we carefully reconstruct such extensions from variational principles by using a Lagrange multiplier that imposes the unimodular constraint, leading to the trace-free part of the field equations. We show that, as in the case of unimodular gravity, any extension leads to the same result, i.e.~a cosmological constant arises naturally in the field equations, recovering full diffeomorphisms.
We also analyze how conformal transformations affect the gauge choice imposed initially, and the effects of unimodular gravity in the Einstein frame. Finally, some cosmological solutions are obtained for $f(R)$ and Gauss-Bonnet gravities, while Starobinsky inflation is also analyzed, where we find a constraint on the emerging constant in order to preserve the Starobinsky predictions.\\ The paper is organized as follows: Section \ref{Section_General} gives a brief review of unimodular gravity. In Section \ref{General_unimodular_grav}, we introduce $f(R)$ and Gauss-Bonnet unimodular gravity. In Section \ref{Einstein_sect}, the conformal transformation is analyzed and the corresponding unimodular version is obtained in the Einstein frame. Then, Section \ref{cosmo_sect} is devoted to the analysis of cosmological solutions. Finally, Section \ref{Conclusions_sect} gathers the conclusions.\\ \section{Unimodular gravity} \label{Section_General} Unimodular gravity is constructed in such a way that the determinant of the spacetime metric is not dynamical but is restricted to be \begin{equation} \sqrt{-g}=s_0\ , \label{1.10} \end{equation} which fixes the determinant of the metric to be a constant $s_0$. As stated in \cite{Weinberg:1988cp}, {\it just because we use a generally covariant formalism does not mean that we are committed to treating all components of the metric as dynamical fields.} Then, variations of the action with respect to the metric are restricted to those that keep the determinant fixed, \begin{equation} g_{\mu\nu}\delta g^{\mu\nu}=0\ . \label{1bis} \end{equation} The variation of the metric can then be written in terms of the unconstrained variation as \begin{equation} \delta g^{\mu\nu}=\delta_{u} g^{\mu\nu}-\frac{1}{4}g^{\mu\nu}g_{\lambda\gamma}\delta_{u} g^{\lambda\gamma}\ .
\label{2bis} \end{equation} The gravitational field equations are obtained by varying the gravitational action $S_G$, which can also be expressed in terms of the unconstrained variation, leading to \begin{equation} \frac{\delta S_G}{\delta g^{\mu\nu}}=\frac{\delta S_G}{\delta_{u} g^{\mu\nu}}-\frac{1}{4}g_{\mu\nu}g^{\lambda\gamma}\frac{\delta S_G}{\delta_{u} g^{\lambda\gamma}}\ . \label{3bis} \end{equation} These are precisely the traceless part of the gravitational field equations, which for the Hilbert-Einstein action leads to the traceless part of Einstein's field equations: \begin{equation} R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R=\kappa^2\left(T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T\right)\ , \label{1.1} \end{equation} where $R_{\mu\nu}$ is the usual Ricci tensor and $R$ its trace, while $T_{\mu\nu}=\frac{2}{\sqrt{-g}}\frac{\partial S_m}{\partial g_{\mu\nu}}$ is the matter energy-momentum tensor, and $\kappa^2=8\pi G$. Contrary to General Relativity, the left-hand side of the field equations (\ref{1.1}) is not identically divergence-free. Taking the divergence of (\ref{1.1}), \begin{equation} \nabla_{\mu}\left[R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R-\kappa^2\left(T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T\right)\right]=0\ , \label{1.2} \end{equation} and using the Bianchi identities, \begin{equation} \nabla_{\mu}\left(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\right)=0\ , \label{1.3} \end{equation} together with energy-momentum conservation, \begin{equation} \nabla_{\mu}T_{\mu\nu}=0\ , \label{1.4} \end{equation} the divergence of the field equations (\ref{1.2}) yields \begin{equation} \nabla_{\mu}\left(R+\kappa^2 T\right)=0\ , \label{1.5} \end{equation} which is the so-called integrability condition; after integration it leads to \begin{equation} R+\kappa^2T=4\lambda_0=\text{constant}\ , \label{1.6} \end{equation} where $\lambda_0$ is an integration constant.
Hence, by inserting (\ref{1.6}) into the field equations (\ref{1.1}), we get \begin{equation} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+g_{\mu\nu}\lambda_0=\kappa^2T_{\mu\nu}\ , \label{1.7} \end{equation} and the usual Einstein field equations are recovered, with $\lambda_0$ playing the role of a cosmological constant. This is the great success of unimodular gravity, since a cosmological constant emerges naturally as an integration constant when starting from the trace-free part of the Einstein equations. Such a constant may compensate the large value of the vacuum energy density. In addition, since the integrability condition (\ref{1.5}) recovers the usual General Relativity equations, any prediction of General Relativity is also a prediction of unimodular gravity, which avoids any possible discrepancy with well-tested experiments. \\ Alternatively, unimodular gravity can be implemented through a variational principle with unrestricted variations of the metric by assuming transverse diffeomorphisms (TDiff) instead of full diffeomorphisms \cite{Alvarez:2006uu,Alvarez:2005iy,Alvarez:2007nn}, which gives rise to the appearance of a scalar field that represents the determinant of the metric. Such an extra degree of freedom can be removed by an additional Weyl symmetry (WTDiff) \cite{Alvarez:2006uu,Alvarez:2005iy,Alvarez:2008zw}. Moreover, unimodular gravity can also be obtained by using a Lagrange multiplier in the action as follows \cite{Buchmuller:1988wx}, \begin{equation} S=\frac{1}{2\kappa^2}\int dx^4 \left[\sqrt{-g}R-2\lambda(\sqrt{-g}-s_0)\right]+S_m\ , \label{1.8} \end{equation} where $s_0$ is a constant and $\lambda$ is the Lagrange multiplier, which in principle is dynamical. Note that the last term in (\ref{1.8}) breaks full diffeomorphism invariance, since it fixes the determinant of the metric, restricting the group of symmetries.
Then, by varying the action with respect to the metric, the field equations yield \begin{equation} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+g_{\mu\nu}\lambda=\kappa^2T_{\mu\nu}\ , \label{1.9} \end{equation} while the variation with respect to $\lambda$ leads to the unimodular restriction (\ref{1.10}). Taking the trace of the field equations (\ref{1.9}) gives \begin{equation} R+\kappa^2T=4\lambda(x)\ , \label{1.11} \end{equation} which looks like (\ref{1.6}) except that in principle $\lambda=\lambda(x)$ is not a constant. Nevertheless, taking the divergence of equations (\ref{1.9}) together with the energy conservation $\nabla_{\mu}T_{\mu\nu}=0$ yields \begin{equation} \nabla_{\mu}\lambda=0\ \rightarrow \lambda=\lambda_0\ . \label{1.12} \end{equation} Then, $\lambda$ is indeed a constant and the previous result (\ref{1.7}) is obtained, in this case by means of the action (\ref{1.8}). Moreover, equivalently to (\ref{1.8}), one can start from the Henneaux-Teitelboim action, leading to the same result \cite{Henneaux:1989zc}. Note that while all these implementations of unimodular gravity are classically equivalent, they are not at the quantum level (see \cite{Alvarez:2005iy}). However, since we focus only on classical aspects throughout the paper, we assume the action (\ref{1.8}) as the starting point for convenience, as shown below. \section{Generalizations of unimodular gravity} \label{General_unimodular_grav} Over the last years, some modifications of the Hilbert-Einstein action have been considered, particularly as infrared corrections to GR, in order to provide a natural explanation of the late-time acceleration of the expansion \cite{Nojiri:2010wj}. Moreover, such modifications have been widely applied to inflation, where nowadays data seem to favor some such models.
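The trace-free projection introduced in Section \ref{Section_General} can be illustrated with a short numerical check (a sketch added here for illustration, not part of the original derivation): subtracting a quarter of the metric trace from any symmetric rank-2 object, as in (\ref{3bis}) and (\ref{1.1}), yields a traceless object. The Minkowski metric and the random components below are arbitrary choices.

```python
# Illustrative check: the projection A_mn -> A_mn - (1/4) g_mn (g^ab A_ab)
# removes the metric trace of any symmetric rank-2 tensor in 4 dimensions,
# which is the structure of the trace-free field equations above.
import random

# Minkowski metric diag(-1, 1, 1, 1); its inverse has the same components.
eta = [[-1 if i == j == 0 else (1 if i == j else 0) for j in range(4)] for i in range(4)]

A = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(i, 4):
        A[i][j] = A[j][i] = random.uniform(-1, 1)   # random symmetric "tensor"

trace = sum(eta[i][j] * A[i][j] for i in range(4) for j in range(4))  # g^{ab} A_ab
A_tf = [[A[i][j] - 0.25 * eta[i][j] * trace for j in range(4)] for i in range(4)]

trace_tf = sum(eta[i][j] * A_tf[i][j] for i in range(4) for j in range(4))
print(abs(trace_tf) < 1e-12)   # True: the projected tensor is traceless
```

The cancellation follows from $g^{\mu\nu}g_{\mu\nu}=4$, which is why the factor $1/4$ appears in the trace-free equations.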
Within modified gravities, so-called $f(R)$ gravity has drawn a lot of attention; it is defined by the gravitational action \begin{equation} S=\frac{1}{2\kappa^2}\int dx^4\sqrt{-g}\ f(R)+S_m\ , \label{2.1} \end{equation} whose field equations are obtained by varying the action with respect to the metric, leading to \begin{equation} R_{\mu\nu}f_R-\frac{1}{2}g_{\mu\nu}f+\left(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)f_R=\kappa^2T_{\mu\nu}\ , \label{2.2} \end{equation} where $f_R=\frac{\partial f}{\partial R}$. The generalization of unimodular gravity now becomes clear. As pointed out in \cite{Nojiri:2015sfd}, the action (\ref{2.1}) has a unimodular $f(R)$ version, which is constructed by fixing the determinant to be a constant, \begin{equation} S=\frac{1}{2\kappa^2}\int dx^4\left[\sqrt{-g}\ f(R)-2\lambda(\sqrt{-g}-s_0)\right]+S_m\ . \label{2.3} \end{equation} The field equations are then given by \begin{equation} R_{\mu\nu}f_R-\frac{1}{2}g_{\mu\nu}f+\left(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)f_R+g_{\mu\nu}\lambda=\kappa^2T_{\mu\nu}\ , \label{2.4} \end{equation} while the variation of the action with respect to $\lambda$ leads to $\sqrt{-g}=s_0$. As in the previous section, taking the divergence of the field equations (\ref{2.4}) yields \begin{equation} \nabla_{\mu}\lambda=0\ \rightarrow \lambda=\lambda_0\ , \label{2.6} \end{equation} where we have used the identities $\nabla_{\mu}\left(R^{\mu\nu}-\frac{1}{2}Rg^{\mu\nu}\right)=0$ and $(\nabla_{\nu}\Box-\Box\nabla_{\nu})f_R=R_{\mu\nu}\nabla^{\mu}f_R$. Then, by using the trace of the field equations (\ref{2.4}), the following condition is obtained: \begin{equation} -Rf_R+2f-3\Box f_R+\kappa^2T=4\lambda_0\ , \label{2.7} \end{equation} which is the generalization of the integrability condition (\ref{1.6}).
Hence, the usual $f(R)$ equations are recovered with an additional cosmological constant: \begin{equation} R_{\mu\nu}f_R-\frac{1}{2}g_{\mu\nu}f+\left(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)f_R+g_{\mu\nu}\lambda_0=\kappa^2T_{\mu\nu}\ . \label{2.8} \end{equation} Equivalently, one may obtain the same result by starting from the trace-free part of (\ref{2.2}) as the field equations: \begin{widetext} \begin{equation} R_{\mu\nu}f_R-\frac{1}{2}g_{\mu\nu}f+\left(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)f_R-\frac{1}{4}\left(Rf_R-2f+3\Box f_R\right)g_{\mu\nu}=\kappa^2\left(T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T\right)\ . \label{2.9} \end{equation} \end{widetext} By using $\nabla_{\mu}T^{\mu\nu}=0$ and the Bianchi identities, the divergence of (\ref{2.9}) yields \begin{equation} \nabla_{\mu}\left(Rf_R-2f+3\Box f_R-\kappa^2T\right)=0\ , \label{2.10} \end{equation} which after integration is equivalent to (\ref{2.7}), and the field equations (\ref{2.8}) are recovered. \\ Hence, it is straightforward to construct other generalizations of unimodular gravity by following the procedure described above. For instance, we may consider so-called modified Gauss-Bonnet gravity, \begin{equation} S=\frac{1}{2\kappa^2}\int dx^4\left[\sqrt{-g}\left(R+f(G)\right)-2\lambda(\sqrt{-g}-s_0)\right]+S_m\ , \label{2.11} \end{equation} where $G=R_{\mu\nu\lambda\sigma}R^{\mu\nu\lambda\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2$ is the Gauss-Bonnet topological invariant.
The field equations are obtained by varying the action (\ref{2.11}) with respect to the metric \cite{Elizalde:2010jx}, \begin{widetext} \begin{eqnarray} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R-\frac{1}{2}g_{\mu\nu}f(G)+2f_{G}RR_{\mu\nu}-4f_{G}R_{\mu\rho}R_{\nu}^{\;\;\rho}+2f_{G}R_{\mu}^{\;\;\rho\sigma\tau}R_{\nu\rho\sigma\tau}\nonumber\\ +4f_{G}R_{\mu\rho\sigma\nu}R^{\rho\sigma}-2R\nabla_{\mu}\nabla_{\nu}f_{G} +2g_{\mu\nu}R\Box f_{G}+4R_{\nu\rho}\nabla^{\rho}\nabla_{\mu}f_{G}+4R_{\mu\rho}\nabla^{\rho}\nabla_{\nu}f_{G} \nonumber\\ -4R_{\mu\nu}\Box f_{G}-4g_{\mu\nu}R^{\rho\sigma}\nabla_{\rho}\nabla_{\sigma}f_{G}+4R_{\mu\rho\nu\sigma}\nabla^{\rho}\nabla^{\sigma}f_{G}+\lambda g_{\mu\nu}=\kappa^2T_{\mu\nu}\ . \label{2.12} \end{eqnarray} \end{widetext} As above, by taking the divergence of the field equations, the condition $\nabla_{\mu}\lambda=0$ arises again, which leads to the integrability condition for this case:\\ \begin{widetext} \begin{eqnarray} R+2f-2f_GR^2+4f_GR_{\mu\nu}R^{\mu\nu}-2f_GR^{\mu\nu\lambda\sigma}R_{\mu\nu\lambda\sigma}-4f_GR^{\mu\nu\lambda}_{\;\;\;\;\;\;\mu}R_{\nu\lambda}\\ \nonumber -2R\Box f_G+8R^{\mu\nu}\nabla_{\mu}\nabla_{\nu}f_G+4R^{\mu\nu\lambda}_{\;\;\;\;\;\;\mu}\nabla_{\nu}\nabla_{\lambda}f_G+\kappa^2T=4\lambda_0\ . \label{2.13} \end{eqnarray} \end{widetext} Then, the usual modified Gauss-Bonnet equations are recovered with an additional cosmological constant. Note that the same result is obtained when starting from the trace-free part of the field equations for Gauss-Bonnet gravity, as was shown above for the case of $f(R)$ gravity. Hence, following any of the above procedures, unimodular gravity can easily be extended to other, more complex actions. The result basically amounts to adding a cosmological constant to the field equations, just as in the case of Hilbert-Einstein unimodular gravity.
\\ Alternatively to the Lagrange multiplier, one may start from restricted variations of the gravitational action (\ref{3bis}), leading to the traceless part of the field equations of the corresponding $f(R)$ or $f(R,G)$ action, as above. Other implementations of unimodular gravity can also be applied to these cases, equivalently at the classical level. However, by using a Lagrange multiplier instead of other implementations of the unimodular condition, calculations are simplified when dealing with theories with higher-order derivatives. In the next section, we analyze unimodular scalar-tensor theories (equivalent to $f(R)$ gravities) and their transformation to the so-called Einstein frame under a conformal transformation, which also becomes simpler when the unimodular constraint is enforced by a Lagrange multiplier rather than by other, classically equivalent, implementations. \section{Conformal frames} \label{Einstein_sect} As is well known, $f(R)$ gravities can be expressed in terms of a scalar field with a null kinetic term through the action \begin{equation} S=\frac{1}{2\kappa^2}\int dx^4\sqrt{-g}\left(\phi R-V(\phi)\right)+S_m\ . \label{3.1} \end{equation} Varying the action with respect to the scalar field, the corresponding equivalence is found: \begin{equation} V'(\phi)=R\ \rightarrow\ \phi=\phi(R)\ ,\nonumber\\ f(R)=\phi(R)R-V(\phi(R))\ , \label{3.2} \end{equation} which yields the relations \begin{equation} \phi=f_R\ ,\quad V=Rf_R-f\ . \label{3.3} \end{equation} As in the previous section, the reconstruction of the unimodular theory for the action (\ref{3.1}) is given by fixing the determinant of the metric, \begin{equation} S=\frac{1}{2\kappa^2}\int dx^4\left[\sqrt{-g}\left(\phi R-V(\phi)\right)-2\lambda(\sqrt{-g}-s_0)\right]+S_m\ . \label{3.4} \end{equation} The field equations are given by \begin{equation} \phi R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\left(\phi
R-V(\phi)\right)+\left(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)\phi+g_{\mu\nu}\lambda=\kappa^2T_{\mu\nu}^{(m)}\ . \label{3.5} \end{equation} Taking the divergence of the field equations, the condition $\nabla_{\mu}\lambda=0$ follows and the integrability condition, analogous to (\ref{2.7}), is obtained, which now is given by \begin{equation} \phi R-2V-3\Box\phi+\kappa^2T^{(m)}=4\lambda_0\ . \label{3.6} \end{equation} Consequently, the field equations (\ref{3.5}) become the usual equations for the scalar-tensor theory (\ref{3.1}) with an additional cosmological constant. The question now arises: does the action (\ref{3.4}) have a counterpart in the Einstein frame? To answer this, let us transform the action (\ref{3.4}) into the Einstein frame, which basically means recovering the usual Hilbert-Einstein action by applying the following conformal transformation: \begin{equation} \tilde{g}_{\mu\nu}=\Omega^2g_{\mu\nu}\ \quad \text{where} \quad \Omega^2=\phi\ . \label{3.7} \end{equation} Here the tilde refers to the Einstein frame. Then, the Ricci scalar transforms as \begin{equation} \tilde{R}=\frac{1}{\Omega^2}\left(R-\frac{6\Box\Omega}{\Omega}\right)\ , \label{3.8} \end{equation} and the action (\ref{3.4}) becomes \begin{widetext} \begin{equation} \tilde{S}=\int dx^4\left[\sqrt{-\tilde{g}}\left(\frac{\tilde{R}}{2\kappa^2}-\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi-\tilde{V}(\varphi)\right)-2\tilde{\lambda}\left(\sqrt{-\tilde{g}}e^{-2\sqrt{2/3}\kappa\varphi}-s_0\right)\right]\ , \label{3.9} \end{equation} \end{widetext} where we have redefined the scalar field, \begin{equation} \phi=e^{\sqrt{2/3}\kappa\varphi}\ ,\quad \tilde{V}(\varphi)=\frac{e^{-2\sqrt{2/3}\kappa\varphi}}{2\kappa^2}V(\varphi)\ , \quad \tilde{\lambda}=\frac{\lambda}{2\kappa^2}\ .
\label{3.10} \end{equation} The field equations are obtained by varying the action with respect to the metric, \begin{equation} \tilde{R}_{\mu\nu}-\frac{1}{2}\tilde{g}_{\mu\nu}\tilde{R}=\kappa^2\left(T_{\mu\nu}^{(\varphi)}+T_{\mu\nu}^{(m)}\right)\ , \label{3.11} \end{equation} where we have defined the energy-momentum tensor of the scalar field as \begin{equation} T_{\mu\nu}^{(\varphi)}=\partial_{\mu}\varphi\partial_{\nu}\varphi-g_{\mu\nu}\left(\frac{1}{2}\partial_{\sigma}\varphi\partial^{\sigma}\varphi+\tilde{V}\right)-2\tilde{\lambda} g_{\mu\nu}e^{-2\sqrt{2/3}\kappa\varphi}\ . \label{3.11b} \end{equation} The scalar field equation is obtained by varying the action (\ref{3.9}) with respect to the scalar field, \begin{equation} \Box\varphi-\tilde{V}'(\varphi)+4\tilde{\lambda}\sqrt{\frac{2}{3}}\kappa e^{-2\sqrt{2/3}\kappa\varphi}=0\ , \label{3.12} \end{equation} while the variation of the action with respect to the Lagrange multiplier leads to the constraint \begin{equation} \sqrt{-\tilde{g}}=s_0\, e^{2\sqrt{2/3}\kappa\varphi}\ . \label{3.13} \end{equation} Hence, contrary to the case of the Jordan frame, the determinant of the metric $\tilde{g}_{\mu\nu}$ is not constant. Taking the divergence of the field equations (\ref{3.11}), and applying the identity $\nabla_{\mu}\left(\tilde{R}^{\mu\nu}-\frac{1}{2}\tilde{g}^{\mu\nu}\tilde{R}\right)=0$ and the matter energy-momentum conservation $\nabla_{\mu}T^{\mu\nu(m)}=0$, yields \begin{eqnarray} \nabla_{\mu}T^{\mu\nu(\varphi)}&=&\left(\Box\varphi-\tilde{V}'+4\tilde{\lambda}\sqrt{\frac{2}{3}}\kappa e^{-2\sqrt{2/3}\kappa\varphi}\right)\partial^{\nu}\varphi\nonumber\\ &&-2e^{-2\sqrt{2/3}\kappa\varphi}\partial^{\nu}\tilde{\lambda}=0\ . \label{3.14} \end{eqnarray} The first term in (\ref{3.14}) is the scalar field equation (\ref{3.12}), which vanishes, leading to \begin{equation} \partial_{\nu}\tilde{\lambda}=0\ \rightarrow\ \tilde{\lambda}=\tilde{\lambda}_0\ .
\label{3.15} \end{equation} Then, the energy-momentum tensor of the scalar field (\ref{3.11b}) becomes \begin{equation} T_{\mu\nu}^{(\varphi)}=\partial_{\mu}\varphi\partial_{\nu}\varphi-g_{\mu\nu}\left(\frac{1}{2}\partial_{\sigma}\varphi\partial^{\sigma}\varphi+\tilde{V}\right)-2\tilde{\lambda}_0 g_{\mu\nu}e^{-2\sqrt{2/3}\kappa\varphi}\ . \label{3.16} \end{equation} Hence, the field equations (\ref{3.11}) are basically the equations of the action \begin{equation} \tilde{S}=\int dx^4\left[\sqrt{-\tilde{g}}\left(\frac{\tilde{R}}{2\kappa^2}-\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi-\tilde{V}_{eff}(\varphi)\right)\right]\ , \label{3.17} \end{equation} where the effective potential is defined as \begin{equation} \tilde{V}_{eff}(\varphi)=\tilde{V}(\varphi)+2\tilde{\lambda}_0 e^{-2\sqrt{2/3}\kappa\varphi}\ . \label{3.18} \end{equation} In comparison with the Jordan frame, where a cosmological constant naturally arises, here the scalar potential is modified, which may introduce corrections to some solutions. In the next section, we explore some cosmological solutions within the context of $f(R)$ and modified Gauss-Bonnet gravities, but solutions in the Einstein frame are also analyzed; in particular, we study Starobinsky inflation within the context of unimodular gravity by applying the results obtained above.\\ \section{Cosmological solutions} \label{cosmo_sect} Let us now explore some cosmological solutions in the generalizations of unimodular gravity studied above. Here we intend to analyze dark energy solutions as well as some inflationary models.
\subsection{Late-time acceleration} Since we are interested in late-time cosmological solutions, we assume a flat Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) metric, \begin{equation} ds^2=-dt^2+a^2(t)\sum_{i=1}^{3}\left(dx^{i}\right)^2\ . \label{4.1} \end{equation} Let us start by studying solutions in $f(R)$ unimodular gravity, whose field equations (\ref{2.8}) for the metric (\ref{4.1}) turn out to be \begin{widetext} \begin{eqnarray} H^2&=&\frac{1}{3f_R}\left[\kappa^2 \rho_m +\frac{Rf_R-f}{2}-3H\dot{R}f_{RR}+\lambda_0\right]\ , \nonumber\\ -3H^2-2\dot{H}&=&\frac{1}{f_R}\left[\kappa^2p_m+\dot{R}^2f_{RRR}+2H\dot{R}f_{RR}+\ddot{R}f_{RR}+\frac{1}{2}(f-Rf_R-2\lambda_0)\right]\ . \label{4.2} \end{eqnarray} \end{widetext} Note that every solution of a particular $f(R)$ gravity is also a solution of unimodular $f(R)$ gravity, just by shifting the action $f\rightarrow f-2\lambda_0$. Nevertheless, the additional constant in the FLRW equations (\ref{4.2}) may provide a wider set of solutions. To illustrate this, let us analyze some particular cosmological solutions. Since the universe goes through several accelerating stages, the de Sitter solution plays an important role; there the Hubble parameter is given by \begin{equation} H(t)=H_0\ . \label{4.3} \end{equation} Moreover, $H=H_0$ is a critical point in every $f(R)$ gravity \cite{Cognola:2008zp}, such that the possible critical points of a particular gravitational action can be identified with the dark energy epoch and also with inflation. Then, for the de Sitter solution (\ref{4.3}), the first FLRW equation (in vacuum) is given by \begin{equation} 3f_{R_0}H_0^2-\frac{1}{2}(R_0f_{R_0}-f_0)-\lambda_0=0\ . \label{4.4} \end{equation} Hence, every root of this equation is a critical point and becomes a possible de Sitter stage through which the universe may evolve.
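The de Sitter condition can be solved numerically for a particular model; the following minimal sketch (illustrative only, with arbitrary values $m^2=1$, $\lambda_0=0.3$ in geometric units) applies the first equation of (\ref{4.2}) in vacuum, $3f_RH_0^2=(R_0f_{R_0}-f_0)/2+\lambda_0$ with $R_0=12H_0^2$, to the Starobinsky-type model $f(R)=R+R^2/(6m^2)$.

```python
# Sketch: locate the de Sitter point of f(R) = R + R^2/(6 m^2) in unimodular
# f(R) gravity from 3 f_R H0^2 = (R0 f_R - f)/2 + lambda_0, with R0 = 12 H0^2.
# m2 and lam0 are arbitrary illustrative values (geometric units).
lam0, m2 = 0.3, 1.0

def f(R):   return R + R**2 / (6.0 * m2)
def f_R(R): return 1.0 + R / (3.0 * m2)

def de_sitter_eq(H):
    R0 = 12.0 * H**2
    return 3.0 * f_R(R0) * H**2 - 0.5 * (R0 * f_R(R0) - f(R0)) - lam0

# simple bisection for the root of de_sitter_eq on (0, 1]
lo, hi = 1e-6, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if de_sitter_eq(lo) * de_sitter_eq(mid) <= 0:
        hi = mid
    else:
        lo = mid
H0 = 0.5 * (lo + hi)
print(H0)  # for this f(R) the R^2 terms cancel, leaving H0 = sqrt(lambda_0/3)
```

Note that for this particular model the $R^2$ contributions drop out of the de Sitter condition, so the accelerating stage is driven entirely by the integration constant $\lambda_0$.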
The presence of $\lambda_0$ introduces a correction that some particular $f(R)$ models leading to an effective cosmological constant (such as the Hu-Sawicki model \cite{Hu:2007nk}) may require.\\ Let us now explore power-law solutions, which are also of great importance throughout the history of the universe, \begin{equation} a(t)=a_0t^m\ ,\quad H(t)=\frac{m}{t}\ . \label{4.5} \end{equation} Note that for pressureless matter $m=2/3$, for radiation $m=1/2$, and for an accelerating universe $m>1$. The above solution has been analyzed in standard $f(R)$ gravity, where the following action holds \cite{Goheer:2009ss}: \begin{equation} f(R)=A_{\pm}R^{\frac{1}{4}\left(3-m\pm\sqrt{1+10m+m^2}\right)}\ . \label{4.6} \end{equation} If we consider the above $f(R)$ gravity within unimodular gravity with $m<1$, the effective cosmological constant $\lambda_0$ may become important at late times, when the dark energy epoch starts, while the terms in (\ref{4.6}) may contribute during the matter/radiation epochs, when they dominate over $\lambda_0$. Moreover, if $m>1$, the unimodular $f(R)$ gravity (\ref{4.6}) contributes to the acceleration of the expansion, leading to corrections over a de Sitter expansion that depend on the weight of $A_{\pm}$ in comparison with $\lambda_0$. \\ Let us now consider the unimodular version of Gauss-Bonnet gravity (\ref{2.11}), whose first FLRW equation becomes \begin{equation} 3H^2=\kappa^2\rho_m+\frac{1}{2}(Gf_G-f)-12f_{GG}\dot{G}H^3+\lambda_0\ , \label{4.7} \end{equation} where $G=24(\dot{H}H^2+H^4)$. As in the previous case, we can analyze de Sitter solutions by introducing (\ref{4.3}) into equation (\ref{4.7}), which yields an algebraic equation, \begin{equation} 3H_0^2+\frac{1}{2}(f_0-G_0f_{G_0})-\lambda_0=0\ . \label{4-8} \end{equation} Hence, the emerging cosmological constant $\lambda_0$ would determine the de Sitter points, and consequently the accelerating stages of the universe.
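The exponents of the power-law action (\ref{4.6}) can be evaluated directly for the standard expansion rates; the short sketch below (added for illustration) does so for radiation ($m=1/2$) and dust ($m=2/3$).

```python
# Illustrative evaluation of the exponents n in f(R) = A R^n,
# n = (3 - m +/- sqrt(1 + 10 m + m^2)) / 4, for a(t) ~ t^m.
from math import sqrt

def exponents(m):
    root = sqrt(1.0 + 10.0 * m + m * m)
    return ((3.0 - m + root) / 4.0, (3.0 - m - root) / 4.0)

n_rad  = exponents(0.5)        # radiation: exactly (1.25, 0.0)
n_dust = exponents(2.0 / 3.0)  # dust: irrational exponents
print(n_rad, n_dust)
```

For radiation the square root is exactly $5/2$, so the two branches give $f\propto R^{5/4}$ and the trivial constant branch $n=0$.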
In the case of power-law solutions (\ref{4.5}), the exact action within pure Gauss-Bonnet gravity (with no Ricci scalar in the action) that reproduces such solutions in vacuum is \cite{Myrzakulov:2010gt} \begin{equation} f(G)=AG^{\frac{1-m}{4}}\ , \label{4.9} \end{equation} which may play the same role as in the case of $f(R)$ gravity, as shown above. Nevertheless, the most important feature of the action $f(R,G)=R+f(G)$ is that it can reproduce the exact $\Lambda$CDM model, \begin{equation} H^2=\frac{\Lambda}{3}+\frac{\kappa^2}{3}\rho_0 a^{-3}\ , \label{4.10} \end{equation} by means of the gravitational action given by \cite{Elizalde:2010jx} \begin{eqnarray} f(R,G)&=&R+a_1\left(\Lambda\pm\sqrt{9\Lambda^2-3G}\right)^2\nonumber\\ &&+a_2\left(\Lambda\pm\sqrt{9\Lambda^2-3G}\right)+a_3\ , \label{4.11} \end{eqnarray} where $a_{1}$ is an integration constant, while $a_2=\frac{6-30a_1\Lambda}{15}$ and $a_3=3\left(1-6a_1\Lambda\right)$ are constants. Then, by identifying the last term of (\ref{4.11}) with the cosmological constant $\lambda_0$, \begin{equation} \lambda_0=-\frac{3}{2}\left(1-6a_1\Lambda\right)\ , \label{4.12} \end{equation} the unimodular version of Gauss-Bonnet gravity described by the action (\ref{4.11}) arises naturally as the gravitational action which leads to the $\Lambda$CDM model (\ref{4.10}). \\ Hence, extensions of unimodular gravity provide reliable descriptions of the late-time acceleration in a natural way. \subsection{Inflation} Let us now study how these extensions of unimodular gravity may affect the inflationary paradigm. In particular, here we analyze Starobinsky inflation \cite{staro} within the unimodular $f(R)$ theory (\ref{2.3}), whose action in this case is given by \begin{equation} S=\frac{1}{2\kappa^2}\int dx^4\left[\sqrt{-g}\left( R+\frac{R^2}{6m^2}\right)-2\lambda(\sqrt{-g}-s_0)\right]\ , \label{4.13} \end{equation} where $m^2$ is a constant.
In order to simplify the calculations, we work in the scalar-tensor equivalence (\ref{3.4}), whose correspondence to the action (\ref{4.13}) is provided by \begin{equation} \phi=1+\frac{R}{3m^2}\ , \quad V(\phi)=\frac{3m^2}{2}(\phi-1)^2\ . \label{4.14} \end{equation} Applying the conformal transformation (\ref{3.7}) and the definitions (\ref{3.10}), the action (\ref{3.17}) is constructed following the steps described in Section \ref{Einstein_sect}, where the effective potential for the case (\ref{4.13}) is given by \begin{equation} \tilde{V}_{eff}(\varphi)=\frac{1}{2\kappa^2}\left[\frac{3m^2}{2}\left(1-e^{-\sqrt{2/3}\kappa\varphi}\right)^2+2\lambda_0 e^{-2\sqrt{2/3}\kappa\varphi}\right]\ . \label{4.16} \end{equation} Then, the FLRW equations are \begin{eqnarray} \frac{3}{\kappa^2} H^2&=& \frac{1}{2}{\dot \varphi}^2 + \tilde{V}_{eff}(\varphi)\, , \nonumber\\ - \frac{1}{\kappa^2} \left( 3 H^2 + 2\dot H \right) &=& \frac{1}{2}{\dot \varphi}^2 - \tilde{V}_{eff}(\varphi)\, , \label{4.17} \end{eqnarray} while the scalar field satisfies \begin{equation} \ddot{\varphi}+3H\dot{\varphi}+\frac{\partial \tilde{V}_{eff}(\varphi)}{\partial\varphi}=0\ . \label{4.18} \end{equation} Slow-roll inflation occurs in the regime $\kappa \varphi \gg 1$, where the friction term in (\ref{4.18}) dominates and the expansion grows approximately exponentially, with the Hubble parameter $H\sim H_0$. Then, the following relations hold: \begin{equation} H\dot{\varphi}\gg\ddot{\varphi}\,,\quad \tilde{V}_{eff}\gg\dot{\varphi}^2\ .
\label{4.19} \end{equation} Equivalently, we can define the slow-roll parameters \begin{equation} \epsilon= \frac{1}{2\kappa^2} \left( \frac{\tilde{V}_{eff}'(\varphi)}{\tilde{V}_{eff}(\varphi)} \right)^2\, ,\quad \eta= \frac{1}{\kappa^2} \frac{\tilde{V}_{eff}''(\varphi)}{\tilde{V}_{eff}(\varphi)}\, . \label{4.19b} \end{equation} Hence, during inflation $\epsilon\ll 1$ and $|\eta|<1$, while after a sufficient number of e-foldings, usually around $N=50$--$65$, one gets $\epsilon\geq 1$: the scalar field $\varphi$ rolls down the potential slope, and the kinetic term becomes important and eventually dominates. Then, the field oscillates around the minimum of the potential, emitting particles and reheating the Universe. Hence, by using these approximations and combining the FLRW equations (\ref{4.17}) with the scalar field equation (\ref{4.18}), the equations during inflation are given approximately by \begin{eqnarray} H^2 &\simeq& \frac{\kappa^2}{3} \tilde{V}_{eff}(\varphi)\, , \nonumber\\ 3H \dot{\varphi} &\simeq& - \tilde{V}_{eff}'(\varphi)\ . \label{4.20} \end{eqnarray} The slow-roll parameters (\ref{4.19b}) for the potential (\ref{4.16}) are given by \begin{eqnarray} \epsilon&=&\frac{4}{3} \frac{\left[3m^2\left(-1+e^{\sqrt{2/3}\kappa\varphi}\right)-4\lambda_0\right]^2}{\left[3m^2\left(-1+e^{\sqrt{2/3}\kappa\varphi}\right)^2+4\lambda_0\right]^2}\ , \nonumber \\ \eta&=&\frac{4}{3} \frac{-3m^2\left(-2+e^{\sqrt{2/3}\kappa\varphi}\right)+8\lambda_0}{3m^2\left(-1+e^{\sqrt{2/3}\kappa\varphi}\right)^2+4\lambda_0}\ . \label{4.21} \end{eqnarray} Starobinsky inflation is recovered by setting $\lambda_0=0$.
Nevertheless, since $m^2/\lambda_0>\mathrm{e}^{-2\sqrt{2/3}\kappa\varphi_{start}}$ in order to ensure a sufficient number of e-foldings before the field rolls down, together with $\kappa\varphi\gg 1$, the slow-roll parameters reduce to \begin{eqnarray} \epsilon&=&\frac{4}{3} e^{-2\sqrt{2/3}\kappa\varphi}\ , \nonumber \\ \eta&=&-\frac{4}{3} e^{-\sqrt{2/3}\kappa\varphi}\ . \label{4.22} \end{eqnarray} In addition, the spectral index and the tensor-to-scalar ratio are given in terms of the slow-roll parameters by \begin{equation} n_s-1=-6\epsilon+2\eta\ , \quad r=16\epsilon\ . \label{4.23} \end{equation} It is straightforward to calculate the number of e-foldings during inflation, which is given by \begin{equation} N \equiv \int_{t_{start}}^{t_{end}} \tilde{H} dt=-\kappa^2 \int_{\varphi_{start}}^{\varphi_{end}} \frac{\tilde{V}_{eff}(\varphi)}{\tilde{V}_{eff}'(\varphi)}d\varphi \simeq \frac{3}{4}e^{\sqrt{2/3}\kappa \; \tilde{\varphi}_{start}}\ . \label{4.24} \end{equation} Note that the number of e-foldings is related to the slow-roll parameters as \begin{equation} \epsilon \simeq \frac{3}{4}\frac{1}{N^2}\ , \quad \eta \simeq -\frac{1}{N}\ . \label{SRparamStaro} \end{equation} Then, assuming a number of e-foldings $N\sim65$, the following values of the inflationary observables are obtained: \begin{equation} n_s=0.968\ , \quad r=0.00284\ . \label{SpRStaro} \end{equation} This is exactly the same result as in Starobinsky inflation, which satisfies quite well the constraints provided by the latest observational data \cite{Ade:2015lrj}. Hence, as long as $m^2/\lambda_0>\mathrm{e}^{-2\sqrt{2/3}\kappa\varphi_{start}}$, the unimodular version of Starobinsky inflation is likewise successful, but it also includes the corresponding cosmological constant that may dominate at late times, leading to a complete description of the evolution of the universe.
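These values can be checked directly from the relations (\ref{SRparamStaro}): the short Python sketch below evaluates the slow-roll parameters for $N=65$ together with the standard slow-roll expressions $n_s-1=-6\epsilon+2\eta$ and $r=16\epsilon$.

```python
# Check of the inflationary observables from the slow-roll relations:
# epsilon ~ 3/(4 N^2), eta ~ -1/N, combined with the standard slow-roll
# expressions n_s - 1 = -6*epsilon + 2*eta and r = 16*epsilon.
N = 65
eps = 3.0 / (4.0 * N**2)
eta = -1.0 / N
n_s = 1.0 - 6.0 * eps + 2.0 * eta
r = 16.0 * eps

print(f"n_s = {n_s:.4f}, r = {r:.5f}")  # n_s = 0.9682, r = 0.00284
```

The result reproduces the quoted values to the precision displayed in the text.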
\section{Conclusions} \label{Conclusions_sect} Summarizing, in this manuscript we have extended the so-called unimodular gravity to more general actions beyond the Hilbert-Einstein one. As in the original case, extensions of unimodular gravity can be constructed departing from the trace-free part of the field equations or, alternatively, from the gauge choice that fixes the determinant of the metric to be a constant. As widely pointed out in the literature, any implementation of the unimodular constraint leads to the same results at the classical level, but differences may arise when quantum effects are considered. Nevertheless, since this paper is devoted to classical aspects, we have enforced the unimodular constraint in the action through a Lagrange multiplier, such that calculations become simpler when dealing with gravitational Lagrangians containing more general functions of curvature invariants. Hence, following this procedure, and in spite of the apparent lack of symmetries, extensions of unimodular gravity lead to the covariant field equations of the original theories with the presence of a cosmological constant. \\ The issue is more subtle when dealing with conformal transformations. As shown, by transforming the gravitational action from the Jordan to the Einstein frame, the determinant is no longer fixed to be a constant. However, the Lagrange multiplier used to fix the determinant of the metric turns out to be a constant as well, such that the corresponding counterpart in the Einstein frame becomes the usual quintessence-like model, but in this case with a correction in the scalar potential. Such an additional term may have consequences when studying some particular solutions.\\ Finally, some cosmological solutions have been studied within the unimodular version of Gauss-Bonnet gravity and $f(R)$ gravity (together with its scalar-tensor equivalence).
As shown, the unimodular versions of these theories provide a richer set of solutions and are able to give a complete picture of the universe evolution in a natural way. In addition, the predictions of Starobinsky inflation are fully recovered as long as the correction in the scalar potential is appropriately set. Moreover, the unimodular version of Starobinsky inflation may provide an explanation for the late-time acceleration through the effective cosmological constant that naturally arises. Hence, such results point to $R+R^2$ as a reliable cosmological model for describing the whole universe history. \section*{Acknowledgments} I would like to thank Sergei D. Odintsov and Ippocratis Saltas for their help and valuable comments. I acknowledge support from the postdoctoral fellowship Ref.~SFRH/BPD/95939/2013 of Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (FCT, Portugal) and support through the research grant UID/FIS/04434/2013 (FCT, Portugal).
\section{Introduction} Given a group of human students, assuming they work equally hard, there are three major factors determining which students learn better than others: intelligence, learning skills, and learning materials. People with a higher intelligence quotient (IQ) are stronger learners. Learning materials, such as textbooks, video lectures, practice questions, etc., are also crucial in determining the quality of learning. Another vital factor impacting learning outcomes is learning skills. Oftentimes, students in the same class have similar IQs and have access to the same learning materials, but their final grades (which measure learning quality) have a large variance. The major differentiating factor is that different students have different levels of mastery of learning skills. Some students have better learning methodologies, which enable them to learn faster and better. In the long history of learning, humans have accumulated many effective learning skills, such as learning through tests, interleaving learning, self-explanation, active recall, etc. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{figs/hl-ml} \caption{Human learning (HL) versus machine learning (ML). Model capacity in ML is analogous to intelligence in HL. Data in ML is analogous to learning materials in HL. The machines' learning skills formulated by our proposed Skillearn framework are analogous to humans' learning skills. } \label{fig:hl-ml} \end{figure*} Similar to human learning, the performance of machine learning (ML) models is also determined by several factors. In the current practice of ML, two dominant factors determining ML performance are the capacity of models and the abundance of data. ML model capacity is analogous to the intelligence of humans.
From linear models such as support vector machines to nonlinear models such as deep neural networks, ML researchers have been continuously building more powerful ML models to deal with more complicated tasks. This is akin to the evolution of human brains, which have become increasingly intelligent. Data for ML is analogous to learning materials for humans. ML models trained with more labeled data in general perform better. For intelligence and learning materials in human learning (HL), we identify their counterparts in machine learning as model capacity and data. We are interested in asking: do learning skills in HL have counterparts in ML as well? Can machines be equipped with effective learning skills as humans are? In this paper, we aim to address these questions. We propose a general framework -- Skillearn, which draws inspiration from humans' learning skills and formulates them into machines' learning skills (MLS). These MLS are leveraged to train better ML models. In Skillearn, there are one or multiple learner models, each with one or multiple sets of learnable parameters such as weight parameters, architectures, hyperparameters, etc. Different learners interact with each other through interaction functions. The learning of all learners is organized into multiple stages, each involving a subset of learners. The stages have an order, but they are performed end-to-end in a multi-level optimization framework where later stages influence earlier stages and vice versa. We develop a unified optimization algorithm for solving the multi-level optimization problem in Skillearn. In two case studies, we apply Skillearn to formalize two learning skills of humans -- learning by passing tests (LPT) and interleaving learning (IL) -- into machines' learning skills (MLS) and leverage these MLS for neural architecture search~\citep{zoph2016neural,real2019regularized,liu2018darts}.
In LPT, a tester model dynamically creates tests with increasing levels of difficulty to evaluate a testee model; the testee continuously improves its architecture so as to pass the tests created by the tester, however difficult they become. In IL, a set of models collaboratively learn a data encoder in an interleaving fashion: the encoder is trained by model 1 for a while, then passed to model 2 for further training, then model 3, and so on; after being trained by all models, the encoder returns to model 1 and is trained again, then moves to models 2, 3, etc. This process repeats for multiple rounds. Experiments on various datasets demonstrate that ML models trained with these two learning skills achieve significantly better performance. The major contributions of this work are as follows. \begin{itemize} \item We propose to leverage the broadly-used and effective learning skills in human learning to develop better machine learning methods. \item We propose Skillearn, a general framework for formulating humans' learning skills into machines' learning skills that can be leveraged by ML models for achieving better learning outcomes. \item We apply Skillearn to formalize two skills in human learning -- learning by passing tests (LPT) and interleaving learning (IL) -- and apply them to improve neural architecture search. \item On various datasets, we demonstrate the effectiveness of the two skills -- LPT and IL, formalized by Skillearn -- in learning better neural architectures. \end{itemize} The rest of the paper is organized as follows. Section 2 presents the general Skillearn framework. In Sections 3 and 4, we present two case studies, where Skillearn is applied to formalize two skills in human learning: learning by passing tests and interleaving learning. Section 5 reviews related work and Section 6 concludes the paper.
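The interleaving schedule underlying IL can be sketched as a minimal training loop. The snippet below is only a toy illustration (the shared encoder is a single scalar and `train_step` is a stand-in quadratic-loss update, not the paper's implementation):

```python
# Toy sketch of interleaving learning: a shared encoder parameter is updated
# by three hypothetical models in turn (pattern A B C A B C ...), rather than
# being trained to convergence by one model before moving to the next.
encoder = 0.0
task_targets = (1.0, 2.0, 3.0)  # stand-ins for the objectives of models 1-3

def train_step(enc, target, lr=0.1):
    # One gradient step on a toy quadratic loss 0.5 * (enc - target)**2
    return enc - lr * (enc - target)

for _ in range(5):                    # interleaving rounds
    for target in task_targets:       # model 1, then model 2, then model 3
        for _ in range(10):           # a short training block per model
            encoder = train_step(encoder, target)

# The final encoder reflects all three objectives, not only the last one
assert 1.0 < encoder < 3.0
```

Blocked learning would instead run each inner loop to convergence once, leaving the encoder dominated by the final task.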
\section{Skillearn: Machine Learning Inspired by Humans' Learning Skills} In this section, we present a general framework called Skillearn, which draws inspiration from humans' learning skills, formalizes these skills, and leverages them to improve machine learning. We begin with a brief overview of humans' learning skills and summarize their properties. Then we present the Skillearn framework and the optimization algorithm for this framework. \subsection{Humans' Learning Skills} Humans, as the most powerful learners on the planet, have accumulated a lot of skills and techniques for learning faster and better. Here are some examples. \begin{itemize} \item \textbf{Learning through testing.} After learning a topic, a student can solve some test problems (created or selected by a teacher) about this topic to identify the strong and weak points in his/her understanding of this topic, and re-learn the topic based on the identified strong and weak points. In re-learning, the identified strong and weak points help the student know what to focus on. The quality of test problems plays a crucial role in effectively evaluating the student. How to create or select high-quality test problems is an important skill that the teacher needs to learn. \item \textbf{Interleaving learning} is a learning technique where a learner interleaves the studies of multiple topics: study topic $A$ for a while, then switch to $B$, subsequently to $C$; then switch back to $A$, and so on, forming a pattern of $ABCABCABC\cdots$. Interleaving learning is in contrast to blocked learning, which studies one topic very thoroughly before moving to another topic. Compared with blocked learning, interleaving learning increases long-term retention and improves the ability to transfer learned knowledge.
\item \textbf{Learning by ignoring.} In course learning, given a large collection of practice problems provided in the textbook, the teacher selects a subset of problems as homework for the students to practice, instead of using all problems in the textbook. Some practice problems are ignored because 1) they are too difficult, which might confuse the students; 2) they are too simple, and thus not effective in helping the students practice the knowledge learned during lectures; or 3) they are repetitive. \end{itemize} \subsubsection{Properties of Humans' Learning Skills} From the above examples of humans' learning skills, we observe the following properties. \begin{itemize} \item A learning event involves multiple learners. For example, in learning through testing, there are two learners: a student and a teacher. The teacher learns how to create test problems and the student learns how to solve these test problems. \item In a learning task, a learner has multiple aspects to learn about this task. For example, in learning by ignoring, to create effective homework problems, the teacher needs to learn: 1) how to solve these problems; and 2) which problems are more valuable to use as homework. \item Different learners interact with each other during learning. For example, in learning through testing, the teacher creates test problems and uses them to evaluate the student. \item In a learning task, the learning process is divided into multiple stages. These stages have a certain order. Each stage involves a subset of learners. For example, in learning through testing, there are three stages: 1) the teacher learns a topic; 2) the teacher creates test problems about this topic and uses them to evaluate the student; 3) based on the strong and weak points identified while solving the test problems, the student re-learns this topic. The three stages have a sequential order and cannot be switched.
The first stage involves the teacher only; the second stage involves both the teacher and the student; the third stage involves the student only. \item Testing and validation are widely used to evaluate the outcome of learning and provide feedback for improving learning. For example, in learning through testing, the student takes a test to identify the strong and weak points in his/her learning of a topic. \item Learning is performed on various learning materials, including textbooks used for initial learning, homework used for enhancing the understanding of knowledge learned from textbooks, tests used for evaluating the outcome of learning, etc. \end{itemize} \subsection{General Framework of Skillearn} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figs/skillearn} \caption{\scriptsize The elements of Skillearn and their counterparts in human learning. The goal of Skillearn is to learn one or a set of ML models, which are analogous to learners in human learning. The models can be of any type, such as a deep neural network, a decision tree, a support vector machine, etc. In a human learning event, a learner has multiple aspects to learn, such as how to read, how to write, how to draw, etc. Analogously, a model in Skillearn has multiple sets of parameters that are learnable, such as architectures, network weights, weights of training examples, etc. In human learning, different learners interact with each other. For example, a teacher can teach a student. An examiner can evaluate an examinee. Likewise, the models in Skillearn can interact with each other. For example, in knowledge distillation, a teacher model (e.g., a deep neural network) can ``teach" a student model (e.g., a decision tree) where the teacher predicts pseudo labels on unlabeled data; then these pseudo-labeled data examples are used to train the student model. In a human learning event, there are multiple learning stages.
For example, in classroom learning, there could be three learning stages: 1) a teacher learns the course materials; 2) the teacher teaches these materials to students; 3) the students take tests to evaluate how well they have learned. Analogously, in Skillearn, the learning involves multiple stages. For example, in knowledge distillation, there could be three learning stages: 1) a teacher model is trained; 2) the teacher performs knowledge distillation to ``teach" a student model as described above; 3) the performance of the student model is evaluated. In human learning, tests are widely used to evaluate the learners and provide feedback for improving the learners. Analogously, ML models are validated for further improvement. In human learning, the learners learn from learning materials such as textbooks, lecture notes, homework, etc. Likewise, ML models are learned on various datasets, such as training data, validation data, and other auxiliary data. } \label{fig:skillearn} \end{figure} Based on the properties of humans' learning skills, we propose a framework called Skillearn to formalize the learning skills of humans and incorporate them into machine learning. In Skillearn, we have the following elements. \begin{itemize} \item \textbf{Learners}. There could be one or multiple learners. Each learner is an ML model, such as a deep convolutional network, a deep generative model, a nonparametric kernel density estimator, etc. This is analogous to human learning, which involves one or multiple human learners. \item \textbf{Learnable parameters}. Each learner has one or more sets of learnable parameters, which could be weight parameters of a network, the architecture of a network, weights of training examples, hyperparameters, etc. This is analogous to human learning, where each human learner learns multiple aspects in a learning task. \item \textbf{Interaction function}, which describes how two or more learners interact with each other.
Some examples of interaction include: 1) in knowledge distillation, given an unlabeled image dataset, model $A$ predicts the pseudo labels of these images; then model $B$ is trained using these images and the pseudo labels generated by model $A$; 2) given a set of texts, two text encoders $A$ and $B$ extract embeddings of the texts; $A$ and $B$ are tied together via distributional matching: the distribution of embeddings extracted by $A$ is encouraged to have a small total variation distance to the distribution of embeddings extracted by $B$. This is analogous to human learning, where multiple human learners interact with each other. \item \textbf{Learning stages}. The learning of all learners is not conducted simultaneously in one shot. Instead, the learning is performed in multiple ordered stages. At each stage, a subset of learners participate in the learning. For example, in knowledge distillation, there are two stages: 1) a teacher model is trained; 2) the teacher model predicts pseudo labels on an unlabeled dataset and the pseudo-labeled dataset is used to train the student model. The first stage involves a single learner, which is the teacher. The second stage involves two learners: the teacher and the student. This is analogous to human learning, where the learning process is divided into multiple stages. Mathematically, we formulate the learning at each stage as an optimization problem. The outcome of one learning stage is passed to another learning stage via the interaction function. \item \textbf{Validation stage.} This stage evaluates the outcome of learning and provides feedback to improve the learning at the learning stages. This is analogous to the testing and validation in human learning. The validation stage is formulated as an optimization problem as well. The learning outcomes produced in the learning stages are passed to the validation stage. \item \textbf{Datasets.} Datasets in ML are analogous to learning materials in human learning.
Each learner has a training dataset and a validation dataset. The training dataset is used in the learning stages and the validation dataset is used in the validation stage. Besides, there are auxiliary datasets (labeled or unlabeled) on which the learners interact with each other. \end{itemize} Next, we define the learning stages. Each learning stage performs a focused learning activity, which is formulated as an optimization problem. The optimization problem involves a training loss and (optionally) an interaction function which describes how the learners involved in this stage interact with each other. A learning stage consists of the following elements: \begin{itemize} \item \textbf{Active learners}. A subset of learners (one or more) are involved at this learning stage. These learners are called active learners. \item \textbf{Active learnable parameters}. For each active learner, a sub-collection of its learnable parameter sets are trained in this stage. \item \textbf{Supporting learnable parameters}. For each active learner, a sub-collection of its learnable parameter sets are used to define the loss function and interaction function, but they are not updated at this stage. \item \textbf{Active training datasets}, which include the training dataset of every active learner. \item \textbf{Active auxiliary datasets}, which include the auxiliary datasets on which the interaction function in this learning stage is defined. \item \textbf{Training loss}, which is defined on the active training datasets, active learnable parameters, and supporting learnable parameters. \item \textbf{Interaction function}, which depicts the interaction between two or more active learners. It is defined on the active auxiliary datasets, active learnable parameters, and supporting learnable parameters. \end{itemize} In Skillearn, there is a single validation stage where an optimization problem is defined.
The optimization problem involves one or more validation losses and (optionally) an interaction function which describes how the learners in the validation stage interact with each other. The validation stage consists of the following elements. \begin{itemize} \item \textbf{Active learners}, which are the learners to validate. \item \textbf{Remaining learnable parameters}. At each learning stage, a subset of parameters are learned. After all learning stages, the parameters that have not been learned are called remaining parameters. The remaining parameters are updated in the validation stage. \item \textbf{Validation datasets}: the validation datasets of all active learners. \item \textbf{Active auxiliary datasets}, which include the auxiliary datasets on which the interaction function in the validation stage is defined. \item \textbf{Validation losses}, which are defined on remaining learnable parameters, validation datasets, and (optionally) active auxiliary datasets. \item \textbf{Interaction function}, which depicts the interaction between two or more active learners. It is defined on remaining learnable parameters and active auxiliary datasets. 
\end{itemize} \subsubsection{Mathematical Setup} \begin{table}[t] \centering \begin{tabular}{l|p{12cm}} \hline Notation & Meaning \\ \hline $M$ & Number of learners\\ $D_m^{(\textrm{tr})}$ & Training data of the $m$-th learner\\ $D_m^{(\textrm{val})}$ & Validation data of the $m$-th learner\\ $\mathcal{F}$ & Auxiliary datasets accessible to all learners\\ $N_m$ & Number of learnable parameter sets belonging to the $m$-th learner\\ $W_i^{(m)}$ & The $i$-th learnable parameter set of the $m$-th learner\\ $K$ & Number of learning stages\\ $M_k$ & Number of active learners in the $k$-th learning stage\\ $\mathcal{A}_k$ & The set of active learners in the $k$-th learning stage\\ $a_i^{(k)}$ & The $i$-th active learner in the $k$-th learning stage\\ $O_{ki}$ & The number of active parameter sets of the $i$-th active learner in the $k$-th learning stage\\ $W_{kij}$ & The $j$-th active parameter set of the $i$-th learner in the $k$-th learning stage \\ $\mathcal{W}_{ki}$ & The collection of active parameter sets of the $i$-th active learner in the $k$-th learning stage \\ $\mathcal{W}_k$ & All active parameter sets in the $k$-th learning stage \\ $P_{ki}$ & The number of supporting parameter sets of the $i$-th active learner in the $k$-th learning stage\\ $U_{kij}$ & The $j$-th supporting parameter set of the $i$-th active learner in the $k$-th learning stage \\ $\mathcal{U}_{ki}$ & The collection of supporting parameter sets of the $i$-th active learner in the $k$-th learning stage \\ $\mathcal{U}_k$ & All supporting parameter sets in the $k$-th learning stage \\ $D_{ki}^{(\textrm{tr})}$ & Training dataset of the $i$-th active learner in the $k$-th learning stage\\ $\mathcal{D}_k^{(\textrm{tr})}$ & Active training datasets in the $k$-th learning stage\\ $\mathcal{F}_k$ & Active auxiliary datasets in the $k$-th learning stage\\ $L_k$ & Training loss in the $k$-th learning stage\\ $I_k$ & Interaction function in the $k$-th learning stage\\ \hline \end{tabular} 
\caption{Notations in the Skillearn framework} \label{tb:skl_notations} \end{table} We assume there are $M$ learners in total. Each learner $m$ has a training set $D^{(\textrm{tr})}_m$ and optionally a validation set $D^{(\textrm{val})}_m$. Meanwhile, all learners share a common collection of auxiliary datasets $\mathcal{F}$, which could be unlabeled datasets used for self-supervised pretraining~\citep{he2019moco}, additional labeled datasets used for validation, and so on. Learner $m$ has one or more sets of learnable parameters $\{W_i^{(m)}\}_{i=1}^{N_m}$. The learnable parameters could be network weights, architectures, hyperparameters, weights of training examples, etc. We assume there are $K$ learning stages. At each stage $k$, a subset of $M_k$ learners $\mathcal{A}_k=\{a^{(k)}_i\}_{i=1}^{M_k}$ are involved in the learning; they are called active learners. For each active learner $a^{(k)}_i$, a sub-collection of its learnable parameter sets $\mathcal{W}_{ki}=\{W_{kij}\}_{j=1}^{O_{ki}}$ are trained at this stage; they are called active learnable parameters. Let $\mathcal{W}_{k}=\{\mathcal{W}_{ki}|i=1,\cdots,M_k\}$ denote the active learnable parameters of all active learners. Meanwhile, another sub-collection of its learnable parameter sets $\mathcal{U}_{ki}=\{U_{kij}\}_{j=1}^{P_{ki}}$ are used to define the training loss function and interaction function, but are not updated at this stage; they are called supporting learnable parameters. Let $\mathcal{U}_{k}=\{\mathcal{U}_{ki}|i=1,\cdots,M_k\}$ denote the supporting learnable parameters of all active learners. Let $\mathcal{D}_{k}^{(\textrm{tr})}$ denote the active training datasets, consisting of the training dataset of each active learner in $\mathcal{A}_k$. Let $\mathcal{F}_k$ denote the active auxiliary datasets used in this stage to define the interaction function.
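This bookkeeping -- learners owning named parameter sets, and per-stage splits into active and supporting sets -- can be captured by a small data structure. The Python sketch below is only illustrative; all names (`Learner`, `LearningStage`, the distillation-style example) are hypothetical and not part of the paper's implementation.

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class Learner:
    # A learner owns named learnable parameter sets W_i^(m),
    # e.g. network weights, an architecture encoding, example weights.
    name: str
    param_sets: Dict[str, Any]
    train_data: Any = None
    val_data: Any = None

@dataclass
class LearningStage:
    # Stage k: which learners are active, which of their parameter sets
    # are updated (active, W_ki) and which only appear in the objective
    # (supporting, U_ki), plus the tradeoff weight gamma_k.
    active_learners: List[str]
    active_params: Dict[str, List[str]]
    supporting_params: Dict[str, List[str]]
    gamma: float = 1.0

# Hypothetical two-stage, distillation-style configuration
teacher = Learner("teacher", {"weights": None})
student = Learner("student", {"weights": None, "architecture": None})
stages = [
    LearningStage(["teacher"], {"teacher": ["weights"]}, {}),
    LearningStage(["teacher", "student"],
                  {"student": ["weights"]},
                  {"student": ["architecture"]}),
]

# Active/supporting splits must refer to active learners only
assert all(set(s.active_params) <= set(s.active_learners) for s in stages)
```

In this toy configuration, the student's architecture is a supporting parameter during stage 2 and would be updated later in the validation stage, mirroring the remaining-parameter convention above.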
The learning activity at stage $k$ is formulated as an optimization problem where the optimization variables are active learnable parameters and the objective involves 1) a training loss $L_k$ defined on the active training datasets, active learnable parameters, and supporting learnable parameters; 2) (optionally) an interaction function $I_k$ that depicts the interaction between learners in $\mathcal{A}_k$. The notations are summarized in Table~\ref{tb:skl_notations}. \subsubsection{The Mathematical Framework for Skillearn} The formulation of Skillearn is shown in Eq.(\ref{eq:overall}). \begin{equation} \begin{array}{ll} \max_{\{\mathcal{U}_{i}\}_{i=1}^{K}} & L_{val}(\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K},\mathcal{D}^{(\textrm{val})},\mathcal{F})+\gamma_{val} I_{val}(\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K},\mathcal{F}) \quad (\textrm{Validation stage})\\ s.t. & \textrm{Learning stage $K$:}\\ & \mathcal{W}_{K}^*(\{\mathcal{U}_{j}\}_{j=1}^K)= \underset{\mathcal{W}_{K}}{\textrm{min}}\; L_K(\mathcal{W}_{K},\mathcal{U}_{K},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K-1},\mathcal{D}^{(\textrm{tr})}_{K},\mathcal{F}_{K})+\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\;\gamma_K I_K(\mathcal{W}_{K},\mathcal{U}_{K},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K-1},\mathcal{F}_K) \\ &\vdots\\ & \textrm{Learning stage $k$:}\\ &\mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)= \underset{\mathcal{W}_{k}}{\textrm{min}}\; L_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{D}^{(\textrm{tr})}_{k},\mathcal{F}_{k})+ \\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\gamma_k I_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{F}_k) \\ &\vdots\\ & \textrm{Learning stage $1$:}\\ &\mathcal{W}_{1}^*(\mathcal{U}_{1})= \underset{\mathcal{W}_{1}}{\textrm{min}}\; 
L_1(\mathcal{W}_{1},\mathcal{U}_{1},\mathcal{D}^{(\textrm{tr})}_{1}, \mathcal{F}_{1})+ \gamma_1 I_1(\mathcal{W}_{1},\mathcal{U}_{1},\mathcal{F}_1)\\ \end{array} \label{eq:overall} \end{equation} It is a multi-level optimization framework, which involves $K+1$ optimization problems. The constraints consist of $K$ optimization problems, each corresponding to a learning stage. The $K$ learning stages are ordered: from bottom to top, the optimization problems correspond to the learning stages $1,2,\cdots,K$, respectively. In the optimization problem of learning stage $k$, the optimization variables are the active learnable parameters $\mathcal{W}_k$ of all active learners in this stage. The objective function consists of a training loss $L_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{D}^{(\textrm{tr})}_{k},\mathcal{F}_{k})$ defined on the active learnable parameters $\mathcal{W}_k$, supporting learnable parameters $\mathcal{U}_k$, optimal solutions $\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1}$ obtained in previous learning stages, active training datasets $\mathcal{D}_k^{(\textrm{tr})}$, and active auxiliary datasets $\mathcal{F}_k$.
Typically, $L_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{D}^{(\textrm{tr})}_{k},\mathcal{F}_{k})$ can be decomposed into a summation of active learners' individual training losses: \begin{equation} L_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{D}^{(\textrm{tr})}_{k},\mathcal{F}_{k})=\sum_{i=1}^{M_k} L_{ki}(\mathcal{W}_{ki},\mathcal{U}_{ki},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},D^{(\textrm{tr})}_{ki},\mathcal{F}_{k}) \end{equation} where $L_{ki}(\mathcal{W}_{ki},\mathcal{U}_{ki},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},D^{(\textrm{tr})}_{ki},\mathcal{F}_{k})$ is the training loss of the active learner $i$ defined on its active parameters $\mathcal{W}_{ki}$, supporting parameters $\mathcal{U}_{ki}$, and training dataset $D^{(\textrm{tr})}_{ki}$. The $M_k$ learners do not interact in the training loss. The other part of the objective function is the interaction function $I_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{F}_k)$ which depicts how the $M_k$ active learners interact with each other in this learning stage. It is defined on the active learnable parameters $\mathcal{W}_k$, supporting learnable parameters $\mathcal{U}_k$, optimal solutions $\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1}$ in previous stages, and active auxiliary datasets $\mathcal{F}_k$. $\gamma_k$ is a tradeoff parameter between the training loss and interaction function. $\mathcal{U}_{k}$ is needed to define the objective, but it is not updated at this stage. After completing the learning at stage $k$, we obtain the optimal solution $\mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)$. 
Note that $\mathcal{W}^*_{k}$ is a function of $\{\mathcal{U}_{j}\}_{j=1}^k$ since $\mathcal{W}^*_{k}$ is a function of the objective and the objective is a function of $\{\mathcal{U}_{j}\}_{j=1}^k$. $\mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)$ is used to define the objectives in later stages. At the very top of Eq.(\ref{eq:overall}), the optimization problem (outside the constraint block) corresponds to the validation stage, which validates the optimal solutions $\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K}$ obtained in the $K$ learning stages. The optimization variables are the remaining learnable parameters $\{\mathcal{U}_{i}\}_{i=1}^{K}$ that have not been learned in the $K$ learning stages. The objective function consists of a validation loss and an interaction function. $\gamma_{val}$ is a tradeoff parameter. The validation loss $L_{val}(\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K},\mathcal{D}^{(\textrm{val})},\mathcal{F})$ is defined on the validation sets of all learners $\mathcal{D}^{(\textrm{val})}=\{\mathcal{D}^{(\textrm{val})}_i\}_{i=1}^M$, the optimal solutions $\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K}$, and the auxiliary datasets $\mathcal{F}$. The interaction function is defined on $\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K}$ and $\mathcal{F}$. \textbf{Remarks:} \begin{itemize} \item Note that for simplicity, we assume the optimization problem at each stage is a minimization problem. The optimization problem can be more complicated, e.g., a min-max problem. \item At a certain stage, a learnable parameter cannot be simultaneously an active parameter and a supporting parameter. For active parameters in stage $k$, once learned, they cannot be active parameters or supporting parameters in later stages. For supporting parameters in stage $k$, they can be active parameters or supporting parameters in later stages.
\item The supporting parameters are not learned in previous stages. \end{itemize} \subsection{Optimization Algorithm for Skillearn} \label{opt:sk} In this section, we develop an algorithm to solve the Skillearn problem in Eq.(\ref{eq:overall}), inspired by the algorithm in \citep{liu2018darts}. For each learning stage $k$ with an optimization problem: $\mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)=\textrm{min}_{\mathcal{W}_{k}} L_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{D}^{(\textrm{tr})}_{k},\mathcal{F}_{k})+\gamma_k I_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{F}_k)$, we approximate the optimal solution $\mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)$ by a one-step gradient descent update of the variable $\mathcal{W}_{k}$: \begin{equation} \begin{array}{l} \mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)\approx \mathcal{W}'_{k}(\{\mathcal{U}_{j}\}_{j=1}^k)=\\ \mathcal{W}_{k}-\eta \nabla_{\mathcal{W}_{k}}(L_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{D}^{(\textrm{tr})}_{k},\mathcal{F}_{k})+\gamma_k I_k(\mathcal{W}_{k},\mathcal{U}_{k},\{\mathcal{W}_{j}^*(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{k-1},\mathcal{F}_k)). \end{array} \end{equation} In learning stages $k+1,\cdots,K$, $\mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)$ may be used to define objective functions. For a stage $l$ where $k<l\leq K$, if its objective involves $\mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)$, we replace $\mathcal{W}_{k}^*(\{\mathcal{U}_{j}\}_{j=1}^k)$ with $\mathcal{W}'_{k}(\{\mathcal{U}_{j}\}_{j=1}^k)$ and obtain an approximated objective.
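As a sanity check of this one-step scheme, consider a toy bilevel problem with illustrative quadratic losses: the inner stage minimizes $(\mathcal{W}-\mathcal{U})^2$, whose exact solution is $\mathcal{W}^*(\mathcal{U})=\mathcal{U}$, and the outer stage minimizes $(\mathcal{W}^*(\mathcal{U})-3)^2$. Even with the one-step surrogate, the hypergradient through $\mathcal{W}'$ drives the outer variable to its optimum:

```python
# One-step approximation of the inner solution, with the hypergradient
# taken through W' (all constants are illustrative):
#   inner:  W*(U) = argmin_W (W - U)^2, replaced by W' = W - eta * dL/dW
#   outer:  min_U (W'(U) - 3)^2
eta, lr = 0.5, 0.1
W, U = 0.0, 0.0
for _ in range(200):
    W_prime = W - eta * 2.0 * (W - U)  # one-step update W'(U)
    dWp_dU = 2.0 * eta                 # since W' = (1 - 2*eta)*W + 2*eta*U
    U -= lr * dWp_dU * 2.0 * (W_prime - 3.0)
    W = W_prime                        # the weights keep training as well
print(round(U, 4))  # -> 3.0
```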
When approximating $\mathcal{W}_{l}^*(\{\mathcal{U}_{j}\}_{j=1}^l)$, we use the gradient of the approximated objective: \begin{equation} \begin{array}{l} \mathcal{W}_{l}^*(\{\mathcal{U}_{j}\}_{j=1}^l)\approx \mathcal{W}'_{l}(\{\mathcal{U}_{j}\}_{j=1}^l)=\\ \mathcal{W}_{l}-\eta \nabla_{\mathcal{W}_{l}}(L_l(\mathcal{W}_{l},\mathcal{U}_{l},\{\mathcal{W}'_{j}(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{l-1},\mathcal{D}^{(\textrm{tr})}_{l},\mathcal{F}_{l})+\gamma_l I_l(\mathcal{W}_{l},\mathcal{U}_{l},\{\mathcal{W}'_{j}(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{l-1},\mathcal{F}_l)). \end{array} \label{eq:w_ln_appro} \end{equation} The objective of the validation stage can be approximated as: \begin{equation} L_{val}(\{\mathcal{W}'_{j}(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K},\mathcal{D}^{(\textrm{val})},\mathcal{F})+\gamma_{val} I_{val}(\{\mathcal{W}'_{j}(\{\mathcal{U}_{i}\}_{i=1}^{j})\}_{j=1}^{K},\mathcal{F}). \label{eq:val_appro} \end{equation} We update the remaining learnable parameters $\{\mathcal{U}_{i}\}_{i=1}^{K}$ by minimizing this approximated objective. The optimization algorithm for Skillearn is summarized in Algorithm~\ref{algo:algo-sk}. \begin{algorithm}[H] \SetAlgoLined \While{not converged}{ 1. For $k=1\cdots K$, compute the approximation $\mathcal{W}'_{k}(\{\mathcal{U}_{j}\}_{j=1}^k)$ using Eq.(\ref{eq:w_ln_appro})\\ 2. Update $\{\mathcal{U}_{i}\}_{i=1}^{K}$ by minimizing the approximated objective in Eq.(\ref{eq:val_appro}) } \caption{Optimization algorithm for Skillearn} \label{algo:algo-sk} \end{algorithm} \section{Case Study I: Learning by Passing Tests} In this section, we use our general Skillearn framework to formalize a human learning technique -- learning by passing tests -- and apply it to improve machine learning. In human learning, an effective and widely used methodology for improving learning outcomes is to let the learner take increasingly more-difficult tests.
To successfully pass a more challenging test, the learner needs to gain better learning ability. By progressively passing tests that have increasing levels of difficulty, the learner strengthens his/her learning capability gradually. Inspired by this test-driven learning technique of humans, we are interested in investigating whether this methodology is helpful for improving machine learning as well. We use the Skillearn framework to formalize this human learning technique, which results in a novel machine learning framework called learning by passing tests (LPT). In this framework, there are two learners: a ``testee" model and a ``tester" model. The tester creates a sequence of ``tests" with growing levels of difficulty. The testee tries to learn better so that it can pass these increasingly more-challenging tests. Given a large collection of data examples called ``test bank", the tester creates a test $T$ by selecting a subset of examples from the test bank. The testee applies its intermediately-trained model $M$ to make predictions on the examples in $T$. The prediction error rate $R$ reflects how difficult this test is. If the testee can make correct predictions on $T$, it means that $T$ is not difficult enough. The tester will create a more challenging test $T'$ by selecting a new set of examples from the test bank in a way that the new error rate $R'$ achieved by $M$ is larger than $R$. Given this more demanding test $T'$, the testee re-learns its model to pass $T'$, in a way that the newly-learned model $M'$ achieves a new error rate $R''$ on $T'$ where $R''$ is smaller than $R'$. This process iterates until convergence. In our framework, both the testee and tester perform learning. The testee learns how to best conduct a target task $J_1$ and the tester learns how to create difficult and meaningful tests. To encourage a created test $T$ to be meaningful, the tester trains a model using $T$ to perform a target task $J_2$. 
If the model performs well on $J_2$, it indicates that $T$ is meaningful. The testee has two sets of learnable parameters: neural architecture and network weights. The tester has three learnable modules: data encoder, test creator, and target-task executor. The learning is organized into three stages. In the first stage, the testee trains its network weights on the training set of task $J_1$ with the architecture fixed. In the second stage, the tester trains its data encoder and target-task executor on a created test to perform the target task $J_2$, with the test creator fixed. In the third stage, the testee updates its model architecture by minimizing the predictive loss $L$ on the test created by the tester; the tester updates its test creator by maximizing $L$ and minimizing the loss on the validation set of $J_2$. The testee and tester interact on the loss function $L$ in an adversarial manner, where the testee minimizes this loss while the tester maximizes this loss. The three stages are performed jointly end-to-end in a multi-level optimization framework, where a later stage influences an earlier stage and vice versa. We apply our method to neural architecture search~\citep{zoph2016neural,liu2018darts,real2019regularized} in image classification tasks on CIFAR-100, CIFAR-10, and ImageNet~\citep{deng2009imagenet}. Our method achieves significant improvements over state-of-the-art baselines. \subsection{Method} In this section, we describe how to instantiate the general Skillearn framework to the LPT framework, and how to instantiate the general optimization procedure of Skillearn to a specialized optimization algorithm for LPT. \subsubsection{Learning by Passing Tests} In the learning by passing tests (LPT) framework, there are two learners: a testee model and a tester model, where the testee studies how to perform a target task $J_1$ such as classification, regression, etc.
The eventual goal is to make the testee achieve a better learning outcome with the help of the tester. There is a collection of data examples called ``test bank". The tester creates a test by selecting a subset of examples from the test bank. Given a test $T$, the testee applies its intermediately-trained model $M$ to make predictions on $T$ and measures the prediction error rate $R$. From the perspective of the tester, $R$ indicates how difficult the test $T$ is. If $R$ is small, it means that the testee can easily pass this test. Under such circumstances, the tester will create a more difficult test $T'$ such that the new error rate $R'$ achieved by $M$ on $T'$ is larger than $R$. From the testee's perspective, $R'$ indicates how well the testee performs on the test. Given this more difficult test $T'$, the testee refines its model to pass this new test. It aims to learn a new model $M'$ in a way that the error rate $R''$ achieved by $M'$ on $T'$ is smaller than $R'$. This process iterates until an equilibrium is reached. In addition to being difficult, the created test should be meaningful as well. It is possible that the test bank contains poor-quality examples where the class labels may be incorrect or the input data instances are outliers. Using an unmeaningful test containing poor-quality examples to guide the learning of the testee may cause the testee to overfit these poor-quality examples and generalize poorly on unseen data. To address this problem, we encourage the tester to generate meaningful tests by leveraging the generated tests to perform a target task $J_2$ (e.g., classification). Specifically, the tester uses examples in the test to train a model for performing $J_2$. If the performance (e.g., accuracy) $P$ achieved by this model in conducting $J_2$ is high, the test is considered to be meaningful. The tester aims to create a test that can yield a high $P$.
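The testee/tester loop described above can be sketched with a deliberately simple setup: a one-parameter regression testee, and a tester that assembles each test from the bank examples on which the current model errs most. Selecting examples directly by loss is a simplification of the learned test creator, and all sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X_bank = rng.normal(size=200)
y_bank = 2.0 * X_bank      # ground truth for the toy task: y = 2x
w = 0.0                    # the testee's single parameter
for _ in range(20):
    # tester: a harder test = the 50 bank examples with the largest error
    test = np.argsort((w * X_bank - y_bank) ** 2)[-50:]
    # testee: re-learn so as to pass the new test
    for _ in range(10):
        g = 2.0 * np.mean((w * X_bank[test] - y_bank[test]) * X_bank[test])
        w -= 0.1 * g
print(round(w, 3))  # -> 2.0
```

Each round, the tester raises the testee's error by re-selecting hard examples, and the testee drives the error back down, mirroring the adversarial iteration described in the text.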
\begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{figs/archi.pdf} \caption{Illustration of learning by passing tests. The solid arrows denote the process of making predictions and calculating losses. The dotted arrows denote the process of updating learnable parameters by minimizing corresponding losses.} \label{fig:arch} \end{figure} \begin{table}[t] \centering \begin{tabular}{l|l} \hline Notation & Meaning \\ \hline $A$ & Architecture of the testee\\ $W$ & Network weights of the testee\\ $E$ & Data encoder of the tester\\ $C$ & Test creator of the tester \\ $X$ & Target-task executor of the tester\\ $D_{ee}^{(\textrm{tr})}$ & Training data of the testee\\ $D_{er}^{(\textrm{tr})}$ & Training data of the tester\\ $D_{er}^{(\textrm{val})}$ & Validation data of the tester\\ $D_b$ & Test bank \\ \hline \end{tabular} \caption{Notations in Learning by Passing Tests} \label{tb:notations} \end{table} In our framework, both the testee and the tester perform learning. The testee studies how to best fulfill the target task $J_1$. The tester studies how to create tests that are difficult and meaningful. In the testee's model, there are two sets of learnable parameters: model architecture and network weights. The architecture and weights are both used to make predictions in $J_1$. The tester's model performs two tasks simultaneously: creating tests and performing the target task $J_2$. The model has three modules with learnable parameters: data encoder, test creator, and target-task executor, where the test creator performs the task of generating tests and the target-task executor conducts $J_2$. The test creator and target-task executor share the same data encoder. The data encoder takes a data example $d$ as input and generates a latent representation for this example. Then the representation is fed into the test creator which determines whether $d$ should be selected into the test.
The representation is also fed into the target-task executor which makes predictions on $d$ when performing the target task $J_2$. \begin{table}[t] \centering \begin{tabular}{l|p{8cm}} \hline Active learners & Testee\\ \hline Active learnable parameters & Network weights of the testee\\ \hline Supporting learnable parameters & Architecture of the testee\\ \hline Active training datasets & Training dataset of target-task $J_1$ performed by the testee \\ \hline Active auxiliary datasets & -- \\ \hline Training loss & Training loss of target-task $J_1$: $L(A, W, D_{ee}^{(\mathrm{tr})})$ \\ \hline Interaction function & -- \\ \hline Optimization problem & $W^{*}(A)=\min _{W} L(A, W, D_{ee}^{(\mathrm{tr})})$ \\ \hline \end{tabular} \caption{Learning Stage I in LPT} \label{tb:lpt-s1} \end{table} \begin{table}[t] \centering \begin{tabular}{l|p{8cm}} \hline Active learners & Tester\\ \hline Active learnable parameters & 1) Data encoder of the tester; 2) Target-task executor of the tester. \\ \hline Supporting learnable parameters & Test creator of the tester\\ \hline Active training datasets & Training data of target-task $J_2$ performed by the tester\\ \hline Active auxiliary datasets & Test bank\\ \hline Training loss & $L(E, X, D_{er}^{(\mathrm{tr})}) +\gamma L(E, X, \sigma(C, E, D_{b}))$\\ \hline Interaction function & -- \\ \hline Optimization problem & $ E^{*}(C), X^{*}(C)=\min _{E, X} \;\; L(E, X, D_{er}^{(\mathrm{tr})}) +\gamma L(E, X, \sigma(C, E, D_{b})).$\\ \hline \end{tabular} \caption{Learning Stage II in LPT} \label{tb:lpt-s2} \end{table} In our framework, the learning of the testee and the tester is organized into three stages. In the first stage, the testee learns its network weights $W$ by minimizing the training loss $L(A, W, D_{ee}^{(\mathrm{tr})})$ defined on the training data $D_{ee}^{(\mathrm{tr})}$ in the task $J_1$. The architecture $A$ is used to define the training loss, but it is not learned in this stage.
If $A$ were learned by minimizing this training loss, a trivial solution would be yielded where $A$ becomes so large and complex that it can perfectly overfit the training data but generalize poorly on unseen data. Let $W^*(A)$ denote the optimally learned $W$ in this stage. Note that $W^*$ is a function of $A$ because $W^*$ is a function of the training loss and the training loss is a function of $A$. Table~\ref{tb:lpt-s1} shows the key elements of this learning stage under the Skillearn terminology. The testee is the active learner, which performs learning in this stage. Network weights of the testee are the active learnable parameters, which are updated at this stage. The architecture variables of the testee are the supporting learnable parameters, which are used to define the loss function, but are not updated at this stage. Active training datasets include the training data of the task $J_1$ performed by the testee. There are no active auxiliary datasets. The training loss is $L(A, W, D_{ee}^{(\mathrm{tr})})$. There is no interaction function at this stage. The optimization problem is: \begin{equation} W^{*}(A)=\min _{W} L\left(A, W, D_{ee}^{(\mathrm{tr})}\right). \end{equation} In the second stage, the tester learns its data encoder $E$ and target-task executor $X$ by minimizing the training loss $L(E, X, D_{er}^{(\mathrm{tr})}) +\gamma L(E, X, \sigma(C, E, D_{b}))$ in the task $J_2$. The training loss consists of two parts. The first part $L(E, X, D_{er}^{(\mathrm{tr})})$ is defined on the training dataset $D_{er}^{(\textrm{tr})}$ in $J_2$. The second part $L(E, X, \sigma(C, E, D_{b}))$ is defined on the test $\sigma(C, E, D_{b})$ created by the test creator. Each example $d$ in the test bank $D_{b}$ is first fed into the encoder $E$ and then into the creator $C$, which outputs a binary value indicating whether $d$ should be selected into the test. $\sigma(C, E, D_{b})$ is the collection of examples whose binary value is equal to 1.
$\gamma$ is a tradeoff parameter between these two parts of losses. The creator $C$ is used to define the second-part loss, but it is not learned in this stage. Otherwise, a trivial solution will be yielded where $C$ always sets the binary value to 0 for each test-bank example so that the second-part loss becomes 0. Let $E^*(C)$ and $X^*(C)$ denote the optimally trained $E$ and $X$ in this stage. Note that they are both functions of $C$ since they are functions of the training loss and the training loss is a function of $C$. Table~\ref{tb:lpt-s2} shows the key elements of this learning stage under the Skillearn terminology. The tester is the active learner. The active learnable parameters include the data encoder and target-task executor of the tester. The supporting learnable parameters include the test creator. The active training datasets include the training data of target-task $J_2$ performed by the tester. The active auxiliary datasets include the test bank. The training loss is $L(E, X, D_{er}^{(\mathrm{tr})}) +\gamma L(E, X, \sigma(C, E, D_{b}))$. There is no interaction function at this stage. The optimization problem is: \begin{equation} E^{*}(C), X^{*}(C)=\min _{E, X} \;\; L\left(E, X, D_{er}^{(\mathrm{tr})}\right) +\gamma L\left(E, X, \sigma\left(C, E, D_{b}\right)\right). 
\end{equation} \begin{table}[t] \centering \begin{tabular}{p{4cm}|p{10.5cm}} \hline Active learners & Testee, tester\\ \hline Remaining learnable parameters & 1) Architecture of the testee; 2) Test creator of the tester\\ \hline Validation datasets & Validation dataset of the tester\\ \hline Active auxiliary datasets & Test bank\\ \hline Validation loss & $L(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})})$\\ \hline Interaction function & Testee's prediction loss defined on the test created by the tester: $ L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))/|\sigma(C, E^{*}(C), D_{b})|$\\ \hline Optimization problem & $\max_{C} \min _{A}\;\; L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))/|\sigma(C, E^{*}(C), D_{b})|-\lambda L(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})})$\\ \hline \end{tabular} \caption{Validation Stage in LPT} \label{tb:lpt-vs} \end{table} In the third stage, the testee learns its architecture by trying to pass the test $\sigma(C, E^*(C), D_{b})$ created by the tester. Specifically, the testee aims to minimize the predictive loss of its model on the test: \begin{equation} L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))=\sum_{d\in \sigma(C, E^{*}(C), D_{b}) } \ell(A, W^{*}(A), d) \end{equation} where $d$ is an example in the test and $\ell(A, W^{*}(A), d)$ is the loss defined on this example. A smaller $L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))$ indicates that the testee performs well on this test. Meanwhile, the tester learns its test creator $C$ such that $C$ creates tests that are more difficult and meaningful. Difficulty is measured by the testee's predictive loss $L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))$ on the test.
Given a model $(A, W^{*}(A))$ of the testee and two tests of the same size (same number of examples): $\sigma(C_1, E^{*}(C_1), D_{b})$ created by $C_1$ and $\sigma(C_2, E^{*}(C_2), D_{b})$ created by $C_2$, if $L(A, W^{*}(A), \sigma(C_1, E^{*}(C_1), D_{b}))> L(A, W^{*}(A), \sigma(C_2, E^{*}(C_2), D_{b}))$, it means that $\sigma(C_1, E^{*}(C_1), D_{b})$ is more challenging to pass than $\sigma(C_2, E^{*}(C_2), D_{b})$. Therefore, the tester can learn to create a more challenging test by maximizing $L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))$. A trivial way of increasing $L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))$ is to enlarge the size of the test. But a larger size does not imply more difficulty. To discourage this degenerate solution from happening, we normalize the loss using the size of the test: \begin{equation} \frac{1}{\left|\sigma\left(C, E^{*}(C), D_{b}\right)\right|} L\left(A, W^{*}\left(A\right), \sigma\left(C, E^{*}(C), D_{b}\right)\right) \label{eq:interact} \end{equation} where $|\sigma(C, E^{*}(C), D_{b})|$ is the cardinality of the set $\sigma(C, E^{*}(C), D_{b})$. Under the Skillearn terminology, the loss in Eq.(\ref{eq:interact}) is the interaction function where the testee and tester interact. The testee aims to minimize this loss to ``pass" the test and the tester aims to maximize this loss to ``fail" the testee. To measure the meaningfulness of a test, we check how well the optimally-trained data encoder $E^*(C)$ and target-task executor $X^*(C)$ of the tester perform on the validation data $D_{er}^{\textrm{(val)}}$ in the target task $J_2$, and the performance is measured by the validation loss: $L(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})})$. $E^*(C)$ and $X^*(C)$ are trained using the test generated by $C$ in the second stage. If the validation loss is small, it means that the created test is helpful in training the task executor and is therefore considered meaningful.
To create a meaningful test, the tester learns $C$ by minimizing $L(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})})$. In sum, $C$ is learned by maximizing $L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))/|\sigma(C, E^{*}(C), D_{b})|-\lambda L(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})})$, where $\lambda$ is a tradeoff parameter between these two objectives. Under the Skillearn terminology, this stage is a validation stage. Table~\ref{tb:lpt-vs} summarizes the key elements of this stage. The active learners include both the testee and the tester. The remaining learnable parameters include the architecture of the testee and the test creator of the tester. The validation datasets include the validation data in the target-task $J_2$ performed by the tester. The active auxiliary datasets include the test bank. The validation loss is $L(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})})$. The interaction function is $\frac{1}{|\sigma(C, E^{*}(C), D_{b})|} L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))$. The optimization problem is: \begin{equation} \max _{C} \min _{A}\;\; \frac{1}{\left|\sigma\left(C, E^{*}(C), D_{b}\right)\right|} L\left(A, W^{*}\left(A\right), \sigma\left(C, E^{*}(C), D_{b}\right)\right)-\lambda L\left(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})}\right). \end{equation} \begin{table}[t] \centering \begin{tabular}{l|p{10.5cm}} \hline Skillearn & Learning by Passing Tests\\ \hline Learners & 1) Testee; 2) Tester \\ \hline Learnable parameters & 1) Architecture of testee; 2) Network weights of testee; 3) Data encoder of tester; 4) Target-task executor of tester; 5) Test creator of tester. 
\\ \hline Interaction function & Testee's prediction loss defined on the test created by the tester: $ L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))/|\sigma(C, E^{*}(C), D_{b})|$ \\ \hline Learning stages & Learning stage I: the testee learns its network weights on its training data: $W^{*}(A)=\min _{W} L(A, W, D_{ee}^{(\mathrm{tr})})$\newline Learning stage II: the tester uses its test creator to select a subset of examples from the test bank, then it learns its data encoder and target-task executor on its training data and on the selected examples from the test bank: $E^{*}(C), X^{*}(C)=\min _{E, X} \;\; L(E, X, D_{er}^{(\mathrm{tr})}) +\gamma L(E, X, \sigma(C, E, D_{b})).$ \\ \hline Validation stage & 1) The testee updates its architecture to minimize the prediction loss on the test created by the tester; 2) The tester updates its test creator to maximize the testee's prediction loss and minimize its own validation loss. $\max _{C} \min _{A}\;\; \frac{1}{|\sigma(C, E^{*}(C), D_{b})|} L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))-\lambda L(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})}).$\\ \hline Datasets & 1) Training data of the testee; 2) Training data of the tester; 3) Validation data of the tester; 4) Test bank.\\ \hline \end{tabular} \caption{Instantiation of Skillearn to LPT} \label{tb:sltolpt} \end{table} The three stages are mutually dependent: $W^*(A)$ learned in the first stage and $E^*(C)$ and $X^*(C)$ learned in the second stage are used to define the objective function in the third stage; the updated $C$ and $A$ in the third stage in turn change the objective functions in the first and second stages, which subsequently cause $W^*(A)$, $E^*(C)$, and $X^*(C)$ to change.
Putting these pieces together, we instantiate the Skillearn framework into the following LPT formulation: \begin{equation} \begin{array}{l} \max _{C} \min _{A}\;\; \frac{1}{\left|\sigma\left(C, E^{*}(C), D_{b}\right)\right|} L\left(A, W^{*}\left(A\right), \sigma\left(C, E^{*}(C), D_{b}\right)\right)-\lambda L\left(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})}\right) \textrm{(Stage III)} \\ s.t. \;\; E^{*}(C), X^{*}(C)=\min _{E, X} \;\; L\left(E, X, D_{er}^{(\mathrm{tr})}\right) +\gamma L\left(E, X, \sigma\left(C, E, D_{b}\right)\right) \textrm{(Stage II)} \\ \quad\;\;\; W^{*}\left(A\right)=\min _{W}\;\; L\left(A, W, D_{ee}^{(\mathrm{tr})}\right) \textrm{(Stage I)} \end{array} \label{eq:learning_objective} \end{equation} This formulation nests three optimization problems. The constraints of the outer optimization problem are two inner optimization problems, corresponding to the first and second learning stages respectively. The objective function of the outer optimization problem corresponds to the validation stage. Table~\ref{tb:sltolpt} summarizes the instantiation of Skillearn to LPT. So far, the test $\sigma(C, E, D_{b})$ is represented as a discrete subset, which is difficult to optimize. To address this problem, we perform a continuous relaxation of $\sigma(C, E, D_{b})$: \begin{equation} \sigma(C, E, D_{b}) =\{(d,f(d,C,E))|d\in D_{b}\} \end{equation} where for each example $d$ in the test bank, the original binary value indicating whether $d$ should be selected is now relaxed to a continuous probability $f(d,C,E)$ representing how likely $d$ is to be selected. Under this relaxation, $L(E, X, \sigma(C, E, D_{b}))$ can be computed as follows: \begin{equation} L(E, X, \sigma(C, E, D_{b}))= \sum_{d\in D_{b}} f(d,C,E) \ell (E,X,d) \end{equation} where we calculate the loss $\ell (E,X,d)$ on each test-bank example and weigh this loss using $f(d,C,E)$.
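The relaxed selection and the weighted loss can be sketched as follows; the linear encoder/creator shapes and the sigmoid form of $f(d,C,E)$ are implementation assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 8))              # data encoder (linear, assumed)
C = rng.normal(size=(1, 4))              # test creator head (assumed)
D_b = rng.normal(size=(32, 8))           # test bank of 32 examples
per_example_loss = rng.uniform(size=32)  # stand-in for l(E, X, d)

# f(d, C, E): probability that each bank example enters the test
f = 1.0 / (1.0 + np.exp(-(D_b @ E.T @ C.T).ravel()))
test_loss = np.sum(f * per_example_loss)  # L(E, X, sigma(C, E, D_b))
test_size = np.sum(f)                     # |sigma(C, E, D_b)|
normalized_loss = test_loss / test_size   # size-normalized test loss
```

The normalization by $\sum_{d}f(d,C,E)$ is what prevents the creator from inflating difficulty simply by selecting more examples.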
If $f(d,C,E)$ is small, it means that $d$ is less likely to be selected into the test and its corresponding loss should be down-weighted. Similarly, $L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))$ is calculated as $\sum_{d\in D_{b}} f(d,C,E^{*}(C)) \ell (A, W^{*}(A),d)$, and $|\sigma(C, E^{*}(C), D_{b})|$ can be calculated as \begin{equation} |\sigma(C, E^{*}(C), D_{b})|=\sum_{d\in D_{b}} f(d,C,E^{*}(C)) \end{equation} Similar to \citep{liu2018darts}, we represent the architecture $A$ of the testee in a differentiable way. The search space of $A$ is composed of a large number of building blocks. The output of each block is associated with a variable $a$ indicating how important this block is. After learning, blocks whose $a$ is among the largest are retained to form the final architecture. To this end, architecture search amounts to optimizing the set of architecture variables $A=\{a\}$. \subsubsection{Optimization Algorithm} In this section, we instantiate the general optimization framework in Section~\ref{opt:sk} to derive an optimization algorithm for LPT. We approximate $E^{*}(C)$ and $X^{*}(C)$ using a one-step gradient descent update of $E$ and $X$ with respect to $L(E, X, D_{er}^{(\mathrm{tr})}) +\gamma L(E, X, \sigma(C, E, D_{b}))$ and approximate $W^{*}(A)$ using a one-step gradient descent update of $W$ with respect to $L(A, W, D_{ee}^{(\mathrm{tr})})$. We then plug these approximations into \begin{equation} L(A, W^{*}(A), \sigma(C, E^{*}(C), D_{b}))/|\sigma(C, E^{*}(C), D_{b})|-\lambda L(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})}), \label{eq:3rd-obj} \end{equation} and perform a gradient-descent update of $C$ and $A$ with respect to this approximated objective. In the sequel, we use $\nabla^2_{Y,X}f(X,Y)$ to denote $\frac{\partial^2 f(X,Y)}{\partial X\partial Y}$.
Approximating $W^{*}(A)$ using $W'=W - \xi_{ee} \nabla_{W}L(A, W, D_{ee}^{(\mathrm{tr})})$ where $\xi_{ee}$ is a learning rate and simplifying the notation of $ \sigma(C, E^*(C), D_{b})$ as $\sigma$, we can calculate the approximated gradient of $L\left(A, W^{*}\left(A\right),\sigma\right)$ w.r.t $A$ as: \begin{equation} \begin{array}{l} \nabla_{A} L\left(A, W^{*}\left(A\right),\sigma\right)\approx \\ \nabla_{A} L\left(A,W - \xi_{ee} \nabla_{W}L\left(A, W, D_{ee}^{(\mathrm{tr})}\right), \sigma\right)=\\ \nabla_{A} L\left(A, W^{\prime}, \sigma\right)-\xi_{ee} \nabla_{A, W}^{2} L\left(A, W, D_{ee}^{(\mathrm{tr})}\right) \nabla_{W^{\prime}} L\left(A, W^{\prime},\sigma\right), \end{array} \label{eq:descent_arch} \end{equation} The second term in the third line involves an expensive matrix-vector product, whose computational complexity can be reduced by a finite difference approximation: \begin{equation} \begin{array}{ll} \nabla_{A, W}^{2} L\left(A, W, D_{ee}^{(\mathrm{tr})}\right)\nabla_{W^{\prime}} L\left(A, W^{\prime},\sigma\right)\approx \frac{1}{2\alpha_{ee}}\left(\nabla_{A} L\left(A, W^{+}, D_{ee}^{(\mathrm{tr})}\right)-\nabla_{A} L\left(A, W^{-}, D_{ee}^{(\mathrm{tr})}\right)\right), \end{array} \label{eq:finite-aw} \end{equation} where $W^{\pm}=W \pm \alpha_{ee} \nabla_{W^{\prime}} L\left(A, W^{\prime},\sigma\right)$ and $\alpha_{ee}$ is a small scalar that equals $0.01 /\left\|\nabla_{W^{\prime}} L\left(A, W^{\prime},\sigma\right))\right\|_{2}$. We approximate $E^*(C)$ and $X^*(C)$ using the following one-step gradient descent updates of $E$ and $X$ respectively: \begin{equation} \begin{array}{l} E^{\prime}=E-\xi_{E} \nabla_{E}[ L(E, X, D_{er}^{(\mathrm{tr})})+\gamma L(E, X, \sigma(C,E,D_b))]\\ X^{\prime}=X-\xi_{X} \nabla_{X}[ L(E, X, D_{er}^{(\mathrm{tr})})+\gamma L(E, X, \sigma(C,E,D_b))] \label{eq:update_ec} \end{array} \end{equation} where $\xi_{E}$ and $\xi_{X}$ are learning rates.
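The finite difference approximation in Eq.(\ref{eq:finite-aw}) can be checked numerically. The toy loss below, $L(A,W)=(A^{\top}W)^{2}/2$, is an illustrative choice whose mixed Hessian-vector product has a closed form, $\nabla^{2}_{A,W}L\,v=(A^{\top}W)v+(A^{\top}v)W$:

```python
import numpy as np

rng = np.random.default_rng(1)
A, W = rng.normal(size=5), rng.normal(size=5)
v = rng.normal(size=5)  # plays the role of grad_{W'} L(A, W', sigma)

def grad_A(A, W):
    # gradient w.r.t. A of the toy loss L(A, W) = (A @ W)**2 / 2
    return (A @ W) * W

alpha = 0.01 / np.linalg.norm(v)  # step size chosen as in the text
fd = (grad_A(A, W + alpha * v) - grad_A(A, W - alpha * v)) / (2 * alpha)
exact = (A @ W) * v + (A @ v) * W  # analytic mixed Hessian-vector product
print(np.allclose(fd, exact))  # -> True
```

For this quadratic-in-$W$ gradient the central difference is exact; for general losses it incurs an $O(\alpha^2)$ error, which is why $\alpha$ is scaled inversely to the vector's norm.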
Plugging in these approximations into the objective function in Eq.(\ref{eq:3rd-obj}), we can learn $C$ by maximizing the following objective using gradient methods: \begin{equation} L(A, W^{\prime}, \sigma(C, E^{\prime}, D_{b}))/|\sigma(C, E', D_{b})|-\lambda L(E^{\prime}, X^{\prime}, D_{er}^{(\mathrm{val})}) \end{equation} The derivative of the second term in this objective with respect to $C$ can be calculated as: \begin{equation} \begin{array}{l} \nabla _{C}L(E^{\prime}, X^{\prime}, D_{er}^{(\mathrm{val})})= \frac{\partial E'}{\partial C} \nabla _{E'} L(E^{\prime}, X^{\prime}, D_{er}^{(\mathrm{val})}) + \frac{\partial X'}{\partial C}\nabla _{X'} L(E^{\prime}, X^{\prime}, D_{er}^{(\mathrm{val})})\\ \end{array} \label{eq:grad_c} \end{equation} where \begin{equation} \begin{array}{l} \frac{\partial E'}{\partial C}=-\xi_{E}\gamma \nabla^{2}_{C,E} L(E, X, \sigma(C,E,D_b))\\ \frac{\partial X'}{\partial C}=-\xi_{X}\gamma \nabla^{2}_{C,X} L(E, X, \sigma(C,E,D_b))\\ \end{array} \label{eq:sec-gra-ec} \end{equation} Similar to Eq.(\ref{eq:finite-aw}), using finite difference approximation to calculate $\nabla^{2}_{C,E} L(E, X, \sigma(C,E,D_b))$\\$\nabla _{E'} L(E^{\prime}, X^{\prime}, D_{er}^{(\mathrm{val})})$ and $\nabla^{2}_{C,X} L(E, X, \sigma(C,E,D_b))\nabla _{X'} L(E^{\prime}, X^{\prime}, D_{er}^{(\mathrm{val})})$, we have: \begin{equation} \begin{array}{l} \nabla _{C}L(E^{\prime}, X^{\prime}, D_{er}^{(\mathrm{val})})=\\ -\gamma\xi_{E}\frac{\nabla_{C}L(E^+,X,\sigma(C,E^+,D_b))-\nabla_{C}L(E^-,X,\sigma(C,E^-,D_b))}{2\alpha_{E}} -\gamma\xi_{X}\frac{\nabla_{C}L(E,X^+,\sigma(C,E,D_b))-\nabla_{C}L(E,X^-,\sigma(C,E,D_b))}{2\alpha_{X}} \end{array} \end{equation} where $E^{\pm}=E\pm\alpha_{E} \nabla_{E^\prime}L(E^\prime,X^\prime,D_{er}^{\mathrm{(val)}})$ and $X^{\pm}=X\pm\alpha_{X} \nabla_{X^\prime}L(E^\prime,X^\prime,D_{er}^{\mathrm{(val)}})$. 
For the first term $L(A, W^{\prime}, \sigma(C, E^{\prime}, D_{b}))/|\sigma(C, E', D_{b})|$ in the objective, we can use the chain rule to calculate its derivative w.r.t $C$, which involves calculating the derivatives of $L(A, W^{\prime}, \sigma(C, E^{\prime}, D_{b}))$ and $|\sigma(C, E', D_{b})|$ w.r.t $C$. The derivative of $L(A, W^{\prime}, \sigma(C, E^{\prime}, D_{b}))$ w.r.t $C$ can be calculated as: \begin{equation} \begin{array}{l} \nabla _{C}L(A, W^{\prime}, \sigma(C, E^{\prime}, D_{b}))= \frac{\partial E'}{\partial C } \nabla_{E^{\prime}}L(A, W^{\prime}, \sigma(C, E^{\prime}, D_{b})), \end{array} \label{eq:descent_teach_v2} \end{equation} where $ \frac{\partial E'}{\partial C }$ is given in Eq.(\ref{eq:sec-gra-ec}) and $ \nabla^{2}_{C,E} L(E, X, \sigma(C,E,D_b))$ $\times \nabla_{E^{\prime}}L(A, W^{\prime}, \sigma(C, E^{\prime}, D_{b}))$ can be approximated with $\frac{1}{2\alpha_{E}}(\nabla_{C}L(E^+,X,\sigma(C,E^+,D_b))-\nabla_{C}L(E^-,X,\sigma(C,E^-,D_b)))$, where $E^{\pm}$ is $E\pm\alpha_{E}\nabla_{E^{\prime}}L(A, W^{\prime}, \sigma(C, E^{\prime}, D_{b}))$. The derivative of $|\sigma(C, E', D_{b})|=\sum_{d\in D_{b}} f(d,C,E')$ w.r.t $C$ can be calculated as \begin{equation} \sum_{d\in D_{b}} \left(\nabla_C f(d,C,E')+\frac{\partial E'}{\partial C} \nabla_{E'} f(d,C,E')\right) \label{eq:grad_cardi} \end{equation} where $\frac{\partial E'}{\partial C}$ is given in Eq.(\ref{eq:sec-gra-ec}). The algorithm for solving LPT is summarized in Algorithm~\ref{algo:algo}. \begin{algorithm}[H] \SetAlgoLined \While{not converged}{ 1. Update the architecture of the testee by descending the gradient calculated in Eq.(\ref{eq:descent_arch})\\ 2. Update the test creator of the tester by ascending the gradient calculated in Eq.(\ref{eq:grad_c}-\ref{eq:grad_cardi})\\ 3. Update the data encoder and target-task executor of the tester using Eq.(\ref{eq:update_ec})\\ 4.
Update the weights of the testee by descending $\nabla_{W}L(A, W, D_{ee}^{(\mathrm{tr})})$ } \caption{Optimization algorithm for learning by passing tests} \label{algo:algo} \end{algorithm} \subsection{Experiments} \label{sec:exp_lpt} We apply LPT for neural architecture search in image classification tasks. Following~\citep{liu2018darts}, we first perform architecture search which finds an optimal cell, then perform architecture evaluation which composes multiple copies of the searched cell into a large network, trains it from scratch, and evaluates the trained model on the test set. We let the target task of the learner and that of the tester be the same. \subsubsection{Datasets} \label{sec:datasets} We used three datasets in the experiments: CIFAR-10, CIFAR-100, and ImageNet~\citep{deng2009imagenet}. The CIFAR-10 dataset contains 50K training images and 10K testing images, from 10 classes (the number of images in each class is equal). Following~\citep{liu2018darts}, we split the original 50K training set into a new 25K training set and a 25K validation set. In the sequel, when we mention ``training set", it always refers to the new 25K training set. During architecture search, the training set is used as the training data $D_{ee}^{(\textrm{tr})}$ of the learner and the training data $D_{er}^{(\textrm{tr})}$ of the tester. The validation set is used as the test bank $D_b$ and the validation data $D_{er}^{(\textrm{val})}$ of the tester. During architecture evaluation, the combination of the training data and validation data is used to train the large network stacking multiple copies of the searched cell. The CIFAR-100 dataset contains 50K training images and 10K testing images, from 100 classes (the number of images in each class is equal). Similar to CIFAR-10, the 50K training images are split into a 25K training set and 25K validation set. The usage of the new training set and validation set is the same as that for CIFAR-10.
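The 25K/25K split used for CIFAR-10 and CIFAR-100 can be sketched as follows (a minimal illustration; the seed and function name are our assumptions, and the actual split code in~\citep{liu2018darts} may differ):

```python
import random

def split_cifar_indices(n_total=50_000, n_new_train=25_000, seed=0):
    """Split the original 50K CIFAR training indices into a new 25K training
    set (used as D_ee^(tr) and D_er^(tr)) and a 25K validation set (used as
    the test bank D_b and D_er^(val))."""
    idx = list(range(n_total))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    return idx[:n_new_train], idx[n_new_train:]

train_idx, val_idx = split_cifar_indices()
print(len(train_idx), len(val_idx))  # 25000 25000
```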
The ImageNet dataset contains a training set of 1.2M images and a validation set of 50K images, from 1000 object classes. The validation set is used as a test set for architecture evaluation. Following~\citep{liu2018darts}, we evaluate the architectures searched using CIFAR-10 and CIFAR-100 on ImageNet: given a cell searched using CIFAR-10 and CIFAR-100, multiple copies of it compose a large network, which is then trained on the 1.2M training data of ImageNet and evaluated on the 50K test data. \subsubsection{Experimental Settings} \label{sec:settings} Our framework is a general one that can be used together with any differentiable search method. Specifically, we apply our framework to the following NAS methods: 1) DARTS~\citep{liu2018darts}, 2) P-DARTS~\citep{chen2019progressive}, 3) DARTS\textsuperscript{+}~\citep{liang2019darts+}, 4) DARTS$^{-}$~\citep{abs-2009-01027}. The search space in these methods is similar. The candidate operations include: $3\times 3$ and $5\times 5$ separable convolutions, $3\times 3$ and $5\times 5$ dilated separable convolutions, $3\times 3$ max pooling, $3\times 3$ average pooling, identity, and zero. In LPT, the network of the learner is a stack of multiple cells, each consisting of 7 nodes. For the data encoder of the tester, we tried ResNet-18 and ResNet-50~\citep{resnet}. The test creator and target-task executor are each set to one feed-forward layer. $\lambda$ and $\gamma$ are both set to 1. For CIFAR-10 and CIFAR-100, during architecture search, the learner's network is a stack of 8 cells, with the initial channel number set to 16. The search is performed for 50 epochs, with a batch size of 64. The hyperparameters for the learner's architecture and weights are set in the same way as DARTS, P-DARTS, PC-DARTS, DARTS\textsuperscript{+}, and DARTS$^{-}$. The data encoder and target-task executor of the tester are optimized using SGD with a momentum of 0.9 and a weight decay of 3e-4.
The initial learning rate is set to 0.025 with a cosine decay scheduler. The test creator is optimized with the Adam~\citep{adam} optimizer with a learning rate of 3e-4 and a weight decay of 1e-3. During architecture evaluation, 20 copies of the searched cell are stacked to form the learner's network, with the initial channel number set to 36. The network is trained for 600 epochs with a batch size of 96 (for both CIFAR-10 and CIFAR-100). The experiments are performed on a single Tesla v100. For ImageNet, following~\citep{liu2018darts}, we take the architecture searched on CIFAR-10 and evaluate it on ImageNet. We stack 14 cells (searched on CIFAR-10) to form a large network and set the initial channel number as 48. The network is trained for 250 epochs with a batch size of 1024 on 8 Tesla v100s. Each experiment on LPT is repeated ten times with random seeds from 1 to 10. We report the mean and standard deviation of results obtained from the 10 runs. \subsubsection{Results} \begin{table}[t] \centering \begin{tabular}{l|ccc} \toprule Method & Error(\%)& Param(M)& Cost\\ \midrule *ResNet \citep{he2016deep}&22.10&1.7&-\\ *DenseNet \citep{HuangLMW17}&17.18&25.6 &-\\ \hline *PNAS \citep{LiuZNSHLFYHM18}&19.53&3.2&150\\ *ENAS \citep{pham2018efficient}&19.43&4.6&0.5\\ *AmoebaNet \citep{real2019regularized}&18.93&3.1&3150\\ \hline *GDAS \citep{DongY19}&18.38&3.4&0.2\\ *R-DARTS \citep{ZelaESMBH20}&18.01$\pm$0.26&-&1.6 \\ *DropNAS \citep{HongL0TWL020} & 16.39&4.4&0.7 \\ \hline \hline ${}^{\dag}$DARTS-1st \citep{liu2018darts} &20.52$\pm$0.31 &1.8 &0.4\\ $\;\;$LPT-R18-DARTS-1st (ours) &\textbf{19.11}$\pm$0.11&2.1&0.6 \\ \hline *DARTS-2nd \citep{liu2018darts} & 20.58$\pm$0.44&1.8&1.5 \\ $\;\;$LPT-R18-DARTS-2nd (ours) &19.47$\pm$0.20 & 2.1&1.8 \\ $\;\;$LPT-R50-DARTS-2nd (ours) &\textbf{18.40}$\pm$0.16 &2.5&2.0 \\ \hline *DARTS$^{-}$ \citep{abs-2009-01027}&17.51$\pm$0.25&3.3&0.4\\ ${}^{\dag}$DARTS$^{-}$ \citep{abs-2009-01027}& 18.97$\pm$0.16& 3.1&0.4\\
$\;\;$LPT-R18-DARTS$^{-}$ (ours) &\textbf{18.28}$\pm$0.14&3.4& 0.6\\ \hline ${}^{\Delta}$DARTS$^{+}$ \citep{abs-1909-06035}&17.11$\pm$0.43&3.8&0.2\\ $\;\;$LPT-R18-DARTS$^{+}$ (ours) &\textbf{16.58}$\pm$0.19& 3.7&0.3 \\ \hline $\dag$PC-DARTS \citep{abs-1907-05737} &17.96$\pm$0.15&3.9&0.1 \\ $\;\;$LPT-R18-PC-DARTS (ours)&17.04$\pm$0.05&3.6&0.1 \\ $\;\;$LPT-R50-PC-DARTS (ours)& \textbf{16.97}$\pm$0.21&4.0 &0.1 \\ \hline *P-DARTS \citep{chen2019progressive}&17.49&3.6&0.3\\ $\;\;$LPT-R18-P-DARTS (ours) &\textbf{16.28$\pm$0.10}&3.8& 0.5\\ $\;\;$LPT-R50-P-DARTS (ours) & 16.38$\pm$0.07& 3.6& 0.5\\ \bottomrule \end{tabular} \caption{Results on CIFAR-100, including classification error (\%) on the test set, number of parameters (millions) in the searched architecture, and search cost (GPU days). LPT-R18-DARTS-1st denotes that our method LPT is applied to the search space of DARTS. Similar meanings hold for other notations in such a format. R18 and R50 denote that the data encoder of the tester in LPT is set to ResNet-18 and ResNet-50 respectively. DARTS-1st and DARTS-2nd denote that the first-order and second-order approximations are used in DARTS, respectively. * means the results are taken from DARTS$^{-}$ \citep{abs-2009-01027}. $\dag$ means we re-ran this method 10 times. $\Delta$ means the algorithm ran for 600 epochs instead of 2000 epochs in the architecture evaluation stage, to ensure a fair comparison with other methods (where the epoch number is 600). The search cost is measured by GPU days on a Tesla v100.
} \label{tab:cifar100} \end{table} \begin{table}[t] \centering \begin{tabular}{l|ccc} \toprule Method& Error(\%)& Param(M) & Cost\\ \midrule *DenseNet \citep{HuangLMW17}&3.46&25.6 &-\\ \hline *HierEvol \citep{liu2017hierarchical}&3.75$\pm$0.12& 15.7 &300\\ *NAONet-WS \citep{LuoTQCL18} & 3.53 & 3.1&0.4 \\ *PNAS \citep{LiuZNSHLFYHM18} &3.41$\pm$0.09 &3.2& 225\\ *ENAS \citep{pham2018efficient} &2.89 & 4.6 &0.5 \\ *NASNet-A \citep{zoph2018learning} & 2.65 & 3.3& 1800\\ *AmoebaNet-B \citep{real2019regularized} & 2.55$\pm$0.05 & 2.8&3150 \\ \hline *R-DARTS \citep{ZelaESMBH20} &2.95$\pm$0.21 &- & 1.6 \\ *GDAS \citep{DongY19}&2.93& 3.4& 0.2 \\ *SNAS \citep{xie2018snas} &2.85 & 2.8& 1.5\\ *BayesNAS \citep{ZhouYWP19} &2.81$\pm$0.04 &3.4&0.2 \\ *MergeNAS \citep{WangXYYHS20} &2.73$\pm$0.02 &2.9 & 0.2 \\ *NoisyDARTS \citep{abs-2005-03566} &2.70$\pm$0.23&3.3 & 0.4 \\ *ASAP \citep{NoyNRZDFGZ20} &2.68$\pm$0.11 & 2.5&0.2 \\ *SDARTS \citep{abs-2002-05283}&2.61$\pm$0.02 & 3.3& 1.3 \\ *DropNAS \citep{HongL0TWL020} &2.58$\pm$0.14 & 4.1&0.6 \\ *PC-DARTS \citep{abs-1907-05737} &2.57$\pm$0.07&3.6& 0.1\\ *FairDARTS \citep{abs-1911-12126} &2.54 &3.3 &0.4 \\ *DrNAS \citep{abs-2006-10355} &2.54$\pm$0.03&4.0& 0.4\\ *P-DARTS \citep{chen2019progressive} &2.50 &3.4&0.3\\ \hline \hline *DARTS-1st \citep{liu2018darts} &3.00$\pm$0.14&3.3& 0.4\\ $\;\;$LPT-R18-DARTS-1st (ours) &\textbf{2.85}$\pm$0.09 &2.7&0.6 \\ \hline *DARTS-2nd \citep{liu2018darts} &2.76$\pm$0.09&3.3& 1.5\\ $\;\;$LPT-R18-DARTS-2nd (ours) &2.72$\pm$0.07&3.4& 1.8 \\ $\;\;$LPT-R50-DARTS-2nd (ours) & \textbf{2.68}$\pm$0.02 &3.4& 2.0\\ \hline *DARTS$^{-}$ \citep{abs-2009-01027}&2.59$\pm$0.08& 3.5&0.4\\ ${}^{\dag}$DARTS$^{-}$ \citep{abs-2009-01027}& 2.97$\pm$0.04& 3.3&0.4\\ $\;\;$LPT-R18-DARTS$^{-}$ (ours) &2.74$\pm$0.07&3.4& 0.6\\ \hline ${}^{\Delta}$DARTS$^{+}$ \citep{abs-1909-06035}&2.83$\pm$0.05&3.7&0.4\\ $\;\;$LPT-R18-DARTS$^{+}$ (ours) &\textbf{2.69}$\pm$0.05&3.6& 0.5\\ \hline *PC-DARTS \citep{abs-1907-05737} 
&\textbf{2.57}$\pm$0.07&3.6& 0.1\\ $\;\;$LPT-R18-PC-DARTS (ours)& 2.65$\pm$0.17&3.7&0.1\\ \hline *P-DARTS \citep{chen2019progressive}& 2.50&3.4& 0.3\\ $\;\;$LPT-R18-P-DARTS (ours)& 2.58$\pm$0.14& 3.3 & 0.5 \\ \bottomrule \end{tabular} \caption{ Results on CIFAR-10. * means the results are taken from DARTS$^{-}$ \citep{abs-2009-01027}, NoisyDARTS \citep{abs-2005-03566}, and DrNAS \citep{abs-2006-10355}. The rest notations are the same as those in Table~\ref{tab:cifar100}. } \label{tab:cifar10} \end{table} \begin{table*}[t] \small \centering \begin{tabular}{l|cccc} \toprule \multirow{2}{*}{Method} & Top-1 &Top-5 &Param & Cost \\ & Error (\%) & Error (\%)&(M) & (GPU days)\\ \midrule *Inception-v1 \citep{googlenet}&30.2 &10.1&6.6&- \\ *MobileNet \citep{HowardZCKWWAA17} & 29.4& 10.5 &4.2&- \\ *ShuffleNet 2$\times$ (v1) \citep{ZhangZLS18} & 26.4 &10.2 & 5.4&-\\ *ShuffleNet 2$\times$ (v2) \citep{MaZZS18} & 25.1 &7.6 & 7.4&-\\ \hline *NASNet-A \citep{zoph2018learning} &26.0 &8.4 &5.3 &1800 \\ *PNAS \citep{LiuZNSHLFYHM18} &25.8 &8.1 &5.1 &225 \\ *MnasNet-92 \citep{TanCPVSHL19} & 25.2 & 8.0& 4.4&1667\\ *AmoebaNet-C \citep{real2019regularized} & 24.3 &7.6 &6.4&3150 \\ \hline *SNAS \citep{xie2018snas} & 27.3 &9.2 &4.3 &1.5 \\ *BayesNAS \citep{ZhouYWP19} &26.5 &8.9 &3.9&0.2 \\ *PARSEC \citep{abs-1902-05116} & 26.0 &8.4&5.6&1.0 \\ *GDAS \citep{DongY19} & 26.0&8.5 &5.3 & 0.2\\ *DSNAS \citep{HuXZLSLL20} &25.7& 8.1 &- & -\\ *SDARTS-ADV \citep{abs-2002-05283}&25.2& 7.8 &5.4& 1.3 \\ *PC-DARTS \citep{abs-1907-05737} & 25.1 &7.8&5.3&0.1\\ *ProxylessNAS \citep{cai2018proxylessnas} & 24.9 &7.5 &7.1 &8.3 \\ *FairDARTS \citep{abs-1911-12126} &24.9 &7.5 &4.8 &0.4 \\ *P-DARTS (CIFAR-100) \citep{chen2019progressive}&24.7& 7.5&5.1&0.3\\ *P-DARTS (CIFAR-10) \citep{chen2019progressive}&24.4 &7.4&4.9&0.3\\ *FairDARTS \citep{abs-1911-12126} &24.4 &7.4 &4.3 &3.0 \\ *DrNAS \citep{abs-2006-10355} & 24.2 &7.3& 5.2&3.9\\ *PC-DARTS \citep{abs-1907-05737} & 24.2 &7.3&5.3&3.8\\ *DARTS$^{+}$ 
\citep{abs-1909-06035}& 23.9& 7.4&5.1&6.8\\ *DARTS$^{-}$ \citep{abs-2009-01027}&23.8& 7.0&4.9&4.5\\ *DARTS$^{+}$ (CIFAR-100) \citep{abs-1909-06035}&23.7& 7.2&5.1&0.2\\ \hline \hline *DARTS-2nd-CIFAR-10 \citep{liu2018darts} & 26.7 &8.7&4.7&4.0 \\ $\;\;$LPT-R18-DARTS-2nd-CIFAR-10 (ours) & \textbf{25.3}&7.9&4.7&4.0 \\ \hline *P-DARTS (CIFAR10) \citep{chen2019progressive}&24.4 &7.4&4.9&0.3\\ $\;\;$LPT-R18-P-DARTS-CIFAR10 (ours) & \textbf{24.2}& 7.3&4.9&0.5 \\ \hline *P-DARTS (CIFAR100) \citep{chen2019progressive}&24.7& 7.5&5.1&0.3\\ $\;\;$LPT-R18-P-DARTS-CIFAR100 (ours) & \textbf{24.0}& 7.1&5.3&0.5\\ \hline *PC-DARTS-ImageNet \citep{abs-1907-05737} & 24.2 &7.3&5.3&3.8\\ $\;\;$LPT-R18-PC-DARTS-ImageNet (ours)& \textbf{23.4} & \textbf{6.8}&5.7&4.0\\ \bottomrule \end{tabular} \caption{Results on ImageNet, including top-1 and top-5 classification errors on the test set, number of weight parameters (millions), and search cost (GPU days). * means the results are taken from DARTS$^{-}$ \citep{abs-2009-01027} and DrNAS \citep{abs-2006-10355}. The rest notations are the same as those in Table~\ref{tab:cifar100}. The first row block shows networks designed by humans manually. The second row block shows non-gradient based search methods. The third block shows gradient-based methods. } \label{tab:imagenet} \end{table*} Table~\ref{tab:cifar100} shows the classification error (\%), number of weight parameters (millions), and search cost (GPU days) of different NAS methods on CIFAR-100. From this table, we make the following observations. \textbf{First}, when our method LPT is applied to different NAS baselines including DARTS-1st (first order approximation), DARTS-2nd (second order approximation), DARTS$^{-}$ (our run), DARTS$^{+}$, PC-DARTS, and P-DARTS, the classification errors of these baselines can be significantly reduced. For example, applying our method to P-DARTS, the error reduces from 17.49\% to 16.28\%. 
Applying our method to DARTS-2nd, the error reduces from 20.58\% to 18.40\%. This demonstrates the effectiveness of our method in searching for a better architecture. In our method, the learner continuously improves its architecture by passing the tests created by the tester with increasing levels of difficulty. These tests can help the learner to identify the weakness of its architecture and provide guidance on how to improve it. Our method creates a new test on the fly based on how the learner performs in the previous round. From the test bank, the tester selects a subset of difficult examples to evaluate the learner. This new test poses a greater challenge to the learner and encourages the learner to improve its architecture so that it can overcome the new challenge. In contrast, in baseline NAS approaches, a single fixed validation set is used to evaluate the learner. The learner can achieve a good performance via ``cheating": focusing on performing well on the majority of easy examples and ignoring the minority of difficult examples. As a result, the learner's architecture does not have the ability to deal with challenging cases in the unseen data. \textbf{Second}, LPT-R50-DARTS-2nd outperforms LPT-R18-DARTS-2nd, where the former uses ResNet-50 as the data encoder in the tester while the latter uses ResNet-18. ResNet-50 has a better ability of learning representations than ResNet-18 since it is ``deeper": 50 layers versus 18 layers. This shows that a ``stronger" tester can help the learner to learn better. With a more powerful data encoder, the tester can better understand examples in the test bank and can make better decisions in creating difficult and meaningful tests. Tests with better quality can more effectively evaluate the learner and promote its learning capability. 
\textbf{Third}, our method LPT-R18-P-DARTS achieves the best performance among all methods, which further demonstrates the effectiveness of LPT in driving the frontiers of neural architecture search forward. \textbf{Fourth}, the number of weight parameters and search costs corresponding to our methods are on par with those in differentiable NAS baselines. This shows that LPT is able to search better-performing architectures without significantly increasing network size and search cost. A few additional remarks: 1) On CIFAR-100, DARTS-2nd with second-order approximation in the optimization algorithm is not advantageous compared with DARTS-1st which uses first-order approximation; 2) In our run of DARTS$^{-}$, the performance reported in~\citep{abs-2009-01027} cannot be achieved; 3) In our run of DARTS$^+$, in the architecture evaluation stage, we set the number of epochs to 600 instead of 2000 as used in~\citep{abs-1909-06035}, to ensure a fair comparison with other methods (where the epoch number is 600). Table~\ref{tab:cifar10} shows the classification error (\%), number of weight parameters (millions), and search cost (GPU days) of different NAS methods on CIFAR-10. As can be seen, applying our proposed LPT to DARTS-1st, DARTS-2nd, DARTS$^{-}$, and DARTS$^{+}$ significantly reduces the errors of these baselines. For example, with the usage of LPT, the error of DARTS-2nd is reduced from 2.76\% to 2.68\%. This further demonstrates the efficacy of our method in searching better-performing architectures, by creating tests with increasing levels of difficulty and improving the learner through taking these tests. On PC-DARTS and P-DARTS, applying our method does not yield better performance. Table~\ref{tab:imagenet} shows the results on ImageNet, including top-1 and top-5 classification errors on the test set. 
In our proposed LPT-R18-PC-DARTS-ImageNet, the architecture is searched on ImageNet, where our method performs much better than PC-DARTS-ImageNet and achieves the lowest error (23.4\% top-1 error and 6.8\% top-5 error) among all methods in Table~\ref{tab:imagenet}. In our methods including LPT-R18-P-DARTS-CIFAR100, LPT-R18-P-DARTS-CIFAR10, and LPT-R18-DARTS-2nd-CIFAR10, the architectures are searched on CIFAR-10 or CIFAR-100 and evaluated on ImageNet, where these methods outperform their corresponding baselines P-DARTS-CIFAR100, P-DARTS-CIFAR10, and DARTS-2nd-CIFAR10. These results further demonstrate the effectiveness of our method. \subsubsection{Ablation Studies} In order to evaluate the effectiveness of individual modules in LPT, we compare the full LPT framework with the following ablation settings. \begin{itemize}[leftmargin=*] \item \textbf{Ablation setting 1}. In this setting, the tester creates tests solely by maximizing their level of difficulty, without considering their meaningfulness. Accordingly, the second stage in LPT where the tester learns to perform a target-task by leveraging the created tests is removed. The tester directly learns a selection scalar $s(d)\in[0,1]$ for each example $d$ in the test bank without going through a data encoder or a test creator. The corresponding formulation is: \begin{equation} \begin{array}{l} \max _{S} \min _{A} \;\; \frac{1}{\sum_{d\in D_{b}}s(d)} \sum_{d\in D_{b}} s(d) \ell (A, W^{*}(A),d)\\ s.t. \;\; W^{*}(A)=\min _{W} \;\; L\left(A, W, D_{ee}^{(\mathrm{tr})}\right) \end{array} \end{equation} where $S=\{s(d)|d\in D_{b}\}$. In this study, $\lambda$ and $\gamma$ are both set to 1. The data encoder of the tester is ResNet-18. For CIFAR-100, to avoid performance collapse because of skip connections, LPT is applied to P-DARTS. For CIFAR-10, LPT is applied to DARTS-2nd. \item \textbf{Ablation setting 2}. 
In this setting, in the second stage of LPT, the tester is trained solely based on the created test, without using the training data of the target task. The corresponding formulation is: \begin{equation} \begin{array}{l} \max _{C} \min _{A} \;\; \frac{1}{\left|\sigma\left(C, E^{*}(C), D_{b}\right)\right|} L\left(A, W^{*}\left(A\right), \sigma\left(C, E^{*}(C), D_{b}\right)\right)-\lambda L\left(E^{*}(C), X^{*}(C), D_{er}^{(\mathrm{val})}\right) \\ s.t. \;\; E^{*}(C), X^{*}(C)=\min _{E, X} \;\; L\left(E, X, \sigma\left(C, E, D_{b}\right)\right) \\ \quad\;\;\; W^{*}\left(A\right)=\min _{W}\;\;L\left(A, W, D_{ee}^{(\mathrm{tr})}\right) \end{array} \end{equation} In this study, $\lambda$ and $\gamma$ are both set to 1. The data encoder of the tester is ResNet-18. For CIFAR-100, to avoid performance collapse because of skip connections, LPT is applied to P-DARTS. For CIFAR-10, LPT is applied to DARTS-2nd. \item Ablation study on $\lambda$. We are interested in how the learner's performance varies as the tradeoff parameter $\lambda$ in Eq.(\ref{eq:learning_objective}) increases. In this study, the other tradeoff parameter $\gamma$ in Eq.(\ref{eq:learning_objective}) is set to 1. For both CIFAR-100 and CIFAR-10, we randomly sample 5K data from the 25K training and 25K validation data, and use it as a test set to report performance in this ablation study. The remaining 45K data (22.5K training data and 22.5K validation data) is used for architecture search and evaluation. The tester's data encoder is ResNet-18. LPT is applied to P-DARTS. \item Ablation study on $\gamma$. We investigate how the learner's performance varies as $\gamma$ increases. In this study, the other tradeoff parameter $\lambda$ is set to 1. Similar to the ablation study on $\lambda$, on 5K randomly-sampled test data, we report performance of architectures searched and evaluated on 45K data. The tester's data encoder is ResNet-18. LPT is applied to P-DARTS.
\end{itemize} \begin{table}[t] \centering \begin{tabular}{l|c} \hline Method & Error (\%)\\ \hline Difficulty only (CIFAR-100) & 18.12$\pm$0.11 \\ Difficulty + meaningfulness (CIFAR-100) &\textbf{17.18}$\pm$0.12 \\ \hline Difficulty only (CIFAR-10) & 2.79$\pm$0.06 \\ Difficulty + meaningfulness (CIFAR-10) &\textbf{2.72}$\pm$0.07 \\ \hline \end{tabular} \caption{Results for ablation setting 1. ``Difficulty only" denotes that the tester creates tests solely by maximizing their level of difficulty, without considering their meaningfulness, i.e., the tester does not use the tests to learn to perform the target task. ``Difficulty + meaningfulness" denotes the full LPT framework where the tester creates tests by maximizing both difficulty and meaningfulness. } \label{tab:ab1} \end{table} \begin{table}[t] \centering \begin{tabular}{l|c} \hline Method & Error (\%)\\ \hline Test only (CIFAR-100) & 17.54$\pm$0.07 \\ Test + Training data (CIFAR-100) &\textbf{17.18}$\pm$0.12 \\ \hline Test only (CIFAR-10) & 2.75$\pm$0.03 \\ Test + Training data (CIFAR-10) &\textbf{2.72}$\pm$0.07 \\ \hline \end{tabular} \caption{ Results for ablation setting 2. ``Test only" denotes that the tester is trained only using the created test to perform the target task. ``Test + Training data" denotes that the tester is trained using both the test and the training data of the target task. } \label{tab:ab4} \end{table} Table~\ref{tab:ab1} shows the results for ablation setting 1. As can be seen, on both CIFAR-10 and CIFAR-100, creating tests that are both difficult and meaningful is better than creating tests solely by maximizing difficulty. The reason is that a difficult test could be composed of bad-quality examples such as outliers and incorrectly-labeled examples. Even a highly-accurate learner model cannot achieve good performance on such erratic examples. To address this problem, it is necessary to make the created tests meaningful.
LPT achieves meaningfulness of the tests by making the tester leverage the created tests to perform the target task. The results demonstrate that this is an effective way of improving meaningfulness. Table~\ref{tab:ab4} shows the results for ablation setting 2. As can be seen, for both CIFAR-100 and CIFAR-10, using both the created test and the training data of the target task to train the tester performs better than using the test only. By leveraging the training data, the data encoder can be better trained. A better encoder, in turn, can help create higher-quality tests. \begin{figure}[H] \centering \includegraphics[width=0.49\columnwidth]{figs/gamma-100.pdf} \includegraphics[width=0.49\columnwidth]{figs/gamma-10.pdf} \caption{How errors change as $\lambda$ increases.} \label{fig:lambda} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.49\columnwidth]{figs/lambda-100.pdf} \includegraphics[width=0.49\columnwidth]{figs/lambda-10.pdf} \caption{How errors change as $\gamma$ increases.} \label{fig:gamma} \end{figure} Figure~\ref{fig:lambda} shows how classification errors change as $\lambda$ increases. As can be seen, on both CIFAR-100 and CIFAR-10, when $\lambda$ increases from 0.1 to 0.5, the error decreases. However, further increasing $\lambda$ causes the error to increase. From the tester's perspective, $\lambda$ controls a tradeoff between difficulty and meaningfulness of the tests. Increasing $\lambda$ encourages the tester to create tests that are more meaningful. Tests with more meaningfulness can more reliably evaluate the learner. However, if $\lambda$ is too large, the tests are biased to be more meaningful and less difficult. Lacking enough difficulty, the tests may not be compelling enough to drive the learner for improvement. Such a tradeoff effect is observed in the results on CIFAR-10 as well. Figure~\ref{fig:gamma} shows how classification errors change as $\gamma$ increases.
As can be seen, on both CIFAR-100 and CIFAR-10, when $\gamma$ increases from 0.1 to 0.5, the error decreases. However, further increasing $\gamma$ causes the error to increase. Under a larger $\gamma$, the created test plays a larger role in training the tester to perform the target task. This implicitly encourages the test creator to generate tests that are more meaningful. However, if $\gamma$ is too large, the training is dominated by the created test, which incurs the following risk: if the test is not meaningful, it will result in a poor-quality data encoder, which further degrades the quality of test creation. \subsection{Summary} In this section, we apply Skillearn to formalize a skill in human learning -- learning by passing tests (LPT), and use it for neural architecture search. In LPT, a tester model creates a sequence of tests with growing levels of difficulty. A learner model continuously improves its learning ability by striving to pass these increasingly more-challenging tests. The tester learns to select hard validation examples that cause the learner to make large prediction errors, and the learner refines its model to rectify these prediction errors. Our framework achieves significant improvement in neural architecture search on CIFAR-100, CIFAR-10, and ImageNet. \section{Case Study II: Interleaving Learning} In this section, we instantiate our general Skillearn framework to formalize another human learning technique -- interleaving learning, and apply it to improve machine learning. Interleaving learning is a learning technique where a learner interleaves the studies of multiple topics: study topic $A$ for a while, then switch to $B$, subsequently to $C$; then switch back to $A$, and so on, forming a pattern of $ABCABCABC\cdots$. Interleaving learning is in contrast to blocked learning, which studies one topic very thoroughly before moving to another topic.
Compared with blocked learning, interleaving learning increases long-term retention and improves the ability to transfer learned knowledge. We are interested in investigating whether the interleaving strategy is helpful for training machine learning models. We instantiate the Skillearn framework into an interleaving learning (IL) framework. We assume there are $K$ learning tasks, each performed by a learner model. Each learner has a data encoder and a task-specific head. The data encoders of all learners share the same architecture, but may have different weight parameters. The $K$ learners perform $M$ rounds of interleaving learning with the following order: \begin{equation} \underbrace{l_1,l_2,\cdots,l_K}_{\textrm{Round }1} \underbrace{l_1,l_2,\cdots,l_K}_{\textrm{Round }2} \cdots \underbrace{l_1,l_2,\cdots,l_K}_{\textrm{Round }m} \cdots \underbrace{l_1,l_2,\cdots,l_K}_{\textrm{Round }M} \end{equation} where $l_k$ denotes that the $k$-th learner performs learning. In the first round, we first learn $l_1$, then learn $l_2$, and so on. At the end of the first round, $l_K$ is learned. Then we move to the second round, which starts with learning $l_1$, then learns $l_2$, and so on. This pattern repeats until the $M$ rounds of learning are finished. Between two consecutive learners $l_k$ and $l_{k+1}$, the encoder weights of the latter learner $l_{k+1}$ are encouraged to be close to the optimally learned encoder weights of the former learner $l_k$.
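The interleaving order above is a simple round-robin schedule; a minimal sketch (our illustration, and the function name is an assumption):

```python
def interleaving_schedule(num_learners, num_rounds):
    """Return learner indices in the order l_1,...,l_K repeated for M rounds,
    i.e. the pattern ABC ABC ... described above."""
    return [k for _ in range(num_rounds) for k in range(1, num_learners + 1)]

# K = 3 learners, M = 2 rounds: the pattern ABCABC
print(interleaving_schedule(3, 2))  # [1, 2, 3, 1, 2, 3]
```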
\subsection{Method} \begin{table}[t] \centering \begin{tabular}{l|p{12cm}} \hline Notation & Meaning \\ \hline $K$ & Number of learners\\ $M$ & Number of rounds\\ $D_k^{(\textrm{tr})}$ & Training dataset of the $k$-th learner\\ $D_k^{(\textrm{val})}$ & Validation dataset of the $k$-th learner\\ $A$ & Encoder architecture shared by all learners\\ $W_k^{(m)}$ & Weight parameters in the data encoder of the $k$-th learner in the $m$-th round\\ $H_k^{(m)}$ & Weight parameters in the task-specific head of the $k$-th learner in the $m$-th round\\ $\widetilde{W}_k^{(m)}$ & The optimal encoder weights of the $k$-th learner in the $m$-th round\\ $\widetilde{H}_k^{(m)}$ & The optimal weight parameters of the task-specific head in the $k$-th learner in the $m$-th round\\ $\gamma$ & Tradeoff parameter\\ \hline \end{tabular} \caption{Notations in interleaving learning} \label{tb:notations} \end{table} In this section, we present the details of the interleaving learning framework. There are $K$ learners. Each learner learns to perform a task. These tasks could be the same, e.g., image classification on CIFAR-10; or different, e.g., image classification on CIFAR-10, image classification on ImageNet~\citep{deng2009imagenet}, object detection on MS-COCO~\citep{coco}, etc. Each learner $k$ has a training dataset $D_k^{(\textrm{tr})}$ and a validation dataset $D_k^{(\textrm{val})}$. Each learner has a data encoder and a task-specific head performing the target task. For example, if the task is image classification, the data encoder could be a convolutional neural network extracting visual features of the input images and the task-specific head could be a multi-layer perceptron which takes the visual features of an image extracted by the data encoder as input and predicts the class label of this image. We assume the architecture of the data encoder in each learner is learnable.
The data encoders of all learners share the same architecture, but their weight parameters could be different in different learners. The architectures of task-specific heads are manually designed by humans and they could be different in different learners. The $K$ learners perform $M$ rounds of interleaving learning with the following order: \begin{equation} \underbrace{l_1,l_2,\cdots,l_K}_{\textrm{Round }1} \underbrace{l_1,l_2,\cdots,l_K}_{\textrm{Round }2} \cdots \underbrace{l_1,l_2,\cdots,l_K}_{\textrm{Round }m} \cdots \underbrace{l_1,l_2,\cdots,l_K}_{\textrm{Round }M} \end{equation} where $l_k$ denotes that the $k$-th learner performs learning. In the first round, we first learn $l_1$, then learn $l_2$, and so on. At the end of the first round, $l_K$ is learned. Then we move to the second round, which starts with learning $l_1$, then learns $l_2$, and so on. This pattern repeats until the $M$ rounds of learning are finished. Between two consecutive learners $l_k$ and $l_{k+1}$, the encoder weights of the latter learner $l_{k+1}$ are encouraged to be close to the optimally learned encoder weights of the former learner $l_k$. For each learner, the architecture of its encoder remains the same across all rounds; the weights of the encoder and head can be different in different rounds. Each learner $k$ has the following learnable parameter sets: 1) architecture $A$ of the encoder; 2) in each round $m$, the learner's encoder has a set of weight parameters $W_k^{(m)}$ specific to this round; 3) in each round $m$, the learner's task-specific head has a set of weight parameters $H_k^{(m)}$ specific to this round. The encoders of all learners share the same architecture and this architecture remains the same in different rounds. The encoders of different learners have different weight parameters. The weight parameters of a learner's encoder are different in different rounds.
Different learners have different task-specific heads in terms of both architectures and weight parameters. In the interleaving process, the learning of the $k$-th learner is assisted by the $(k-1)$-th learner. Specifically, during learning, the encoder weights $W_{k}$ of the $k$-th learner are encouraged to be close to the optimal encoder weights $\widetilde{W}_{k-1}$ of the $(k-1)$-th learner. This is achieved by minimizing an interaction function: $\|W_{k}-\widetilde{W}_{k-1}\|_2^2$. \begin{table}[t] \centering \begin{tabular}{l|p{8cm}} \hline Active learners & The first learner\\ \hline Active learnable parameters & Weights of the data encoder and weights of the task-specific head in the first learner\\ \hline Supporting learnable parameters & Encoder architecture shared by all learners \\ \hline Active training datasets & Training dataset of the first learner\\ \hline Active auxiliary datasets & -- \\ \hline Training loss & The first learner trains the weights of its data encoder and the weights of its task-specific head on its training dataset: $L(A, W^{(1)}_1, H^{(1)}_1,D_1^{(\textrm{tr})})$. \\ \hline Interaction function & -- \\ \hline Optimization problem & $\widetilde{W}_1^{(1)}(A) =\textrm{min}_{W^{(1)}_1,H^{(1)}_1} \; L(A, W^{(1)}_1, H^{(1)}_1,D_1^{(\textrm{tr})})$ \\ \hline \end{tabular} \caption{Learning stage 1 in interleaving learning} \label{tb:il-1} \end{table} There are $M\times K$ learning stages: in each of the $M$ rounds, each of the $K$ learners is learned in a stage. In the very first learning stage, the first learner in the first round is learned. It trains the weight parameters of its data encoder and the weight parameters of its task-specific head on its training dataset. In this learning stage (Table~\ref{tb:il-1}), the active learner is the first learner. The active learnable parameters are the weight parameters of the data encoder and the weight parameters of the task-specific head of the first learner in the first round.
The supporting learnable parameters include the encoder architecture shared by all learners. The active training dataset is the training dataset of the first learner. There is no auxiliary dataset. The training loss is the target task's loss defined on the training dataset of the first learner: $L(A, W^{(1)}_1, H^{(1)}_1,D_1^{(\textrm{tr})})$. There is no interaction function. The optimization problem is: \begin{equation} \widetilde{W}_1^{(1)}(A) =\textrm{min}_{W^{(1)}_1,H^{(1)}_1} \; L(A, W^{(1)}_1, H^{(1)}_1,D_1^{(\textrm{tr})}) \end{equation} In this optimization problem, $A$ is not learned. After learning, the optimal head is discarded. The optimal encoder weights $\widetilde{W}_1^{(1)}(A)$ are a function of $A$: the training loss depends on $A$, and $\widetilde{W}_1^{(1)}$ minimizes this loss. $\widetilde{W}_1^{(1)}(A)$ is passed to the next learning stage to help with the learning of the second learner. In any subsequent learning stage (Table~\ref{tb:il-k}), say the $l$-th stage, in which learner $k$ is trained in round $m$, the active learner is learner $k$. The active learnable parameters include the weights of the data encoder and the weights of the task-specific head of the $k$-th learner in the $m$-th round. The supporting learnable parameters are the encoder architecture shared by all learners. The active training dataset is the training dataset of the $k$-th learner. There is no active auxiliary dataset. The training loss is the target task's loss defined on the training dataset of the $k$-th learner: $L(A, W^{(m)}_k, H^{(m)}_k,D_k^{(\textrm{tr})})$. The interaction function $\|W^{(m)}_k-\widetilde{W}_{l-1}\|_2^2$ encourages the encoder weights $W^{(m)}_k$ at this stage to be close to the optimal encoder weights $\widetilde{W}_{l-1}$ learned in the previous stage.
The optimization problem is: \begin{equation} \widetilde{W}_k^{(m)}= \textrm{min}_{W_k^{(m)},H_k^{(m)}} \; L(A, W_k^{(m)}, H_k^{(m)},D_k^{(\textrm{tr})})+\lambda\|W^{(m)}_k-\widetilde{W}_{l-1}(A)\|^2_{2} \end{equation} where $\lambda$ is a tradeoff parameter. The optimal encoder weights are a function of the encoder architecture. The encoder architecture is not updated at this learning stage. In rounds 1 to $M-1$, the optimal heads are discarded after learning. In round $M$, the optimal heads are retained and will be used in the validation stage. \begin{table}[t] \centering \begin{tabular}{l|p{8cm}} \hline Active learners & The $k$-th learner \\ \hline Active learnable parameters & Weights of the data encoder and weights of the task-specific head in the $k$-th learner\\ \hline Supporting learnable parameters & Encoder architecture shared by all learners\\ \hline Active training datasets & Training dataset of the $k$-th learner \\ \hline Active auxiliary datasets & -- \\ \hline Training loss & The $k$-th learner trains the weights of its data encoder and the weights of its task-specific head on its training dataset: $L(A, W^{(m)}_k, H^{(m)}_k,D_k^{(\textrm{tr})})$ \\ \hline Interaction function & The learner encourages its encoder weights to be close to the optimal encoder weights $\widetilde{W}_{l-1}$ learned in stage $l-1$: $\|W^{(m)}_k-\widetilde{W}_{l-1}\|_2^2$ \\ \hline Optimization problem & $\widetilde{W}_k^{(m)}=\textrm{min}_{W_k^{(m)},H_k^{(m)}} \; L(A, W_k^{(m)}, H_k^{(m)},D_k^{(\textrm{tr})})+\lambda\|W^{(m)}_k-\widetilde{W}_{l-1}(A)\|^2_{2}$ \\ \hline \end{tabular} \caption{Learning stage $l$ with the $k$-th learner in the $m$-th round, in interleaving learning} \label{tb:il-k} \end{table} \begin{table}[t] \centering \begin{tabular}{l|p{8cm}} \hline Active learners & All learners\\ \hline Remaining learnable parameters & Encoder's architecture of all learners\\ \hline Validation datasets& Validation datasets of all learners\\ \hline
Active auxiliary datasets & -- \\ \hline Validation loss & The sum of every learner's validation loss on its validation dataset: $\sum_{k=1}^K L(A, \widetilde{W}_k^{(M)}(A), \widetilde{H}_k^{(M)}(A),D_k^{(\textrm{val})}) $ \\ \hline Interaction function & -- \\ \hline Optimization problem & $\textrm{min}_{A} \; \sum_{k=1}^K L(A, \widetilde{W}_k^{(M)}(A), \widetilde{H}_k^{(M)}(A),D_k^{(\textrm{val})}) $ \\ \hline \end{tabular} \caption{Validation stage in interleaving learning} \label{tb:il-val} \end{table} In the validation stage (Table~\ref{tb:il-val}), the active learners are all $K$ learners. The remaining learnable parameters are the encoder architecture shared by all learners. The validation datasets are the validation datasets of all learners. There is no active auxiliary dataset. The validation loss is the sum of every learner's validation loss calculated using the optimal encoder weights and head weights learned in the final round: $\sum_{k=1}^K L(A, \widetilde{W}_k^{(M)}(A), \widetilde{H}_k^{(M)}(A),D_k^{(\textrm{val})})$. There is no interaction function. The optimization problem is: \begin{equation} \textrm{min}_{A} \; \sum_{k=1}^K L(A, \widetilde{W}_k^{(M)}(A), \widetilde{H}_k^{(M)}(A),D_k^{(\textrm{val})}). \end{equation} Putting all these pieces together, we instantiate the Skillearn framework to an interleaving learning framework, as shown in Eq.(\ref{eq:il}). From bottom to top, the $K$ learners perform $M$ rounds of interleaving learning. Learners in adjacent learning stages are coupled via the interaction function. The architecture $A$ is not updated in the learning stages. It is learned by minimizing the validation loss. Table~\ref{tb:sk-il} summarizes the key elements of interleaving learning under the Skillearn terminology. \begin{equation} \begin{array}{ll} \textrm{min}_{A} & \sum_{k=1}^K L(A, \widetilde{W}_k^{(M)}(A), \widetilde{H}_k^{(M)}(A),D_k^{(\textrm{val})}) \\ s.t. 
& \textrm{\textbf{Round $\mathbf{M}$}:}\\ & \widetilde{W}_K^{(M)}(A),\widetilde{H}_K^{(M)}(A) =\textrm{min}_{W_K^{(M)},H^{(M)}_K} \quad L(A, W_K^{(M)}, H^{(M)}_K,D_K^{(\textrm{tr})})+\lambda\|W^{(M)}_K-\widetilde{W}_{K-1}^{(M)}(A)\|^2_{2}\\ & \cdots \\ & \widetilde{W}_1^{(M)}(A), \widetilde{H}_1^{(M)}(A) =\textrm{min}_{W^{(M)}_1,H^{(M)}_1} \quad L(A, W^{(M)}_1, H^{(M)}_1,D_1^{(\textrm{tr})})+\lambda\|W^{(M)}_1-\widetilde{W}_K^{(M-1)}(A)\|^2_{2}\\ & \cdots\\ & \textrm{\textbf{Round 2}:}\\ & \widetilde{W}_K^{(2)}(A) =\textrm{min}_{W_K^{(2)},H^{(2)}_K} \quad L(A, W_K^{(2)}, H^{(2)}_K,D_K^{(\textrm{tr})})+\lambda\|W^{(2)}_K-\widetilde{W}_{K-1}^{(2)}(A)\|^2_{2}\\ & \cdots \\ & \widetilde{W}_1^{(2)}(A) =\textrm{min}_{W^{(2)}_1,H^{(2)}_1} \quad L(A, W^{(2)}_1, H^{(2)}_1,D_1^{(\textrm{tr})})+\lambda\|W^{(2)}_1-\widetilde{W}_K^{(1)}(A)\|^2_{2}\\ & \textrm{\textbf{Round 1}:}\\ & \widetilde{W}_K^{(1)}(A) =\textrm{min}_{W_K^{(1)},H^{(1)}_K} \quad L(A, W_K^{(1)}, H^{(1)}_K,D_K^{(\textrm{tr})})+\lambda\|W^{(1)}_K-\widetilde{W}_{K-1}^{(1)}(A)\|^2_{2}\\ & \cdots \\ & \widetilde{W}_k^{(1)}(A) =\textrm{min}_{W_k^{(1)},H^{(1)}_k} \quad L(A, W_k^{(1)}, H^{(1)}_k,D_k^{(\textrm{tr})})+\lambda\|W^{(1)}_k-\widetilde{W}_{k-1}^{(1)}(A)\|^2_{2}\\ & \cdots \\ & \widetilde{W}_2^{(1)}(A) =\textrm{min}_{W_2^{(1)},H^{(1)}_2} \quad L(A, W^{(1)}_2, H^{(1)}_2,D_2^{(\textrm{tr})})+\lambda\|W^{(1)}_2-\widetilde{W}_1^{(1)}(A)\|^2_{2}\\ & \widetilde{W}_1^{(1)}(A) =\textrm{min}_{W^{(1)}_1,H^{(1)}_1} \quad L(A, W^{(1)}_1, H^{(1)}_1,D_1^{(\textrm{tr})}) \end{array} \label{eq:il} \end{equation} \begin{table}[t] \centering \begin{tabular}{l|p{10.5cm}} \hline Skillearn & Interleaving Learning\\ \hline Learners & $K$ learners \\ \hline Learnable parameters & 1) Encoder architecture shared by all learners; 2) In each round, each learner has weight parameters for the data encoder and weight parameters for the task-specific head. 
\\ \hline Interaction function & The encoder weights $W_l$ at learning stage $l$ are encouraged to be close to the optimal encoder weights $\widetilde{W}_{l-1}$ at stage $l-1$: $\|W_l-\widetilde{W}_{l-1}\|_2^2$.\\ \hline Learning stages & 1) In the first learning stage (the first learner in the first round), the learner trains the weights of its data encoder and the weights of its task-specific head on its training dataset: $\widetilde{W}_1^{(1)}(A) =\textrm{min}_{W^{(1)}_1,H^{(1)}_1} \quad L(A, W^{(1)}_1, H^{(1)}_1,D_1^{(\textrm{tr})})$; 2) In other learning stages, the learner trains the weights of its data encoder and the weights of its task-specific head on its training dataset where the encoder weights are encouraged to be close to the optimal encoder weights trained in the previous stage: $\widetilde{W}_k^{(m)}(A) =\textrm{min}_{W_k^{(m)},H^{(m)}_k} \quad L(A, W^{(m)}_k, H^{(m)}_k,D_k^{(\textrm{tr})})+\lambda\|W^{(m)}_k-\widetilde{W}_{k-1}^{(m)}(A)\|^2_{2}$. \\ \hline Validation stage & Each learner validates its optimal data encoder and task-specific head learned in the last round on its validation dataset.\\ \hline Datasets & Each learner has a training dataset and a validation dataset.\\ \hline \end{tabular} \caption{Mapping from Skillearn to Interleaving Learning} \label{tb:sk-il} \end{table} \subsection{Optimization Algorithm} In this section, we develop an optimization algorithm for interleaving learning by instantiating the general optimization framework of Skillearn in Section~\ref{opt:sk}. 
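Before deriving the update rules, it may help to see the per-stage subproblem concretely: each stage minimizes its training loss plus a proximal penalty $\lambda\|W - \widetilde{W}_{\textrm{prev}}\|_2^2$ pulling the encoder weights toward the previous stage's optimum. The gradient-descent loop below is an illustrative sketch (the callback `task_grad` stands in for the real training-loss gradient; all names are ours, not the paper's):

```python
import numpy as np

# One learning stage: gradient descent on
#     task_loss(W) + lam * ||W - W_prev||^2.
def run_stage(W, W_prev, task_grad, lam=1.0, eta=0.1, steps=200):
    for _ in range(steps):
        grad = task_grad(W) + 2.0 * lam * (W - W_prev)  # loss grad + proximal grad
        W = W - eta * grad
    return W
```

For intuition: with a quadratic stand-in task loss $\|W - t\|^2$, the stage optimum is $(t + \lambda W_{\textrm{prev}})/(1+\lambda)$, an interpolation between the task's own solution and the weights inherited from the previous stage, which is exactly the warm-start effect the interaction function is designed to produce.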
For each optimization problem $\widetilde{W}_k^{(m)}(A) =\textrm{min}_{W^{(m)}_k,H^{(m)}_k} \quad L(A, W^{(m)}_k, H^{(m)}_k,D_k^{(\textrm{tr})})+\lambda\|W^{(m)}_k-\widetilde{W}_{k-1}^{(m)}(A)\|^2_{2} $ in a learning stage, we approximate the optimal solution $\widetilde{W}_k^{(m)}(A)$ by one-step gradient descent update of the optimization variable $W^{(m)}_k$: \begin{equation} \widetilde{W}_k^{(m)}(A)\approx \overline{W}_k^{(m)}(A)= W^{(m)}_k-\eta \nabla_{W^{(m)}_k}( L(A, W^{(m)}_k, H^{(m)}_k,D_k^{(\textrm{tr})})+\lambda\|W^{(m)}_k-\widetilde{W}_{k-1}^{(m)}(A)\|^2_{2}) \end{equation} For $\widetilde{W}_1^{(1)}(A)$, the approximation is: \begin{equation} \widetilde{W}_1^{(1)}(A)\approx \overline{W}_1^{(1)}(A)= W^{(1)}_1-\eta \nabla_{W^{(1)}_1} L(A, W^{(1)}_1, H^{(1)}_1,D_1^{(\textrm{tr})}) \label{eq:w11} \end{equation} For $\widetilde{W}_k^{(m)}(A)$, the approximation is: \begin{equation} \widetilde{W}_k^{(m)}(A)\approx \overline{W}_k^{(m)}(A)= W^{(m)}_k-\eta \nabla_{W^{(m)}_k} L(A, W^{(m)}_k, H^{(m)}_k,D_k^{(\textrm{tr})})-2\eta\lambda(W^{(m)}_k-\overline{W}_{k-1}^{(m)}(A)) \label{eq:wkm} \end{equation} where $\overline{W}_{k-1}^{(m)}(A)$ is the approximation of $\widetilde{W}_{k-1}^{(m)}(A)$. Note that $\{\overline{W}_k^{(m)}(A)\}_{k,m=1}^{K,M}$ are calculated recursively, where $\overline{W}_k^{(m)}(A)$ is a function of $\overline{W}_{k-1}^{(m)}(A)$, $\overline{W}_{k-1}^{(m)}(A)$ is a function of $\overline{W}_{k-2}^{(m)}(A)$, and so on. When $m>1$ and $k=1$, $\overline{W}_{k-1}^{(m)}(A)=\overline{W}_{K}^{(m-1)}(A)$. 
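The recursive structure of the one-step approximations can be sketched as follows. Here `grads[m][k]` stands in for the training-loss gradient evaluated at `W[m][k]`, and the chaining of stage $(m, 1)$ back to stage $(m-1, K)$ mirrors the note above; names and data layout are illustrative:

```python
# Recursive computation of the one-step approximations W_bar[m][k]:
# each stage takes one gradient step on its own training loss plus the
# proximal pull toward the previous stage's approximation.
def one_step_approximations(W, grads, eta, lam):
    M, K = len(W), len(W[0])
    W_bar = [[None] * K for _ in range(M)]
    for m in range(M):
        for k in range(K):
            step = W[m][k] - eta * grads[m][k]
            if m == 0 and k == 0:
                W_bar[m][k] = step  # very first stage: no proximal term
            else:
                # previous stage: learner k-1 in this round, or learner K
                # in the previous round when k is the round's first learner
                prev = W_bar[m][k - 1] if k > 0 else W_bar[m - 1][K - 1]
                W_bar[m][k] = step - 2.0 * eta * lam * (W[m][k] - prev)
    return W_bar
```

Because each `W_bar[m][k]` is built from its predecessor, the whole chain remains differentiable in $A$, which is what allows the validation-stage gradient w.r.t. the architecture to be computed.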
For $\widetilde{H}_k^{(M)}(A)$, the approximation is: \begin{equation} \widetilde{H}_k^{(M)}(A)\approx \overline{H}_k^{(M)}(A)=H_k^{(M)}-\eta \nabla_{H_k^{(M)}}L(A, W_k^{(M)}, H^{(M)}_k,D_k^{(\textrm{tr})}) \label{eq:hm} \end{equation} In the validation stage, we plug the approximations of $\{\widetilde{W}_k^{(M)}(A)\}_{k=1}^K$ and $\{\widetilde{H}_k^{(M)}(A)\}_{k=1}^K$ into the validation loss function, calculate the gradient of the approximated objective w.r.t. the encoder architecture $A$, then update $A$ via: \begin{equation} A\gets A-\eta \sum_{k=1}^K \nabla_AL(A, \overline{W}_k^{(M)}(A), \overline{H}_k^{(M)}(A),D_k^{(\textrm{val})}) \label{eq:a-il} \end{equation} The update steps in Eqs.(\ref{eq:w11})--(\ref{eq:a-il}) are iterated until convergence. The entire algorithm is summarized in Algorithm~\ref{algo:algo-il}. \begin{algorithm}[H] \SetAlgoLined \While{not converged}{ 1. Compute $\overline{W}_1^{(1)}(A)$ using Eq.(\ref{eq:w11})\\ 2. For $k=2\cdots K$, compute $\overline{W}_k^{(1)}(A)$ using Eq.(\ref{eq:wkm})\\ 3. For $k=1\cdots K$ and $m=2\cdots M$, compute $\overline{W}_k^{(m)}(A)$ using Eq.(\ref{eq:wkm})\\ 4. For $k=1\cdots K$, compute $\overline{H}_k^{(M)}(A)$ using Eq.(\ref{eq:hm})\\ 5. Update $A$ using Eq.(\ref{eq:a-il}) } \caption{Optimization algorithm for interleaving learning} \label{algo:algo-il} \end{algorithm} \subsection{Experiments} We apply interleaving learning to neural architecture search in image classification tasks. Two tasks are interleaved: image classification on CIFAR-10 and image classification on CIFAR-100. We search the shared architecture of data encoders in these two tasks. \begin{table}[t] \caption{Classification error (\%) on the test set of CIFAR-100, number of parameters (millions) in the searched architecture, and search cost (GPU days). DARTS-1st and DARTS-2nd denote that first-order and second-order approximations are used in DARTS. * denotes that the results are taken from DARTS$^{-}$ \citep{abs-2009-01027}.
$\dag$ denotes that this approach was re-run 10 times. The search cost is measured in GPU days on a Tesla V100. } \centering \begin{tabular}{l|ccc} \toprule Method & Error(\%)& Param(M)& Cost\\ \midrule *ResNet \citep{he2016deep}&22.10&1.7&-\\ *DenseNet \citep{HuangLMW17}&17.18&25.6 &-\\ \hline *PNAS \citep{LiuZNSHLFYHM18}&19.53&3.2&150\\ *ENAS \citep{pham2018efficient}&19.43&4.6&0.5\\ *AmoebaNet \citep{real2019regularized}&18.93&3.1&3150\\ \hline ${}^{\dag}$DARTS-1st \citep{liu2018darts} &20.52$\pm$0.31 &1.8 &0.4\\ *GDAS \citep{DongY19}&18.38&3.4&0.2\\ *R-DARTS \citep{ZelaESMBH20}&18.01$\pm$0.26&-&1.6 \\ *DARTS$^{-}$ \citep{abs-2009-01027}&17.51$\pm$0.25&3.3&0.4\\ ${}^{\dag}$DARTS$^{-}$ \citep{abs-2009-01027}& 18.97$\pm$0.16& 3.1&0.4\\ ${}^{\Delta}$DARTS$^{+}$ \citep{abs-1909-06035}&17.11$\pm$0.43&3.8&0.2\\ *DropNAS \citep{HongL0TWL020} & 16.39&4.4&0.7 \\ \hline \hline *DARTS-2nd \citep{liu2018darts} & 20.58$\pm$0.44&1.8&1.5 \\ $\;\;$JL(DARTS2nd) & 18.92$\pm$0.17 & 2.4& 3.1 \\ $\;\;$IL(DARTS2nd) (ours) & \textbf{17.12}$\pm$0.08 & 2.6& 3.2 \\ \hline *P-DARTS \citep{chen2019progressive}&17.49&3.6&0.3\\ $\;\;$JL(PDARTS) &17.67$\pm$0.31 &3.5 &0.6\\ $\;\;$IL(PDARTS) (ours)& \textbf{16.14}$\pm$0.17& 3.6&0.6\\ \hline $\dag$PC-DARTS \citep{abs-1907-05737} &17.96$\pm$0.15&3.9&0.1 \\ $\;\;$JL(PCDARTS) & 18.11$\pm$0.27& 3.9&0.2\\ $\;\;$IL(PCDARTS) (ours)&17.83$\pm$0.14 &3.8 &0.3\\ \bottomrule \end{tabular} \label{tab:cifar100-il} \end{table} \begin{table}[t] \caption{ Classification error (\%) on the test set of CIFAR-10, number of parameters (millions) in the searched architecture, and search cost (GPU days). * denotes that the results are taken from DARTS$^{-}$ \citep{abs-2009-01027}, NoisyDARTS \citep{abs-2005-03566}, and DrNAS \citep{abs-2006-10355}.
} \centering \begin{tabular}{l|ccc} \toprule Method& Error(\%)& Param(M) & Cost\\ \midrule *DenseNet \citep{HuangLMW17}&3.46&25.6 &-\\ \hline *HierEvol \citep{liu2017hierarchical}&3.75$\pm$0.12& 15.7 &300\\ *NAONet-WS \citep{LuoTQCL18} & 3.53 & 3.1&0.4 \\ *PNAS \citep{LiuZNSHLFYHM18} &3.41$\pm$0.09 &3.2& 225\\ *ENAS \citep{pham2018efficient} &2.89 & 4.6 &0.5 \\ *NASNet-A \citep{zoph2018learning} & 2.65 & 3.3& 1800\\ *AmoebaNet-B \citep{real2019regularized} & 2.55$\pm$0.05 & 2.8&3150 \\ \hline *DARTS-1st \citep{liu2018darts} &3.00$\pm$0.14&3.3& 0.4\\ *R-DARTS \citep{ZelaESMBH20} &2.95$\pm$0.21 &- & 1.6 \\ *GDAS \citep{DongY19}&2.93& 3.4& 0.2 \\ *SNAS \citep{xie2018snas} &2.85 & 2.8& 1.5\\ ${}^{\Delta}$DARTS$^{+}$ \citep{abs-1909-06035}&2.83$\pm$0.05&3.7&0.4\\ *BayesNAS \citep{ZhouYWP19} &2.81$\pm$0.04 &3.4&0.2 \\ *MergeNAS \citep{WangXYYHS20} &2.73$\pm$0.02 &2.9 & 0.2 \\ *NoisyDARTS \citep{abs-2005-03566} &2.70$\pm$0.23&3.3 & 0.4 \\ *ASAP \citep{NoyNRZDFGZ20} &2.68$\pm$0.11 & 2.5&0.2 \\ *SDARTS \citep{abs-2002-05283}&2.61$\pm$0.02 & 3.3& 1.3 \\ *DARTS$^{-}$ \citep{abs-2009-01027}&2.59$\pm$0.08& 3.5&0.4\\ ${}^{\dag}$DARTS$^{-}$ \citep{abs-2009-01027}& 2.97$\pm$0.04& 3.3&0.4\\ *DropNAS \citep{HongL0TWL020} &2.58$\pm$0.14 & 4.1&0.6 \\ *FairDARTS \citep{abs-1911-12126} &2.54 &3.3 &0.4 \\ *DrNAS \citep{abs-2006-10355} &2.54$\pm$0.03&4.0& 0.4\\ \hline \hline *DARTS2nd \citep{liu2018darts} &2.76$\pm$0.09&3.3& 1.5\\ $\;\;$JL(DARTS2nd) & 2.91$\pm$0.12 &2.4 & 3.1 \\ $\;\;$IL(DARTS2nd) (ours) & \textbf{2.62}$\pm$0.04 &2.6 & 3.2\\ \hline *PC-DARTS \citep{abs-1907-05737} &2.57$\pm$0.07&3.6& 0.1 \\ $\;\;$JL(PCDARTS) &2.63$\pm$0.05&3.9& 0.2\\ $\;\;$IL(PCDARTS) (ours) &2.55$\pm$0.11&3.8&0.3 \\ \hline *P-DARTS \citep{chen2019progressive} &2.50 &3.4&0.3\\ $\;\;$JL(PDARTS) & 2.63$\pm$0.12 & 3.5& 0.6\\ $\;\;$IL(PDARTS) (ours) & 2.51$\pm$0.10 & 3.6& 0.6\\ \bottomrule \end{tabular} \label{tab:c10-il} \end{table} \begin{table*}[t] \caption{In interleaving learning experiments, top-1 and 
top-5 classification errors on the test set of ImageNet, number of weight parameters, and search cost. Results marked with * are taken from DARTS$^{-}$ \citep{abs-2009-01027} and DrNAS \citep{abs-2006-10355}. From top to bottom, in the first, second, and third block are human-designed networks, non-differentiable search methods, and differentiable search methods. } \centering \begin{tabular}{l|cccc} \toprule \multirow{2}{*}{Method} & Top-1 &Top-5 &Param & Cost \\ & Error (\%) & Error (\%)&(M) & (GPU days)\\ \midrule *Inception-v1 \citep{googlenet}&30.2 &10.1&6.6&- \\ *MobileNet \citep{HowardZCKWWAA17} & 29.4& 10.5 &4.2&- \\ *ShuffleNet 2$\times$ (v1) \citep{ZhangZLS18} & 26.4 &10.2 & 5.4&-\\ *ShuffleNet 2$\times$ (v2) \citep{MaZZS18} & 25.1 &7.6 & 7.4&-\\ \hline *NASNet-A \citep{zoph2018learning} &26.0 &8.4 &5.3 &1800 \\ *PNAS \citep{LiuZNSHLFYHM18} &25.8 &8.1 &5.1 &225 \\ *MnasNet-92 \citep{TanCPVSHL19} & 25.2 & 8.0& 4.4&1667\\ *AmoebaNet-C \citep{real2019regularized} & 24.3 &7.6 &6.4&3150 \\ \hline *SNAS \citep{xie2018snas} & 27.3 &9.2 &4.3 &1.5 \\ *BayesNAS \citep{ZhouYWP19} &26.5 &8.9 &3.9&0.2 \\ *PARSEC \citep{abs-1902-05116} & 26.0 &8.4&5.6&1.0 \\ *GDAS \citep{DongY19} & 26.0&8.5 &5.3 & 0.2\\ *DSNAS \citep{HuXZLSLL20} &25.7& 8.1 &- & -\\ *SDARTS-ADV \citep{abs-2002-05283}&25.2& 7.8 &5.4& 1.3 \\ *PC-DARTS \citep{abs-1907-05737} & 25.1 &7.8&5.3&0.1\\ *ProxylessNAS \citep{cai2018proxylessnas} & 24.9 &7.5 &7.1 &8.3 \\ *FairDARTS (CIFAR-10) \citep{abs-1911-12126} &24.9 &7.5 &4.8 &0.4 \\ *FairDARTS (ImageNet) \citep{abs-1911-12126} &24.4 &7.4 &4.3 &3.0 \\ *DrNAS \citep{abs-2006-10355} & 24.2 &7.3& 5.2&3.9\\ *DARTS$^{+}$ (ImageNet) \citep{abs-1909-06035}& 23.9& 7.4&5.1&6.8\\ *DARTS$^{-}$ \citep{abs-2009-01027}&23.8& 7.0&4.9&4.5\\ *DARTS$^{+}$ (CIFAR-100) \citep{abs-1909-06035}&23.7& 7.2&5.1&0.2\\ \hline \hline *DARTS2nd(CIFAR10) \citep{liu2018darts} & 26.7 &8.7&4.7&1.5 \\ $\;\;$JL(DARTS2nd,CIFAR10/100) & 26.4& 8.5 &3.5 & 3.1\\ $\;\;$IL(DARTS2nd,CIFAR10/100) (ours) & 
\textbf{25.5} &\textbf{8.0} &3.8 & 3.2\\ \hline *PDARTS(CIFAR10) \citep{chen2019progressive}&24.4 &7.4&4.9&0.3\\ *PDARTS(CIFAR100) \citep{chen2019progressive}&24.7& 7.5&5.1&0.3\\ $\;\;$JL(PDARTS,CIFAR10/100) & 25.0 & 7.9 &5.1 &0.6 \\ $\;\;$IL(PDARTS,CIFAR10/100) (ours) & \textbf{24.1} & \textbf{7.1} & 5.3&0.6 \\ \bottomrule \end{tabular} \label{tab:imagenet-il} \end{table*} \subsubsection{Experimental Settings} We follow the experimental protocol in \citep{liu2018darts}. Each experiment consists of a search phase and an evaluation phase. In the search phase, an optimal architecture cell is searched. In the evaluation phase, the searched cell is copied multiple times and these copies are stacked into a larger network. The larger network is trained from scratch. Each experiment was repeated 10 times with different random initializations. In interleaving learning, we perform two tasks: image classification on CIFAR-100 and image classification on CIFAR-10, using two classification models $A$ and $B$. CIFAR-10 contains 10 classes and CIFAR-100 contains 100 classes. CIFAR-10 and CIFAR-100 are each split into a 25K training set, a 25K validation set, and a 10K test set. The training and validation sets of CIFAR-100 are used as $D_A^{(\textrm{tr})}$ and $D_A^{(\textrm{val})}$, respectively; the training and validation sets of CIFAR-10 are used as $D_B^{(\textrm{tr})}$ and $D_B^{(\textrm{val})}$, respectively. For the architecture search space of the feature extractors, we experimented with the search spaces of DARTS~\citep{liu2018darts}, P-DARTS~\citep{chen2019progressive}, and PC-DARTS~\citep{abs-1907-05737}. These search spaces are composed of $3\times 3$ and $5\times 5$ (dilated) separable convolutions, $3\times 3$ max pooling, $3\times 3$ average pooling, zero, and identity. For the CIFAR-100 classification head, we set it to a 100-way linear classifier. For the CIFAR-10 classification head, we set it to a 10-way linear classifier.
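The 25K/25K train/validation split above carves up each dataset's 50K official training images while leaving the 10K test set untouched. The paper does not specify the split procedure, so the index-level sketch below (a random permutation with a fixed, illustrative seed) is one simple way to realize it reproducibly:

```python
import numpy as np

# Split the 50K training indices of a CIFAR dataset into 25K train / 25K val.
def split_train_val(n_train=50_000, seed=0):
    idx = np.random.default_rng(seed).permutation(n_train)
    return idx[: n_train // 2], idx[n_train // 2 :]
```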
During architecture search, we perform two rounds of learning, with the order CIFAR-100, CIFAR-10, CIFAR-100, CIFAR-10. The tradeoff parameter $\lambda$ in interleaving learning was set to 100. The network of the feature extractor is a stack of 8 cells. Each cell has 7 nodes. We set the initial channel number to 16. We optimized the architecture variables using Adam~\citep{adam}. The learning rate was set to 3e-5 for IL-DARTS, 6e-4 for IL-P-DARTS, and 3e-3 for IL-PC-DARTS. The weight decay was set to 1e-3. We optimized the network weights using SGD. The initial learning rate was set to 0.025 for IL-DARTS and IL-P-DARTS and 0.1 for IL-PC-DARTS. The batch size was set to 64 for IL-DARTS and IL-P-DARTS and 256 for IL-PC-DARTS. The epoch number was set to 50 for IL-DARTS and IL-PC-DARTS and 25 for IL-P-DARTS. Weight decay was set to 3e-4 and momentum was set to 0.9. A cosine decay schedule was used for the learning rate. The search was performed on a single Tesla V100 GPU. During architecture evaluation, the searched cell in interleaving learning is evaluated on CIFAR-10 and CIFAR-100 independently. For either dataset, 20 copies of the searched cell are composed into a larger network as the feature extractor, which is trained on the combination of training and validation sets and tested on the test set. The initial channel number was set to 36. We trained the network for 600 epochs, with a mini-batch size of 96 for IL-DARTS and 64 for IL-P-DARTS and IL-PC-DARTS. The evaluation on CIFAR-10 and CIFAR-100 was performed on a single Tesla V100 GPU. Given the architecture searched on CIFAR10/100, we also evaluate it on ImageNet. Specifically, 14 copies of the searched cell are composed into a larger network as the feature extractor, which is trained on 1.2M training images in ImageNet and tested on 50K testing images. We set the initial channel number to 48. The number of epochs was set to 250. The batch size was set to 1024.
The training on ImageNet was performed on Tesla V100 GPUs. \subsubsection{Experimental Results} Table~\ref{tab:cifar100-il} and Table~\ref{tab:c10-il} show the results on CIFAR-100 and CIFAR-10, respectively, including classification errors on the test set, number of model parameters, and search cost (GPU days). As can be seen, applying interleaving learning (IL) to DARTS-2nd, P-DARTS, and PC-DARTS significantly reduces the errors of these baseline approaches. For example, on CIFAR-100, applying IL to DARTS-2nd reduces the error from 20.58\% to 17.12\% and applying IL to P-DARTS reduces the error from 17.49\% to 16.14\%. As another example, on CIFAR-10, applying IL to DARTS-2nd reduces the error from 2.76\% to 2.62\%. These results show that interleaving learning helps search for better architectures. With interleaving learning, the search tasks on CIFAR-100 and CIFAR-10 mutually benefit each other. The feature extractor $W_A$ trained on CIFAR-100 is used to initialize the feature extractor $W_B$ for CIFAR-10. Since $W_A$ is trained on CIFAR-100, it is better than random weights. Therefore, using $W_A$ to initialize $W_B$ is better than random initialization. Likewise, the $W_B$ trained on CIFAR-10 is used to initialize $W_A$ in the next round of training, which is better than random initialization. These two feature extractors mutually help each other to improve in the interleaving process. In baseline approaches, such a mechanism is missing. Therefore, interleaving learning achieves better performance than the baselines. One may wonder whether the performance gain of interleaving learning is due to the use of more data (CIFAR-100 plus CIFAR-10). To investigate this, we compare interleaving learning with a joint learning (JL) baseline with the following formulation: \begin{equation} \begin{array}{ll} \textrm{min}_{T} & L(T, W^*_A(T), H^*_A(T), D_A^{(\textrm{val})})+ L(T, W^*_B(T),H^*_B(T), D_B^{(\textrm{val})})\\ s.t.
& W^*_A(T), H^*_A(T),W^*_B(T),H^*_B(T) =\textrm{argmin}_{W_A, H_A, W_B, H_B} \;\;L(T, W_A, H_A, D_A^{(\textrm{tr})})+\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad L(T, W_B, H_B, D_B^{(\textrm{tr})}) \end{array} \end{equation} where $T$ denotes the architecture of the feature extractors; $W_A$ and $H_A$ denote the feature extractor weights and classification head of model $A$ (for CIFAR-100); $W_B$ and $H_B$ denote the feature extractor weights and classification head of model $B$ (for CIFAR-10). In the inner optimization problem (in the constraint), we train $W_A$, $H_A$, $W_B$, and $H_B$ by minimizing the training losses defined on both datasets, with the architecture fixed. In the outer optimization problem, we update the architecture by minimizing the validation losses of both datasets. In this formulation, the architecture is learned using both datasets. Table~\ref{tab:cifar100-il} and Table~\ref{tab:c10-il} show the results of this joint learning (JL) formulation. As can be seen, our IL method outperforms JL. For example, on CIFAR-100, when applied to DARTS-2nd, the error of IL is 17.12\% while that of JL is 18.92\%; when applied to P-DARTS, the error of IL is 16.14\% while that of JL is 17.67\%. As another example, on CIFAR-10, when applied to DARTS-2nd, the error of IL is 2.62\% while that of JL is 2.91\%. These results show that the performance gain of interleaving learning comes from the interleaving mechanism rather than from the use of more data. In the JL formulation, the weights of the feature extractors of models $A$ and $B$ are trained independently. In contrast, in IL, the weights of the feature extractors of models $A$ and $B$ mutually help each other to improve via pretraining in the interleaving process. Therefore, IL achieves better performance than JL. It is worth noting that while IL achieves better performance than the baselines, it does not substantially increase the number of parameters or the search cost.
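The key structural difference between JL and IL can be seen directly in the inner problem: with the architecture fixed, JL's summed training loss decouples over the two models, so $W_A$ and $W_B$ are effectively trained in isolation with no proximal coupling. A minimal sketch with illustrative quadratic gradients standing in for the real training-loss gradients:

```python
# Sketch of the JL baseline's inner problem: W_A and W_B take gradient
# steps on their own training losses only; unlike IL, neither model's
# weights are pulled toward the other's.
def jl_inner_train(W_A, W_B, grad_A, grad_B, eta=0.1, steps=200):
    for _ in range(steps):
        W_A = W_A - eta * grad_A(W_A)  # CIFAR-100 model, own loss only
        W_B = W_B - eta * grad_B(W_B)  # CIFAR-10 model, own loss only
    return W_A, W_B
```

The two models interact only through the shared architecture in the outer problem, whereas IL additionally couples their encoder weights stage by stage.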
Table~\ref{tab:imagenet-il} shows the results on ImageNet, including top-1 and top-5 errors, number of model parameters, and search cost (GPU days). From this table, we make the following observations. First, applying IL to DARTS and P-DARTS reduces the errors of these two baselines. For example, applying IL to DARTS-2nd reduces the top-1 error from 26.7\% to 25.5\% and the top-5 error from 8.7\% to 8.0\%. This further demonstrates the effectiveness of IL in searching for better architectures by encouraging two models to mutually help each other in the interleaving process. Second, IL achieves better performance than JL. For example, applied to DARTS-2nd, the top-1 and top-5 errors of IL are lower than those of JL. This further shows that the performance gain of IL comes from the interleaving process rather than from leveraging more data. Third, while achieving better performance, IL does not substantially increase the number of parameters or the search cost. \subsection{Summary} In this section, we apply Skillearn to formalize the interleaving learning (IL) skill of humans. In IL, a set of models collaboratively learn a data encoder in an interleaving fashion: the encoder is trained by model 1 for a while, then passed to model 2 for further training, then model 3, and so on; after being trained by all models, the encoder returns to model 1 and is trained again, then moves to models 2, 3, etc. This process repeats for multiple rounds. Via interleaving, different models transfer their learned knowledge to each other to better represent data and avoid being stuck in poor local optima. Experiments on neural architecture search on CIFAR-100 and CIFAR-10 demonstrate the effectiveness of interleaving learning. \subsection{Related Works} \subsubsection{Neural Architecture Search} Neural architecture search (NAS) has achieved remarkable progress recently; it aims to search for the optimal architecture of neural networks to achieve the best predictive performance.
In general, there are three paradigms of methods in NAS: reinforcement learning (RL) approaches~\citep{zoph2016neural,pham2018efficient,zoph2018learning}, evolutionary learning approaches~\citep{liu2017hierarchical,real2019regularized}, and differentiable approaches~\citep{cai2018proxylessnas,liu2018darts,xie2018snas}. In RL-based approaches, a policy is learned to iteratively generate new architectures by maximizing a reward, which is the accuracy on the validation set. Evolutionary learning approaches represent the architectures as individuals in a population. Individuals with high fitness scores (validation accuracy) have the privilege to generate offspring, which replace individuals with low fitness scores. Differentiable approaches adopt a network pruning strategy. On top of an over-parameterized network, the weights of connections between nodes are learned using gradient descent. Weights close to zero are then pruned. There have been many efforts devoted to improving differentiable NAS methods. In P-DARTS \citep{chen2019progressive}, the depth of searched architectures is allowed to grow progressively during the training process. Search space approximation and regularization approaches are developed to reduce computational overheads and improve search stability. PC-DARTS \citep{abs-1907-05737} reduces the redundancy in exploring the search space by sampling a small portion of a super network. Operation search is performed in a subset of channels, with the held-out part bypassed in a shortcut. Our proposed IL framework can be applied to any differentiable NAS method. \subsubsection{Adversarial Learning} Our proposed LPT involves a min-max optimization problem, which is analogous to that in adversarial learning.
Adversarial learning~\citep{goodfellow2014generative} has been widely applied to 1) data generation~\citep{goodfellow2014generative,yu2017seqgan}, where a discriminator tries to distinguish between generated images and real images and a generator is trained to generate realistic data by making such a discrimination difficult to achieve; 2) domain adaptation~\citep{ganin2015unsupervised}, where a discriminator tries to differentiate between source images and target images while the feature learner learns representations that make such a discrimination unachievable; and 3) adversarial attack and defence~\citep{goodfellow2014explaining}, where an attacker adds small perturbations to the input data to alter the prediction outcome and the defender trains the model so that the prediction outcome remains the same given perturbed inputs. Different from these existing works, in our work a tester aims to create harder tests to ``fail" the learner, while the learner learns to ``pass" the tests, however hard the tester makes them. \citet{shu2020identifying} proposed to use an adversarial examiner to identify the weaknesses of a trained model. Our work differs from this one in that we progressively re-train a learner model based on how it performs on the tests dynamically created by a tester model, while the learner model in \citep{shu2020identifying} is fixed and not affected by the examination results. \section{Conclusions} In this paper, we develop a general framework called Skillearn to formalize humans' learning skills into machine-executable learning skills and leverage them to train better machine learning models. Our framework can flexibly formulate many learning skills of humans by mapping learners, learnable parameters, interaction functions, learning stages, etc. in Skillearn to their counterparts in human learning. The formulated machine-executable learning skills can be applied to improve any ML model.
In two case studies, we apply Skillearn to formalize two learning skills of humans -- learning by passing tests (LPT) and interleaving learning (IL). In LPT, a tester model dynamically creates tests with increasing levels of difficulty to evaluate a testee model; the testee continuously improves its architecture by learning to pass the tests, however difficult the tester makes them. In IL, a set of models collaboratively learn a data encoder in an interleaving fashion: the encoder is trained by model 1 for a while, then passed to model 2 for further training, then to model 3, and so on; after being trained by all models, the encoder returns to model 1 and is trained again, then moves on to models 2, 3, etc. This process repeats for multiple rounds. Experiments on various datasets demonstrate that ML models trained with these two learning skills achieve significantly better performance.
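The interleaving schedule described above is simple to sketch in code. The toy below is our own illustration, not the architectures used in the experiments: a linear encoder is shared by K linear "models" with a squared loss, and the encoder is trained with head 1 for a number of steps, handed to head 2, and so on, with the whole cycle repeated for several rounds.

```python
import numpy as np

def interleave_train(X, Ys, rounds=3, steps=100, lr=0.05, d_hidden=8, seed=0):
    """Toy interleaving learning (IL): K 'models' share one encoder W.

    X: (n, d) inputs; Ys: list of K regression targets, one per model.
    The encoder W is trained together with head 1 for `steps` iterations,
    then handed to head 2, etc.; after all K models the cycle restarts,
    for `rounds` rounds. Linear encoder/heads and squared loss are
    illustrative assumptions only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, d_hidden))            # shared encoder
    heads = [rng.normal(scale=0.1, size=d_hidden) for _ in Ys]
    for _ in range(rounds):                                  # repeat the cycle
        for k, y in enumerate(Ys):                           # pass encoder on
            for _ in range(steps):
                H = X @ W                                    # encode the data
                err = H @ heads[k] - y                       # model-k residual
                W -= lr * (X.T @ np.outer(err, heads[k])) / n
                heads[k] -= lr * (H.T @ err) / n
    return W, heads
```

Because every model updates the same `W`, knowledge acquired while fitting one target is available to the next model in the cycle, which is the intuition behind the interleaving process.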
1105.2506
\section{Introduction} How the outer atmosphere of the Sun, the solar corona, is heated to several million degrees Kelvin is one of the most compelling questions in space science \citep{Klimchuk_2006}. Simple thermal conduction from below is clearly not the answer, since the corona is more than two orders of magnitude hotter than the solar surface. Indeed, whatever mechanism heats the corona must do so in the face of strong energy {\it losses} from both downward thermal conduction and radiation. Soft X-ray and EUV images of the corona reveal many beautiful loop structures---arched magnetic flux tubes filled with plasma. It is generally agreed that warm loops -- whose temperature is {\it only} about 1 MK, well observed in EUV images -- are bundles of unresolved thin strands that are heated by small energy bursts called nanoflares (\citealp{parker_1988,Gomez_93ApJ,Warren_2002ApJ,Klimchuk_2006,Sakamoto_2008}). Identifiable warm loops account for only a small fraction of the coronal plasma, however. Most emission has a diffuse appearance, and the question remains as to how this dominant component is heated, especially in the hotter central parts of active regions. Is it also energized by nanoflares, or is the heating more steady? Recent observations have revealed that small amounts of extremely hot plasma are widespread in active regions \citep{Reale_2009ApJ} and are consistent with the predictions of theoretical nanoflare models \citep{Klimchuk_2008ApJ}. This suggests that nanoflare heating may indeed be universal. However, the conclusion is far from certain \citep{Brooks_2009ApJ}. The work reported here sheds new light on this fundamental question. A magnetic strand that is heated by a nanoflare evolves in a well-defined manner. Its light curve (intensity vs. time) has a characteristic shape: the intensity rises quickly as the nanoflare occurs, levels off temporarily, then enters a longer period of exponential decay as the plasma cools (\citealp{L_Fuentes_2010ApJ}).
If we could isolate individual strands in real observations, it would be easy to establish whether the heating is impulsive or steady. Unfortunately, this is not the case. The corona is optically thin, so each line of sight represents an integration through a large number of overlapping translucent strands. Nonetheless, it may be possible to infer the presence of nanoflares. Actual light curves exhibit both long- and short-term temporal variations. Some of the short-term fluctuation is due to photon statistical noise, but some may be caused by nanoflares. The amplitude of the fluctuations seems to be larger than expected from noise alone \citep{Sakamoto_2008,Sakamoto_2009ApJ,Vekstein_2009A&A}. However, this is difficult to determine with confidence, because the precise level of noise depends on the temperature of the plasma, and this is known only approximately in these studies. As we report here for the first time, there is another method for detecting nanoflares from intensity fluctuations that does not depend sensitively on the noise. If heating is impulsive, we expect the light curves of individual strands to be asymmetric. The strand is bright for less time than it is faint, and when it is bright it is much brighter than the temporal average. This results in a distribution of intensities that is also very asymmetric. A good measure of the asymmetry is the difference between the median and mean values. This is a generic property of light curves that are dominated by an exponential decay, as is the case with nanoflares. We use this property to demonstrate that nanoflares are occurring throughout a particular active region that we studied in detail. Since the light curve at each pixel in the image set is a composite of many light curves from along the line of sight, the asymmetries of the intensity distributions and the differences between the median and mean values are small.
We use both statistical analysis and quantitative modeling to show that the differences are nonetheless significant and consistent with widespread nanoflaring in the active region. In Section~\ref{sec:data} we describe the data analysis and results, in Section~\ref{sec:model} we interpret the results in the light of Monte Carlo simulations and loop modeling, and in Section~\ref{sec:disc} we discuss the overall scenario. \section{Data analysis} \label{sec:data} \subsection{The observation and preliminary analysis} The grazing-incidence X-Ray Telescope (XRT) \citep{Golub_2007,Kano_2008SoPh,Narukage_2011SoPh} on the Hinode spacecraft \citep{Kosugi_2007} detects plasma in the temperature range $6.1 < \log~T<7.5$ with 1 arcsec spatial resolution. Active region AR 10923 was observed on 14 November 2006 near the center of the solar disk. It had also been studied previously in other ways \citep{Reale_2007Sci,Reale_2009ApJ}. The observations used for this study were made in the Al\_poly filterband starting at 11~UT and lasting $\sim 26$ min. A total of 303 images were taken with a 0.26 s exposure at cadence intervals between 3 and 9 s. No major flare activity or significant change in the morphology occurred during this time. We concentrated on a 256$\times$256 arcsec$^2$ field of view and used the standard XRT software to calibrate the data. The images were co-aligned using the jitter information provided with the data. \subsection{Data cleaning} Because we are interested in low-level systematic variations that could be indicative of nanoflares, we removed pixels from the dataset that show phenomena which may obscure the effect we are attempting to study. Our analysis is best applied to light curves that are approximately constant or that exhibit only a slow linear trend.
We therefore excluded pixels that have a low signal or that show macroscopic variations that might be attributed to cosmic-ray hits, microflares or other transient brightenings, or to slow variations due, for instance, to local loop drifts or motions. We discuss each of these possibilities in turn. Since we expect the fluctuations that result from episodic heating to be erratic and of very small amplitude, they may be very difficult to distinguish from the noise, so we removed all pixels with an average count rate below $30$~DN/s. This is essentially the entire dark area outside of the active region proper. These pixels amount to $\sim 11$\% of the total. We removed all pixels affected by bright spikes due to cosmic rays or point-like brightenings. These pixels were identified by the condition that the signal is at least $1.5$ times the spatial median of the immediately surrounding pixels \citep{Sakamoto_2009ApJ}. They represent $\sim 15 \%$ of the total. We also excluded continuous macroscopic events, i.e. large-scale events such as \emph{microflares}. To this end, we performed a linear fit of the pixel light curve and removed the pixels whose intensity reached or exceeded 1.5 times the best-fit line at any time during the observation. These account for $\sim 10 \%$ of the total. Finally, we removed slow intensity variations due to displacement or drift of coronal structures along the line of sight. We used a method based on counting the number of crossings of the best-fit line by the light curve. If the fluctuations of \emph{m} data points around the linear fit are completely random, the time profile has \emph{m-1} opportunities to cross the linear fit, each with probability $0.5$. The \emph{``number of crossings''} then follows a \emph{binomial} distribution with a mean of ${(m-1)}/{2}$ and a standard deviation of ${\sqrt{(m-1)}}/{2}$.
Assuming that the duration of intrinsic intensity fluctuations is shorter than the observing time ($\sim 26$ min), while the duration of the fluctuations due to loop drifts or motions is comparable with the observing time, the number of crossings due to loop motions should be smaller than $\sqrt{(m-1)}/{2}$. We removed all pixels where the number of crossings is smaller than the mean of the binomial distribution ($\sim 7 \%$). At the end of the cleaning we are left with about $56\%$ of the total number of pixels, as shown in Figure \ref{fig1}. \subsection{Temporal analysis} The light curves of the remaining pixels (green in Fig.\ref{fig1}) can be fitted satisfactorily with a linear regression. The slopes tend to be very small ($0\pm0.15$ in $90\%$ of the cases), and there is no preference for increasing or decreasing intensity. Figure \ref{fig2} shows light curves for two sample pixels with the linear fit in blue and 9-point ($\sim 1$ min) running averages in green. The light curve in the lower panel is one with a highly negative median, and on it we mark three decaying exponentials that fit the respective data segments well and provide good evidence for cooling (see Sections~\ref{sec:mc},\ref{sec:disc}). We measure intensity fluctuations relative to the linear fit according to: \begin{equation} dI(x,y,t) = \frac{I(x,y,t) - I_0(x,y,t)}{\sigma_P(x,y,t)} \label{eq:fluc} \end{equation} \noindent where $I(x,y,t)$ is the count rate (DN/s) at position $[x,y]$ and time $t$, $I_0(x,y,t)$ is the value of the linear fit at the same position and time, and $\sigma_P(x,y,t)$ is the photon noise, estimated as the standard deviation of the pixel light curve with respect to the linear fit, with a small correction to account for the variation of the average count rate with time (described by the linear fit)\footnote{An alternative possibility is to estimate the photon noise from the nominal relations with signal intensity.
These relations require the conversion from DN to photon counts, and therefore depend on the source's emitted spectrum. This introduces a strong dependence on the temperature of the emitting plasma. Thus, to estimate the photon noise in this way one has to make an assumption about the plasma temperature. This is not straightforward in an inhomogeneous active region, and we preferred a model-independent approach.}. The distribution of the intensity fluctuations (Fig.\ref{fig3}) is not symmetric at either pixel. There is a slight excess of negative fluctuations (fainter than average emission) compared to positive ones. The mean fluctuation is 0, by definition, but the median fluctuation (normalized to $\sigma_P$) is $-0.08 \pm 0.07$ in the brighter pixel (upper panel of Fig.~\ref{fig2}) and $-0.12 \pm 0.07$ in the fainter pixel (lower panel of Fig.~\ref{fig2}). The uncertainties in the median values have been rigorously computed according to \cite{Hong_2004ApJ}. Since the fluctuations of each pixel light curve are normalized in the same way, we can build a distribution with higher statistical significance simply by including the fluctuations from more pixels. Figure \ref{fig4} (left panel) shows the distributions for the three $32 \times 32$ pixel sub-regions marked in Figure \ref{fig1} and for the whole active region. Subtle asymmetries can be detected by eye when compared to the Gaussian distribution shown as a dashed curve. The right panel of Figure \ref{fig4} shows the distributions of the median values themselves, computed individually at each pixel. There is a clear preference for the medians to be negative. The median averages (coinciding with the peaks of the median distributions, which are highly symmetric) are between $-0.025 \pm 0.002$ and $-0.030 \pm 0.002$ for the sub-regions and $-0.0258 \pm 0.0004$ for the entire active region. Uncertainties are estimated according to \cite{Hong_2004ApJ}.
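The statistic of Equation~\ref{eq:fluc} is straightforward to reproduce. The sketch below is our own illustration (it omits the small time-dependent correction to $\sigma_P$ mentioned above): it fits the linear trend $I_0$, normalizes the residuals by the scatter, and returns the median of the normalized fluctuations for one pixel.

```python
import numpy as np

def median_offset(t, rate):
    """Median of the normalized fluctuations dI for one pixel light curve.

    rate: count-rate light curve (DN/s); t: observation times.
    I_0 is the linear best fit, and sigma_P is estimated, as in the text,
    from the scatter of the light curve about that fit.
    """
    a, b = np.polyfit(t, rate, 1)        # linear trend -> I_0(t)
    i0 = a * t + b
    sigma_p = np.std(rate - i0)          # model-independent noise estimate
    dI = (rate - i0) / sigma_p           # normalized fluctuation
    return np.median(dI)                 # mean is 0 by construction
```

For a light curve dominated by exponential decays the pixel is faint for longer than it is bright, so the median falls below the mean and the returned value is negative; for symmetric noise it stays near zero.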
Results for the active region and the selected subregions are listed in Table \ref{tbl-1}. The fact that the results are similar in the subregions and in the whole active region (and the significance increases) is important because it shows that the effect is widespread and real. Were it due simply to random Gaussian fluctuations (or fluctuations of any random variable that is symmetrically distributed), the magnitude would decrease as more and more pixels are included in the statistics, i.e., the effect would be smaller for the whole active region. Furthermore, if the effect were due entirely to photon noise, which obeys Poisson statistics (see next Section), then increasing the sample size would bring the Poisson distribution closer to a symmetric Gaussian and decrease the difference between the median and the mean (i.e., bring the median closer to zero). However, the measured median is just as large for the entire active region as it is for the sub-regions. \section{Modeling and interpretation} \label{sec:model} \subsection{Monte Carlo simulations} \label{sec:mc} Photon counting obeys Poisson statistics, and since the Poisson distribution is asymmetric, part of the negative offset of the median values is due to photon noise. We determine how much by performing Monte Carlo simulations to generate synthetic light curves for an appropriate number of pixels. As a null hypothesis, we assume that the fluctuations at each pixel are due only to photon noise, i.e., that the intrinsic light curve is flat. To simulate this, we start from an observed emission map obtained by time-averaging all the actual images. We then introduce synthetic noise at each pixel using Poisson statistics, with the same average fluctuation amplitude as observed, derived according to Equation \ref{eq:fluc}. In this way we obtain a noisy light curve with fluctuations Poisson-distributed around the zero value.
We repeat this procedure for all valid pixels, thereby obtaining a datacube of artificial XRT images exactly analogous to the real one. We can then apply the same analysis to the synthetic data. As already mentioned, we obtain asymmetric distributions from the null hypothesis. For the three subregions marked in Figure \ref{fig1} we obtain median values between $-0.013 \pm 0.002$ and $-0.018 \pm 0.002$. These values are incompatible with, and significantly smaller in absolute value than, those measured from the observational data ($-0.025/-0.030 \pm 0.002$). For the whole region we obtain $-0.0164 \pm 0.0004$, to be compared to $-0.0258 \pm 0.0004$ from the data. Analogously, we have computed that for all pixels with an average rate $\geq 800$ and $\geq 1600$ DN/s the median for the whole region is $-0.0096 \pm 0.0009$ and $-0.0096 \pm 0.0017$, respectively, to be compared with the observational values ($-0.0160 \pm 0.0009$ and $-0.0136 \pm 0.0018$) for the same threshold values. Our next step is to perturb the intrinsically flat light curves with a sequence of random segments of exponential decays, linked one to the other. We slightly reduce the constant offset so as to maintain the same average DN rate after adding the perturbations, which are all positive. The parameters of the perturbations are the $e$-folding time, $\tau$, the average time interval between two successive perturbations, $dt$, and the amplitude, $A$. The $e$-folding time is fixed for each simulation. The time interval is Poisson-distributed around the average value, since each perturbation is triggered an integer number of frames after the previous one. Since the number of frames is relatively large (tens), the Poisson distribution approaches a Gaussian one. The amplitude is uniformly random between 0.5 and 1.5 times the average value. The flat light curve becomes ``\emph{saw-toothed}'', but non-periodic, with descending exponential trends.
This new light curve is then randomized according to the pixel's average counting statistics, as was done for the constant light curve (Fig.\ref{fig5}). Again, we repeat this procedure for all valid pixels to obtain new datacubes, which we analyze as if they were real data. We perform a sample exploration of the parameter space. In particular, we consider reasonable loop cooling timescales as possible $e$-folding times, i.e. $\tau = 180,\;360,\;540\;s$. The larger values are more likely for realistic active region loops of length $5 - 10 \times 10^{9}$ cm, according to the loop cooling times ($\tau_{s}$), which are of the order of \citep{Serio_1991A&A}: \begin{equation} \tau_{s} = 4.8 \times 10^{-4} \frac{L}{\sqrt{T_0}} = 120 \frac{L_9}{\sqrt{T_{0,7}}} \label{eq:serio} \end{equation} where $L$ ($L_9$) is the loop half-length (in units of $10^9$ cm) and $T_0$ ($T_{0,7}$) is the loop maximum temperature (in units of $10^7$ K). To give a significantly negative median, each exponential must be visible uninterrupted for a relatively long time, all the more so since its amplitude is relatively small with respect to the constant background. Therefore we have set the average time interval between two successive perturbations to a value compatible with the chosen $e$-folding time. We make two different sets of simulations, with amplitude A = $30$ and $60\;DN/s$. The results of the simulations are listed in Tables \ref{tbl-2} and \ref{tbl-3}. The median values from the simulations approach those obtained from the data for all values of $\tau$, for $A = 60$ DN/s, and for time intervals of the order of or larger than $\tau$ (Figures \ref{fig4} and \ref{fig6}). The best match with the data is obtained with $A = 60$ DN/s, $\tau = 360\;s$ and $dt = 360\;s$. It is worth commenting further on the distribution of median values obtained from the individual pixels (Figures \ref{fig4} and \ref{fig6}, right panels).
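The Monte Carlo recipe just described can be sketched in a few lines. The function below is our own illustration: numbers are taken from the text where given (303 frames, 0.26~s exposures, $A$, $\tau$, $dt$), while the mean rate, cadence, and other details are assumptions. A flat rate is perturbed by a non-periodic sequence of exponential decays and then randomized with Poisson counting noise; setting $A=0$ gives the null-hypothesis (photon noise only) case.

```python
import numpy as np

def simulate_pixel(mean_rate=300.0, n=303, exposure=0.26, cadence=5.0,
                   A=60.0, tau=360.0, dt=360.0, rng=None):
    """One synthetic pixel light curve (DN/s) for the Monte Carlo test."""
    rng = rng or np.random.default_rng()
    t = np.arange(n) * cadence
    rate = np.full(n, mean_rate)
    if A > 0:
        start = 0.0
        while start < t[-1]:
            amp = A * rng.uniform(0.5, 1.5)          # 0.5-1.5 of mean amplitude
            rate += np.where(t >= start,
                             amp * np.exp(-(t - start) / tau), 0.0)
            # next onset: an integer number of frames, Poisson-distributed
            start += max(1, rng.poisson(dt / cadence)) * cadence
        rate += mean_rate - rate.mean()              # keep the same mean rate
    counts = rng.poisson(rate * exposure)            # photon (Poisson) noise
    return t, counts / exposure
```

Feeding many such light curves through the analysis of Equation~\ref{eq:fluc} separates the median offset due to photon noise alone ($A=0$) from the extra offset contributed by the decaying perturbations ($A>0$).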
As we have discussed, a negative median is indicative of exponentially decreasing intensity and cooling plasma (and also, to some degree, of Poisson photon statistics). However, a sizable fraction of the observed median values are positive. Without the benefit of our simulations, we might conclude that these pixels do not have cooling plasma. The good agreement between the observed (Fig.\ref{fig4}, right panel) and simulated (Fig.\ref{fig6}, right panel) distributions, both in terms of the centroid offset and the width, shows that the observations are in fact consistent with all of the pixels having cooling plasma. Positive median values occur when photon statistics mask the relatively weak signal of the exponentially decreasing intensity. \subsection{Loop hydrodynamic modeling} \label{sec:loopm} In a possible scenario, a coronal loop consists of many independent strands, each ignited by a heat pulse that we call a nanoflare. The evolution of the plasma confined in a single strand driven by a heat pulse has been described in the past by means of time-dependent hydrodynamic loop models \citep{Nagai80,Peres82,Cheng83,Fisher85,MacNeice86}. The light curve in Figure~\ref{fig7} is synthesized in the Hinode/XRT Al\_poly filterband from the results of a hydrodynamic model of a nanoflaring strand \citep{Guarrasi010}. This hydrodynamic simulation has been used successfully to explain entirely different observational results, which indicates that the parameters are realistic. The strand half-length is $3 \times 10^9$ cm. The heat pulse of the single strand is a \emph{top-hat} function in time, with the high state lasting $60\;s$, and it is uniformly distributed in space along the strand. Its intensity is 0.38 erg cm$^{-3}$ s$^{-1}$ and brings the strand to a maximum temperature $\log T \approx 7$. The total energy injected in the strand is therefore $\approx 1.4 \times 10^{11}$ erg cm$^{-2}$, to be multiplied by the strand cross-section area.
The loop hydrodynamic simulations are one-dimensional, and in the synthesis of the loop emission the cross-section area is a free parameter. We have chosen the cross-section area so as to have an emission peak of 60 DN/s, a realistic value suggested by the Monte Carlo simulations described above. The light curve is characterized by a steep rise phase, a short plateau, and a much longer decay phase, which can be well approximated by a decreasing exponential (Figure~\ref{fig7}). For this particular model strand (it depends on the strand half-length, see Eq.~\ref{eq:serio}), the best-fit $e$-folding time is $\sim 300$ s. We verified that the median intensity (7.0 DN/s) is much smaller than (less than half of) the mean intensity (16.6 DN/s). \section{Discussion} \label{sec:disc} We find evidence that the light curves in each pixel of an active region have systematic features: the distribution of intensity fluctuations is asymmetric and the median value is less than the mean. The effect is confirmed, at an even higher level of significance, when summed over larger and larger parts of the region, and is therefore widespread and real. We have also shown that part of the negative offset of the median values is due to photon noise. We determine how much by performing Monte Carlo simulations to generate synthetic light curves. Comparing the value of the median for the entire region in Table~\ref{tbl-1} with the value of the median for the simulations with Poisson noise only (\emph{null hypothesis}, $A = 0$, $threshold = 30$ in Table~\ref{tbl-2}), we see that the Poisson noise accounts for only $\sim 60$\% of the negative shift of the median. The significance of the remainder is at the $5 \sigma$ level for the subregions and at the $25 \sigma$ level for the active region! We also perform simulations meant to represent cooling plasma by randomly adding segments of exponential decays onto the constant background intensity. Photon noise is included as explained above.
The resulting light curves (see Figure \ref{fig5}) look similar to those in Figure \ref{fig2}. The distributions of the intensity fluctuations agree well with the observations, with median values that have a similar negative offset. As an aside, the parameters of the simulations lead to realistic constraints on the loop substructure (see the Appendix). We roughly estimate a possible strand diameter of around $10^7$ cm, i.e. a fraction of an arcsec, not far from the resolution of current instruments. These are probably the most significant nanoflare events, the high tail of a distribution. The bulk of the events may occur with higher frequency and in finer strands. We remark that our analysis is entirely independent of the filter calibration and largely model-independent. The data error is in principle dependent on the emitted spectrum, and therefore on the plasma temperature and filter calibration, but we have estimated it directly from the noise of the light curves. The model we use in the Monte Carlo simulations is very simple and has a minimal set of free parameters. Previous attempts to determine the nature of coronal heating outside of isolated warm loops have been inconclusive (\citealp{Brooks_2009ApJ,Tripathi_2011ApJ}). Our study provides strong evidence for widespread cooling plasma in active region AR 10923. This points to heating that is impulsive and definitively excludes steady heating, which in turn suggests that nanoflares play a universal role in active regions. We favor nanoflares occurring within the corona, but we do not exclude that our observations may also be consistent with the impulsive injection of hot plasma from below, as has recently been suggested \citep{De_Pontieu_2011Sci}. \bigskip \acknowledgements{We thank the anonymous referee for very useful suggestions. We also thank M. Caramazza and Y. Sakamoto for help in the data analysis.
Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (U.K.), NASA, ESA, and NSC (Norway). F.R., S.Te. and M.M. acknowledge support from Italian Ministero dell'Universit\`a e Ricerca and Agenzia Spaziale Italiana (ASI), contracts I/015/07/0 and I/023/09/0. The work of J.A.K. was supported by the NASA Supporting Research and Technology and LWS Targeted Research and Technology programs.}
2004.06452
\section{Introduction} It is not easy to find an astrophysical phenomenon with as few clues to the physical nature of its cause as the (quasi)periodic modulation of the light curves of RR~Lyrae stars (or, as it is commonly known, the Blazhko effect -- \citealt{1907AN....175..325B}). Although several ideas have been suggested, none of them has succeeded, likely because of the missing underlying physics in those ideas -- see Koll\'ath in these proceedings and also the earlier reviews of \cite{2016CoKon.105...61K} and \cite{2016pas..conf...22S}. Under these circumstances, most of the work in this field focuses on further analyses of observational data to spark some workable idea. In this purely experimental approach, various studies aim at finding new frequency components, establishing relations among the observed parameters (e.g., \citealt{2020MNRAS.494.1237S}), and deriving accurate population parameters, such as incidence rates. In this work we follow the latter thread by extending our earlier work on the incidence rate of Blazhko stars \citep{2018A&A...614L...4K}. \section{Method of Analysis} The hunt for transiting extrasolar planets has revolutionized the analysis of photometric time series. Considering the small signals to be searched for, filtering out systematic effects from both ground- and space-based observations has become of vital importance. Many methods have been developed primarily for transiting planet searches, e.g., TFA, SysRem, PDC, SFF, EVEREST, respectively, by \cite{2005MNRAS.356..557K}, \cite{2005MNRAS.356.1466T}, \cite{2012PASP..124..985S}, \cite{2014PASP..126..948V}, \cite{2018AJ....156...99L}. However, only a few of these have been extended to general variability searches \citep{2016MNRAS.459.2408A}, primarily because of the difficulties encountered in separating the instrumental and environmental systematics from the underlying astrophysical signal.
In particular, except for the very recent study by \cite{2019ApJS..244...32P}, all previous investigations of the Blazhko phenomenon have been carried out on datasets using Simple Aperture Photometry (SAP) for the space data \citep{2014ApJS..213...31B} and standard ensemble photometry for the ground-based data (e.g., analyses performed on the OGLE database -- see \citealt{2017MNRAS.466.2602P}). The main reason why standard methods (e.g., SFF by \citealt{2014PASP..126..948V}) are not applicable to variable stars with dominating large-amplitude variations is that these methods employ the ``null-signal'' assumption.\footnote{That is, there is no underlying variation, except for the brief moments of the transit events.} Although the inclusion of Gaussian process models has proven to be a useful way of preserving stellar variability \citep{2016MNRAS.459.2408A}, here we resort to another approach, which we already employed in our pilot survey of Blazhko stars in fields C01-C04 of the K2 mission \citep{2018A&A...614L...4K}. \begin{figure}[!h] \vspace{-10pt} \begin{minipage}[c]{0.55\textwidth} \includegraphics[angle=-0, width=1.0\textwidth]{fig1_gk.ps} \end{minipage}\hfill \begin{minipage}[c]{0.40\textwidth} \vspace{10pt} \caption{Basic constituents of the time series model used to search for small signal components. It is assumed that a good approximation is available for the frequencies entering in the Fourier representation of the large amplitude pulsation.
We use a $15^{\rm th}$ order Fourier sum with $100$--$400$ cotrending time series to compute systematics-free residuals to be searched for faint signals.~Normalized fluxes are used throughout the data processing.} \label{equations} \end{minipage} \vspace{-10pt} \end{figure} As briefly summarized in Fig.~\ref{equations}, the observed signal is represented by the sum of two distinct components: i) the large amplitude pulsation, modelled by the Fourier series $\{b_k,c_k\}$, and ii) the systematics, estimated by the linear combination of the time series $\{U_j(i)\}$. This latter set may include other photometric time series from the same field (or combinations thereof -- often referred to as cotrending vectors -- e.g., \citealt{2012PASP..124..985S}) and stellar image parameters (e.g., positions on the CCD chip). While using cotrending vectors enables us to capture the common features of the light variation (the very essence of systematics), the image parameters help to disentangle those variations in the stellar flux that have their origin in the peculiarities of the given stellar image \citep{2010ApJ...710.1724B}. For the cotrending time series we select bright stars uniformly distributed throughout the field of the campaign under scrutiny. Whenever available, the cotrending set is extended by the pixel coordinates of the image center. \begin{figure}[!h] \vspace{15pt} \begin{minipage}[c]{0.55\textwidth} \includegraphics[angle=-0, width=1.0\textwidth]{fig2_gk.ps} \end{minipage}\hfill \begin{minipage}[c]{0.40\textwidth} \vspace{-30pt} \caption{Flow chart of the algorithm employed in this work to separate the large amplitude component and the systematics from the observed light curve $\{T(i)\}$ and access the small amplitude residual $\{R(i)\}$ to be searched for additional Fourier components.
See Fig.~\ref{equations} and the text for additional discussion of the symbols and the method.} \label{flow_chart} \end{minipage} \end{figure} Because the pulsation frequency is known only approximately from the direct analysis of the SAP data, we need to perform an iterative search to make the frequency determination more accurate. This procedure serves to avoid artificial remnant power in the residuals, which could be incorrectly identified as a modulation component. The multistep procedure is depicted in Fig.~\ref{flow_chart}. In the final grand fit (step 9) we also include the newly found small amplitude components, to minimize the effect of incomplete signal modeling. The main interest of this paper is step 8, the search for additional frequency components and, in particular, for those components that are close to the pulsation frequency (and its harmonics). \section{Results} Following the RR~Lyrae list of \cite{2018A&A...620A.127M}, as in our former study, we gathered the publicly available items from the NASA Exoplanet Archive\footnote{\url{https://exoplanetarchive.ipac.caltech.edu/}}. We relied mostly on the database stored in the ExoFOP section of the site by \cite{2015ApJ...811..102P} (see also \citealt{2016MNRAS.459.2408A}). When a target was missing from the ExoFOP section but was available through the Kepler pipeline \citep{2012PASP..124.1000S}, we used this latter set. In both cases only the respective raw (i.e., SAP) fluxes were used, with TFA for correcting systematics. As in our earlier work, the classification of the variables was based on the residual frequency spectra: a star was declared a Blazhko variable if there were one or more significant peaks in the residual spectrum near the fundamental frequency.
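The linear core of the scheme in Figs.~\ref{equations} and \ref{flow_chart} -- a Fourier sum at the pulsation frequency $\nu_0$ plus a linear combination of cotrending time series, fitted by least squares -- can be sketched as follows. This is our own illustration; a single least-squares pass stands in for the full iterative frequency refinement and the final grand fit.

```python
import numpy as np

def residual_after_fit(t, flux, nu0, cotrend, order=15):
    """Fit a Fourier sum at nu0 plus cotrending series; return residual R(i).

    t, flux: the normalized target light curve; cotrend: list of cotrending
    time series U_j sampled at the same times; order: number of harmonics.
    """
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):              # Fourier terms b_k, c_k
        cols.append(np.sin(2 * np.pi * k * nu0 * t))
        cols.append(np.cos(2 * np.pi * k * nu0 * t))
    cols.extend(cotrend)                       # systematics U_j(i)
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return flux - A @ coef                     # search this for modulation
```

A Blazhko modulation would then show up as significant peaks near $\nu_0$ and its harmonics in the Fourier spectrum of the returned residual.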
\definecolor{magicmint}{rgb}{0.67, 0.94, 0.82} \definecolor{almond}{rgb}{0.94, 0.87, 0.8} \definecolor{blizzardblue}{rgb}{0.67, 0.9, 0.93} \definecolor{paleaqua}{rgb}{0.74, 0.83, 0.9} \begin{table}[!h] \centering \caption{Result of the search for modulation components.} \label{incidence} \scalebox{1.0}{ \begin{tabular}{crrrrc} \hline Field & $N_{\rm tot}$ & $N_{\rm RRab}$ & $N_{\rm BL}$ & $N_{\rm nonBL}$ & $N_{\rm BL}/N_{\rm RRab}$ \\ \hline\hline 01 & 14 & 14 & 9 & 5\hskip 5pt & 0.64 \\ 02 & 57 & 57 & 52 & 5 & 0.91 \\ 03 & 62 & 62 & 59 & 3 & 0.95 \\ 04 & 56 & 33 & 30 & 3 & 0.91 \\ 05 & 57 & 57 & 52 & 5 & 0.91 \\ 06 & 142 & 107 & 99 & 8 & 0.93 \\ \rowcolor{paleaqua} 07 & 353 & 235 & 184 & 51 & 0.78 \\ 08 & 47 & 46 & 44 & 2 & 0.96 \\ \hline Sum: & 788 & 611 & 529 & 82 & 0.87 \\ \hline \end{tabular}} \begin{flushleft} {\bf Notes:}~{\small $N_{\rm tot}$: total number of RRab stars from the list of \cite{2018A&A...620A.127M}; $N_{\rm RRab}$: actual number of RRab stars publicly available from the NASA Exoplanet Archive; $N_{\rm BL}$: number of Blazhko stars; $N_{\rm nonBL}$: number of non-Blazhko stars (single period or other). Campaign $07$ is highlighted because of its low Blazhko rate -- see text.} \end{flushleft} \end{table} The campaign-by-campaign incidence rates are shown in Table~\ref{incidence}. We missed certain targets from the list of \cite{2018A&A...620A.127M} because they were not included in the publicly available databases. Nevertheless, this study is based on a four times larger sample than our pilot survey \citep{2018A&A...614L...4K}, yielding a statistically more significant estimate of the high incidence rate of the Blazhko effect. The C07 field apparently has a low rate, but this is likely due to the overabundance of faint, blended targets associated with the Sagittarius dwarf galaxy.
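The aggregate incidence rate in the bottom row of Table~\ref{incidence} follows directly from the per-campaign counts; a quick check (values transcribed from the table) also shows that excluding the blending-affected C07 raises the rate further:

```python
# (N_RRab, N_BL) per campaign, transcribed from the incidence table
counts = {
    "01": (14, 9),   "02": (57, 52),  "03": (62, 59),   "04": (33, 30),
    "05": (57, 52),  "06": (107, 99), "07": (235, 184), "08": (46, 44),
}
n_rrab = sum(n for n, _ in counts.values())    # total RRab stars analysed
n_bl = sum(b for _, b in counts.values())      # total Blazhko detections
rate = n_bl / n_rrab                           # overall incidence rate
rate_no_c07 = (n_bl - 184) / (n_rrab - 235)    # rate without campaign 07
print(f"{n_rrab} {n_bl} {rate:.2f} {rate_no_c07:.2f}")
```

This reproduces the summed rate of $0.87$ and, without C07, a rate of about $0.92$, consistent with the per-campaign values of the remaining fields.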
\begin{figure}[!h] \centering \subfloat{\includegraphics[angle=-0, width=0.45\textwidth]{fig3a_gk.ps}} \qquad \subfloat{\includegraphics[angle=-0, width=0.45\textwidth]{fig3b_gk.ps}} \caption{Residual Fourier spectra of a star from two overlapping campaigns, computed after subtracting the variation corresponding to the monoperiodic pulsation. Insets show zooms near the fundamental frequency $\nu_0$. Interestingly, the modulation power is higher at the harmonics of $\nu_0$. The panels on the right show the systematics-filtered light curves (the magnitude zero point is arbitrary). The object was classified as a non-Blazhko variable by \cite{2019ApJS..244...32P} -- see text for possible reasons.} \label{plachy1} \end{figure} In a recent paper, \cite{2019ApJS..244...32P} report a more traditional observed rate of $\sim 50$~\%, based on the analysis of the fields of C03$-$C06. Although we did not make a star-by-star comparison between their results and ours, the following examples illustrate possible sources of the discrepancy. Figure~\ref{plachy1} shows an example of a target with a short modulation period that was likely misclassified because of the poor performance of the K2SC detrending algorithm in this particular case. Figure~\ref{plachy2}, on the other hand, shows the frequency spectra of two stars from their Fig.~12.\footnote{The third star shown in their figure is not accessible from the public archives used in our work.} We suspect that in these cases K2SC might have overfitted the data and eliminated the small-amplitude, long-period modulation. We also note that their classification (Plachy, private communication) relies on multiple components (including the less sensitive O$-$C analysis), which leads to the de-selection of many candidates that otherwise show signatures of modulation in the frequency spectra.
\begin{figure}[!h] \centering \subfloat{\includegraphics[angle=-90, width=0.48\textwidth]{fig4a_gk.ps}} \quad \subfloat{\includegraphics[angle=-90, width=0.48\textwidth]{fig4b_gk.ps}} \caption{Fourier spectra of two stars classified as non-Blazhko variables by \cite{2019ApJS..244...32P}. The figure layout is the same as in Fig.~\ref{plachy1}. High-frequency peaks above $8$~c/d are aliases (via the half-hour sampling) of the higher (16th and up) harmonics left in the time series. The heights of the side lobes are $\sim 5$ and $\sim 0.5$~ppt for 206333750 and 210697426, respectively. } \label{plachy2} \end{figure} \acknowledgements We would like to thank the organizers (and, in particular, the Chair of the LOC, Karen Kinemuchi) for their efforts in making this workshop scientifically fruitful and socially memorable. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Support from the National Research, Development and Innovation Office (grants K~129249 and NN~129075) is acknowledged.
\section{Introduction} The concept of Hausdorff dimension has led to interesting results in the theory of profinite groups; for instance, see~\cite{KlThZR19} and the references therein. Let $G$ be an infinite countably based profinite group and let $\mathcal{S}$ be a \emph{filtration series} of $G$, that is, a chain $G = S_0\ge S_1 \ge S_2 \ge \ldots$ of open normal subgroups $S_i \trianglelefteq_\mathrm{o} G$ such that $\bigcap_i S_i = 1$. These subgroups form a base of neighbourhoods of $1$ and induce a translation-invariant metric on $G$ which, in turn, associates a \emph{Hausdorff dimension} $\hdim_G^{\mathcal{S}}(U) \in [0,1]$ to any subset $U\subseteq G$ with respect to the filtration series $\mathcal{S}$. Barnea and Shalev~\cite{BaSh97} established a group-theoretical interpretation of $\hdim_G^{\mathcal{S}}(H)$ for closed subgroups $H \le_\mathrm{c} G$; they showed that \[ \hdim_G^{\mathcal{S}}(H)=\varliminf_{i\rightarrow\infty}\dfrac{\log_p \lvert HS_i : S_i \rvert}{\log_p \lvert G : S_i \rvert} \] can be regarded as a `logarithmic density' of $H$ in~$G$. The (ordinary) \emph{Hausdorff spectrum} of~$G$ is $\hspec^{\mathcal{S}}(G) = \{\hdim_{G}^{\mathcal{S}}(H)\mid H \le_\mathrm{c} G\}$. The \emph{normal Hausdorff spectrum} of~$G$, defined as \[ \hspec_{\trianglelefteq}^{\mathcal{S}}(G)=\{\hdim_{G}^{\mathcal{S}}(H)\mid H\trianglelefteq_\mathrm{c} G\}, \] provides a snapshot of the normal subgroup structure of~$G$; its significance was highlighted by Shalev in~\cite[\S 4.7]{Sh00}. Typically, the Hausdorff dimension function and the normal Hausdorff spectrum depend very much on the underlying filtration~$\mathcal{S}$; compare~\cite{KlThZR19}. 
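As a toy illustration of the logarithmic-density formula (our example, not taken from~\cite{BaSh97}): for $G = \mathbb{Z}_p \times \mathbb{Z}_p$ with the filtration series $\mathcal{S} \colon S_i = p^i\mathbb{Z}_p \times p^i\mathbb{Z}_p$ and the closed subgroup $H = \mathbb{Z}_p \times \{0\}$, one has $\lvert HS_i : S_i \rvert = p^i$ and $\lvert G : S_i \rvert = p^{2i}$, so

```latex
\[
\hdim_G^{\mathcal{S}}(H)
  = \varliminf_{i\rightarrow\infty}
    \frac{\log_p \lvert HS_i : S_i \rvert}{\log_p \lvert G : S_i \rvert}
  = \varliminf_{i\rightarrow\infty} \frac{i}{2i}
  = \frac{1}{2},
\]
```

in accordance with the intuition that $H$ fills `half' of~$G$; here the limit inferior is even a genuine limit.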
For a finitely generated pro-$p$ group~$G$, there are natural choices for $\mathcal{S}$ that encapsulate group-theoretic properties of~$G$: the lower $p$-series $\mathcal{L}$, the dimension subgroup series~$\mathcal{D}$, the $p$-power series~$\mathcal{P}$, the iterated $p$-power series~$\mathcal{P}^*$, and the Frattini series~$\mathcal{F}$; see Section~\ref{sec:prelim}. We refer to these filtration series loosely as the five standard filtration series. Several types of profinite groups with full ordinary Hausdorff spectra $[0,1]$ have been identified. The first examples of finitely generated pro-$p$ groups $G$ with $\hspec^{\mathcal{P}}(G) = [0,1]$ were discovered by Levai (see \cite[\S 4.2]{Sh00}) and Klopsch~\cite[VIII, \S 7]{Kl99}; more complicated examples of profinite groups with full Hausdorff spectra can be found, for example, in~\cite{AbVi05, BaVa19, GaGaKlxx}. Until now, however, no examples of finitely generated pro-$p$ groups with full normal Hausdorff spectra were known. Already twenty years ago, Shalev~\cite[\S 4.7]{Sh00} posed the challenge of constructing finitely generated pro-$p$ groups with infinite normal Hausdorff spectra, and he asked whether the normal Hausdorff spectra could even contain infinite real intervals. Recently, Klopsch and Thillaisundaram~\cite{KlTh19} succeeded in constructing such examples, with respect to the five standard filtration series. Even though the normal Hausdorff spectra of their groups each contain infinite intervals, none of the spectra covers the full unit interval~$[0,1]$. In this paper we modify the construction of Klopsch and Thillaisundaram to produce the first example of a finitely generated pro-$p$ group with full normal Hausdorff spectrum $[0,1]$, with respect to any of the five standard filtration series. Our construction proceeds as follows. Throughout, let $p$ denote an odd prime. 
For $k\in \mathbb{N}$, consider the finite wreath product \begin{multline*} W_k = B_k \rtimes \langle \dot x_k \rangle \cong \langle \dot y_k\rangle \wr \langle \dot x_k \rangle, \qquad \text{with cyclic top group $\langle \dot x_k\rangle\cong C_{p^k}$}\\ \text{and elementary abelian base group $B_k = \prod_{j = 0}^{p^k-1} \langle {\dot y_k}^{\,\dot x_k^{\, j}}\rangle \cong C_p^{\, p^k}$.} \end{multline*} Basic structural properties of the finite wreath products $W_k$ transfer naturally to the inverse limit $W \cong \varprojlim_k W_k$, i.e., the pro-$p$ wreath product \begin{multline*} W = \langle \dot x, \dot y \rangle = B \rtimes \langle \dot x \rangle\cong C_p\ \hat{\wr}\ \mathbb{Z}_p \qquad \text{with procyclic top group $\langle \dot x\rangle \cong \mathbb{Z}_p$} \\ \text{and elementary abelian base group $B = \overline{\langle {\dot y}^{\dot x^j} \mid j \in\mathbb{Z} \rangle} \cong C_p^{\, \aleph_0}$.} \end{multline*} Let $F = F_2 = \langle \tilde x, \tilde y \rangle$ be a free pro-$p$ group on two generators, and let $\eta \colon F \to W$, resp.\ $\eta_k \colon F \to W_k$, for $k \in \mathbb{N}$, denote the continuous epimorphisms induced by $\tilde x \mapsto \dot x$ and $\tilde y \mapsto \dot y$, resp.\ $\tilde x \mapsto \dot x_k $ and $\tilde y \mapsto \dot y_k$. Set $R = \mathrm{ker}(\eta) \trianglelefteq_\mathrm{c} F$ and $R_k = \mathrm{ker}(\eta_k) \trianglelefteq_\mathrm{o} F$; set $Y = B \eta^{-1} \trianglelefteq_\mathrm{c} F$ and $Y_k = B_k \eta_k^{\, -1} \trianglelefteq_\mathrm{o} F$. We define \begin{align*} G = F/N, & \quad \text{where $N = [R,Y]Y^p \trianglelefteq_\mathrm{c} F$}, \\ G_k = F/N_k, & \quad \text{where $N_k = [R_k,Y_k]Y_k^{\, p} \langle {\tilde x}^{p^k} \rangle^F$.} \end{align*} Furthermore, we write \begin{align*} H = Y/N \trianglelefteq_\mathrm{c} G & \qquad \text{and} \qquad Z = R/N \trianglelefteq_\mathrm{c} G, \\ H_k = Y_k/N_k \trianglelefteq G_k & \qquad \text{and} \qquad Z_k = R_k / N_k \trianglelefteq G_k. 
\end{align*} We denote the images of $\tilde x, \tilde y$ in~$G$, resp.\ in~$G_k$, by $x,y$, resp.\ $x_k,y_k$, so that $G = \overline{\langle x,y\rangle}$ and $G_k=\langle x_k,y_k\rangle$. We observe that the finite groups $G_k$, $k\in\mathbb{N}$, naturally form an inverse system and that $G \cong \varprojlim_k G_k$. Furthermore, we have $[H,Z]=1$, and $[H_k,Z_k]=1$ for all $k \in \mathbb{N}$. \begin{theorem}\label{thm:main} For $p > 2$, the $2$-generated pro-$p$ group $G$ constructed above has full normal Hausdorff spectra with respect to the five standard filtration series: \[ \hspec_{\trianglelefteq}^{\mathcal{L}}(G) = \hspec_{\trianglelefteq}^{\mathcal{D}}(G) = \hspec_{\trianglelefteq}^{\mathcal{P}}(G) = \hspec_{\trianglelefteq}^{\mathcal{P}^*}(G) = \hspec_{\trianglelefteq}^{\mathcal{F}}(G) = [0,1]. \] \end{theorem} This resolves Problems~1.2 (b),(c) in~\cite{KlTh19} and Problem~5 in~\cite{BaSh97} for all five standard series. The latter problem was already solved previously for the series $\mathcal{D}$, $\mathcal{P}$, $\mathcal{P}^*$ and~$\mathcal{F}$: in \cite[VIII, \S 7]{Kl99} it was seen that $W \cong C_p\ \hat{\wr}\ \mathbb{Z}_p$ has $\hspec^{\mathcal{D}}(W) = \hspec^{\mathcal{P}}(W) = \hspec^{\mathcal{F}}(W) = [0,1]$, and by completely different means it was shown in~\cite{GaGaKlxx} that a non-abelian finitely generated free pro-$p$ group $E$ has $\hspec^{\mathcal{D}}(E) = \hspec^{\mathcal{P}^*}(E) = \hspec^{\mathcal{F}}(E) = [0,1]$. \smallskip \noindent \textit{Notation.} Throughout, $p$ denotes an \emph{odd} prime. From now on, all subgroups of profinite groups are tacitly understood to be closed subgroups to simplify the notation. All iterated commutators are left-normed, e.g., $[x,y,z] = [[x,y],z]$. Section~$2$ contains basic material and fairly general considerations that do not yet involve the notation used in the construction of the particular groups $G$ and $G_k$, $k \in \mathbb{N}$. 
In Sections~\ref{sec:structure-G-k} and \ref{sec:normal-hspec} we use the special notation from the introduction. In addition, we write $c_1=y$ and $c_i=[y,x,\overset{i-1}{\ldots},x]$ for $i \in \mathbb{N}_{\ge 2}$; furthermore, we set $c_{i,1}=[c_i,y]$ and $c_{i,j}=[c_i,y,x,\overset{j-1}{\ldots},x]$ for $j \in \mathbb{N}_{\ge 2}$. To keep the notation manageable, we denote, for $k \in \mathbb{N}$, the corresponding elements in the finite group $G_k$ by the same symbols (suppressing the parameter~$k$): $c_1=y_k$ and $c_i=[y_k,x_k,\overset{i-1}{\ldots},x_k]$ for $i\in\mathbb{N}_{\ge 2}$, and similarly $c_{i,1}=[c_i,y_k]$ and $c_{i,j}=[c_i,y_k,x_k,\overset{j-1}{\ldots},x_k]$ for $j \in \mathbb{N}_{\ge 2}$. From the context it will be clear whether our considerations apply to $G$ or one of the groups~$G_k$. \section{Preliminaries} \label{sec:prelim} Let $G$ be an arbitrary finitely generated pro-$p$ group. We recall the definition of the five standard filtration series referred to in the introduction. 
The \emph{lower $p$-series} $\mathcal{L}$ of~$G$, the \emph{dimension subgroup series} $\mathcal{D}$ of~$G$, the \emph{$p$-power series} $\mathcal{P}$ of~$G$, the \emph{iterated $p$-power series} $\mathcal{P}^*$ of~$G$ and the \emph{Frattini series} $\mathcal{F}$ of~$G$ are defined recursively by \begin{align*} \mathcal{L} & \colon P_1(G)=G\ \ \text{ and }\ \ P_i(G)=P_{i-1}(G)^p[P_{i-1}(G),G]\ \ \text{for $i\ge 2$,} \\ \mathcal{D} & \colon D_1(G)=G\ \ \text{ and }\ \ D_i(G) = D_{\lceil i/p\rceil}(G)^p \prod\nolimits_{1\le j<i}[D_{j}(G),D_{i-j}(G)]\ \ \text{for $i\ge 2$}, \\ \mathcal{P} & \colon \pi_i(G) = G^{p^i}=\langle g^{p^i}\mid g\in G\rangle \ \text{for $i \ge 0$}, \\ \mathcal{P}^* & \colon \pi^*_0(G)=G\ \ \text{ and }\ \ \pi^*_i(G) = \pi^*_{i-1}(G)^p\ \text{for $i \ge 1$,} \\ \mathcal{F} & \colon \Phi_0(G)=G\ \ \text{ and }\ \ \Phi_i(G)=\Phi_{i-1}(G)^p[\Phi_{i-1}(G),\Phi_{i-1}(G)]\ \text{for $i \ge 1$.} \end{align*} Next we recall two standard commutator identities; compare~\cite[Prop.~1.1.32]{LeMK02}. \begin{lemma} \label{lem:com-ids} Let $G=\langle a, b\rangle$ be a finite $p$-group, for $p \ge 3$, such that $\gamma_2(G)$ has exponent~$p$, and let $r\in\mathbb{N}$. For $u,v\in G$, let $K(u,v)$ denote the normal closure in $G$ of all commutators in $\{u,v\}$ of weight at least $p^r$ that have weight at least $2$ in~$v$. Then the following congruences hold: \[ (ab)^{p^r} \equiv_{K(a,b)} a^{p^r} b^{p^r} [b,a,\overset{p^r-1}{\ldots},a] \qquad \text{and} \qquad [a^{p^r},b] \equiv_{K(a,[a,b])} [a,b,a,\overset{p^r-1}{\ldots},a]. \] \end{lemma} The main ingredient of the proof of Theorem~\ref{thm:main} is Proposition~\ref{pro:path-area}. For the proof we first establish two lemmata. The first lemma is a variation of~\cite[Prop.~5.2]{KlThZR19}. \begin{lemma} Let $G$ be a countably based pro-$p$ group, and let $Z \trianglelefteq_\mathrm{c} G$ be infinite. 
Let $\mathcal{S} \colon Z_0 \supseteq Z_1 \supseteq \ldots$ be a filtration series of~$Z$ consisting of $G$-invariant subgroups $Z_i \trianglelefteq_\mathrm{o} Z$. Let $\eta \in [0,1]$ be such that the normal closure in $G$ of every finite collection of elements $z_1, \ldots, z_m \in Z$ satisfies $\hdim_Z^\mathcal{S}(\langle z_1, \ldots, z_m \rangle^G) \le \eta$. Then there exists $H \le_\mathrm{c} Z$ with $H \trianglelefteq G$ such that $\hdim_Z^\mathcal{S}(H) = \eta$. \end{lemma} \begin{proof} The claim can be verified in close analogy to the proof of~\cite[Prop.~5.2]{KlThZR19}. One constructs the subgroup $H \le_\mathrm{c} Z$ as $H = \langle H_0 \cup H_1 \cup \ldots \rangle$, where $1 = H_0 \subseteq H_1 \subseteq \ldots$ is a suitable ascending sequence of subgroups $H_i \le_\mathrm{c} Z$ each of which is the normal closure in $G$ of finitely many elements. To see that the argument in op.\ cit.\ can be used, it suffices to observe that, for each $i \in \mathbb{N}$, the pro-$p$ group $G/Z_i$ acts nilpotently on the finite $p$-group~$Z/Z_i$ (and its quotients by $G$-invariant subgroups). \end{proof} \begin{lemma} \label{lemma path/area} Let $G$ be a countably based profinite group with an infinite abelian normal subgroup $Z \trianglelefteq_\mathrm{c} G$ and $x \in G$ such that $G = \langle x \rangle C_G(Z)$. Let $\mathcal{S}:Z=Z_0\ge Z_1\ge\ldots$ be a filtration series of $Z$ consisting of $G$-invariant subgroups $Z_i \trianglelefteq_\mathrm{o} Z$; for $i \in \mathbb{N}_0$, let $p^{e_i}$ be the exponent of~$Z/Z_i$. Suppose that, for every $i \in \mathbb{N}_0$, there exist $n_i \in \mathbb{N}$ and $N_i \le_\mathrm{c} Z$ such that \[ \gamma_{n_i +1}(G)\cap Z\le Z_i\le N_i \qquad \text{and} \qquad \varliminf_{i\rightarrow\infty} \frac{e_in_i}{\log_p \lvert Z : N_i \rvert}=0. \] Then every finite collection of elements $z_1, \ldots, z_m \in Z$ satisfies \[ \hdim_Z^\mathcal{S}(\langle z_1, \ldots, z_m \rangle^G) = 0. 
\] \end{lemma} \begin{proof} Consider first a single element $z \in Z$. From \[ \langle z\rangle^G=\langle z,[z,x],[z,x,x],\ldots\rangle, \] and $\gamma_{n_i +1}(G)\cap Z\le Z_i$, for $i \in \mathbb{N}$, we deduce that \[ \langle z\rangle^G Z_i = \langle z,[z,x],\ldots,[z,x,\overset{n_i-1}{\ldots},x]\rangle Z_i; \] in particular, since $Z$ is abelian, this yields \[ \log_p \lvert \langle z \rangle^G Z_i : Z_i \rvert \le e_in_i. \] Now consider finitely many elements $z_1, \ldots, z_m \in Z$. Since $Z$ is abelian, we have $\langle z_1,\ldots,z_m \rangle^G = \langle z_1\rangle^G \cdots \langle z_m\rangle^G$. From this we deduce \[ \hdim_Z^{\mathcal{S}}(\langle z_1,\ldots,z_m \rangle^G) \leq \varliminf_{i\rightarrow\infty}\frac{ \sum_{j=1}^m \log_p \lvert \langle z_j \rangle^G Z_i : Z_i \rvert}{\log_p \lvert Z : Z_i \rvert} \le \varliminf_{i\rightarrow\infty} \frac{m e_i n_i}{\log_p \lvert Z : N_i \rvert} = 0. \qedhere \] \end{proof} For an infinite countably based pro-$p$ group $G$, equipped with a filtration series $\mathcal{S} \colon G = S_0 \supseteq S_1\supseteq \ldots$, and a closed subgroup $H \le_\mathrm{c} G$ we adopt the following terminology from~\cite{KlTh19}: we say that $H$ has \emph{strong} Hausdorff dimension in $G$ with respect to $\mathcal{S}$ if its Hausdorff dimension is given by a proper limit, i.e., if \[ \hdim^{\mathcal{S}}_G(H) = \lim_{i \to \infty} \frac{\log_p \lvert H S_i : S_i \rvert}{\log_p \lvert G : S_i \rvert}. \] Using the previous two lemmata, we follow the proof of~\cite[Thm.~5.4]{KlThZR19} to obtain our main tool. \begin{proposition} \label{pro:path-area} Let $G$ be a countably based pro-$p$ group with an infinite abelian normal subgroup $Z \trianglelefteq_\mathrm{c} G$ such that $G/C_G(Z)$ is procyclic. 
Let $\mathcal{S} \colon G = S_0\ge S_1\ge \ldots$ be a filtration series of $G$ and consider the induced filtration series $\mathcal{S} |_Z \colon Z = S_0 \cap Z \ge S_1 \cap Z \ge \ldots$ of $Z$; for $i\in\mathbb{N}_0$, let $p^{e_i}$ be the exponent of $Z/(S_i \cap Z)$. Suppose that, for every $i\in \mathbb{N}_0$, there exist $n_i \in \mathbb{N}$ and $M_i \le_\mathrm{c} G$ such that \[ \gamma_{n_i + 1}(G) \cap Z \le S_i \cap Z \le M_i \qquad \text{and} \qquad \varliminf_{i\rightarrow\infty}\frac{e_in_i}{\log_p \lvert Z : M_i \cap Z \rvert} = 0. \] If $Z$ has strong Hausdorff dimension $\xi = \hdim_G^{\mathcal{S}}(Z) \in [0,1]$ then we have \[ [0,\xi]\subseteq \hspec_{\trianglelefteq}^{\mathcal{S}}(G). \] \end{proposition} \section{The structure of the finite groups $G_k$} \label{sec:structure-G-k} In this section we collect some structural results for the finite $p$-groups $G_k$ defined in the introduction. We use the notation set up there, in particular, in the last paragraph of that section: $W_k$, $B_k$, $\dot x_k$, $\dot y_k$, $G_k$, $H_k$, $Z_k$, $x_k$, $y_k$, $c_i$, $c_{i,j}$, \ldots. \begin{proposition}[Prop.~2.6 in \cite{KlTh19}] \label{pro:Wk-lcs} For $k\in\mathbb{N}$, the wreath product $W_k \cong C_p \wr C_{p^k}$ is nilpotent of class $p^k$ and the lower central series of $W_k$ satisfies \begin{align*} W_k & =\gamma_1(W_k) = \langle \dot x_k, \dot y_k \rangle \gamma_2(W_k) \text{ with }W_k / \gamma_2(W_k) \cong C_{p^k} \times C_{p}, \\ \gamma_i(W_k) & = \langle [\dot y_k, \dot x_k, \overset{i-1}{\ldots}, \dot x_k]\rangle\gamma_{i+1}(W_k) \text{ with }\gamma_i(W_k)/\gamma_{i+1}(W_k)\cong C_p \text{ for } 2\le i\le p^k. \end{align*} In particular, the base group satisfies \[ B_k = \langle \dot y_k \rangle \gamma_2(W_k) = \langle \dot y_k, [\dot y_k, \dot x_k], \ldots, [\dot y_k, \dot x_k, \overset{p^k-1}{\ldots}, \dot x_k]\rangle. 
\] \end{proposition} \begin{proposition} \label{pro:order-G-k} For $k\in\mathbb{N}$, we have $G_k = \langle x_k \rangle \ltimes H_k$, where $\langle x_k \rangle \cong C_{p^k}$ and $H_k$ is freely generated in the variety of class-$2$ nilpotent groups of exponent~$p$ by the conjugates $y_k^{\, x_k^{\, j}}$, $0 \le j < p^k$. In particular, the logarithmic order of $G_k$ is \[ \log_p \lvert G_k \rvert = k+p^k+\binom{p^k}{2}. \] \end{proposition} \begin{proof} The proof is very similar to that of \cite[Lem.~5.1]{KlTh19}. From $G_k/Z_k\cong W_k $ we obtain \[ \log_p \lvert G_k \rvert = \log_p \lvert G_k/Z_k \rvert + \log_p \lvert Z_k \rvert = k + p^k + \log_p \lvert Z_k \rvert. \] By construction, $Z_k$ is elementary abelian, and from~\cite[Eq.~(3.1)]{KlTh19} we get \[ Z_k = \Big\langle \big[ y_k^{\, x_k^{p^i}},y_k^{\, x_k^{p^j}} \big] \mid 0 \le i < j \le p^{k}-1 \Big\rangle. \] This yields $\log_p \lvert G_k \rvert \le k+p^k+\binom{p^k}{2}$. Consider the finite $p$-group \[ M = \langle b_0, \ldots, b_{p^k-1} \rangle = E / \gamma_3(E)E^p, \] where $E$ is the free group on $p^k$ generators. Then, the images of $b_0, \ldots, b_{p^k-1}$ generate independently the elementary abelian quotient $M/M'$, and the commutators $[b_i,b_j]$ with $0\le i<j\le p^k-1$ generate independently the elementary abelian subgroup~$M'$. The latter can be checked, for instance, by considering homomorphisms from $M$ onto the group $\mathrm{Heis}(\mathbb{F}_p)$ of upper unitriangular $3\times 3$ matrices over the prime field~$\mathbb{F}_p$. Next consider the faithful action of the cyclic group $A \cong \langle a \rangle \cong C_{p^k}$ on~$M$ induced by \[ b_i^{\, a} = \begin{cases} b_{i+1} & \text{if $0\le i\le p^k-2$,} \\ b_0 & \text{if $i=p^{k}-1$.} \end{cases} \] We define $\widetilde{G}_k = A \ltimes M$ and note that $\log_p \lvert G_k \rvert \le k + p^k + \binom{p^k}{2} = \log_p \lvert \widetilde{G}_k \rvert$. Furthermore, it is easy to see that $\widetilde{G}_k/M' \cong W_k$. 
Thus there is an epimorphism $\varepsilon \colon G_k \to \widetilde{G}_k$ with $x_k \,\varepsilon = a$ and $y_k \,\varepsilon = b_0$, and from $\lvert G_k \rvert \le \lvert \widetilde{G}_k \rvert$ we conclude that $G_k\cong \widetilde{G}_k$. \end{proof} \begin{remark} \label{rem:H'-eq-Z} \label{rem:derived-subgroup} The proof of Proposition~\ref{pro:order-G-k} shows that $[H_k,H_k] = Z_k$ for $k \in \mathbb{N}$, and thus $[H,H] = Z$. \end{remark} \begin{proposition} \label{pro:G-k-lcs} For $k\in\mathbb{N}$, the nilpotency class of $G_k$ is $2p^k-1$. The terms of the lower central series of $G_k$ are as follows: \[ \gamma_1(G_k)=G_k=\langle x_k,y_k\rangle \, \gamma_2(G_k) \quad \text{with $G_k/\gamma_2(G_k)\cong C_{p^k}\times C_p$} \] and, with the notation \begin{align*} I_1 & = \{ i \mid 2 \le i \le p^k \text{ with } i \equiv_2 0 \}, && I_2 = \{ i \mid 2 \le i \le p^k \text{ with } i \equiv_2 1 \}, \\ I_3 &= \{ i \mid p^k + 1 \le i \le 2 p^k-1 \text{ with } i \equiv_2 0 \}, && I_4 = \{ i \mid p^k + 1 \le i \le 2p^k-1 \text{ with } i \equiv_2 1 \}, \end{align*} the series continues as \begin{align*} \gamma_i(G_k) % & = \begin{cases} \langle c_i, \, c_{2,i-2}, \, c_{4,i-4}, \, \ldots, \, c_{i-2,2} \rangle \gamma_{i+1}(G_k) & \text{for $i \in I_1$,} \\ \langle c_i, \, c_{2,i-2}, \, c_{4,i-4}, \, \ldots, \, c_{i-1,1} \rangle \gamma_{i+1}(G_k) & \text{for $i \in I_2$,} \\ \langle c_{i-p^k+1,p^k-1}, \, c_{i-p^k+3,p^k-3}, \, \ldots, \, c_{p^k-1,i-p^k+1} \rangle \gamma_{i+1}(G_k) & \text{for $i \in I_3$,} \\ \langle c_{i-p^k,p^k}, \, c_{i-p^k+2,p^k-2}, \, \ldots, \, c_{p^k-1,i-p^k+1} \rangle \gamma_{i+1}(G_k) & \text{for $i \in I_4$} \end{cases} \end{align*} with \begin{align*} \gamma_i(G_k)/\gamma_{i+1}(G_k) % & \cong \begin{cases} C_p^{\, i/2} & \text{for $i \in I_1$,} \\ C_p^{\, (i+1)/2} & \text{for $i \in I_2$,} \\ C_p^{\, (2p^k-i)/2} & \text{for $i \in I_3$,} \\ C_p^{\, (2p^k-i+1)/2} & \text{for $i \in I_4$.} \end{cases} \end{align*} \end{proposition} \begin{proof} 
The description of $\gamma_1(G_k)$ modulo $\gamma_2(G_k)$ is clear. Now consider $i \in I_1$, that is $2 \le i \le p^k$ and $i \equiv_2 0$. Our first aim is to show, by induction on~$i$, that \begin{equation} \label{equ:cases-I1-I2} \begin{split} \gamma_i(G_k) & = \langle c_i, \, c_{2,i-2}, \, c_{4,i-4}, \, \ldots, \, c_{i-2,2} \rangle \gamma_{i+1}(G_k), \\ \gamma_{i+1}(G_k) & = \langle c_{i+1}, \, c_{2,i-1}, \, c_{4,i-3}, \, \ldots, c_{i,1} \rangle\gamma_{i+2}(G_k). \end{split} \end{equation} The induction base, i.e., the case $i=2$, is clear: $\gamma_2(G_k) = \langle [x_k, y_k] \rangle \gamma_3(G_k) = \langle c_2 \rangle \gamma_3(G_k)$ and $\gamma_3(G_k) = \langle [c_2,x_k], [c_2,y_k] \rangle \gamma_4(G_k) = \langle c_3,c_{2,1}\rangle \gamma_4(G_k)$. Next suppose that $i \ge 4$. The induction hypothesis yields \begin{align*} \gamma_{i-2}(G_k) & =\langle c_{i-2}, \, c_{2,i-4}, \, c_{4,i-6}, \, \ldots, \, c_{i-4,2} \rangle\gamma_{i-1}(G_k), \\ \gamma_{i-1}(G_k) & = \langle c_{i-1}, \, c_{2,i-3}, \, c_{4,i-5}, \, \ldots, \, c_{i-2,1}\rangle\gamma_{i}(G_k). \end{align*} From $c_{m,n}\in [H_k,H_k] = Z_k$ we deduce $[c_{m,n},y_k]=1$ for all $m,n \ge 1$. This gives \[ \gamma_i(G_k) = \langle c_i, \, c_{i-1,1}, \, c_{2,i-2}, \, c_{4,i-4}, \, \ldots, \, c_{i-2,2}\rangle\gamma_{i+1}(G_k). \] We put \[ M= \langle c_i, \, c_{2,i-2}, \, c_{4,i-4}, \, \ldots, c_{i-2,2} \rangle \gamma_{i+1}(G_k) \] and aim to show that $c_{i-1,1}\in M$. This will establish the first equation in~\eqref{equ:cases-I1-I2}; the second equation then follows immediately, again from $[c_{n,m},y_k]=1$ for $m,n \ge 1$. As $c_{i-1,1} = [c_{i-2},x_k,y_k]$, the Hall--Witt identity yields \[ c_{i-1,1} [x_k,y_k,c_{i-2}] [y_k,c_{i-2},x_k] \equiv 1 \pmod{M}. \] Furthermore, $[y_k,c_{i-2},x_k] \equiv c_{i-2,2}^{\, -1} \equiv 1$ modulo $M$, and this gives \[ c_{i-1,1}\equiv [c_{i-2},c_2]^{-1}\pmod{M}. 
\] Thus it suffices to prove that \[ [c_m,c_n] \equiv 1 \pmod{M} \qquad \text{for all $m,n \in \mathbb{N}$ with $m \ge n\ge 2$ and $m+n = i$.} \] We argue by induction on $m-n$. If $m-n=0$ then $m=n$ and $[c_m,c_n]=1$. Now suppose that $m-n\ge 1$. As $[c_m,c_n] = [c_{m-1},x_k,c_n]$, the Hall--Witt identity yields \[ [c_m,c_n][x_k,c_n,c_{m-1}][c_n,c_{m-1},x_k]\equiv 1\pmod{M}, \] where $[x_k,c_n,c_{m-1}] \equiv [c_{m-1},c_{n+1}] \equiv 1 \pmod{M}$ by induction. This yields \[ [c_m,c_n] \equiv[c_n,c_{m-1},x_k]^{-1} \equiv [[c_n,c_{m-1}]^{-1},x_k] \pmod{M}. \] From $[c_n,c_{m-1}]^{-1} \in \gamma_{i-1}(G_k)$ we deduce that \[ [c_n,c_{m-1}]^{-1} \equiv c_{i-1}^{\, r_0} c_{2,i-3}^{\, r_2} c_{4,i-5}^{\, r_4} \cdots c_{i-2,1}^{\, r_{i-2}} \pmod{\gamma_i(G_k)} \] for suitable $r_0, r_2, \ldots, r_{i-2} \in \mathbb{Z}$. It follows that \[ [c_m,c_n] \equiv [[c_n,c_{m-1}]^{-1},x_k] \equiv c_i^{\, r_0} c_{2,i-2}^{\, r_2} c_{4,i-4}^{\, r_4} \cdots c_{i-2,2}^{\, r_{i-2}} \equiv 1 \pmod{M}. \] This finishes the proof of~\eqref{equ:cases-I1-I2}. Finally, we observe from~\eqref{equ:cases-I1-I2} that \[ \gamma_i(G_k)/\gamma_{i+1}(G_k) \cong C_p^{\, l(i)} \qquad \text{and} \qquad \gamma_{i+1}(G_k)/\gamma_{i+2}(G_k) \cong C_p^{\, l(i+1)}, \] where $l(i) \le i/2$ and $l(i+1) \le i/2 + 1$; below we will see that, in fact, all the generators appearing in~\eqref{equ:cases-I1-I2} are necessary. Now consider $i \in I_3$, that is $p^k +1 \le i \le 2p^k-2$ and $i \equiv_2 0$. Lemma~\ref{lem:com-ids} yields \[ c_{p^k+1} \equiv [y_k,x_k^{\, p^k}] = [y_k,1] = 1 \pmod{\gamma_{p^k+2}(G_k)}, \] thus $c_{p^k+1}\in\gamma_{p^{k}+2}(G_k)$ and $c_{p^k+1,n} \in \gamma_{p^k+n+2}(G_k)$ for $ n\ge 1$. For similar reasons, we have $c_{n,p^k+1}\in \gamma_{p^k+n+2}(G_k)$ for all $n\ge 1$. 
This yields, by induction on~$i$, \begin{equation}\label{equ:cases-I3-I4} \begin{split} \gamma_i(G_k) & = \langle c_{i-p^k+1,p^k-1}, \, c_{i-p^k+3,p^k-3}, \, \ldots, \, c_{p^k-1,i-p^k+1} \rangle \gamma_{i+1}(G_k), \\ \gamma_{i+1}(G_k) & = \langle c_{i-p^k+1,p^k}, \, c_{i-p^k+3,p^k-2}, \, \ldots, \, c_{p^k-1,i-p^k+2} \rangle \gamma_{i+2}(G_k). \end{split} \end{equation} Similarly as before, we observe that \[ \gamma_i(G_k)/\gamma_{i+1}(G_k) \cong C_p^{\, l(i)} \qquad \text{and} \qquad \gamma_{i+1}(G_k)/\gamma_{i+2}(G_k) \cong C_p^{\, l(i+1)}, \] where $l(i), l(i+1) \le (2p^k - i)/2$. Extending the argument one step further, we obtain $\gamma_{2p^k}(G_k) = 1$: the group $G_k$ has nilpotency class at most $2p^k -1$. Finally, it suffices to check that the upper bounds that we derived from \eqref{equ:cases-I1-I2} and \eqref{equ:cases-I3-I4} for the logarithmic orders $\log \lvert \gamma_i(G_k) : \gamma_{i+1}(G_k) \rvert$, $1 \le i \le 2p^k -1$, sum to the logarithmic order of~$G_k$. Indeed, based on Proposition~\ref{pro:order-G-k}, we confirm that \[ (k+1) + \sum_{i=2}^{p^k} \lceil i/2 \rceil + \sum_{i=p^k+1}^{2p^k-1} \lceil (2p^k-i)/2 \rceil = k + p^k +\binom{p^k}{2} = \log_p \lvert G_k \rvert. \qedhere \] \end{proof} \begin{corollary} \label{cor:index} For $i \in \mathbb{N}$ we have \[ \log_p \lvert Z : \gamma_i(G) \cap Z \rvert = \begin{cases} 2 \sum_{j=1}^{(i-3)/2} j = (i^2-4i+3)/4 & \text{if $i \equiv_2 1$,} \\ 2 \sum_{j=1}^{(i-4)/2} j + \frac{i-2}{2} = (i^2-4i+4)/4 & \text{if $i \equiv_2 0$.} \end{cases} \] \end{corollary} \begin{proof} The claim follows from the standard identity \[ \lvert \gamma_2(G) : \gamma_i(G) \rvert = \lvert \gamma_2(G) : \gamma_i(G) Z \rvert \lvert \gamma_i(G) Z : \gamma_i(G) \rvert = \lvert \gamma_2(W) : \gamma_i(W) \rvert \lvert Z : \gamma_i(G) \cap Z \rvert \] and Propositions~\ref{pro:Wk-lcs} and~\ref{pro:G-k-lcs}. 
\end{proof} From the lower central series of~$G_k$, it is easy to compute the lower $p$-series and the dimension subgroup series of~$G_k$. \begin{proposition} \label{pro:Gk-pcs} For $k\in\mathbb{N}$, the lower $p$-series of $G_k$ has length $2p^k -1$ and its terms satisfy, for $1 \le i \le 2p^k -1$, \[ P_i(G_k) = \langle x_k^{\, p^{i-1}} \rangle \gamma_i(G_k). \] \end{proposition} \begin{proof} The description of $P_1(G_k) = \gamma_1(G_k)$ is clear. Now suppose that $i\ge 2$. By induction, we have \[ P_{i-1}(G_k) = \langle x_k^{\, p^{i-2}} \rangle \gamma_{i-1}(G_k). \] Recall that $P_i(G_k) = [P_{i-1}(G_k),G_k] P_{i-1}(G_k)^p$ and consider the two factors one after the other. The first factor satisfies \[ [P_{i-1}(G_k),G_k] = [\langle x_k^{\, p^{i-2}} \rangle \gamma_{i-1}(G_k),G_k] = [\langle x_k^{\, p^{i-2}} \rangle,G_k] \gamma_{i}(G_k), \] and Lemma~\ref{lem:com-ids} yields \[ [\langle x_k^{\, p^{i-2}} \rangle,G_k] \le [G_k^{\, p^{i-2}},G_k] \le \gamma_{p^{i-2}+1}(G_k). \] From $p^{i-2}+1 \ge i$ we deduce that $[P_{i-1}(G_k),G_k]= \gamma_{i}(G_k)$. The second factor satisfies \[ P_{i-1}(G_k)^p\equiv \langle x_k^{\, p^{i-2}} \rangle^p \, \gamma_{i-1}(G_k)^p \equiv \langle x_k^{\, p^{i-1}} \rangle\pmod{\gamma_i(G_k)}. \] We conclude that $P_i(G_k)=\langle x_k^{\, p^{i-1}} \rangle \gamma_i(G_k)$. \end{proof} \begin{proposition} \label{pro:Gk-dss} For $k\in\mathbb{N}$, the dimension subgroup series of $G_k$ has length $2p^k-1$ and its terms satisfy, for $1 \le i \le 2p^k-1$, \[ D_i(G_k) = \langle x_k^{p^{l(i)}}\rangle\gamma_i(G_k), \quad \text{where $l(i)=\lceil\log_p(i)\rceil$}. \] \end{proposition} \begin{proof} Let $i \in \mathbb{N}$. 
Since $\gamma_2(G_k)$ has exponent~$p$, Lazard's formula (see~\cite[Thm.~11.2]{DidSMaSe99}) shows that \[ D_i(G_k) = \prod_{np^m\ge i} \gamma_n(G_k)^{p^m} = G_k^{\, p^{l(i)}} \gamma_i(G_k), \quad \text{where $l(i) = \lceil \log_p(i) \rceil$.} \] Lemma~\ref{lem:com-ids} yields $a^{p^{l(i)}} b^{p^{l(i)}} \equiv (ab)^{p^{l(i)}}$ modulo $\gamma_{p^{l(i)}}(G_k)$ for all $a,b \in G_k$ and, as $p^{l(i)} \ge i$, we deduce that \[ D_i(G_k) = \langle x_k^{p^{l(i)}} \rangle \gamma_i(G_k). \qedhere \] \end{proof} \section{Normal Hausdorff spectra} \label{sec:normal-hspec} In this section we establish Theorem~\ref{thm:main}; we split the proof into three parts and formulate three separate results, depending on the filtration series. We use the notation set up in the introduction; in particular, $G \cong \varprojlim_k G_k$ denotes the group constructed there. \begin{theorem} \label{thm:spec-lcs} The pro-$p$ group $G$ has full normal Hausdorff spectra \[ \hspec_{\trianglelefteq}^{\mathcal{L}}(G)=[0,1] \qquad \text{and} \qquad \hspec_{\trianglelefteq}^{\mathcal{D}}(G)=[0,1], \] with respect to the lower $p$-series~$\mathcal{L}$ and the dimension subgroup series~$\mathcal{D}$. \end{theorem} \begin{proof} Let $\mathcal{S}$ be $\mathcal{L}$, resp.\ $\mathcal{D}$. Write $\mathcal{S} \colon G = S_0 = S_1\ge S_2\ge\ldots$, where $S_i = P_i(G)$, resp.\ $S_i = D_i(G)$, for $i \ge 1$, and observe that $Z \le \gamma_2(G)$; compare Remark~\ref{rem:H'-eq-Z}. Thus Proposition~\ref{pro:Gk-pcs}, resp.\ Proposition~\ref{pro:Gk-dss}, yields \[ S_i \cap Z = \gamma_i(G) \cap Z \quad \text{for $i\ge 1$}. \] From Corollary~\ref{cor:index} we see that \begin{equation} \label{equ:lim-zero} \lim_{i\rightarrow\infty} \frac{i}{\log_p \lvert Z : \gamma_{i}(G) \cap Z \rvert} = 0.
\end{equation} This allows us to pin down the Hausdorff dimension of $Z \le_\mathrm{c} G$: \begin{multline*} \hdim_{G}^{\mathcal{S}}(Z) = \varliminf_{i \to \infty} \left(\frac{\log_p \lvert G : S_i \rvert }{\log_p \lvert S_iZ : S_i \rvert }\right)^{-1} = \varliminf_{i \to \infty} \left(\frac{\log_p \lvert G : S_iZ \rvert + \log_p \lvert S_iZ : S_i\rvert}{\log_p \lvert S_iZ : S_i \rvert}\right)^{-1} \\ = \varliminf_{i \to \infty}\left( \frac{\log_p \lvert G : S_iZ \rvert}{\log_p \lvert Z : S_i \cap Z \rvert} + 1\right)^{-1} =\varliminf_{i \to \infty}\left(\dfrac{\log_p \lvert G : S_iZ \rvert}{\log_p \lvert Z : \gamma_i(G)\cap Z \rvert} + 1 \right)^{-1}=1, \end{multline*} where the last equality follows from~\eqref{equ:lim-zero} and the fact that $\log_p \lvert G : S_iZ \rvert \le 2i$, by \cite[Prop.~2.6]{KlTh19} and Proposition~\ref{pro:Gk-dss}. In particular, $Z$ has strong Hausdorff dimension. Thus Proposition~\ref{pro:path-area}, with $e_i=1$, $n_i=i$ and $M_i = \gamma_i(G)$, yields \[ [0,1] = [0,\hdim_G^{\mathcal{S}}(Z)] \subseteq \hspec_{\trianglelefteq}^{\mathcal{S}}(G). \qedhere \] \end{proof} \begin{theorem} \label{thm:spec-pp} The pro-$p$ group $G$ has full normal Hausdorff spectra \[ \hspec_{\trianglelefteq}^{\mathcal{P}}(G)=[0,1] \qquad \text{and} \qquad \hspec_{\trianglelefteq}^{\mathcal{P}^*}(G)=[0,1], \] with respect to the $p$-power series~$\mathcal{P}$ and the iterated $p$-power series $\mathcal{P}^*$. \end{theorem} \begin{proof} Recall our notation $\pi_i(G) = G^{p^i}$ and $\pi_i^*(G)$ for the terms of the series $\mathcal{P}$ and~$\mathcal{P}^*$. Our first aim is to show that \begin{equation} \label{equ:rel-p-to-ip} \gamma_{2p^i}(G) \le G^{p^i} \le \pi_i^*(G) \le \langle x^{p^i} \rangle \gamma_{p^i}(G) \quad \text{for $i \in \mathbb{N}_0$.} \end{equation} Let $i \in \mathbb{N}_0$. From the construction of $G$ and $G_k$, it is easily seen that $G/G^{p^k} \cong G_k/G_k^{p^k}$ for $k \in \mathbb{N}$. 
Hence Proposition~\ref{pro:G-k-lcs} yields $\gamma_{2p^i}(G) \le G^{p^i}$. Clearly, we have $G^{p^i} \le \pi_i^*(G)$. It remains to justify the last inclusion in~\eqref{equ:rel-p-to-ip}. We proceed by induction on~$i$. For $i=0$ equality holds trivially. Now suppose that $i \ge 1$. The induction hypothesis yields \[ \pi_{i-1}^*(G) \le \langle x^{p^{i-1}} \rangle \gamma_{p^{i-1}}(G). \] Let $g \in \pi_{i-1}^*(G)$, and write $g = x^{m p^{i-1}} h$ with $m \in \mathbb{Z}$ and $h \in \gamma_{p^{i-1}}(G) \cap H$. Lemma~\ref{lem:com-ids} yields $g^p = x^{m p^i} z$ with $x^{m p^i} \in \langle x^{p^i} \rangle$ and $z \in\gamma_p(\langle x^{p^{i-1}}, h \rangle)$. Thus it suffices to show that $\gamma_p(\langle x^{p^{i-1}}, h \rangle) \le \gamma_{p^i}(G)$. Suppose that $c$ is an arbitrary commutator of weight $n \ge 2$ in $\{ x^{p^{i-1}}, h \}$; we show by induction on $n$ that $c \in \gamma_{np^{i-1}}(G)$. For $n=2$, it suffices to consider $c = [h,x^{p^{i-1}}]$, and Lemma~\ref{lem:com-ids} shows that $c \in \gamma_{2p^{i-1}}(G)$. For $n\ge 3$, we see by induction that it suffices to consider $c = [d,h]$ and $[d, x^{p^{i-1}}]$ with $d \in \gamma_{(n-1)p^{i-1}}(G)$; if $c = [d,h]$, the result follows immediately, and, if $c = [d,x^{p^{i-1}}]$, the result follows again by Lemma~\ref{lem:com-ids}. This concludes the proof of~\eqref{equ:rel-p-to-ip}. Let $\mathcal{S} = \mathcal{P}$, resp.\ $\mathcal{S} = \mathcal{P}^*$, and write $S_i = \pi_i(G) = G^{p^i}$, resp.\ $S_i = \pi_i^*(G)$, for $i \in \mathbb{N}_0$. Recall that $Z \le \gamma_2(G)$; compare Remark~\ref{rem:H'-eq-Z}. Thus \eqref{equ:rel-p-to-ip} yields \begin{equation} \label{equ:Si-cap-Z-estimate} \gamma_{2p^i}(G) \cap Z \le S_i \cap Z \le \big(\langle x^{p^i} \rangle \gamma_{p^i}(G) \big) \cap Z =\gamma_{p^i}(G) \cap Z.
\end{equation} From Corollary~\ref{cor:index} we see that \begin{equation} \label{equ:lim-zero-2} \lim_{i \to \infty}\frac{2p^i}{\log_p \lvert Z:\gamma_{p^i}(G) \cap Z \rvert}=0. \end{equation} As in the proof of Theorem~\ref{thm:spec-lcs} we want to apply Proposition~\ref{pro:path-area}, here with $e_i = 1$, $n_i = 2p^i$ and $M_i = \gamma_{p^i}(G)$, to conclude that $G$ has full normal Hausdorff spectrum. It remains to check that $\hdim_G^{\mathcal{S}}(Z)=1$. We observe that, for $i \in \mathbb{N}_0$, \[ \log_p \lvert G : S_i Z \rvert \le \log_p \lvert G_i : G_i^{\, p^i} Z_i \rvert \le \log_p \lvert W_i \rvert = i + p^i \le 2p^i, \] and thus, by~\eqref{equ:Si-cap-Z-estimate} and~\eqref{equ:lim-zero-2}, \[ \lim_{i \to \infty} \frac{\log_p \lvert G : S_i Z \rvert}{\log_p \lvert Z : S_i\cap Z \rvert} \le \lim_{i \to \infty} \frac{\log_p \lvert G : S_i Z \rvert}{\log_p \lvert Z : \gamma_{p^i}(G) \cap Z \rvert} = 0. \] As in the proof of Theorem~\ref{thm:spec-lcs} we conclude that $\hdim_{G}^{\mathcal{S}}(Z) = 1$. \end{proof} A little extra work is required to determine the normal Hausdorff spectrum of~$G$ with respect to the Frattini series. We define \[ z_{i,j} = \begin{cases} [c_i,c_j] \in\gamma_{i+j}(G) & \text{for $i,j \ge 1$,} \\ 1 & \text{otherwise.} \end{cases} \] Proposition~\ref{pro:Wk-lcs} and Remark~\ref{rem:derived-subgroup} show that \[ H =\langle c_i \mid i \ge 1 \rangle \qquad \text{and} \qquad Z = \langle z_{i,j} \mid 1\le j<i \rangle. \] Moreover, from Corollary~\ref{cor:index} it can be seen that, for $k \ge 2$, \begin{equation} \label{equ:gamma-cap-Z} \gamma_k(G) \cap Z = \langle z_{i,j} \mid 1 \le j <i \text{ and } i+j \ge k \rangle. \end{equation} \begin{lemma} \label{lem:double-prod} For $i,j\in\mathbb{N}$ and $r \in \mathbb{N}_0$, the following identity holds: \[ [z_{i,j}, x, \overset{r}{\ldots}, x] = \prod_{s=0}^r \prod_{t=0}^s \, z_{i+r-t,j+r-s+t}^{\, \binom{r}{s} \binom{s}{t}}. \] \end{lemma} \begin{proof} We argue by induction on~$r$. 
For $r=0$ both sides are equal to~$z_{i,j}$. Now suppose that~$r \ge 1$. We observe that, for $m,n \ge 1$, \begin{equation} \label{equ:z-rel} [z_{m,n},x] = z_{m,n}^{\, -1} [c_m^{\, x}, c_n^{\, x}] = z_{m,n}^{\, -1} [c_m c_{m+1}, c_n c_{n+1}] = z_{m+1,n} \, z_{m,n+1} \, z_{m+1,n+1}. \end{equation} Thus the induction hypothesis yields \[ [z_{i,j}, x, \overset{r}{\ldots}, x] = [[z_{i,j}, x, \overset{r-1}{\ldots}, x],x] = \prod_{s=0}^r \prod_{t=0}^{s} \, [z_{i+r-1-t,j+r-1-s+t},x]^{\binom{r-1}{s} \binom{s}{t}} , \] and, in view of~\eqref{equ:z-rel}, the result follows from the identity \begin{multline*} \binom{r-1}{s-1}\binom{s-1}{t} + \binom{r-1}{s-1}\binom{s-1}{t-1} + \binom{r-1}{s}\binom{s}{t}\\ =\binom{r-1}{s-1}\binom{s}{t}+ \binom{r-1}{s}\binom{s}{t} = \binom{r}{s} \binom{s}{t} \end{multline*} for $0 \le s \le r$ and $0 \le t \le s$. \end{proof} Lemma~\ref{lem:com-ids} and Lemma~\ref{lem:double-prod} lead directly to a useful corollary. \begin{corollary} \label{cor:p-k} For $i,j\in\mathbb{N}$ and $k \in \mathbb{N}_0$, the following identity holds: \[ [z_{i,j},x^{p^k}]=z_{i+{p^k},j}z_{i,j+p^k}z_{i+p^k,j+p^k}. \] \end{corollary} \begin{theorem} \label{thm:spec-frattini} The pro-$p$ group $G$ has full normal Hausdorff spectrum \[ \hspec_{\trianglelefteq}^{\mathcal{F}}(G)=[0,1], \] with respect to the Frattini series~$\mathcal{F}$. \end{theorem} \begin{proof} For $i \in \mathbb{N}_0$, we write $[i]_p = (p^i-1)/(p-1)$ and note, for $i \ge 1$, that $[i-1]_p + p^{i-1} = [i]_p$. We consider \[ C_i = \langle x^{p^i} \rangle \ltimes \langle c_j \mid j \ge 1 + [i]_p \rangle \le_\mathrm{c} G \] and claim, for $i \ge 1$, that \begin{equation} \label{equ:Ck-claim} \Psi_i^-(G) \le \Phi_i(G) \le \Psi_i^+(G), \end{equation} where \[ \Psi_i^-(G) = C_i \big( \gamma_{1+2 [i-1]_p +p^{i-1}}(G) \cap Z \big) \quad \text{and} \quad \Psi_i^+(G) = C_i \big( \gamma_{2 + 2 [i-1]_p}(G) \cap Z \big). 
\] For $i=1$ the assertion is that $\Phi(G) = C_1 (\gamma_2(G) \cap Z) = \langle x^p, c_2, c_3 , \ldots \rangle (\gamma_2(G) \cap Z)$, which follows from Proposition~\ref{pro:Wk-lcs} and the fact that $Z \le \gamma_2(G)$. Now suppose that $i \ge 2$. Lemma~\ref{lem:com-ids} and the observation that $p^{i-1} \ge 2 p^{i-2}$ yield \[ [\gamma_{2+2 [i-2]_p}(G) \cap Z,x^{p^{i-1}}] \le \gamma_{2+2 [i-2]_p +p^{i-1}}(G) \cap Z \le \gamma_{2+2 [i-1]_p}(G) \cap Z; \] by construction, we have $[\gamma_{2 + 2 [i-2]_p}(G) \cap Z, c_n] = 1$ for all $n \ge 1$. Furthermore, Lemma~\ref{lem:com-ids} gives \begin{equation} \label{equ:c-n-x-comm} [c_n,x^{p^{i-1}}] \equiv c_{n+{p^{i-1}}} \pmod{\gamma_{2n+p^{i-1}}(G)\cap Z} \quad \text{for all $n \ge 1$,} \end{equation} and hence \[ [C_{i-1},x^{p^{i-1}}] \le C_i\big(\gamma_{2 + 2 [i-1]_p + p^{i-1}}(G) \cap Z \big). \] By induction, $\Phi_{i-1}(G) \le \Psi_{i-1}^+(G) = C_{i-1} \big(\gamma_{2+2 [i-2]_p}(G) \cap Z \big)$, and this implies \begin{multline*} \Phi_i(G) = \Phi(\Phi_{i-1}(G)) \le \langle x^{p^i} \rangle [C_{i-1},C_{i-1}] \big(\gamma_{2 + 2 [i-1]_p}(G) \cap Z \big) \\ \le C_i\big(\gamma_{2 + 2 [i-1]_p}(G) \cap Z \big) = \Psi_i^+(G). \end{multline*} It remains to check the first inclusion in~\eqref{equ:Ck-claim}; by induction, it suffices to show that \[ \Psi_i^-(G) \le K, \quad \text{where $K = \Phi\big( \Psi_{i-1}^-(G) \big).$} \] First we show that $\gamma_{1+2 [i-1]_p + p^{i-1}}(G)\cap Z \le K$ implies $C_i \le K$. Clearly, $x^{p^i} \in C_{i-1}^{\, p} \le K$, and \eqref{equ:c-n-x-comm} shows that, for $j \ge 1 + [i]_p$, there exists $d_j \in \gamma_{2 (j-p^{i-1}) + p^{i-1}}(G) \cap Z \le \gamma_{1 + 2 [i-1]_p + p^{i-1}}(G) \cap Z$ such that \[ c_j = [c_{j-p^{i-1}}, x^{p^{i-1}}] d_j \in [C_{i-1},C_{i-1}] \le K. \] Thus it suffices to prove that $\gamma_{1+2 [i-1]_p + p^{i-1}}(G) \cap Z \le K$. 
From \eqref{equ:gamma-cap-Z} we recall that \[ \gamma_{1+ 2 [i-1]_p + p^{i-1}}(G) \cap Z = \langle z_{j,k} \mid 1 \le k < j \text{ and } j+k \ge 1+2 [i-1]_p +p^{i-1} \rangle. \] From $[C_{i-1},C_{i-1}] \le K$ we deduce that \begin{equation} \label{equ:z-m-n-criterion} z_{m,n} \in K \quad \text{for $m > n \ge 1 + [i-1]_p$.} \end{equation} Thus, it remains to see that $z_{j,k} \in K$ for $j, k \in \mathbb{N}$ satisfying \[ 1 \le k < j, \qquad j+k \ge 1 + 2 [i-1]_p + p^{i-1} \qquad \text{and} \qquad k \le [i-1]_p. \] Given such $j, k \in \mathbb{N}$, we observe that \[ k < 1 + [i-1]_p \le j - p^{i-1} \qquad \text{and} \qquad (j - p^{i-1}) + k \ge 1 + 2 [i-1]_p; \] hence \eqref{equ:gamma-cap-Z} implies \[ z_{j - p^{i-1},k} \in \gamma_{1 + 2 [i-1]_p}(G) \cap Z \le \gamma_{1+2 [i-2]_p +p^{i-2}}(G) \cap Z \le \Psi_{i-1}^-(G). \] We apply Corollary~\ref{cor:p-k} to deduce that \begin{equation} \label{equ:z-product} z_{j,k} \, z_{j - p^{i-1}, k + p^{i-1}} \, z_{j, k + p^{i-1}} = [z_{j - p^{i-1},k}, x^{p^{i-1}}] \in [\Psi_{i-1}^-(G), C_{i-1}] \le K. \end{equation} As $j > k + p^{i-1} \ge 1 + [i-1]_p$, we see from \eqref{equ:z-m-n-criterion}, for $m=j$ and $n=k + p^{i-1}$ that $z_{j, k + p^{i-1}} \in K$. Similarly, we deduce that $z_{j - p^{i-1}, k + p^{i-1}} \in K$, if $j - p^{i-1} > k+p^{i-1}$, and, finally, $z_{j - p^{i-1}, k + p^{i-1}} = z_{k + p^{i-1}, j - p^{i-1}}^{\, -1} \in K$, if $j-p^{i-1} \le k + p^{i-1}$ and thus $j - p^{i-1} \ge 1+ [i-1]_p$. Feeding this information into \eqref{equ:z-product}, we obtain $z_{j,k} \in K$ which concludes the proof of~\eqref{equ:Ck-claim}. From~\eqref{equ:Ck-claim} we deduce that \[ \gamma_{1 + 2 [i-1]_p + p^{i-1}}(G) \cap Z \le \Phi_i(G) \cap Z \le \gamma_{2 + 2 [i-1]_p}(G) \cap Z, \] and from Corollary~\ref{cor:index} we see that \[ \lim_{i \to \infty} \frac{2 [i-1]_p + p^{i-1}}{\log_p \vert Z : \gamma_{2+2 [i-1]_p}(G) \cap Z \rvert} = 0. 
\] As in the proof of Theorem~\ref{thm:spec-lcs} we want to apply Proposition~\ref{pro:path-area}, here with $e_i = 1$, $n_i = 2 [i-1]_p +p^{i-1}$ and $M_i = \gamma_{2 + 2 [i-1]_p}(G)$, to conclude that $G$ has full normal Hausdorff spectrum. It remains to check that $\hdim_G^{\mathcal{F}}(Z)=1$. From \cite[Prop.~2.6]{KlTh19} we see that $\log_p \lvert G : \Phi_i(G) Z \rvert = i + [i]_p$, and hence Corollary~\ref{cor:index} implies \[ \lim_{i \to \infty} \frac{\log_p \lvert G : \Phi_i(G)Z \rvert}{\log_p \lvert Z : \Phi_i(G) \cap Z \rvert} = 0. \] As in the proof of Theorem~\ref{thm:spec-lcs} we see that $\hdim_G^{\mathcal{F}}(Z)=1$. \end{proof} Theorem~\ref{thm:main} summarises the results in Theorems~\ref{thm:spec-lcs}, \ref{thm:spec-pp} and~\ref{thm:spec-frattini}.
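Since the generators $z_{m,n}$ are central and $Z$ has exponent~$p$, the identity of Lemma~\ref{lem:double-prod} and its collapse modulo~$p$ to Corollary~\ref{cor:p-k} can be checked mechanically on formal exponent vectors. The following Python sketch (purely illustrative; all names are ad hoc and it plays no role in the proofs) iterates the relation \eqref{equ:z-rel} and compares the result with the closed form:

```python
from collections import Counter
from math import comb

def step(vec):
    # one application of [. , x] via the relation
    # [z_{m,n}, x] = z_{m+1,n} z_{m,n+1} z_{m+1,n+1};
    # vec is a formal integer exponent vector on the central generators z_{m,n}
    out = Counter()
    for (m, n), e in vec.items():
        for key in ((m + 1, n), (m, n + 1), (m + 1, n + 1)):
            out[key] += e
    return out

def iterated_commutator(i, j, r):
    # exponent vector of [z_{i,j}, x, ..., x] with r copies of x
    vec = Counter({(i, j): 1})
    for _ in range(r):
        vec = step(vec)
    return vec

def closed_form(i, j, r):
    # exponent vector predicted by Lemma lem:double-prod
    vec = Counter()
    for s in range(r + 1):
        for t in range(s + 1):
            vec[(i + r - t, j + r - s + t)] += comb(r, s) * comb(s, t)
    return vec

assert iterated_commutator(2, 1, 6) == closed_form(2, 1, 6)

# modulo p only the three terms of Corollary cor:p-k survive (here p = 5, k = 1)
p = 5
surviving = {key: e % p for key, e in closed_form(2, 1, p).items() if e % p}
assert surviving == {(2 + p, 1): 1, (2, 1 + p): 1, (2 + p, 1 + p): 1}
```

The final check reflects that $\binom{p^k}{s} \equiv 0 \pmod{p}$ for $0 < s < p^k$, which is exactly why Corollary~\ref{cor:p-k} follows from Lemma~\ref{lem:double-prod}.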
\section{Introduction} \label{sec0} Let $p$ be a prime and $k=\overline{{\mathbb{F}}}_p$ be an algebraic closure of the field with $p$ elements. Let $G$ be a connected reductive algebraic group over $k$ and assume that $G$ is defined over the finite subfield ${\mathbb{F}}_q\subseteq k$, where $q$ is a power of~$p$. Let $F\colon G\rightarrow G$ be the corresponding Frobenius map. We are interested in studying the representations (over an algebraically closed field of characteristic~$0$) of the finite group $G^F=\{g\in G \mid F(g)=g\}$. Assuming that $p$ is a ``good'' prime for $G$, Kawanaka \cite{kaw0}, \cite{kaw1}, \cite{kaw2} described a procedure by which one can associate with any unipotent element $u\in G^F$ a representation $\Gamma_u$ of $G^F$, obtained by inducing a certain one-dimensional representation from a unipotent subgroup of $G^F$. If $u$ is the identity element, then $\Gamma_1$ is the regular representation of $G^F$; if $u$ is a regular unipotent element, then $\Gamma_u$ is a Gelfand--Graev representation as defined, for example, in \cite[\S 8.1]{C2} or \cite[\S 14]{St}. For arbitrary $u$, the representation $\Gamma_u$ is called a {\it generalised Gelfand--Graev representation} (GGGR for short); it only depends on the $G^F$-conjugacy class of~$u$. A fundamental step in understanding the GGGRs was achieved by Lusztig \cite{L2}, where the characters of GGGRs are expressed in terms of characteristic functions of intersection cohomology complexes on~$G$. In \cite{L2} it is assumed that $p$ is sufficiently large; in \cite{tay} it is shown that one can relax these assumptions so that everything works as in Kawanaka's original approach. These results have several consequences. By \cite{gehe}, the characters of the various $\Gamma_u$ span the ${\mathbb{Z}}$-module of all unipotently supported virtual characters of~$G^F$.
In addition to the original applications in \cite{kaw0}, \cite{kaw1}, \cite{kaw2}, GGGRs have turned out to be very useful in various questions concerning $\ell$-modular representations of $G^F$, where $\ell$ is a prime not equal to~$p$; see, e.g., \cite{gehi2}, \cite{duma}. Thus, it seems desirable to explore the possibilities for a definition without any restriction on~$p,q$. These notes arose from an attempt to give such a definition. Recall that $p$ is ``good'' for $G$ if $p$ is good for each simple factor involved in~$G$; the conditions for the various simple types are as follows. \begin{center} $\begin{array}{rl} A_n: & \mbox{no condition}, \\ B_n, C_n, D_n: & p \neq 2, \\ G_2, F_4, E_6, E_7: & p \neq 2,3, \\ E_8: & p \neq 2,3,5. \end{array}$ \end{center} Easy examples indicate that one cannot expect a good definition of GGGRs for all unipotent elements of $G^F$. Instead, it seems reasonable to restrict oneself to those unipotent classes which ``come from characteristic~$0$'', where the classical Dynkin--Kostant theory is available; see Section~\ref{sec1}. This is also consistent with the picture presented by Lusztig \cite{L3a}, \cite{L3b}, \cite{L3c}, \cite{L3d} for dealing with unipotent classes in small characteristic. Based on this framework, we formulate in Definition~\ref{def24} some precise conditions under which it should be possible to define GGGRs for a given unipotent class. Of course, these conditions will be satisfied if $p$ is a good prime for~$G$, and lead to Kawanaka's original GGGRs; the question then is how far we can go beyond this. Our answer to this question is as follows. An essential feature of GGGRs is that they are very closely related to the ``unipotent supports'' of the irreducible representations of $G^F$, in the sense of Lusztig \cite{L2}. (In a somewhat different way, and without complete proofs, this concept appeared under the name of ``wave front set'' in Kawanaka \cite{kaw2}.)
Let ${\mathfrak{C}}^\bullet$ be the set of unipotent classes of $G$ which arise as the unipotent support of some irreducible representation of $G^F$, or of $G^{F^n}$ for some $n\geq 1$. (Thus, since $G=\bigcup_{n\geq 1} G^{F^n}$, the set ${\mathfrak{C}}^\bullet$ only depends on $G$ but not on the particular Frobenius map~$F$.) If $p$ is a good prime for $G$, then it is known that ${\mathfrak{C}}^\bullet$ is the set of all unipotent classes of $G$. In general, all classes in ${\mathfrak{C}}^\bullet$ indeed ``come from characteristic~$0$''. Based on the methods in \cite{L4}, an explicit description of the sets ${\mathfrak{C}}^\bullet$, for $G$ simple and $p$ bad, is given in Proposition~\ref{uni2}. This result complements the general results on ``unipotent support'' in \cite{gema}, \cite{L2} and may be of independent interest. Now, extensive experimentation (with computer programs written in {\sf GAP} \cite{gap4}) led us to the expectation, formulated as Conjecture~\ref{main}, that our conditions in Definition~\ref{def24} will work for all the classes in ${\mathfrak{C}}^\bullet$, without any restriction on $p,q$. In Section~\ref{sec4}, we work out in detail the example where $G$ is of type $F_4$. Our investigations also suggest a new characterisation of Lusztig's special unipotent classes; see Conjecture~\ref{main2} and Corollary~\ref{corf4a}. These notes merely contain examples and conjectures; nevertheless we hope that they show that the story about GGGRs is by no means complete and that there is some evidence for a further theory in bad characteristic. \section{Weighted Dynkin diagrams} \label{sec1} We use Carter \cite{C2} as a general reference for results on algebraic groups and unipotent classes. Let $G,k,p,\ldots$ be as in Section~\ref{sec0}. Also recall that $G$ is defined over the finite field ${\mathbb{F}}_q\subseteq k$, with corresponding Frobenius map $F\colon G\rightarrow G$.
We fix an $F$-stable maximal torus $T \subseteq G$ and an $F$-stable Borel subgroup $B\subseteq G$ containing~$T$. We have $B=U\rtimes T$ where $U$ is the unipotent radical of~$B$. Let $\Phi$ be the set of roots of $G$ with respect to~$T$ and $\Pi\subseteq\Phi$ be the set of simple roots determined by $B$. Let $\Phi^+$ and $\Phi^-$ be the corresponding sets of positive and negative roots, respectively. Let ${\mathfrak{g}}=\mbox{Lie}(G)$ be the Lie algebra of $G$. Then $G$ acts on ${\mathfrak{g}}$ via the adjoint representation $\mbox{Ad}\colon G\rightarrow {\operatorname{GL}}({\mathfrak{g}})$. \begin{abs} \label{abs11} For each $\alpha\in\Phi$, we have a corresponding homomorphism of algebraic groups $x_\alpha\colon k^+\rightarrow G$, $u \mapsto x_\alpha(u)$, which is an isomorphism onto its image; furthermore, $tx_\alpha(u)t^{-1}=x_\alpha(\alpha(t)u)$ for all $t\in T$ and $u\in k$. Setting $U_\alpha:=\{x_\alpha(u)\mid u\in k\}$, we have $G=\langle T,U_\alpha \;(\alpha\in \Phi)\rangle$. Note that $U_\alpha\subseteq G_{\operatorname{der}}$ for all $\alpha\in\Phi$, where $G_{\operatorname{der}}$ denotes the derived subgroup of~$G$. On the level of ${\mathfrak{g}}$, we have a direct sum decomposition \[ {\mathfrak{g}}={\mathfrak{t}}\oplus \bigoplus_{\alpha\in\Phi}{\mathfrak{g}}_\alpha\] where ${\mathfrak{t}}=\mbox{Lie}(T)$ is the Lie algebra of $T$ and ${\mathfrak{g}}_\alpha$ is the image of the differential $d_0x_\alpha\colon k\rightarrow {\mathfrak{g}}$; furthermore, $\mbox{Ad}(t)(y)=\alpha(t)y$ for $t\in T$ and $y\in {\mathfrak{g}}_\alpha$. We set \[e_\alpha:=d_0x_\alpha(1)\in {\mathfrak{g}}_\alpha.\] Then $e_\alpha\neq 0$ and ${\mathfrak{g}}_\alpha=ke_\alpha$. (For all this see, e.g., \cite[\S 8.1]{spr}.) \end{abs} \begin{abs} \label{abs12} For $\alpha\in\Phi$, we can write uniquely $\alpha= \sum_{\beta\in \Pi} n_\beta\beta$ where $n_\beta\in{\mathbb{Z}}$ for all $\beta\in\Pi$. 
Then ${\operatorname{ht}}(\alpha):=\sum_{\beta\in\Pi} n_\beta$ is called the height of~$\alpha$. We fix once and for all a total ordering $\preceq$ of $\Phi^+$ which is compatible with the height, that is, if $\alpha,\beta \in\Phi^+$ are such that $\alpha\preceq \beta$, then ${\operatorname{ht}}(\alpha)\leq {\operatorname{ht}}(\beta)$. Then every $u\in U$ has a unique expression $u=\prod_{\alpha\in\Phi^+} x_\alpha(u_\alpha)$ where $u_\alpha \in k$ (and the product is taken in the given order $\preceq$ of $\Phi^+$). Let $\alpha,\beta\in\Phi^+$, $\alpha\neq \beta$. Let $u,v\in k$. Then we have Chevalley's commutator relations \[ x_\alpha(u)x_\beta(v)x_\alpha(u)^{-1}x_\beta(v)^{-1}=\prod_{i,j>0;\; i\alpha+j\beta\in\Phi} x_{i\alpha+j\beta}(C_{\alpha,\beta,i,j} u^iv^j)\] where the constants $C_{\alpha,\beta,i,j}\in k$ only depend on $\alpha, \beta,i,j$ but not on~$u,v$ (and, again, the product on the right hand side is taken in the given order $\preceq$ of $\Phi^+$). Furthermore, if $\alpha+ \beta \in \Phi$, then \[[e_\alpha,e_\beta]=N_{\alpha,\beta}e_{\alpha+\beta}\qquad \mbox{where} \qquad N_{\alpha,\beta}:=C_{\alpha,\beta, 1,1}\in k.\] (Since all $U_\alpha$, $\alpha\in\Phi$, are contained in the semisimple algebraic group $G_{\operatorname{der}}$, this follows from \cite[Lemma~15 (p.~22) and Remark (p.~64)]{St}.) \end{abs} \begin{abs} \label{abs13} It will also be convenient to fix some notation concerning the action of the Frobenius map $F\colon G\rightarrow G$. By the results in \cite[\S 10]{St}, there exist a permutation $\tau \colon \Phi\rightarrow\Phi$ and signs $\epsilon_\alpha=\pm 1$ ($\alpha\in\Phi$) such that \[ F(x_\alpha(u))=x_{\tau(\alpha)}(\epsilon_\alpha u^q)\qquad \mbox{for all $u\in k$}.\] Here, we can assume that $\tau(\Pi)=\Pi$ and $\epsilon_{\pm \beta}=1$ for all $\beta\in\Pi$. Now $F$ also induces a Frobenius map on the Lie algebra ${\mathfrak{g}}$ which we denote by the same symbol. 
We have $F(uy)=u^qF(y)$ for all $u\in k$ and $y\in{\mathfrak{g}}$; furthermore, \[ F(e_\alpha)=\epsilon_\alpha e_{\tau(\alpha)} \qquad \mbox{for all $\alpha\in\Phi$}.\] Finally, for $g\in G$ and $y\in {\mathfrak{g}}$, we have $\mbox{Ad}(F(g))(F(y))= F(\mbox{Ad}(g)(y))$. \end{abs} \begin{abs} \label{abs14} Let $G_0$ be a connected reductive algebraic group over ${\mathbb{C}}$ of the same type as $G$; let ${\mathfrak{g}}_0=\mbox{Lie}(G_0)$ be its Lie algebra. Then, by the classical Dynkin--Kostant theory (see, e.g., \cite[\S 5.6]{C2}), the nilpotent $\mbox{Ad}(G_0)$-orbits in ${\mathfrak{g}}_0$ are parametrized by a certain set $\Delta$ of so-called {\it weighted Dynkin diagrams}, i.e., maps $d\colon \Phi\rightarrow {\mathbb{Z}}$ such that \begin{itemize} \item[(a)] $d(-\alpha)=-d(\alpha)$ for all $\alpha\in\Phi$ and $d(\alpha+\beta)=d(\alpha)+d(\beta)$ for all $\alpha,\beta\in\Phi$ such that $\alpha+\beta\in\Phi$; \item[(b)] $d(\beta)\in\{0,1,2\}$ for every simple root $\beta\in \Pi$. \end{itemize} Furthermore, these nilpotent orbits in ${\mathfrak{g}}_0$ are naturally in bijection with the unipotent classes of $G_0$ (see \cite[\S 1.15]{C2}). If $G_0$ is a simple algebraic group, then the corresponding set $\Delta$ of weighted Dynkin diagrams is explicitly known in all cases; see \cite[\S 13.1]{C2} and the references there. (Several examples will be given below.) For each $d\in \Delta$, we denote by ${\mathcal{O}}_d$ the corresponding nilpotent orbit in ${\mathfrak{g}}_0$ and set \begin{center} ${\mathbf{b}}_d:=\frac{1}{2}(\dim G_0-\mbox{rank}(G_0)-\dim {\mathcal{O}}_d)$. \end{center} This is a very useful invariant for distinguishing nilpotent orbits. (The number ${\mathbf{b}}_d$ is also the dimension of the variety of Borel subgroups of $G_0$ containing an element in the unipotent class corresponding to ${\mathcal{O}}_d$; see \cite[\S 1.15, \S 5.10]{C2}.) \end{abs} \begin{abs} \label{abs14a} Let us fix $d\in\Delta$.
For $i\in{\mathbb{Z}}$, we set $\Phi_i:=\{\alpha\in\Phi \mid d(\alpha)=i\}$ and define \[ {\mathfrak{g}}(i):=\left\{ \begin{array}{rl} \bigoplus_{\alpha\in\Phi_i} {\mathfrak{g}}_\alpha &\quad \mbox{if $i\neq 0$},\\ {\mathfrak{t}}\oplus \bigoplus_{\alpha\in\Phi_0} {\mathfrak{g}}_\alpha &\quad \mbox{if $i=0$}. \end{array}\right.\] Thus, as in \cite[\S 2.1]{kaw1}, we obtain a grading ${\mathfrak{g}}= \bigoplus_{i\in{\mathbb{Z}}} {\mathfrak{g}}(i)$; note that we do have $[{\mathfrak{g}}(i),{\mathfrak{g}}(j)] \subseteq {\mathfrak{g}}(i+j)$ for all $i,j\in{\mathbb{Z}}$. For any $i\geq 0$, we also set ${\mathfrak{g}}(\geq i):=\bigoplus_{j\geq i} {\mathfrak{g}}(j)$. Furthermore, we define subgroups of $G$ as follows. \begin{align*} P&:=\langle T,U_\alpha\mid\alpha\in\Phi_i\mbox{ for some $i\geq 0$}\rangle,\\ U_1&:=\langle U_\alpha\mid\alpha\in\Phi_i\mbox{ for some $i\geq 1$}\rangle,\\ L&:=\langle T,U_\alpha\mid\alpha\in\Phi_0\rangle. \end{align*} Then $P$ is a parabolic subgroup of $G$ with unipotent radical $U_1$ and Levi decomposition $P=U_1\rtimes L$. The Lie algebra of $P$ is given by ${\mathfrak{p}}:=\text{Lie}(P)={\mathfrak{g}}(\geq 0)\subseteq {\mathfrak{g}}$. More generally, for any integer $i\geq 1$, we set \[ U_i:=\langle U_\alpha\mid \alpha\in\Phi_j \mbox{ for some $j\geq i$} \rangle \subseteq G.\] Thus, we obtain a chain of subgroups $P\supseteq U_1\supseteq U_2 \supseteq U_3\supseteq \ldots$; using Chevalley's commutator relations, one immediately sees that each $U_i$ is a normal subgroup of $P$ and that $U_i/U_{i+1}$ is abelian. \end{abs} \begin{abs} \label{abs15} Let us fix a weighted Dynkin diagram $d\in \Delta$ as above. For any integer $i\geq 1$, we have a corresponding subgroup $U_i\subseteq G$ and a corresponding subspace ${\mathfrak{g}}(i)\subseteq {\mathfrak{g}}$. Following Kawanaka \cite[(3.1.1)]{kaw1}, we define a map \[f\colon U_1\rightarrow {\mathfrak{g}}(1)\oplus {\mathfrak{g}}(2)\] as follows. Let $u\in U_1$.
As in \ref{abs12}, we have a unique expression \[u =\prod_{\alpha\in\Phi_i \text{ for $i\geq 1$}} x_\alpha(u_\alpha) \qquad (u_\alpha\in k)\] where the product is taken in the given order $\preceq$ on $\Phi^+$. Then we set \[f(u)=f\Bigl(\prod_{\alpha\in\Phi_i \text{ for $i\geq 1$}} x_\alpha (u_\alpha)\Bigr):=\sum_{\alpha\in\Phi_1\cup\Phi_2} u_\alpha e_\alpha.\] \end{abs} \begin{lem}[Cf.\ Kawanaka \protect{\cite[\S 3.1]{kaw1}}] \label{abs16} Let $u,v\in U_1$. Then the following hold. \begin{itemize} \item[(a)] If $u\in U_2$ or $v\in U_2$, then $f(uv)=f(u)+f(v)$. \item[(b)] $f(uvu^{-1}v^{-1})\equiv [f(u),f(v)] \bmod {\mathfrak{g}}(\geq 3)$. \item[(c)] $f(F(u))=F(f(u))$. \end{itemize} \end{lem} \begin{proof} This is a rather straightforward application of Chevalley's commutator relations. For (b), we use the fact that $[e_\alpha,e_\beta]= C_{\alpha,\beta,1,1} e_{\alpha+\beta}$ if $\alpha+\beta\in\Phi$; see \ref{abs12}. For (c), we use the formulae in \ref{abs13}. We omit further details. \end{proof} \begin{abs} \label{abs17} The general idea for defining GGGRs corresponding to a fixed $d\in \Delta$ is as follows. (In the following discussion we avoid any reference to the characteristic of~$k$.) First of all, we assume that $d$ is invariant under the permutation $\tau\colon \Phi\rightarrow\Phi$ induced by $F$. Consequently, all the subgroups $P$, $U_i$ ($i\geq 1$) of $G$ are $F$-stable and all the subspaces ${\mathfrak{g}}(i)$ ($i\geq 0$) are $F$-stable. Let us fix a non-trivial character $\psi\colon {\mathbb{F}}_q^+\rightarrow {\mathbb{C}}^\times$. Let us also consider a linear map $\lambda\colon {\mathfrak{g}}(2)\rightarrow k$ defined over ${\mathbb{F}}_q$, that is, we have $\lambda(F(y))=\lambda(y)^q$ for all $y\in {\mathfrak{g}}(2)$. 
Then Lemma~\ref{abs16}(a) shows that $U_2\rightarrow k^+$, $u\mapsto \lambda(f(u))$, is a group homomorphism and so, by Lemma~\ref{abs16}(c), we also obtain a group homomorphism \[ \chi_\lambda \colon U_2^F \rightarrow {\mathbb{C}}^\times, \qquad u\mapsto \psi\bigl(\lambda(f(u))\bigr).\] We shall require that $\lambda$ is in ``sufficiently general position'' (where this term will have to be further specified; see Definition~\ref{def24} below). Let us assume that this is the case. If ${\mathfrak{g}}(1)=\{0\}$, then the GGGR corresponding to~$d, \lambda$ will simply be given by the induced representation \[\Gamma_{d,\lambda}:={\operatorname{Ind}}_{U_2^F}^{G^F}\bigl(\chi_\lambda\bigr).\] Now consider the case where ${\mathfrak{g}}(1)\neq \{0\}$. Since $[{\mathfrak{g}}(1),{\mathfrak{g}}(1)] \subseteq {\mathfrak{g}}(2)$, we obtain a well-defined alternating bilinear form \[ \sigma_\lambda \colon {\mathfrak{g}}(1)\times {\mathfrak{g}}(1)\rightarrow k,\qquad (y,z)\mapsto \lambda\bigl([y,z]\bigr).\] Assume also that the radical of this bilinear form is zero. Then we choose an $F$-stable Lagrangian subspace in ${\mathfrak{g}}(1)$ and pull back this subspace to an $F$-stable subgroup $U_{1.5} \subseteq U_1$ via the map~$f$. Using Lemma~\ref{abs16}(b) we see that $\ker(\chi_\lambda)$ is normal in $U_{1.5}^F$ and $U_{1.5}^F/\ker(\chi_\lambda)$ is an abelian $p$-group. (See also the proof of \cite[Lemma~3.1.9]{kaw1}.) So we can extend $\chi_\lambda$ to a character $\tilde{\chi}_\lambda \colon U_{1.5}^F \rightarrow {\mathbb{C}}^\times$. In this case, the GGGR corresponding to~$d,\lambda$ will be given by the induced representation \[\Gamma_{d,\lambda}:={\operatorname{Ind}}_{U_{1.5}^F}^{G^F}\bigl(\tilde{\chi}_\lambda\bigr).\] Note that $[U_1^F:U_{1.5}^F]=[U_{1.5}^F:U_2^F]$. 
Furthermore, it turns out that \[ {\operatorname{Ind}}_{U_2^F}^{G^F} \bigl(\chi_\lambda \bigr)=[U_1^F:U_{1.5}^F]\cdot \Gamma_{d,\lambda},\] which shows that the definition of $\Gamma_{d,\lambda}$ does not depend on the choice of the Lagrangian subspace or the extension $\tilde{\chi}_\lambda$ of~$\chi_\lambda$ (cf.\ \cite[1.3.6]{kaw0}, \cite[3.1.12]{kaw1}). \end{abs} In Kawanaka's set-up \cite[\S 1.2]{kaw0}, \cite[\S 3.1]{kaw1}, the above assumption on the radical of $\sigma_\lambda$ is always satisfied. (See also Remark~\ref{rem24a} below.) Our plan for the definition of GGGRs in bad characteristic is to follow the above general procedure, but we have to find out in which situations this still makes sense at all. The following two examples show that there is a serious issue concerning the radical of $\sigma_\lambda$ when ${\mathfrak{g}}(1)\neq \{0\}$. \begin{exmp} \label{bsp4} Let $G=\mbox{Sp}_4(k)$. We have $\Phi=\{\pm \alpha,\pm \beta,\pm (\alpha+\beta),\pm (2\alpha+ \beta)\}$ where $\Pi=\{\alpha,\beta\}$; here, $\alpha$ is a short simple root and $\beta$ is a long simple root. By \cite[p.~394]{C2}, there are $4$~weighted Dynkin diagrams $d\in \Delta$, where: \[ (d(\alpha),d(\beta))\in\{(0,0),(1,0),(0,2),(2,2)\}.\] Let $d_0\in\Delta$ be such that $d_0(\alpha)=1$ and $d_0(\beta)=0$. Then ${\mathbf{b}}_{d_0}=2$ and \[ {\mathfrak{g}}(1)=\langle e_\alpha,e_{\alpha+\beta}\rangle_k, \qquad\qquad {\mathfrak{g}}(2)=\langle e_{2\alpha+\beta}\rangle_k.\] We have $[e_\alpha,e_{\alpha+\beta}]=\pm 2e_{2\alpha+\beta}$. Let $\lambda \colon {\mathfrak{g}}(2)\rightarrow k$ be a linear map. If $p\neq 2$, then the radical of the alternating form $\sigma_\lambda$ is zero whenever $\lambda(e_{2\alpha+ \beta})\neq 0$. Now assume that $p=2$. Then $[e_\alpha,e_{\alpha+\beta}]=0$ and so $\sigma_\lambda$ is identically zero for any~$\lambda$. 
The commutator relations in \ref{abs12} also show that the subgroup $U_1=\langle U_\alpha, U_{\alpha+\beta}, U_{2\alpha+\beta} \rangle$ associated with~$d_0$ is abelian. In this case, it is not clear to us at all how one should proceed in order to define a GGGR associated with~$d_0$. \end{exmp} \begin{abs} \label{antid} To simplify the notation for matrices, we define $\mbox{antidiag}(x_1,\ldots,x_n)$ to be the $n\times n$-matrix with entry $x_i$ at position $(i,n+1-i)$ for $1\leq i \leq n$, and entry $0$ otherwise. Thus, for example, \[ \mbox{antidiag}(x_1,x_2,x_3)=\left(\begin{array}{ccc} 0 & 0 & x_1\\ 0 & x_2 & 0 \\ x_3 & 0 & 0 \end{array}\right).\] \end{abs} \begin{exmp} \label{bg2} Let $G=G_2(k)$. We have \[\Phi=\{\pm \alpha,\pm \beta,\pm (\alpha+\beta),\pm (2\alpha+ \beta), \pm (3\alpha+\beta),\pm (3\alpha+2\beta)\}\] where $\Pi=\{\alpha,\beta\}$; here, $\alpha$ is a short simple root and $\beta$ is a long simple root. By \cite[p.~401]{C2}, there are $5$~weighted Dynkin diagrams $d\in \Delta$, where: \[ (d(\alpha),d(\beta))\in\{(0,0),(0,1),(1,0),(0,2),(2,2)\}.\] (a) Let $d\in\Delta$ be such that $d(\alpha)=1$ and $d(\beta)=0$. Then ${\mathbf{b}}_d=2$ and \[ {\mathfrak{g}}(1)=\langle e_\alpha,e_{\alpha+\beta}\rangle_k, \qquad\qquad {\mathfrak{g}}(2)=\langle e_{2\alpha+\beta}\rangle_k.\] We have $[e_\alpha,e_{\alpha+\beta}]=\pm 2e_{2\alpha+\beta}$. Let $\lambda \colon {\mathfrak{g}}(2)\rightarrow k$ be a linear map. If $p=2$, then $\sigma_\lambda$ is identically zero for any~$\lambda$. If $p\neq 2$, then the radical of $\sigma_\lambda$ is zero whenever $\lambda(e_{2\alpha+\beta})\neq 0$. (b) Let $d\in\Delta$ be such that $d(\alpha)=0$ and $d(\beta)=1$. 
Then ${\mathbf{b}}_d=3$ and \[ {\mathfrak{g}}(1)=\langle e_\beta,e_{\alpha+\beta},e_{2\alpha+\beta}, e_{3\alpha+\beta}\rangle_k, \qquad \qquad {\mathfrak{g}}(2)=\langle e_{3\alpha+2\beta}\rangle_k.\] The only non-zero Lie brackets are $[e_\beta,e_{3\alpha+\beta}]= \pm e_{3\alpha+2\beta}$, $[e_{\alpha+\beta},e_{2\alpha+\beta}]=\pm 3 e_{3\alpha+2\beta}$. Hence, if $\lambda\colon {\mathfrak{g}}(2)\rightarrow k$ is any linear map, then the Gram matrix of $\sigma_\lambda$ with respect to the above basis of ${\mathfrak{g}}(1)$ is given by \begin{center} $\pm x_1\cdot\mbox{antidiag}(1,3,-3,-1)\qquad\mbox{where}\qquad x_1:=\lambda(e_{3\alpha+2\beta})$. \end{center} The determinant of this matrix is $\pm 9x_1^4$. Hence, if $p=3$, then there is no $\lambda$ such that the radical of $\sigma_\lambda$ is zero. On the other hand, if $p\neq 3$, then the radical of $\sigma_\lambda$ is zero whenever $x_1=\lambda(e_{3\alpha+2\beta})\neq 0$. \end{exmp} \section{Nilpotent and unipotent pieces} \label{sec2} We keep the set-up of the previous section. Given a weighted Dynkin diagram $d\in\Delta$, our main task is to find suitable conditions under which a linear map $\lambda\colon {\mathfrak{g}}(2)\rightarrow k$ may be considered to be in ``sufficiently general position'' (cf.\ \ref{abs17}). For this purpose, we use Lusztig's framework \cite{L3a}--\cite{L3d} for dealing with unipotent elements in $G$ and nilpotent elements in ${\mathfrak{g}}$ when~$p$ (the characteristic of~$k$) is small. \begin{abs} \label{abs20} Let ${\mathcal{U}}$ be the variety of unipotent elements of $G$. In \cite[\S 1]{L3a}, Lusztig introduced a natural partition \[ {\mathcal{U}}=\coprod_{d \in \Delta} H_d\] where each $H_d$ is an irreducible locally closed subset stable under conjugation by $G$. A general, case-free proof for the existence of this partition was given by Clarke--Premet \cite[Theorem~1.4]{clpr}. The sets $\{H_d\mid d\in \Delta\}$ are called the {\it unipotent pieces} of $G$. 
In each such piece $H_d$, there is a unique unipotent class $C_d$ of $G$ such that $C_d$ is open dense in $H_d$. If $p$ is a good prime for $G$, then $H_d=C_d$. In general, $H_d$ is the union of $C_d$ and a finite number of unipotent classes of dimension strictly smaller than $\dim C_d$. We will say that the unipotent classes $\{C_d\mid d\in \Delta\}$ ``come from characteristic~$0$''. (Alternatively, the latter notion can be defined using the Springer correspondence, see \cite[1.3, 1.4]{L3a}, or the results of Spaltenstein \cite{spa}; see also \cite[\S 2]{gema}. All these definitions agree as can be checked using the explicit knowledge of the unipotent classes and the Springer correspondence in all cases.) \end{abs} \begin{abs} \label{abs21} We recall some further notation and some results from \cite[\S 2]{L3d}. There is a coadjoint action of $G$ on the dual vector space ${\mathfrak{g}}^*$ which we denote by $g.\xi$ for $g\in G$ and $\xi\in {\mathfrak{g}}^*$; thus, $(g.\xi)(y)=\xi(\mbox{Ad}(g^{-1})(y))$ for all $y\in {\mathfrak{g}}$. We denote by $G_\xi$ the stabilizer of $\xi\in{\mathfrak{g}}^*$ under this action. As in \cite{L3d}, an element $\xi\in {\mathfrak{g}}^*$ is called nilpotent if there exists some $g\in G$ such that the Lie algebra of the Borel subgroup $B\subseteq G$ is contained in $\mbox{Ann}(g.\xi)$. Let \[{\mathcal{N}}_{{\mathfrak{g}}^*}:=\{\xi\in{\mathfrak{g}}^*\mid \mbox{$\xi$ is nilpotent}\}.\] For any $Y\subseteq {\mathfrak{g}}$, we denote $\mbox{Ann}(Y):=\{\xi\in {\mathfrak{g}}^*\mid \xi(y)=0 \mbox{ for all $y\in Y$}\}$. Let us fix a weighted Dynkin diagram $d\in \Delta$. As in \ref{abs14a}, we have a corresponding grading ${\mathfrak{g}}=\bigoplus_{i\in {\mathbb{Z}}} {\mathfrak{g}}(i)$. In order to indicate the dependence on $d$, we shall now write ${\mathfrak{g}}_d(i)={\mathfrak{g}}(i)$ for all $i\in{\mathbb{Z}}$; similarly, we write $P_d=P$ for the corresponding parabolic subgroup of~$G$. 
Now, we also have a grading ${\mathfrak{g}}^*=\bigoplus_{j\in{\mathbb{Z}}}{\mathfrak{g}}_d(j)^*$ where we set \[{\mathfrak{g}}_d(j)^*:=\mbox{Ann}\Bigl(\bigoplus_{i\in {\mathbb{Z}}:\;i\neq -j} {\mathfrak{g}}_d(i)\Bigr) \qquad \mbox{for any $j\in{\mathbb{Z}}$}.\] We note that the subspace ${\mathfrak{g}}_d(\geq j)^*:=\bigoplus_{j'\in{\mathbb{Z}}:\;j'\geq j} {\mathfrak{g}}_d(j')^*$ is stable under the coadjoint action of $P_d$. Let \[ {\mathfrak{g}}_d(2)^{*!}:=\{\xi\in{\mathfrak{g}}_d(2)^*\mid G_\xi\subseteq P_d\}\] and $\sigma_d^*:={\mathfrak{g}}_d(2)^{*!}+{\mathfrak{g}}_d(\geq 3)^*\subseteq {\mathfrak{g}}_d(\geq 2)^*$. Then $\sigma_d^*$ is stable under the coadjoint action of $P_d$ on ${\mathfrak{g}}_d(\geq 2)^*$. Finally, let $\hat{\sigma}_d^* \subseteq {\mathfrak{g}}^*$ be the union of the orbits of the elements in $\sigma_d^*$ under the coadjoint action of $G$. Then $\xi\mapsto\xi$ is a map \[ \Psi_{{\mathfrak{g}}^*}\colon \coprod_{d\in\Delta} \hat{\sigma}_d^*\quad\rightarrow \quad {\mathcal{N}}_{{\mathfrak{g}}^*}.\] By \cite[Theorem~2.2]{L3d}, the map $\Psi_{{\mathfrak{g}}^*}$ is a bijection if the adjoint group of $G$ is a direct product of simple groups of types $A$, $C$ and $D$. By the main result of \cite{xue1}, this also holds if there is a direct factor of type $B$. In the remarks just following \cite[Theorem~2.2]{L3d}, Lusztig expresses the expectation that $\Psi_{{\mathfrak{g}}^*}$ is a bijection without any restriction on $G$. By Clarke--Premet \cite[Theorem~7.3]{clpr}, it is known that $\Psi_{{\mathfrak{g}}^*}$ is always surjective. \end{abs} \begin{abs} \label{abs23} In order to apply the above results to the situation in Section~\ref{sec1}, we need a mechanism by which we can pass back and forth between the vector spaces ${\mathfrak{g}}_d(i)$ and ${\mathfrak{g}}_d(i)^*$.
If there exists a $G$-equivariant vector space isomorphism ${\mathfrak{g}} \stackrel{\sim}{\rightarrow} {\mathfrak{g}}^*$, then there is a canonical way of doing this, as explained in \cite[2.3]{L3d}. However, such an isomorphism will not always exist. To remedy this situation, we follow Kawanaka \cite[\S 1.2]{kaw0}, \cite[\S 3.1]{kaw1} and fix an ${\mathbb{F}}_q$-opposition automorphism ${\mathfrak{g}}\rightarrow {\mathfrak{g}}$, $y\mapsto y^\dagger$. This is a linear isomorphism, defined over ${\mathbb{F}}_q$, such that ${\mathfrak{t}}^\dagger={\mathfrak{t}}$ and $e_\alpha^\dagger=\pm e_{-\alpha}$ for all $\alpha\in\Phi$. (See also \cite[Lemma~5.2]{tay}.) If $\xi\in {\mathfrak{g}}^*$, then we define $\xi^\dagger\in{\mathfrak{g}}^*$ by $\xi^\dagger(y):=\xi(y^\dagger)$ for $y\in{\mathfrak{g}}$. \end{abs} \begin{defn} \label{def24} Let $d\in\Delta$ be a weighted Dynkin diagram and consider the corresponding grading ${\mathfrak{g}}=\bigoplus_{i\in {\mathbb{Z}}}{\mathfrak{g}}_d(i)$. Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be a linear map. We regard $\lambda$ as an element of ${\mathfrak{g}}^*$ by setting $\lambda$ equal to zero on ${\mathfrak{g}}_d(i)$ for all $i\neq 2$. We say that $\lambda$ is in ``sufficiently general position'' if the following conditions hold. \begin{itemize} \item[(K1)] We require that $\lambda^\dagger \in{\mathfrak{g}}_d(2)^{*!}$, that is, $G_{\lambda^\dagger}\subseteq P_d$; see \ref{abs21}, \ref{abs23}. \item[(K2)] If ${\mathfrak{g}}_d(1)\neq \{0\}$, then we also require that the radical of the corresponding alternating form $\sigma_\lambda\colon {\mathfrak{g}}_d(1)\times {\mathfrak{g}}_d(1)\rightarrow k$ in \ref{abs17} is zero. \end{itemize} Note that (K1), (K2) only refer to the algebraic group $G$, but not to the Frobenius map~$F$. If (K1), (K2) hold and if $\lambda$ is defined over ${\mathbb{F}}_q$, then we can follow the procedure in \ref{abs17} and define the corresponding GGGR $\Gamma_{d,\lambda}$ of the finite group~$G^F$. 
\end{defn} \begin{rem} \label{rem24a} Kawanaka's work \cite{kaw0}, \cite{kaw1} fits into this setting as follows. Assume that $p$ is a good prime for $G$ and that there exists a non-degenerate, symmetric and $G$-invariant bilinear form $\kappa\colon {\mathfrak{g}}\times {\mathfrak{g}}\rightarrow k$. We also need to make a certain technical assumption on the isogeny type of the derived subgroup of~$G$. (For details see \cite[3.22]{tay}, \cite{prem}.) Let $d\in \Delta$. Then there is a dense open orbit under the adjoint action of $P_d$ on ${\mathfrak{g}}_d(2)$. Let $e$ be an element of this orbit and define a linear map $\lambda_e\colon {\mathfrak{g}}_d(2) \rightarrow k$ as follows. \[ \lambda_e(y):=\kappa(e^\dagger,y) \qquad \mbox{for $y\in {\mathfrak{g}}_d(2)$}.\] Then (K1), (K2) are satisfied for $\lambda=\lambda_e$; see \cite[\S 1.2]{kaw0}, \cite[\S 3.1]{kaw1}. Furthermore, if $d$ is invariant under the permutation of $\Phi$ induced by~$F$, then $e$ can be chosen such that (K1), (K2) hold and $\lambda_e$ is defined over~${\mathbb{F}}_q$. For example, all this holds for $G={\operatorname{SL}}_n(k)$ with no restriction on~$p$; see \cite[1.2]{kaw0}. \end{rem} \begin{rem} \label{rem24b} As already mentioned above, the map $\Psi_{{\mathfrak{g}}^*}$ in \ref{abs21} is always surjective. More precisely, given $d\in\Delta$, the subset ${\mathfrak{g}}_d(2)^{*!}\subseteq {\mathfrak{g}}_d(2)^*$ is non-empty. By \cite[Remark~1 (p.~665)]{clpr}, this subset actually contains a dense open subset of ${\mathfrak{g}}_d(2)^*$ (denoted by $X^{\triangle}({\mathfrak{g}}^*)$ in \cite[7.1]{clpr}) and so ${\mathfrak{g}}_d(2)^{*!}$ itself is a dense subset of ${\mathfrak{g}}_d(2)^*$. Thus, there always exists a dense set of linear maps $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ such that condition (K1) in Definition~\ref{def24} is satisfied. \end{rem} As illustrated by the examples at the end of Section~\ref{sec1}, the condition (K2) requires more attention. 
\begin{rem} \label{rem24c} Let $d\in \Delta$ and assume that ${\mathfrak{g}}_d(1)\neq \{0\}$. Let $\Phi_1=\{\beta_1,\ldots, \beta_n\}$ and $\Phi_2=\{\gamma_1, \ldots,\gamma_m\}$. Given a linear map $\lambda \colon {\mathfrak{g}}_d(2)\rightarrow k$, we denote by ${\mathcal{G}}_\lambda \in M_n(k)$ the Gram matrix of $\sigma_\lambda$ with respect to the basis $\{e_{\beta_1},\ldots, e_{\beta_n}\}$ of ${\mathfrak{g}}_d(1)$. The entries of ${\mathcal{G}}_\lambda$ are given as follows. We set $x_l:=\lambda(e_{\gamma_l})$ for $1\leq l \leq m$. For $1\leq i,j\leq n$, we define an element $\nu_{ij} \in k$ as follows. If $\beta_i+\beta_j\not\in\Phi$, then $\nu_{ij}:=0$. Otherwise, there is a unique $l(i,j)\in\{1,\ldots,m\}$ and some $\nu_{ij} \in k$ such that $[e_{\beta_i},e_{\beta_j}]=\nu_{ij}e_{\gamma_{l(i,j)}}$. Then we have \[ ({\mathcal{G}}_\lambda)_{ij}=\sigma_\lambda(e_{\beta_i},e_{\beta_j})= \left\{\begin{array}{cl} x_{l(i,j)} \nu_{ij} & \qquad \mbox{if $\beta_i+ \beta_j\in \Phi$},\\ 0&\qquad \mbox{otherwise}. \end{array}\right.\] In order to work this out explicitly, we may assume without loss of generality that $G$ is semisimple (since $U_\alpha\subseteq G_{\operatorname{der}}$ for all $\alpha\in \Phi$; see \ref{abs11}). But then, by \cite[Remark (p.~64)]{St}, the structure constants of the Lie algebra ${\mathfrak{g}}$ are obtained from those of a Chevalley basis of ${\mathfrak{g}}_0$ by reduction modulo~$p$. Thus, we can explicitly determine the elements $\nu_{ij}$, via a computation inside~${\mathfrak{g}}_0$. Using one of the two canonical Chevalley bases in \cite[\S 5]{myg} (the two bases only differ by a global sign), one can even avoid the issue of choosing certain signs. Hence, there is a purely combinatorial algorithm for computing ${\mathcal{G}}_\lambda$, and this can be easily implemented in {\sf GAP} \cite{gap4}.
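As an illustration, the combinatorial recipe is also easy to prototype outside of {\sf GAP}. The following Python/{\tt sympy} sketch (our own toy implementation, not the {\sf GAP} code referred to above) assembles ${\mathcal{G}}_\lambda$ from a small table of structure constants, using the data of the $G_2$ example in \ref{bg2}(b) as input; the signs in the table are fixed arbitrarily, which only changes ${\mathcal{G}}_\lambda$ up to sign.

```python
import sympy as sp

# Roots encoded as coefficient pairs (a, b), meaning a*alpha + b*beta (G_2 data).
Phi1 = [(0, 1), (1, 1), (2, 1), (3, 1)]   # roots with d = 1: basis of g_d(1)
Phi2 = [(3, 2)]                           # roots with d = 2: basis of g_d(2)

# Non-zero brackets [e_{beta_i}, e_{beta_j}] = nu * e_gamma; one orientation
# is stored, the other follows by antisymmetry.  Signs chosen arbitrarily.
bracket = {((0, 1), (3, 1)): (1, (3, 2)),
           ((1, 1), (2, 1)): (3, (3, 2))}

def structure_constant(b1, b2):
    """Return (nu, gamma) with [e_{b1}, e_{b2}] = nu * e_gamma, or None."""
    if (b1, b2) in bracket:
        return bracket[(b1, b2)]
    if (b2, b1) in bracket:
        nu, gamma = bracket[(b2, b1)]
        return -nu, gamma
    return None

xs = sp.symbols('x1:%d' % (len(Phi2) + 1))   # x_l = lambda(e_{gamma_l})
n = len(Phi1)
G = sp.zeros(n, n)
for i in range(n):
    for j in range(n):
        sc = structure_constant(Phi1[i], Phi1[j])
        if sc is not None:
            nu, gamma = sc
            G[i, j] = nu * xs[Phi2.index(gamma)]

print(G)                    # antidiag(x1, 3*x1, -3*x1, -x1), as in Example bg2(b)
print(sp.expand(G.det()))   # 9*x1**4: the radical of sigma_lambda is non-zero iff p = 3
```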
In particular, we have: \begin{itemize} \item[($*$)] Up to a global sign, the Gram matrix ${\mathcal{G}}_\lambda$ only depends on the root system $\Phi$ and the values $\lambda(e_\alpha)$ ($\alpha\in\Phi_2$). \end{itemize} The radical of $\sigma_\lambda$ is zero if and only if $\det({\mathcal{G}}_\lambda)\neq 0$. Now we notice that this determinant is given by evaluating a certain $m$-variable polynomial at $x_1,\ldots,x_m$. In particular, we see that condition (K2) is an ``open'' condition: either there is no $\lambda$ at all for which (K2) holds, or (K2) holds for a non-empty open set of linear maps $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$. Combining this with the discussion concerning (K1) in Remark~\ref{rem24b}, we immediately obtain the following conclusion. \end{rem} \begin{cor} \label{cor24} Let $d\in \Delta$ and assume that ${\mathfrak{g}}_d(1) \neq \{0\}$. Then either there is a non-empty open set of linear maps $\lambda \colon {\mathfrak{g}}_d(2) \rightarrow k$ in ``sufficiently general position'', or there is no such linear map at all. \end{cor} With these preparations, we now obtain our first example where bad primes exist but (K2) holds without any restriction on the field~$k$. \begin{exmp} \label{bd4} Let $G$ be of type $D_4$. Let $\Pi=\{\alpha_1, \alpha_2,\alpha_3,\alpha_4\}$ where $\alpha_1,\alpha_2,\alpha_4$ are all connected to $\alpha_3$. By \cite[p.~396--397]{C2}, there are $12$ weighted Dynkin diagrams $d\in \Delta$. There are two of them with ${\mathfrak{g}}_d(1)\neq\{0\}$. (a) Let $d(\alpha_1)=d(\alpha_2)=d(\alpha_4)=0$ and $d(\alpha_3)=1$. We have ${\mathbf{b}}_d=7$ and \begin{align*} {\mathfrak{g}}_d(1)&=\langle e_{\alpha_3}, e_{\alpha_1+\alpha_3}, e_{\alpha_2+\alpha_3}, e_{\alpha_3+\alpha_4}, e_{\alpha_1+\alpha_2+\alpha_3},\\&\qquad\quad e_{\alpha_1+\alpha_3+\alpha_4}, e_{\alpha_2+\alpha_3+\alpha_4}, e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4}\rangle_k,\\ {\mathfrak{g}}_d(2)&=\langle e_{\alpha_1+\alpha_2+2\alpha_3+\alpha_4}\rangle_k. 
\end{align*} Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. As explained in Remark~\ref{rem24c}, we can work out the Gram matrix ${\mathcal{G}}_\lambda$ of the alternating form $\sigma_\lambda$. It is given by \begin{center} ${\mathcal{G}}_\lambda=\pm\mbox{antidiag}(-x_1,x_1,x_1,x_1,-x_1,-x_1,-x_1,x_1)$ \end{center} where we set $x_1:=\lambda(e_{\alpha_1+\alpha_2+2\alpha_3+\alpha_4})$. If $x_1\neq 0$, then $\det({\mathcal{G}}_\lambda)\neq 0$ and so the radical of $\sigma_\lambda$ is zero. Hence, condition (K2) is satisfied for such choices of~$\lambda$, and this works for any field~$k$. (b) Let $d(\alpha_1)=d(\alpha_2)=d(\alpha_4)=1$ and $d(\alpha_3)=0$. We have ${\mathbf{b}}_d=4$ and \begin{align*} {\mathfrak{g}}_d(1)&= \langle e_{\alpha_1},e_{\alpha_2},e_{\alpha_4},e_{\alpha_1+ \alpha_3},e_{\alpha_2+\alpha_3},e_{\alpha_3+\alpha_4}\rangle_k,\\ {\mathfrak{g}}_d(2)&=\langle e_{\alpha_1+\alpha_2+\alpha_3},e_{\alpha_1+\alpha_3+ \alpha_4},e_{\alpha_2+\alpha_3+\alpha_4}\rangle_k. \end{align*} Again, let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. As above, we now obtain \begin{center} ${\mathcal{G}}_\lambda=\pm\left(\begin{array}{r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}}r} 0& 0& 0& 0& x_1& x_2 \\ 0& 0& 0& x_1& 0& x_3 \\ 0& 0& 0& x_2& x_3& 0 \\ 0& -x_1& -x_2& 0& 0& 0 \\ -x_1& 0& -x_3& 0& 0& 0 \\ -x_2& -x_3& 0& 0& 0& 0 \end{array}\right)$ \end{center} where $x_1:=\lambda(e_{\alpha_1+\alpha_2+\alpha_3})$, $x_2:=\lambda(e_{\alpha_1+\alpha_3+\alpha_4})$, $x_3:=\lambda(e_{\alpha_2+\alpha_3+\alpha_4})$. We compute $\det({\mathcal{G}}_\lambda)=4x_1^2x_2^2x_3^2$. Hence, if $p=2$, then the radical of $\sigma_\lambda$ will never be zero and so condition (K2) will never be satisfied. On the other hand, if $p\neq 2$, then the radical of $\sigma_\lambda$ will be zero whenever $x_1x_2x_3\neq 0$.
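The determinant in case (b) is small enough to confirm directly with a computer algebra system; here is a short Python/{\tt sympy} check (purely illustrative, with the global sign fixed arbitrarily):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
# Gram matrix of sigma_lambda from case (b), up to the global sign.
G = sp.Matrix([
    [  0,   0,   0,   0,  x1,  x2],
    [  0,   0,   0,  x1,   0,  x3],
    [  0,   0,   0,  x2,  x3,   0],
    [  0, -x1, -x2,   0,   0,   0],
    [-x1,   0, -x3,   0,   0,   0],
    [-x2, -x3,   0,   0,   0,   0]])
print(sp.factor(G.det()))  # 4*x1**2*x2**2*x3**2
```

Reducing $4x_1^2x_2^2x_3^2$ modulo~$p$ shows at a glance why $p=2$ is the only problematic prime here.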
\end{exmp} In the above examples, it was easy to compute the determinant of the Gram matrix ${\mathcal{G}}_\lambda$. However, we will encounter examples below where the computation of $\det({\mathcal{G}}_\lambda)$ becomes a very serious issue. \section{Unipotent support} \label{sec3} We are now looking for a unifying principle behind the various examples that we have seen so far. In Conjecture~\ref{main} below, we propose such a unification. First we need some preparations. \begin{abs} \label{abs31} Let ${\operatorname{Irr}}(G^F)$ be the set of complex irreducible representations of $G^F$ (up to isomorphism). Then there is a canonical map \[ {\operatorname{Irr}}(G^F)\rightarrow \{\text{$F$-stable unipotent classes of $G$}\}, \qquad \rho\mapsto C_\rho,\] defined in terms of the notion of ``unipotent support''. To explain this, we need to introduce some notation. Let $C$ be an $F$-stable unipotent conjugacy class of $G$. Then $C^F$ is a union of conjugacy classes of $G^F$. Let $u_1, \ldots,u_r\in C^F$ be representatives of the classes of $G^F$ contained in $C^F$. For $1\leq i \leq r$ we set $A(u_i):=C_G(u_i)/C_G^\circ (u_i)$. Since $F(u_i)=u_i$, the Frobenius map $F$ induces an automorphism of $A(u_i)$ which we denote by the same symbol. Let $A(u_i)^F$ be the group of fixed points under $F$. Then we set \[ {\operatorname{AV}}(\rho,C):=\sum_{1 \leq i \leq r} |A(u_i):A(u_i)^F|\mbox{trace}\bigl( \rho(u_i)\bigr)\] for any $\rho\in{\operatorname{Irr}}(G^F)$. Note that this does not depend on the choice of the representatives $u_i$. Now the desired map is obtained as follows. Let $\rho\in{\operatorname{Irr}}(G^F)$ and set $a_\rho:=\max\{ \dim C\mid {\operatorname{AV}}(\rho, C) \neq 0\}$ (where the maximum is taken over all $F$-stable unipotent classes $C$ of~$G$). By the main results of \cite{gema}, \cite{L2}, there is a unique $C$ such that $\dim C=a_\rho$ and ${\operatorname{AV}}(\rho,C)\neq 0$. 
This $C$ will be denoted by $C_\rho$ and called the {\it unipotent support} of~$\rho$. By \cite[Remark~3.9]{gema}, it is known that $C_\rho$ comes from characteristic~$0$ (see \ref{abs20}) and, hence, equals $C_{d_\rho}$ for a well-defined weighted Dynkin diagram $d_\rho\in\Delta$. We set \[ \Delta_{k,F}^{\!\bullet}:=\{d_\rho\in \Delta\mid \rho\in {\operatorname{Irr}}(G^F)\}.\] Thus, $\Delta_{k,F}^{\!\bullet}$ consists precisely of those weighted Dynkin diagrams for which the corresponding unipotent class of $G$ occurs as the unipotent support of some irreducible representation of $G^F$. In order to obtain a subset of $\Delta$ which only depends on $G$ and not on the choice of the particular Frobenius map $F$, we set \[ \Delta_k^{\!\bullet}:=\bigcup_{n\geq 1} \Delta_{k,F^n}^{\!\bullet}.\] (Note that, if $F_1\colon G\rightarrow G$ is another Frobenius map, then there always exist integers $n,n_1\geq 1$ such that $F_1^{n_1}=F^n$.) Thus, ${\mathfrak{C}}^\bullet=\{C_d \mid d\in \Delta_k^{\!\bullet}\}$ is precisely the set of unipotent classes mentioned at the end of Section~\ref{sec0}. 
\end{abs} \begin{table}[htbp] \caption{The sets $\Delta_k^{\!\bullet}\setminus \Delta_{\operatorname{spec}}$ for $G$ of exceptional type} \label{tabb} \begin{center} {\small $\begin{array}{ccc} \hline G_2 & {\mathbf{b}}_d & \text{condition} \\ \hline A_1 &3 & p\neq 3\\ \tilde{A}_1 &2 & p\neq 2\\\hline\\ \hline F_4 & {\mathbf{b}}_d & \text{condition} \\ \hline A_1 &16 & p\neq 2\\ A_2{+}\tilde{A}_1 &7 & p\neq 2\\ B_2 &6 & p\neq 2\\ \tilde{A}_2{+}A_1 & 6 & p\neq 3\\ C_3(a_1) & 5 & p\neq 2 \\\hline\\ \hline E_6 & {\mathbf{b}}_d & \text{condition} \\ \hline 3A_1 & 16 & p\neq 2\\2A_2{+}A_1 & 9 & p\neq 3\\A_3{+}A_1 & 8 & p\neq 2\\ A_5 & 4 & p\neq 2\\\hline\\ \hline E_7 & {\mathbf{b}}_d & \text{condition} \\ \hline (3A_1)' & 31 & p\neq 2\\ 4A_1 & 28 & p\neq 2\\ 2A_2{+}A_1 & 18 & p\neq 3\\ (A_3{+}A_1)' & 17 & p\neq 2\\ A_3{+}2A_1 & 16 & p\neq 2\\ D_4{+}A_1 &12& p\neq 2\\ A_5' &9 & p\neq 2\\ A_5{+}A_1 &9 & p\neq 3\\ D_6(a_2) &8 & p\neq 2\\ D_6 & 4& p\neq 2\\\hline \end{array}\qquad\quad \begin{array}{cccc} \hline E_8 & {\mathbf{b}}_d & \text{condition} \\ \hline 3A_1 & 64 & p\neq 2\\ 4A_1 & 56 & p\neq 2\\ A_2{+}3A_1 & 43 & p\neq 2\\ 2A_2{+}A_1 & 39 & p\neq 3\\ A_3{+}A_1 & 38 & p\neq 2\\ 2A_2{+}2A_1 & 36 & p\neq 3\\ A_3{+}2A_1 & 34 & p\neq 2\\ A_3{+}A_2{+}A_1 & 29 & p\neq 2\\ D_4{+}A_1 & 28 & p\neq 2\\ 2A_3 & 26 & p\neq 2\\ A_5 & 22 & p\neq 2\\ A_4{+}A_3 & 20 & p\neq 5\\ A_5{+}A_1 & 19 & p\neq 2,3\\ D_5(a_1){+}A_2 & 19 & p\neq 2\\ D_6(a_2) & 18 & p\neq 2\\ E_6(a_3){+}A_1 & 18 & p\neq 3\\ E_7(a_5) & 17 & p\neq 2\\ D_5{+}A_1 & 16 & p\neq 2\\ D_6 & 12 & p\neq 2\\ A_7 & 11 & p\neq 2\\ E_6{+}A_1 & 9 & p\neq 3\\ E_7(a_2) & 8 & p\neq 2\\ D_7 & 7 & p\neq 2\\ E_7 & 4 & p\neq 2\\ \hline \multicolumn{3}{l}{\text{(Notation from \cite[\S 13.1]{C2})}}\\\\\\ \end{array}$} \end{center} \end{table} \begin{rem} \label{uni1} Recall from Lusztig \cite{Ls1}, \cite{Ls2} the notion of {\it special unipotent classes}. 
The precise definition of these classes is not elementary; it involves the Springer correspondence and the notion of {\it special representations} of the Weyl group of~$G$. By \cite[Prop.~4.2]{gema}, an $F$-stable unipotent class is special if and only if it is the unipotent support of some unipotent representation of $G^F$. Thus, we conclude that special unipotent classes come from characteristic~$0$ and we have: \begin{itemize} \item[(a)] $\Delta_{\text{spec}}\subseteq \Delta_k^{\!\bullet}$, where $\Delta_{\text{spec}}$ denotes the set of all $d\in \Delta$ such that the corresponding unipotent class $C_d$ of $G$ is special. \end{itemize} Explicit descriptions of the sets $\Delta_{\text{spec}}$ are contained in the tables in \cite[\S 13.4]{C2}. Using the above concepts, one can give an alternative description of the map $\rho\mapsto C_\rho$; see \cite[13.4]{L1}, \cite[10.9]{L2}, \cite[\S 1]{L4}, \cite[\S 3.C]{gema}. This alternative description allows one to compute $\Delta_k^{\!\bullet}$ explicitly, without knowing any character values of $G^F$. In particular, this yields the following statement: \begin{itemize} \item[(b)] If $p$ is a good prime for $G$, then $\Delta_k^{\!\bullet}= \Delta$. \end{itemize} This was first stated (in terms of the alternative description of $\rho \mapsto C_\rho$ and for $p$ large) as a conjecture in \cite[\S 9]{Ls1}; see also \cite[13.4]{L1}. A full proof eventually appeared in \cite[Theorem~1.5]{L4}. \end{rem} \begin{prop} \label{uni2} Assume that $G$ is simple and $p$ is a bad prime for $G$. If $G$ is of classical type $B_n,C_n$ or $D_n$, then $\Delta_k^{\!\bullet}= \Delta_{\operatorname{spec}}$. If $G$ is of exceptional type $G_2$, $F_4$, $E_6$, $E_7$ or $E_8$, then the sets $\Delta_k^{\!\bullet}\setminus \Delta_{\operatorname{spec}}$ are specified in Table~\ref{tabb}. 
\end{prop} (In Table~\ref{tabb}, we list all $d\in \Delta\setminus \Delta_{\text{spec}}$; for each such $d$, the last column gives the condition on $p$ such that $d\in \Delta_k^{\!\bullet}$.) \begin{proof} This follows by methods analogous to those in \cite{L4}. The special feature of the case where $G$ is of classical type and $p=2$ is the fact that then the centraliser of a semisimple element is a Levi subgroup of some parabolic subgroup (see, e.g., \cite[\S 4]{C1}). If $G$ is of exceptional type, then one uses explicit computations completely analogous to those in \cite[\S 7]{L4}; here, one also needs L\"ubeck's tables \cite{Lue07} concerning possible centralisers of semisimple elements in these cases. We omit further details. \end{proof} \begin{conj} \label{main} Let $d\in\Delta_k^{\!\bullet}$ be invariant under the permutation $\tau\colon \Phi\rightarrow \Phi$ induced by~$F$. Then there exist linear maps $\lambda\colon{\mathfrak{g}}_d(2)\rightarrow k$ which are defined over ${\mathbb{F}}_q$ and are in ``sufficiently general position'' (see Definition~\ref{def24}). Hence, following the general procedure described in \ref{abs17}, we can define the corresponding $\operatorname{GGGR}$s $\Gamma_{d,\lambda}$ of~$G^F$. \end{conj} By Remarks~\ref{rem24a}, \ref{rem24c}($*$) and \ref{uni1}(b), the conjecture holds for all $d\in \Delta_k^{\!\bullet}=\Delta$ if $p$ is a good prime for~$G$. In particular, it holds when $G$ is of type $A_n$. We will now discuss a number of examples supporting the conjecture in cases where~$p$ is a bad prime. As already explained in the previous section, the main issue is the validity of condition (K2) in Definition~\ref{def24}. \begin{exmp} \label{bc2bis} (a) Let $G=\mbox{Sp}_4(k)$ and assume that $k$ has characteristic~$2$.
By Proposition~\ref{uni2}, or by inspection of the known character table of $G^F$ (see \cite{Eno1}), we note that the unipotent class corresponding to the weighted Dynkin diagram $d_0$ in Example~\ref{bsp4} is not the unipotent support of any irreducible character of~$G^F$. Thus, $d_0 \not\in \Delta_k^{\!\bullet}$ and so this critical case does not enter in the range of validity of Conjecture~\ref{main}. In fact, $\Delta_k^{\!\bullet}=\Delta\setminus \{d_0\}$ if $p=2$. (b) The situation is similar for $G$ of type $G_2$. Consider the two weighted Dynkin diagrams with ${\mathfrak{g}}_d(1)\neq \{0\}$ in Example~\ref{bg2}. By Table~\ref{tabb}, or by inspection of the known character table of $G^F$ (see \cite{EnYa} for $p=2$ and \cite{Eno2} for $p=3$), we see that $\Delta_k^{\!\bullet}=\Delta \setminus\{d\}$ where $d$ is as in Example~\ref{bg2}(a) if $p=2$, and $d$ is as in Example~\ref{bg2}(b) if $p=3$. \end{exmp} \begin{exmp} \label{bd4bis} Let again $G$ be of type $D_4$ and return to the discussion in Example~\ref{bd4}. The special unipotent classes are explicitly described in \cite[p.~439]{C2}; there is only one class which is not special (it corresponds to elements with Jordan blocks of sizes $3,2,2,1$), and this is precisely the one considered in Example~\ref{bd4}(b). But, by Proposition~\ref{uni2}, the corresponding weighted Dynkin diagram does not belong to $\Delta_k^{\!\bullet}$ if $p=2$. 
\end{exmp} \begin{exmp} \label{be8} Let $G$ be of type $E_8$ with diagram \begin{center} \begin{picture}(150,40) \put( 1,32){$\alpha_1$} \put( 5,25){\circle*{5}} \put( 7,25){\line(1,0){20}} \put( 25,32){$\alpha_3$} \put( 29,25){\circle*{5}} \put( 31,25){\line(1,0){20}} \put( 49,32){$\alpha_4$} \put( 53,25){\circle*{5}} \put( 55,25){\line(1,0){20}} \put( 73,32){$\alpha_5$} \put( 77,25){\circle*{5}} \put( 79,25){\line(1,0){20}} \put( 97,32){$\alpha_6$} \put(101,25){\circle*{5}} \put(103,25){\line(1,0){20}} \put(121,32){$\alpha_7$} \put(124,25){\circle*{5}} \put(127,25){\line(1,0){20}} \put(145,32){$\alpha_8$} \put(149,25){\circle*{5}} \put(53,23){\line(0,-1){20}} \put(53,3){\circle*{5}} \put(60,2){$\alpha_2$} \end{picture} \end{center} Let $d_0\in\Delta$ correspond to the class denoted $A_4{+}A_3$ in Table~\ref{tabb}. We have $d_0(\alpha_4)=d_0(\alpha_7)=1$ and $d_0(\alpha_i)=0$ for $i\neq 4,7$; see \cite[p.~406]{C2}. By Table~\ref{tabb}, we have $\Delta_k^{\!\bullet}=\Delta\setminus\{d_0\}$ if $p=5$; furthermore, $d_0\in\Delta_k^{\!\bullet}$ if $p\neq 5$. Let $\lambda\colon {\mathfrak{g}}_{d_0}(2) \rightarrow k$ be a linear map and consider the Gram matrix ${\mathcal{G}}_\lambda$ of the alternating form $\sigma_\lambda$. We claim: \begin{itemize} \item[(a)] If $p\neq 5$, then $\det({\mathcal{G}}_\lambda)\neq 0$ for some $\lambda \colon {\mathfrak{g}}_{d_0}(2)\rightarrow k$. \item[(b)] If $p=5$, then $\det({\mathcal{G}}_\lambda)=0$ for all $\lambda \colon {\mathfrak{g}}_{d_0}(2)\rightarrow k$. \end{itemize} First, we find that $\dim {\mathfrak{g}}_{d_0}(1)=24$ and $\dim {\mathfrak{g}}_{d_0}(2)= 21$. As explained in Remark~\ref{rem24c}, we then explicitly work out ${\mathcal{G}}_\lambda$. We have \[{\mathcal{G}}_\lambda=\bigl(f_{ij}(x_1,\ldots,x_{21})\bigr)_{1\leq i,j \leq 24}\] where $f_{ij}$ are certain polynomials with integer coefficients in $21$ indeterminates. In order to verify (a), we argue as follows. If $p>5$, then (a) holds by Remark~\ref{rem24a}. 
If $p=2,3$, then we simply run through all vectors of values $(x_1,\ldots,x_{21})\in \{0,1\}^{21}$ (starting with the vector $1,1,\ldots,1$ and then increasing step by step the number of zeroes) until we find one such that $\det({\mathcal{G}}_\lambda) \neq 0$. It turns out that this search is successful after just a few steps. The verification of (b) is much harder. Let $p=5$ and denote by $\bar{f}_{ij}$ the reduction of $f_{ij}$ modulo~$p$. Then we need to check that $\det(\bar{f}_{ij})=0$. It seems to be practically impossible to compute such a determinant directly, or even just the rank. (Using special values of the $x_i$ as above, one quickly sees that the rank of $(\bar{f}_{ij})$ is at least~$22$.) Now, since ${\mathcal{G}}_\lambda$ is anti-symmetric, we can use the fact that the desired determinant is given by $\mbox{Pf}(\bar{f}_{ij})^2$, where $\mbox{Pf}(\bar{f}_{ij})$ denotes the Pfaffian of the matrix $(\bar{f}_{ij})$; see, for example, \cite{dress}, \cite{knu}. (I am indebted to Ulrich Thiel for pointing this out to me.) A simple recursive algorithm (via row expansion, as in \cite[1.5]{dress}) is sufficient to compute $\mbox{Pf}(\bar{f}_{ij})=0$ in this case and yields~(b). (Over ${\mathbb{Z}}$, the Pfaffian of $(f_{ij})$ is a non-zero polynomial which is a linear combination of $1386$ monomials in $21$ indeterminates, where all coefficients are divisible by~$5$.) The class $A_4{+}A_3$ in type $E_8$ also plays a special role in \cite[\S 4.2]{prem2}. (I thank Alexander Premet for pointing this out to me.) \end{exmp} \begin{exmp} \label{be8b} Let again $G$ be of type $E_8$, with diagram as above. Let $d_1\in\Delta$ correspond to the class denoted $A_5{+}A_1$ in Table~\ref{tabb}. We have $d_1(\alpha_1)=d_1(\alpha_4)= d_1(\alpha_8)=1$ and $d_1(\alpha_i)=0$ for $i\neq 1,4,8$; see \cite[p.~406]{C2}. By Table~\ref{tabb}, we have $d_1\in \Delta_k^{\!\bullet}$ if $p\neq 2,3$, and $d_1\not\in\Delta_k^{\!\bullet}$ otherwise.
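A minimal version of the Pfaffian row-expansion recursion used in Example~\ref{be8} can be written in a few lines. The following Python sketch is our own illustration; for the $24\times 24$ matrix of polynomials one would, of course, use exact polynomial arithmetic and a sparse representation.

```python
def pfaffian(A):
    """Pfaffian of an antisymmetric matrix (list of lists), computed by
    expansion along the first row; convention Pf([[0, a], [-a, 0]]) = a."""
    n = len(A)
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0  # antisymmetric matrices of odd size have Pfaffian 0
    result = 0
    for j in range(1, n):
        if A[0][j] == 0:
            continue  # skipping zero entries keeps sparse examples fast
        rest = [k for k in range(n) if k not in (0, j)]
        minor = [[A[r][c] for c in rest] for r in rest]
        result += (-1) ** (j - 1) * A[0][j] * pfaffian(minor)
    return result

# Sanity check of Pf(A)^2 = det(A): for the 6x6 Gram matrix of Example
# bd4(b) evaluated at x1 = x2 = x3 = 1, the Pfaffian squares to 4.
```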
Let $\lambda\colon {\mathfrak{g}}_{d_1}(2) \rightarrow k$ be a linear map and consider the Gram matrix ${\mathcal{G}}_\lambda$ of the alternating form $\sigma_\lambda$. Here, we find that $\dim {\mathfrak{g}}_{d_1}(1)=22$ and $\dim {\mathfrak{g}}_{d_1}(2)= 18$. Using computations as in the previous example, we obtain: \begin{itemize} \item[(a)] If $p\neq 2,3$, then $\det({\mathcal{G}}_\lambda)\neq 0$ for some $\lambda \colon {\mathfrak{g}}_{d_1}(2)\rightarrow k$. \item[(b)] If $p\in\{2,3\}$, then $\det({\mathcal{G}}_\lambda)=0$ for all $\lambda \colon {\mathfrak{g}}_{d_1}(2)\rightarrow k$. \end{itemize} (In (b), there are $\lambda$ such that ${\mathcal{G}}_\lambda$ has rank $20$.) \end{exmp} The examples suggest the following characterisation of the set $\Delta_k^{\!\bullet}$. \begin{conj} \label{main1a} Let $d \in \Delta$. Then $d\in \Delta_k^{\!\bullet}$ if and only if either ${\mathfrak{g}}_d(1)=\{0\}$, or there exists a linear map $\lambda \colon {\mathfrak{g}}_d(2)\rightarrow k$ such that the radical of $\sigma_\lambda$ is zero. \end{conj} Finally, we state the following conjecture concerning special unipotent classes. As in \ref{abs14}, let $G_0$ be a connected reductive algebraic group over ${\mathbb{C}}$ of the same type as $G$; let ${\mathfrak{g}}_0$ be its Lie algebra. For $d\in \Delta$ and $i=1,2$, we set \[{\mathfrak{g}}_{{\mathbb{Z}},d}(i):=\langle e_\alpha \mid d(\alpha)=i\rangle_{\mathbb{Z}}\subseteq {\mathfrak{g}}_0.\] As in \ref{abs17}, given a homomorphism $\lambda \colon {\mathfrak{g}}_{{\mathbb{Z}},d}(2) \rightarrow {\mathbb{Z}}$, we obtain an alternating form $\sigma_\lambda \colon {\mathfrak{g}}_{{\mathbb{Z}},d}(1) \times {\mathfrak{g}}_{{\mathbb{Z}},d}(1)\rightarrow {\mathbb{Z}}$ and we may consider its Gram matrix with respect to the ${\mathbb{Z}}$-basis $\{e_\alpha\mid \alpha\in \Phi_1\}$ of ${\mathfrak{g}}_{{\mathbb{Z}},d}(1)$. 
If this Gram matrix has determinant $\pm 1$, then we say that $\sigma_\lambda$ is non-degenerate over~${\mathbb{Z}}$. \begin{conj} \label{main2} With the above notation, let $d\in \Delta$. Then we have $d\in \Delta_{\operatorname{spec}}$ if and only if either ${\mathfrak{g}}_{{\mathbb{Z}},d}(1)=\{0\}$, or there exists a homomorphism $\lambda \colon {\mathfrak{g}}_{{\mathbb{Z}},d}(2)\rightarrow {\mathbb{Z}}$ such that $\sigma_\lambda$ is non-degenerate over~${\mathbb{Z}}$. \end{conj} Note that, if ${\mathfrak{g}}_d(1)=\{0\}$, then we certainly have $d\in \Delta_{\text{spec}}$. (This easily follows from \cite[Prop.~1.9(b)]{LuSp}.) Hence, in order to verify the above conjectures for a given example, it is sufficient to consider the cases where ${\mathfrak{g}}_d(1)\neq \{0\}$. This will be further discussed in the following section. \section{A worked example: type $F_4$} \label{sec4} In this section, we work out in detail the example where $G$ is of type $F_4$. We believe that the results of our computations are strong evidence for the truth of Conjectures~\ref{main} and~\ref{main2}; the discussion of the various cases will also provide a good illustration of the computational issues involved. Let $\Pi=\{\alpha_1,\alpha_2,\alpha_3, \alpha_4\}$ be a set of simple roots such that the Dynkin diagram of $G$ looks as follows: \begin{center} \begin{picture}(100,15) \put( 1,10){$\alpha_1$} \put( 5,3){\circle*{5}} \put( 7,3){\line(1,0){20}} \put( 29,3){\circle*{5}} \put( 25,10){$\alpha_2$} \put( 31,5){\line(1,0){20}} \put( 31,2){\line(1,0){20}} \put( 37,1){\mbox{$>$}} \put( 49,10){$\alpha_3$} \put( 53,3){\circle*{5}} \put( 55,3){\line(1,0){20}} \put( 73,10){$\alpha_4$} \put( 77,3){\circle*{5}} \end{picture} \end{center} By \cite[p.~401]{C2}, there are $16$~weighted Dynkin diagrams in $\Delta$; together with some additional information, these are printed in Table~\ref{tab1}. (The entries in the last column are determined by Remark~\ref{uni1}(a) and Table~\ref{tabb}.) 
\begin{table}[htbp] \caption{Unipotent classes in type $F_4$} \label{tab1} \begin{center} {\small $\begin{array}{ccccc} \hline \text{Name} & \text{$d\in \Delta$} & {\mathbf{b}}_d & \mbox{special?} &\mbox{condition $d\in \Delta_k^{\!\bullet}$?}\\ \hline 1 & \begin{picture}(80,5) \put( 0,0){0} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){0} \end{picture} & 24 & \text{yes} &-\\ A_1 & \begin{picture}(80,5) \put( 0,0){1} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){0} \end{picture} & 16 & \text{no} & p\neq 2\\ \tilde{A}_1 & \begin{picture}(80,5) \put( 0,0){0} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){1} \end{picture} & 13 & \text{yes} &-\\ A_1{+}\tilde{A}_1 & \begin{picture}(80,5) \put( 0,0){0} \put( 8,3){\line(1,0){10}} \put( 20,0){1} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){0} \end{picture} & 10 & \text{yes} &-\\ A_2 & \begin{picture}(80,5) \put( 0,0){2} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){0} \end{picture} & 9 & \text{yes} &-\\ \tilde{A}_2 & \begin{picture}(80,5) \put( 0,0){0} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){2} \end{picture} & 9 & \text{yes} &-\\ A_2{+}\tilde{A}_1 & \begin{picture}(80,5) \put( 0,0){0} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 
31,1){\mbox{$>$}} \put( 45,0){1} \put( 53,3){\line(1,0){10}} \put( 65,0){0} \end{picture} & 7 & \text{no} & p\neq 2\\ B_2 & \begin{picture}(80,5) \put( 0,0){2} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){1} \end{picture} & 6 & \text{no} & p\neq 2\\ \tilde{A}_2{+}A_1 & \begin{picture}(80,5) \put( 0,0){0} \put( 8,3){\line(1,0){10}} \put( 20,0){1} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){1} \end{picture} & 6 & \text{no} &p\neq 3\\ C_3(a_1) & \begin{picture}(80,5) \put( 0,0){1} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){1} \put( 53,3){\line(1,0){10}} \put( 65,0){0} \end{picture} & 5 & \text{no} &p\neq 2\\ F_4(a_3) & \begin{picture}(80,5) \put( 0,0){0} \put( 8,3){\line(1,0){10}} \put( 20,0){2} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){0} \end{picture} & 4 & \text{yes} &-\\ B_3 & \begin{picture}(80,5) \put( 0,0){2} \put( 8,3){\line(1,0){10}} \put( 20,0){2} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){0} \end{picture} & 3 & \text{yes} &-\\ C_3 & \begin{picture}(80,5) \put( 0,0){1} \put( 8,3){\line(1,0){10}} \put( 20,0){0} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){1} \put( 53,3){\line(1,0){10}} \put( 65,0){2} \end{picture} & 3 & \text{yes} &-\\ F_4(a_2) & \begin{picture}(80,5) \put( 0,0){0} \put( 8,3){\line(1,0){10}} \put( 20,0){2} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){2} \end{picture} & 2 & \text{yes} &-\\ F_4(a_1) & 
\begin{picture}(80,5) \put( 0,0){2} \put( 8,3){\line(1,0){10}} \put( 20,0){2} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){0} \put( 53,3){\line(1,0){10}} \put( 65,0){2} \end{picture} & 1 & \text{yes} &-\\ F_4 & \begin{picture}(80,5) \put( 0,0){2} \put( 8,3){\line(1,0){10}} \put( 20,0){2} \put( 28,5){\line(1,0){15}} \put( 28,2){\line(1,0){15}} \put( 31,1){\mbox{$>$}} \put( 45,0){2} \put( 53,3){\line(1,0){10}} \put( 65,0){2} \end{picture} & 0 & \text{yes} &-\\ \hline \multicolumn{5}{l}{\text{(Notation from \cite[p.~401]{C2})}} \end{array}$} \end{center} \end{table} There are eight weighted Dynkin diagrams which satisfy ${\mathfrak{g}}_d(1)\neq \{0\}$. We now consider these eight cases in detail, where we just focus on the validity of condition (K2) in Definition~\ref{def24}. (In particular, the Frobenius map $F\colon G\rightarrow G$ will not play a role in this section.) \begin{abs} \label{bf4a} Let $d(\alpha_1)=1$, $d(\alpha_2)=d(\alpha_3)=d(\alpha_4)=0$. We have ${\mathbf{b}}_d=16$ and \begin{align*} {\mathfrak{g}}_d(1) &=\langle e_{1000}, e_{1100}, e_{1110}, e_{1120}, e_{1111}, e_{1220}, e_{1121}, \\&\qquad\quad e_{1221}, e_{1122}, e_{1231}, e_{1222}, e_{1232}, e_{1242}, e_{1342}\rangle_k,\\ {\mathfrak{g}}_d(2) &=\langle e_{2342}\rangle_k, \end{align*} where, for example, $1342$ stands for the root $\alpha_1+3\alpha_2+ 4\alpha_3+2\alpha_4$. Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. As in Example~\ref{bd4}, we work out the corresponding Gram matrix ${\mathcal{G}}_\lambda$, where we set $x_1:=\lambda(e_{2342})$. It is given by \begin{center} ${\mathcal{G}}_\lambda=\pm x_1\cdot\mbox{antidiag}(1,-1,2,-1,-2,1,2,-2, -1,2,1,-2, 1,-1)$. \end{center} We have $\det({\mathcal{G}}_\lambda)=64x_1^{14}$. Hence, if $p=2$, then the radical of $\sigma_\lambda$ is not zero. If $p\neq 2$, then the radical is zero whenever $x_1\neq 0$. 
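As an independent check of this determinant, the antidiagonal matrix can be evaluated exactly. The following short Python sketch (ours; it is not part of the verification code used for the paper) builds the matrix for $x_1=1$ and computes its determinant over the rationals:

```python
from fractions import Fraction

def det_exact(M):
    """Exact determinant of an integer matrix, by Gaussian
    elimination over the rationals (no floating-point error)."""
    A = [[Fraction(v) for v in row] for row in M]
    n, det = len(A), Fraction(1)
    for i in range(n):
        # find a non-zero pivot in column i
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return 0
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            det = -det          # a row swap flips the sign
        det *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return int(det)

# antidiagonal entries of the Gram matrix above, with x_1 = 1
a = [1, -1, 2, -1, -2, 1, 2, -2, -1, 2, 1, -2, 1, -1]
G = [[0] * 14 for _ in range(14)]
for i, v in enumerate(a):
    G[i][13 - i] = v

# sanity check: the matrix is anti-symmetric, as it must be
assert all(G[i][j] == -G[j][i] for i in range(14) for j in range(14))
print(det_exact(G))  # 64, in agreement with det = 64 x_1^14
```

Reducing $64$ modulo~$2$ gives $0$, matching the statement that the radical is non-zero for $p=2$.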
\end{abs} \begin{abs} \label{bf4b} Let $d(\alpha_1)=d(\alpha_2)=d(\alpha_3)=0$, $d(\alpha_4)=1$. We have ${\mathbf{b}}_d=13$ and \begin{align*} {\mathfrak{g}}_d(1) &=\langle e_{0001},e_{0011},e_{0111}, e_{1111},e_{0121}, e_{1121},e_{1221},e_{1231}\rangle_k,\\ {\mathfrak{g}}_d(2)&=\langle e_{0122},e_{1122},e_{1222},e_{1232},e_{1242},e_{1342}, e_{2342}\rangle_k, \end{align*} where we use the same notational conventions as above. Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. Let $x_1,\ldots,x_7$ be the values of $\lambda$ on the $7$ basis vectors of ${\mathfrak{g}}_d(2)$ (ordered as above). Then we obtain \begin{center} {\footnotesize ${\mathcal{G}}_\lambda=\pm\left(\begin{array}{r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r} 0& 0& 0& 0& -2x_1& -2x_2& -2x_3& -x_4 \\ 0& 0& 2x_1& 2x_2& 0& 0& -x_4& -2x_5 \\ 0& -2x_1& 0& 2x_3& 0& x_4& 0& -2x_6 \\ 0& -2x_2& -2x_3& 0& -x_4& 0& 0& -2x_7 \\ 2x_1& 0& 0& x_4& 0& 2x_5& 2x_6& 0 \\ 2x_2& 0& -x_4& 0& -2x_5& 0& 2x_7& 0\\ 2x_3& x_4& 0& 0& -2x_6& -2x_7& 0& 0 \\ x_4& 2x_5& 2x_6& 2x_7& 0& 0& 0& 0 \end{array}\right).$} \end{center} In principle, we could work out $\det({\mathcal{G}}_\lambda)$ and then try to find out for which values of $x_1,\ldots,x_7$ it is non-zero. However, this determinant is already quite complicated; it is a linear combination of $34$ monomials in $x_1,\ldots,x_7$. But we can just notice that, if we set $x_4:=1$ and $x_i:=0$ for all $i\neq 4$, then $\det({\mathcal{G}}_\lambda)=1$. So the radical of $\sigma_\lambda$ will be zero for this choice of $\lambda$, and this works for any field~$k$. \end{abs} \begin{abs} \label{bf4c} Let $d(\alpha_1)=d(\alpha_3)=d(\alpha_4)=0$, $d(\alpha_2)=1$.
We have ${\mathbf{b}}_d=10$ and \begin{align*} {\mathfrak{g}}_d(1) &=\langle e_{0100},e_{1100},e_{0110},e_{1110},e_{0120},e_{0111}, \\&\qquad\quad e_{1120},e_{1111},e_{0121},e_{1121},e_{0122}, e_{1122}\rangle_k,\\ {\mathfrak{g}}_d(2)&=\langle e_{1220},e_{1221},e_{1231}, e_{1222},e_{1232},e_{1242} \rangle_k. \end{align*} Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map, and denote by $x_1,\ldots,x_6$ the values of $\lambda$ on the $6$ basis vectors of ${\mathfrak{g}}_d(2)$ (ordered as above). Then we obtain \begin{center} {\footnotesize ${\mathcal{G}}_\lambda=\pm\left(\begin{array}{r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r} 0& 0& 0& 0& 0& 0& -x_1& 0& 0& -x_2& 0& -x_4 \\ 0& 0& 0& 0& x_1& 0& 0& 0& x_2& 0& x_4& 0 \\ 0& 0& 0& 2x_1& 0& 0& 0& x_2& 0& -x_3& 0& -x_5 \\ 0& 0& -2x_1& 0& 0& -x_2& 0& 0& x_3& 0& x_5& 0 \\ 0& -x_1& 0& 0& 0& 0& 0& x_3& 0& 0& 0& -x_6 \\ 0& 0& 0& x_2& 0& 0& x_3& 2x_4& 0& x_5& 0& 0 \\ x_1& 0& 0& 0& 0& -x_3& 0& 0& 0& 0& x_6& 0 \\ 0& 0& -x_2& 0& -x_3& -2x_4& 0& 0& -x_5& 0& 0& 0 \\ 0& -x_2& 0& -x_3& 0& 0& 0& x_5& 0& 2x_6& 0& 0 \\ x_2& 0& x_3& 0& 0& -x_5& 0& 0& -2x_6& 0& 0& 0 \\ 0& -x_4& 0& -x_5& 0& 0& -x_6& 0& 0& 0& 0& 0 \\ x_4& 0& x_5& 0& x_6& 0& 0& 0& 0& 0& 0& 0 \end{array}\right).$} \end{center} Here we notice that, if we set $x_3:=1$, $x_4:=1$ and $x_i:=0$ for $i\neq 3,4$, then $\det({\mathcal{G}}_\lambda)=1$. Hence, the radical of $\sigma_\lambda$ is zero for this choice of $\lambda$, and this works for any field~$k$. \end{abs} \begin{abs} \label{bf4d} Let $d(\alpha_1)=d(\alpha_2)=d(\alpha_4)=0$, $d(\alpha_3)=1$. We have ${\mathbf{b}}_d=7$ and \begin{align*} {\mathfrak{g}}_d(1) &=\langle e_{0010}, e_{0110}, e_{0011}, e_{1110}, e_{0111}, e_{1111}\rangle_k,\\ {\mathfrak{g}}_d(2) &=\langle e_{0120}, e_{1120}, e_{0121}, e_{1220}, e_{1121}, e_{0122}, e_{1221}, e_{1122}, e_{1222}\rangle_k.
\end{align*} Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. Let $x_1, \ldots,x_9$ be the values of $\lambda$ on the $9$ basis vectors of ${\mathfrak{g}}_d(2)$ (ordered as above). Then we obtain \begin{center} {\footnotesize ${\mathcal{G}}_\lambda=\pm\left(\begin{array}{r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r@{\hspace{5pt}}r} 0& 2x_1& 0& 2x_2& x_3& x_5 \\ -2x_1& 0& -x_3& 2x_4& 0& x_7 \\ 0& x_3& 0& x_5& 2x_6& 2x_8 \\ -2x_2& -2x_4& -x_5& 0& -x_7& 0 \\ -x_3& 0& -2x_6& x_7& 0& 2x_9 \\ -x_5& -x_7& -2x_8& 0& -2x_9& 0 \end{array}\right).$} \end{center} We have $\det({\mathcal{G}}_\lambda)= 16(x_1x_5x_9-x_1x_7x_8-x_2x_3x_9+x_2x_6x_7+x_3x_4x_8-x_4x_5x_6)^2$. So, if $p=2$, then $\det({\mathcal{G}}_\lambda)=0$ (for any $\lambda$). If $p\neq 2$, then we notice that $\det({\mathcal{G}}_\lambda)=16$ for $x_4:=1$, $x_5:=1$, $x_6:=1$ and $x_i:=0$ for $i\neq 4,5,6$. Hence, the radical of $\sigma_\lambda$ is zero for this choice of~$\lambda$. \end{abs} \begin{abs} \label{bf4e} Let $d(\alpha_1)=2$, $d(\alpha_2)=d(\alpha_3)=0$, $d(\alpha_4)=1$. We have ${\mathbf{b}}_d=6$ and \begin{align*} {\mathfrak{g}}_d(1) &=\langle e_{0001}, e_{0011}, e_{0111}, e_{0121}\rangle_k,\\ {\mathfrak{g}}_d(2) &=\langle e_{1000}, e_{1100}, e_{1110}, e_{1120}, e_{1220}, e_{0122}\rangle_k. \end{align*} Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. Let $x_1, \ldots,x_6$ be the values of $\lambda$ on the $6$ basis vectors of ${\mathfrak{g}}_d(2)$ (ordered as above). Then we obtain \begin{center} ${\mathcal{G}}_\lambda=\pm 2x_6\cdot \mbox{antidiag}(-1,1,-1,1)$. \end{center} We have $\det({\mathcal{G}}_\lambda)=16x_6^4$. Hence, if $p=2$, then the radical of $\sigma_\lambda$ is not zero. If $p\neq 2$, then the radical is zero whenever $x_6\neq 0$. \end{abs} \begin{abs} \label{bf4f} Let $d(\alpha_1)=0$, $d(\alpha_2)=1$, $d(\alpha_3)=0$, $d(\alpha_4)=1$. 
We have ${\mathbf{b}}_d=6$ and \begin{align*} {\mathfrak{g}}_d(1) &=\langle e_{0100}, e_{0001}, e_{1100}, e_{0110}, e_{0011}, e_{1110}, e_{0120}, e_{1120}\rangle_k,\\ {\mathfrak{g}}_d(2) &=\langle e_{0111}, e_{1111}, e_{0121}, e_{1220}, e_{1121} \rangle_k. \end{align*} Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. Let $x_1, \ldots,x_5$ be the values of $\lambda$ on the $5$ basis vectors of ${\mathfrak{g}}_d(2)$ (ordered as above). Then we obtain \begin{center} {\footnotesize ${\mathcal{G}}_\lambda=\pm \left(\begin{array}{r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r} 0& 0& 0& 0& -x_1& 0& 0& -x_4 \\ 0& 0& 0& -x_1& 0& -x_2& -x_3& -x_5 \\ 0& 0& 0& 0& -x_2& 0& x_4& 0 \\ 0& x_1& 0& 0& -x_3& 2x_4& 0& 0 \\ x_1& 0& x_2& x_3& 0& x_5& 0& 0 \\ 0& x_2& 0& -2x_4& -x_5& 0& 0& 0 \\ 0& x_3& -x_4& 0& 0& 0& 0& 0 \\ x_4& x_5& 0& 0& 0& 0& 0& 0 \end{array}\right).$} \end{center} We have $\det({\mathcal{G}}_\lambda)=9(x_1x_4^2x_5-x_2x_3x_4^2)^2$. Hence, if $p=3$, then the radical of $\sigma_\lambda$ is not zero. Now assume that $p\neq 3$. Then we notice that $\det({\mathcal{G}}_\lambda)=9$ for $x_1:=1$, $x_4:=1$, $x_5:=1$ and $x_i:=0$ for $i=2,3$. So the radical is zero for this choice of~$\lambda$. \end{abs} \begin{abs} \label{bf4g} Let $d(\alpha_1)=1$, $d(\alpha_2)=0$, $d(\alpha_3)=1$, $d(\alpha_4)=0$. We have ${\mathbf{b}}_d=5$ and \begin{align*} {\mathfrak{g}}_d(1) &=\langle e_{1000}, e_{0010}, e_{1100}, e_{0110}, e_{0011}, e_{0111}\rangle_k,\\ {\mathfrak{g}}_d(2) &=\langle e_{1110}, e_{0120}, e_{1111}, e_{0121}, e_{0122}\rangle_k. \end{align*} Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. Let $x_1, \ldots,x_5$ be the values of $\lambda$ on the $5$ basis vectors of ${\mathfrak{g}}_d(2)$ (ordered as above). 
Then we obtain \begin{center} {\footnotesize ${\mathcal{G}}_\lambda=\pm \left(\begin{array}{r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r@{\hspace{5pt}}r@{\hspace{5pt}}r} 0& 0& 0& x_1& 0& x_3 \\ 0& 0& x_1& 2x_2& 0& x_4 \\ 0& -x_1& 0& 0& -x_3& 0 \\ -x_1& -2x_2& 0& 0& -x_4& 0 \\ 0& 0& x_3& x_4& 0& 2x_5 \\ -x_3& -x_4& 0& 0& -2x_5& 0 \end{array}\right).$} \end{center} We have $\det({\mathcal{G}}_\lambda)=4(x_1^2x_5-x_1x_3x_4+x_2x_3^2)^2$. Hence, if $p=2$, then the radical of $\sigma_\lambda$ is not zero. Now assume that $p\neq 2$. Then we notice that $\det({\mathcal{G}}_\lambda)=4$ for $x_1:=1$, $x_5:=1$ and $x_i:=0$ for $i=2,3,4$. So the radical of $\sigma_\lambda$ is zero for this choice of~$\lambda$. \end{abs} \begin{abs} \label{bf4h} Let $d(\alpha_1)=1$, $d(\alpha_2)=0$, $d(\alpha_3)=1$, $d(\alpha_4)=2$. We have ${\mathbf{b}}_d=3$ and \begin{align*} {\mathfrak{g}}_d(1) &=\langle e_{1000},e_{0010},e_{1100},e_{0110}\rangle_k,\\ {\mathfrak{g}}_d(2) &=\langle e_{0001},e_{1110},e_{0120}\rangle_k. \end{align*} Let $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ be any linear map. Let $x_1, x_2,x_3$ be the values of $\lambda$ on the~$3$ basis vectors of ${\mathfrak{g}}_d(2)$ (ordered as above). Then we obtain \begin{center} {\footnotesize ${\mathcal{G}}_\lambda=\pm \left(\begin{array}{r@{\hspace{5pt}}r@{\hspace{5pt}} r@{\hspace{5pt}}r} 0& 0& 0& x_2 \\ 0& 0& x_2& 2x_3 \\ 0& -x_2& 0& 0 \\ -x_2& -2x_3& 0& 0 \end{array}\right).$} \end{center} Hence, we see that the radical of $\sigma_\lambda$ is zero for any linear map $\lambda\colon {\mathfrak{g}}_d(2)\rightarrow k$ such that $\lambda(e_{1110})=x_2 \neq 0$, and this works for any field~$k$. \end{abs} Similar computations can, of course, be performed for other types of groups. The results are summarized as follows. \begin{abs} \label{corf4} Assume that the characteristic of $k$ is a bad prime for $G$. Let $d\in \Delta$ be such that ${\mathfrak{g}}_d(1) \neq \{0\}$. Consider the following two statements. 
\begin{itemize} \item[(a)] If $d\in \Delta_k^{\!\bullet}$, then there exist linear maps $\lambda \colon {\mathfrak{g}}_d(2) \rightarrow k$ such that $\det({\mathcal{G}}_\lambda) \neq 0$ and $\lambda(e_\alpha)\in \{0,1\}$ for all $\alpha\in \Phi_2$. \item[(b)] If $d\not\in \Delta_k^{\!\bullet}$, then $\det({\mathcal{G}}_\lambda)=0$ for all linear maps $\lambda \colon {\mathfrak{g}}_d(2)\rightarrow k$. \end{itemize} Let $G$ be simple of exceptional type $G_2$, $F_4$, $E_6$, $E_7$ or $E_8$. Then (a) can be verified by exactly the same kind of computations as in Example~\ref{be8}(a), by systematically running through all possibilities where $\lambda(e_\alpha)\in\{0,1\}$ for $\alpha\in \Phi_2$, until we find one such that $\det({\mathcal{G}}_\lambda)\neq 0$. Even in type $E_8$, this takes just a few seconds. The verification of (b) is much harder. As in the verification of Example~\ref{be8}(b), we need to show that the determinant of a certain matrix with entries in a polynomial ring over ${\mathbb{F}}_p$ is~$0$. Except for some cases in type $E_8$, it is sufficient to use the Pfaffian of that matrix, as in Example~\ref{be8}(b). For types $G_2$, $F_4$, we can see all this immediately from the results of the computations in Example~\ref{bg2} and in \ref{bf4a}--\ref{bf4h}, by comparing with the entries in the last column of Table~\ref{tab1}. However, there are critical cases in type $E_8$ (for example, $A_2{+}3A_1$, $2A_2{+}A_1$, $2A_2{+}2A_1$, $A_3{+}2A_1$, $A_3{+}A_2 {+}A_1$) where the computation of the Pfaffian appears to be practically impossible. In these cases, some more sophisticated computational methods are required. \end{abs} \begin{prop}[Steel--Thiel \protect{\cite{ST}}] \label{corf4aa} Let $G$ be of type $E_8$. Then the statement in \ref{corf4}(b) holds for all $d\in\Delta\setminus \Delta_k^{\!\bullet}$. \end{prop} (The proof relies on Groebner basis techniques.)
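Both computational ingredients can be illustrated in a few lines. The following Python sketch (ours; it is not the Groebner-basis code of \cite{ST}, and all function names are invented) implements the recursive row-expansion Pfaffian mentioned in Example~\ref{be8}(b) and applies the systematic $\{0,1\}$-search of \ref{corf4}(a) to the $8\times 8$ Gram matrix displayed in \ref{bf4b}:

```python
from itertools import product

def pfaffian_mod(A, p):
    """Pfaffian of an anti-symmetric integer matrix modulo p,
    by recursive expansion along the first row."""
    n = len(A)
    if n == 0:
        return 1 % p
    if n % 2 == 1:
        return 0                           # odd size: Pfaffian vanishes
    total = 0
    for j in range(1, n):
        if A[0][j] % p == 0:
            continue                       # prune zero entries
        idx = [i for i in range(n) if i not in (0, j)]
        sub = [[A[r][c] for c in idx] for r in idx]
        sign = 1 if j % 2 == 1 else -1     # (-1)^(j+1) for 1-based column j+1
        total += sign * A[0][j] * pfaffian_mod(sub, p)
    return total % p

def gram_b(x1, x2, x3, x4, x5, x6, x7):
    """The 8x8 Gram matrix of the F4 case with d(alpha_4)=1 above,
    with the overall sign fixed to +."""
    return [
        [0, 0, 0, 0, -2*x1, -2*x2, -2*x3, -x4],
        [0, 0, 2*x1, 2*x2, 0, 0, -x4, -2*x5],
        [0, -2*x1, 0, 2*x3, 0, x4, 0, -2*x6],
        [0, -2*x2, -2*x3, 0, -x4, 0, 0, -2*x7],
        [2*x1, 0, 0, x4, 0, 2*x5, 2*x6, 0],
        [2*x2, 0, -x4, 0, -2*x5, 0, 2*x7, 0],
        [2*x3, x4, 0, 0, -2*x6, -2*x7, 0, 0],
        [x4, 2*x5, 2*x6, 2*x7, 0, 0, 0, 0],
    ]

# (a)-style search: run through lambda-values in {0,1} until the
# Pfaffian (and hence the determinant = Pf^2) is non-zero mod p
for p in (2, 3):
    found = next(x for x in product((0, 1), repeat=7)
                 if pfaffian_mod(gram_b(*x), p) != 0)
    print(p, found)  # finds x_4 = 1, all others 0, for both primes
```

Since ${\mathcal{G}}_\lambda$ is anti-symmetric of even size, $\det({\mathcal{G}}_\lambda)=\mbox{Pf}({\mathcal{G}}_\lambda)^2$, so a non-zero Pfaffian modulo~$p$ detects exactly the non-degenerate choices of~$\lambda$.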
\begin{cor} \label{corf4a} If $G$ is simple of exceptional type $G_2$, $F_4$, $E_6$, $E_7$ or $E_8$, then Conjectures~\ref{main1a} and \ref{main2} hold for $G$. \end{cor} \begin{proof} First consider Conjecture~\ref{main1a}. Let $d\in \Delta_k^{\!\bullet}$. If ${\mathfrak{g}}_d(1)\neq \{0\}$, then we must show that there exists some $\lambda\colon {\mathfrak{g}}_d(2) \rightarrow k$ such that $\det({\mathcal{G}}_\lambda)\neq 0$. If $p$ (the characteristic of~$k$) is a good prime for $G$, then this holds by Remarks~\ref{rem24a} and~\ref{rem24c}($*$). If $p$ is a bad prime, then this holds since \ref{corf4}(a) is known to hold. Conversely, assume that either ${\mathfrak{g}}_d(1)=\{0\}$, or there exists a linear map $\lambda \colon {\mathfrak{g}}_d(2)\rightarrow k$ such that $\det({\mathcal{G}}_\lambda) \neq 0$. If ${\mathfrak{g}}_d(1)=\{0\}$, then $d\in \Delta_{\text{spec}}\subseteq \Delta_k^{\!\bullet}$, as already remarked at the end of Section~\ref{sec3}. If ${\mathfrak{g}}_d(1)\neq \{0\}$, then we have $d\in \Delta_k^{\!\bullet}$ by \ref{corf4}(b) and Proposition~\ref{corf4aa}. Now consider Conjecture~\ref{main2}. Let $d\in \Delta_{\text{spec}}$. If ${\mathfrak{g}}_{{\mathbb{Z}},d}(1)\neq \{0\}$, then we must show that there exists a homomorphism $\lambda\colon {\mathfrak{g}}_{{\mathbb{Z}},d}(2) \rightarrow {\mathbb{Z}}$ such that $\sigma_\lambda\colon {\mathfrak{g}}_{{\mathbb{Z}},d}(1)\times {\mathfrak{g}}_{{\mathbb{Z}},d}(1)\rightarrow {\mathbb{Z}}$ is non-degenerate over ${\mathbb{Z}}$. The verification is similar to that in \ref{corf4}(a), but now we work over ${\mathbb{Z}}$. Again, we systematically run through all possibilities where $\lambda(e_\alpha) \in\{0,1\}$ for $\alpha\in\Phi_2$, until we find one such that the Gram matrix of $\sigma_\lambda$ has determinant equal to~$1$. Even in type $E_8$, this takes just a few seconds.
Conversely, assume that either ${\mathfrak{g}}_{{\mathbb{Z}},d}(1)=\{0\}$, or there exists a homomorphism $\lambda \colon {\mathfrak{g}}_{{\mathbb{Z}},d}(2) \rightarrow {\mathbb{Z}}$ such that $\sigma_\lambda \colon {\mathfrak{g}}_{{\mathbb{Z}},d}(1)\times {\mathfrak{g}}_{{\mathbb{Z}},d}(1)\rightarrow {\mathbb{Z}}$ is non-degenerate over ${\mathbb{Z}}$. If ${\mathfrak{g}}_{{\mathbb{Z}},d}(1)=\{0\}$, then $d\in \Delta_{\text{spec}}$ (see again the remark at the end of Section~\ref{sec3}). Now assume that ${\mathfrak{g}}_{{\mathbb{Z}},d}(1)\neq \{0\}$. By reduction modulo~$p$, we obtain a linear map $\lambda_k\colon {\mathfrak{g}}_d(2)\rightarrow k$ and a corresponding alternating form $\sigma_{\lambda_k}\colon {\mathfrak{g}}_d(1)\times {\mathfrak{g}}_d(1)\rightarrow k$. Since $\sigma_\lambda$ is non-degenerate over~${\mathbb{Z}}$, the radical of $\sigma_{\lambda_k}$ will be zero and so $d\in \Delta_k^{\!\bullet}$, since we already know that Conjecture~\ref{main1a} is true for~$G$. Note that this holds for all choices of~$k$. So Table~\ref{tabb} shows that $d\in \Delta_{\text{spec}}$. \end{proof} \begin{rem} \label{allan} Assume that $G$ is simple of type $A_n$. Then there are no bad primes for $G$, and all unipotent classes of $G$ are special. Now Conjecture~\ref{main1a} is known to hold in this case; see Remarks~\ref{rem24a} and~\ref{rem24c}($*$). As far as Conjecture~\ref{main2} is concerned, it remains to show that if $d\in \Delta$ is such that ${\mathfrak{g}}_{{\mathbb{Z}},d}(1)\neq \{0\}$, then there exists a homomorphism $\lambda\colon {\mathfrak{g}}_{{\mathbb{Z}},d}(2) \rightarrow {\mathbb{Z}}$ such that $\sigma_\lambda \colon {\mathfrak{g}}_{{\mathbb{Z}},d}(1)\times {\mathfrak{g}}_{{\mathbb{Z}},d}(1) \rightarrow {\mathbb{Z}}$ is non-degenerate over ${\mathbb{Z}}$. At the moment, we do not see a general argument but, by similar methods as above, we have at least checked that this holds for $2\leq n\leq 15$.
\end{rem} In order to verify Conjecture~\ref{main} in full, one would also need a description of the sets ${\mathfrak{g}}_d(2)^{*!}$; this will be discussed elsewhere. (For $G$ of type $A_n$, $B_n$, $C_n$, $D_n$, such a description is available from \cite{L3d}, \cite{xue1}.) Furthermore, one would need to check whether or not there exists some $\lambda$ which is defined over ${\mathbb{F}}_q$ and is in sufficiently general position.~---~It would be highly desirable to find a more conceptual explanation for all this. \medskip \noindent {\bf Acknowledgements.} I thank George Lusztig for discussions. Thanks are also due to Gunter Malle for a careful reading of the manuscript and many useful comments. I am indebted to Ulrich Thiel and Allen Steel for addressing, and solving, the computational issues in type $E_8$ (see \cite{ST}), which provided crucial support for Conjectures~\ref{main1a} and~\ref{main2}. Thanks are also due to Alexander Premet for comments and for pointing out the reference \cite{prem2}. Most of the work was done while I enjoyed the hospitality of the University of Cyprus at Nicosia (early October 2018); I thank Christos Pallikaros for the invitation. This work is a contribution to DFG SFB-TRR 195 ``Symbolic tools in Mathematics and their application''.
\section{\label{sec:intro}Introduction} Optically pumped magnetometers (OPMs) have increasingly been in the spotlight for their broad span of applications ranging from fundamental physics experiments to medical physics. Examples include measurements of the electric dipole moment (EDM)~\cite{edm1,edm2} and searches for exotic physics~\cite{cptviolation}, as well as magneto-encephalography (MEG)~\cite{meg1,meg2} and magneto-cardiography~\cite{mcg0,mcg1,mcg2}, where detection of the small magnetic fields of the brain and the heart is required. A review can be found in~\cite{Budker2007}. In recent years OPMs have become the state-of-the-art magnetic field sensors, achieving sub-fT$/\sqrt{\textrm{Hz}}$ sensitivity and surpassing the well-established SQUID-based sensors~\cite{Romalis2002,Romalis2010,Romalis2013}. In its simplest operation, an OPM uses a pump-probe laser to measure the atomic Larmor frequency, i.e.\ the frequency of spin precession, by interacting with optically pumped atoms, which in effect measures the strength of the external magnetic field. However, in a wider range of applications, a complete determination of the magnetic field is required. Some schemes operate a scalar magnetometer as a vector magnetometer by applying a rotating low-frequency bias magnetic field~\cite{Yakobson2004,Gao2016}. Another possible approach uses multiple radio-frequency modulations to map the three vector components onto the harmonics of the signal~\cite{Romalis2004,Budker2014}. The effects of the field orientation on the resulting signal phase have been studied for different configurations of a modulating field and may be used for full vector magnetometry~\cite{ingleby}. Also an all-optical scheme with crossed beams was demonstrated to extract the three field components~\cite{Gao2015}. To date, most OPM schemes are based on pump-probe configurations that rely on Faraday rotation, i.e.\ circular birefringence of the medium.
As a result, the majority of such schemes require an orthogonal pump-probe geometry for high detection efficiency \cite{Romalis2002}. However, this geometry is not convenient for developing miniature sensors, whilst a parallel configuration is compatible with chip-scale and compact atomic magnetometers. The issue can be overcome by introducing modulation fields, but several fields are required to extract the full information. In this paper we demonstrate an alternative 3D vector magnetometer based on the measurement of the Voigt effect, i.e.\ linear birefringence arising from aligned rather than oriented spin states. It is measured with a probe beam detuned from an optical resonance, in the presence of a single radio-frequency field~\cite{Jammi}. Previous work on double resonance detection of aligned states measured linear dichroism on a resonant optical transition \cite{Weis1,Weis2}. The method presented here maps the three vector components of the external field, detected via demodulation of the probe beam's ellipticity, onto orthogonal quadratures of the first and second harmonic of the dressing frequency. State preparation and detection are performed in a parallel pump-probe geometry. We demonstrate the vector capability of our magnetometer over a range of $\pm0.3$~nT longitudinal and $\pm180$~nT transverse fields and analyze its sensitivity in a shielded environment in open-loop operation. Active feedback on the external field should enable an extension of the dynamic range as well as operation in unshielded scenarios. The paper is organized as follows. In Section~\ref{sec:theory}, we briefly describe the linear birefringence induced by radio-frequency dressed states. This provides predictions for the mapping of field components onto orthogonal quadratures of the first and second harmonics of the rf oscillation in the signal's response. In the following part of the paper, we report on the experimental realization using two different types of atomic ensemble.
Section~\ref{sec:expColdatoms} demonstrates the detection principle with laser cooled atoms, prepared in a pure quantum state. Section \ref{sec:expvapour} describes the extension to a magnetically shielded vapour cell by combining the Voigt effect with synchronous pumping. Experimental results on vector sensitivity are shown together with an analysis of noise performance. Section~\ref{sec:Conclusions} presents our conclusions. \section{\label{sec:theory}Magnetometry with radio-frequency dressed states} Optically pumped magnetometers utilise dispersive coupling of light to an atomic ensemble in the presence of an external magnetic field. The Larmor precession of spin-polarized atoms causes a modulation of the medium's birefringence, which can be observed polarimetrically. In our scheme, we actively drive such precession with an additional radio-frequency field. Here, we present a brief description of the driven medium and its interaction with the light field in terms of dressed states, as discussed in our previous work~\cite{Jammi}. Our model includes the dependence on field orientation, which allows for the extraction of full vector information from measurements of either linear or circular birefringence. We consider atoms interacting with a static field $\mathbf{B}_{\mathrm{dc}}=B_{\mathrm{dc}}\mathbf{e}_z$ and a field oscillating at a radio-frequency $\omega$ in a transverse direction $\mathbf{B}_{\mathrm{rf}}(t)=B_{\mathrm{rf}}\cos(\omega t) \mathbf{e}_x$. For weak fields, the time-dependent interaction Hamiltonian of an atom with spin $\mathbf{F}$ of constant magnitude can be approximated by $\hat{H}=(\mu_B g_F/\hbar) \hat{\mathbf{F}}\cdot\left(\mathbf{B}_{\mathrm{rf}}(\omega t)+\mathbf{B}_{\mathrm{dc}}\right)$, where $\mu_B$ is the Bohr magneton, $g_F$ is the Land\'e factor, and $\hbar$ is the reduced Planck constant. 
Depending on the sign of the $g_F$ factor, using positive $\omega$, we transform the Hamiltonian to a frame rotating about the $z$-axis, according to $\hat{H}_\mathrm{rot}=\hat{U}\hat{H}\hat{U}^{-1}+i\hbar\left(\partial_t\hat{U}\right)\hat{U}^{-1}$ with a time-dependent rotation operator $\hat{U}=e^{\mathrm{sgn}(g_F) i\omega t \hat{F}_z/\hbar}$. Neglecting counter-rotating terms, the transformed, effective Hamiltonian takes the form \begin{equation} \hat{H}_{\mathrm{eff}}=\frac{\mu_Bg_F}{\hbar}\hat{\mathbf{F}}\cdot\mathbf{B}_{\mathrm{eff}}.\label{eq:Heff} \end{equation} The effective magnetic field in this frame is given by $\mathbf{B}_{\mathrm{eff}}=B_{\rho}\mathbf{e}_x + (B_{\mathrm{dc}}-B_{\mathrm{res}}) \mathbf{e}_z$, where $B_{\rho}=B_\mathrm{rf}/2$, and $B_{\mathrm{res}}=\pm\hbar\omega/\mu_B g_F$ corresponds to a fictitious magnetic field that defines a resonance condition for the Larmor precession~\cite{Jammi}. As depicted in Fig.~\ref{fig:FrameRotation}(a), the angle enclosed by the effective field and the $z$-axis is \begin{equation} \theta=\frac{\pi}{2}-\mathrm{tan}^{-1}{\frac{B_\mathrm{dc}-B_{\mathrm{res}}}{B_{\rho}}}. \label{eq:theta} \end{equation} For example, at resonance, i.e.\ for $\theta=\pi/2$, the effective field is orthogonal to the static field $\mathbf{B}_\mathrm{dc}$, pointing in the rotating frame's $x$-direction. \begin{figure}[h!]\hspace*{-0.4cm} \begin{overpic}[scale=0.3]{frame_rot.png} \put (3,33) {a)} \put (45,33) {b)} \end{overpic} \caption{Geometrical depiction of the effective field in the rotating frame. (a) The effective field encloses an angle $\theta$ with the $z$-axis. An external field variation $\delta B_z$ changes the angle $\theta \rightarrow \theta'$.
(b) The presence of transverse external fields $B_y$ and $B_x$ also changes the orientation of the effective field, rotating it by angles $\alpha$ and $\beta$, respectively.}\label{fig:FrameRotation} \end{figure} The eigenstates of the effective, rotating-frame Hamiltonian, i.e.\ the dressed states, can be written as $ \left|\Psi_{\mathrm{rot}}\right\rangle=e^{i\theta\hat{F}_y/\hbar}\left|F,F_z\right\rangle$. But in the laboratory frame the same states are time-dependent, given by $\left|\Psi(t)\right\rangle=\hat{U}^{-1}\left|\Psi_{\mathrm{rot}}\right\rangle$. The dressed states can be prepared directly by synchronous optical pumping \cite{azunish}, or by adiabatic dressing of bare states $\left|F,F_z\right\rangle$, which acquire only a time-dependent phase under transformation to the rotating frame, leading to a different (quasi-) energy. In our scheme, magnetometer operation does not rely on a purely dynamical spin evolution, which would be observed as a change in precession frequency. Instead, we assume adiabatic following of dressed states that remain aligned with the effective field. Magnetometry is then enabled by the orientational dependence of the effective field on an additional external field. An external field in the $z$-direction enters the spin evolution through the dependence of $\theta$ on the static field strength, see Eq.~(\ref{eq:theta}) and Fig.~\ref{fig:FrameRotation}(a), where we can control the sensitivity $\partial\theta/\partial B_z$ by applying an offset field, such that $B_z=B_\mathrm{offs}+\delta B_z$. The presence of transverse static fields can also be represented by rotations, as shown in Fig.~\ref{fig:FrameRotation}(b). Field components $B_{y,x}$ rotate the static field about the $x,y$-axes by angles $\alpha$ and $\beta$, respectively. 
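The transverse-field rotations just described can be checked numerically. The exact angle formulas depend on the chosen rotation convention; the sketch below (standard rotation matrices, hypothetical field values) solves for the angles that align $\mathbf{e}_z$ with the field direction under the ordering $\mathbf{R}_x(\alpha)\mathbf{R}_y(\beta)$ and confirms the small-angle limit $\alpha\approx -B_y/B_z$, $\beta\approx B_x/B_z$ of Eq.~(\ref{eq:angles}):

```python
import numpy as np

def Rx(a):
    """Rotation about the x-axis."""
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Ry(b):
    """Rotation about the y-axis."""
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

# hypothetical small transverse fields (same units as Bz)
Bx, By, Bz = 2e-3, -3e-3, 1.0
B = np.array([Bx, By, Bz])
Bhat = B / np.linalg.norm(B)

# exact angles aligning e_z with the field under M = Rx(alpha) @ Ry(beta)
alpha = np.arctan2(-By, Bz)
beta = np.arcsin(Bx / np.linalg.norm(B))
M = Rx(alpha) @ Ry(beta)
assert np.allclose(M @ np.array([0.0, 0.0, 1.0]), Bhat)

# small-angle approximation, Eq. (angles): alpha ~ -By/Bz, beta ~ Bx/Bz
assert abs(alpha - (-By / Bz)) < 1e-5
assert abs(beta - Bx / Bz) < 1e-5
```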
Hence, using a sequential rotation $\mathbf{M}(\alpha,\beta)=\mathbf{R}_x(\alpha)\mathbf{R}_{y}(\beta)$ \cite{Rodrigues}, the atomic spin operator in the laboratory frame is given by $\hat{\mathbf{F}}'=\mathbf{M}(\alpha,\beta)\hat{\mathbf{F}}$, where unprimed coordinates are aligned with the actual field. Figure~\ref{fig:FrameRotation} shows that the angles $\alpha$ and $\beta$ are given by \begin{align} \alpha&=\arctan\left(\frac{-B_y}{B_z}\cos(\beta)\right), \\ \beta&=\arctan\left(\frac{B_x}{B_z}\right), \end{align} with the small angle approximation \begin{align} \alpha&\approx\frac{-B_y}{B_z},\ \beta\approx\frac{B_x}{B_z}. \label{eq:angles} \end{align} For a complete description at larger angles, we need to include that the transverse fields increase the actual static field strength to $B_\mathrm{dc}=\sqrt{B_z^2+B_x^2+B_y^2}$, and that the applied rf field is not co-rotated, leading to a reduction of its effective amplitude in the rotating frame, given by $B_\rho=(B_\mathrm{rf}/2)\mathrm{cos}\beta$. For the detection of the spin evolution we employ $+45^{\circ}$-linearly polarized probe light propagating in the $z$-direction with a corresponding Stokes parameter $S_y=(c/2)\langle \hat{a}^{\dagger}_{x}\hat{a}_{y}+\hat{a}^{\dagger}_{y}\hat{a}_{x}\rangle$, which is equal to half the photon flux \cite{Jammi}. The dispersive interaction of the atomic medium with off-resonant light may lead to both circular and linear birefringence, depending on the atomic spin-dependent polarizability tensor. 
After propagation through the medium, neglecting absorption and assuming sufficiently small phase angles, the resulting Faraday and Voigt rotation can be described by Stokes operators \begin{align} \Braket{\hat{S}'_x(t)}&=-G_{F}^{(1)} S_y n_F \Braket{\hat{F}_z(t)},\label{eq:Faraday}\\ \Braket{\hat{S}'_z(t)}&=G_{F}^{(2)} S_y n_F \Braket{\hat{F}_x^2(t)-\hat{F}_y^2(t)}\label{eq:Voigt}, \end{align} where $\hat{S}'_z$ and $\hat{S}'_x$ represent the polarization's rotation and ellipticity as photon flux imbalances of the output light, measured in either a circular or linear basis. The coupling strengths $G_{F}^{(k)}$ depend on light detuning, interaction cross section, and the rank-$k$ components of the polarizability tensor. In these equations, we assume interaction with $n_F$ atoms in the same spin state within one hyperfine $F$-manifold and neglect dispersive back-action on the atoms (Stark shifts)~\cite{Jammi}. For eigenstates of the effective Hamiltonian, using the geometrical rotations by angles $\alpha$, $\beta$, and $\theta$, we can determine the temporal atomic response in the laboratory frame, measured via Faraday or Voigt effect, in the presence of external magnetic fields. An adiabatic eigenstate $\left|\Psi_{\mathrm{rot}}\right\rangle$, transformed to the laboratory frame and rotated by $\mathbf{M}(\alpha,\beta)$, leads to spectral decompositions of the measured signals. For Faraday rotation, this is given by \begin{equation} \Braket{\hat{S'}_x(t)}=-\frac{1}{2}G_F^{(1)}S_y n_F\hbar F_z\sum_{n=0}^1 \tilde{h}_n(\theta)e^{in\omega t}+c.c., \label{eqn:spectraloutputFz} \end{equation} using $\Bra{F,F_z}\hat{F}_z\Ket{F,F_z}=\hbar F_z$.
From Eqs.~(\ref{eq:h0Fz}) and (\ref{eq:h1Fz}), we find the spectral components in the small angle approximation \begin{align} \label{eqn:hcompFzLowangle} &\left( \tilde{h}_0,\tilde{h}_1\right)^T(\theta)\approx\left(\begin{array}{l} \cos{\theta}\\ (\beta\pm i \alpha)\sin{\theta}\\ \end{array}\right), \end{align} for $\alpha,\beta \ll 1$, i.e.\ for $B_{x,y}\ll B_z$. The principal behaviour of these functions across rf resonance is depicted in Fig.~\ref{fig:hcomps}(a). This spectral decomposition shows rf-resonant behaviour. The transverse field components are mapped onto the quadratures of the first harmonic according to $\beta\pm i\alpha\approx (B_x\mp i B_y)/B_z$ and with an oscillation amplitude proportional to $\sin{\theta}$. The latter is maximal for $\theta=\pi/2$, i.e.\ exactly on rf resonance. At the same time, the zeroth harmonic, i.e.\ the dc signal, exhibits dispersive behaviour that maps the total static field strength, which is proportional to $B_z$ in the first order approximation near resonance. This configuration represents a vector magnetometer, but the dc component of the signal is quite vulnerable to electronic and technical noise, which will limit this strategy in practice to sensitive measurements of only the two transverse field components. When measuring the Voigt effect, the spectral decomposition of the signal leads up to the second harmonic and is given by \begin{equation} \Braket{\hat{S'}_z(t)}=\frac{1}{2}G_F^{(2)}S_y n_F\hbar^2\xi_F(F_z)\sum_{n=0}^2 h_n(\theta)e^{in\omega t}+c.c., \label{eqn:spectraloutput} \end{equation} using $\Bra{F,F_z}\hat{F}_y^2-\hat{F}_z^2\Ket{F,F_z}=\hbar^2(F(F+1)-3F_z^2)/2=\hbar^2\xi_F(F_z)$. According to Eqs.~(\ref{eq:h0Fxy})-(\ref{eq:h2Fxy}), the spectral components in the small angle approximation are in this case \begin{align} \label{eqn:approxVoigtharm} &\left( h_0,h_1,h_2\right)^T(\theta)\approx \left(\begin{array}{l} 0\\ {(\beta\mp i\alpha)\sin{2\theta}} \\ {-\sin^2{\theta}} \\ \end{array}\right). 
\end{align} The principal behaviour of these functions is shown in Fig.~\ref{fig:hcomps}(b). Again, the transverse field components are mapped onto the quadratures of the first harmonic, but now with a dispersive shape given by an oscillation amplitude proportional to $\sin{2\theta}$. Maximal amplitude is reached at $\theta=\pi/4$ and $\theta=3\pi/4$, i.e.\ when the static field is $B_\mathrm{dc}=B_\mathrm{sense}^\pm=B_{\mathrm{res}}\pm B_\rho$. In contrast to the Faraday decomposition, the zeroth harmonic vanishes while the second harmonic depends on the static field amplitude. Conveniently, the maximum sensitivity and approximately linear response to $B_z$ is also met at $B_\mathrm{dc}=B_\mathrm{sense}^\pm$. Hence, the Voigt rotation enables low-noise detection of all three magnetic components by evaluating the first and second signal harmonics. \begin{figure}[t!] \begin{overpic}[width=0.45\textwidth]{spectral_components.png} \put (0,63) {a)} \put (18,61) {Faraday} \put (25,44) {n=0} \put (25,52) {\color{red}n=1} \put (0,35) {b)} \put (18,33) {Voigt} \put (59,20) {n=0} \put (69,33) {\color{red}n=1} \put (59,28) {\color{blue}n=2} \end{overpic} \caption{Spectral decomposition of (a) Faraday rotation proportional to $\Braket{F_z}$ and (b) Voigt rotation proportional to $\Braket{F_x^2-F_y^2}$, with harmonics $n=0$ (solid black lines), $n=1$ (dashed red lines) and $n=2$ (dashed-dotted blue lines).} \label{fig:hcomps} \end{figure} For the Voigt effect measurements presented in the following, we work on the high field side of the rf resonance, by applying a field in the $z$-direction of strength $B_\mathrm{offs}=B_\mathrm{sense}^+(\alpha=\beta=0)=B_\mathrm{res}+B_\mathrm{rf}/2$. 
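The resonance geometry behind these sensitive points can be verified numerically from Eq.~(\ref{eq:theta}); a minimal sketch with hypothetical field values (not the experimental ones):

```python
import numpy as np

# hypothetical field values (units of G), chosen for illustration only
B_res, B_rf = 0.250, 0.015
B_rho = B_rf / 2

def theta(B_dc):
    """Effective-field angle of Eq. (theta)."""
    return np.pi / 2 - np.arctan((B_dc - B_res) / B_rho)

B_dc = np.linspace(B_res - 5 * B_rho, B_res + 5 * B_rho, 20001)
th = theta(B_dc)
h1 = np.sin(2 * th)    # first-harmonic envelope of the Voigt signal
h2 = -np.sin(th) ** 2  # second-harmonic component

# |h1| peaks at theta = pi/4, i.e. at B_dc = B_res + B_rho = B_sense^+ ...
high = B_dc > B_res
B_max = B_dc[high][np.argmax(h1[high])]
assert abs(B_max - (B_res + B_rho)) < 1e-4

# ... where the second harmonic takes the value -1/2
assert abs(h2[np.argmin(np.abs(B_dc - (B_res + B_rho)))] + 0.5) < 1e-3

# exactly on resonance (theta = pi/2) the second harmonic is maximal
assert abs(h2.min() + 1.0) < 1e-9
```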
At this setting, the explicit second order expansion of the three relevant signal quadratures is given by \begin{align} \Re(h_1)=h_x=&+\left(\frac{1}{B_\mathrm{offs}} - \frac{\delta B_z}{B_\mathrm{offs}^2} \right)B_x, \label{Re1f}\\ \Im(h_1)=h_y=&-\left(\frac{1}{B_\mathrm{offs}} - \frac{\delta B_z}{B_\mathrm{offs}^2} \right)B_y, \label{Im1f}\\ \Re(h_2)=h_z=&-\frac{1}{2}+\frac{\delta B_z}{B_{\mathrm{rf}}}- \left(\frac{\delta B_z}{B_{\mathrm{rf}}}\right)^2\nonumber\\ &+\frac{2B_\mathrm{offs}+B_\mathrm{rf}}{4B_\mathrm{offs}^2B_\mathrm{rf}}B_x^2+\frac{B_\mathrm{offs}+B_\mathrm{rf}}{2B_\mathrm{offs}^2B_\mathrm{rf}}B_y^2\label{Re2f}. \end{align} \section{\label{sec:expColdatoms}Experimental realization: Laser cooled atoms} \subsection{Laser cooled atoms setup} Our experimental cold atom setup was described in~\cite{Jammi}, and here we will present only a brief description. We prepare an ensemble of approximately $2\times10^7$ laser cooled $^{87}$Rb atoms with a temperature of $(80\pm10)~\mathrm{\mu K}$ in the $\ket{F=2, m_F=0}$ state, using a sequence of optical pumping and state cleaning steps. Here, optical pump light propagating along a second optical axis was used to simplify the state preparation. Atoms are then adiabatically dressed with a magnetic rf field in the $x$-direction with frequency $\omega = 2\pi \times 180~\mathrm{kHz}$, generated by an external resonant coil. The rf field amplitude is ramped up to $\approx 15~\mathrm{mG}$ over $4~\mathrm{ms}$ while the static magnetic field is ramped to a magnitude of $B_{\mathrm{offs}} \approx 260~\mathrm{mG}$ along the $z$-direction, which tunes the atomic Larmor frequency near resonance. \begin{figure}[b] \centering \includegraphics[width=0.45\textwidth]{Setup1.png} \caption{Experimental setup. A laser cooled $^{87}$Rb sample is prepared in a pure $|F=2,F_y=0\rangle$ state. 
After rotation of the static field into the $z$-direction and adiabatic dressing with a magnetic rf field along $x$, linear birefringence of the sample is probed polarimetrically by a laser pulse propagating along $z$.} \label{fig:sExpSetup} \end{figure} We measure the Voigt effect with a laser beam ($P\approx 100~\mu\mathrm{W}, \diameter\approx2.5~\mathrm{mm}$) detuned by $-400~\mathrm{MHz}$ from the $F=2\to F'=2$ transition of the $^{87}$Rb-$D_1$-line. A half-waveplate sets the polarization at $45^{\circ}$ with respect to the $x,y$-axes. After interaction of a $1~\mathrm{ms}$ long probe pulse with the ensemble, a quarter-wave plate and a Wollaston prism allow us to measure the linear birefringence of the medium. The light is detected on a balanced photodetector pair (Thorlabs PDB210A) with a high-pass filtering rf amplifier (Minicircuits Model ZFL-1000+). The output voltage $u$ is proportional to the observed ellipticity, i.e.\ $u(t)=g_{\mathrm{el}} S'_z(t)$ with electronic gain $g_{\mathrm{el}}$ on the order of $10^{-12}$~V/Hz. The output signal is acquired by a field programmable gate array (FPGA) and is demodulated digitally with reference to the phase of the rf field. \subsection{\label{sec:fieldmapping}Field mapping with laser cooled atoms} The atomic ensemble should operate as a vector magnetometer near $\mathbf{B}=B_\mathrm{sense}^+\mathbf{e}_z$, where the two frequency modes of the atomic response map the three components of the magnetic field. Detuned to one HWHM above the rf resonance, the signal amplitude at $2\omega$ will be sensitive to the longitudinal field, while the two quadratures of the signal amplitude at $\omega$ map the transverse fields. Figure~\ref{fig:h1h2ColdAtoms} shows experimental signals demodulated at frequencies $\omega$ and $2\omega$ as a function of the static field $B_z\mathbf{e}_z$ (see more details in Section~\ref{subsec:detection}). The detected oscillation amplitude at frequency $2\omega$ shows resonant behaviour.
The dispersive responses at frequency $\omega$ are observed due to the presence of a transverse field with non-zero $x$ and $y$ components. On the high field side, these signals show maximal amplitude near $B_z=B_\mathrm{sense}^+\approx 0.258~\mathrm{G}$, where the $2\omega$ amplitude shows an approximately linear response with respect to $B_z$. To show vector magnetometer operation, we scan the transverse fields in a grid-like pattern at constant $B_z$ near the sensitive point. The results are shown in Fig.~\ref{fig:GridColdAtoms}, together with the matching theoretical response. \begin{figure}[t!] \begin{overpic}[width=0.45\textwidth]{fig4_a_b.png} \put (15,68) {a)} \put (15,35) {b)} \end{overpic} \caption{Voigt effect measurement across rf resonance. Typical experimental amplitudes of the signal harmonics. Here, in-phase components of ac voltage amplitudes oscillating at $\omega$ (a), and at $2\omega$ (b) vary when the field component $B_z$ is scanned across the rf resonance in the presence of a constant, non-zero transverse field.} \label{fig:h1h2ColdAtoms} \end{figure} The results confirm the principle of operation for the magnetometer based on the Voigt effect in cold atoms, with well controlled preparation of pure quantum states and temporal separation of state-preparation and exposure to an external field. For practical purposes, this setup is of limited use, given the complexity of the apparatus and limitations on achievable sample proximity and bandwidth or cycle rate. Therefore, we explored Voigt effect magnetometry in a vapour cell with room temperature atoms towards practical devices with higher bandwidth and sensitivity. \begin{figure}[t!]
\begin{overpic}[width=\textwidth/2]{Fig5_Cold2DGrid.png} \put(0,33){a)} \put(50,33){c)} \end{overpic}\vspace{0.5cm} \begin{overpic}[width=\textwidth/2]{Fig5_Cold3DGrid.png} \put(0,33){b)} \put(50,33){d)} \end{overpic} \caption{Atomic response, mapping the external magnetic field vector onto harmonic signal components. (a) Theoretical response of the signal quadratures at frequency $\omega$, as a function of the transverse field components $B_x$ and $B_y$ for constant $B_z$. (b) Full 3-dimensional response (see Appendix), including the real part of the signal amplitude at frequency $2\omega$, which allows for the measurement of the longitudinal field $B_z$. (c) Experimental realization, showing real and imaginary parts of the complex signal amplitude $m_1$ (detailed description in Sec.~\ref{subsec:detection}) at $\omega$ as a function of scanned transverse fields. The separation between two vertical lines is $\Delta B_x\approx 9~\mathrm{mG}$. (d) Full experimental response of both amplitudes $m_1$ and $m_2$ as a function of the scanned field.} \label{fig:GridColdAtoms} \end{figure} \section{\label{sec:expvapour}Experimental realization: Room temperature vapour} \subsection{Room temperature vapour setup} Our setup is based on a paraffin coated $^{87}$Rb enriched vapour cell of diameter $d=26$~mm and length $l=106$~mm at room temperature, with a density of approximately $10^{10}$~atoms/cm$^3$. The cell is placed inside a commercial 4-layer $\mu$-metal shield (Twinleaf MS-2), mounted on a non-magnetic vibration isolation table with non-magnetic optomechanics, see Fig.~\ref{fig:setup}(a). The static magnetic fields and the radio frequency field inside the chamber are generated by a combination of a solenoid for the longitudinal field and cosine-theta coils for the transverse fields. The coils are driven by lead-acid battery powered, ultra-low-noise current sources, based on the modified Libbrecht-Hall design \cite{hall,dalin}.
The laser system to address the atomic transitions for state preparation and probing consists of a combination of commercial and in-house external cavity diode lasers. The laser system is housed on a separate vibration isolation table, and the light is coupled via single mode, polarization maintaining fibers. The atomic vapour is optically pumped with a linearly polarized pair of laser beams, counterpropagating to a linearly polarized probe beam. A small angle is used for optical access. The Voigt rotation is measured by separating the two circular polarization components with a quarter wave-plate and polarizer cube and detecting the light with a balanced photodetector pair (Thorlabs PDB210A), with electronic gain $g_{\mathrm{el}}\approx10^{-13}~$V/Hz. The magnetometer is operated in pump-probe mode, where each cycle contains an initial period of synchronous optical pumping before probing the atomic state. The experimental sequence generation and data acquisition are performed using a National Instruments FPGA (PCIe-7852). \begin{figure}[b] \begin{overpic}[scale=0.3]{setup.png} \put(0,43){a)} \end{overpic}\quad \begin{overpic}[scale=0.22]{sequence.png} \put(0,40){b)} \end{overpic} \caption{Experimental realization with a shielded vapour cell. (a) Layout of the experimental setup. A pair of pump beams is used to prepare the atomic state before a polarimetric measurement is performed on a counterpropagating probe beam. The pump light is incident under a small angle with respect to the probe and the static offset field. (b) Single shot experimental pulse sequence with a typical 8 ms duration. 
During each cycle, the static fields are held at constant, scanned values after switching within $\leq 50~\mu\mathrm{s}$ at the start of the state preparation process.}\label{fig:setup} \end{figure} \subsection{\label{subsec:state_preparation}State preparation} In each cycle, we perform the state preparation by optical pumping, which reaches a steady state over the first 5~ms, before probing the state for another 3~ms, see Fig.~\ref{fig:setup}(b). Unlike in the cold atom test case, the magnetic fields are not adiabatically ramped to different values between pump and probe stages. Here, we prepare dressed states directly by synchronous pumping \cite{sync_pump}, i.e.\ using a pulse train of pump light, in phase with a uniform, 5~kHz rf field of $\lessapprox 0.1$~mG, near-resonant with the Larmor precession in the static field $B_z$. We use linear polarization along the $x$-axis, parallel to the rf field. During a short 9\% duty cycle, this direction is nearly aligned with our quantization axis in the rotating frame. This enables the preparation of either the dressed state $\ket{F=2,m_F=0}$ for pumping near the $F=2\rightarrow F'=2$ transition on the $D_1$ line, or an incoherent mixture of $\ket{F=2,m_F=\pm2}$ for pumping near $F=2\rightarrow F'=1$. Our signal amplitude depends quadratically on the magnetic quantum number $m$, and all three of these states give rise to the same maximum possible signal with $\xi_F(2)=\xi_F(-2)=-\xi_F(0)$. The choice of states and required pump polarization allows for the pump beam to propagate parallel to the probe beam, along the $z$-axis. \begin{figure}[t!]\centering\hspace*{-1em} \begin{overpic}[width=\textwidth/2]{Fig7_raw_atomic_and_fft_signal_5kHz_RF.png} \put (3,90) {a)} \put (3,45) {b)} \end{overpic} \caption{Typical experimental signals. a) Raw balanced signal of the Voigt rotation. The state preparation occurs within the first 5~ms of the cycle followed by a 3~ms probing pulse.
b) Single-sided power spectral density (PSD) of the amplified signal during the probe pulse. Atomic signals arise at $\omega$ and $2\omega$. Weak harmonics at $3\omega$ and $4\omega$ can also be observed, which may arise due to non-linear magneto-optical effects~\cite{Budker2002} and non-linearities in the electronic detection path. For comparison, photon shot noise is at a level of $g_\mathrm{el}^2S_y\approx 0.6\times10^{-10}~\mathrm{V}^2/\mathrm{kHz}$. }\label{fig:raw_data} \end{figure} \begin{figure}[h!]\vspace*{-0.3cm}\hspace*{-0.1cm}\centering \begin{overpic}[width=\textwidth/2]{Fig8_a_b_dedmoulated_profiles_new.png} \put (-2,73) {a)} \put (-2,39) {b)} \end{overpic} \hspace*{-0.1cm}\centering\begin{overpic}[width=\textwidth/2]{Fig8_d_e_fom_1f_2f_new.png} \put (-2,74) {c)} \put (-2,41) {d)} \end{overpic} \caption{Experimental magnetometer response. Panels a) and b) show the three relevant quadratures of the mode amplitudes $m_1$ and $m_2$, i.e.\ responses at frequencies $\omega$ and $2\omega$, for a scan of the longitudinal field $B_z$ across the rf resonance. Here, non-zero transverse fields $B_x$ and $B_y$ are kept constant. The mode amplitudes, extracted according to Eq.~(\ref{eq:mode_amplitudes}), follow the predicted behaviour, see Eq.~(\ref{eq:Voigt}). Panels c) and d) show experimental estimates for the three signal scale factors as a function of probe detuning. For the longitudinal field, this is the slope of the $2\omega$ resonance profile, estimated as $2A/\Gamma$. Near the chosen probe detuning of $-550$~MHz, all scale factors are close to maximal, and the first order responses to orthogonal external fields are orthogonal.} \label{fig:opm_1f2fmodes} \end{figure} To maximise atomic population in the $F=2$ hyperfine manifold, a co-propagating CW repump beam addressing $\ket{F=1}\rightarrow \ket{F'=2}$ of the D2 line of the same polarization is spatially overlapped with the pump, which repopulates atoms from the $\ket{F=1}$ to the $\ket{F=2}$ ground state.
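The equivalence of the pumped target states noted above follows directly from the alignment factor $\xi_F(m)=(F(F+1)-3m^2)/2$; a quick check:

```python
def xi(F, m):
    """Alignment factor xi_F(m) = (F(F+1) - 3 m^2) / 2."""
    return (F * (F + 1) - 3 * m**2) / 2

vals = {m: xi(2, m) for m in range(-2, 3)}
# m = 0 and m = +/-2 give equal-magnitude, opposite-sign alignment ...
assert vals[2] == vals[-2] == -vals[0] == -3.0
# ... while the intermediate states m = +/-1 couple more weakly
assert abs(vals[1]) == abs(vals[-1]) == 1.5
```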
The pump and repump beams share the same Gaussian intensity profile with 7.3~mm diameter ($1/e^2$) and 2.2~mW/$\mathrm{cm^2}$ and 1.6~mW/$\mathrm{cm^2}$ peak intensity, respectively. The pumping efficiency is limited by atom exchange, spin exchange collisions and other decoherence processes, the non-parallel effective field and pump polarization at the sensitive field offset $B^+_{\mathrm{sense}}$ where $\theta=\pi/4$, and the synchronized pump duty cycle, which is a compromise between effective power and achieving momentary alignment between effective field and polarization. The resulting spin state can be characterized spectroscopically~\cite{Julsgaard}, and here we used a stroboscopic version of microwave spectroscopy to probe the dressed atomic states~\cite{Sinuco}. The experimental lower estimate confirmed that more than 75\% of the atomic population is pumped into a mixture of the dressed states $\ket{F=2,m=\pm2}$. Details of this method will be published elsewhere. The imperfect pumping efficiency reduces the overall signal strength due to a reduction of the atomic alignment but may also influence collisional dynamics at sufficiently high atomic densities. \subsection{\label{subsec:detection}Signal detection} Immediately after the state preparation process we couple a counter-propagating probe pulse along $z$ to measure the resultant Voigt rotation. The probe is detuned by $-550$~MHz from the $\ket{F=2}\rightarrow \ket{F'=1}$ transition of the D1 line. It has a Gaussian profile of 3.4~mm diameter ($1/e^2$) and 2.6~mW/$\mathrm{cm^2}$ peak intensity and linear polarization set to 45\textdegree\ with respect to the rf field.
\begin{figure*}[t] \begin{overpic}[width=\textwidth]{Fig9.png} \put (2,40) {a)} \put (54,40) {b)} \put (0,18) {\rotatebox{90}{Im($m_1$) (mV$\sqrt{\mathrm{s}}$)}} \put (21,1) {Re($m_1$) (mV$\sqrt{\mathrm{s}}$)} \put (27,32.5) {\rotatebox{90}{\scriptsize{Re($m_2$) (mV$\sqrt{\mathrm{s}}$)}}} \put (26.5,10.5) {\rotatebox{90}{\scriptsize{Im($m_1$) (mV$\sqrt{\mathrm{s}}$)}}} \put (31.5,7.5) {{\scriptsize{Re($m_1$) (mV$\sqrt{\mathrm{s}}$)}}} \put (51,16) {\rotatebox{90}{Re($m_2$) (mV$\sqrt{\mathrm{s}}$)}} \put (60,3.5) {\rotatebox{-8.5}{Im($m_1$) (mV$\sqrt{\mathrm{s}}$)}} \put (83,1) {\rotatebox{25}{Re($m_1$) (mV$\sqrt{\mathrm{s}}$)}} \put (79,29) {\rotatebox{90}{\scriptsize{Re($m_2$) (mV$\sqrt{\mathrm{s}}$)}}} \put (83,27.5) {\rotatebox{-8.5}{\scriptsize{Im($m_1$) }}} \put (88,24.5) {\scriptsize{(mV$\sqrt{\mathrm{s}}$)}} \put (94,26.2) {\rotatebox{26}{\scriptsize{Re($m_1$) }}} \end{overpic} \caption{Mapping of the OPM response to external fields at 5 kHz radio frequency dressing. (a) Quadratures of the first harmonic signal as a function of raster scanned transverse fields $B_{x,y}$ ranging over $\approx\pm180$~nT for constant $B_z$. Each colour represents a different $B_z$ field ranging over $\approx\pm0.3$~nT. The top inset shows the location of the $B_z$ field with respect to the resonant signal at $2\omega$. (b) Inclusion of the second harmonic signal produces oviforms in the three-dimensional representation and demonstrates the full vector mapping. The behaviour in the linear regime for small field perturbations is shown in the insets, with visible photon shot noise $g_\mathrm{el}\sqrt{S_y}\approx 2.5\times10^{-4}~\mathrm{mV}\sqrt{\mathrm{s}}$. We attribute deviations from the ideal profiles to geometric misalignment between the pump/probe beams and static and rf fields. The asymmetric distortion increases for lower bias fields, consistent with imperfect orthogonality between static field coils and their alignment with the direction of the probe beam. 
} \label{fig:3d_plot} \end{figure*} The interaction with the dressed atomic medium results in modulated, elliptical polarization of the probe. Figure~\ref{fig:raw_data} shows the typical temporal trace of the balanced detector signal during one cycle together with its spectrum. The main contributions to the rf signal are found at frequency $2\omega=10$~kHz and a weaker signal at the dressing frequency $\omega=5$~kHz due to the presence of transverse fields. As can be seen in Fig.~\ref{fig:raw_data}(a), the atomic signal decays due to finite state lifetime. Generally, this is limited by the atom-wall and atom-atom collision rates. In our case, the exchange of the atoms between the main cell body and the stem with the Rb reservoir is the major contributing factor to the relaxation rate. In principle, high quality anti-relaxation coatings together with a lockable stem system can be used to achieve coherence lifetimes in excess of 60~s \cite{balabas}. Absorption of the probe beam introduces additional decay and additional optical pumping, which broadens the $2\omega$ resonance profile and alters the response at $\omega$ to transverse fields. We choose the combination of probe power and detuning by optimizing the slope of the $2\omega$-signal with respect to the external field strength. We evaluate this scale factor as the ratio of height $A_z$ and width $\Gamma/2$ of the resonance peak, both indicated in Fig.~\ref{fig:opm_1f2fmodes}(b). The dependence on probe detuning is shown in Fig.~\ref{fig:opm_1f2fmodes}(d). The maximal response to longitudinal fields is found away from the Doppler broadened absorption lines, where the signal responses to small changes of the two orthogonal transverse field components are also close to maximal, as shown in Fig.~\ref{fig:opm_1f2fmodes}(c), and show the predicted $\pi/2$ relative phase shift. Closer to the resonances, we observe different and non-orthogonal responses.
The detected raw signals are digitally demodulated to extract the three-dimensional field vector information. We calculate complex temporal mode amplitudes $m_k$ for the first and second harmonic ($k=1,2$) by taking the scalar product of the signal $u(t)=g_\mathrm{el}S'_z(t)$ with exponentially decaying, normalised mode functions, leading to the definition \begin{equation} m_k=\int_{t_1}^{t_2}e^{(-ik\omega-\gamma)t}u(t)dt/\sqrt{\int_{t_1}^{t_2}e^{-2\gamma t}dt},\label{eq:mode_amplitudes} \end{equation} which covers the interval of the probe pulse between times $t_1$ and $t_2$ and matches the atomic response. To first order and for appropriately adjusted phase, the quadratures, i.e.\ the real and imaginary parts of $m_1$, reflect the external field components $B_x$ and $B_y$, while the real part of $m_2$ is sensitive to $B_z$. The signals are phase-locked to the rf driving field, but they acquire additional electronic phase-shifts. Therefore, we first adjust the demodulation phase for the second harmonic signal by scanning the longitudinal $B_z$ field whilst the transverse fields are set to zero. We adjust the phase such that the real and imaginary quadratures of the mode amplitude $m_2$ produce a symmetric and a dispersive profile, respectively. Any additional phase entering the first harmonic is equivalent to a rotation of the field coordinate system about the longitudinal axis. Here, we scan the transverse fields over a small range, with the longitudinal field adjusted to the sensitive field point $B_z\approx B^+_{\mathrm{sense}}$, to identify two orthogonal quadratures with the $x,y$-transverse field coils by minimizing their crosstalk. As shown in Fig.~\ref{fig:opm_1f2fmodes}(a), both quadratures of the signal amplitude $m_1$ follow a dispersive profile. Fig.~\ref{fig:opm_1f2fmodes}(b) shows the resonant response of the amplitude $\mathrm{Re}(m_2)$.
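A discrete version of the demodulation in Eq.~(\ref{eq:mode_amplitudes}) can be sketched as follows; all signal parameters here are hypothetical, not the experimental values:

```python
import numpy as np

fs = 250e3                      # assumed sample rate (Hz)
w = 2 * np.pi * 5e3             # dressing frequency omega (rad/s)
gamma = 300.0                   # assumed atomic decay rate (1/s)
t = np.arange(0, 3e-3, 1 / fs)  # 3 ms probe window

# synthetic polarimeter signal: decaying first and second harmonics plus noise
rng = np.random.default_rng(1)
u = (np.exp(-gamma * t) * (2e-3 * np.cos(w * t + 0.3) + 5e-3 * np.cos(2 * w * t))
     + 1e-5 * rng.standard_normal(t.size))

# normalisation, the denominator of Eq. (mode_amplitudes)
norm = np.sqrt(np.sum(np.exp(-2 * gamma * t)) / fs)

def mode_amplitude(k):
    """Project u(t) onto the decaying k-th harmonic mode function."""
    return np.sum(np.exp((-1j * k * w - gamma) * t) * u) / fs / norm

m1, m2 = mode_amplitude(1), mode_amplitude(2)
# the recovered phases and amplitudes match the injected harmonics
assert abs(np.angle(m1) - 0.3) < 0.1
assert abs(abs(m1) - 1e-3 * norm) < 1e-5
assert abs(m2.real - 2.5e-3 * norm) < 1e-5
```

The decaying mode function weights early, high-signal samples more strongly, matching the atomic response rather than a plain Fourier component.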
This is consistent with the theoretical model described in Section~\ref{sec:theory} and with the experimental results for cold atoms in Section~\ref{sec:fieldmapping}. In contrast to the double resonance magnetometer described in Ref.~\cite{Weis1}, where the quadratures of the first harmonic present a resonant and a dispersive profile with respect to the applied static field, the quadratures in Fig.~\ref{fig:opm_1f2fmodes}(a) are both dispersive while one quadrature of the second harmonic shows a resonant profile. All three signals become most sensitive to orthogonal external field components at the same offset field. The conversion of measured signals into magnetic field values relies on a two-step calibration procedure. First, the static field coils are characterised with two independent methods, before the signal scale factor for each field direction is determined by applying a range of known fields to the magnetometer. The static field coils together with their electronic drivers are calibrated using the known field dependence of the Larmor resonance for $^{87}$Rb. For a set of fixed radio-frequencies $\omega$, the longitudinal field $B_z$ is scanned with no transverse fields present to find the maximal response of the $2\omega$ signal. Under this condition, the field is determined by known parameters according to $B_z=B_\mathrm{res}=\hbar\omega/\mu_B|g_F|$. The resonant field can be easily calculated and plotted against the applied voltage/current of the coils, giving the field conversion. The presence of transverse fields changes the resonance condition to $\sqrt{B_z^2+B_{x,y}^2}=B_\mathrm{res}=\hbar\omega/\mu_B|g_F|$. Thus, to obtain the calibration for the transverse fields, we change one of the transverse fields whilst keeping the other one at zero and sweep the $B_z$ field to obtain a new location of the $2\omega$ resonance. As before, the new resonance location is evaluated as a function of the control voltage/current of the coils.
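The resonance condition used in this calibration is easily evaluated numerically; a minimal sketch with standard constants (the $|g_F|$ value is the approximate ground-state value, neglecting the nuclear term):

```python
# Coil calibration via the Larmor resonance, B_res = hbar*omega/(mu_B*|g_F|).
h = 6.62607015e-34       # Planck constant (J s)
mu_B = 9.2740100783e-24  # Bohr magneton (J/T)
g_F = 0.5                # approximate |g_F| for the 87Rb F=2 ground state

def B_res_gauss(f_rf):
    """Resonant static field (G) for an rf frequency f_rf (Hz)."""
    return h * f_rf / (mu_B * g_F) * 1e4  # 1 T = 1e4 G

# 180 kHz dressing (cold atom setup) corresponds to ~0.257 G, consistent
# with the quoted offset field of ~260 mG; 5 kHz (vapour cell) gives ~7 mG.
assert abs(B_res_gauss(180e3) - 0.257) < 0.002
assert abs(B_res_gauss(5e3) - 7.14e-3) < 1e-4
```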
In addition to this procedure, we confirm the coil calibrations using a commercial fluxgate magnetometer (Stefan Mayer Instruments, FLC3-70). Finally, the signal scale factors are measured by applying a linear field ramp of a well known range to each of the fields independently. For small fields, the corresponding demodulated signal responses show linear relationships, see Eqs.~(\ref{Re1f})-(\ref{Re2f}), which we use to calibrate the signal-to-field conversion. \subsection{\label{subsec:3dmapping}3D vector mapping} Following the same procedure as in the cold atom case in Section~\ref{sec:fieldmapping}, we measured the three components of the field by setting the static field $B_z=B^+_\mathrm{sense}$, which maximizes the mode amplitudes at $\omega$. By linearly scanning the external transverse fields and demodulating the $\omega$ and $2\omega$ quadratures, we are able to map the magnetometer response. The vector magnetometer operation can be visualized on the 3D plot shown in Fig.~\ref{fig:3d_plot}. The full mathematical description of the oviform plot can be found in Appendix~\ref{sec:faraday}. Note that the theoretical results are based on the assumption of a pure atomic state. The model does not account for decoherence due to atomic collisions and various broadening effects (e.g.\ gradient fields, light power). Nevertheless, the experimental results are in reasonable agreement with the expected 3-dimensional response, see Eqs.~(\ref{Re1f})-(\ref{Re2f}). As can be seen in Fig.~\ref{fig:3d_plot}, the 2D and 3D oviform profiles arising from magnetic field scans show some asymmetric distortion and an offset, which arise from geometrical misalignment of the probe/pump beams relative to the static fields and/or the rf dressing field, as confirmed by tests with deliberately larger misalignment. The insets show the magnetometer response in the linear regime for small external fields.
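The linear-regime mapping and its inversion can be sketched directly from Eqs.~(\ref{Re1f})-(\ref{Re2f}); operating-point values below are hypothetical:

```python
import numpy as np

# hypothetical operating point at B_offs = B_res + B_rf/2 (units of G)
B_offs, B_rf = 7.2e-3, 2e-4

def quadratures(Bx, By, dBz):
    """Second-order signal quadratures (hx, hy, hz), Eqs. (Re1f)-(Re2f)."""
    s = 1 / B_offs - dBz / B_offs**2
    hx = +s * Bx
    hy = -s * By
    hz = (-0.5 + dBz / B_rf - (dBz / B_rf)**2
          + (2 * B_offs + B_rf) / (4 * B_offs**2 * B_rf) * Bx**2
          + (B_offs + B_rf) / (2 * B_offs**2 * B_rf) * By**2)
    return hx, hy, hz

def invert(hx, hy, hz):
    """First-order field estimate from the measured quadratures."""
    return B_offs * hx, -B_offs * hy, B_rf * (hz + 0.5)

# round trip for a small hypothetical field perturbation
Bx, By, dBz = 1e-6, -1e-6, 1e-7
est = invert(*quadratures(Bx, By, dBz))
assert np.allclose(est, (Bx, By, dBz), rtol=1e-2, atol=0)
```

The residual error of the first-order inversion is set by the quadratic terms, illustrating why the non-linearity grows with the ratio of transverse field to offset field.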
Despite the theoretical simplifications, our model effectively describes the vector magnetometer response to the external magnetic fields. We expect that the distortion effects can be reduced by accurate alignment between the probe/pump beams, the dressing field, and the small static offset fields. Operation at a higher dressing frequency and thus a larger offset field leads to more accurate alignment of the field across the atomic ensemble; however, it reduces the sensitivity to the transverse fields, as described in the following section. The sensing range for the longitudinal field is determined by the width of the resonance, whilst the response to transverse fields and its non-linearity are determined by their ratio to the offset field. In principle, the range of operation can be extended and maximal sensitivity maintained by placing the magnetometer into a closed-loop system. \subsection{\label{subsec:noise}Noise performance} To perform the noise measurements, we detune the static $B_z$ field to $B_z=B_\mathrm{sense}^+$, which optimizes the magnetometer sensitivity for all three components. We then adjust the transverse fields such that the first harmonic signal vanishes, i.e.\ the noise measurements are done near the apex of the middle (blue) ovoid in Fig.~\ref{fig:3d_plot}(b). \begin{figure}[ht!]\centering \hspace*{-0.3cm} \begin{overpic}[width=\textwidth/2]{Fig10_a.png} \put(3,53){a)} \end{overpic} \hspace*{-0.3cm} \begin{overpic}[width=\textwidth/2]{Fig10_b.png} \put(3,53){b)} \end{overpic} \hspace*{-0.3cm} \begin{overpic}[width=\textwidth/2]{Fig10_c.png} \put(3,53){c)} \end{overpic} \caption{OPM noise at 36$^{\circ}$C vapour cell temperature in a shielded environment at 5 kHz radio frequency dressing. Noise floor values are estimated for the range 10--62.5~Hz. Panels (a) and (b) show the noise performance for the two orthogonal transverse fields.
The light noise in one of the signal quadratures shows phase-locked low-frequency fluctuations caused by cross talk between the rf generation and detection paths. Panel (c) shows the noise performance for the longitudinal field component. The light noise levels (photon shot noise) are obtained with a far-detuned probe laser and disabled pump/repump lasers. The electronic noise is recorded without probe light and no rf field present. The calibration of field-equivalent noise amplitudes includes an $\approx$5\% drop of the low-pass frequency response function, which is predominantly determined by the mode function entering Eq.~(\ref{eq:mode_amplitudes}).} \label{fig:opm_noise} \end{figure} \begin{figure}[h]\centering\hspace*{-0.5cm} \begin{overpic}[width=\textwidth/2]{Fig11_a.png} \put(5,43){a)} \end{overpic} \hspace*{-0.5cm} \begin{overpic}[width=\textwidth/2]{Fig11_b.png} \put(5,43){b)} \end{overpic} \caption{Dependence of the noise performance. (a) OPM noise as a function of the vapour cell temperature. (b) OPM noise as a function of the radio frequency of the dressing field.} \label{fig:opm_noise_vs_RF} \end{figure} Based on the field calibrations described above, we record the field-equivalent signal noise for the three field components over $\approx16$~s (2048 cycles at 125~Hz). Figure~\ref{fig:opm_noise} shows the spectral noise performance for the two quadratures at $\omega$ and the in-phase quadrature at $2\omega$. At a 5~kHz rf dressing frequency and a temperature of $36^\circ$C, the magnetometer operates with an average noise level of $\approx2.2$~pT$/\sqrt{\mathrm{Hz}}$ for the transverse fields over the range of 10--62.5~Hz, dominated by photon shot noise. Longitudinal fields can be measured with a sensitivity of 0.4~pT$/\sqrt{\mathrm{Hz}}$.
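The conversion from a demodulated time series to such a noise floor can be sketched as follows. This is an illustrative reconstruction with synthetic white noise at the quoted transverse-field level and our own periodogram normalization, not the analysis code used for the figures:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 125.0            # cycle (sampling) rate, Hz; Nyquist frequency 62.5 Hz
n = 2048              # samples, ~16 s of data
level = 2.2e-12       # assumed white field noise, T/sqrt(Hz)

# synthetic field-equivalent time series with the quoted noise level:
# a white series of std sigma has single-sided PSD 2*sigma^2/fs
x = rng.normal(0.0, level * np.sqrt(fs / 2.0), n)

# single-sided amplitude spectral density in T/sqrt(Hz)
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, 1.0 / fs)
asd = np.sqrt(2.0 * np.abs(X) ** 2 / (fs * n))

# rms noise floor over the quoted 10-62.5 Hz band
band = (freqs >= 10.0) & (freqs <= 62.5)
floor = np.sqrt(np.mean(asd[band] ** 2))
```

Averaging the periodogram over the band recovers the injected level to within a few percent, which is the sense in which the noise-floor values in the figures are band averages.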
The dominant constraint on the noise level is the short coherence time of the cell ($\tau\approx 2$~ms), which is limited by the quality of the paraffin coating and the exchange of atoms between the main cell body and the stem with the Rb reservoir. Typically, paraffin or OTS coated cells have coherence times ranging from 30~ms to 300~ms~\cite{seltzer,coating}. A longer coherence time would improve the field sensitivity of the OPM due to a larger fraction of atoms remaining in the field-sensitive state. In addition, a higher quality paraffin coating would also shorten the pump/repump pulse time needed to (re-)prepare the stretched states, allowing for an increased cycle rate and thus higher bandwidth, as well as a higher duty cycle and thus reduced aliasing of magnetic field noise. We have investigated the effects of heating the cell to increase the atomic density, which should in principle improve the sensitivity by increasing the signal strength. However, as shown in Fig.~\ref{fig:opm_noise_vs_RF}(a), the signal-to-noise ratio already saturates at temperatures of approximately $32^\circ$C for the transverse fields and at even lower temperatures for the longitudinal fields. Initially, increasing signal amplitudes lead to better sensitivity, especially for the transverse fields where the signals are closer to photon shot noise. However, additional atomic processes such as resonance broadening limit the performance at higher temperature, where the figure of merit saturates. A similar saturation effect was previously observed in Ref.~\cite{pustelny}, where higher atomic concentrations lead to an increase of the collisional and surface relaxation rates, depolarizing the prepared state. The sensitivity of the OPM to transverse fields does not only depend on the shape of the resonance, but also on the chosen dressing frequency, because the corresponding signals arise from the geometric rotation of the static field.
The rotation angle and consequently the signal strength increase for smaller offset fields, see Eqs.~(\ref{Re1f}) and (\ref{Im1f}). The resulting linear dependence of sensitivity on dressing frequency is shown in Fig.~\ref{fig:opm_noise_vs_RF}(b). Over the range of 40~kHz to 2.5~kHz the transverse field noise performance varies by a factor of four. This strategy is limited by the linewidth of the rf resonance and other factors, such as the required precision of alignment and the increased susceptibility to magnetic field gradients distorting the oviform mapping. \section{\label{sec:Conclusions}Conclusions} We have presented and successfully demonstrated a full vector magnetometer based on the Voigt effect in both cold atom and hot vapour setups. As shown, our scheme has the advantage of requiring only a single optical axis for state preparation and detection, making it ideal for compact magnetic field sensors. We have achieved pT/$\sqrt{\mathrm{Hz}}$ sensitivity with a $62.5$~Hz bandwidth. Our current limitations in the sensitivity of the OPM stem from the coherence time of the cell and the low atom number. Future improvements will include a heated and buffer gas filled cell in order to increase the atom number and reduce the rate of atomic collisions that induce decoherence effects, respectively. These improvements should shorten the state preparation time, thus increasing the bandwidth and the field sensitivity of the OPM. In principle, placing the cell in an optical resonator may be used to increase the interaction path between the light and the atoms, thus further improving the sensitivity. The datasets generated for this paper are accessible at the Nottingham Research Data Management Repository~\cite{data}. \section{Acknowledgements} This work was funded by the Engineering and Physical Sciences Research Council (Grant No.\ EP/M013294/1). We acknowledge the support from the School of Physics \& Astronomy Engineering and Electronics workshops.
We thank Sindhu Jammi for collecting the cold atoms data. We thank Konstantinos Poulios and Kasper Jensen for useful discussions.
\section{Introduction} \label{sec:intro} Renormalization group (RG) methods play an important role in {\it ab initio} nuclear theory by extending the range of many computational methods and improving their convergence patterns \cite{Bogner:2003wn, Bogner:2009bt, Furnstahl:2012fn}. There are numerous RG methods that have been successfully applied to nuclear few- and many-body systems in recent years~\cite{Bogner:2009bt}. While the details differ, all such methods decouple low- and high-momentum degrees of freedom in a manner that leaves low-energy observables invariant. In this paper, we will denote the momentum scale at which this decoupling occurs by $\Lambda$. In methods such as the Lee-Suzuki-Okubo similarity transformation method or the related $V_{{\rm low}\,k}$ approach, $\Lambda$ is a floating cutoff beyond which high-momentum states have been integrated out~\cite{Epelbaum:1998na, Bogner:2001gq,Bogner:2006vp}. For other methods, such as the similarity renormalization group (SRG) approach, $\Lambda$ gives a measure of how band-diagonal the Hamiltonian is in momentum space~\cite{Bogner:2006pc}. In all cases, $\Lambda$ serves as a ``resolution scale'' since dynamics above and below this scale are effectively decoupled~\cite{Bogner:2009bt,Jurgenson:2007td}. We emphasize that while observable quantities (such as cross sections) do not change, the physics interpretation can (and generally does) change with resolution. It is a common misconception that at low resolution one is unable to describe phenomena that, at high resolution, are associated with the high-momentum components of low-energy wave functions. A prototypical example is the $(e,e'p)$ process at large momentum transfers, where theoretical analyses relate such experiments to nuclear momentum distributions if the impulse approximation is assumed valid for a high-cutoff interaction \cite{Frankfurt:2008zv}.
Calculations find nearly universal scaling of the high-momentum tails, which is interpreted in terms of short-range correlations in the nuclear wave functions~\cite{Pieper:1992gr}. Naively it might be thought that this physics is beyond the reach of low-momentum approaches, for which wave functions have drastically reduced short-range correlations. However, this is not the case: the experimental cross section is unchanged if the corresponding operator is consistently evolved under the RG, even if the evolved wave function has almost no high-momentum strength. The formal relationship between an operator $\hat{O}^{\Lambda_0}$ at an initial high-resolution scale $\Lambda_0$, and the consistently-evolved effective operator $\hat{O}^{\Lambda}$ at the low-resolution scale $\Lambda$ is \emph{defined} by \begin{equation} \label{eq:Oeff} \bra{\psi_n^{\Lambda_0}}\hat{O}^{\Lambda_0}\ket{\psi_n^{\Lambda_0}} = \bra{\psi_n^{\Lambda}}\hat{O}^{\Lambda}\ket{\psi_n^{\Lambda}}\,. \end{equation} In the $(e,e'p)$ example, one might worry that the consistent evolution embodied by Eq.~\ref{eq:Oeff} is computationally intractable because the evolved momentum occupation operator might be too complicated in practice (e.g., strong non-localities and sizable many-body components, etc.). In Ref.~\cite{Anderson:2010aq}, some of these questions were addressed by examining the consistent SRG evolution of various operators, including momentum distributions and electromagnetic form factors in the deuteron. There, all operators were found to flow to smooth, low-momentum forms, exhibiting many of the same simplifications as RG-evolved interactions. More interestingly, under certain kinematic conditions it was found that operator expectation values exhibit \emph{factorization}, which provides a clean separation of long- and short-distance physics and an alternative interpretation of the universal high-momentum dependence and scaling behavior~\cite{Anderson:2010aq}. 
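The content of Eq.~\ref{eq:Oeff} is simply that a change of resolution scale reparametrizes states and operators together, so that matrix elements are preserved. A minimal numerical sketch of this invariance, with random Hermitian matrices standing in for $H$ and $\hat{O}$ and an arbitrary orthogonal matrix standing in for an SRG-type unitary transformation (all toy choices, not the nuclear problem):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6

def sym(m):
    """Symmetrize a random matrix to get a Hermitian (real symmetric) one."""
    return (m + m.T) / 2

H0 = sym(rng.normal(size=(d, d)))   # "bare" Hamiltonian at scale Lambda_0
O0 = sym(rng.normal(size=(d, d)))   # operator at the same scale

# an arbitrary orthogonal matrix plays the role of the RG transformation
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
H1 = U @ H0 @ U.T                   # evolved Hamiltonian
O1 = U @ O0 @ U.T                   # consistently evolved operator

E0, psi0 = np.linalg.eigh(H0)
E1, psi1 = np.linalg.eigh(H1)

# spectra agree, and expectation values match state by state
assert np.allclose(E0, E1)
for n in range(d):
    x0 = psi0[:, n] @ O0 @ psi0[:, n]
    x1 = psi1[:, n] @ O1 @ psi1[:, n]
    assert abs(x0 - x1) < 1e-10
```

The non-trivial physics questions discussed in the text are therefore not about Eq.~\ref{eq:Oeff} itself, but about whether the evolved operator $\hat{O}^{\Lambda}$ remains simple enough to use in practice.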
The proof of factorization presented here and in Ref.~\cite{Anderson:2010aq} follows straightforwardly from decoupling and the separation of scales, and is reminiscent of the operator product expansion (OPE) in quantum field theory. The OPE was developed for the evaluation of singular products of local field operators at small separation~\cite{Wilson:1969zs,Wilson:1972ee}. The utility of the OPE rests on factorization; short-distance details decouple from long-distance dynamics. Factorization enables one, for example, to separate the momentum and distance scales in hard-scattering processes in terms of perturbative QCD and parton distribution functions. While the methods used in the present paper share several similarities with the OPE, a precise connection has not yet been made. One key difference is that, in the framework of a local quantum field theory, the OPE gives a controlled expansion since the dependence of the Wilson coefficients on the separation $\p{r}$ is fixed by the scaling dimensions of the corresponding local operators. In the present paper, however, we work in the general domain of non-relativistic quantum mechanics (i.e., no assumption of a local QFT). Therefore, we cannot make precise statements about the scaling behavior of terms when we expand Fock space operators at one resolution scale $\Lambda$ in terms of the corresponding operators at another scale $\Lambda_0 \ge \Lambda$. While the factorization formulas in the present context are not as controlled as those derived in a local quantum field theory using the OPE, they nevertheless provide tools that let us parameterize the high-momentum components of operators which would normally require degrees of freedom we do not retain. We can, for example, build effective few-body operators containing state-independent functions of high momenta that can be measured directly in few-body experiments. These operators can then be employed to make predictions for $A$-body systems. 
In this paper, we generalize previous developments~\cite{Anderson:2010aq} to derive scaling relations for the high-momentum tails of momentum distributions and static structure factors in arbitrary low-energy $A$-body states. In both instances, we find that the expectation value of the corresponding operator factorizes into the product of a universal function associated with high-momentum (short-distance) physics, and a state-dependent number associated with low-momentum (long-distance) structure. The outline of the rest of this paper is as follows: In Section~\ref{sec:twobody}, we review the proof from Ref.~\cite{Anderson:2010aq} that expectation values of high-momentum probes factorize in the $A=2$ system. In Section~\ref{sec:Abodyfac}, we recast the discussion of factorization in a second-quantized language and use it to derive universal scaling relations for momentum distributions and static structure factors in general $A$-body systems. As a test of these relations, in Section~\ref{sec:examples} we apply them to two well-studied many-body systems, the unitary Fermi gas and the electron gas, to reproduce known expressions for the asymptotic tails of the momentum distributions and static structure factors of each system. Our conclusions are summarized in Section~\ref{sec:summary}, and several technical details are relegated to the Appendices. \section{Factorization in the two-body system} \label{sec:twobody} In Ref.~\cite{Anderson:2010aq}, Anderson {\it et al.} applied renormalization group methods to the two-body problem to show that high-momentum components of low-energy wave functions factorize into the product of a state-independent function of momentum, and a state-dependent number that is sensitive only to low-momentum physics.
Using this wave function factorization, it is straightforward to show that expectation values of operators that probe high-momentum modes similarly factorize into a state-independent piece that encodes the high-momentum physics and depends on the particular operator, and a state-dependent number that depends only on the low-momentum structure of the state and is identical for all high-momentum operators~\cite{Anderson:2010aq}. As we will show in Section~\ref{sec:Abodyfac}, the factorization formulas of Ref.~\cite{Anderson:2010aq} generalize to arbitrary low-energy $A$-body systems, allowing us to derive scaling relations for the high-momentum tails of momentum distributions and static structure factors. Since the simple factorization in the $A=2$ system is the starting point to derive analogous relations for general low-energy $A$-body states, we begin by reviewing the salient points from Ref.~\cite{Anderson:2010aq}. \subsection{Wave function factorization} Renormalization group transformations simplify nuclear few- and many-body calculations by decoupling low- and high-momentum degrees of freedom while leaving low-energy observables unchanged~\cite{Bogner:2003wn,Bogner:2009bt,Furnstahl:2012fn}. In Ref.~\cite{Anderson:2010aq}, the analysis was done in the context of similarity renormalization group (SRG) transformations, where the resolution scale $\Lambda$ provides a measure of how band-diagonal the evolved interaction is in momentum space\footnote{Using the SRG transformation of Ref.~\cite{Anderson:2010aq}, the evolved potential goes as $\frac{V^{\Lambda}(k',k)}{V^{\infty}(k',k)}\sim \exp\left(-\frac{(k'^2-k^2)^2}{\Lambda^4}\right)$.}. However, for our present analysis we do not have to be very specific about the details of the particular RG implementation. All we require is that momentum modes above and below $\Lambda$ are effectively decoupled by the given transformation.
In the center-of-mass frame of the two-body system, this implies that the low-energy states ($|E_n| \lesssim \Lambda^2$) are localized in the low-momentum subspace\footnote{For SRG transformations, the high-momentum components of the evolved wave functions are exponentially suppressed as $\exp( -q^4/\Lambda^4)$. The decoupling is exact for RG transformations employing a sharp cutoff $\Lambda$. } \begin{eqnarray} \mathcal{P}_{\Lambda}|\psi^{\Lambda}_n\rangle &\approx &|\psi^{\Lambda}_n\rangle \qquad \mathcal{Q}_{\Lambda}|\psi^{\Lambda}_n\rangle \approx 0\,, \end{eqnarray} where the projection operators $\mathcal{P}_{\Lambda}$ and $\mathcal{Q}_{\Lambda}$ are defined as \begin{equation} \mathcal{P}_{\Lambda} = \int^{\Lambda}_{0}\! \frac{d^3p}{(2\pi)^3} \ \vert \p{p} \rangle\langle \p{p} \vert \quad {\rm and} \quad \mathcal{Q}_{\Lambda} = \int^{\infty}_{\Lambda}\! \frac{d^3 q}{(2\pi)^3} \ \vert \p{q} \rangle\langle \p{q}\vert\,. \end{equation} Starting from the unevolved Schr\"odinger equation written in block-matrix form % \begin{equation} \begin{pmatrix} \mathcal{P}_{\Lambda}H_{\infty}\mathcal{P}_{\Lambda} & \mathcal{P}_{\Lambda}H_{\infty}\mathcal{Q}_{\Lambda} \\ \mathcal{Q}_{\Lambda}H_{\infty}\mathcal{P}_{\Lambda} & \mathcal{Q}_{\Lambda}H_{\infty}\mathcal{Q}_{\Lambda} \\ \end{pmatrix} \begin{pmatrix} \mathcal{P}_{\Lambda}\psi^{\infty}_{\alpha} \\ \mathcal{Q}_{\Lambda}\psi^{\infty}_{\alpha} \\ \end{pmatrix} = E_{\alpha}\begin{pmatrix}\mathcal{P}_{\Lambda}\psi^{\infty}_{\alpha} \\ \mathcal{Q}_{\Lambda}\psi^{\infty}_{\alpha} \\ \end{pmatrix} \;, \end{equation} we can solve for the high-momentum projection of any eigenstate as \begin{eqnarray} \mathcal{Q}_{\Lambda}\left\vert \psi^{\infty}_{\alpha}\right> &=& (E_{\alpha}-\mathcal{Q}_{\Lambda}H_{\infty}\mathcal{Q}_{\Lambda})^{-1} \mathcal{Q}_{\Lambda}H_{\infty}\mathcal{P}_{\Lambda} \mathcal{P}_{\Lambda}\left\vert \psi^{\infty}_{\alpha} \right\rangle \nonumber \\ &=& (E_{\alpha}
-\mathcal{Q}_{\Lambda}H_{\infty}\mathcal{Q}_{\Lambda})^{-1}\mathcal{Q}_{\Lambda}V_{\infty} \mathcal{P}_{\Lambda}\left\vert \psi^{\infty}_{\alpha} \right\rangle \;, \label{eq:Qpsi} \end{eqnarray} where we have used \((\mathcal{P}_{\Lambda})^{2}=\mathcal{P}_{\Lambda}\), \(H_{\infty}=T+V_{\infty}\), and \(\mathcal{Q}_{\Lambda}T\mathcal{P}_{\Lambda}=0\). For low-energy states \( \psi^{\infty}_{\alpha}\) such that \(|E_{\alpha}|\ll {\rm Min}[|E_{\mathcal{Q} H\mathcal{Q} }|]\sim \Lambda^2\) (where \(E_{\mathcal{Q} H\mathcal{Q} }\) are the eigenvalues of \(\mathcal{Q}_{\Lambda}H_{\infty}\mathcal{Q}_{\Lambda}\)), we can neglect the \(E_{\alpha}\) dependence in Eq.~\ref{eq:Qpsi} % \begin{equation} \psi^{\infty}_{\alpha}(\p{q})\,\approx\, -\int^{\infty}_{\Lambda}\!d\tilde q'\,\int^{\Lambda}_{0}\!d\tilde p\, \left\langle \p{q}\right\vert \frac{1}{\mathcal{Q}_{\Lambda}H_{\infty}\mathcal{Q}_{\Lambda}}\left| \p{q}' \right> V_{\infty}(\p{q'},\p{p})\,\psi^{\infty}_{\alpha}(\p{p})\,, \end{equation} where we've introduced the abbreviation $d\tilde q \equiv \frac{d^3q}{(2\pi)^3}$. Assuming for simplicity that $\psi^{\infty}_{\alpha}$ is an S-wave state and that the potential \(V_{\infty}(\p{q}',\p{p})\) is slowly varying with respect to $\p{p}$ compared to \( \psi^{\infty}_{\alpha}(\p{p})\) in the region \(p<\Lambda\) and \(q'\gg\Lambda\), we can factorize the low- and high-momentum physics by expanding% \begin{eqnarray}\nonumber \int^{\Lambda}_{0} \! d\tilde p \, V_{\infty}(\p{q}',\p{p})\psi^{\infty}_{\alpha} (\p{p}) &\approx& V_{\infty}(\p{q}',\p{p}')\vert_{\p{p}'=0}\times \int^{\Lambda}_{0}\! d\tilde p \,\psi^{\infty}_{\alpha} (\p{p}) \\ && \null + \left.\frac{1}{2}\frac{d^{2}}{dp^{'2}}V_{\infty}(\p{q}',\p{p}') \right\vert_{\p{p}'=0}\times \int^{\Lambda}_{0}\! d\tilde p\,p^{2}\,\psi^{\infty}_{\alpha} (\p{p})\,+\cdots\,, \label{eq:Taylor} \end{eqnarray} which gives \begin{eqnarray} \psi^{\infty}_{\alpha}(\p{q}) &\approx&\,\gamma(\p{q};\Lambda)\, \int^{\Lambda}_{0}\! 
d\tilde p\,\psi^{\infty}_{\alpha}(\p{p})\,\,+ \,\,\eta(\p{q};\Lambda)\, \int^{\Lambda}_{0}\! d\tilde p\,p^2\,\psi^{\infty}_{\alpha}(\p{p})\,+\, \ldots\,, \end{eqnarray} where the state-independent functions that carry the $\p{q}$-dependence are defined as \begin{equation} \gamma(\p{q};\Lambda)\equiv-\int^{\infty}_{\Lambda}d\tilde{q}'\, \left\langle \p{q}\right\vert \frac{1}{\mathcal{Q}_{\Lambda}H_{\infty}\mathcal{Q}_{\Lambda}}\left| \p{q}' \right> V_{\infty}(\p{q}',\p{0}) \;, \label{eq:gammalambda} \end{equation} \begin{equation} \eta(\p{q};\Lambda)\equiv-\frac{1}{2}\,\int^{\infty}_{\Lambda}d\tilde{q}'\, \left\langle \p{q}\right\vert \frac{1}{\mathcal{Q}_{\Lambda}H_{\infty}\mathcal{Q}_{\Lambda}}\left| \p{q}' \right> \left.\frac{d^{2}}{dp^{'2}}V_{\infty}(\p{q}',\p{p}') \right\vert_{\p{p}'=0} \;. \label{eq:etalambda} \end{equation} It is known empirically~\cite{Bogner:2009bt,Lepage:1997cs} that the low-momentum projections of the low-energy eigenstates of the bare and evolved Hamiltonians are related by a wave function renormalization factor \(\mathcal{P}_{\Lambda}\left\vert \psi^{\infty}_{\alpha} \right\rangle\approx Z_{\Lambda}\left\vert \psi^{\Lambda}_\alpha \right\rangle\), which reflects the fact that RG evolution does not modify long-distance physics. 
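Both the exact $\mathcal{Q}$-space relation, Eq.~\ref{eq:Qpsi}, and the resulting leading-order factorization of the high-momentum tail can be checked numerically in a toy model. The sketch below uses a weak separable Gaussian interaction on a discretized 1D momentum grid; this is our own illustrative construction, not the interaction or code of Ref.~\cite{Anderson:2010aq}:

```python
import numpy as np

# 1D momentum-grid toy model (arbitrary units); the weak separable Gaussian
# interaction below is an illustrative assumption, chosen to vary slowly
# over momenta below Lambda
p = np.arange(0.01, 10.0, 0.05)
dp = 0.05
Lam = 2.0
low, high = p < Lam, p >= Lam

f = np.exp(-(p / 6.0) ** 2)
V = -0.01 * np.outer(f, f) * dp      # V(p, p') with integration measure
H = np.diag(p ** 2) + V
E, psi = np.linalg.eigh(H)

# (i) exact Q-space relation:
#     Q psi = (E - QHQ)^(-1) Q V P psi   (here Q T P = 0, so QHP = QVP)
a = int(np.argmin(np.abs(E)))        # a low-energy state, |E_a| << Lambda^2
w = psi[:, a]
QHQ = H[np.ix_(high, high)]
QVP = V[np.ix_(high, low)]
rhs = np.linalg.solve(E[a] * np.eye(high.sum()) - QHQ, QVP @ w[low])
assert np.allclose(w[high], rhs, atol=1e-10)

# (ii) leading-order factorization: the high-momentum tails of different
# low-energy states share one state-independent shape gamma(q; Lambda)
i0, i1 = np.argsort(np.abs(E))[:2]
t0 = psi[high, i0] / np.linalg.norm(psi[high, i0])
t1 = psi[high, i1] / np.linalg.norm(psi[high, i1])
assert abs(t0 @ t1) > 0.995
```

The second check makes the main point of this subsection concrete: the normalized high-momentum tails of distinct low-energy states are essentially parallel, so only an overall state-dependent coefficient distinguishes them.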
Using that \begin{eqnarray} \int^{\Lambda}_{0}\!d\tilde p\, \psi^{\infty}_{\alpha}(\p{p}) &\approx & Z_{\Lambda}\int^{\Lambda}_{0}\!d\tilde p\, \psi^{\Lambda}_\alpha(\p{p}) = \left.Z_{\Lambda}\,\psi^{\Lambda}_\alpha(\p{r})\right\vert_{\p{r}=0}\,\equiv \,Z_{\Lambda}\,\psi^{\Lambda}_\alpha(0) \\ \int^{\Lambda}_{0}\!d\tilde p\, p^2\,\psi^{\infty}_{\alpha}(\p{p}) &\approx & Z_{\Lambda}\int^{\Lambda}_{0}\!d\tilde p\, p^2\,\psi^{\Lambda}_\alpha(\p{p}) = -\left.Z_{\Lambda}\,\nabla^2\psi^{\Lambda}_\alpha(\p{r})\right\vert_{\p{r}=0}\,\equiv\,-Z_{\Lambda}\,\nabla^2\psi^{\Lambda}_\alpha(0)\, , \end{eqnarray} we obtain the momentum space version of Lepage's non-relativistic operator product expansion~\cite{Lepage:1997cs} relating the short-distance structure of the unevolved or ``bare'' wave functions to those of the low-energy effective theory \begin{equation} \psi^{\infty}_{\alpha}(\p{q}) \approx \,\gamma(\p{q};\Lambda) Z_{\Lambda}\psi^{\Lambda}_\alpha ( 0)-\eta(\p{q};\Lambda) Z_{\Lambda}\,\nabla^2\psi^{\Lambda}_\alpha (0)+\cdots\,. \label{eq:OPE} \end{equation} If we keep only the leading term in the expansion \begin{equation} \psi^{\infty}_{\alpha}(\p{q}) \approx \,\gamma(\p{q};\Lambda) Z_{\Lambda}\psi^{\Lambda}_\alpha ( 0)\,= \gamma(\p{q};\Lambda) Z_{\Lambda}\int^{\Lambda}_0\! d\tilde p\,\psi^{\Lambda}_\alpha(\p{p})\,, \label{eq:OPE_LO} \end{equation} we see that the high-momentum components of the low-energy eigenstates are factorized into a state-independent function \(\gamma(\p{q};\Lambda)\), which summarizes the short-distance behavior of the wave function, and a state-dependent coefficient that probes the low-momentum structure of the state. \subsection{Effective operators and factorization} \label{sub:EffOp2b} Given the wave function factorization in Eq.~\ref{eq:OPE_LO}, we can now derive analogous factorization formulas for expectation values of general operators in the $A=2$ system. 
Consider the expectation value of an operator $\wh{O}$ in a low-energy eigenstate of the unevolved Hamiltonian \begin{eqnarray} \langle \psi^{\infty}_{\alpha}|\wh{O}|\psi^{\infty}_{\alpha}\rangle &=& \int_0^\Lambda\!\!d\tilde p \int_0^\Lambda\!\!d\tilde p' \,\psi^{\infty *}_{\alpha}(\p{p})O(\p{p},\p{p}')\psi^{\infty}_{\alpha}(\p{p}')\,+\, \int_0^\Lambda\!\!d\tilde p \int_\Lambda^\infty\!\!d\tilde q\, \psi^{\infty *}_{\alpha}(\p{p})O(\p{p},\p{q})\psi^{\infty}_{\alpha}(\p{q}) \nonumber \\ &+& \int_\Lambda^\infty\!\!d\tilde q \int_0^\Lambda\!\!d\tilde p\, \psi^{\infty *}_{\alpha}(\p{q})O(\p{q},\p{p})\psi^{\infty}_{\alpha}(\p{p})\,+\, \int_\Lambda^\infty\!\!d\tilde q \int_\Lambda^\infty\!\!d\tilde q'\, \psi^{\infty *}_{\alpha}(\p{q})O(\p{q},\p{q}')\psi^{\infty}_{\alpha}(\p{q}')\,, \nonumber\\ \end{eqnarray} where we have explicitly separated the low- and high-momentum integrals in forming the matrix element. Next, we insert Eq.~\ref{eq:OPE_LO} and $\psi^{\infty}_{\alpha}(\p{p})\approx Z_{\Lambda}\psi^{\Lambda}_\alpha(\p{p})$ for the high- and low-momentum components of $\psi^{\infty}_{\alpha}$, respectively. Since the matrix elements $O(\p{p},\p{q})$ and $O(\p{q},\p{p})$ involve well-separated momenta, we perform a Taylor expansion about $\p{p}=0$ and keep only the leading term giving \begin{eqnarray} \langle \psi^{\infty}_{\alpha}|\wh{O}|\psi^{\infty}_{\alpha}\rangle &\approx& Z^2_{\Lambda}\,\int^{\Lambda}_{0}\!\!d\tilde p \int^{\Lambda}_{0}\!\!d\tilde p' \,\psi^{\Lambda *}_\alpha(\p{p})O(\p{p},\p{p}')\psi^{\Lambda}_\alpha(\p{p}')\nonumber\\ &+&\,2Z^2_{\Lambda}|\psi^{\Lambda}_\alpha(0)|^2 \int^{\infty}_{\Lambda}\!\!d\tilde q\, O(0,\p{q})\gamma(\p{q};\Lambda) \nonumber \\ &+& Z^2_{\Lambda}|\psi^{\Lambda}_\alpha(0)|^2\,\int^{\infty}_{\Lambda}\!\!d\tilde q \int^{\infty}_{\Lambda}\!\!d\tilde q'\, \gamma^*(\p{q};\Lambda)O(\p{q},\p{q}')\gamma(\p{q}';\Lambda)\,. 
\nonumber\\ \end{eqnarray} Since the evolved wave functions $\psi^{\Lambda}_\alpha(\p{k})$ have vanishing or exponentially suppressed support for $\p{k}>\Lambda$, we can re-write this as \begin{equation} \label{eq:Oeff1} \langle \psi^{\infty}_{\alpha}|\wh{O}|\psi^{\infty}_{\alpha}\rangle \,\approx\, Z^2_{\Lambda}\langle \psi^{\Lambda}_\alpha|\wh{O}|\psi^{\Lambda}_\alpha\rangle + g^{(0)}(\Lambda)\,\langle \psi^{\Lambda}_\alpha|\delta^{(3)}(\p{r})|\psi^{\Lambda}_\alpha\rangle\,, \end{equation} where the coupling $g^{(0)}(\Lambda)$ is defined as \begin{eqnarray} g^{(0)}(\Lambda)&\equiv& 2Z^2_{\Lambda}\,\int^{\infty}_{\Lambda}\!\!d\tilde q\, O(0,\p{q})\gamma(\p{q};\Lambda) \,\nonumber\\&&\qquad+\,Z^2_{\Lambda}\,\int^{\infty}_{\Lambda}\!\!d\tilde q \int^{\infty}_{\Lambda}\!\!d\tilde q'\, \gamma^*(\p{q};\Lambda)O(\p{q},\p{q}')\gamma(\p{q}';\Lambda)\,. \label{eq:g0} \end{eqnarray} Recalling that the consistently evolved effective operator is defined by \begin{equation} \langle \psi^{\infty}_{\alpha}|\wh{O}|\psi^{\infty}_{\alpha}\rangle \,\equiv \, \langle \psi^{\Lambda}_\alpha|\wh{O}_{\Lambda}|\psi^{\Lambda}_\alpha\rangle\,, \end{equation} we see from Eq.~\ref{eq:Oeff1} that \begin{equation} \wh{O}_{\Lambda}\, \approx\, Z^2_{\Lambda}\,\wh{O} \,+\, g^{(0)}(\Lambda)\,\delta^{(3)}(\p{r})\,+\,\ldots\,, \end{equation} where the ``$\ldots$'' contains higher derivatives of delta functions that arise from the gradient terms in Eq.~\ref{eq:OPE} as well as higher-order terms in the expansion of $O(\p{q},\p{p})$ about $\p{p}=0$. In this way, we see that the RG-evolved operators take on a universal form; the effects of the integrated-out high-momentum modes are absorbed in a rescaling of the unevolved operator at the initial resolution scale, plus a series of local, state-independent corrections that take the form of a derivative expansion with $\Lambda$-dependent couplings~\cite{Lepage:1997cs,Felline:2003mi}. 
As stressed by Lepage~\cite{Lepage:1997cs}, the universal form of these local corrections is analogous to the multipole expansion in classical electromagnetism; just as multipole moments may be calculated from an underlying theory (e.g., the true charge and current densities) or extracted from a finite number of experimental data, the same holds true for the couplings $Z_{\Lambda}$, $g^{(0)}(\Lambda)$, etc. Let us now consider the implications of Eq.~\ref{eq:Oeff1} for operators that predominantly probe high-momentum components of low-energy states. Since such operators have negligible strength at low momenta, $\mathcal{P}_{\Lambda}\wh{O}\,\mathcal{P}_{\Lambda}\approx 0$, the first term in Eq.~\ref{eq:Oeff1} vanishes, leaving \begin{equation} \langle \psi^{\infty}_{\alpha}|\wh{O}|\psi^{\infty}_{\alpha}\rangle \,\approx\, g^{(0)}(\Lambda)\,\langle \psi^{\Lambda}_\alpha|\delta^{(3)}(\p{r})|\psi^{\Lambda}_\alpha\rangle\,. \label{eq:factorizedOp} \end{equation} Therefore, the expectation value of {\it any} operator that probes the high-momentum structure of low-energy states factorizes into a state-independent piece, $g^{(0)}(\Lambda)$, that depends on the particular high-momentum operator via Eq.~\ref{eq:g0}, times a state-dependent number, $\langle \psi^{\Lambda}_\alpha|\delta^{(3)}(\p{r})|\psi^{\Lambda}_\alpha\rangle$, that is the same for any high-momentum $\wh{O}$, and is only sensitive to the low-momentum structure of the state since $\mathcal{P}_{\Lambda}|\psi^{\infty}_{\alpha}\rangle \approx Z_{\Lambda}|\psi^{\Lambda}_\alpha\rangle$. The momentum distribution $\hat{n}_{\p{q}} = a^{\dagger}_{\p{q}}a_{\p{q}}$ for $\p{q}\gg \Lambda$ is a prototypical example of an operator that is sensitive to the high-momentum structure of wave functions.
Since $\hat{n}_{\p{q}} = |\p{q}\rangle\langle \p{q}|$ for the $A=2$ system in the center-of-mass frame, Eq.~\ref{eq:factorizedOp} becomes \begin{equation} \label{eq:nqdeut} \langle\psi^{\infty}_{\alpha}|\hat{n}_{\p{q}}|\psi^{\infty}_{\alpha}\rangle\, \approx\, \gamma^2(\p{q};\Lambda)\,Z^2_{\Lambda}\, |\psi^{\Lambda}_\alpha(0)|^2\,. \end{equation} We see that momentum distributions in {\it all} low-energy states ($|E_{\alpha}|\lesssim \Lambda^2$) in the $A=2$ system share the same $\p{q}$-dependence for $\p{q}\gtrsim\Lambda$. In fact, we will find in the following section that the factorization formula, Eq.~\ref{eq:nqdeut}, generalizes to arbitrary $A$-body systems. \section{Factorization in the $A$-body system } \label{sec:Abodyfac} \subsection{Evolved creation and annihilation operators} \label{sub:2ndquant} In order to proceed beyond the $A=2$ system, it is convenient to recast the results from the previous section in a second-quantized language. As a first step, we examine how the Fock space creation and annihilation operators evolve under RG transformations. Suppressing non-essential spin and isospin indices, the transformed operators can be expanded on the original operator basis as \begin{equation} \label{eq:adagger_lambda} a^{(\Lambda)\dagger}_\p{q} = a^\dagger_\p{q}+\sum\limits_{\p{k}_1,\p{k}_2}C^{\Lambda}_\p{q}(\p{k}_1,\p{k}_2)a^\dagger_{\p{k}_1}a^\dagger_{\p{k}_2}a_{\p{k}_1+\p{k}_2-\p{q}}\,+\,\ldots\,\equiv\,a^\dagger_\p{q} + \delta a^{(\Lambda)\dagger}_\p{q}\, \end{equation} where the ``$\ldots$'' contains higher-rank terms that are generated ($a^{\dagger}a^{\dagger}a^{\dagger} a a$, etc.) when the RG evolution is carried out beyond the two-body level, i.e., when induced 3- and higher-body interactions in $H^{\Lambda}$ are not truncated during the flow. 
Note that the form of the coupling function $C^{\Lambda}_{\p{q}}(\p{k}_1,\p{k}_2)$ can be constrained further by boosting both sides of Eq.~\ref{eq:adagger_lambda} and using Galilean invariance to write \begin{eqnarray} a^{(\Lambda)\dagger}_\p{q-P} &=& a^\dagger_\p{q-P}+\sum\limits_{\p{k}_1,\p{k}_2}C^{\Lambda}_\p{q}(\p{k}_1,\p{k}_2)a^\dagger_{\p{k}_1-\p{P}}a^\dagger_{\p{k}_2-\p{P}}a_{\p{k}_1+\p{k}_2-\p{q}-\p{P}}\nonumber \\ &=&a^\dagger_\p{q-P}+\sum\limits_{\p{k}_1,\p{k}_2}C^{\Lambda}_\p{q}(\p{k}_1+\p{P},\p{k}_2+\p{P})a^\dagger_{\p{k}_1}a^\dagger_{\p{k}_2}a_{\p{k}_1+\p{k}_2-\p{q}+\p{P}}\, \nonumber\\ &=&a^\dagger_\p{q-P}+\sum\limits_{\p{k}_1,\p{k}_2}C^{\Lambda}_\p{q-P}(\p{k}_1,\p{k}_2)a^\dagger_{\p{k}_1}a^\dagger_{\p{k}_2}a_{\p{k}_1+\p{k}_2-\p{q}+\p{P}}\,, \end{eqnarray} which implies \begin{equation} \label{eq:Galilean} C^{\Lambda}_\p{q}(\p{k}_1+\p{P},\p{k}_2+\p{P}) \,=\, C^{\Lambda}_\p{q-P}(\p{k}_1,\p{k}_2)\,. \end{equation} In the following, we restrict our attention to the leading non-trivial term in Eq.~\ref{eq:adagger_lambda}. This corresponds to neglecting induced three- and higher-body interactions in $H^{\Lambda}$ since the coefficient function $C^{\Lambda}_{\p{q}}(\p{k}_1,\p{k}_2)$ is uniquely determined from the RG evolution in the two-body system. 
This can be seen by considering the following matrix element between the zero-particle vacuum (which does not evolve under the RG) and a two-body eigenstate in the bare and evolved theories \begin{eqnarray} \langle \psi^{\infty}_{\alpha}|a^{\dagger}_{\frac{\p{P}}{2}+\p{p}}a^{\dagger}_{\frac{\p{P}}{2}-\p{p}}|0\rangle\,&=&\, \langle\psi^{\Lambda}_\alpha|a^{(\Lambda)\dagger}_{\frac{\p{P}}{2}+\p{p}}a^{(\Lambda)\dagger}_{\frac{\p{P}}{2}-\p{p}}|0\rangle\,\nonumber\\ &=& \langle\psi^{\Lambda}_\alpha|a^{\dagger}_{\frac{\p{P}}{2}+\p{p}}a^{\dagger}_{\frac{\p{P}}{2}-\p{p}}|0\rangle + \langle\psi^{\Lambda}_\alpha|\delta a^{(\Lambda)\dagger}_{\frac{\p{P}}{2}+\p{p}}a^{\dagger}_{\frac{\p{P}}{2}-\p{p}}|0\rangle \nonumber \\ &=& \langle\psi^{\Lambda}_\alpha|a^{\dagger}_{\frac{\p{P}}{2}+\p{p}}a^{\dagger}_{\frac{\p{P}}{2}-\p{p}}|0\rangle \,+\,\sum\limits_{\p{k}}C^{\Lambda}_{\p{P}/2+\p{p}}(\p{P}/2+\p{k},\p{P}/2-\p{k})\,\langle\psi^{\Lambda}_\alpha |a^\dagger_{\frac{\p{P}}{2}+\p{k}}a^\dagger_{\frac{\p{P}}{2}-\p{k}}|0\rangle\nonumber\\ &=& \langle\psi^{\Lambda}_\alpha|a^{\dagger}_{\frac{\p{P}}{2}+\p{p}}a^{\dagger}_{\frac{\p{P}}{2}-\p{p}}|0\rangle \,+\,\sum\limits_{\p{k}}C^{\Lambda}_{\p{p}}(\p{k},-\p{k})\,\langle\psi^{\Lambda}_\alpha |a^\dagger_{\frac{\p{P}}{2}+\p{k}}a^\dagger_{\frac{\p{P}}{2}-\p{k}}|0\rangle\,, \end{eqnarray} where $\delta a^{(\Lambda)\dagger}_{\p{q}}\ket{0}=0$ was used in the second line and Eq.~\ref{eq:Galilean} was used in the last step. Since the dependence on the COM momentum $\p{P}$ cancels on both sides, we are left with \begin{equation} \psi^{\infty *}_{\alpha}(\p{p}) = \psi^{\Lambda *}_\alpha(\p{p}) + \sum\limits_{\p{k}} C^{\Lambda}_{\p{p}}(\p{k},-\p{k})\,\psi^{\Lambda *}_\alpha(\p{k})\,, \end{equation} which can be inverted using the completeness of the $\{\psi^{\Lambda}_\alpha\}$ to give\footnote{It is helpful to think of the SRG or the Lee-Suzuki-Okubo similarity transformation method, where the decoupling of low- and high-momentum modes is accomplished by a unitary transformation.
By unitarity, one has $\sum\limits_{\alpha}\ket{\psi^{\infty}_{\alpha}}\bra{\psi^{\infty}_{\alpha}} = \mathds{1} = \sum\limits_{\alpha} \ket{\psi^{\Lambda}_\alpha}\bra{\psi^{\Lambda}_\alpha}$, where the sum over $\alpha$ is unrestricted.} \begin{equation} \label{eq:W} C^{\Lambda}_{\p{p}}(\p{k},-\p{k}) = \sum\limits_{\alpha} \langle\p{k}|\psi^{\Lambda}_\alpha\rangle\langle\psi^{\infty}_{\alpha}|\p{p}\rangle - \delta_{\p{k},\p{p}}\,. \end{equation} One can use the results of Section~\ref{sec:twobody} to evaluate two important limiting cases of Eq.~\ref{eq:W} that will prove useful below. First, consider $C^{\Lambda}_{\p{p}}(\p{p}',-\p{p}')$ for $p,p' \lesssim \Lambda$. Using $\mathcal{P}_{\Lambda}|\psi^{\infty}_{\alpha}\rangle \approx Z_{\Lambda} |\psi^{\Lambda}_\alpha\rangle$ for low-energy states and $\mathcal{P}_{\Lambda}|\psi^{\Lambda}_\alpha\rangle \approx 0$ for $|E_{\alpha}|\gtrsim \Lambda^2$, Eq.~\ref{eq:W} becomes \begin{eqnarray} \label{eq:Cpp} C^{\Lambda}_{\p{p}}(\p{p}',-\p{p}')\,&\approx & Z_{\Lambda}\sum\limits_{|E_\alpha|\lesssim\Lambda^2} \langle\p{p}'|\psi^{\Lambda}_\alpha\rangle\langle\psi^{\Lambda}_\alpha|\p{p}\rangle - \delta_{\p{p}',\p{p}}\nonumber\\ &\approx & \bigl(Z_{\Lambda}-1\bigr)\,\delta_{\p{p}',\p{p}}\,. \end{eqnarray} In the last step, we used that the low-energy evolved eigenstates span the low-momentum subspace due to decoupling \begin{equation} \label{eq:approxP} \mathcal{P}_{\Lambda} = \sum\limits_{\p{p}\leq \Lambda}|\p{p}\rangle\langle\p{p}| \,\approx\, \sum\limits_{|E_\alpha|\lesssim\Lambda^2}|\psi^{\Lambda}_\alpha\rangle\langle\psi^{\Lambda}_\alpha|\,. \end{equation} The other important limiting case is $C^{\Lambda}_{\p{q}}(\p{p},-\p{p})$ for $p \lesssim \Lambda$ and $q\gtrsim \Lambda$. 
Inserting Eq.~\ref{eq:OPE_LO} into Eq.~\ref{eq:W} then gives \begin{eqnarray} \label{eq:Cqp} C^{\Lambda}_{\p{q}}(\p{p},-\p{p}) &\approx& Z_{\Lambda}\gamma(q;\Lambda)\sum\limits_{|E_\alpha|\lesssim\Lambda^2} \langle\p{p}|\psi^{\Lambda}_\alpha\rangle\langle\psi^{\Lambda}_\alpha|\p{r=0}\rangle\nonumber\\ &\approx & Z_{\Lambda}\gamma(q;\Lambda)\,, \end{eqnarray} where Eq.~\ref{eq:approxP} was used in the final step. \subsection{Factorization for momentum distributions} In Eq.~\ref{eq:nqdeut}, we found that the expectation value of the momentum distribution in low-energy two-body states factorizes for large $q\gtrsim \Lambda$. Using the second-quantized formulation of Section~\ref{sub:2ndquant}, we will now show that a similar factorization occurs for general low-energy $A$-body states. We begin by considering the expectation value of the consistently evolved momentum distribution operator for $q\gg \Lambda$ in an arbitrary low-energy $A$-body state \begin{eqnarray} \label{eq:momdist} n_{\p{q}}&=& \langle\psi^{\infty}_{\alpha_{,A}}|a^{\dagger}_{\p{q}}a_{\p{q}}|\psi^{\infty}_{\alpha_{,A}}\rangle = \langle\psi^{\Lambda}_{\alpha_{,A}}|[a^{\dagger}_{\p{q}}a_{\p{q}}]^{(\Lambda)}|\psi^{\Lambda}_{\alpha_{,A}}\rangle \nonumber\\ &=& \langle\psi^{\Lambda}_{\alpha_{,A}}|\bigl\{a^{\dagger}_{\p{q}}a_{\p{q}}+\delta a^{\!(\Lambda)\dagger}_{\p{q}}a_{\p{q}} + a^{\dagger}_{\p{q}}\delta a^{\!(\Lambda)}_{\p{q}} + \delta a^{\!(\Lambda)\dagger}_{\p{q}}\delta a^{\!(\Lambda)}_{\p{q}}\bigr\} |\psi^{\Lambda}_{\alpha_{,A}}\rangle\,.
\end{eqnarray} This is an exact equality provided that a) the evolved Hamiltonian $H^{\Lambda}$ includes all induced $3\,$-, $4\,$-,$\,\ldots\,A$-body interactions generated by the RG evolution, b) all higher-order terms for $\delta a^{(\Lambda)\dagger}$ and $\delta a^{(\Lambda)}$ in Eq.~\ref{eq:adagger_lambda} are included, and c) all possible $1\,$-, $2\,$-, $\ldots$, $A$-body operators generated by the terms in the curly brackets are kept\footnote{An $A$-body operator is defined as a normal-ordered string of $A$ $a^{\dagger}$'s and $A$ $a$'s.}. However, since we are only interested in the high-momentum tail of Eq.~\ref{eq:momdist}, and since one expects induced 3-body and higher operators contributing to $H^{\Lambda}$ and $[a^{\dagger}_{\p{q}}a_{\p{q}}]^{(\Lambda)}$ to be subleading so long as one doesn't evolve too low in $\Lambda$~\cite{Bogner:2009bt}, we will neglect them. In what follows, we assume $\Lambda$ is of the same order as the physical momentum scales that characterize $\psi^{\Lambda}_{\alpha_{,A}}$ (e.g., the Fermi momentum, $k_F$, for homogeneous systems, $\sqrt{m\omega/\hbar}$ for harmonically trapped systems, etc.). Due to decoupling, the low-energy states $\psi^{\Lambda}_{\alpha_{,A}}$ have vanishingly small support at high momentum. Therefore, any term in $[a^{\dagger}_{\p{q}}a_{\p{q}}]^{(\Lambda)}$ that annihilates a high-momentum particle from $|\psi^{\Lambda}_{\alpha_{,A}}\rangle$ or $\langle\psi^{\Lambda}_{\alpha_{,A}}|$ will be suppressed.
In this limit, we find that Eq.~\ref{eq:momdist} becomes \begin{eqnarray} n_\p{q}&\approx& \bra{\psi^{\Lambda}_{\alpha_{,A}}}\delta a^{(\Lambda)\dagger}_\p{q} \delta a^{(\Lambda)}_\p{q} \ket{\psi^{\Lambda}_{\alpha_{,A}}} \nonumber \\ &=& \sum\limits_{\p{k,k',K,K'}}C^{\Lambda}_{\p{q}}\left(\frac{\p{K}}{2}+\p{k},\frac{\p{K}}{2}-\p{k}\right)C^\Lambda_{\p{q}}\left(\frac{\p{K}'}{2}+\p{k'},\frac{\p{K}'}{2}-\p{k'}\right)\nonumber\\ &&\qquad\qquad\qquad\times\, \bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k}}a^\dagger_{\frac{\p{K}}{2}-\p{k}}a_{\p{K-q}}a^\dagger_{\p{K'-q}}a_{\frac{\p{K'}}{2}+\p{k'}}a_{\frac{\p{K'}}{2}-\p{k'}}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,\nonumber\\[.1in] &=&\sum\limits_{\p{k,k',K}}C^{\Lambda}_{\p{q}}\left(\frac{\p{K}}{2}+\p{k},\frac{\p{K}}{2}-\p{k}\right)C^{\Lambda}_{\p{q}}\left(\frac{\p{K}}{2}+\p{k'},\frac{\p{K}}{2}-\p{k'}\right)\bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k}}a^\dagger_{\frac{\p{K}}{2}-\p{k}}a_{\frac{\p{K}}{2}+\p{k'}}a_{\frac{\p{K}}{2}-\p{k'}}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\nonumber\\ &=& \sum\limits_{\p{k,k',K}}C^{\Lambda}_{\p{q}-\frac{\p{K}}{2}}\left(\p{k},-\p{k}\right)C^{\Lambda}_{\p{q}-\frac{\p{K}}{2}}\left(\p{k}',-\p{k}'\right)\bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k}}a^\dagger_{\frac{\p{K}}{2}-\p{k}}a_{\frac{\p{K}}{2}+\p{k'}}a_{\frac{\p{K}}{2}-\p{k'}}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,, \end{eqnarray} where we have anti-commuted $a_{\p{K}-\p{q}}$ to the right and dropped the normal-ordered three-body term in going from the second to third line. The low-momentum nature of $\psi^{\Lambda}_{\alpha_{,A}}$ implies that the dominant terms in the sum are for $|\p{K}/2 \pm\p{k}|$ and $|\p{K}/2 \pm\p{k}'| \lesssim \Lambda$.
Consequently, we have a mismatch of scales $|\p{q}-\p{K}/2| \gg |\p{k}|,|\p{k}'|$, which together with Eq.~\ref{eq:Cqp} gives \begin{eqnarray} \label{eq:momdistfacA} n_{\p{q}}&\approx &Z_{\Lambda}^2 \sum\limits_{\p{k,k',K}} \gamma^2(\p{q}-\p{K}/2;\Lambda) \, \bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k}}a^\dagger_{\frac{\p{K}}{2}-\p{k}}a_{\frac{\p{K}}{2}+\p{k'}}a_{\frac{\p{K}}{2}-\p{k'}}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\nonumber\\ &\approx &Z_{\Lambda}^2 \gamma^2(\p{q};\Lambda) \, \sum\limits_{\p{k,k',K}}\bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k}}a^\dagger_{\frac{\p{K}}{2}-\p{k}}a_{\frac{\p{K}}{2}+\p{k'}}a_{\frac{\p{K}}{2}-\p{k'}}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,, \end{eqnarray} where we've used $q\gg K/2$ in the last step\footnote{For systems where $\gamma(\p{q};\Lambda)$ exhibits a power-law decay, the corrections for non-zero $K$ do not modify the power-law tail of $n_{\p{q}}$.}. In this way, we see the large-$q$ tails of momentum distributions for arbitrary low-energy $A$-body states share the same universal $q$-dependence. In nuclear physics, Eq.~\ref{eq:momdistfacA} provides an alternative to the usual explanations based on short-range correlations~\cite{Frankfurt:2008zv,Frankfurt:2009vv} as to why calculated momentum distributions in various nuclei and nuclear matter scale with each other at large $q$. In Section~\ref{sec:examples}, we will use Eq.~\ref{eq:momdistfacA} and the analogous expression, Eq.~\ref{eq:staticEff2}, to reproduce known asymptotic expressions for the momentum distributions and static structure factors for two well-studied many-body systems, the unitary Fermi gas and the electron gas. \subsection{Factorization for static structure factors} The static structure factor is an important quantity that contains information about density-density correlations in a many-body system. 
For a many-body system of fermions with two spin states, the correlations between the densities of the two spin states are particularly important. The corresponding static structure factor $S_{\uparrow\!\downarrow}(\p{q})$ for a homogeneous system is the Fourier transform in the relative coordinate $\p{r}_1-\p{r}_2$ of the density correlator $\langle\psi^{\infty}_{\alpha_{,A}}| \rho_{\uparrow}(\p{r}_1)\rho_{\downarrow}(\p{r}_2)|\psi^{\infty}_{\alpha_{,A}}\rangle$. Using similar arguments as for the momentum distribution, we now show that at large momentum $S_{\uparrow\!\downarrow}(\p{q})$ factorizes into a universal function of $\p{q}$ times a matrix element of a delta function in the evolved low-momentum wave functions. Starting from the definition of $S_{\uparrow\!\downarrow}(\p{q})$ in the unevolved theory \begin{eqnarray} \label{StrucBas} S_{\uparrow\!\downarrow}(\p{q})&=&\bra{\psi^{\infty}_{\alpha_{,A}}}\rho_\uparrow^\dagger(\p{q})\rho_{\downarrow}(\p{q})\ket{\psi^{\infty}_{\alpha_{,A}}} = \sum\limits_{\p{p,p'}}\bra{\psi^{\infty}_{\alpha_{,A}}}a^\dagger_{\p{p},\uparrow}a_{\p{p+q},\uparrow}a^\dagger_{\p{p'+q},\downarrow}a_{\p{p'},\downarrow}\ket{\psi^{\infty}_{\alpha_{,A}}} \nonumber \\ &=& \sum\limits_{\p{p,p'}}\bra{\psi^{\infty}_{\alpha_{,A}}}a^{\dagger}_{\p{p'+q},\downarrow}a^\dagger_{\p{p},\uparrow} a_{\p{p+q},\uparrow}a_{\p{p'},\downarrow}\ket{\psi^{\infty}_{\alpha_{,A}}}\nonumber \\ &\equiv& \bra{\psi^{\infty}_{\alpha_{,A}}}\widehat{S}_{\uparrow\!\downarrow}(\p{q})\ket{\psi^{\infty}_{\alpha_{,A}}}\,, \end{eqnarray} we consider the expectation value of the consistently evolved operator $\widehat{S}_{\uparrow\!\downarrow}(\p{q};\Lambda)$ in the evolved wave functions. 
Using Eq.~\ref{eq:adagger_lambda} for the evolved creation/annihilation operators, we have \begin{eqnarray} \label{eq:staticEff} \bra{\psi^{\infty}_{\alpha_{,A}}}\widehat{S}_{\uparrow\!\downarrow}(\p{q})\ket{\psi^{\infty}_{\alpha_{,A}}} &=& \bra{\psi^{\Lambda}_{\alpha_{,A}}}\widehat{S}_{\uparrow\!\downarrow}(\p{q};\Lambda)\ket{\psi^{\Lambda}_{\alpha_{,A}}} \equiv \bra{\psi^{\Lambda}_{\alpha_{,A}}}\bigl(\widehat{S}_{\uparrow\!\downarrow}(\p{q})+\delta\widehat{S}_{\uparrow\!\downarrow}^{\Lambda}(\p{q})\bigr)\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,. \end{eqnarray} This is an exact relation only if all induced many-body operators (up to rank-$A$ for the $A$-body system) are kept in $H^{\Lambda}$ and $\delta\widehat{S}^{\Lambda}(\p{q})$. As with our analysis of the momentum distribution, we neglect these many-body contributions by a) restricting the expansion of $a^{(\Lambda)\dagger}$ and $a^{(\Lambda)}$ to the leading terms shown in Eq.~\ref{eq:adagger_lambda} and, b) truncating $\delta\widehat{S}^{\Lambda}(\p{q})$ to two-body operators \begin{eqnarray} \delta\widehat{S}_{\uparrow\!\downarrow}^{\Lambda}(\p{q}) &\approx& \sum\limits_{\p{K,k,k'}}C^{\Lambda}_{\p{q}+\p{k}'}(\p{k},-\p{k})\,a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\downarrow}a_{\frac{\p{K}}{2}-\p{k'},\uparrow} \,+\, \rm{h.c.}\nonumber\\ &+&\,\sum\limits_{\p{P,K,k,k'}}C^{\Lambda}_{\p{P+q-\frac{K}{2}}}(\p{k},-\p{k})C^{\Lambda}_{\p{P-\frac{K}{2}}}(\p{k}',-\p{k}')\,a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\downarrow}a_{\frac{\p{K}}{2}-\p{k'},\uparrow}\,\nonumber\\ &\equiv& \delta\widehat{S}_{1}^{\Lambda}(\p{q})\,+\, \delta\widehat{S}_{2}^{\Lambda}(\p{q})\,, \end{eqnarray} where $ \delta\widehat{S}_{1(2)}^{\Lambda}(\p{q})$ denotes the terms linear (quadratic) in the expansion coefficients $C^{\Lambda}$. 
With the approximate form of the evolved operator in hand, we can now evaluate Eq.~\ref{eq:staticEff} for $q \gg \Lambda$, where once again $\Lambda$ is assumed to be of the same order as the physical scales that characterize the system. The expectation value of the bare operator, $\widehat{S}_{\uparrow\!\downarrow}(\p{q})$, in the evolved low-momentum wave functions is negligible since it involves the removal of a high-momentum particle. Therefore, we have \begin{equation} \begin{aligned} \label{eq:staticEff1a} \bra{\psi^{\infty}_{\alpha_{,A}}}\widehat{S}_{\uparrow\!\downarrow}(\p{q})\ket{\psi^{\infty}_{\alpha_{,A}}} &\approx\bra{\psi^{\Lambda}_{\alpha_{,A}}}\bigl(\delta\widehat{S}_{1}^{\Lambda}(\p{q})+\delta\widehat{S}_{2}^{\Lambda}(\p{q})\bigr)\ket{\psi^{\Lambda}_{\alpha_{,A}}} \\ &= 2\,\sum\limits_{\p{K,k,k'}}C^{\Lambda}_{\p{q}+\p{k}'}(\p{k},-\p{k})\, \bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\downarrow}a_{\frac{\p{K}}{2}-\p{k'},\uparrow}\ket{\psi^{\Lambda}_{\alpha_{,A}}} \\ + \sum\limits_{\p{P,K,k,k'}}&C^{\Lambda}_{\p{P+q-\frac{K}{2}}}(\p{k},-\p{k})C^{\Lambda}_{\p{P-\frac{K}{2}}}(\p{k}',-\p{k}')\,\bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\downarrow}a_{\frac{\p{K}}{2}-\p{k'},\uparrow}\,\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,. \end{aligned} \end{equation} Due to the low-momentum structure of the evolved wave functions, it is clear that the sums over momenta $\p{K},\p{k},\p{k}'$ are effectively cut off at $\Lambda$, while the summation over $\p{P}$ is unrestricted in the second term of Eq.~\ref{eq:staticEff1a}.
Performing a Taylor series expansion of the coefficient functions in powers of the small momenta $\p{K},\p{k},\p{k}'$ and keeping just the leading term gives \begin{eqnarray} \label{eq:staticEff1} \bra{\psi^{\infty}_{\alpha_{,A}}}\widehat{S}_{\uparrow\!\downarrow}(\p{q})\ket{\psi^{\infty}_{\alpha_{,A}}} &\approx& \Bigl\{2\,C^{\Lambda}_{\p{q}}(0,0) + \sum\limits_{\p{P}}C^{\Lambda}_{\p{P+q}}(0,0)C^{\Lambda}_{\p{P}}(0,0)\,\Bigr\}\nonumber\\ &&\qquad\qquad\qquad\times\quad \sum\limits_{\p{K,k,k'}}\, \bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\downarrow}a_{\frac{\p{K}}{2}-\p{k'},\uparrow}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,.\nonumber\\ \end{eqnarray} To proceed further, we consider the following three regions that arise in the sum over $\p{P}$: \begin{itemize} \item Region I): $|\p{P}+\p{q}| \gtrsim \Lambda$ and $|\p{P}| \gtrsim \Lambda$ \item Region II): $|\p{P}+\p{q}|\lesssim \Lambda$ and $|\p{P}| \gtrsim \Lambda$ \item Region III): $|\p{P}+\p{q}| \gtrsim \Lambda$ and $|\p{P}| \lesssim \Lambda$. \end{itemize} Regions II) and III) are trivial since the $C^{\Lambda}$ coefficients involving all soft momenta give a delta function, Eq.~\ref{eq:Cpp}, that allows the sums to be performed. 
Together with Eq.~\ref{eq:Cqp}, we have \begin{eqnarray} &&\sum\limits_{\p{P},\rm{II}}C^{\Lambda}_{\p{P+q}}(0,0)C^{\Lambda}_{\p{P}}(0,0)\, \approx \sum\limits_{\p{P},\rm{II}}(Z_{\Lambda}-1)\delta_{\p{P},-\p{q}}\,Z_{\Lambda}\,\gamma(\p{P};\Lambda)\,=\,Z_{\Lambda}(Z_{\Lambda}-1)\gamma(\p{q};\Lambda)\nonumber\\ &&\sum\limits_{\p{P},\rm{III}}C^{\Lambda}_{\p{P+q}}(0,0)C^{\Lambda}_{\p{P}}(0,0)\, \approx \sum\limits_{\p{P},\rm{III}}(Z_{\Lambda}-1)\delta_{\p{P},\p{0}}\,Z_{\Lambda}\,\gamma(\p{P+q};\Lambda)\,=\,Z_{\Lambda}(Z_{\Lambda}-1)\gamma(\p{q};\Lambda)\,,\nonumber\\ \end{eqnarray} which gives \begin{eqnarray} \label{eq:staticEff2} \bra{\psi^{\infty}_{\alpha_{,A}}}\widehat{S}_{\uparrow\!\downarrow}(\p{q})\ket{\psi^{\infty}_{\alpha_{,A}}} &\approx& \Bigl\{2\,Z_{\Lambda}^2\gamma(\p{q};\Lambda) + \sum\limits_{\p{P},I}Z_{\Lambda}^2\,\gamma(\p{P+q};\Lambda)\gamma(\p{P};\Lambda)\,\Bigr\}\nonumber\\ &&\qquad\qquad\qquad\times\quad \sum\limits_{\p{K,k,k'}}\, \bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\downarrow}a_{\frac{\p{K}}{2}-\p{k'},\uparrow}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,.\nonumber\\ \end{eqnarray} As with the momentum distribution, we see that the high-momentum tail of the static structure factor in a general low-energy $A$-body state factorizes into a universal function of $\p{q}$, multiplied by a state-dependent matrix element that is controlled entirely by low-momentum physics. \subsection{Factorization for general high-momentum operators} While our explicit proofs of factorization have thus far been limited to the momentum distribution and the static structure factor, the phenomenon is very general and can be qualitatively understood from Eq.~\ref{eq:Oeff}.
Consider an operator at the initial high-resolution scale $\Lambda_0$ that probes high-momentum modes, $\hat{O}^{\Lambda_0}_{\p{q}}$, where the subscript $\p{q}$ indicates that the second-quantized expression involves creation (annihilation) operators that add (remove) a high-momentum particle. We assume $\p{q}$ is much larger than any physical scale that characterizes the low-energy state $\psi^{\Lambda_0}_{n}$, and we also assume $|\p{q}|\ll \Lambda_0$ so that the expectation value of $\hat{O}^{\Lambda_0}_{\p{q}}$ is non-vanishing. Now consider the consistently evolved $\hat{O}^{\Lambda}_{\p{q}}$, where $\Lambda \ll |\p{q}|\ll \Lambda_0$, and expand it as a polynomial in creation/annihilation operators defined at $\Lambda_0$. Schematically, we have \begin{equation} \hat{O}^{\Lambda}_{\p{q}} = \sum_{\alpha} g^{\alpha}_{\p{q}} \hat{A}_{\alpha}\,, \end{equation} where $\hat{A}_{\alpha}$ denotes a normal-ordered string of creation/annihilation operators at $\Lambda_0$, $\alpha$ is a collective index for the different momentum modes being added/removed, and $g^{\alpha}_{\p{q}}$ is a c-number coefficient. Inserting this into Eq.~\ref{eq:Oeff}, we have \begin{equation} \label{eq:facgen} \bra{\psi_n^{\Lambda_0}}\hat{O}^{\Lambda_0}_{\p{q}}\ket{\psi_n^{\Lambda_0}} = \sum_{\alpha}g^{\alpha}_{\p{q}}\,\bra{\psi_n^{\Lambda}}\hat{A}_{\alpha}\ket{\psi_n^{\Lambda}}\,. \end{equation} Due to the low-momentum nature of the evolved wave functions, we find that only the $\hat{A}_{\alpha}$ involving the addition/removal of low-momentum ($\lesssim \Lambda$) modes contribute to Eq.~\ref{eq:facgen}. Since all momentum modes contained in $\alpha$ obey $k_{\alpha}/q \ll 1$, the c-number coefficients can be Taylor-expanded in the soft momenta.
Note that this expansion should be well-defined since the $g^{\alpha}_{\p{q}}$ encode contributions from loop integrals that have both ultraviolet ($\Lambda_0$) and infrared ($\Lambda$) cutoffs in place, thus preventing any singular behavior from arising. In this way, we see that the universal $\p{q}$-dependence factorizes, and the remaining state-dependence is given by matrix elements of low-momentum operators. \section{Examples} \label{sec:examples} As a check of our factorization formulas Eq.~\ref{eq:momdistfacA} and Eq.~\ref{eq:staticEff2}, we use them to reproduce known expressions for the high-momentum tails of $n_{\p{q}}$ and $S_{\uparrow\!\downarrow}(\p{q})$ for two well-studied systems, the unitary Fermi gas (UFG) and the electron gas. \subsection{Unitary Fermi gas} \subsubsection{Momentum distribution} In the case of a unitary Fermi gas described by a contact interaction, the coefficient $\gamma(\p{q};\Lambda)$, and hence the large momentum tails of $n_{\p{q}}$ and $S(\p{q})$, can be calculated analytically. Consider the two-body Hamiltonian with a spin-independent contact interaction \begin{equation} \hat{H}_\infty =\hat{T} + \hat{V_\delta} =\frac{\hat{\p{p}}^2}{2m}+\frac{g(\Lambda_0)}{2m}\delta^{(3)}(\p{r})\,,\end{equation} where $\Lambda_0$ is the ultraviolet cutoff on all momenta of the theory. Here, we assume that $\Lambda_0$ is much larger than any relevant low-energy scales in the problem such as the inverse scattering length or the Fermi momentum. The coupling constant $g(\Lambda_0)$ is determined by matching the scattering amplitude at threshold to the $S$-wave scattering length $a$ and is given by~\cite{Braaten:2008uh} \begin{equation} \label{Coupling} g(\Lambda_0) = \left[\frac{1}{4\pi a}-\frac{\Lambda_0}{2\pi^2}\right]^{-1}\,. 
\end{equation} To obtain an explicit expression for $\gamma(\p{q};\Lambda)$ in Eq.~\ref{eq:gammalambda}, the operator $(\mathcal{Q}_{\Lambda} H_\infty \mathcal{Q}_{\Lambda})^{-1}$ can be constructed with the aid of the operator identity \begin{equation} \frac{1}{A+B} = (1-A^{-1}B+A^{-1}BA^{-1}B-\ldots)A^{-1}\,, \end{equation} where $A\rightarrow \mathcal{Q}_{\Lambda}T\mathcal{Q}_{\Lambda}$ and $B\rightarrow \mathcal{Q}_{\Lambda}V_\delta\mathcal{Q}_{\Lambda}$, giving \begin{equation}\begin{aligned} \gamma(\p{q};\Lambda) &= -\frac{g(\Lambda_0)}{2m}\int_{\Lambda}^{\Lambda_0}\frac{\dif^3{q'}}{(2\pi)^3}\bra{\p{q}}(\mathcal{Q}_{\Lambda}H_\infty \mathcal{Q}_{\Lambda})^{-1}\ket{\p{q'}} \\ &=-\frac{g(\Lambda_0)}{q^2}\left(1-g(\Lambda_0)\int_{\Lambda}^{\Lambda_0}\frac{\dif{q'}}{2\pi^2}\frac{q'^2}{q'^2}+\ldots\right) = -\frac{g(\Lambda_0)}{q^2}\sum\limits_{n=0}^\infty\left(-\frac{g(\Lambda_0)\cdot(\Lambda_0-\Lambda)}{2\pi^2}\right)^n\\ &=\frac{-g(\Lambda_0)}{q^2}\frac{2\pi^2}{2\pi^2+g(\Lambda_0)\cdot(\Lambda_0-\Lambda)}\,. \end{aligned}\end{equation} This can be simplified further since Eq.~\ref{Coupling} implies \begin{equation} \frac{2\pi^2g(\Lambda_0)}{2\pi^2+g(\Lambda_0)\cdot(\Lambda_0-\Lambda)} = \left[\frac{\Lambda_0-\Lambda}{2\pi^2}+\frac{1}{g(\Lambda_0)}\right]^{-1}=\left[\frac{1}{4\pi a}-\frac{\Lambda}{2\pi^2}\right]^{-1} = g(\Lambda)\,, \end{equation} which gives \begin{equation} \label{eq:gammaUFG} \gamma(\p{q};\Lambda) = -\frac{g(\Lambda)}{q^2}\,.
\end{equation} Inserting into Eq.~\ref{eq:momdistfacA}, we find \begin{equation} \label{eq:momdistfacUFG} n_{\p{q}}\approx \frac{Z_{\Lambda}^2g^2(\Lambda)}{q^4}\sum\limits_{\p{k,k',K}}\bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k}}a^\dagger_{\frac{\p{K}}{2}-\p{k}}a_{\frac{\p{K}}{2}+\p{k'}}a_{\frac{\p{K}}{2}-\p{k'}}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,, \end{equation} where $\Lambda$ is of the same order of magnitude as the relevant low-energy scales of the system and $\Lambda \ll q \ll \Lambda_0$. In Ref.~\cite{Braaten:2008uh}, Braaten and Platter used the operator product expansion to show that the tail of the UFG momentum distribution behaves like\footnote{Shina Tan provided the first derivation of Eq.~\ref{eq:momdistOPE} using generalized functions~\cite{2008AnPhy.323.2971T}. Many different derivations can be found in the literature; see Ref.~\cite{Braaten:2010if} for details.} \begin{equation} \label{eq:momdistOPE} n_\p{q} = \frac{g^2(\Lambda_0)}{q^4}\sum\limits_{\p{k,k',K}}\bra{\psi^{\Lambda_0}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k}}a^\dagger_{\frac{\p{K}}{2}-\p{k}}a_{\frac{\p{K}}{2}+\p{k'}}a_{\frac{\p{K}}{2}-\p{k'}}\ket{\psi^{\Lambda_0}_{\alpha_{,A}}} \equiv \frac{C(\Lambda_0)}{q^4}\,, \end{equation} where $C(\Lambda_0)$ is often known as Tan's contact parameter. In the Appendix, we will show that, at the level of approximating the evolved creation/annihilation operators by the leading-order expression in Eq.~\ref{eq:adagger_lambda} and truncating induced three- and higher-body operators, the following relationship holds \begin{eqnarray} \label{eq:contact} Z_{\Lambda}^2 C(\Lambda) = C(\Lambda_0)\,. \end{eqnarray} Therefore, Eqs.~\ref{eq:momdistfacUFG} and \ref{eq:momdistOPE} are equivalent at the level of approximations made thus far. Heuristically, we can understand this equivalence since we expect that $Z_{\Lambda}\rightarrow 1$ as $\Lambda\rightarrow\Lambda_0$.
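The running-coupling identity used in the resummation above, and the resulting $q^{-4}$ tail, are easy to verify numerically. The following pure-Python sketch (the values of $a$, $\Lambda$, and $\Lambda_0$ are arbitrary illustrative choices, in units where $\hbar=m=1$) checks that $2\pi^2 g(\Lambda_0)/[2\pi^2+g(\Lambda_0)(\Lambda_0-\Lambda)]=g(\Lambda)$, that the geometric series reproduces the same answer when its ratio has magnitude less than one, and that $\gamma^2(\p{q};\Lambda)=g^2(\Lambda)/q^4$ falls off as $q^{-4}$:

```python
import math

def g(L, a):
    """Running coupling g(Lambda) = [1/(4 pi a) - Lambda/(2 pi^2)]^{-1}."""
    return 1.0 / (1.0 / (4.0 * math.pi * a) - L / (2.0 * math.pi**2))

a = -0.5           # illustrative S-wave scattering length (arbitrary choice)
L0, L = 50.0, 2.0  # illustrative UV cutoff Lambda_0 and evolved scale Lambda

# Resummation identity: 2 pi^2 g(L0) / (2 pi^2 + g(L0)(L0 - L)) = g(L)
lhs = 2.0 * math.pi**2 * g(L0, a) / (2.0 * math.pi**2 + g(L0, a) * (L0 - L))
assert abs(lhs - g(L, a)) < 1e-9 * abs(g(L, a))

# Geometric series with ratio r = -g(L0)(L0 - L)/(2 pi^2); for these
# parameters |r| < 1, and -g(L0)/q^2 * sum_n r^n reproduces -g(L)/q^2.
r = -g(L0, a) * (L0 - L) / (2.0 * math.pi**2)
assert abs(r) < 1.0
partial = sum(r**n for n in range(2000))
q = 30.0
gamma_series = -g(L0, a) / q**2 * partial
assert abs(gamma_series - (-g(L, a) / q**2)) < 1e-9

# gamma(q; Lambda) = -g(Lambda)/q^2, so gamma^2 (and hence the n_q tail)
# scales as q^{-4}: doubling q reduces gamma^2 by a factor of 16
gamma = lambda qq: -g(L, a) / qq**2
assert abs(gamma(30.0)**2 / gamma(60.0)**2 - 2.0**4) < 1e-9
```

This is only a consistency check of the algebra; any parameter set with $|g(\Lambda_0)(\Lambda_0-\Lambda)/2\pi^2|<1$ works the same way, while outside that radius of convergence only the resummed form is meaningful.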
\subsubsection{Static structure factor} Turning next to the asymptotic expression for the static structure factor, Eq.~\ref{eq:staticEff2}, our task is to evaluate the following term \begin{equation} \label{eq:UFGintegral1} \sum\limits_{\p{P},I}Z_{\Lambda}^2\,\gamma(\p{P+q};\Lambda)\gamma(\p{P};\Lambda)\,\,\rightarrow\,\,Z_{\Lambda}^2g^2(\Lambda)\,\int \frac{d^3P}{(2\pi)^3}\frac{\theta(|\p{P}|-\Lambda)\theta(|\p{P}+\p{q}|-\Lambda)}{|\p{P}+\p{q}|^2|\p{P}|^2}\,, \end{equation} where we've taken the infinite volume limit to convert the sum to an integral and substituted Eq.~\ref{eq:gammaUFG} for $\gamma$. Note that the integral in Eq.~\ref{eq:UFGintegral1} has an implicit ultraviolet cutoff $\Lambda_0\gg \Lambda$. To evaluate the integral, we use that there are two regions for which the theta function $\theta(|\p{P}+\p{q}|-\Lambda)=1$ independent of $\p{P}\cdot\p{q}$, and one region where it depends on angle to write \begin{equation} \label{eq:IUFG} I = \int \frac{d^3P}{(2\pi)^3}\frac{\theta(|\p{P}|-\Lambda)\theta(|\p{P}+\p{q}|-\Lambda)}{|\p{P}+\p{q}|^2|\p{P}|^2}\, \equiv I_{\rm{high}}\,+\,I_{\rm{medium}}\,+\, I_{\rm{low}}\,. \end{equation} For $|\p{P}|>|\p{q}|$, we have $|\p{P}+\p{q}|\ge |\p{P}|-|\p{q}|$, so that for $|\p{P}|\ge \Lambda + |\p{q}|$ the theta function $\theta(|\p{P}+\p{q}|-\Lambda)=1$ independent of $\p{P}\cdot\p{q}$. In this case, the limits of the angular integration are unrestricted \begin{eqnarray} \label{eq:IhighUFG} I_{\rm{high}}&=& \frac{1}{4\pi^2}\int_{\Lambda+q}^{\Lambda_0}dP\,\int_{-1}^{1}dx\frac{1}{P^2+q^2 + 2Pqx}\, \overset{\Lambda_0\rightarrow\infty}{=}\frac{1}{4\pi^2q}\left[\ensuremath{\,\mbox{Li}}_2\left(\frac{q}{q+\Lambda}\right)-\ensuremath{\,\mbox{Li}}_2\left(-\frac{q}{q+\Lambda}\right)\right]\,,\nonumber\\ \end{eqnarray} where $\rm{Li}_2(x)$ is the dilogarithm function.
Using that $\Lambda/q \ll 1$ and keeping just the leading term gives \begin{equation} \label{eq:IhighUFG1} I_{\rm{high}} \approx \frac{1}{4\pi^2q}\,\biggl[\frac{\pi^2}{4}\,+\,\frac{\Lambda}{q}\log\left(\frac{\Lambda}{2q}\right)\, -\,\frac{\Lambda}{q}\biggr]\,. \end{equation} Similarly, for $|\p{P}|<|\p{q}|$ the limits of the angular integration are unrestricted if $|\p{P}|< |\p{q}|-\Lambda$, giving \begin{eqnarray} \label{eq:IlowUFG} I_{\rm{low}}&=& \frac{1}{4\pi^2}\int_{\Lambda}^{q-\Lambda}dP\,\int_{-1}^{1}dx\frac{1}{P^2+q^2 + 2Pqx}\nonumber\\ &=&\frac{1}{4\pi^2q}\left[\ensuremath{\,\mbox{Li}}_2\left(-\frac{\Lambda}{q}\right)-\ensuremath{\,\mbox{Li}}_2\left(\frac{\Lambda}{q}\right) +\ensuremath{\,\mbox{Li}}_2\left(1-\frac{\Lambda}{q}\right)-\ensuremath{\,\mbox{Li}}_2\left(-1+\frac{\Lambda}{q}\right)\right]\,,\nonumber\\ &\approx& \frac{1}{4\pi^2q}\,\biggl[\frac{\pi^2}{4}\,+\,\frac{\Lambda}{q}\log\left(\frac{\Lambda}{2q}\right)\, -\,\frac{3\Lambda}{q}\biggr]\,, \end{eqnarray} where we've used $\Lambda/q \ll 1$ in the last step. Finally, we consider the intermediate region $q-\Lambda < P < q+\Lambda$, where the theta function $\theta(|\p{P}+\p{q}|-\Lambda)$ places restrictions on the limits of the angular integration. In this case, the theta function requires $x > x_{\rm{min}}= \frac{\Lambda^2-P^2-q^2}{2Pq}$ \begin{eqnarray} \label{eq:ImedUFG} I_{\rm{medium}} &=& \frac{1}{4\pi^2}\int_{q-\Lambda}^{q+\Lambda}dP\,\int_{x_{\rm min}}^{1}dx\frac{1}{P^2+q^2 + 2Pqx}\nonumber\\ &=&\frac{1}{4\pi^2q}\,\biggl[\log\left(\frac{q}{\Lambda}\right)\log\left(\frac{1+\Lambda/q}{1-\Lambda/q}\right)\,+\,\ensuremath{\,\mbox{Li}}_2\left(-1-\frac{\Lambda}{q}\right)-\ensuremath{\,\mbox{Li}}_2\left(-1+\frac{\Lambda}{q}\right)\biggr]\nonumber\\ &\approx& \frac{1}{4\pi^2q}\,\biggl[\frac{2\Lambda}{q}\log\left(\frac{2q}{\Lambda}\right)\biggr]\,.
\end{eqnarray} Inserting Eqs.~\ref{eq:IhighUFG1}-\ref{eq:ImedUFG} in Eq.~\ref{eq:IUFG} gives \begin{equation} I \approx \frac{1}{4\pi^2q}\left[\frac{\pi^2}{2}-\frac{4\Lambda}{q}\right]\,, \end{equation} which together with Eq.~\ref{eq:staticEff2} and Eq.~\ref{eq:gammaUFG} yields \begin{eqnarray} \label{eq:staticEff3} S_{\uparrow\!\downarrow}(\p{q}) &\approx& \left(-\frac{2}{q^2 g(\Lambda)} \,+\,\frac{1}{8q}\,-\,\frac{\Lambda}{\pi^2q^2}\right)\,Z_{\Lambda}^2\,C(\Lambda)\,\nonumber\\ &=& \left(-\frac{2}{q^2 g(\Lambda)} \,+\,\frac{1}{8q}\,-\,\frac{\Lambda}{\pi^2q^2}\right)\,C(\Lambda_0) \nonumber\\ &=& \left(\frac{1}{8q} \,-\,\frac{1}{2\pi a q^2}\right)\,C(\Lambda_0)\,, \end{eqnarray} where we used Eq.~\ref{eq:contact} and the explicit form of the coupling $g(\Lambda)$, Eq.~\ref{Coupling}, in the second and third lines, respectively. As with the momentum distribution, Eq.~\ref{eq:staticEff3} agrees with the known result that has been previously derived by a number of different methods~\cite{Braaten:2010if}. \subsection{Electron gas} \subsubsection{Momentum distribution} As our second check of Eq.~\ref{eq:momdistfacA} and Eq.~\ref{eq:staticEff2}, we derive the large-momentum limit of the momentum distribution and static structure factor for Coulombic systems. Unlike the unitary Fermi gas, we were unable to evaluate $\gamma(\p{q};\Lambda)$ in closed form. 
Therefore, we turn to a perturbative calculation and expand the Q-space propagator \begin{eqnarray} \frac{1}{\mathcal{Q}_{\Lambda} H\mathcal{Q}_{\Lambda}} &=& \frac{1}{\mathcal{Q}_{\Lambda} T\mathcal{Q}_{\Lambda}} -\frac{1}{\mathcal{Q}_{\Lambda} T\mathcal{Q}_{\Lambda}}V\frac{1}{\mathcal{Q}_{\Lambda} T\mathcal{Q}_{\Lambda}} +\frac{1}{\mathcal{Q}_{\Lambda} T\mathcal{Q}_{\Lambda}}V\frac{1}{\mathcal{Q}_{\Lambda} T\mathcal{Q}_{\Lambda}}V\frac{1}{\mathcal{Q}_{\Lambda} T\mathcal{Q}_{\Lambda}}+\ldots\,,\nonumber\\ \end{eqnarray} which together with Eq.~\ref{eq:gammalambda} gives the first- and second-order contributions to $\gamma$ \begin{eqnarray} \label{eq:gammaLO} \gamma^{(1)}(\p{q};\Lambda) &=& -\int_{\Lambda}^{\infty}\frac{d^3q'}{(2\pi)^3}\,\frac{(2\pi)^3}{q^2}\delta^{3}(\p{q}-\p{q}')\frac{4\pi e^2}{q'^2}\nonumber\\ &=& -\frac{4\pi}{a_0q^4}\,, \end{eqnarray} and \begin{eqnarray} \gamma^{(2)}(\p{q};\Lambda) &=& \int_{\Lambda}^{\infty}\frac{d^3q'}{(2\pi)^3}\frac{1}{q^2q'^2}\frac{4\pi e^2}{|\p{q}-\p{q}'|^2}\frac{4\pi e^2}{q'^2}\nonumber\\ &\approx& \frac{8}{a_0^2q^4\Lambda}\,, \end{eqnarray} where we've kept the leading term in $1/q$ for the second-order contribution, and $a_0=\frac{\hbar^2}{e^2m}$ is the Bohr radius. We assume that perturbation theory is justified provided \begin{equation} \label{eq:pertvalidity} \left|\frac{\gamma^{(2)}}{\gamma^{(1)}}\right| = \frac{2}{\pi}\frac{1}{a_0\Lambda} \ll 1\quad\Rightarrow\quad \Lambda \gg \frac{2}{\pi}\frac{1}{a_0}\,, \end{equation} and restrict our attention to the leading term, Eq.~\ref{eq:gammaLO}. Inserting this into Eq.~\ref{eq:momdistfacA} gives \begin{eqnarray} \label{eq:momdistfaccoul} n_{\p{q}}\approx \frac{16\pi^2}{q^8 a_0^2}Z^2_{\Lambda}\,\sum\limits_{\p{k,k',K}}\bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\uparrow}a_{\frac{\p{K}}{2}-\p{k'},\downarrow}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,.
\end{eqnarray} Apart from the $Z_{\Lambda}$ factors and the evolved wave functions $\psi^{\Lambda}_{\alpha_{,A}}$, this is very similar to the known result first derived by Kimball~\cite{1975JPhA....8.1513K} \begin{equation} \label{eq:momdistcoul} n_{\p{q}}\approx \frac{16\pi^2}{q^8 a_0^2}\,\sum\limits_{\p{k,k',K}}\bra{\psi^{\infty}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\uparrow}a_{\frac{\p{K}}{2}-\p{k'},\downarrow}\ket{\psi^{\infty}_{\alpha_{,A}}}\,. \end{equation} As with the unitary Fermi gas, one can make a heuristic argument that Eq.~\ref{eq:momdistfaccoul} and Eq.~\ref{eq:momdistcoul} are equivalent since $Z_{\Lambda}\rightarrow 1$ and $\psi^{\Lambda}_{\alpha_{,A}}\rightarrow \psi^{\infty}_{\alpha_{,A}}$ as $\Lambda\rightarrow\infty$. More precisely, we will show in the Appendix that \begin{eqnarray} \label{eq:bareevolvedcontact} \sum\limits_{\p{k,k',K}}\bra{\psi^{\infty}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\uparrow}a_{\frac{\p{K}}{2}-\p{k'},\downarrow}\ket{\psi^{\infty}_{\alpha_{,A}}} \approx \nonumber \\ Z^2_{\Lambda}\left\{1 + \mathcal{O}\left(\frac{1}{ \Lambda a_0}\right)\right\} \sum\limits_{\p{k,k',K}}\bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\uparrow}a_{\frac{\p{K}}{2}-\p{k'},\downarrow}\ket{\psi^{\Lambda}_{\alpha_{,A}}}\,, \end{eqnarray} so that Eqs.~\ref{eq:momdistfaccoul} and~\ref{eq:momdistcoul} are equivalent up to terms of order $\mathcal{O}(\frac{1}{\Lambda a_0})$, which are presumed to be small by virtue of Eq.~\ref{eq:pertvalidity}. 
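The perturbative estimates above lend themselves to a simple numerical spot-check. Carrying out the angular integration in the expression for $\gamma^{(2)}$ analytically (our own intermediate step, not part of the text) reduces its magnitude to $(4e^4/q^3)\int_\Lambda^\infty dq'\, q'^{-3}\ln\left[(q+q')/|q-q'|\right]$, so the quoted result $|\gamma^{(2)}|\approx 8/(a_0^2q^4\Lambda)$ requires this radial integral to approach $2/(q\Lambda)$ for $q\gg\Lambda$. A minimal pure-Python sketch, in units $\hbar=m=1$ so that $a_0=1/e^2$:

```python
import math

def J(q, lam, n=200_000):
    """Radial integral int_Lam^inf dq' q'^-3 * ln[(q+q')/|q-q'|], evaluated
    by the midpoint rule after the substitution u = 1/q' (u in (0, 1/Lam]).
    The integrable log singularity at q' = q never lands on a midpoint."""
    h = (1.0 / lam) / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += u * math.log((q * u + 1.0) / abs(q * u - 1.0))
    return h * total

q, lam = 500.0, 1.0                  # deep in the q >> Lambda regime
ratio = J(q, lam) * q * lam / 2.0    # -> 1 iff |gamma^(2)| ~ 8/(a_0^2 q^4 Lam)
assert abs(ratio - 1.0) < 0.02
```

The check confirms the $1/\Lambda$ enhancement of the second-order term at the percent level, consistent with the validity estimate in Eq.~\ref{eq:pertvalidity}.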
\subsubsection{Static structure factor} Turning next to the application of Eq.~\ref{eq:staticEff2} to Coulomb systems, our task is to evaluate the following term \begin{equation} \label{eq:Coulombintegral1} \sum\limits_{\p{P},I}Z_{\Lambda}^2\,\gamma(\p{P+q};\Lambda)\gamma(\p{P};\Lambda)\,\,\rightarrow\,\,\left(\frac{4\pi Z_{\Lambda}}{a_0}\right)^2\,\int \frac{d^3P}{(2\pi)^3}\frac{\theta(|\p{P}|-\Lambda)\theta(|\p{P}+\p{q}|-\Lambda)}{|\p{P}+\p{q}|^4|\p{P}|^4}\,. \end{equation} As before, we split the integral into a sum of three terms \begin{equation} I = \int \frac{d^3P}{(2\pi)^3}\frac{\theta(|\p{P}|-\Lambda)\theta(|\p{P}+\p{q}|-\Lambda)}{|\p{P}+\p{q}|^4|\p{P}|^4} = I_{\rm{high}} + I_{\rm{medium}} + I_{\rm{low}}\, \end{equation} where $I_{\rm{high}}$ corresponds to $|\p{P}| \ge \Lambda +|\p{q}|$, $I_{\rm{medium}}$ corresponds to $|\p{q}|-\Lambda \le |\p{P}| \le |\p{q}|+\Lambda$, and $I_{\rm{low}}$ corresponds to $|\p{P}|\le |\p{q}| -\Lambda$. For $I_{\rm{high}}$ and $I_{\rm{low}}$, the angular integrals are trivial \begin{eqnarray} I_{\rm{high}} &=& \frac{1}{4\pi^2}\int_{\Lambda+q}^{\infty}P^2 dP\int_{-1}^{1}dx\,\frac{1}{P^4}\frac{1}{\left(P^2+q^2+2Pqx\right)^2} \nonumber\\ &\approx& \frac{1}{4\pi^2}\frac{1}{2\Lambda q^4}\,+\,\mathcal{O}\left(\frac{1}{q^5}\right)\,,\\ I_{\rm{low}}&=& \frac{1}{4\pi^2}\int_{\Lambda}^{q-\Lambda}P^2 dP\int_{-1}^{1}dx\,\frac{1}{P^4}\frac{1}{\left(P^2+q^2+2Pqx\right)^2} \nonumber\\ &\approx& \frac{1}{4\pi^2}\frac{5}{2\Lambda q^4} \,+\,\mathcal{O}\left(\frac{1}{q^5}\right)\,. \end{eqnarray} For $I_{\rm{medium}}$, the Heaviside theta functions restrict the angular integral \begin{eqnarray} \label{eq:Imedcoul} I_{\rm{medium}} &=& \frac{1}{4\pi^2}\int_{q-\Lambda}^{q+\Lambda}P^2 dP\int_{x_{\rm{min}}}^{1}dx\,\frac{1}{P^4}\frac{1}{\left(P^2+q^2+2Pqx\right)^2} \nonumber\\ &\approx& \frac{1}{4\pi^2}\frac{1}{\Lambda q^4}\,+\,\mathcal{O}\left(\frac{1}{q^5}\right)\,, \end{eqnarray} where $x_{\rm{min}} = \frac{\Lambda^2-P^2-q^2}{2Pq}$.
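Since the angular integrals are elementary, each of the three asymptotic estimates can be verified numerically before combining them. In the sketch below (our own cross-check, not part of the derivation) the angular integrations have been performed analytically, leaving one-dimensional radial integrals that are evaluated with the midpoint rule; the common $1/4\pi^2$ prefactor divides out of all three comparisons:

```python
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

q, lam = 2000.0, 1.0   # deep in the q >> Lambda regime

# I_high: the angular integral gives 2/(P^2 (P^2 - q^2)^2); substituting
# u = 1/P maps [q + lam, infinity) onto the finite interval (0, 1/(q + lam)]
h_high = midpoint(lambda u: 2.0 * u**4 / (1.0 - (q * u) ** 2) ** 2,
                  0.0, 1.0 / (q + lam), 400_000)

# I_low: same angular integral, with P in [lam, q - lam]
h_low = midpoint(lambda P: 2.0 / (P**2 * (q**2 - P**2) ** 2),
                 lam, q - lam, 400_000)

# I_medium: the restricted angular integral gives
# (1/(2 q P^3)) * (1/lam^2 - 1/(P + q)^2)
h_med = midpoint(lambda P: (1.0 / (2.0 * q * P**3))
                 * (1.0 / lam**2 - 1.0 / (P + q) ** 2),
                 q - lam, q + lam, 10_000)

# compare with the quoted large-q asymptotics
assert abs(h_high * 2.0 * lam * q**4 - 1.0) < 0.05        # ~ 1/(2 Lam q^4)
assert abs(h_low * 2.0 * lam * q**4 / 5.0 - 1.0) < 0.05   # ~ 5/(2 Lam q^4)
assert abs(h_med * lam * q**4 - 1.0) < 0.05               # ~ 1/(Lam q^4)
```

The residual few-percent deviations are consistent with the neglected $\mathcal{O}(1/q^5)$ corrections.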
Combining Eqs.~\ref{eq:Coulombintegral1}-\ref{eq:Imedcoul} with Eq.~\ref{eq:staticEff2}, we find \begin{eqnarray} S_{\uparrow\!\downarrow}(\p{q}) &\approx&\frac{8\pi}{a_0}\frac{Z^2_{\Lambda}}{q^4}\left(1-\frac{2}{\pi}\frac{1}{\Lambda a_0}\right)\,\sum\limits_{\p{K,k,k'}}\, \bra{\psi^{\Lambda}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\downarrow}a_{\frac{\p{K}}{2}-\p{k'},\uparrow}\ket{\psi^{\Lambda}_{\alpha_{,A}}} \nonumber\\ &\approx& \frac{8\pi}{a_0}\frac{1}{q^4}\,\sum\limits_{\p{K,k,k'}}\, \bra{\psi^{\infty}_{\alpha_{,A}}}a^\dagger_{\frac{\p{K}}{2}+\p{k},\uparrow}a^\dagger_{\frac{\p{K}}{2}-\p{k},\downarrow}a_{\frac{\p{K}}{2}+\p{k'},\downarrow}a_{\frac{\p{K}}{2}-\p{k'},\uparrow}\ket{\psi^{\infty}_{\alpha_{,A}}}\,, \end{eqnarray} where we've used Eq.~\ref{eq:pertvalidity} and Eq.~\ref{eq:bareevolvedcontact} to obtain the second line. Once again, this is in agreement with the previously known result of Kimball~\cite{1975JPhA....8.1513K}. \section{Summary} \label{sec:summary} In this paper, we have used elementary RG arguments to show that, for general low-energy many-body states, the high-momentum tails of momentum distributions and static structure factors factorize into the product of a universal function of momentum that is fixed (in leading-order) by two-body physics, and a state-dependent matrix element that is sensitive only to low-momentum structure of the many-body state, and is the same for both. This generalizes the results of Anderson \emph{et al.}, who derived analogous relations in the two-body system~\cite{Anderson:2010aq}, and suggests a possible interpretation of the universal high-momentum dependence and scaling behavior found in nuclear momentum distributions in the analysis of $(e,e'p)$ reactions. 
As a check, we have successfully applied our factorization relations to two well-studied systems, the unitary Fermi gas and the electron gas, reproducing known results for the high-momentum tails of each. Our proof of factorization follows from decoupling and the separation of scales, and resembles aspects of the OPE in quantum field theory. Unfortunately, we have not been able to establish a precise connection. The main difference appears to be that, in a local quantum field theoretical framework, the OPE offers a controlled expansion since the scaling dimension of a given local operator uniquely fixes the $\p{r}$-dependence of the corresponding Wilson coefficient, making the truncation of the expansion controllable. In contrast, in the present paper we work in the domain of general non-relativistic quantum mechanics and do not require that the system is described by a local QFT. This relaxation of assumptions allows us to extend the notion of factorization and OPE-like methods to a wider class of problems, albeit in a less controlled fashion since we cannot make precise statements about the scaling properties of the operators kept/omitted in our expansions. Nevertheless, the methods presented in this paper may still be useful in low-energy nuclear physics, as they provide tools that allow us to parameterize the high-momentum components of operators that would normally require degrees of freedom that we do not retain. We can, for example, build effective operators containing state-independent functions of high momenta that can in principle be extracted from few-body data, and subsequently used to make predictions for high-momentum processes in $A$-body systems. \begin{acknowledgments} We thank Eric Anderson, Dick Furnstahl, Kai Hebeler, Heiko Hergert, Robert Perry, and Lucas Platter for useful comments and discussions. This work was supported in part by the National Science Foundation under Grant Nos. PHY-0758125 and PHY-1068648. \end{acknowledgments}
1208.1443
\section{Introduction} \label{sec:intro} Expressing convex optimization problems in conic form, as the minimization of a linear functional over an affine slice of a convex cone, has been an important method in the development of modern convex optimization theory. This abstraction is useful (at least from a theoretical viewpoint) because all that is difficult and interesting about the problem is packaged into the cone. The conic viewpoint provides a natural way to organize classes of convex optimization problems into hierarchies based on whether the cones associated with one class can be expressed in terms of the cones associated with another class. For example, semidefinite programming generalizes linear programming because the non-negative orthant is the restriction to the diagonal of the positive semidefinite cone. When faced with a convex cone the geometry of which is not well understood, we stand to gain theoretical insight as well as off-the-shelf optimization algorithms by representing it in terms of a cone with known geometric and algebraic structure such as the positive semidefinite cone. Terminology is attached to this idea, with a cone being \emph{spectrahedral} if it is a linear section (or `slice') of the positive semidefinite cone, and \emph{semidefinitely representable} if it is a linear projection of a spectrahedral cone. The efficiency of a semidefinite representation is also clearly important. If we can write a cone as the projection of a slice of the cone of $m\times m$ positive semidefinite matrices, we say it has a semidefinite representation of \emph{size $m$}. Many convex cones have been shown to be semidefinitely representable using a variety of techniques (see \cite{nemirovski2001lectures} as well as the recent book \cite{frgbook} for contrasting methods and examples). 
The classes of semidefinitely representable cones and spectrahedral cones are distinct \cite{ramana1995some}, with semidefinitely representable cones being perhaps more natural from the point of view of optimization. A semidefinite representation of a cone suffices to express the associated cone program as a semidefinite program. Furthermore, unlike spectrahedral cones, the class of semidefinitely representable cones is closed under duality~\cite[Proposition 3.2]{gouveia2011positive}. The \emph{hyperbolicity cones} form a family of convex cones (constructed from certain multivariate polynomials) that includes the positive semidefinite cone, as well as all homogeneous cones~\cite{guler1997hyperbolic}. While it has been shown (by Lewis et al.~\cite{lewis2005lax} based on work of Helton and Vinnikov~\cite{helton2007linear}) that all three-dimensional hyperbolicity cones are spectrahedral, little is known about semidefinite representations of higher dimensional hyperbolicity cones. Furthermore while hyperbolicity cones have very simple descriptions, their dual cones are not well understood. In this paper we give explicit, polynomial-sized semidefinite representations of the hyperbolicity cones known as the \emph{derivative relaxations} of the non-negative orthant, and the corresponding derivative relaxations of the positive semidefinite cone. These cones form a family of outer approximations to the orthant and positive semidefinite cones respectively with many interesting properties~\cite{renegar2006hyperbolic}. We obtain semidefinite representations of the derivative relaxations of spectrahedral cones as slices of the derivative relaxations of the positive semidefinite cone. 
\subsection{Hyperbolic polynomials and hyperbolicity cones} \label{sec:hyp-poly} A homogeneous polynomial $p$ of degree $m$ in $n$ variables is \emph{hyperbolic} with respect to $e\in \mathbb{R}^n$ if $p(e) \neq 0$ and if for all $x\in \mathbb{R}^n$ the univariate polynomial $t\mapsto p(x-te)$ has only real roots. G\r{a}rding's foundational work on hyperbolic polynomials \cite{garding1959inequality} establishes that if $p$ is hyperbolic with respect to $e$ then the connected component of $\{x\in \mathbb{R}^n: p(x) \neq 0\}$ containing $e$ is an open convex cone. This cone is called the \emph{hyperbolicity cone} corresponding to $(p,e)$. We denote it by $\Lambda_{++}(p,e)$, and its closure by $\Lambda_{+}(p,e)$. Note that $p$ is hyperbolic with respect to $e$ if and only if $-p$ is hyperbolic with respect to $e$. As such we assume throughout that $p(e) > 0$. We can expand $p(x+te)$ as \[ p(x+te) = p(e)\left[t^{m} + a_1(x)t^{m-1} + a_2(x)t^{m-2} + \cdots + a_{m-1}(x)t + a_m(x)\right]\] where the $a_i(x)$ are polynomials that are homogeneous of degree $i$. There is an alternative description of the hyperbolicity cone $\Lambda_{+}(p,e)$ due to Renegar~\cite[Theorem 20]{renegar2006hyperbolic} as \begin{equation} \label{eq:coefcone} \Lambda_{+}(p,e) = \left\{x\in \mathbb{R}^n: a_1(x) \geq 0,\;\;a_2(x) \geq 0,\;\; \ldots,\;\;a_m(x) \geq 0\right\}. \end{equation} We use this description of $\Lambda_+(p,e)$ throughout the paper. \paragraph{Basic examples:} \begin{itemize} \item The polynomial $p(x_1,x_2,\ldots,x_n) = x_1 x_2 \cdots x_n$ is hyperbolic with respect to $e=\mathbf{1}_n:=(1,1,\ldots,1)$. The associated closed hyperbolicity cone is the non-negative orthant, $\mathbb{R}_+^n$. 
Since \[ p(x+t\mathbf{1}_n) = t^{n} + e_1(x)t^{n-1} + \cdots + e_{n-1}(x)t + e_n(x)\] where $e_k(x) = \sum_{1\leq i_1<\cdots<i_k \leq n} x_{i_1}\cdots x_{i_k}$ is the elementary symmetric polynomial of degree $k$ in the variables $x_1,x_2,\ldots,x_n$, \[ \Lambda_{+}(p,e) = \mathbb{R}^{n}_+ = \left\{x\in \mathbb{R}^n: e_1(x) \geq 0,\;\; e_2(x)\geq 0,\;\;\ldots,\;\;e_n(x)\geq 0\right\}.\] \item Let $X$ be an $n\times n$ symmetric matrix of indeterminates. The polynomial $p(X) = \det(X)$ is hyperbolic with respect to $e=I_n$, the $n\times n$ identity matrix. The associated closed hyperbolicity cone is the positive semidefinite cone, $\mathbb{S}_+^n$. Since \[ p(X+tI_n) = t^{n} + E_{1}(X)t^{n-1} + \cdots + E_{n-1}(X)t + E_n(X)\] where the $E_k(X)$ are the coefficients of the characteristic polynomial of $X$, \[ \Lambda_+(p,e) = \mathbb{S}_+^n = \left\{X: E_1(X)\geq 0,\;\;E_2(X)\geq 0, \;\;\ldots,\;\;E_n(X) \geq 0\right\}.\] Observe that $E_{k}(X):= e_k(\lambda(X))$ is the elementary symmetric polynomial of degree $k$ in the eigenvalues of $X$, so the positive semidefinite cone can also be described in terms of polynomial inequalities on the eigenvalues of $X$ as \[ \mathbb{S}_+^n = \left\{X: e_1(\lambda(X)) \geq 0,\;\;e_2(\lambda(X)) \geq 0,\;\; \ldots,\;\;e_n(\lambda(X)) \geq 0\right\}.\]
Indeed if $p(x+te) = p(e)\left[t^m + \sum_{i=1}^{m} a_{i}(x)t^{m-i}\right]$ then differentiating $k$ times with respect to $t$ we see that \[ p_e^{(k)}(x+te) = p(e)\left[c_0a_{m-k}(x) + c_1a_{m-k-1}(x)t + \cdots +c_{m-k}t^{m-k}\right]\] where $c_i = (k+i){!}/i{!} > 0$. By~\eqref{eq:coefcone} the corresponding hyperbolicity cone is \[\Lambda_+^{(k)}(p,e):=\Lambda_{+}(p^{(k)}_e,e) = \{x\in \mathbb{R}^n:a_1(x)\geq 0,\;\; a_2(x)\geq 0,\;\;\ldots, \;\;a_{m-k}(x) \geq 0\}\] and can be obtained from \eqref{eq:coefcone} by removing $k$ of the inequality constraints. As a result, the hyperbolicity cones $\Lambda_+^{(k)}(p,e)$ provide a sequence of outer approximations to the original hyperbolicity cone that satisfy \[ \Lambda_{+}(p,e) \subset \Lambda_{+}^{(1)}(p,e) \subset \cdots \subset \Lambda_{+}^{(m-1)}(p,e).\] The last of these, $\Lambda_{+}^{(m-1)}(p,e)$, is simply the closed half-space defined by $e$. The work of Renegar \cite{renegar2006hyperbolic} highlights the many nice properties of this sequence of approximations. Note that we abuse terminology by referring to the cones $\Lambda_{+}^{(k)}(p,e)$ as \emph{derivative relaxations} of the hyperbolicity cone $\Lambda_{+}(p,e)$. The abuse is that $\Lambda_{+}^{(k)}(p,e)$ does not depend only on the \emph{geometric} object $\Lambda_+(p,e)$ but on its particular \emph{algebraic} description via $p$ and $e$. \paragraph{Examples:} \begin{itemize} \item In the case of $p(x) = x_1x_2\cdots x_n = e_n(x)$ and $e=\mathbf{1}_n$, we have that $p^{(k)}_e(x) = k{!}e_{n-k}(x)$. Consequently the $k$th derivative relaxation of the orthant, which we denote by $\orthant{n}{k}$, is the hyperbolicity cone $\Lambda_{+}(e_{n-k},\mathbf{1}_n)$. It can be expressed as \begin{align} \orthant{n}{k} & = \{x\in \mathbb{R}^n: e_1(x) \geq 0,\;\; e_2(x) \geq 0,\;\; \ldots, \;\; e_{n-k}(x) \geq 0\}.\label{eq:coef-deriv} \end{align} Consistent with these descriptions we define $\orthant{n}{n} := \mathbb{R}^n$. 
\item In the case of $p(X) = \det(X) = E_n(X)$ and $e=I_n$, we have that $p^{(k)}_e(X) = k{!}E_{n-k}(X)$. The $k$th derivative relaxation of the positive semidefinite cone, which we denote by $\psdcone{n}{k}$, can be described as \begin{align} \psdcone{n}{k} & = \left\{X\in \mathbb{S}^n: E_1(X) \geq 0,\;\; E_2(X) \geq 0,\;\; \ldots,\;\; E_{n-k}(X) \geq 0\right\}\label{eq:psd-coef-deriv}\\ & = \left\{X\in \mathbb{S}^n: e_1(\lambda(X)) \geq 0,\;\; e_2(\lambda(X))\geq 0,\;\;\ldots,\;\; e_{n-k}(\lambda(X)) \geq 0\right\}.\label{eq:psd-coef-deriv-spec} \end{align} Again we define $\psdcone{n}{n} := \mathbb{S}^n$, the set of $n\times n$ symmetric matrices. Since $E_i(\diag(x)) = e_i(x)$ for all $i$, the diagonal slice of $\psdcone{n}{k}$ is exactly $\orthant{n}{k}$. \end{itemize} \paragraph{Symmetry:} Suppose $G$ is a group acting by linear transformations on $\mathbb{R}^n$ by $x\mapsto g\cdot x$ for all $g\in G$. Suppose \emph{both $p$ and $e$} are invariant under the group action, i.e., $g\cdot e = e$ and $(g\cdot p)(x) := p(g^{-1}\cdot x) = p(x)$ for all $g\in G$. Then for all $t\in \mathbb{R}$, $x\in\mathbb{R}^n$ and $g\in G$ \[ p(x+te) = (g\cdot p)(x+te) = p(g^{-1}\cdot (x+te)) = p((g^{-1}\cdot x) + te).\] Hence the hyperbolicity cone $\Lambda_+(p,e)$ and all of its derivative cones $\Lambda_+^{(k)}(p,e)$ are invariant under this same group action. For our purposes an important example of this is the symmetry of the cones $\psdcone{n}{k}$. The action of $O(n)$ by conjugation on symmetric matrices leaves the polynomial $p(X) = \det(X)$ invariant \emph{and} preserves the direction $e = I_n$. Hence all of the derivative relaxations of the positive semidefinite cone are invariant under conjugation by orthogonal matrices. As such, the cones $\psdcone{n}{k}$ are \emph{spectral sets}, in the sense that whether a symmetric matrix $X$ belongs to $\psdcone{n}{k}$ depends only on the eigenvalues of $X$.
This is evident from the description of $\psdcone{n}{k}$ in~\eqref{eq:psd-coef-deriv-spec}. \subsection{Related work} \label{sec:related} Previous work has focused on semidefinite and spectrahedral representations of the derivative relaxations of the orthant. Zinchenko \cite{zinchenko2008hyperbolicity} used a decomposition approach to give semidefinite representations of $\orthant{n}{1}$ and its dual cone. Sanyal \cite{sanyal2013derivative} subsequently gave spectrahedral representations of $\orthant{n}{1}$ and $\orthant{n}{n-2}$ and conjectured that all of the derivative relaxations of the orthant admit spectrahedral representations. Recently Br{\"a}nd{\'e}n \cite{branden2012hyperbolicity} settled this conjecture in the affirmative giving spectrahedral representations of $\orthant{n}{n-k}$ for $k=1,2,\ldots,n-1$ of size $O(n^{k-1})$. For each $1\leq k < n$ Br{\"a}nd{\'e}n constructs a graph $G_{n,k} = (V,E)$ together with edge weights $(w_e(x))_{e\in E}$ that are linear forms in $x$ so that \begin{equation} \label{eq:branden} \orthant{n}{n-k} = \left\{x\in \mathbb{R}^n: L_{G_{n,k}}(x) \succeq 0\right\} \end{equation} where $L_{G_{n,k}}(x)$ is the $|V|\times |V|$ edge-weighted Laplacian of $G_{n,k}$. Since $L_{G_{n,k}}(x)$ is linear in the edge weights, and the edge weights are linear forms in $x$, \eqref{eq:branden} is a spectrahedral representation of size $|V|$. With the exception of two distinguished vertices, the vertices of $G_{n,k}$ are indexed by all $\ell$-tuples (for $1\leq \ell\leq k-1$) consisting of distinct elements of $\{1,2,\ldots,n\}$. Hence $|V| = 2+\sum_{\ell=1}^{k-1}\ell{!}\binom{n}{\ell}$ showing that Br{\"a}nd{\'e}n's spectrahedral representation of $\orthant{n}{n-k}$ has size $O(n^{k-1})$. While Br{\"a}nd{\'e}n's construction is of considerable theoretical interest, these representations (unlike ours) are not practical for optimization due to their prohibitive size. 
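To get a feel for the sizes involved, the vertex count $|V| = 2+\sum_{\ell=1}^{k-1}\ell{!}\binom{n}{\ell}$ of Br{\"a}nd{\'e}n's graph can be tabulated directly. A small pure-Python illustration (our own, not from the paper):

```python
import math

def branden_size(n, k):
    """Vertex count |V| = 2 + sum_{l=1}^{k-1} l! * C(n, l) of the graph
    G_{n,k}, i.e. the size of Branden's spectrahedral representation of
    the cone orthant(n, n-k)."""
    return 2 + sum(math.factorial(l) * math.comb(n, l) for l in range(1, k))

# Already for modest parameters the representation is enormous:
assert branden_size(30, 5) == 682_982
# ...which dwarfs the O(min{k', n-k'} n^2) ~ 4.5 * 10^3 scale of the
# polynomial-sized semidefinite representations constructed in this paper.
assert branden_size(30, 5) > 100 * 30**2
```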
A spectrahedral representation of $\orthant{n}{1}$ is implicit in the work of Choe et al.~\cite{choe2004homogeneous} that studies the relationships between matroids and hyperbolic polynomials. Choe et al.~observe that if $\mathcal{M}$ is a \emph{regular} matroid represented by the rows of a totally unimodular matrix $V$ then $\det(V^T\diag(x)V)$ is the basis generating polynomial of $\mathcal{M}$. In particular, the uniform matroid $U_{n}^{n-1}$ is regular and has $e_{n-1}(x)$ as its basis generating polynomial, yielding a symmetric determinantal representation of $e_{n-1}(x)$ and hence a spectrahedral representation of $\orthant{n}{1}$. From a computational perspective, G\"{u}ler \cite{guler1997hyperbolic} showed that if $p$ has degree $m$ and is hyperbolic with respect to $e$ then $\log p$ is a self-concordant barrier function (with barrier parameter $m$) for the hyperbolicity cone $\Lambda_+(p,e)$. As such, as long as $p$ and its gradient and Hessian can be computed efficiently, one can use interior point methods to minimize a linear functional over an affine slice of $\Lambda_+(p,e)$ efficiently. Renegar \cite[Section 9]{renegar2006hyperbolic} gave an efficient interpolation-based method for computing $p_e^{(k)}$ (and its gradient and Hessian) whenever $p$ (and its gradient and Hessian) can be evaluated efficiently. G\"{u}ler and Renegar's observations together yield efficient computational methods to optimize a linear functional over an affine slice of a derivative relaxation of a spectrahedral cone. Our results complement these, giving a method to solve optimization problems of this type using existing numerical procedures for semidefinite programming. \subsection{Notation} Here we define notation not explicitly defined elsewhere in the paper. If $C$ is a convex cone, we denote by $C^{*}$ the dual cone, i.e.~the set of linear functionals that are non-negative on $C$.
We represent linear functionals on $\mathbb{R}^n$ using the standard Euclidean inner product, and linear functionals on $\mathbb{S}^n$ using the trace inner product $\langle X,Y\rangle = \textup{tr}(XY)$. As such $C^* = \{y: \langle y,x\rangle \geq 0,\;\;\text{for all $x\in C$}\}$. If $X\in \mathbb{S}^n$ let $\lambda(X)$ denote its eigenvalues sorted so that $\lambda_1(X)\geq \lambda_2(X)\geq \cdots \geq \lambda_n(X)$. If $X\in\mathbb{S}^n$ let $\diag(X)\in \mathbb{R}^n$ denote the vector of diagonal entries and if $x\in \mathbb{R}^n$ let $\diag(x)$ denote the diagonal matrix with diagonal entries given by $x$. The usage will be clear from the context. \section{Results} \label{sec:results} Our main contribution is to construct two different explicit polynomial-sized semidefinite representations of the derivative relaxations of the positive semidefinite cone. We call our two representations the \emph{derivative-based} and \emph{polar derivative-based} representations respectively. In this section we describe these representations, and outline the proof of our main theoretical result. \begin{theorem} \label{thm:main} For each positive integer $n$ and each $k=1,2,\ldots,n-1$, the cone $\psdcone{n}{k}$ has a semidefinite representation of size $O(\min\{k,n-k\}n^2)$. \end{theorem} We defer detailed proofs of the correctness of our representations to Sections~\ref{sec:main-pfs} and~\ref{sec:btn}. At this stage, we just highlight that there is essentially one basic algebraic fact that underlies all of our results. Whenever $V_n$ is an $n\times (n-1)$ matrix with orthonormal columns that are each orthogonal to $\mathbf{1}_n$, i.e.\ $V_n^TV_n = I_{n-1}$ and $V_n^T\mathbf{1}_n = 0$, then \[ e_{n-1}(x) = n\det(V_n^T\diag(x)V_n).\] We give a proof of this identity in Section~\ref{sec:main-pfs}. Note that this identity is independent of the particular choice of $V_n$ satisfying $V_n^TV_n = I_{n-1}$ and $V_n^T\mathbf{1}_n = 0$. 
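The identity $e_{n-1}(x) = n\det(V_n^T\diag(x)V_n)$ is easy to confirm numerically for small instances. The sketch below (our own check) builds a valid $V_n$ by Gram--Schmidt orthonormalization of a basis for the orthogonal complement of $\mathbf{1}_n$, and also re-runs the check with a second, different choice of $V_n$ to illustrate the independence remark:

```python
import itertools, math

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (pure Python)."""
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            c = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - c * bi for wi, bi in zip(w, b)]
        nrm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / nrm for wi in w])
    return basis

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if M[p][i] == 0.0:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return d

def e_k(x, k):
    """Elementary symmetric polynomial of degree k."""
    return sum(math.prod(c) for c in itertools.combinations(x, k))

def compress(cols, x):
    """Form V^T diag(x) V from the columns of V."""
    m = len(cols)
    return [[sum(cols[a][j] * x[j] * cols[b][j] for j in range(len(x)))
             for b in range(m)] for a in range(m)]

n = 4
x = [1.0, 2.0, 3.0, 4.0]
# columns of V_n span the orthogonal complement of the all-ones vector:
# start from e_i - (1/n) 1_n, i = 1..n-1, and orthonormalize
raw = [[(1.0 if j == i else 0.0) - 1.0 / n for j in range(n)] for i in range(n - 1)]
M = compress(gram_schmidt(raw), x)
assert abs(n * det(M) - e_k(x, n - 1)) < 1e-9   # e_3(1,2,3,4) = 50

# a different orthonormalization order gives a different valid V_n
M2 = compress(gram_schmidt(raw[::-1]), x)
assert abs(n * det(M2) - e_k(x, n - 1)) < 1e-9  # same value, as claimed
```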
In fact, all of the results expressed in terms of $V_n$ (notably Propositions~\ref{prop:RS1},~\ref{prop:RS2},~\ref{prop:RS1-dual}, and~\ref{prop:RS2-dual}) are similarly independent of the particular choice of $V_n$. Both of the representations are recursive in nature. The derivative-based representation is based on recursively applying two basic propositions (Propositions~\ref{prop:btn} and~\ref{prop:RS1}, to follow) to construct a chain of semidefinite representations of the form \begin{align} \framebox{$\psdcone{n}{k}$} \xleftarrow[\textup{Prop.~\ref{prop:btn}}]{O(n^2)} \orthant{n}{k} \xleftarrow[\textup{Prop.~\ref{prop:RS1}}]{0} \framebox{$\psdcone{n-1}{k-1}$}\xleftarrow[\textup{Prop.~\ref{prop:btn}}]{O((n-1)^2)}& \orthant{n-1}{k-1} \leftarrow \cdots\label{eq:td} \\ \cdots&\leftarrow \orthant{n-k+1}{1} \xleftarrow[\textup{Prop.~\ref{prop:RS1}}]{0} \framebox{$\psdcone{n-k}{0}$.}\nonumber \end{align} The annotated arrow $C\xleftarrow[\textup{Prop.~a}]{m} K$ indicates that given a semidefinite representation of $K$ of size $m'$ we can construct a semidefinite representation of $C$ of size $m'+m$, and that an explicit description of the construction is given in Proposition $a$. The base case of the recursion is just the positive semidefinite cone $\psdcone{n-k}{0}$, which has a trivial semidefinite representation. Hence starting from $\psdcone{n-k}{0}$ (which has a semidefinite representation of size $n-k$), we can apply Proposition~\ref{prop:RS1} to obtain a semidefinite representation of $\orthant{n-k+1}{1}$ of size $n-k$, then apply Proposition~\ref{prop:btn} to obtain a semidefinite representation of $\psdcone{n-k+1}{1}$ of size $(n-k) + O((n-k+1)^2)$, and so on. 
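Concretely, the chain \eqref{eq:td} can be enumerated programmatically. The following sketch (our own illustration; cones are represented as labeled tuples rather than actual constraint sets) simply walks the recursion:

```python
def derivative_chain(n, k):
    """Walk the derivative-based recursion (eq:td):
    psdcone(n,k) <- orthant(n,k) <- psdcone(n-1,k-1) <- ... <- psdcone(n-k,0)."""
    chain = []
    while k > 0:
        chain.append(("psdcone", n, k))   # btn-type step: adds O(n^2) to the size
        chain.append(("orthant", n, k))   # RS1-type step: a slice, adds nothing
        n, k = n - 1, k - 1
    chain.append(("psdcone", n, 0))       # base case: ordinary PSD cone
    return chain

c = derivative_chain(5, 3)
assert c[0] == ("psdcone", 5, 3) and c[-1] == ("psdcone", 2, 0)
assert len(c) - 1 == 2 * 3               # the chain has 2k arrows
```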
The polar derivative-based representation is based on recursively applying Proposition~\ref{prop:btn} together with a third basic proposition (Proposition~\ref{prop:RS2}, to follow) to construct a slightly different chain of semidefinite representations of the form \begin{align} \framebox{$\psdcone{n}{k}$} \xleftarrow[\textup{Prop.~\ref{prop:btn}}]{O(n^2)} \orthant{n}{k} \xleftarrow[\textup{Prop.~\ref{prop:RS2}}]{n} \framebox{$\psdcone{n-1}{k}$} \xleftarrow[\textup{Prop.~\ref{prop:btn}}]{O(n^2)} &\orthant{n-1}{k} \leftarrow \cdots\nonumber\\ \cdots & \leftarrow \orthant{k+2}{k} \xleftarrow[\textup{Prop.~\ref{prop:RS2}}]{n} \framebox{$\psdcone{k+1}{k}$.}\label{eq:bu} \end{align} Note that the base case of the recursion is just $\psdcone{k+1}{k} = \{X\in \mathbb{S}^{k+1}:\; \textup{tr}(X) \geq 0\}$, a half-space. \subsection{Building blocks of the two recursions} We now describe the constructions related to each of the types of arrows in the recursions sketched above. The arrows labeled by Proposition~\ref{prop:btn} assert that we can construct a semidefinite representation of $\psdcone{n}{k}$ from a semidefinite representation of $\orthant{n}{k}$. This can be done in the following way. \begin{proposition} \label{prop:btn} If $\orthant{n}{k}$ has a semidefinite representation of size $m$, then $\psdcone{n}{k}$ has a semidefinite representation of size $m+O(n^2)$. Indeed \begin{equation} \label{eq:whysh} \psdcone{n}{k} = \left\{X\in \mathbb{S}^n: \exists z\in \mathbb{R}^n\;\;\text{s.t.}\;\; z\in \orthant{n}{k},\;\;(X,z) \in \textup{SH}_n\right\}, \end{equation} where $\textup{SH}_n$ is the \emph{Schur-Horn cone} defined as \[ \textup{SH}_n = \left\{(X,z):\;\;z_1\geq z_2\geq \cdots \geq z_n,\;\; X\in \textup{conv}_{Q\in O(n)} \{Q^T\diag(z)Q\}\right\}\] i.e.~the set of pairs $(X,z)$ such that $X$ is in the convex hull of all symmetric matrices with ordered spectrum $z$. 
The Schur-Horn cone has the semidefinite characterization \begin{align} (X,z)\in\textup{SH}_n\quad\text{if and only if} &\quad z_1\geq z_2 \geq \cdots \geq z_n\;\;\text{and}\nonumber\\ \text{there exists} & \quad t_2,\ldots,t_{n-1}\in \mathbb{R},\;\;Z_2,\ldots,Z_{n-1} \succeq 0\;\;\nonumber\\ \text{such that}&\quad \textup{tr}(X) = \textstyle{\sum_{j=1}^{n}z_j},\;\;X \preceq z_1 I,\;\;\text{and}\nonumber\\ \text{for $\ell=2,\ldots,n-1$,} & \quad X \preceq t_{\ell} I + Z_{\ell}\;\;\text{and}\;\; \ell\cdot t_{\ell} + \textup{tr}(Z_{\ell}) \leq \textstyle{\sum_{j=1}^{\ell}z_j}.\nonumber \end{align} \end{proposition} Proposition~\ref{prop:btn} holds because of the \emph{symmetry} of $\psdcone{n}{k}$. In particular, it is a spectral set---invariant under conjugation by orthogonal matrices. The other reason this representation works is that the diagonal slice of $\psdcone{n}{k}$ is $\orthant{n}{k}$. We discuss this result in more detail in Section~\ref{sec:btn}. The arrows in~\eqref{eq:td} labeled by Proposition~\ref{prop:RS1} appear only in the derivative-based recursion. They assert that we can obtain a semidefinite representation of $\orthant{n}{k}$ from a semidefinite representation of $\psdcone{n-1}{k-1}$. Indeed we establish in Section~\ref{sec:RS1-pfs} that $\orthant{n}{k}$ is actually a slice of $\psdcone{n-1}{k-1}$. \begin{proposition} \label{prop:RS1} If $1\leq k \leq n-1$ then $\orthant{n}{k} = \left\{x\in \mathbb{R}^n: V_n^T\diag(x)V_n\in \psdcone{n-1}{k-1}\right\}$. \end{proposition} The arrows in~\eqref{eq:bu} labeled by Proposition~\ref{prop:RS2} appear only in the polar derivative-based recursion. They assert that we can obtain a semidefinite representation of $\orthant{n}{k}$ from a semidefinite representation of $\psdcone{n-1}{k}$. We establish the following in Section~\ref{sec:RS2-pfs}.
\begin{proposition} \label{prop:RS2} If $1\leq k \leq n-2$ then \[ \orthant{n}{k} = \left\{x\in \mathbb{R}^n: \exists Z\in \psdcone{n-1}{k}\;\;\text{s.t.}\;\; \diag(x) \succeq V_n Z V_n^T\right\}.\] \end{proposition} \subsection{Size of the representations} Recall that each arrow $C\xleftarrow{m} K$ in~\eqref{eq:td} and~\eqref{eq:bu} is labeled with the \emph{additional size} $m$ required to implement the representation of $C$ given a semidefinite representation of $K$. Since the derivative-based recursion has $2k$ arrows, it is immediate from \eqref{eq:td} that the derivative-based semidefinite representation of $\psdcone{n}{k}$ has size $O(kn^2)$ and so is of polynomial size. On the other hand, this approach gives a disappointingly large semidefinite representation of the half-space $\psdcone{n}{n-1} = \{X\in \mathbb{S}^n: \textup{tr}(X) \geq 0\}$ of size $O(n^3)$. The derivative-based approach cannot exploit the fact that this is a very simple cone. This is why we also consider the polar derivative-based representation, as it is designed around the fact that $\psdcone{n}{n-1}$ has a simple semidefinite representation. It is immediate from \eqref{eq:bu} that the polar derivative-based semidefinite representation of $\psdcone{n}{k}$ has size $O((n-k)n^2)$ and so is also of polynomial size. Furthermore, it gives small representations of size $O(n^2)$ exactly when the derivative-based representations are large, of size $O(n^3)$. For any given pair $(n,k)$ we should always use the derivative-based representation of $\psdcone{n}{k}$ if $k < n/2$ and the polar derivative-based representation when $k> n/2$. Theorem~\ref{thm:main} combines our two size estimates, stating that $\psdcone{n}{k}$ has a semidefinite representation of size $O(\min\{k,n-k\}n^2)$. 
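Proposition~\ref{prop:RS1} also lends itself to a quick numerical sanity check via the coefficient descriptions \eqref{eq:coef-deriv} and \eqref{eq:psd-coef-deriv}: membership of $V_n^T\diag(x)V_n$ in $\psdcone{n-1}{k-1}$ amounts to $E_1,\ldots,E_{n-k}\geq 0$, and these characteristic-polynomial coefficients can be obtained from traces of matrix powers via Newton's identities. The pure-Python sketch below (our own check, not from the paper) tests one point inside $\orthant{4}{2}$ and one outside:

```python
import itertools, math

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            c = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - c * bi for wi, bi in zip(w, b)]
        nrm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / nrm for wi in w])
    return basis

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def char_coeffs(M, kmax):
    """E_1,...,E_kmax from power traces p_j = tr(M^j) via Newton's identities:
    k E_k = sum_{i=1}^{k} (-1)^{i-1} E_{k-i} p_i."""
    m, P, powers = len(M), [], M
    for _ in range(kmax):
        P.append(sum(powers[i][i] for i in range(m)))
        powers = mat_mul(powers, M)
    E = [1.0]
    for k in range(1, kmax + 1):
        E.append(sum((-1) ** (i - 1) * E[k - i] * P[i - 1]
                     for i in range(1, k + 1)) / k)
    return E[1:]

def e_k(x, k):
    return sum(math.prod(c) for c in itertools.combinations(x, k))

n, k = 4, 2
raw = [[(1.0 if j == i else 0.0) - 1.0 / n for j in range(n)] for i in range(n - 1)]
cols = gram_schmidt(raw)          # columns of a valid V_4

def compressed(x):                # V_n^T diag(x) V_n
    return [[sum(cols[a][j] * x[j] * cols[b][j] for j in range(n))
             for b in range(n - 1)] for a in range(n - 1)]

x_in  = [-1.0, 1.0, 1.0, 1.0]     # e_1 = 2, e_2 = 0  => inside orthant(4, 2)
x_out = [-1.0, -1.0, 1.0, 1.0]    # e_2 = -2          => outside orthant(4, 2)
assert e_k(x_in, 1) >= 0 and e_k(x_in, 2) >= 0 and e_k(x_out, 2) < 0

E_in  = char_coeffs(compressed(x_in),  n - k)   # need E_1, E_2 >= 0
E_out = char_coeffs(compressed(x_out), n - k)
assert all(E >= -1e-9 for E in E_in)            # inside  psdcone(3, 1)
assert any(E <  -1e-9 for E in E_out)           # outside psdcone(3, 1)
```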
\subsection{Pseudocode for our derivative-based representation} We do not write out any of our semidefinite representations in full because the recursive descriptions given here are actually more naturally suited to implementation. To illustrate this, we give pseudocode for the MATLAB-based high-level modeling language YALMIP~\cite{lofberg2004yalmip} that `implements' the derivative-based representations of $\psdcone{n}{k}$ and $\orthant{n}{k}$. Decision variables are declared by expressions like \texttt{x = sdpvar(n,1);} which creates a decision variable \texttt{x} taking values in $\mathbb{R}^n$. An LMI object is a list of equality constraints and linear matrix inequality constraints that are linear in any declared decision variables. Suppose we have a function \texttt{SH(X,z)} that takes a pair of decision variables and returns an LMI object corresponding to the constraint that $(X,z)\in \textup{SH}_n$. This is easy to construct from the explicit semidefinite representation in Proposition~\ref{prop:btn}. Then the function \texttt{psdcone} takes an $n\times n$ symmetric matrix-valued decision variable \texttt{X} and returns an LMI object for the constraint $X\in \psdcone{n}{k}$. 
\begin{align*} \texttt{1:}\quad&\texttt{function K = \underline{psdcone}(X,k)}\\ \texttt{2:}\quad &\qquad\texttt{if k==0}\\ \texttt{3:}\quad &\qquad\qquad\texttt{K = [X >= 0];}\\ \texttt{4:}\quad &\qquad\texttt{else}\\ \texttt{5:}\quad &\qquad\qquad\texttt{z = sdpvar(size(X,1),1);}\\ \texttt{6:}\quad & \qquad\qquad\texttt{K = [\underline{orthant}(z,k), \underline{SH}(X,z)];}\\ \texttt{7:}\quad &\qquad\texttt{end}\\\intertext{It calls a function \texttt{orthant} that takes a decision variable \texttt{x} in $\mathbb{R}^n$ and returns an LMI object for the constraint $x\in \orthant{n}{k}$.} \texttt{1:}\quad&\texttt{function K = \underline{orthant}(x,k)}\\ \texttt{2:}\quad&\qquad\texttt{if k==0}\\ \texttt{3:}\quad&\qquad\qquad\texttt{K = [x >= 0];}\\ \texttt{4:}\quad& \qquad\texttt{else}\\ \texttt{5:}\quad& \qquad\qquad\texttt{V = null(ones(size(x))');}\\ \texttt{6:}\quad &\qquad\qquad\texttt{K = [\underline{psdcone}(V'*diag(x)*V,k-1)];}\\ \texttt{7:}\quad & \qquad\texttt{end} \end{align*} It is straightforward to adapt these two functions for the polar derivative-based representation: one need only change the base cases (lines 2--4 of each) and adapt line 6 of \texttt{orthant} to reflect Proposition~\ref{prop:RS2}. \subsection{Dual cones} If a cone is semidefinitely representable, so is its dual cone. In fact there are explicit procedures to take a semidefinite representation for a cone and produce a semidefinite representation for its dual cone~\cite[Section 4.1.1]{nemirovski2006advances}. Here we describe two explicit semidefinite representations of the dual cones $(\psdcone{n}{k})^*$ that enjoy the same recursive structure as the corresponding semidefinite representations of $\psdcone{n}{k}$. To construct them, we essentially dualize all the relationships given by the arrows in~\eqref{eq:td} and~\eqref{eq:bu}.
By straightforward applications of a conic duality argument, in Section~\ref{sec:duals} we establish the following dual analogues of Propositions~\ref{prop:RS1} and~\ref{prop:RS2}. \begin{propositiontwodual} \label{prop:RS1-dual} If $1\leq k \leq n-1$ then \[ (\orthant{n}{k})^{*} = \left\{\diag(V_n Y V_n^T):\;\; Y\in (\psdcone{n-1}{k-1})^*\right\}.\] \end{propositiontwodual} \begin{propositionthreedual} \label{prop:RS2-dual} If $1\leq k \leq n-2$ then \[ (\orthant{n}{k})^* = \left\{\diag(Y):\;\; Y \succeq 0,\;\; V_n^TYV_n \in (\psdcone{n-1}{k})^*\right\}.\] \end{propositionthreedual} We could also obtain a dual version of Proposition~\ref{prop:btn} by directly applying conic duality to the semidefinite representation in Proposition~\ref{prop:btn}. This would involve dualizing the semidefinite representation of $\textup{SH}_n$. Instead we give another, perhaps simpler, representation of $(\psdcone{n}{k})^*$ in terms of $(\orthant{n}{k})^*$ that is not obtained by directly applying conic duality to Proposition~\ref{prop:btn}. \begin{propositiononedual} \label{prop:btn-dual} If $(\orthant{n}{k})^*$ has a semidefinite representation of size $m$, then $(\psdcone{n}{k})^*$ has a semidefinite representation of size $m+O(n^2)$ given by \begin{equation} \label{eq:dualsh} (\psdcone{n}{k})^* = \left\{W\in \mathbb{S}^n: \exists y\in \mathbb{R}^{n}\;\;\text{s.t.}\;\; y\in (\orthant{n}{k})^*,\;\; (W,y)\in \textup{SH}_n\right\}. \end{equation} \end{propositiononedual} Recall that Proposition~\ref{prop:btn} holds because $\psdcone{n}{k}$ is invariant under orthogonal conjugation and $\orthant{n}{k}$ is the diagonal slice of $\psdcone{n}{k}$. While it is immediate that $(\psdcone{n}{k})^*$ is also orthogonally invariant, it is a less obvious result that the diagonal slice of $(\psdcone{n}{k})^*$ is $(\orthant{n}{k})^*$. We prove this in Section~\ref{sec:btn}. 
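The dual descriptions above rest on computing adjoints of the maps appearing in Propositions~\ref{prop:RS1} and~\ref{prop:RS2}, for instance the adjoint of $A(x) = V_n^T\diag(x)V_n$, which is $A^*(Y) = \diag(V_n Y V_n^T)$. As a quick numerical sanity check of this adjoint identity, here is a Python/NumPy sketch of our own (the ad hoc construction of $V_n$ via a projector eigendecomposition is an assumption, not taken from the text):

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)

# Orthonormal basis of the orthogonal complement of 1_n (one arbitrary choice).
w, U = np.linalg.eigh(np.eye(n) - np.ones((n, n)) / n)
V = U[:, w > 0.5]                      # n x (n-1), V^T V = I, V^T 1_n = 0

x = rng.standard_normal(n)
Y = rng.standard_normal((n - 1, n - 1))
Y = (Y + Y.T) / 2                      # a generic symmetric matrix

# <Y, A(x)> = <A*(Y), x> with A(x) = V^T diag(x) V, A*(Y) = diag(V Y V^T).
lhs = np.trace(Y @ (V.T @ np.diag(x) @ V))
rhs = np.diag(V @ Y @ V.T) @ x
assert np.isclose(lhs, rhs)
```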
The recursions underlying the derivative-based and polar derivative-based representations of $(\psdcone{n}{k})^*$ then take the form \begin{equation} \label{eq:dual-td} (\psdcone{n}{k})^* \leftarrow (\orthant{n}{k})^* \leftarrow (\psdcone{n-1}{k-1})^*\leftarrow \cdots \leftarrow (\orthant{n-k+1}{1})^* \leftarrow (\psdcone{n-k}{0})^* \end{equation} and, respectively, \begin{equation} \label{eq:dual-bu} (\psdcone{n}{k})^* \leftarrow (\orthant{n}{k})^* \leftarrow (\psdcone{n-1}{k})^* \leftarrow \cdots \leftarrow (\orthant{k+2}{k})^* \leftarrow (\psdcone{k+1}{k})^*. \end{equation} Note that for the dual derivative-based representation, the base case is $(\psdcone{n-k}{0})^* = \mathbb{S}_+^{n-k}$ (since the positive semidefinite cone is self dual). For the dual polar derivative-based representation the base case is $(\psdcone{k+1}{k})^* = \{tI_{k+1}: t\geq 0\}$, the ray generated by the identity matrix in $\mathbb{S}^{k+1}$. \subsection{Derivative relaxations of spectrahedral cones} \label{sec:spec} So far we have focused on the derivative relaxations of the positive semidefinite cone. It turns out that the derivative relaxations of spectrahedral cones are just slices of the associated derivative relaxations of the positive semidefinite cone. \begin{proposition} Suppose $p(x) = \det(\sum_{i=1}^{n}A_ix_i)$ where the $A_i$ are $m\times m$ symmetric matrices and $e\in \mathbb{R}^n$ is such that $\sum_{i=1}^{n}A_ie_i = B$ is positive definite. Then for $k=0,1,\ldots,m-1$, \[ \Lambda^{(k)}_{+}(p,e) = \left\{x\in \mathbb{R}^n: \sum_{i=1}^{n}B^{-1/2}A_iB^{-1/2}x_i \in \psdcone{m}{k}\right\}.\] \end{proposition} \begin{proof} Let $A(x) = \sum_{i=1}^{n}B^{-1/2}A_iB^{-1/2}x_i$. 
Then $A(e) = I$ and for all $x\in \mathbb{R}^n$ and all $t\in \mathbb{R}$ \[ p(x+te) = \det(B)\det(A(x+te)) = \det(B)\det(A(x)+tI).\] This implies that all the derivatives of $p$ in the direction $e$ are exactly the same as the corresponding derivatives of $\det(B)\det(X)$ in the direction $I$ evaluated at $X = A(x)$. Since $\det(B) > 0$, it follows that for $k=0,1,\ldots,m-1$, $x\in \Lambda_+^{(k)}(p,e)$ if and only if $A(x)\in \psdcone{m}{k}$. \end{proof} We conclude this section with an example of these constructions. \begin{example}[Derivative relaxations of a $3$-ellipse] Given foci $(0,0),(0,4)$ and $(3,0)$ in the plane, the $3$-ellipse consisting of points such that the sum of distances to the foci equals $8$ is shown in Figure~\ref{fig:3ellipse}. This is one connected component of the real algebraic curve of degree $8$ given by $\{(x,y)\in \mathbb{R}^2: \det \mathcal{E}(x,y,1)=0\}$ where $\mathcal{E}$ is defined in \eqref{eq:3ellipse} (see Nie et al.~\cite{nie2008semidefinite}). The region enclosed by this $3$-ellipse is the $z=1$ slice of the spectrahedral cone defined by $\mathcal{E}(x,y,z)\succeq 0$ where \begin{equation} \label{eq:3ellipse} \mathcal{E}(x,y,z)=\begin{bmatrix} 5z+3x & y & y-4z & 0 & y & 0 & 0 & 0\\ y & 5z+x & 0 & y-4z & 0 & y & 0 & 0\\ y-4z & 0 & 5z + x & y & 0 & 0 & y & 0\\ 0 & y-4z & y & 5z-x & 0 & 0 & 0 & y\\ y & 0 & 0 & 0 & 11z+x & y & y-4z & 0\\ 0 & y & 0 & 0 & y & 11z-x & 0 & y-4z\\ 0 & 0 & y & 0 & y-4z & 0 & 11z-x & y\\ 0 & 0 & 0 & y & 0 & y-4z & y & 11z-3x\end{bmatrix}. \end{equation} Note that $\mathcal{E}(0,0,1) \succ 0$ and so $e=(0,0,1)$ is a direction of hyperbolicity for $p(x,y,z) = \det \mathcal{E}(x,y,z)$. The left of Figure~\ref{fig:3ellipse} shows the $z=1$ slice of the cone $\Lambda_+(p,e)$ and its first three derivative relaxations $\Lambda_+^{(1)}(p,e),\Lambda_+^{(2)}(p,e)$, and $\Lambda_+^{(3)}(p,e)$. 
The right of Figure~\ref{fig:3ellipse} shows the $z=1$ slice of the cones $(\Lambda_+(p,e))^*, (\Lambda^{(1)}_+(p,e))^*, (\Lambda^{(2)}_+(p,e))^*$, and $(\Lambda_{+}^{(3)}(p,e))^*$. All of these convex bodies were plotted by computing 200 points on their respective boundaries by optimizing 200 different linear functionals over them. We performed the optimization by modeling our semidefinite representations of these cones in YALMIP \cite{lofberg2004yalmip} which numerically solved the corresponding semidefinite program using SDPT3 \cite{toh1999sdpt3}. \end{example} \begin{figure} \begin{center} \includegraphics[width = 0.4\linewidth]{figures/Fig1a}\hspace{1cm} \raisebox{0.35cm}{\includegraphics[width=0.3\linewidth]{figures/Fig1b}} \end{center} \caption{\label{fig:3ellipse} On the left, the inner region is the $3$-ellipse consisting of points with sum-of-distances to $(0,0),(0,4)$, and $(3,0)$ equal to $8$, i.e.~the $z=1$ slice of the spectrahedral cone defined by \eqref{eq:3ellipse}. The outer three regions are the $z=1$ slices of the first three derivative relaxations of this spectrahedral cone in the direction $(0,0,1)$. On the right are the $z=1$ slices of the dual cones of the cones shown on the left, with dual pairs having the same shading.} \end{figure} \section{The derivative-based and polar derivative-based recursive constructions} \label{sec:main-pfs} In this section we prove Proposition~\ref{prop:RS1} which relates $\orthant{n}{k}$ and $\psdcone{n-1}{k-1}$ as well as Proposition~\ref{prop:RS2} which relates $\orthant{n}{k}$ and $\psdcone{n-1}{k}$. These relationships are the geometric consequences of polynomial identities between elementary symmetric polynomials and determinants. 
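Both of the identities driving the two recursions, the determinantal formula~\eqref{eq:cosw} for the derivative and~\eqref{eq:buid} for the polar derivative, are easy to check numerically at random points before working through the proofs. The following Python/NumPy sketch is our own illustration; the construction of $V_n$ is one arbitrary choice of orthonormal basis for the complement of $\mathbf{1}_n$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n) + 2.0       # generic point with e_1(x) != 0
t = rng.standard_normal()

# Orthonormal basis of the orthogonal complement of 1_n.
w, U = np.linalg.eigh(np.eye(n) - np.ones((n, n)) / n)
V = U[:, w > 0.5]

def e_sym(v, k):
    """Elementary symmetric polynomial e_k(v), via coefficients of prod_i (s + v_i)."""
    c = np.array([1.0])
    for vi in v:
        c = np.convolve(c, [1.0, vi])
    return c[k]

# Derivative identity: e_{n-1}(x + t*1) = n * det(V^T diag(x) V + t*I).
lhs = e_sym(x + t, n - 1)
rhs = n * np.linalg.det(V.T @ np.diag(x) @ V + t * np.eye(n - 1))
assert np.isclose(lhs, rhs)

# Polar derivative identity: d/ds e_n(s*x + t*1)|_{s=1}
#                            = e_1(x) * det((M/M_22)(x) + t*I).
polar = sum(x[j] * np.prod(np.delete(x + t, j)) for j in range(n))
m12 = V.T @ x / np.sqrt(n)
schur = V.T @ np.diag(x) @ V - np.outer(m12, m12) / (x.sum() / n)
assert np.isclose(polar, x.sum() * np.linalg.det(schur + t * np.eye(n - 1)))
```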
Specifically the proof of Proposition~\ref{prop:RS1} makes use of a determinantal representation (Equation~\eqref{eq:cosw} in Section~\ref{sec:RS1-pfs}) of the derivative \begin{equation} \label{eq:derivexpansion} \textstyle{\left.\frac{\partial}{\partial t}e_n(sx+t\mathbf{1}_n)\right|_{s=1}} = \left[1 \cdot e_{n-1}(x) + \cdots + (n-1)\cdot e_1(x)t^{n-2} + n\cdot t^{n-1}\right]. \end{equation} (Note that $s$ plays no role in~\eqref{eq:derivexpansion}; we include it to highlight the relationship with~\eqref{eq:polarexpansion}.) Similarly the proof of Proposition~\ref{prop:RS2} relies on a determinantal expression (Equation~\eqref{eq:buid} in Section~\ref{sec:RS2-pfs}) for the polar derivative \begin{equation} \textstyle{\left.\frac{\partial}{\partial s}e_n(sx+t\mathbf{1}_n)\right|_{s=1}} = \left[n\cdot e_n(x) + (n-1)\cdot e_{n-1}(x)t + \cdots + 1\cdot e_1(x)t^{n-1}\right].\label{eq:polarexpansion} \end{equation} This explains why we call one the \emph{derivative-based representation}, and the other the \emph{polar derivative-based representation}. \subsection{The derivative-based recursion: relating $\orthant{n}{k}$ and $\psdcone{n-1}{k-1}$} \label{sec:RS1-pfs} Let $V_n$ denote an (arbitrary) $n\times (n-1)$ matrix satisfying $V_n^TV_n = I_{n-1}$ and $V_n^T\mathbf{1}_n = 0$. Our results in this section and the next stem from the following identity. \begin{lemma} \label{lem:main-id} For all $x\in \mathbb{R}^n$ and all $t\in \mathbb{R}$, \begin{equation} \label{eq:cosw} \textstyle{\left.\frac{\partial}{\partial t}e_n(sx+t\mathbf{1}_n)\right|_{s=1}} = e_{n-1}(x+t\mathbf{1}_n) = n\det(V_n^T\diag(x)V_n+tI_{n-1}). \end{equation} \end{lemma} This is a special case of an identity established by Choe et al.~\cite[Corollary 8.2]{choe2004homogeneous} and is closely related to Sanyal's result~\cite[Theorem 1.1]{sanyal2013derivative}. The proof of Choe et al.~uses the Cauchy-Binet identity. Here we provide an alternative proof.
\begin{proof} The polynomial $e_{n-1}(x_1,x_2,\ldots,x_n)$ is characterized by satisfying $e_{n-1}(\mathbf{1}_n) = n$, and by being symmetric, homogeneous of degree $n-1$ and of degree one in each of the $x_i$. We show below that $n\det(V_n^T\diag(x)V_n)$ also has these properties, and hence that $e_{n-1}(x) = n\det(V_n^T\diag(x)V_n)$. The stated result then follows because $V_n^TV_n = I_{n-1}$ implies \[ e_{n-1}(x+t\mathbf{1}_n) = n\det(V_n^T\diag(x+t\mathbf{1}_n)V_n) = n\det(V_n^T\diag(x)V_n + tI_{n-1}).\] Now, it is clear that $\det(V_n^T\diag(x)V_n)$ is homogeneous of degree $n-1$ and that \[ n\det(V_n^T\diag(\mathbf{1}_n)V_n) = n\det(I_{n-1}) = n.\] It remains to establish that $\det(V_n^T\diag(x)V_n)$ is symmetric and of degree one in each of the $x_i$. To do so we repeatedly use the fact that if $V_n$ and $U_n$ both have orthonormal columns that span the orthogonal complement of $\mathbf{1}_n$ then $\det(V_n^T\diag(x)V_n) = \det(U_n^T\diag(x)U_n)$. The polynomial $\det(V_n^T\diag(x)V_n)$ is symmetric because for any $n\times n$ permutation matrix $P$ the columns of $V_n$ and $P V_n$ respectively are both orthonormal and each spans the orthogonal complement of $\mathbf{1}_n$ (because $P\mathbf{1}_n = \mathbf{1}_n$). Hence \[ \det(V_n^T\diag(Px)V_n) = \det((PV_n)^T\diag(x)(P V_n)) = \det(V_n^T\diag(x)V_n).\] We finally show that $\det(V_n^T\diag(x)V_n)$ is of degree one in each $x_i$ by a convenient choice of $V_n$. For any $i$, we can always choose $V_n$ to be of the form \[ V_n^T = \begin{bmatrix} v_1 & \cdots & v_{i-1} & \sqrt{\frac{n-1}{n}}e_i & v_{i+1} & \cdots & v_n\end{bmatrix}\] where $e_i$ is the $i$th standard basis vector in $\mathbb{R}^{n-1}$. Then \[ \det(V_n^T\diag(x)V_n) = \det\left(x_i\left(\textstyle{\frac{n-1}{n}}\right)e_ie_i^T + \textstyle{\sum_{j\neq i}}x_j v_jv_j^T\right)\] which is of degree one in $x_i$ by the linearity of the determinant in its $i$th column.
\end{proof} As observed by Sanyal, such a determinantal identity for $e_{n-1}(x)$ establishes that $\orthant{n}{1}$ is a slice of $\mathbb{S}_+^{n-1} = \psdcone{n-1}{1-1}$. We now have two expressions for the derivative $\left.\frac{\partial}{\partial t}e_n(sx+t\mathbf{1}_n)\right|_{s=1}$, one from the definition and one from~\eqref{eq:cosw}. Comparing them allows us to deduce Proposition~\ref{prop:RS1}, that $\orthant{n}{k}$ is a slice of $\psdcone{n-1}{k-1}$ for all $1\leq k \leq n-1$. \begin{proof}[of Proposition~\ref{prop:RS1}] From~\eqref{eq:derivexpansion} and~\eqref{eq:cosw} we see that \begin{align*} \textstyle{\left.\frac{\partial}{\partial t}e_n(sx+t\mathbf{1}_n)\right|_{s=1}} & = \left[1 \cdot e_{n-1}(x) + \cdots + (n-1)\cdot e_1(x)t^{n-2} + n\cdot t^{n-1}\right]\\ & = n\left[E_{n-1}(V_n^T\diag(x)V_n) + \cdots + E_1(V_n^T\diag(x)V_n)t^{n-2} + t^{n-1}\right]. \end{align*} Comparing coefficients of $t^{i-1}$ on both sides, we see that for $i=1,2,\ldots,n$ \[ nE_{n-i}(V_n^T\diag(x)V_n) = i\,e_{n-i}(x).\] Since the factors $n$ and $i$ are positive, $e_{n-i}(x)\geq 0$ if and only if $E_{n-i}(V_n^T\diag(x)V_n)\geq 0$. Hence for $k=1,2,\ldots,n-1$, $x\in \orthant{n}{k}$ if and only if $V_n^T\diag(x)V_n\in \psdcone{n-1}{k-1}$. \end{proof} \subsection{The polar derivative-based recursion: relating $\orthant{n}{k}$ and $\psdcone{n-1}{k}$} \label{sec:RS2-pfs} In this section we relate $\orthant{n}{k}$ with $\psdcone{n-1}{k}$, eventually proving Proposition~\ref{prop:RS2}. Our argument follows a pattern similar to the previous section. First we give a determinantal expression for the polar derivative $\left.\frac{\partial}{\partial s}e_{n}(sx+t\mathbf{1}_n)\right|_{s=1}$, and then interpret it geometrically. While our approach here is closely related to the approach of the previous section, things are a little more complicated. This is not surprising because our construction aims to express $\orthant{n}{k}$, which has an algebraic boundary of degree $n-k$, in terms of $\psdcone{n-1}{k}$, which has an algebraic boundary of \emph{smaller} degree, $n-k-1$.
Hence it is not possible for $\orthant{n}{k}$ simply to be a slice of $\psdcone{n-1}{k}$. \paragraph{Block matrix notation:} Let $\hat{\mathbf{1}}_n = \mathbf{1}_{n}/\sqrt{n}$ and define $Q_n = \begin{bmatrix} V_n & \hat{\mathbf{1}}_n\end{bmatrix}$ noting that $Q_n$ is orthogonal. It is convenient to introduce the block matrix \begin{equation} \label{eq:Mx} M(x) := Q_n^T\diag(x)Q_n = \begin{bmatrix} V_n^T\diag(x)V_n & V_n^T\diag(x)\hat{\mathbf{1}}_n\\ \hat{\mathbf{1}}_n^T\diag(x)V_n & \hat{\mathbf{1}}_n^T\diag(x)\hat{\mathbf{1}}_n\end{bmatrix} =:\begin{bmatrix} M_{11}(x) & M_{12}(x)\\ M_{12}(x)^T & M_{22}(x)\end{bmatrix} \end{equation} which reflects the fact that it is natural to work in coordinates that are adapted to the symmetry of the problem. (Indeed $\hat{\mathbf{1}}_n$ and the columns of $V_n$ each span invariant subspaces for the permutation action on the coordinates of $\mathbb{R}^n$.) \paragraph{Schur complements:} In this section our results are expressed naturally in terms of the \emph{Schur complement} $ (M/M_{22})(x) := M_{11}(x) - M_{12}(x)M_{22}(x)^{-1}M_{12}(x)^T$ which is well defined whenever $e_1(x) = nM_{22}(x) \neq 0$. The following lemma summarizes the main properties of the Schur complement that we use. \begin{lemma} \label{lem:SCprop} If $M = \left[\begin{smallmatrix} M_{11}& M_{12}\\M_{12}^T & M_{22}\end{smallmatrix}\right]$ is a partitioned symmetric matrix with non-zero scalar $M_{22}$ and $M/M_{22} := M_{11} - M_{12}M_{22}^{-1}M_{12}^T$ then \begin{equation} \begin{bmatrix} M_{11} & M_{12}\\ M_{12}^T & M_{22}\end{bmatrix} = \begin{bmatrix} I_{n-1} & M_{12}M_{22}^{-1}\\ 0 & I_{1}\end{bmatrix} \begin{bmatrix} M/M_{22} & 0\\ 0 & M_{22}\end{bmatrix} \begin{bmatrix} I_{n-1} & 0\\M_{22}^{-1}M_{12}^T & I_1\end{bmatrix}. \label{eq:blockfactorization} \end{equation} This factorization immediately implies the following properties.
\begin{itemize} \item If $M$ is invertible then the $(1,1)$ block of $M^{-1}$ is given by $[M^{-1}]_{11} = (M/M_{22})^{-1}$. \item If $M_{22}>0$ then \[ M \succeq 0 \Longleftrightarrow M/M_{22} \succeq 0.\] \end{itemize} \end{lemma} We now establish our determinantal expression for the polar derivative. \begin{lemma} \label{lem:polar} If $e_1(x) = nM_{22}(x) \neq 0$ then \begin{equation} \label{eq:buid} \textstyle{\left.\frac{\partial}{\partial s} e_n(sx+t\mathbf{1}_n)\right|_{s=1}} = e_1(x)\det((M/M_{22})(x) + tI_{n-1}). \end{equation} \end{lemma} \begin{proof} First assume $x_i \neq 0$ for $i=1,2,\ldots,n$. If $x\in \mathbb{R}^n$ let $x^{-1}$ denote its entry-wise inverse. Exploiting our determinantal expression for the derivative we see that \begin{align} \textstyle{\frac{\partial}{\partial s}e_n(sx+t\mathbf{1}_n)} & = \textstyle{e_n(x)\frac{\partial}{\partial s}e_{n}(s\mathbf{1}_n+tx^{-1})}\nonumber\\ & = e_n(x)e_{n-1}(s\mathbf{1}_n+tx^{-1})\nonumber\\ & \stackrel{*}{=} e_n(x)\,n\,\det(V_n^T\diag(tx^{-1}+s\mathbf{1}_n)V_n)\nonumber\\ & = e_n(x)\,n\,\det(V_n^{T}\diag(x^{-1})V_n)\det(tI_{n-1} + s(V_n^T\diag(x^{-1})V_n)^{-1})\nonumber\\ & \stackrel{*}{=} e_n(x)e_{n-1}(x^{-1})\det(tI_{n-1} + s(V_n^{T}\diag(x^{-1})V_n)^{-1})\nonumber\\ & = e_1(x)\det(tI_{n-1} + s(V_n^T\diag(x^{-1})V_n)^{-1})\label{eq:idsc} \end{align} where the equalities marked with an asterisk are due to~\eqref{eq:cosw}. Since $Q_n$ is orthogonal, $M(x)^{-1} = (Q_n^T\diag(x)Q_n)^{-1} = Q_n^T\diag(x^{-1})Q_n = M(x^{-1})$. Hence using a property of the Schur complement from Lemma~\ref{lem:SCprop} we see that \[ (V_n^T\diag(x^{-1})V_n)^{-1} = [M(x^{-1})]_{11}^{-1} = [M(x)^{-1}]_{11}^{-1} = (M/M_{22})(x).\] Substituting this into~\eqref{eq:idsc} and setting $s=1$ establishes the stated identity whenever all $x_i\neq 0$. Both sides of~\eqref{eq:buid} are continuous in $x$ wherever $e_1(x)\neq 0$, and the points with all $x_i \neq 0$ are dense, so the identity is valid for all $x$ such that $e_1(x) = nM_{22}(x) \neq 0$. \end{proof} We now have two expressions for the polar derivative, namely~\eqref{eq:polarexpansion} and~\eqref{eq:buid}.
One comes from the definition of polar derivative, the other from the determinantal representation of Lemma~\ref{lem:polar}. Expanding each and equating coefficients gives the following identities. \begin{lemma} \label{lem:polarids} Let $x\in \mathbb{R}^n$ be such that $e_1(x) = n M_{22}(x) \neq 0$. Then for $k=0,1,2,\ldots,n-1$ \[ e_1(x)E_{n-1-k}((M/M_{22})(x)) = (n-k)e_{n-k}(x).\] \end{lemma} \begin{proof} Expanding the polar derivative two ways (from Lemma~\ref{lem:polar} and~\eqref{eq:polarexpansion}) we obtain \begin{align*} \textstyle{\left.\frac{\partial}{\partial s}e_n(sx+t\mathbf{1}_n)\right|_{s=1}} & = \left[n\cdot e_n(x) + (n-1)\cdot e_{n-1}(x)t + \cdots + 1\cdot e_1(x)t^{n-1}\right]\\ & = e_1(x)\left[E_{n-1}((M/M_{22})(x)) + E_{n-2}((M/M_{22})(x))t + \cdots + t^{n-1}\right]. \end{align*} The result follows by equating coefficients of $t^k$. \end{proof} We are now in a position to prove the main result of this section. \begin{proof}[of Proposition~\ref{prop:RS2}] From the definition of $M(x)$ in~\eqref{eq:Mx}, observe that because $Q_n$ is orthogonal, the constraint $\diag(x) \succeq V_n Z V_n^T$ holds if and only if \[M(x) = Q_n^T\diag(x)Q_n \succeq Q_n^T(V_n Z V_n^T)Q_n = \left[\begin{smallmatrix} Z & 0\\0 & 0\end{smallmatrix}\right].\] Hence we aim to establish the following statement that is equivalent to Proposition~\ref{prop:RS2} \[ \orthant{n}{k} = \left\{x\in \mathbb{R}^n:\,\exists Z\in \psdcone{n-1}{k}\;\;\text{s.t.}\;\; M(x) \succeq \begin{bmatrix} Z & 0\\0 & 0\end{bmatrix}\right\}\quad\text{for $k=1,2,\ldots,n-2$}.\] The arguments that follow repeatedly use the fact (from Lemma~\ref{lem:SCprop}) that if $e_1(x) = n M_{22}(x) > 0$ then \begin{equation} \label{eq:SCspec} M(x) \succeq \begin{bmatrix} Z & 0\\0 & 0\end{bmatrix}\quad \Longleftrightarrow\quad (M/M_{22})(x) \succeq Z. \end{equation} With these preliminaries established, we turn to the proof of Proposition~\ref{prop:RS2}. 
First suppose there is $Z\in \psdcone{n-1}{k}$ such that $M(x) - \left[\begin{smallmatrix} Z & 0\\0 & 0\end{smallmatrix}\right]\succeq 0$. There are two cases to consider, depending on whether $M_{22}(x)$ is positive or zero. Suppose we are in the case where $e_1(x) = nM_{22}(x) > 0$. Then, by~\eqref{eq:SCspec}, $(M/M_{22})(x) \succeq Z$, so there is some $Z' \in \mathbb{S}^{n-1}_+$ such that \[ (M/M_{22})(x) = Z+Z' \in \psdcone{n-1}{k} + \mathbb{S}^{n-1}_+ = \psdcone{n-1}{k}\] where the last equality holds because $\psdcone{n-1}{k} \supset \mathbb{S}^{n-1}_+$. It follows that $x\in \orthant{n}{k}$ because $e_1(x) > 0$ (by assumption) and by Lemma~\ref{lem:polarids}, \[ie_{i}(x) = e_1(x)E_{i-1}((M/M_{22})(x)) \geq 0\quad\text{for $i=2,3,\ldots,n-k$}.\] Now consider the case where $e_1(x) = nM_{22}(x) = 0$. Since \[ \begin{bmatrix} M_{11}(x) -Z & M_{12}(x)\\M_{12}(x)^{T} & M_{22}(x)\end{bmatrix} = \begin{bmatrix} M_{11}(x) - Z & V_n^T x/\sqrt{n}\\ x^TV_n/\sqrt{n} & 0\end{bmatrix} \succeq 0\] it follows that $V_n^Tx = 0$. Since also $\hat{\mathbf{1}}_n^T x = e_1(x)/\sqrt{n} = 0$, we see that $Q_n^Tx = 0$, so $x=0\in \orthant{n}{k}$. Consider the reverse inclusion and suppose $x\in \orthant{n}{k}$. Again there are two cases depending on whether $e_1(x)$ is positive or zero. If $e_1(x) > 0$ take $Z = (M/M_{22})(x)$. Then, by~\eqref{eq:SCspec}, $M(x) \succeq \left[\begin{smallmatrix} Z & 0\\0 & 0\end{smallmatrix}\right]$. To see that $Z\in \psdcone{n-1}{k}$ note that by Lemma~\ref{lem:polarids}, \[E_{i}((M/M_{22})(x)) = (i+1)\frac{e_{i+1}(x)}{e_{1}(x)} \geq 0\quad\text{for $i=1,2,\ldots,n-1-k$}.\] If $x\in \orthant{n}{k}$ and $e_1(x)=0$ then we use the assumption that $k \leq n-2$. Under this assumption $x\in \orthant{n}{k}\cap\{x: e_1(x) = 0\} = \{0\}$, since $e_1(x)=0$ and $e_2(x)\geq 0$ together force $\sum_{i=1}^{n}x_i^2 = e_1(x)^2 - 2e_2(x) \leq 0$ and hence $x=0$. In this case we can simply take $Z=0\in \psdcone{n-1}{k}$ since $M(x) = 0 \succeq 0 = \left[\begin{smallmatrix} Z & 0\\0 & 0\end{smallmatrix}\right]$.
\end{proof} \subsection{Dual relationships} \label{sec:duals} We conclude this section by establishing Propositions~\ref{prop:RS1-dual} and~\ref{prop:RS2-dual}, the dual versions of Propositions~\ref{prop:RS1} and~\ref{prop:RS2}. Both follow from general results about conic duality, such as the following rephrasing of \cite[Corollary 16.3.2]{rockafellar1997convex}. \begin{lemma} \label{lem:dual-general} Suppose $K\subset \mathbb{R}^m$ is a closed convex cone and $A:\mathbb{R}^p\rightarrow \mathbb{R}^m$ and $B:\mathbb{R}^{p}\rightarrow \mathbb{R}^n$ are linear maps. Let \[ C = \{B(x): A(x)\in K\}\subset \mathbb{R}^{n}.\] Furthermore, assume that there is $x_0\in \mathbb{R}^p$ such that $A(x_0)$ is in the relative interior of $K$. Then \[ C^* = \{w\in \mathbb{R}^{n}: \exists y\in K^*\;\;\text{s.t.}\;\; B^*(w) = A^*(y)\}.\] \end{lemma} \begin{proof}[of Proposition~\ref{prop:RS1-dual}] Define $A: \mathbb{R}^{n}\rightarrow \mathbb{S}^{n-1}$ by $A(x) = V_n^T\diag(x)V_n$ and define $B$ to be the identity on $\mathbb{R}^n$. Then by Proposition~\ref{prop:RS1} \[ \orthant{n}{k} = \{B(x): A(x)\in \psdcone{n-1}{k-1}\}.\] Clearly $B^*$ is the identity on $\mathbb{R}^n$ and $A^*:\mathbb{S}^{n-1}\rightarrow \mathbb{R}^n$ is given by $A^*(Y) = \diag(V_n Y V_n^T)$. Since $A(\mathbf{1}_n) = I_{n-1}$ is in the interior of $\psdcone{n-1}{k-1}$, applying Lemma~\ref{lem:dual-general} we obtain \[ (\orthant{n}{k})^* = \{w\in \mathbb{R}^{n}: \exists Y\in (\psdcone{n-1}{k-1})^* \;\;\text{s.t.}\;\; w = \diag(V_n Y V_n^T).\}\] Eliminating $w$ gives the statement in Proposition~\ref{prop:RS1-dual}. \end{proof} \begin{proof}[of Proposition~\ref{prop:RS2-dual}] Define $A: \mathbb{R}^{n}\times \mathbb{S}^{n-1} \rightarrow \mathbb{S}^{n}\times \mathbb{S}^{n-1}$ by \[ A(x,Z) = (\diag(x) - V_n Z V_n^T,Z)\] and $B:\mathbb{R}^{n}\times \mathbb{S}^{n-1}\rightarrow \mathbb{R}^n$ by $B(x,Z) = x$. 
Then by Proposition~\ref{prop:RS2} \[ \orthant{n}{k} = \{B(x,Z): A(x,Z) \in \mathbb{S}_+^n \times \psdcone{n-1}{k}\}.\] A straightforward computation shows that $B^*:\mathbb{R}^n\rightarrow \mathbb{R}^n\times \mathbb{S}^{n-1}$ is given by $B^*(w) = (w,0)$. Furthermore $A^*:\mathbb{S}^{n}\times \mathbb{S}^{n-1}\rightarrow \mathbb{R}^{n}\times \mathbb{S}^{n-1}$ is given by $A^*(Y,W) = (\diag(Y), W-V_n^TYV_n)$. Since $A(\mathbf{1}_n,\tfrac{1}{2}I_{n-1}) = (\tfrac{1}{2}(I_n + \hat{\mathbf{1}}_n\hat{\mathbf{1}}_n^T),\tfrac{1}{2}I_{n-1})$ is in the interior of $\mathbb{S}_+^n\times \psdcone{n-1}{k}$, applying Lemma~\ref{lem:dual-general} we obtain \[ (\orthant{n}{k})^* = \{w\in \mathbb{R}^n: \exists (Y,W)\in \mathbb{S}_+^n\times (\psdcone{n-1}{k})^*\;\;\text{s.t.}\;\; w = \diag(Y),\;\; V_n^T Y V_n = W\}.\] Eliminating $W$ and $w$ gives the statement in Proposition~\ref{prop:RS2-dual}. \end{proof} \section{Exploiting symmetry: relating $\psdcone{n}{k}$ and $\orthant{n}{k}$ and their dual cones} \label{sec:btn} In the introduction we observed that $\psdcone{n}{k}$ is invariant under the action of orthogonal matrices by conjugation on $\mathbb{S}^n$ and that its diagonal slice is $\orthant{n}{k}$. In this section we explain how to use these properties to construct the semidefinite representation of $\psdcone{n}{k}$ in terms of $\orthant{n}{k}$ stated in Proposition~\ref{prop:btn}. We then discuss how the duals of these two cones relate. The material in this section is well known, so in some places we give appropriate references to the literature rather than providing proofs. Let $O(n)$ denote the group of $n\times n$ orthogonal matrices. The \emph{Schur-Horn cone} is \begin{equation} \label{eq:SH} \textup{SH}_n = \left\{(X,z):\;\;z_1\geq z_2\geq \cdots \geq z_n,\;\;X\in \textup{conv}_{Q\in O(n)} \{Q^T\diag(z)Q\}\right\}, \end{equation} the set of pairs $(X,z)$ such that $z$ is in weakly decreasing order and $X$ is in the convex hull of symmetric matrices with ordered spectrum $z$.
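The definition~\eqref{eq:SH} can be illustrated numerically using the classical majorization characterization of $\textup{SH}_n$ recalled below (Lemma~\ref{lem:maj}): a convex combination of symmetric matrices with common ordered spectrum $z$ has the same trace as $\diag(z)$, and its eigenvalues are majorized by $z$. A Python/NumPy sketch of our own, with random orthogonal matrices drawn via QR factorization:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
z = np.sort(rng.standard_normal(n))[::-1]      # weakly decreasing spectrum

# A random point of conv_{Q in O(n)} {Q^T diag(z) Q}.
Qs = [np.linalg.qr(rng.standard_normal((n, n)))[0] for _ in range(4)]
theta = rng.random(4)
theta /= theta.sum()
X = sum(t * Q.T @ np.diag(z) @ Q for t, Q in zip(theta, Qs))

lam = np.sort(np.linalg.eigvalsh(X))[::-1]     # eigenvalues, decreasing

# (X, z) in SH_n: traces agree and lambda(X) is majorized by z.
assert np.isclose(lam.sum(), z.sum())
assert np.all(np.cumsum(lam) <= np.cumsum(z) + 1e-9)
```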
We call this the Schur-Horn cone because all \emph{symmetric Schur-Horn orbitopes} \cite{sanyal2011orbitopes} appear as slices of $\textup{SH}_n$ of the form $\{X: (X,z_0)\in \textup{SH}_n\}$ where $z_0$ is fixed and in weakly decreasing order. Whenever a convex subset $C\subset \mathbb{S}^n$ is invariant under orthogonal conjugation, i.e.~$C$ is a spectral set, we can express $C$ in terms of the Schur-Horn cone and the (hopefully simpler) diagonal slice of $C$ as follows. \begin{lemma} \label{lem:sym} If $C\subset\mathbb{S}^n$ is convex and invariant under orthogonal conjugation then \[ C = \{X\in \mathbb{S}^n: \exists z\in \mathbb{R}^n\;\;\text{s.t.}\;\; (X,z)\in \textup{SH}_n,\;\;\diag(z)\in C\}.\] \end{lemma} \begin{proof} Suppose $X\in C$. Take $z = \lambda(X)$, the ordered vector of eigenvalues of $X$. Then there is some $Q\in O(n)$ such that $X = Q^T\diag(\lambda(X))Q$ so $(X,\lambda(X))\in \textup{SH}_n$. By the orthogonal invariance of $C$, $X\in C$ implies that $QXQ^T = \diag(\lambda(X))\in C$. For the reverse inclusion, suppose there is $z\in \mathbb{R}^n$ such that $(X,z)\in \textup{SH}_n$ and $\diag(z)\in C$. Then by the orthogonal invariance of $C$, $Q^T\diag(z)Q\in C$ for all $Q\in O(n)$. Since $C$ is convex, $\textup{conv}_{Q\in O(n)}\{Q^T\diag(z)Q\} \subseteq C$. Hence $(X,z)\in \textup{SH}_n$ implies that \[ X\in \textup{conv}_{Q\in O(n)}\{Q^T\diag(z)Q\} \subseteq C.\] \end{proof} The first statement in Proposition~\ref{prop:btn} follows from Lemma~\ref{lem:sym} by recalling that $\psdcone{n}{k}$ is orthogonally invariant and $\orthant{n}{k} = \{z\in \mathbb{R}^n: \diag(z)\in \psdcone{n}{k}\}$. Proving the remainder of Proposition~\ref{prop:btn} then reduces to establishing the correctness of the stated semidefinite representation of $\textup{SH}_n$. This can be deduced from the following two well-known results. 
\begin{lemma} \label{lem:maj} If $\lambda(X)$ is ordered so that $\lambda_1(X)\geq \cdots \geq \lambda_{n}(X)$ then $(X,z)\in\textup{SH}_n$ if and only if $z_1\geq z_2 \geq \cdots \geq z_n$, \[ \textup{tr}(X) = \sum_{i=1}^{n}\lambda_i(X) = \sum_{i=1}^{n}z_i,\quad\text{and}\quad \sum_{i=1}^{\ell}\lambda_i(X) \leq \sum_{i=1}^{\ell}z_i \quad\text{for $\ell=1,2,\ldots,n-1$}.\] \end{lemma} In other words $(X,z)\in \textup{SH}_n$ if and only if $z$ is weakly decreasing and $\lambda(X)$ is \emph{majorized} by $z$. This is discussed, for example, in~\cite[Corollary 3.2]{sanyal2011orbitopes}. To turn this characterization into a semidefinite representation, it suffices to have semidefinite representations of the epigraphs of the convex functions $s_{\ell}(X) := \sum_{i=1}^{\ell} \lambda_{i}(X)$. These are given by Nesterov and Nemirovski in~\cite[Section 6.4.3, Example 7]{nesterov1993interior}. \begin{lemma} \label{lem:sumk} If $2\leq \ell \leq n-1$, the epigraph of the convex function $s_{\ell}(X) = \sum_{i=1}^{\ell}\lambda_i(X)$ has a semidefinite representation of size $O(n)$ given by \[\{(X,t): s_{\ell}(X) \leq t\} = \{(X,t): \exists s\in \mathbb{R},\; Z \in \mathbb{S}^n\quad\text{s.t.}\quad Z \succeq 0,\;\;X \preceq Z+sI,\;\;\textup{tr}(Z)+s\,\ell \leq t\}.\] The epigraph of $s_1(X)$ has a simpler semidefinite representation as \[ \{(X,t):s_1(X)\leq t\} = \{(X,t): X\preceq tI\}.\] \end{lemma} We now turn to the relationship between $(\psdcone{n}{k})^*$ and $(\orthant{n}{k})^*$. Note that $(\psdcone{n}{k})^*$ is invariant under orthogonal conjugation. So the claim (Proposition~\ref{prop:btn-dual}) that \[ (\psdcone{n}{k})^* = \{Y\in \mathbb{S}^n: \exists w\in \mathbb{R}^n\;\;\text{s.t.}\;\; w\in (\orthant{n}{k})^*,\;\;(Y,w)\in \textup{SH}_n\} \] would follow from Lemma~\ref{lem:sym} once we know that the diagonal slice of $(\psdcone{n}{k})^*$ is $(\orthant{n}{k})^*$. This is a special case of the following result for which we give a direct proof. 
\begin{lemma} \label{lem:lewis} Suppose $C\subset \mathbb{S}^n$ is a convex cone that is invariant under orthogonal conjugation. Then \[ \{y\in \mathbb{R}^n: \diag(y)\in C^*\} = \{z\in \mathbb{R}^n: \diag(z)\in C\}^*.\] \end{lemma} Note that if $C = \psdcone{n}{k}$ then the left hand side above is the diagonal slice of $(\psdcone{n}{k})^*$ and the right hand side is $(\orthant{n}{k})^*$. \begin{proof} We establish that for orthogonally invariant convex sets, diagonal projections and diagonal slices coincide, i.e.\ \[ \{\diag(X): X\in C\} = \{x\in \mathbb{R}^{n}: \diag(x)\in C\}.\] The result then follows by conic duality, explicitly by applying Lemma~\ref{lem:dual-general} in Section~\ref{sec:duals} to pass to the dual cone on both sides. Clearly the diagonal slice of $C$ is contained in the diagonal projection of $C$. As such we only show the other inclusion. We use an averaging argument (see, e.g.,~\cite{miranda1994group} where the idea is attributed to Olkin). Let $X\in C$ be arbitrary and for every subset $I\subset\{1,2,\ldots,n\}$ let $\Delta_I$ denote the diagonal matrix with $\Delta_{ii} = 1$ if $i\in I$ and $\Delta_{ii} = -1$ otherwise. Clearly each $\Delta_I$ is orthogonal. We can express $P_D(X)\in \mathbb{S}^n$, the projection of $X$ onto the subspace of diagonal matrices, as \[ P_{D}(X) = \frac{1}{2^{n}}\sum_{I} \Delta_I X \Delta_I^T\] where the sum is over all $2^n$ subsets of $\{1,2,\ldots,n\}$. Since $X\in C$ and $C$ is orthogonally invariant, each $\Delta_I X \Delta_I^T$ is an element of $C$. Since $C$ is convex, it follows that $P_D(X)\in C$ and is diagonal as required. \end{proof} \section{Concluding remarks} \label{sec:conclusion} We conclude with some comments about (the possibility of) simplifying our representations and some open questions.
\subsection{Simplifications} \label{sec:simplifications} If we can simplify a representation of $\orthant{n}{k}$ or $\psdcone{n}{k}$ for some $k=i$, that allows us to simplify the derivative-based representations for $k\geq i$ and the polar derivative-based representations for $k \leq i$. For example $\orthant{n}{n-2}$ can be succinctly expressed in terms of the second-order cone $Q_+^{n+1} = \{x\in \mathbb{R}^{n+1}: (\sum_{i=1}^{n}x_i^2)^{1/2} \leq x_{n+1}\}$ as \[ \orthant{n}{n-2} = \{x\in \mathbb{R}^{n}: (x,e_1(x))\in Q_+^{n+1}\}.\] Then we can represent $\psdcone{n}{n-2}$ in terms of the second order cone as \[ \psdcone{n}{n-2} = \{Z\in \mathbb{S}^{n}:(Z,\textup{tr}(Z))\in Q_+^{n^2+1}\}\] because $\textup{tr}(Z) = \sum_{i=1}^n\lambda_i(Z)$ and $\sum_{i,j=1}^{n}Z_{ij}^2 = \sum_{i=1}^{n}\lambda_i(Z)^2$. This should be used as a base case instead of $\psdcone{n}{n-1}$ in the polar derivative-based representations. As an example of this, Proposition~\ref{prop:RS2} can be used to give a concise representation of $\orthant{n}{n-3}$ in terms of the second order cone as \begin{align*} x\in \orthant{n}{n-3} \quad \Longleftrightarrow \quad& \exists Z\in \mathbb{S}^{n-1}\;\;\text{such that}\\ & \diag(x) \succeq V_n Z V_n^T \;\;\text{and}\;\; (Z,\textup{tr}(Z))\in Q_+^{(n-1)^2+1}. \end{align*} \subsection{Lower bounds on the size of representations} \label{sec:lower-bounds} The explicit constructions given in this paper establish upper bounds on the minimum size of semidefinite representations of $\psdcone{n}{k}$ and $\orthant{n}{k}$. To assess how good our representations are, it is interesting to establish corresponding \emph{lower} bounds on the size of semidefinite representations of $\orthant{n}{k}$ and $\psdcone{n}{k}$. Since $\orthant{n}{k}$ is a slice of $\psdcone{n}{k}$, any lower bound on the size of a semidefinite representation of $\orthant{n}{k}$ also provides a lower bound on the size of a semidefinite representation of $\psdcone{n}{k}$. 
Hence we focus our discussion on $\orthant{n}{k}$. In the case of $\orthant{n}{n-1}$, a halfspace, the obvious semidefinite representation of size one is clearly of minimum size. Less trivial is the case of $\orthant{n}{0}$, the non-negative orthant. It has been shown by Gouveia et al.~\cite[Section 5]{gouveia2013lifts} that $\mathbb{R}^n_+$ does not admit a semidefinite representation of size smaller than $n$. Hence the obvious representation of $\mathbb{R}^n_+$ as the restriction of $\mathbb{S}_+^n$ to the diagonal is of minimum size. For each $k$, the slice of $\orthant{n}{k}$ obtained by setting the last $k$ variables to zero is $\mathbb{R}^{n-k}_+$. Hence any semidefinite representation of $\orthant{n}{k}$ has size at least $n-k$, the minimum size of a semidefinite representation of $\mathbb{R}^{n-k}_+$. This argument establishes that Sanyal's spectrahedral representation of $\orthant{n}{1}$ of size $n-1$ is actually a minimum size semidefinite representation of $\orthant{n}{1}$. We are not aware of any other lower bounds on the size of semidefinite representations of the cones $\orthant{n}{k}$ for $2\leq k \leq n-2$. The semidefinite representations of $\orthant{n}{k}$ given in this paper are \emph{equivariant} in that they appropriately preserve the symmetries of $\orthant{n}{k}$. (For a precise definition see \cite[Definition 2.10]{gouveia2013lifts}.) It is known that symmetry matters when representing convex sets as projections of other convex sets \cite{kaibel2010symmetry}. For example if $p$ is a power of a prime, equivariant representations of regular $p$-gons in $\mathbb{R}^2$ are necessarily much larger than their minimum-sized non-equivariant counterparts \cite[Proposition 3.5]{gouveia2013lifts}. Given that the cones $\orthant{n}{k}$ are highly symmetric, it would also be interesting to establish lower bounds on the size of equivariant semidefinite representations of the derivative relaxations of the non-negative orthant. \bibliographystyle{plain}
\section*{SUPPLEMENTAL MATERIAL} \subsection{Constraints on a linear temporal variation of the fine-structure constant and its coupling to gravity}\label{sec:lpi} Previous optical frequency comparisons of $\nu_\textrm{E3}$ and $\nu_\textrm{E2}$ in our group set the strictest limits on a linear temporal variation of the fine-structure constant $\alpha$ and its coupling to the gravitational potential of the sun, which would lead to an oscillation of $\alpha$ with a period of one year \cite{Lange2021}. Including the new data presented in the main text in the same analysis allows us to further improve these limits. \Autoref{fig:alphavar} shows both the previously published measurement results of the frequency ratio $\nu_\textrm{E3}/\nu_\textrm{E2}$ between May 2016 (MJD 57527) and August 2020 (MJD 59081), as well as the new measurement data starting from September 2020. The increased amount of data from 2021 and 2022 compared to earlier years is due to increased measurement activity as well as improvements in clock availability during measurements. The dashed red line and the dotted blue line show fits to all data of a linear temporal drift and of a sinusoidal modulation with a period of about 365 days, respectively. Taking into account the statistical uncertainties and reproducibility, the fits result in a reduced $\chi^2$ of 1.2 and 1.1, respectively. We find a linear drift of \begin{equation} \frac{1}{\alpha}\frac{\textrm{d}\alpha}{\textrm{d}t}=1.8(2.5)\times10^{-19}/\textrm{yr}\,\textrm{.} \end{equation} For a possible coupling to the gravitational potential $\Phi$ of the sun, the best fit to all data gives \begin{equation} \frac{c^2}{\alpha}\frac{\textrm{d}\alpha}{\textrm{d}\Phi}=-2.4(3.0)\times 10^{-9}\,\textrm{.} \end{equation} Both values are compatible with zero and improve the previous best limits \cite{Lange2021} by about a factor of four in the uncertainty. 
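The two fits described above can be sketched with ordinary least squares. The snippet below is an illustration only: it uses synthetic data and equal weights, whereas the actual analysis uses the stated statistical uncertainties and reproducibility; all numbers in it are made up.

```python
# Sketch of the two fits on synthetic ratio data (illustration only).
import numpy as np

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 2300, size=80))          # measurement epochs in days
y = 1e-19 * t / 365.25 + rng.normal(0, 5e-17, 80)   # fractional ratio deviations

# linear drift: y = c0 + c1 * t (unweighted least squares)
A_lin = np.vstack([np.ones_like(t), t]).T
c_lin, *_ = np.linalg.lstsq(A_lin, y, rcond=None)
drift_per_year = c_lin[1] * 365.25

# annual modulation with free phase: y = c0 + a*sin(w t) + b*cos(w t)
w = 2 * np.pi / 365.25
A_ann = np.vstack([np.ones_like(t), np.sin(w * t), np.cos(w * t)]).T
c_ann, *_ = np.linalg.lstsq(A_ann, y, rcond=None)
amplitude = np.hypot(c_ann[1], c_ann[2])
```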
\begin{figure} \centering \includegraphics[width = \columnwidth]{supp_figure1.pdf} \caption{Ratio of the frequencies $\nu_{E3}$ and $\nu_{E2}$ of the E3 and the E2 transition in \textsuperscript{171}Yb\textsuperscript{+} measured between MJD 57527 (May 19, 2016) and MJD 59900 (November 11, 2022). The gray error bars show the statistical uncertainties. For the black error bars, a fractional uncertainty has been added in quadrature to take into account the reproducibility of systematic shifts over the measurement period. The dashed red line and the dotted blue line are fits to the data for searches for a linear temporal drift and a dependence on the gravitational potential, respectively.} \label{fig:alphavar} \end{figure} \subsection{Detection threshold} In order to estimate a detection threshold for each frequency $\omega/2 \pi$, we need to account for the fact that extreme values become more likely as one considers a larger number of independent samples (look-elsewhere effect). We define the threshold $P_{\mathrm{th},\omega}$, following \cite{Scargle1982, Hees2016}, such that: \begin{align} \textrm{Prob}(P_\omega<P_{\mathrm{th},\omega })=(1-p_0)^{1/n_{\textrm{ind}}} \end{align} with the number of independent frequencies $n_\textrm{ind}$. In this case, if we find a value $P_\omega \geq P_{\mathrm{th},\omega}$ anywhere in the spectrum and interpret it as a detection, the probability of it being a false detection is less than $p_0=5\%$. We estimate the number of independent frequencies to be $n_\textrm{ind}\approx f_\mathrm{max} T$, where $f_\mathrm{max}=0.005$\,Hz, and obtain $n_\textrm{ind}\approx3.5\times10^{5}$ for the E3/E2 data and $n_\textrm{ind}\approx1.8\times10^{4}$ for the E3/Sr data. For $\omega/2\pi\gg 1/T$ and pure Gaussian white noise, the periodogram follows an exponential probability distribution, and the corresponding detection threshold is known analytically~\cite{Scargle1982}. 
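In the white-noise regime the threshold follows directly from the defining equation. A minimal sketch, assuming the unit-mean exponential periodogram statistics of \cite{Scargle1982}:

```python
# Detection threshold from Prob(P < P_th) = (1 - p0)**(1/n_ind) for a
# unit-mean exponentially distributed periodogram power, CDF(P) = 1 - exp(-P).
from math import log

def detection_threshold(p0, n_ind):
    return -log(1.0 - (1.0 - p0) ** (1.0 / n_ind))

p0 = 0.05
th_e3e2 = detection_threshold(p0, 3.5e5)   # n_ind for the E3/E2 data
th_e3sr = detection_threshold(p0, 1.8e4)   # n_ind for the E3/Sr data
assert th_e3e2 > th_e3sr > 0.0
```

For large $n_\textrm{ind}$ this reduces to the familiar $P_\textrm{th}\approx\ln(n_\textrm{ind}/p_0)$, so the threshold grows only logarithmically with the number of independent frequencies.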
For smaller frequencies and different kinds of noise, where the probability distribution of the estimated power spectrum is not known, the detection limit can in principle be obtained directly from Monte-Carlo (MC) sampling of noise. However, this would require an extremely large number of samples, which is not computationally feasible. For the total evaluated measurement data of $\nu_\textrm{E3}/\nu_\textrm{E2}$, $0.95^{1/n_\textrm{ind}}\approx1-(2\times10^{-7})$, meaning that on the order of $10^7$ Monte-Carlo samples are necessary to obtain the detection threshold directly from the corresponding percentile of the probability distribution. A variety of approaches to this problem have been developed in different fields and for different statistics \cite{Gross2010, Cowan2011, Beaujean2018}. Here, we make use of the fact that an exponential distribution yields a good fit to the probability distribution of the estimated power at a given frequency, when allowing for a constant offset, and extract the threshold based on such a fit for a smaller number of MC samples. Convergence of the obtained results is at the few-percent level already for $N=1000$ MC samples, and the quality of the fit to the results of the MC sampling is good throughout the parameter space considered, even for small frequencies and in the case of a random-walk noise component. To determine the detection threshold, we thus fit the cumulative distribution function of an exponential distribution \begin{align} 1-\textrm{e}^{-a(P-P_0)} \end{align} to the cumulative histogram of the extracted power $P$, and determine the detection threshold based on the resulting fit parameters $a$ and $P_0$, then calculate the corresponding amplitude. We confirm the agreement of this method with the analytical detection threshold given in \cite{Scargle1982} for intermediate frequencies, and rely on the analytic result for frequencies $\gg 1/T$. 
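The fit-based extraction can be sketched as follows (our own illustration: exponential noise stands in for the MC periodogram samples, and the exponential CDF is fitted through its linear log-survival function rather than by the histogram fit used in the analysis):

```python
# Sketch of threshold extraction from a modest number of MC samples:
# fit the empirical CDF with 1 - exp(-a (P - P0)).
import numpy as np

rng = np.random.default_rng(3)
samples = rng.exponential(scale=1.0, size=1000)  # stand-in for MC periodogram powers

P = np.sort(samples)
ecdf = np.arange(1, P.size + 1) / (P.size + 1)   # empirical CDF, kept below 1

# 1 - ecdf = exp(-a (P - P0))  =>  log(1 - ecdf) is linear in P
slope, intercept = np.polyfit(P, np.log(1.0 - ecdf), 1)
a, P0 = -slope, -intercept / slope

# threshold such that Prob(P < P_th) = (1 - p_false)**(1/n_ind)
p_false, n_ind = 0.05, 3.5e5
q = (1.0 - p_false) ** (1.0 / n_ind)
P_th = P0 - np.log(1.0 - q) / a
```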
In the case of $\nu_\textrm{E3}/\nu_\textrm{E2}$, the noise is well described by white noise throughout, and since $1/T\approx1.4\e{-8}\,$Hz, we rely on the analytic result for frequencies larger than $10^{-6}\,$Hz. For $\nu_\textrm{E3}/\nu_\textrm{Sr}$, we rely on the analytic result for frequencies larger than $4\times 10^{-5}\,$Hz, where the $1/\omega^2$ noise component is negligible and the measurement is dominated by white noise. The visible structure of the detection threshold at low frequencies results from the specific distribution of sampling gaps in our data.
\section{Introduction} Given two graphs $G$ and $H$, the \emph{strong product} of $G$ and $H$, denoted by $G\boxtimes H$, is the graph on vertex set $V(G)\times V(H)$ in which distinct vertices $(u_1,v_1)$ and $(u_2,v_2)$ are adjacent precisely when $u_1$ is equal or adjacent to $u_2$ and $v_1$ is equal or adjacent to $v_2$; when $H$ is a clique, this is the graph obtained by substituting each vertex in $G$ with a copy of $H$. The graph $C_5 \boxtimes K_3$ (see Figure \ref{fig:borodin}) has appeared as an exemplary graph in several situations, including as a counterexample to Haj\'{o}s' conjecture \cite{catlin79} and as proof of tightness of the Borodin-Kostochka conjecture \cite{borodink77}, Reed's $\omega$, $\Delta$, $\chi$ conjecture \cite{reed98}, and most recently a result on hitting all maximum cliques with a stable set: \begin{figure} \begin{center} \includegraphics[scale=.55]{borodin} \caption{$C_5 \boxtimes K_3$}\label{fig:borodin} \end{center} \end{figure} \begin{theorem}[King \cite{king11}] \label{thm:king} Any graph satisfying $\omega > \frac{2}{3}(\Delta + 1)$ contains a stable set that intersects every maximum clique. \end{theorem} \RED{This theorem is a refinement of a result of Rabern \cite{rabern10}, who proved the result when $\omega \geq \frac 34(\Delta+1)$. The refinement relies on a strengthening of Haxell's Theorem \cite{haxell95}; this strengthening was implicit in Haxell's work and also in work of Aharoni, Berger, and Ziv \cite{aharonibz07}.} Since $C_5 \boxtimes K_3$ satisfies $\omega = \frac 23(\Delta+1)$ but contains no stable set hitting every maximum clique, the strict inequality in Theorem \ref{thm:king} is necessary. Actually $C_5$ itself also shows that strictness is necessary, and is not just a Brooks-type exception. 
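The stated properties of $C_5 \boxtimes K_3$, namely $\omega = 6 = \frac{2}{3}(\Delta+1)$ and the absence of a stable set meeting every maximum clique, can be confirmed by brute force; a short sketch of ours:

```python
# Brute-force check that C5 x K3 (strong product) has Delta = 8, omega = 6,
# exactly 5 maximum cliques, and no stable set meeting all of them.
import itertools

V = [(i, j) for i in range(5) for j in range(3)]

def adj(u, v):
    # columns equal or adjacent on C5; K3 factor imposes no constraint
    return u != v and (u[0] == v[0] or (u[0] - v[0]) % 5 in (1, 4))

Delta = max(sum(adj(u, v) for v in V) for u in V)
assert Delta == 8

def is_clique(S):
    return all(adj(u, v) for u, v in itertools.combinations(S, 2))

assert any(is_clique(S) for S in itertools.combinations(V, 6))       # omega >= 6
assert not any(is_clique(S) for S in itertools.combinations(V, 7))   # omega < 7
max_cliques = [set(S) for S in itertools.combinations(V, 6) if is_clique(S)]
assert len(max_cliques) == 5  # one per edge of C5

def is_stable(S):
    return all(not adj(u, v) for u, v in itertools.combinations(S, 2))

# stable sets project to stable sets of C5, so they have at most 2 vertices
hits_all = any(
    is_stable(S) and all(set(S) & C for C in max_cliques)
    for r in (1, 2)
    for S in itertools.combinations(V, r)
)
assert not hits_all
```

Each vertex lies in only two of the five maximum cliques, so a stable set of size at most two can meet at most four of them, which is what the search confirms.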
In the next two sections of this note we prove that any graph that exhibits this property is the strong product of an odd hole\footnote{A {\em hole} is an induced cycle of length at least 4.} and a clique: \begin{theorem} \label{thm-main} Any connected graph satisfying $\omega \geq \frac{2}{3}(\Delta + 1)$ contains a stable set intersecting every maximum clique unless it is the strong product of \RED{an odd hole and a clique.} \end{theorem} It is easy to confirm that the strong product of \RED{an odd hole and a clique} does not contain a stable set hitting every maximum clique. In the last section of this note, we prove that there is no hope of proving a statement analogous to Theorem \ref{thm:king} for maximal rather than maximum cliques. \section{The clique graph} Following \cite{king11} and \cite{rabern10}, we approach Theorem \ref{thm-main} by characterizing the structure of the {\em clique graph}. Given a graph $G$ and a collection $\mathcal{C}$ of maximum cliques in $G$, we define the clique graph, denoted by $G(\mathcal{C})$, as follows. The vertices of $G(\mathcal{C})$ correspond to the cliques in $\mathcal{C}$; two vertices of $G(\mathcal C)$ are adjacent if and only if their corresponding cliques intersect in $G$. For now we can restrict our attention to connected clique graphs. When $\omega > \frac 23(\Delta+1)$, we are guaranteed that if $G(\mathcal C)$ is connected, then $|\cap \mathcal C|\geq \frac 13(\Delta+1)$ \cite{king11}. However, the same is not necessarily true when $\omega = \frac 23(\Delta+1)$, for example with the strong product of either \RED{a hole (i.e.\ a cycle of length $\geq 4$) and a clique, or $P_\ell$ (i.e.\ a path on $\ell$ vertices) for $\ell \geq 4$ and a clique}, in which case $\cap \mathcal C$ is empty. This is actually the only troublesome case. To prove this we need Hajnal's set collection lemma. 
\begin{lemma}[Hajnal \cite{hajnal65}] \label{lem-hajnal} Let $G$ be a graph and let $\mathcal C$ be a collection of maximum cliques in $G$. Then \[|\cap\mathcal C| + |\cup\mathcal C| \geq 2\omega(G).\] \end{lemma} The following lemma extends a lemma of Kostochka \cite{kostochka80} that is instrumental to the proof of Theorem \ref{thm:king}. \begin{lemma}\label{lem:main} Suppose $G$ is connected and satisfies $\omega \geq \frac 23(\Delta+1)$, and let $\mathcal C$ be a collection of maximum cliques in $G$ such that $G(\mathcal C)$ is connected \RED{and $|\cap \mathcal C | < \frac 13(\Delta+1)$. Then $\cap \mathcal C = \emptyset$, and for some $k\geq 4$ either $G$ is $C_k\boxtimes K_{\omega/2}$, or the subgraph induced by $\cup\mathcal C$ contains $P_k\boxtimes K_{\omega/2}$ as a subgraph.} \end{lemma} \RED{Kostochka's lemma (which appears in English in \cite{king11} and \cite{rabern10}) actually tells us that if $\omega > \frac 23(\Delta+1)$, no such set $\mathcal C$ can exist. So it suffices to deal with the case $\omega = \frac 23(\Delta+1)$.} \begin{proof} \RED{Assume $\omega = \frac 23(\Delta+1)$. Note that if $\mathcal C'$ is any family of maximum cliques with $\cap \mathcal C'\neq \emptyset$, then $|\cup \mathcal C'|\leq \Delta+1$. Otherwise, every vertex in $\cap \mathcal C'$ would have more than $\Delta$ neighbours, which is impossible. For any two intersecting maximum cliques $A$ and $B$, we know by the previous paragraph that $|A\cap B| = 2\omega - |A\cup B| \geq 2\omega - (\Delta+1) = \omega/2$. Now let $\mathcal C'$ be a maximal set of cliques such that $|\cap \mathcal C'|\geq \omega/2$, and let $A$ and $B$ be two intersecting cliques in $\mathcal C'$ such that $B$ intersects a clique $C$ in $\mathcal C\setminus \mathcal C'$ (we know that $|\mathcal C|\geq 3$ because $|A\cap B|\geq \omega/2$, so this must be possible since $G(\mathcal C)$ is connected). Let $\mathcal C''$ denote $\mathcal C'\cup \{C\}$. 
By the maximality of $\mathcal C'$, we have $|\cap \mathcal C''| < \frac 13(\Delta+1)$. Suppose that $\cap \mathcal C''$ is nonempty. Any vertex in $\cap{\mathcal C''}$ is adjacent to the rest of $\cup{\mathcal C''}$, so $|\cup\mathcal C''| \leq \Delta+1$. But this contradicts Lemma \ref{lem-hajnal}, so $\cap{\mathcal C''}$ must indeed be empty and therefore $\cap \mathcal C = \emptyset$. Since $B\cap C \neq \emptyset$ it follows that $|B\cap C| \geq \omega/2$. On the other hand we also have $|B \setminus C| \geq |\cap \mathcal{C}'| \geq \omega/2$ and so $|B \cap C| = |\cap \mathcal{C}'| = \omega/2$. Thus it is clear that the sets $(B \cap C)$ and $(\cap \mathcal C')$ partition $B$. Also, no clique of $\mathcal C'$ can intersect $C\setminus B$, since a vertex in this intersection would be complete to $B$, contradicting the fact that $B$ is a maximum clique. Further, no clique $D$ of $\mathcal C'$ other than $B$ can intersect $C$, since this would imply that $D$ and $C$ have nonempty intersection of size less than $\omega/2$, which is impossible. Therefore $\mathcal C' = \{A,B\}$, otherwise $|\cup \mathcal C'|$ would be greater than $\Delta+1$. We have shown that $|\mathcal C|\geq 3$, and given any three cliques $A,B,C\in \mathcal C$ with $|A\cap B\cap C|< \omega/2$ such that $A$ and $C$ both intersect $B$, \begin{enumerate} \item $A$ and $C$ are disjoint, \item $A\cap B$ and $C\cap B$ have size $\omega/2$ and partition $B$, and \item no other maximum clique $D$ intersects $B$. \end{enumerate} It follows that $G(\mathcal C)$ has maximum degree 2 (and by assumption, is connected). Therefore the subgraph induced by $\cup \mathcal C$ contains, for some $k\geq 4$, either $P_k\boxtimes K_{\omega/2}$ or $C_k\boxtimes K_{\omega/2}$ as a subgraph. Finally, since $G$ is connected and $C_k\boxtimes K_{\omega/2}$ is $(\frac 32\omega-1)$-regular, if $G$ contains $C_k\boxtimes K_{\omega/2}$ as a subgraph then $G$ is isomorphic to $C_k\boxtimes K_{\omega/2}$. 
This completes the proof.} \end{proof} \section{Hitting the maximum cliques with a stable set} In order to find our desired stable set, we need the main intermediate result in the proof of Theorem \ref{thm:king}, \RED{which extends Haxell's Theorem \cite{haxell95}.} \begin{theorem}[King \cite{king11}] \label{thm-isr} \RED{Let $G$ be a graph with vertices partitioned into cliques $V_1,\dots, V_r$, and let $k$ be a positive integer.} If for every $i$ and every $v \in V_i$, $v$ has at most $\min\{k, |V_i|-k\}$ neighbours outside $V_i$, then $G$ contains a stable set of size $r$. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm-main}] For fixed $\omega(G)\geq 1$ we proceed by induction on $|V(G)|$; the result trivially holds whenever $|V(G)|\leq \omega(G)$. Let $\mathcal{C}$ be the set of maximum cliques in a graph $G$, and let $\mathcal{C}_1, \mathcal{C}_2,\ldots, \mathcal{C}_k$ be the partition of $\mathcal{C}$ such that $G(\mathcal{C}_1),G(\mathcal{C}_2),\ldots,G(\mathcal{C}_k)$ are the connected components of the clique graph $G(\mathcal{C})$. We consider two cases. The first case is basically the same as the proof of Theorem \ref{thm:king}.\\ \noindent{\bf Case 1:} For every $1\leq i \leq k$, $\cap \mathcal{C}_i \neq \emptyset$. By Lemma \ref{lem:main}, for every $1\leq i \leq k$ we have $|\cap \mathcal{C}_i|\geq \frac 13(\Delta(G)+1)$. It suffices to show that there is a stable set in $G$ intersecting each $\cap \mathcal{C}_i$. For a given $i$, every vertex in $\cap \mathcal{C}_i$ has at most $\Delta(G)+1 -|\cup \mathcal{C}_i|$ neighbours in $\cup_{j\neq i}(\cap \mathcal{C}_j)$. Lemma \ref{lem-hajnal} tells us that $|\cup \mathcal{C}_i|+|\cap \mathcal{C}_i| \geq \frac 43(\Delta(G)+1)$. Therefore $\Delta(G)+1-|\cup \mathcal{C}_i|\leq |\cap \mathcal{C}_i|-\frac 13(\Delta(G)+1)$. 
And since $|\cup\mathcal{C}_i|\geq \omega(G) \geq \frac 23(\Delta(G)+1)$, a vertex in $\cap\mathcal{C}_i$ has at most $\min\{ \frac 13(\Delta(G)+1), |\cap\mathcal{C}_i|-\frac 13(\Delta(G)+1) \}$ neighbours in $\cup_{j\neq i}(\cap \mathcal{C}_j)$. It therefore follows from Theorem \ref{thm-isr} that there is a stable set in $G$ intersecting each $\cap \mathcal{C}_i$. This completes Case 1.\\ \noindent{\bf Case 2:} For some $1\leq i\leq k$, $\cap \mathcal{C}_i=\emptyset$. Assume that $\cap\mathcal{C}_1=\emptyset$. Lemma \ref{lem:main} tells us that either $G$ is the strong product of \RED{a hole and $K_{\omega(G)/2}$}, or $G[\cup \mathcal{C}_1]$ contains as a subgraph the strong product of $K_{\omega(G)/2}$ and a $P_\ell$ for $\ell\geq 4$. In the former case the theorem clearly holds, so let us consider the latter case. If there is a vertex not in a clique of size $\omega(G)$, we can delete it and apply induction, so assume that no such vertex exists. Let the cliques of $\mathcal{C}_1$ be $C_1,\ldots,C_{\ell-1}$ such that for $1\leq i \leq \ell-2$, $C_i$ and $C_{i+1}$ intersect in exactly $\omega(G)/2$ vertices. Let $X_1$ denote $C_1\setminus C_2$ and let $X_2$ denote $C_{\ell-1}\setminus C_{\ell-2}$. \begin{figure} \begin{center} \includegraphics[scale=.7]{reduction} \caption{A reduction of a clique path for $\ell = 5$}\label{fig:reduction} \end{center} \end{figure} We will construct a graph $G'$ on fewer than $|V(G)|$ vertices such that $\omega(G')=\omega(G)$ and $\Delta(G')\leq\Delta(G)$, and apply induction to prove our result. To construct $G'$ from $G$ we delete $\cup_{1\leq i \leq \ell-2}(C_i\cap C_{i+1}) = (\cup \mathcal{C}_1 )\setminus(X_1\cup X_2)$ and add edges to make $X_1\cup X_2$ a clique of size $\omega$ in $G'$ (see Figure \ref{fig:reduction}). Clearly $G'$ has maximum degree at most $\Delta(G)$. We claim that $G'$ has clique number $\omega(G)$. Suppose this is not the case. 
It follows that there exists a set $Y_1\subseteq X_1\cup X_2$ and a set $Y_2$ in $V(G)\setminus \cup \mathcal{C}_1$ such that $Y_1\cup Y_2$ is a clique of size greater than $\omega(G)$. Let $v$ be a vertex in $Y_2$. Since $v$ is in an $\omega(G)$-clique in \RED{$G\setminus (X_1\cup X_2)$}, it has at most $\omega(G)/2$ neighbours in $X_1\cup X_2$, so $|Y_1|\leq \omega(G)/2$. Therefore $|Y_2|>\omega(G)/2$, which implies that some vertex in $Y_1$ has at least $\omega(G)-1$ neighbours in $\cup\mathcal{C}_1$ and more than $\omega(G)/2$ neighbours in $Y_2$, contradicting the fact that $\omega(G) \geq \frac 23(\Delta(G)+1)$. Therefore $G'$ has clique number $\omega(G)$. By induction, there is a stable set $S$ in $G'$ hitting every $\omega(G)$-clique. Thus $S$ is also a stable set in $G$ intersecting $X_1\cup X_2$ exactly once. Without loss of generality let $v$ be a vertex in $X_1\cap S$. From $S$ we will construct a stable set $S'$ hitting every $\omega(G)$-clique in $G$ in one of two ways, depending on the parity of $\ell$. If $\ell$ is even, let $S'$ consist of $S$ along with one vertex in $C_{2k}\cap C_{2k+1}$ for each $1 \leq k \leq (\ell/2) -1$. It is a routine exercise to confirm that $S'$ is a stable set hitting every maximum clique in $G$. If $\ell$ is odd, let $S'$ consist of $S\setminus \{v\}$ along with \RED{one vertex from} $C_{2k-1}\cap C_{2k}$ for each $1 \leq k \leq (\ell-1)/2$. Again $S'$ is a stable set hitting every maximum clique in $G$, because the only $\omega(G)$-clique intersecting $C_1\setminus C_2$ is $C_1$. This completes the proof. \end{proof} \section{Hitting large maximal cliques with a stable set} Theorem \ref{thm:king} can be used to characterize minimum counterexamples to Reed's $\chi$, $\omega$, $\Delta$ conjecture; see for example \cite{aravindks11} \S4. 
Motivated by the problem of similarly characterizing minimum counterexamples to the local strengthening of Reed's $\chi$, $\omega$, $\Delta$ conjecture (see \cite{chudnovskykps11, kingthesis}), King recently proposed the following unpublished conjecture: \begin{conjecture} There exists a universal constant $\epsilon > 0$ such that every graph contains a stable set hitting every maximal clique of size at least $(1-\epsilon)(\Delta+1)$. \end{conjecture} We conclude this note by disproving the conjecture. \begin{theorem} For any $\epsilon>0$ there exists a graph in which every maximal clique has size at least $(1-\epsilon)(\Delta+1)$, and no stable set hits every maximal clique. \end{theorem} \begin{proof} Choose two positive integers $k$ and $t$ sufficiently large such that \begin{equation}\label{eq1} (1-\epsilon)(kt +5t-5) < kt +2-k. \end{equation} We now construct a graph $G$ with vertices partitioned into sets $A$ and $B$ of size $kt$ and $5t$ respectively. We further partition $A$ into $A_1,\ldots, A_t$ and $B$ into $B_1,\ldots,B_t$ such that \begin{enumerate} \item $A$ is a clique and each $A_i$ has size $k$ \item each $B_i$ induces a $5$-cycle, and there are no edges between $B_i$ and $B_j$ for $i\neq j$ \item vertices $u\in A_i$ and $v\in B_j$ are adjacent precisely when $i\neq j$. \end{enumerate} Thus we can see that the unique maximum clique in $G$ is $\cup_i A_i$, with size $kt$. All other maximal cliques of $G$ consist of two vertices in $B$ and $k(t-1)$ vertices of $A$. The maximum degree of the graph is $kt+5t-6$, achieved by all vertices in $A$. By (\ref{eq1}), every maximal clique has size greater than $(1-\epsilon)(\Delta+1)$. It therefore suffices to prove that no stable set intersects every maximal clique. Suppose we have a stable set $S$ intersecting every maximal clique. Since $A$ is a maximal clique, without loss of generality we can assume $S$ intersects $A_1$, and therefore $S\setminus A_1 \subseteq B_1$. 
But then there must remain two adjacent vertices in $B_1\setminus S$. Together with $\cup_{j\neq 1}A_j$ these vertices form a maximal clique in $G$. This contradiction completes the proof. \end{proof} \section{Acknowledgements} \RED{This work was done at ITI in Prague; the authors are grateful to everyone involved, particularly Daniel Kr\'al', for their hospitality. The authors would also like to thank the two anonymous referees for their careful reading and helpful suggestions.}
\section*{1. Introduction.} As shown in \cite{Aksenov}, the study of the Newman phenomenon in the multiplicative sequences leads naturally to the study of the norms (over $\mathbb Q$ or over a subfield of small degree) of numbers of the form $r(\zeta)$ where $\zeta$ is a primitive $p$-th root of unity ($p$ being prime) and $r$ is a polynomial with coefficients $1$ or $-1$, the constant term being $1$; the only relevant polynomials for the study of the Newman phenomenon are those that have at least one $-1$ coefficient. We are looking for results that are valid for a fixed $r$ and an arbitrary prime $p$ (the conditions that can be put on $p$ are: the prime $p$ allows the existence of the desired subfield and $p$ is big enough). Here are some results of this kind: for any odd prime $p$ we get $N_{\mathbb Q(\zeta)/\mathbb Q}(1-\zeta)=p$ (see \cite{NumTh}) and \begin{equation} \label{N++-} N_{\mathbb Q(\zeta)/\mathbb Q}(1+\zeta-\zeta^2)=L(p), \text{ the $p$-th Lucas number.} \end{equation} (see \cite{Aksenov}). Moreover, if $p\equiv1$ mod $4$, then $N_{\mathbb Q(\zeta)/\mathbb Q(\sqrt{p})} (1-\zeta)=\sqrt{p}\epsilon^{\pm h}$ where $\epsilon$ is the fundamental unit of the ring of integers of $\mathbb Q(\sqrt{p})$ and $h$ is its class number; if $p\equiv-1$ mod $4$ we get $N_{\mathbb Q(\zeta)/\mathbb Q(\sqrt{-p})}(1-\zeta)=\epsilon i\sqrt{p} $ where $\epsilon\in\{1,-1\}$ is a sign defined as follows: suppose $\zeta=e^{\frac{2i\pi k}{p}}$, then $\epsilon=\genfrac{(}{)}{}{}{k}{p}(-1)^{\frac{h+1}{2}}$ where $h$ is the class number of $\mathbb Q(\sqrt{-p})$ (see \cite{BorevichShafarevich}). In this article we are going to study the values of $N_{\mathbb Q(\zeta)/\mathbb Q}(r(\zeta))$ where $r$ is one of the remaining quadratic polynomials, namely $r_1(\zeta)=1-\zeta+\zeta^2$ or $r_2(\zeta)=1-\zeta-\zeta^2$. \section*{2. Main results.} The first result concerns the polynomial $r_1$ and it is the following: \begin{theorem} Let $n$ be an integer bigger than $4$ and not a multiple of $2$ or $3$, and let $\zeta=e^\frac{2i\pi}n$. 
Then $$\prod_{k=1}^{n-1}(1-\zeta^k+\zeta^{2k})=1. $$ Therefore, $1-\zeta+\zeta^2$ is a unit in the ring of integers of $\mathbb Q(\zeta)$. \end{theorem} \begin{proof} For each $k\in[1,n-1]$ we get: $$1-\zeta^k+\zeta^{2k}=\zeta^k\left(\zeta^k+\zeta^{-k}-1\right)=\zeta^k(2\cos\frac{2\pi k}n-1)=\zeta^k \frac{\cos\frac{3\pi k}n}{\cos\frac{\pi k}n}. $$ The product of terms $\zeta^k$ is one, and it can be checked that the products $\prod\limits_{k=1}^{n-1}\cos\frac{3\pi k}n$ and $\prod\limits_{k=1}^{n-1}\cos\frac{\pi k}n$ only differ by a permutation of factors. \end{proof} For prime numbers $n$, the method used to prove formula~(\ref{N++-}) shows that Theorem~\arabic{theorem} is equivalent to the following \begin{corollary} Let $p$ be a prime, $p\geqslant 5$. Then the number of ways of putting an even nonzero number of dominos on the circle of length $p$ is equal to the number of ways of putting an odd number of dominos on that circle. \end{corollary} For example for $p=11$ there are:\\ $1$ way of putting $0$ dominos on a circle of length $11$;\\ $11$ ways of putting $1$ domino;\\ $44$ ways of putting $2$ dominos;\\ $77$ ways of putting $3$ dominos;\\ $55$ ways of putting $4$ dominos;\\ $11$ ways of putting $5$ dominos. For $p=17$ there are:\\ $1$ way of putting $0$ dominos on a circle of length $17$;\\ $17$ ways of putting $1$ domino;\\ $119$ ways of putting $2$ dominos;\\ $442$ ways of putting $3$ dominos;\\ $935$ ways of putting $4$ dominos;\\ $1122$ ways of putting $5$ dominos;\\ $714$ ways of putting $6$ dominos;\\ $204$ ways of putting $7$ dominos;\\ $17$ ways of putting $8$ dominos. For the polynomial $r_2$ we get the following result: \begin{theorem} Let $p$ be an odd prime. Then, $$ N_{\mathbb Q(\zeta)/\mathbb Q}(1-\zeta-\zeta^2)=L(p).$$ \end{theorem} \begin{proof} We show that this norm equals the norm of $1+\zeta-\zeta^2$. 
Indeed: $$\prod_{k=1}^{p-1}(1-\zeta^k-\zeta^{2k})=\prod_{k=1}^{p-1}\left(-\zeta^{2k}(-\zeta^{-2k}+\zeta^{-k}+1)\right)=\prod_{k'=1}^{p-1}(1+\zeta^{k'}-\zeta^{2k'})=L(p). $$ \end{proof} \section*{3. Further questions.} The results presented here complete the study of norms over $\mathbb Q$ relative to quadratic polynomials (which correspond in terms of Newman's phenomenon to $3$-multiplicative sequences). The case of cubic polynomials seems more challenging.
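Both the corollary and the two norm formulas are easy to test numerically. The sketch below uses the classical count $\frac{n}{n-k}\binom{n-k}{k}$ for placements of $k$ non-overlapping dominos on a cycle of length $n$ (a standard result we take as given, not proved in the text):

```python
# Check the corollary's domino counts and both norm formulas numerically.
import cmath
from math import comb, isclose

def circle_dominos(n, k):
    # placements of k non-overlapping dominos on a cycle of length n;
    # the division in n/(n-k) * C(n-k, k) is exact
    return 1 if k == 0 else n * comb(n - k, k) // (n - k)

def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert [circle_dominos(11, k) for k in range(6)] == [1, 11, 44, 77, 55, 11]
assert [circle_dominos(17, k) for k in range(9)] == [1, 17, 119, 442, 935, 1122, 714, 204, 17]

for p in (5, 7, 11, 13, 17, 19):
    counts = [circle_dominos(p, k) for k in range(p // 2 + 1)]
    assert sum(counts[2::2]) == sum(counts[1::2])  # even nonzero = odd (Corollary)
    assert sum(counts) == lucas(p)                 # total count is L(p)

def prod_over_roots(n, f):
    result = 1
    for k in range(1, n):
        result *= f(cmath.exp(2j * cmath.pi * k / n))
    return result

# Theorem 1: the product of 1 - zeta^k + zeta^{2k} is 1 for n > 4, gcd(n, 6) = 1
for n in (5, 7, 11, 25, 35):
    assert isclose(abs(prod_over_roots(n, lambda z: 1 - z + z * z) - 1), 0, abs_tol=1e-8)

# Theorem 2: the norm of 1 - zeta - zeta^2 is the Lucas number L(p)
for p in (3, 5, 7, 11, 13):
    val = prod_over_roots(p, lambda z: 1 - z - z * z)
    assert isclose(val.real, lucas(p), rel_tol=1e-9) and abs(val.imag) < 1e-6
```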
\section{Introduction} Hawking's calculation of black hole evaporation~\cite{Hawking:1974sw} leads to a direct conflict between general relativity and the unitary evolution of quantum mechanics~\cite{Hawking:1976ra}. This prompted the suggestion that the requirement of unitary evolution should be relaxed~\cite{Hawking:1982dj, Page:1979tc}, and pure states allowed to evolve into mixed states. Such nonunitary evolution, however, seems problematic~\cite{Banks:1983by}.\footnote{See~\cite{Unruh:1995gn, Unruh:2012vd}, however, for criticisms of the arguments made in~\cite{Banks:1983by}.} Moreover, since black hole evaporation is a very slow process involving a large number of emitted particles, and since one only expects to start recovering information after about one half of the radiation has been emitted~\cite{Page:1993df}, one might imagine that unitarity is restored by the accumulation of small (say, nonperturbative) corrections over the course of the entire evolution process, cf.~\cite{Page:1993wv}. To evaluate this claim, Mathur~\cite{Mathur:2009hf} introduces a qubit model of black hole evaporation and then derives bounds on the entanglement entropy of the emitted radiation, which show that this scenario is \emph{not possible}. Below we generalize his result to consider more general deformations of the pair creation dynamics. In particular, for small changes in the evaporation process the entanglement entropy continues to increase, therefore (barring remnants and variants thereof) the evolution is not unitary. Thus, for unitarity to be restored, one needs to make \emph{large} corrections to the semi-classical evaporation process described by Hawking. Recently, several specific unitary models have been introduced~\cite{Czech:2011wy, Giddings:2011ks, Mathur:2011wg} as proposed alternatives to Hawking's semiclassical evolution. 
As discussed in~\cite{Mathur:2011uj}, models of this kind could be called ``burning paper'' models that involve large corrections to the Hawking evolution. While these models are unitary, they are typically written in a way that makes it difficult to compare to the semiclassical evolution and to see how Mathur's bound operates. The difficulty arises because the bound is derived in terms of dynamics in an ever-enlarging Hilbert space, whereas unitary models are typically written as dynamics in a fixed-dimensional Hilbert space. The primary goal of this paper is to clarify the meaning of Mathur's bound in~\cite{Mathur:2009hf}. In particular, until one writes unitary models as (large) corrections to nonunitary, semiclassical evolution, the meaning of Mathur's result remains obscure. Previous investigations of corrections in this context either directly introduce unitary models that are difficult to compare to Hawking evaporation~\cite{Czech:2011wy, Giddings:2011ks, Mathur:2011wg}; or consider small corrections to Hawking evaporation that \emph{do not produce unitary evolution when made sufficiently large~\cite{Mathur:2011wg, Mathur:2010kx}}, and thus are unconvincing illustrations of the result in~\cite{Mathur:2009hf}. A second goal is to characterize what kinds of corrections produce the desired unitary evolution. That the corrections need to be large is a necessary condition derived in~\cite{Mathur:2009hf}, but sufficient conditions have not been discussed in the same way. In order to address the above points, it is necessary to introduce a general model space that provides a uniform language to discuss both unitary and nonunitary black hole evaporation. This allows us, for example, to continuously deform the semiclassical Hawking evolution to unitary evolution. One can then explicitly see that the deformation is large in an appropriate sense, and therefore in agreement with a suitable generalization of Mathur's argument. 
Let us emphasize that it is not our intention here to advocate for nonunitary evolution, only to demonstrate that unitarity demands there be a significant alteration of the traditional semiclassical evolution. To restrict ourselves to unitary evolution at this stage would be to beg the question. In Section~\ref{sec:gen}, we introduce a very general framework for qubit models of black hole evaporation that is appropriate for both unitary and nonunitary evolution. Most of the discussion focuses on how to interpret the models, and their connection to ideas in quantum information theory. In Section~\ref{sec:models}, we apply the formalism to a number of sample models; some of the models were chosen because they were discussed previously in the literature, and others because they illustrate some interesting issues. In Section~\ref{sec:bound}, we briefly review Mathur's argument against small corrections restoring unitary evolution, and generalize the main entanglement entropy bound to allow arbitrary deformations. Mathur's original result~\cite{Mathur:2009hf} only explicitly considers one kind of perturbation. In Section~\ref{sec:unitarity}, using observations from Section~\ref{sec:models}, we discuss what conditions ensure unitary evolution. In Section~\ref{sec:one-par}, we present a one-parameter family of models that continuously connects Hawking evaporation to a unitary model; one sees clearly that the deformation required is large. In Section~\ref{sec:conc}, we conclude with some brief comments. \section{The General Model}\label{sec:gen} Before presenting our model, we make a few preliminary comments on nonunitary evolution in Section~\ref{sec:nonunitary}. Then in Section~\ref{sec:quops}, using some results from quantum information theory, we explain how to describe nonunitary evolution that still has a good probabilistic interpretation. Along the way, we clarify some potential confusions regarding previous work. 
Finally, we present our general class of models in Section~\ref{sec:themodel}, discussing the physical interpretation in Section~\ref{sec:physics}. \subsection{Nonunitarity}\label{sec:nonunitary} When we model the evolution of a closed quantum system, the state of the system is given by a ket $\ket{\psi(t)}$ that satisfies \begin{equation}\label{eq:unit-evol-1} \ket{\psi(t)} = U(t)\ket{\psi(0)} \end{equation} for a unitary time evolution operator $U(t)$. We may equivalently write the state of the system as a density matrix $\rho(t) = \ket{\psi(t)}\bra{\psi(t)}$, where $\rho(t)$ satisfies \begin{equation}\label{eq:unit-evol-rho} \rho(t) = U\rho(0)U^\dg. \end{equation} Unitary evolution satisfies several nice conditions, namely,\footnote{We do not claim that these are completely independent.} \begin{enumerate} \item\label{it:lin} Linearity \item\label{it:norm} Preservation of the norm: unit-norm states evolve to unit-norm states, ensuring that a probabilistic interpretation makes sense. For density matrices, the desired condition is that unit-trace, positive semidefinite density matrices evolve to unit-trace, positive semidefinite density matrices. \item\label{it:inv} Invertibility: previous states can be found from the current state. \item\label{it:pure} Purity: pure states evolve to pure states. \end{enumerate} However, it has sometimes been suggested~\cite{Hawking:1982dj, Hawking:1976ra, Page:1979tc} that theories of quantum gravity (especially in the presence of black holes) will not be unitary. Let us note that the negation of unitarity is ambiguous, since it is not clear which of the above conditions is relaxed. Let us consider three illustrative mappings. \begin{enumerate} \item This evolution does not conserve probability---the norm is not conserved; however, pure states still evolve to pure states, and the evolution is invertible: \begin{equation} \ket{\psi} \mapsto \frac{3}{4}\ket{\psi}.
\end{equation} This kind of evolution is sometimes useful in modeling a system that decays into something outside the model, for instance alpha decay. An equivalent way to describe this evolution is to say that the system has energies with an imaginary part. For a fundamental description that includes all degrees of freedom, however, the nonconservation of probability is nonsensical. \item This evolution preserves the norm, is invertible, but evolves pure states to mixed states: \begin{equation} \ket{\psi_1} \mapsto \rho_1 = \frac{1}{2}\ket{\psi_1}\bra{\psi_1} + \frac{1}{2}\ket{\phi_1}\bra{\phi_1}\qquad \ket{\psi_2} \mapsto \rho_2 = \frac{1}{2}\ket{\psi_2}\bra{\psi_2} + \frac{1}{2}\ket{\phi_2}\bra{\phi_2}\qquad \dots \end{equation} While this model evolves pure states to mixed states, information is still preserved (in a weak sense),\footnote{Provided large ensembles of identical copies of the system, one can with some confidence distinguish distinct density matrices; this would mean that in an experiment repeated many times with identical initial conditions, one could reconstruct some information about the initial state from the final state. This usage of the phrase ``information preservation'' is not canonical, however, and is problematic if one starts considering density matrices which are very close to each other. In a technical sense, quantum information is lost. We make the distinction here, since the semiclassical description implies that information is not preserved even in this weak sense.} and probability is conserved. \item This evolution conserves probability, evolves pure states to pure states, but is not invertible: \begin{equation} \ket{\psi_1} \mapsto \ket{\psi_0}\qquad \ket{\psi_2} \mapsto \ket{\psi_0}\qquad \dots \end{equation} In this model one cannot reconstruct the past from the current state, and therefore information is not preserved. In Section~\ref{sec:G1-broken}, we introduce some models of this type.
\end{enumerate} In the black hole information paradox, one considers some initial configuration of matter $\ket{\psi_m}$ that collapses into a black hole, and then completely evaporates to radiation in a thermal mixed state $\rho \sim e^{-\beta H}$. Since the final state is both mixed and independent of the initial state, the evolution \emph{both} fails to be invertible and fails to preserve purity in the above senses. Suppose for the nonce that Hawking's original argument is correct~\cite{Hawking:1976ra}, and the fundamental theory of quantum gravity is not unitary. We can no longer write evolution as in~\eqref{eq:unit-evol-rho}. We restrict our considerations to evolution that satisfies conditions~\ref{it:lin} and~\ref{it:norm}, but not necessarily conditions~\ref{it:inv} and~\ref{it:pure}. Then, assuming some very basic conditions that ensure a good probabilistic interpretation, we can write the most general possible evolution in the operator-sum representation~(cf.~\cite{nielsen}) \begin{equation}\label{eq:op-sum-rep} \rho(t) = \sum_k E_k \rho(0) E_k^\dg \end{equation} for some set of operators $E_k$ that satisfy the completeness relation \begin{equation} \sum_k E_k^\dg E_k = I. \end{equation} This is one way to write the evolution of an open quantum system, and the transformation from $\rho(0)$ to $\rho(t)$ is called a ``quantum operation'' in the quantum information theory literature~\cite{nielsen}. The operators $E_k$ determine the evolution of the density matrix. When there is only one $E_k$, the evolution is unitary.\footnote{Note that we model evolution with discrete time steps, and do not discuss continuous evolution, which might be governed by the Lindblad equation.} \subsection{Quantum Operations}\label{sec:quops} As in~\cite{Mathur:2009hf, Giddings:2011ks, Mathur:2011uj, Mathur:2010kx, Mathur:2011wg}, we model the Hawking evaporation process as a discrete set of mappings on qubits. In the initial state, the system consists entirely of matter in a pure state.
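The operator-sum representation is easy to check numerically. The following sketch (ours, not from the paper; it assumes NumPy is available) applies the Kraus operators of a single-qubit dephasing channel: the completeness relation holds, the trace is preserved, but a pure state evolves to a mixed state.

```python
# Illustrative sketch: operator-sum evolution rho -> sum_k E_k rho E_k^dag
# for a single-qubit dephasing channel.
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
p = 0.5  # dephasing probability (illustrative value)

# Kraus operators satisfying the completeness relation sum_k E_k^dag E_k = I
E = [np.sqrt(1 - p) * I2, np.sqrt(p) * Z]
completeness = sum(Ek.conj().T @ Ek for Ek in E)

# A pure superposition evolves to a mixed state (purity is lost),
# but the trace, i.e. total probability, is preserved.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(plus, plus)
rho1 = sum(Ek @ rho0 @ Ek.conj().T for Ek in E)

purity_in = np.trace(rho0 @ rho0).real    # 1.0: pure input
purity_out = np.trace(rho1 @ rho1).real   # 0.5: mixed output
```

With a single Kraus operator the channel reduces to ordinary unitary evolution, matching the remark above.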
There have been some suggestions~\cite{Braunstein:2009my} in this context that the entanglement between the initial black hole-forming matter and the outside matter plays an important role; we do not address these issues at this time. The initial state is modeled as a set of $n$ ``matter qubits'': \begin{equation} \rho_0 = \ket{\psi_0}\bra{\psi_0} \qquad \ket{\psi_0} \in\spn \left\{\ket{\hat{q}_1\hat{q}_2\cdots\hat{q}_{n}}\right\}, \end{equation} where each $\hat{q}$ is a qubit, a quantum state labeled by $0$ or $1$. After a sequence of intermediate steps, the end state consists entirely of radiation (again, we are assuming no remnants), modeled as a (possibly mixed) density matrix acting on $n$ ``radiation'' qubits, $\rho_f$. Throughout the evolution, we keep the total dimension of the Hilbert space fixed. This is certainly true for the unitary evolution of closed systems, but here we put it in as a reasonable assumption. We are motivated in part by the black hole's entropy. The black hole initially has $S\sim M^2$, which entirely radiates away on the time scale $\sim M^3$ with an emission every $\sim M$. This is consistent with a model having a fixed number of physical qubits. Following~\cite{Giddings:2011ks}, we use hats to distinguish the internal black hole qubits from the external radiation qubits. The hatted qubits represent all degrees of freedom that are inaccessible outside the black hole; unlike~\cite{Mathur:2009hf}, we do not distinguish between degrees of freedom from the initial matter, from gravitational interactions, or any that arise during the evaporation process. We write basis elements for the final state as \begin{equation} \{\ket{q_nq_{n-1}\cdots q_1}\}, \end{equation} where we have put the labels on the qubits in reverse order for reasons which should become clear. At the $i$th step, we have a density matrix acting on $n-i$ black hole qubits and $i$ radiation qubits, so that the total dimension of the Hilbert space is fixed. 
The evaporation concludes on the $n$th step when there is only radiation: \begin{equation} \rho_0 \to \rho_1 \to \cdots \to \rho_{n-1} \to \rho_n \end{equation} In general, the mapping from $\rho_0$ to $\rho_n = \rho_f$ should be a quantum operation, which is the composition of $n$ quantum operations (one for each emission). Therefore the total evolution from $\rho_0$ to $\rho_n$ (and each intermediate step) may be written in terms of some $E_k$s, as in Equation~\eqref{eq:op-sum-rep}. Quantum operations, however, may be written in an equivalent, alternative form that is more natural when discussing black hole evaporation. This form connects more directly with the discussion in~\cite{Mathur:2009hf}. We motivate this alternate form by noting that any mixed density matrix may be ``purified'' by enlarging the Hilbert space. For example, a density matrix of the form \begin{equation} \rho = p_1 \ket{A}\bra{A} + p_2 \ket{B}\bra{B} \end{equation} with orthonormal $\{\ket{A}, \ket{B}\}$ can be purified by introducing the orthonormal states $\{\ket{\alpha},\ket{\beta}\}$, and defining \begin{equation} \ket{\Psi} = \sqrt{p_1}\ket{A}\otimes\ket{\alpha} + \sqrt{p_2}\ket{B}\otimes\ket{\beta}. \end{equation} Then, one sees that $\rho$ is the reduced density matrix found by tracing out the new degrees of freedom. Let us emphasize that \emph{purification is a formal mathematical operation, and not a dynamical process}. In particular, for us, the new kets that we tensored into the Hilbert space do not correspond to any physical degrees of freedom.\footnote{If one does treat the new degrees of freedom as physical in the final state, then one is considering a remnant scenario.} Roughly speaking, then, we can imagine purifying each of the $\rho_i$s by enlarging the Hilbert space, and then the evolution in this enlarged Hilbert space would be unitary.
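The purification above can be made concrete in a few lines (a sketch of ours, not from the paper; NumPy assumed): build $\ket{\Psi}$ for the two-term $\rho$, then verify that tracing out the formal auxiliary qubit recovers $\rho$ exactly.

```python
# Illustrative sketch: purify rho = p1|A><A| + p2|B><B| and recover rho
# as a partial trace over the formal (nonphysical) auxiliary qubit.
import numpy as np

p1, p2 = 0.25, 0.75                        # illustrative weights
A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rho = p1 * np.outer(A, A) + p2 * np.outer(B, B)

# |Psi> = sqrt(p1)|A>|alpha> + sqrt(p2)|B>|beta>
alpha, beta = np.array([1.0, 0.0]), np.array([0.0, 1.0])
Psi = np.sqrt(p1) * np.kron(A, alpha) + np.sqrt(p2) * np.kron(B, beta)

# Partial trace over the auxiliary factor: indices (i,a; j,b), summed over a = b
rho_full = np.outer(Psi, Psi).reshape(2, 2, 2, 2)
rho_reduced = np.einsum('iaja->ij', rho_full)
```

The purified state $\ket{\Psi}$ is normalized and pure, while its reduced density matrix on the system factor reproduces the mixed $\rho$.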
As it turns out, any quantum operation from, say, $\rho_0$ to $\rho_n$ may be written in the following way: \begin{equation} \rho_n = \tr_{\text{aux}}[U (\rho_{\text{aux}}\otimes \rho_0)U^\dg], \end{equation} for a unitary transformation $U$ acting on some auxiliary degrees of freedom as well as the physical degrees of freedom. In particular, if $\rho_0$ acts on a $d$-dimensional Hilbert space, then we need to introduce at most a $d^2$-dimensional auxiliary Hilbert space to write the most general quantum operation in this form~\cite{nielsen}. Thus for our $n$-qubit system, we need only introduce $2n$ auxiliary qubits to capture the most general evolution of density matrices. \begin{figure}[ht] \subfloat[][initial]{ \includegraphics[scale=.75, viewport=0 -24 90 68]{initial-H} \label{fig:initial-H} } \subfloat[][intermediate]{ \includegraphics[scale=.75]{int-H} \label{fig:int-H} } \subfloat[][final]{ \includegraphics[scale=.75, viewport=-10 -24 241 50]{final-H} \label{fig:final-H} } \caption{Here we illustrate the black hole evolution with auxiliary degrees of freedom. At first we have only the initial black hole degrees of freedom represented by dots. At later stages in the evolution, we introduce some new degrees of freedom that correspond to the infalling negative energy particles in the Hawking process (squares), as well as outgoing radiation (wavy lines). We have thus increased the size of the Hilbert space, but this should be thought of only as a convenient way to parametrize potentially nonunitary evolution. To return to a fixed-dimensional model, one needs to trace out auxiliary degrees of freedom. In general, at intermediate steps, it is not clear which degrees of freedom should be thought of as auxiliary, so we illustrate just one possibility.
On the other hand, it is unambiguous that in the final state \emph{all} of the black hole degrees of freedom (dots and squares) are auxiliary if there are no remnants.} \label{fig:H} \end{figure} In this language, we can think about the semiclassical Hawking evolution in the following way. We start with $n$ black hole qubits initially in a pure state. We imagine that $n$ might be roughly given by the entropy of the black hole.\footnote{If we want the initial state to model the matter just before it collapses into a black hole, this may not be a good assumption, since in a suitably fine-grained description the number of degrees of freedom available to ordinary matter is parametrically smaller than the entropy of the black hole~\cite{Giddings:2009gj, 'tHooft:1993gx}. We do not concern ourselves with this issue here, but the author is grateful to S.~Giddings for pointing this out.} At the first time step, a pair of qubits is created at the horizon in an entangled state, \begin{equation} \frac{1}{\sqrt{2}}(\ket{\hat{0}}\ket{0} + \ket{\hat{1}}\ket{1}). \end{equation} The zero represents no particle and the one represents a particle. We refer the reader to~\cite{Mathur:2009hf, Mathur:2010kx} for a thorough discussion of the origin of this description; see also~\cite{Giddings:2011ks}. We have now added two new qubits to the system, increasing the size of our Hilbert space. At each time step a new entangled pair is produced in the above state, and the Hilbert space keeps increasing in size. By the end of the evaporation process, on the $n$th step, we have added $2n$ qubits to the initial $n$ qubits for a total of $3n$ qubits. Since the black hole has completely evaporated and there are only the $n$ physical qubits of radiation, the remaining $2n$ hatted qubits should be interpreted as auxiliary degrees of freedom as in the above discussion.
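The key feature of this evolution is quantitative: each step adds an independent Bell pair, so the entanglement entropy of the radiation grows by $\ln 2$ per emission and never turns over. A small numerical sketch (ours, not from the paper; NumPy assumed) makes this explicit for the first few steps.

```python
# Illustrative sketch: in the Hawking model each step adds an independent
# Bell pair, so the radiation's entanglement entropy grows by ln 2 per
# emission, leaving the final radiation state maximally mixed.
import numpy as np

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

def radiation_entropy(steps):
    """Von Neumann entropy of the radiation after `steps` pair creations."""
    psi = np.array([1.0])
    for _ in range(steps):
        psi = np.kron(psi, bell)          # qubit order (h1, r1, h2, r2, ...)
    # Move all hatted qubits to the front, then flatten to a matrix M[h, r]
    psi = psi.reshape((2,) * (2 * steps))
    perm = list(range(0, 2 * steps, 2)) + list(range(1, 2 * steps, 2))
    M = np.transpose(psi, perm).reshape(2**steps, 2**steps)
    rho_rad = M.conj().T @ M              # trace out the hatted qubits
    evals = np.linalg.eigvalsh(rho_rad)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

entropies = [radiation_entropy(i) for i in range(1, 4)]  # i * ln 2
```

Unitarity would instead require the entropy to turn over and return to zero by the end of the evaporation, as in~\cite{Page:1993df}.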
Since presumably the total number of physical degrees of freedom should remain fixed at $n$ qubits, at the $i$th step we have $2i$ auxiliary (hatted) qubits, but the semiclassical analysis does not make any clear identification of the auxiliary qubits at intermediate stages in the evaporation. It \emph{is} clear, however, that by the end of the evaporation process all of the black hole (hatted) qubits must be auxiliary. Specifying the auxiliary subspace (in combination with giving the internal dynamics) corresponds to taking into account back reaction on the geometry. The Hilbert space is illustrated in Figure~\ref{fig:H}, where the intermediate state is shown with some highlighted auxiliary degrees of freedom. In the figure, the highlighted region contains some of the initial matter qubits (circles) and some of the new infalling qubits (squares); this represents one possibility. One could also consider cases where for the first $n/2$ steps only the initial matter is auxiliary, for example. The parameterization of the auxiliary degrees of freedom at intermediate steps should be considered part of a model, so that one may trace out the auxiliary degrees of freedom to arrive at a fixed-dimensional Hilbert space description. Since we are mostly interested in the final state, where the auxiliary space is unambiguous, we do not always specify the auxiliary degrees of freedom. In the cases where the evolution is unitary, it should be clear what the auxiliary degrees of freedom are. Although nothing profoundly new has been said here, we hope this discussion may help clarify potential confusions\footnote{As a specific example, Reference~\cite{Czech:2011wy} raises the ever-enlarging Hilbert space as a potential issue in the analysis of~\cite{Mathur:2009hf}.} regarding~\cite{Mathur:2009hf, Mathur:2010kx}, and other investigations~\cite{Mathur:2011wg, Giddings:2011ks, Czech:2011wy}.
\subsection{The General Model}\label{sec:themodel} We are now ready to present the general class of models that we consider. We start with $n$ hatted qubits, and at each step we add a hatted qubit and an unhatted qubit. The number of qubits at each step is summarized in Table~\ref{tab:no-qubits}. \begin{table}[ht] \begin{center} \begin{tabular}{c|c|c|c|c|c} step & no. BH qubits & no. rad. qubits & total no. qubits & no. aux. qubits & state\\\hline $0$ & $n$ & $0$ & $n$ & 0 & $\ket{\psi_0}$\\ $1$ & $n+1$ & $1$ & $n+2$ & 2 & $\ket{\psi_1}$\\ $2$ & $n+2$ & $2$ & $n+4$ & 4 & $\ket{\psi_2}$\\ \vdots & \vdots & \vdots & \vdots & \vdots &\vdots\\ $i$ & $n+i$ & $i$ & $n+2i$ & $2i$ & $\ket{\psi_i}$\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ $n$ & $2n$ & $n$ & $3n$ & $2n$ & $\ket{\psi_n}$ \end{tabular} \end{center} \caption{Here we outline the discrete steps in our models. At the $0$th or initial step there are $n$ black hole (BH) qubits and no radiation qubits. At each step in the evolution, the state is given by the ket $\ket{\psi_i}$ in an enlarging Hilbert space.}\label{tab:no-qubits} \end{table} We model the evolution in two steps: a creation step effected by operators $C_i$; and an internal evolution step effected by $\hat{U}_i$ acting on the hatted qubits and $U_i$ acting on the unhatted radiation qubits. Basis vectors at each step look like \begin{multline} \left\{\ket{\hat{q}_1\hat{q}_2\cdots\hat{q}_{n+i}}\ket{q_iq_{i-1}\cdots q_1}\right\} \xrightarrow{C_i} \left\{\ket{\hat{q}_1\hat{q}_2\cdots\hat{q}_{n+i}\hat{q}_{n+i+1}}\ket{q_{i+1}q_iq_{i-1}\cdots q_1}\right\}\\ \xrightarrow{\hat{U}_i\otimes U_i} \left\{\hat{U}\ket{\hat{q}_1\hat{q}_2\cdots\hat{q}_{n+i}\hat{q}_{n+i+1}}U\ket{q_{i+1}q_iq_{i-1}\cdots q_1}\right\}. \end{multline} Of course, one can combine $C_i$ and $\hat{U}_i\otimes U_i$ into a single operator, but it is useful to break up the evolution in this way. 
Also, there is some physical motivation for thinking about the evolution in this way, since the pair creation time scale is roughly $\sim M$, the black hole mass, while there are some conjectures that the internal dynamics of the black hole should be comparably fast~\cite{Hayden:2007cs, Sekino:2008he}. (For a 3+1 dimensional Schwarzschild black hole, the scrambling time is speculated to be $\sim M\log M$, but the evolution time step is $\sim M$.) For the majority of the discussion, we focus on the $C_i$ and are content to set $\hat{U}_i = U_i = I$. What properties should the $C_i$ satisfy? We want $C_i$ to preserve the norm and be linear, which means that we should require \begin{equation}\label{eq:CdgC} (C_i)^\dg C_i = I, \end{equation} where there is no sum on $i$. Note that this does not imply $C_i(C_i)^\dg = I$, since the $C_i$ have \emph{non}square matrix representations; the above requirement makes $C_i$ an isometric, but nonunitary mapping. We also assume that the $C_i$ act only on the hatted black hole qubits and not on the unhatted radiation qubits, which are far away from the pair creation site. We can write the $C_i$ in the following form \begin{equation} C_i = \ket{\varphi_1}\otimes \hat{P}_1+\ket{\varphi_2}\otimes \hat{P}_2+\ket{\varphi_3}\otimes \hat{P}_3 + \ket{\varphi_4}\otimes \hat{P}_4, \end{equation} where the $\ket{\varphi_j}$ are an orthonormal basis for the created pair qubits, and the $\hat{P}$s are linear operators which act on the hatted qubits (with implicit $i$ dependence).
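The structure $C_i = \sum_j \ket{\varphi_j}\otimes\hat{P}_j$ and the isometry condition can be checked directly. The sketch below (ours, not from the paper) uses a toy one-qubit black hole, the standard Bell-type pair basis, and the purely illustrative interpolating choice $\hat{P}_1 = \cos t\,\hat{I}$, $\hat{P}_3 = \sin t\,\hat{I}$; NumPy is assumed.

```python
# Illustrative sketch: build C_i = sum_j |phi_j> (x) P_j for a toy one-qubit
# black hole and check the isometry condition C_i^dag C_i = I. The choice
# P_1 = cos(t) I, P_3 = sin(t) I is an assumption for illustration only.
import numpy as np

s = 1 / np.sqrt(2)
phi = [np.array([s, 0.0, 0.0, s]),     # Bell-type basis for the created pair
       np.array([s, 0.0, 0.0, -s]),
       np.array([0.0, 1.0, 0.0, 0.0]),
       np.array([0.0, 0.0, 1.0, 0.0])]

d, t = 2, 0.7                          # toy black hole dimension; mixing angle
P = [np.cos(t) * np.eye(d), np.zeros((d, d)),
     np.sin(t) * np.eye(d), np.zeros((d, d))]

# Completeness of the generalized measurement operators: sum_j P_j^dag P_j = I
completeness = sum(Pj.conj().T @ Pj for Pj in P)

# C maps the d-dim black hole space into the 4d-dim (pair x black hole) space
C = sum(np.kron(phi_j.reshape(4, 1), Pj) for phi_j, Pj in zip(phi, P))
isometry = C.conj().T @ C              # equals I_d, although C C^dag does not
```

Since the $\ket{\varphi_j}$ are orthonormal, $C^\dg C = \sum_j \hat{P}_j^\dg\hat{P}_j$, so the isometry condition is exactly the completeness relation on the $\hat{P}$s.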
Following~\cite{Mathur:2011wg, Giddings:2011ks}, we use the basis \begin{equation}\begin{aligned} \ket{\varphi^i_1} &= \frac{1}{\sqrt{2}}\big(\ket{\hat{0}_{n+i+1}}\ket{0_{i+1}} + \ket{\hat{1}_{n+i+1}}\ket{1_{i+1}}\big)\\ \ket{\varphi^i_2} &= \frac{1}{\sqrt{2}}\big(\ket{\hat{0}_{n+i+1}}\ket{0_{i+1}} - \ket{\hat{1}_{n+i+1}}\ket{1_{i+1}}\big)\\ \ket{\varphi^i_3} &= \ket{\hat{0}_{n+i+1}}\ket{1_{i+1}} \\ \ket{\varphi^i_4} &= \ket{\hat{1}_{n+i+1}}\ket{0_{i+1}}, \end{aligned}\end{equation} for the newly created pair; we frequently suppress the superscript $i$. The constraint in Equation~\eqref{eq:CdgC} implies the following condition on the $\hat{P}$s: \begin{equation}\label{eq:completeness} (C_i)^\dg C_i = \hat{P}_1^\dg\hat{P}_1 + \hat{P}_2^\dg\hat{P}_2 + \hat{P}_3^\dg\hat{P}_3 + \hat{P}_4^\dg\hat{P}_4 = \hat{I}. \end{equation} Note that this defines the $\hat{P}$s as a set of generalized measurement operators acting on the black hole Hilbert space. A fully specified model, then, entails \begin{enumerate} \item A set of $\hat{P}$s at each step $i$ that satisfy the completeness relation~\eqref{eq:completeness}. \item The unitary operators $\hat{U}_i$ and $U_i$ for each $i$. \item A clear delineation of the auxiliary subspace at each step $i$. \end{enumerate} The last item is frequently omitted in our discussion; it should be clear for unitary models, and it does not make a significant difference for the nonunitary models. If one wants to acquire the fixed-dimensional Hilbert space description, however, then one must trace out the auxiliary degrees of freedom at each step. This gives a very general model space that makes it easy to compare and contrast different models of evolution. \subsection{Physical Motivations}\label{sec:physics} At this point, the class of models introduced above may seem fairly abstract with little contact with the original black hole problem. Let us review the physical motivations for this type of model as laid out in~\cite{Mathur:2009hf} and in~\cite{Giddings:2011ks}.
We consider an initial configuration of spherically symmetric matter that forms a black hole, which we expect should be well described by the Schwarzschild solution. The model is based on the semiclassical evolution of fields in the background of such a Schwarzschild black hole. In order to give a Hilbert space description of evolution, it is necessary to specify a spacelike slicing of the geometry so that we can specify the quantum state of fields on each slice. It is important to the arguments advanced in~\cite{Mathur:2009hf} that there exists a ``nice slicing'' of the black hole geometry~\cite{Lowe:1995ac}. This slicing avoids the geometry's strong curvature, has sub-Planck scale extrinsic and intrinsic curvature, and yet cuts through the initial matter, horizon, and outgoing Hawking radiation in a smooth way~\cite{Lowe:1995ac, Mathur:2009hf}. Thus, all quantum gravity effects seem to be under control. Our model should be considered an effective description of the dynamics on the slicing. As is well known, in the presence of curved backgrounds the quantum field theory notion of particle becomes observer dependent. If we expand our fields on the slice into modes inside the horizon and outside the horizon, then one finds that pairs of particles are created inside and outside of the horizon. More explicitly there is a Bogoliubov transformation such that the in vacuum evolves to a state of the form $\exp(\gamma a_\text{inside}^\dg a_\text{outside}^\dg)$ acting on the out vacuum. We reduce our problem to an essentially two-dimensional one by expanding the modes in spherical harmonics. From a two-dimensional perspective, each harmonic corresponds to a different field. Then following~\cite{Hawking:1974sw, Giddings:1992ff}, as emphasized in this context in~\cite{Giddings:2011ks}, we can use a set of modes that are localized wavepackets so that we can talk about locality. This is implicit in the discussion of~\cite{Mathur:2009hf}. 
Moreover, we can truncate the Fock space to occupation numbers zero or one. We are then effectively left with a discussion of qubits, with $\ket{0}$ representing no excitation and $\ket{1}$ representing an excitation. In this description, a pair of particles is created roughly every $M$ in Planck units, with the outgoing particles traveling freely outward on the slices and the ingoing particles traveling freely inward toward the initial matter that is very far away on each slice. The pair of particles is created entangled, as should be clear from the above exponential. This all suggests an effective, discrete time evolution with~\cite{Mathur:2009hf, Mathur:2010kx, Mathur:2011wg, Mathur:2011uj, Giddings:2011ks} \begin{equation} \hat{P}_1 = \hat{I} \qquad \hat{P}_{2,3,4} = 0\qquad \hat{U}=U = I. \end{equation} Because the particles are well-separated on the slice, we do not expect interparticle interactions to be significant, which is represented by the choice $\hat{U}=U=I$. We call this point in model space the Hawking model. The location of the particles on the slice can then be read off from the states in the following way. Consider, for illustrative purposes, $n=3$ with a state of the form \begin{equation} \ket{\hat{q}_1\hat{q}_2\hat{q}_3}_\text{initial}\ket{\hat{1}_4\hat{0}_5}_\text{infalling}\ket{0_2 1_1}_\text{outgoing}. \end{equation} The first three hatted qubits represent the initial infalling matter. Note that in the Hawking model this matter plays no role in the evolution. In general, we imagine that we find some qubit description of the initial matter, the details of which are irrelevant to our concerns here. The next two hatted qubits represent the infalling Hawking radiation as it travels inward on the slices. We see that on the first time step, a particle was emitted, but not on the second time step.
The two unhatted qubits represent the outgoing radiation, so the above implies an outgoing particle was emitted on the first time step and then no particle on the second time step. We have written the qubits in the above order so that reading from left to right loosely corresponds to traveling outward on the slice. This allows us to talk about a coarse form of locality~\cite{Mathur:2009hf, Giddings:2011ks}. Note that in the Hawking model the above state would be superposed with several other direct product states. As discussed in the Introduction and for this context in~\cite{Mathur:2009hf, Mathur:2011uj, Mathur:2010kx, Giddings:2011ks}, the above semiclassical description of Hawking evaporation is incomplete. In particular, one expects quantum gravity effects, backreaction, and interactions to play a role. Because of the nature of the nice slicing and the low curvature at the horizon, however, one generally expects all of these corrections to be small. That is to say, on the pair creation time scale, one expects from naive estimates that the dynamics lie within $\epsilon$ of the above. This is in considerable tension with the expectation that the dynamics are unitary. For instance, we might introduce a set of $\hat{P}$s that act on the last emitted ingoing particle. This would suggest that the horizon is not effectively the vacuum and still ``remembers'' the previous emission. One might also allow some mild nonlocal interactions inside the black hole via some nearest-neighbor $\hat{U}_i$s. Or, motivated by holography and fast scrambling~\cite{Hayden:2007cs, Sekino:2008he, Susskind:2011ap}, one might consider general $\hat{U}_i$s, in which case one gives up all notions of locality on the slice inside the black hole. The distinction between the initial matter and the infalling particles is also lost.
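The tension between small corrections and unitarity can be previewed numerically. In the sketch below (ours, not from the paper; NumPy assumed), each created pair is deformed an angle $\epsilon$ away from $\ket{\varphi_1}$ to $\cos\epsilon\ket{\varphi_1} + \sin\epsilon\ket{\varphi_2}$: the per-pair entanglement entropy stays near $\ln 2$, so the radiation entropy still grows monotonically, in line with Mathur's bound.

```python
# Illustrative sketch: deform each created pair to
# cos(eps)|phi_1> + sin(eps)|phi_2>. In the (hatted, radiation) basis this is
# ((cos + sin)|00> + (cos - sin)|11>)/sqrt(2), with Schmidt weights below.
import numpy as np

eps = 0.1   # illustrative small correction
lam = np.array([(np.cos(eps) + np.sin(eps))**2,
                (np.cos(eps) - np.sin(eps))**2]) / 2
S_pair = float(-np.sum(lam * np.log(lam)))   # per-pair entropy, close to ln 2

# The pairs are created independently, so after i emissions the radiation
# entanglement entropy is simply i * S_pair: still monotonically growing.
S_rad = [i * S_pair for i in range(1, 6)]
```

For small $\epsilon$ the entropy growth rate is only perturbatively below $\ln 2$, so no small deformation of this kind can make the entropy turn over.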
Allowing general internal dynamics does not affect the argument of~\cite{Mathur:2009hf} and its generalization in Section~\ref{sec:bound}, which relies only on the pair creation operators, the $\hat{P}$s, being close to those of the Hawking model. We will always use $U=I$, since there is no physical motivation to consider strong interactions among the outgoing radiation. (Such corrections would also be irrelevant to Mathur's bound on the entanglement entropy.) This should become clear after we examine some examples. \section{Examples}\label{sec:models} In this section, we highlight some special points in model space that may be of interest and/or were discussed in the recent literature. One of the main results of this paper is the one-parameter family of models presented in Section~\ref{sec:one-par} that continuously deforms the Hawking pair production in Section~\ref{sec:hawking} into the unitary evolution in Section~\ref{sec:G2}; however, it is also useful to write the various models in a common form, so that the similarities and differences are manifest. This is especially true when one wants to compare unitary models to nonunitary models. We start with the canonical Hawking evaporation model. This should be thought of as the baseline model to which all other models should be compared. \subsection{The Hawking Model}\label{sec:hawking} The standard Hawking evaporation corresponds to creating a new pair in the state $\ket{\varphi_1}$ irrespective of the state of the system, as discussed at length in Section~\ref{sec:physics}. Thus, it can be written as \begin{equation}\label{eq:hawking} C^{H}_i = \ket{\varphi_1}\otimes \hat{I}\qquad \hat{U}=U = I, \end{equation} and so we can write the $\hat{P}$s as \begin{equation} \hat{P}_1 = \hat{I} \qquad \hat{P}_{2,3,4} = 0.
\end{equation} In~\cite{Mathur:2009hf}, Mathur showed that if the created pair is at most $\epsilon$ away from $\ket{\varphi_1}$, then the entanglement entropy of the radiation continues to grow with each step and thus the final state is mixed and unitarity is lost. In this language, the bound shows that if the $\hat{P}$s are small deformations from the above, then the final state will be mixed and the evolution will not be unitary. In the sequel, we demonstrate this quite explicitly. \subsection{A Burning Paper Model}\label{sec:paper} Here, we present a unitary ``burning paper'' model that is equivalent to the one given in~\cite{Mathur:2011wg}:\footnote{Throughout the discussion, the reader may assume that the identity acts on any subspaces which are not explicitly shown.} \begin{equation}\begin{gathered}\label{eq:paper} \hat{P}_1 = \hat{P}_2 = \left[\frac{1}{\sqrt{2}}\ket{\hat{0}\hat{0}}\bra{\hat{0}\hat{0}} + \frac{1}{2}\ket{\hat{1}\hat{0}}\bra{\hat{1}\hat{0}} - \frac{1}{2}\ket{\hat{1}\hat{0}}\bra{\hat{0}\hat{1}}\right]_{n+i-1, n+i}\\ \hat{P}_3 = \left[\ket{\hat{1}\hat{0}}\bra{\hat{1}\hat{1}} + \frac{1}{\sqrt{2}}\ket{\hat{0}\hat{0}}\bra{\hat{1}\hat{0}} + \frac{1}{\sqrt{2}}\ket{\hat{0}\hat{0}}\bra{\hat{0}\hat{1}}\right]_{n+i-1,n+i}\\ \hat{P}_4 = 0, \end{gathered}\end{equation} where the subscript on the brackets indicates that the qubits referred to are the $(n+i-1)$th and the $(n+i)$th qubits. It should be clear that this model is quite far from the Hawking model. 
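One can verify directly that the burning-paper operators above are a legitimate set of generalized measurement operators. The following sketch (ours, not from the paper's code; NumPy assumed) checks the completeness relation on the two-qubit subspace the operators act on.

```python
# Illustrative sketch: check that the burning-paper operators satisfy
# P_1^dag P_1 + P_2^dag P_2 + P_3^dag P_3 + P_4^dag P_4 = I on the
# two-qubit subspace (qubits n+i-1 and n+i) on which they act.
import numpy as np

def ket(bits):
    """Computational basis vector |bits> on two qubits."""
    v = np.zeros(4)
    v[int(bits, 2)] = 1.0
    return v

def op(out_bits, in_bits):
    """The operator |out><in| on the two-qubit subspace."""
    return np.outer(ket(out_bits), ket(in_bits))

r2 = 1 / np.sqrt(2)
P1 = r2 * op('00', '00') + 0.5 * op('10', '10') - 0.5 * op('10', '01')
P2 = P1
P3 = op('10', '11') + r2 * op('00', '10') + r2 * op('00', '01')
P4 = np.zeros((4, 4))

completeness = sum(P.conj().T @ P for P in (P1, P2, P3, P4))
```

The sum collapses to $\ket{00}\bra{00}+\ket{01}\bra{01}+\ket{10}\bra{10}+\ket{11}\bra{11} = \hat{I}$, confirming Equation~\eqref{eq:completeness} for this model.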
These operators model the creation step as \begin{multline} C_i = \ket{\hat{0}_{n+i+1}}\ket{1_{i+1}}\otimes \left[\ket{\hat{1}\hat{0}}\bra{\hat{1}\hat{1}} + \frac{1}{\sqrt{2}}\ket{\hat{0}\hat{0}}\bra{\hat{1}\hat{0}} + \frac{1}{\sqrt{2}}\ket{\hat{0}\hat{0}}\bra{\hat{0}\hat{1}}\right]_{n+i-1,n+i}\\ + \ket{\hat{0}_{n+i+1}}\ket{0_{i+1}}\otimes \left[\ket{\hat{0}\hat{0}}\bra{\hat{0}\hat{0}} + \frac{1}{\sqrt{2}}\ket{\hat{1}\hat{0}}\bra{\hat{1}\hat{0}} - \frac{1}{\sqrt{2}}\ket{\hat{1}\hat{0}}\bra{\hat{0}\hat{1}}\right]_{n+i-1,n+i}. \end{multline} The important property of the evolution to note is that the $(n+i+1)$th black hole qubit is always $\hat{0}$, as is the $(n+i)$th qubit. Thus these two qubits are ``zeroed,'' and effectively deactivated. We use the word zeroed in this sense, even if the qubit under discussion is deactivated to a different value. (It could even be something like $i\,\mathrm{mod}\,2$. In this situation, the information is sometimes said to be ``bleached'' out of the state.) It is clear, then, that these two qubits should be thought of as the auxiliary qubits at intermediate steps. We'll discuss this a bit more in Section~\ref{sec:unitarity}. We also introduce some interesting internal dynamics. First, we need to move the auxiliary qubits out of the way, so that they don't affect the next radiation step. We first cyclically shift all of the qubits two positions to the right, thus shoving the two $\hat{0}$s to the two leftmost positions, $1$ and $2$. Then, we introduce some dynamics for the physical degrees of freedom. We cyclically shift \emph{only} the nonauxiliary qubits to the right by one unit. This defines $\hat{U}$ so that the model agrees with the burning paper model studied in~\cite{Mathur:2011wg}. If we chop off the zeroed qubits, we recover the model exactly. The first model in~\cite{Giddings:2011ks} is in the same class of models. It too zeroes two qubits, one of which is the newly created black hole qubit.
The main difference is that the radiation is determined not by the two rightmost hatted qubits but by the leftmost qubit. This model may be written as \begin{equation}\begin{gathered}\label{eq:G1} \hat{P}_1 = \hat{P}_2 = \frac{1}{\sqrt{2}}\ket{\hat{0}_{i+1}}\bra{\hat{0}_{i+1}}\otimes \hat{u}\\ \hat{P}_3 = \ket{\hat{0}_{i+1}}\bra{\hat{1}_{i+1}}\otimes \hat{u}', \end{gathered}\end{equation} where $\hat{u}$ and $\hat{u}'$ are unitary operators acting on the remaining hatted qubits. While it is not stated in~\cite{Giddings:2011ks}, we should require that $\hat{u}$ and $\hat{u}'$ do not mix the first $i+1$ or the last $i$ auxiliary hatted qubits with the remaining physical qubits. The simplest case is to take $\hat{u}=\hat{u}' = \hat{I}$. In this model, the entanglement entropy of the radiation is always zero, which contrasts with our expectations from~\cite{Page:1993df}. \subsection{``Nonlocal'' Unitary Evolution}\label{sec:G2} In~\cite{Giddings:2011ks}, Giddings presents three unitary models of evolution. We focus on the second, which can be written in our notation as \begin{equation}\begin{aligned}\label{eq:G2} \hat{P}_1 &= \ket{\hat{0}_{2i+1}\hat{0}_{2i+2}}\bra{\hat{0}_{2i+1}\hat{0}_{2i+2}}\\ \hat{P}_2 &= \ket{\hat{0}\hat{0}}\bra{\hat{1}\hat{1}}\\ \hat{P}_3 &= \ket{\hat{0}\hat{0}}\bra{\hat{0}\hat{1}}\\ \hat{P}_4 &= \ket{\hat{0}\hat{0}}\bra{\hat{1}\hat{0}} \end{aligned}\qquad \hat{U} = \hat{I},\end{equation} where we have suppressed the $(2i+1)$ and $(2i+2)$ subscripts in all but the first $\hat{P}$. One sees that as in the models presented in Section~\ref{sec:paper}, two hatted qubits are zeroed at each step. In this case, they are the $(2i+1)$th and $(2i+2)$th qubits. Thus, as the evolution progresses, the hatted qubits are gradually put into a fiducial form. By the $i$th step, the first $2i$ qubits are zeroed, and should be thought of as auxiliary.
Note that the above evolution rule breaks down on the penultimate step, when there is only one nonauxiliary qubit left. By then, we expect the black hole to be on the Planck scale, and so we can just emit the last qubit freely. This model corresponds to the $\theta=\frac{\pi}{2}$ point of the one-parameter family presented in Section~\ref{sec:one-par}. This model is nonlocal when one considers it in the original nice slicing of the black hole. In this context, the $(2i+1)$th and $(2i+2)$th qubits are very far from the pair creation site at the horizon. Note that this property is shared by the model in Equation~\eqref{eq:G1}, and to a lesser extent by the model in Equation~\eqref{eq:paper}; the difference is how far away the zeroed qubits are. One can either interpret these unitary models as nonlocal interactions transmitting information far down the nice slice to the horizon~\cite{Giddings:2011ks}, or in terms of fuzzball microstates altering the state at the horizon, or as burning paper; these information-theoretic models are too crude to distinguish between them. This point is discussed further in the Conclusion. \subsection{A Pure, but Not Invertible Model}\label{sec:G1-broken} There are several models of this kind. The simplest to consider is \begin{equation} \hat{P}_{1,2,4} = 0\qquad \hat{P}_3 = \hat{I}\qquad \hat{U}=\hat{I}. \end{equation} In this model, regardless of the state of the system, the new pair is created in the state $\ket{\varphi_3}$. The state $\ket{\varphi_3}$ is not entangled, and thus we can think of this model as zeroing the new black hole qubit \emph{and} the new radiation qubit. Instead of putting the internal qubits into a fiducial form, we put the radiation into a fiducial form. We hope that this convincingly demonstrates that purity of the final state does not ensure unitarity.
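The loss of invertibility can be made concrete in a toy simulation. The sketch below (using NumPy, with the pair state taken to be $\ket{\varphi_3}=\ket{\hat{0}}\ket{1}$ and a qubit-ordering convention of our own choosing) evolves two orthogonal initial black hole states and confirms that the radiation is pure but identical in both cases:

```python
import numpy as np

def step(psi):
    """One emission step of the P3 = I model: independent of the black hole
    state, the new pair is created in the product state |phi_3>, taken here
    to be |b=0>|r=1>.  The new black hole qubit goes on the far left, the
    new radiation qubit on the far right (an illustrative convention)."""
    b = np.array([1.0, 0.0])   # the zeroed black hole qubit |0>
    r = np.array([0.0, 1.0])   # the radiation qubit, always |1>
    return np.kron(np.kron(b, psi), r)

def radiation_rho(psi, k):
    """Reduced density matrix of the last k (radiation) qubits."""
    m = psi.reshape(-1, 2 ** k)
    return m.T @ m.conj()

# two orthogonal initial 2-qubit black hole states, evolved for three steps
psi_a = np.zeros(4); psi_a[0] = 1.0   # |00>
psi_b = np.zeros(4); psi_b[3] = 1.0   # |11>
for _ in range(3):
    psi_a, psi_b = step(psi_a), step(psi_b)

rho_a, rho_b = radiation_rho(psi_a, 3), radiation_rho(psi_b, 3)
assert np.isclose(np.trace(rho_a @ rho_a).real, 1.0)  # radiation is pure...
assert np.allclose(rho_a, rho_b)  # ...but identical for orthogonal inputs
```

The final radiation state is always $\ket{111}$, so no measurement on the radiation can distinguish the two initial states.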
Another interesting example of (almost) pure but not invertible evolution is the model in Equation~\eqref{eq:G1}, with \begin{equation}\label{eq:G1-broken} \hat{u} = \hat{u}' = \hat{S}^{i+2, n+i}_1, \end{equation} where $\hat{S}^{i+2,n+i}_1$ is the operator that cyclically shifts the $(i+2)$th through $(n+i)$th qubits to the left. For example, consider the evolution: \begin{equation}\begin{aligned}\label{eq:samp-run} &\ket{\hat{0}\hat{q}_1\hat{q}_2\cdots\hat{q}_{n-2}\hat{0}}\\ \xrightarrow{C_0} &\ket{\hat{0}\hat{0}\hat{q}_1\hat{q}_2\cdots\hat{q}_{n-2}\hat{0}}\ket{0}\\ \xrightarrow{C_1} &\ket{\hat{0}\hat{0}\hat{0}\hat{q}_1\hat{q}_2\cdots\hat{q}_{n-2}\hat{0}}\ket{00}\\ \xrightarrow{C_2} &\ket{\hat{0}\hat{0}\hat{0}\hat{0}\hat{q}_1\hat{q}_2\cdots\hat{q}_{n-2}\hat{0}}\ket{000}\\ \vdots \end{aligned}\end{equation} In the above example, the radiation is never entangled with the black hole degrees of freedom, but its state is determined only by the first and last qubits of the initial state. Thus, the evolution is not unitary. This is a potential problem with the model even as it is defined in~\cite{Giddings:2011ks}. What happened? In essence, the model keeps ``reading'' and zeroing the same qubit, which has the net effect of zeroing the radiation qubits as well as the new black hole qubits. This illustrates that unitary evolution can be lost when auxiliary qubits mix with nonauxiliary qubits.
This model is not quite pure for arbitrary initial states, since \begin{equation}\begin{aligned} &\ket{\hat{1}\hat{q}_1\hat{q}_2\cdots\hat{q}_{n-2}\hat{1}}\\ \xrightarrow{C_0} &\ket{\hat{0}\hat{1}\hat{q}_1\hat{q}_2\cdots\hat{q}_{n-2}\hat{0}}\ket{1}\\ \xrightarrow{C_1} &\ket{\hat{0}\hat{0}\hat{0}\hat{q}_1\hat{q}_2\cdots\hat{q}_{n-2}\hat{0}}\ket{11}\\ \xrightarrow{C_2} &\ket{\hat{0}\hat{0}\hat{0}\hat{0}\hat{q}_1\hat{q}_2\cdots\hat{q}_{n-2}\hat{0}}\ket{011}\\ \vdots \end{aligned}, \end{equation} which gives a pure final state, but if one considers a nontrivial superposition of the above initial state and the one in~\eqref{eq:samp-run} then the final state is mixed. Note that the radiation only carries two qubits of information about the initial state, and so the entropy of the final state is very small although nonvanishing on some initial states. It is in this sense that we call the evolution almost pure. \subsection{An Impure Model} A generic model that one constructs leads to impure evolution. When thinking about the different forms that $\hat{P}$s can take, a particularly natural variation on the Section~\ref{sec:G2} model to consider might be \begin{equation}\begin{aligned} \hat{P}_1 &= \ket{\hat{0}\hat{0}}\bra{\hat{0}\hat{0}}\\ \hat{P}_2 &= \ket{\hat{1}\hat{1}}\bra{\hat{1}\hat{1}}\\ \hat{P}_3 &= \ket{\hat{0}\hat{1}}\bra{\hat{0}\hat{1}}\\ \hat{P}_4 &= \ket{\hat{1}\hat{0}}\bra{\hat{1}\hat{0}}, \end{aligned}\end{equation} where as in~\eqref{eq:G2} the above operators act on the $(2i+1)$th and $(2i+2)$th qubits. It should be clear that this model leads to mixed states. Note that it is also a large deformation from the Hawking model. It is easy to see that this model is not invertible either, by considering $\ket{\hat{0}\hat{0}}$ and $\ket{\hat{1}\hat{1}}$ as initial states. \subsection{Mathur--Plumberg Shift--Anti-Shift Models}\label{sec:shift} In~\cite{Mathur:2011wg}, Mathur and Plumberg present several models. We can write their ``Model A,'' in the following way. 
Let $\hat{T}_j$ be the operator that cyclically shifts \emph{only} the newly created (not the first $n$) hatted qubits to the right by $j$. Then, the model is of the form \begin{equation} \hat{P}_1 = \lambda_1\hat{T}_1\qquad \hat{P}_2 = \lambda_2\hat{T}_{-1} \qquad \hat{P}_{3,4}=0, \end{equation} where the completeness relation requires \begin{equation}\label{eq:model-A-constraint} |\lambda_1|^2 + |\lambda_2|^2 = 1. \end{equation} More generally, we can replace $\hat{T}_1$ and $\hat{T}_{-1}$ by other unitary transformations that act on the hatted qubits. The smallness of the corrections to Hawking is loosely determined by $\lambda_2$; see~\cite{Mathur:2011wg} for more details and numerical results. The ``Model B'' of~\cite{Mathur:2011wg} may be written as \begin{equation} \hat{P}_1 = \lambda_1\,\hat{T}_1\otimes \ket{\hat{1}_{i+1}}\bra{\hat{1}_{i+1}} + \hat{I}\otimes\ket{\hat{0}_{i+1}}\bra{\hat{0}_{i+1}} \quad \hat{P}_2 = \lambda_2\,\hat{T}_{-1}\otimes\ket{\hat{1}_{i+1}}\bra{\hat{1}_{i+1}}\quad \hat{P}_{3,4}=0, \end{equation} where one can confirm that the completeness relation imposes the same constraint~\eqref{eq:model-A-constraint}. Note that neither of the above models zeroes any qubits, and so one does not expect information about the initial matter to be transmitted out in the radiation. \subsection{Mathur ``Ising'' Model} In~\cite{Mathur:2010kx}, Mathur presents a model that can be mapped onto the one-dimensional Ising model and thus solved analytically. The model can be written in the form \begin{equation} \hat{P}_1 = \lambda_1 \left[\ket{\hat{0}}\bra{\hat{0}} + \ket{\hat{1}}\bra{\hat{1}}\right]_{n+i}\qquad \hat{P}_2 = \lambda_2 \left[\ket{\hat{0}}\bra{\hat{0}} - \ket{\hat{1}}\bra{\hat{1}}\right]_{n+i}\qquad \hat{P}_{3,4}=0, \end{equation} where $\lambda_1$ and $\lambda_2$ are related to the parameters $a$ and $b$ in~\cite{Mathur:2010kx} via \[ \lambda_1 = \frac{e^a + e^b}{2}\qquad \lambda_2 = \frac{e^a-e^b}{2}. 
\] The above rule is not valid for the first step, for which we use the Hawking model ($\lambda_1 =1$ and $\lambda_2 = 0$). The parameters $\lambda_1$ and $\lambda_2$ are subject to the same constraint as before \begin{equation} |\lambda_1|^2 + |\lambda_2|^2 = 1. \end{equation} In fact, this model is qualitatively the same as the model in Section~\ref{sec:shift}: both take $\hat{P}_1$ and $\hat{P}_2$ proportional to unitary transformations, and $\hat{P}_3=\hat{P}_4=0$. The expression for the final-state entropy can be written in the form~\cite{Mathur:2010kx} \begin{equation} S = n\log 2 - (n-1)\left[ae^{2a} + be^{2b}\right]. \end{equation} Let us note that for no value of $a$ and $b$ that solves the completeness relation does this vanish. Thus, no matter how large the corrections are in this model, unitarity is lost; however, one \emph{can} set $\lambda_1 = \lambda_2$ in which case the entanglement entropy of the radiation is $\log 2$ for all time. In this case, the pair creation (after the first step) is given by \begin{equation} C_i = \ket{\hat{0}_{n+i+1}}\ket{0_i}\otimes\ket{\hat{0}_{n+i}}\bra{\hat{0}_{n+i}} +\ket{\hat{1}_{n+i+1}}\ket{1_i}\otimes\ket{\hat{1}_{n+i}}\bra{\hat{1}_{n+i}}. \end{equation} One sees that after the first step, no further entanglement between the hatted and unhatted qubits is generated. One might think more entanglement would be generated when the above acts on qubits in a superposition of $1$ and $0$, but since it acts only on previously created pairs, this issue does not arise. One can imagine that the model breaks down when the black hole becomes very small and the last qubit is emitted freely, so this model is effectively pure in this case. Let us note that keeping the entanglement entropy of the radiation constant is in stark contrast with the expectations we have from Page~\cite{Page:1993df}; there is no characteristic rise and fall of the entanglement entropy of the radiation.
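As a sanity check on the entropy formula, one can scan the constraint surface numerically. In the sketch below (the parameterization $u=e^{2a}$, $v=e^{2b}$ with $u+v=2$ is our own), the final-state entropy stays above $\log 2$ for every admissible $a$ and $b$:

```python
import numpy as np

n = 20                               # number of initial qubits (illustrative)
u = np.linspace(0.01, 1.99, 999)     # u = e^{2a}; completeness forces u + v = 2
v = 2.0 - u
a, b = 0.5 * np.log(u), 0.5 * np.log(v)

# the completeness relation |lambda_1|^2 + |lambda_2|^2 = 1 holds identically
lam1 = 0.5 * (np.exp(a) + np.exp(b))
lam2 = 0.5 * (np.exp(a) - np.exp(b))
assert np.allclose(lam1**2 + lam2**2, 1.0)

# final-state entropy S = n log 2 - (n-1) [a e^{2a} + b e^{2b}]  (in nats)
S = n * np.log(2) - (n - 1) * (a * u + b * v)

# the entropy never vanishes; it is bounded below by log 2
assert S.min() > np.log(2)
```

The lower bound $\log 2$ is approached only in the degenerate limit $u\to 2$ (i.e.\ $b\to-\infty$), consistent with the claim that unitarity is never restored.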
Furthermore, the final state consisting entirely of radiation is \emph{completely} independent of the initial matter that formed the black hole; the evolution is (almost) pure but far from invertible, like the model discussed in Section~\ref{sec:G1-broken}. This model, while interesting to consider, never leads to unitary evolution, even when one considers arbitrarily large deformations from the Hawking point in model space. \section{Review and Generalization of Mathur's Argument}\label{sec:bound} In~\cite{Mathur:2009hf}, Mathur argues that small corrections to the pair creation process are insufficient to restore unitarity. More specifically, he demonstrates that small corrections do not accumulate. Let us define $S_i$ as the entanglement entropy of the radiation with the rest of the system at step $i$. If the black hole evaporation process is unitary, then in the limit of large $n$ we expect $S_i$ to rise linearly with $i$ until about the halfway point, $i=n/2$, and then rapidly turn over and fall linearly to zero on the final step~\cite{Page:1993wv}. On the other hand, if one uses the Hawking model of evolution $C^H$, then one sees that $S_i$ increases by $\log 2$ at each time step: \begin{equation} S^{\text{Hawking}}_i = i \log 2; \end{equation} there is no turnover. This is also the maximum entropy that is possible for the $i$ radiation qubits, which indicates that the radiation carries no information about the initial state. The central insight in Mathur's argument~\cite{Mathur:2009hf} is that the marginal increase in entanglement entropy, \begin{equation} \Delta S_i = S_{i+1}-S_i, \end{equation} varies smoothly with small deformations away from the Hawking model, and thus a large deformation is needed to make $\Delta S$ negative if one starts from the Hawking model's $\log 2$.
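The linear growth $S^{\text{Hawking}}_i = i\log 2$ is easy to reproduce in a toy simulation, appending one Bell pair per step. In the sketch below (the qubit ordering is a convention of our own choosing, with radiation qubits kept rightmost), the entanglement entropy of the radiation increases by exactly $\log 2$ at each step:

```python
import numpy as np

def hawking_step(psi):
    """Append a Bell pair (|00>+|11>)/sqrt(2): the new black hole qubit is
    placed at the far left, its radiation partner at the far right."""
    e0, e1 = np.eye(2)
    return (np.kron(np.kron(e0, psi), e0)
            + np.kron(np.kron(e1, psi), e1)) / np.sqrt(2)

def entropy_of_last(psi, k):
    """von Neumann entropy (in nats) of the last k qubits of state psi."""
    m = psi.reshape(-1, 2 ** k)
    lam = np.linalg.eigvalsh(m.T @ m.conj())
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

psi = np.zeros(4); psi[0] = 1.0           # a 2-qubit initial black hole state
for i in range(1, 4):
    psi = hawking_step(psi)
    # entropy of the i radiation qubits grows as i log 2, with no turnover
    assert np.isclose(entropy_of_last(psi, i), i * np.log(2))
```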
In~\cite{Mathur:2009hf}, only a $\hat{P}_2$-type deformation to the Hawking model was explicitly considered; however, we demonstrate below that the argument generalizes to all deformations in the model space. In particular, we claim that, in the class of models discussed in this paper, if \begin{equation} \|C_i - C_i^H\|<\varepsilon< 1, \end{equation} then \begin{equation}\label{eq:my-bound} \Delta S_i \geq \log 2 - k_\varepsilon, \end{equation} where $k_\varepsilon$ is parametrically small, positive, and vanishes as $\varepsilon$ goes to zero. In fact, we demonstrate below that $k_\varepsilon$ behaves no worse than $k_\varepsilon\sim -9\varepsilon\log\varepsilon$ as $\varepsilon$ approaches zero. Above, $\|\cdot\|$ is the operator norm. For an operator $O$, $\|O\|$ is the square root of the largest eigenvalue of $O^\dg O$.\footnote{In fact, the result is unchanged by using any other norm that is compatible with the Hilbert space norm.} The proof follows that in~\cite{Mathur:2009hf} with some minor modifications. To begin, we use strong subadditivity of (von Neumann) entanglement entropy. Throughout, without loss of generality, we set $\hat{U}=U=I$ since they do not affect the entanglement entropies. Let us consider the $(i+1)$th state \begin{equation} \ket{\psi_{i+1}} = C_i\ket{\psi_i}. \end{equation} Let $R_i$ denote the first $i$ emitted radiation qubits, $r$ denote the $(i+1)$th emitted radiation qubit, $B_i$ denote the first $n+i$ black hole qubits, and $b$ denote the $(n+i+1)$th black hole qubit. Then, strong subadditivity implies \begin{equation} S(R_i\cup r)+S(r\cup b)\geq S(R_i) + S(b). \end{equation} By definition, $S(R_i\cup r)$ is simply $S_{i+1}$, whereas, since $C_i$ acts trivially on the emitted radiation, $S(R_i) = S_i$. Thus, we can write the above as \begin{equation} \Delta S_i \geq S(b) - S(r\cup b). \end{equation} Note that for the Hawking model $S(b) = \log 2$ and $S(r\cup b) = 0$, and the bound is saturated.
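The form of strong subadditivity used here, $S(AB)+S(BC)\geq S(A)+S(C)$ (sometimes called weak monotonicity), can be spot-checked numerically on random pure states. A small sketch (the partition into one qubit each for $R_i$, $r$, $b$, and $B_i$ is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(psi, keep, n=4):
    """von Neumann entropy (nats) of the qubits in `keep` of an n-qubit pure state."""
    t = psi.reshape((2,) * n)
    out = [i for i in range(n) if i not in keep]
    rho = np.tensordot(t, t.conj(), axes=(out, out))
    d = 2 ** len(keep)
    lam = np.linalg.eigvalsh(rho.reshape(d, d))
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

for _ in range(50):
    psi = rng.normal(size=16) + 1j * rng.normal(size=16)
    psi /= np.linalg.norm(psi)
    # qubit 0 ~ R_i, qubit 1 ~ r, qubit 2 ~ b, qubit 3 ~ B_i
    lhs = entropy(psi, [0, 1]) + entropy(psi, [1, 2])
    rhs = entropy(psi, [0]) + entropy(psi, [2])
    assert lhs >= rhs - 1e-9   # S(R r) + S(r b) >= S(R) + S(b)
```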
Now, we need to use the condition on $C_i$ to place bounds on $S(b)$ and $S(r\cup b)$. The operator norm is compatible with the Hilbert space norm, which means \begin{equation} \|(C_i - C^H)\ket{\lambda}\|\leq \|C_i - C^H\|\,\|\ket{\lambda}\| \end{equation} for all $\ket{\lambda}$. Applying this to $\ket{\psi_i}$ gives the condition \begin{equation}\label{eq:ket-bound} \|\ket{\psi_{i+1}} - C_i^H\ket{\psi_i}\| < \varepsilon. \end{equation} We may write the two kets in the form \begin{equation} \ket{\psi_{i+1}} = \alpha_1 \ket{\varphi_1}\ket{\Lambda_1} + \alpha_2 \ket{\varphi_2}\ket{\Lambda_2} + \alpha_3 \ket{\varphi_3}\ket{\Lambda_3} + \alpha_4 \ket{\varphi_4}\ket{\Lambda_4},\qquad C_i^H\ket{\psi_i} = \ket{\varphi_1}\ket{\Lambda_0}, \end{equation} where the $\ket{\Lambda_i}$ are normalized, but not necessarily orthogonal kets in the $n+2i$ qubit space $R_i\cup B_i$. Of course normalization demands that \begin{equation} |\alpha_1|^2 + |\alpha_2|^2+|\alpha_3|^2+|\alpha_4|^2 = 1. \end{equation} One can show that the condition~\eqref{eq:ket-bound} implies that \begin{equation} \Re\big(\alpha_1 \braket{\Lambda_0|\Lambda_1}\big) > 1-\frac{\varepsilon^2}{2}, \end{equation} and since $|\braket{\Lambda_0|\Lambda_1}|\leq 1$, we see \begin{equation} 1-\frac{\varepsilon^2}{2}<|\alpha_1|\leq 1. \end{equation} This, in turn, implies that \begin{equation}\label{eq:delta-def} 1-\delta^2 <|\alpha_1|^2<1,\qquad |\alpha_2|,|\alpha_3|,|\alpha_4|<\delta = \varepsilon\sqrt{1-\frac{\varepsilon^2}{4}}, \end{equation} where we have defined $\delta$ for convenience. Note that $\delta$ is what should be directly compared with $\epsilon$ in~\cite{Mathur:2009hf}. Let us use the above to place an upper bound on $S(r\cup b)$. 
The corresponding reduced density matrix is given by \begin{equation} \rho_{rb} = \begin{pmatrix} |\alpha_1|^2 & \alpha_1\alpha_2^*\braket{\Lambda_2|\Lambda_1} & \alpha_1\alpha_3^*\braket{\Lambda_3|\Lambda_1} & \alpha_1\alpha_4^*\braket{\Lambda_4|\Lambda_1}\\ \alpha_1^*\alpha_2\braket{\Lambda_1|\Lambda_2} & |\alpha_2|^2 & \alpha_2\alpha_3^*\braket{\Lambda_3|\Lambda_2} & \alpha_2\alpha_4^*\braket{\Lambda_4|\Lambda_2}\\ \alpha_1^*\alpha_3\braket{\Lambda_1|\Lambda_3} & \alpha_2^*\alpha_3\braket{\Lambda_2|\Lambda_3} & |\alpha_3|^2 & \alpha_3\alpha_4^*\braket{\Lambda_4|\Lambda_3}\\ \alpha_1^*\alpha_4\braket{\Lambda_1|\Lambda_4} & \alpha_2^*\alpha_4\braket{\Lambda_2|\Lambda_4} & \alpha_3^*\alpha_4\braket{\Lambda_3|\Lambda_4} & |\alpha_4|^2 \end{pmatrix}. \end{equation} We can now use Audenaert's optimal generalization~\cite{audenaert} of Fannes' inequality, which places a bound on the difference in entropy of two $d$-dimensional density matrices $\rho$ and $\sigma$.\footnote{One could instead use Fannes' inequality for a weaker bound with stronger restrictions on $\varepsilon$.} Let $T$ be the trace distance between $\rho$ and $\sigma$, then~\cite{audenaert} \begin{equation}\label{eq:fannes} |S(\rho) - S(\sigma)|\leq T\log (d-1) - T\log T-(1-T)\log(1-T), \end{equation} where the trace distance is defined as \begin{equation} T = \frac{1}{2}\tr\left[\sqrt{(\rho-\sigma)^\dg(\rho-\sigma)}\right], \end{equation} or one-half the sum of the absolute value of the eigenvalues of $\rho-\sigma$. Note that the above definition of the trace distance differs by a factor of $2$ from some references. With the above normalization $0\leq T\leq 1$ for all unit-trace density matrices $\rho$ and $\sigma$. In this case we consider the $\sigma$ to be the density matrix with $|\alpha_1| = 1$, and $\rho$ to be $\rho_{rb}$. Next, we may use Gershgorin's circle theorem to bound the eigenvalues, $\lambda$, and therefore the trace distance, $T$. 
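As an aside, the Fannes--Audenaert inequality~\eqref{eq:fannes} and the trace distance defined above are easy to spot-check numerically; the sketch below (not part of the proof) does this for random $d=4$ density matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_density(d):
    """A random full-rank d-dimensional density matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

def trace_distance(rho, sigma):
    """Half the sum of the absolute values of the eigenvalues of rho - sigma."""
    lam = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(lam)))

d = 4
for _ in range(100):
    rho, sigma = rand_density(d), rand_density(d)
    T = trace_distance(rho, sigma)
    bound = T * np.log(d - 1) - T * np.log(T) - (1 - T) * np.log(1 - T)
    assert abs(vn_entropy(rho) - vn_entropy(sigma)) <= bound + 1e-9
```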
Gershgorin's theorem tells us all the eigenvalues must lie in the union of discs in the complex plane centered on the diagonal entries with radii given by the sum of the absolute value of off-diagonal entries for each row. For instance, the first row gives a disc \begin{equation} D_1:\quad |\lambda -(|\alpha_1|^2-1)| \leq |\alpha_1|\,|\alpha_2|\, |\braket{\Lambda_2|\Lambda_1}| + |\alpha_1|\,|\alpha_3|\,|\braket{\Lambda_3|\Lambda_1}| + |\alpha_1|\,|\alpha_4|\,|\braket{\Lambda_4|\Lambda_1}|. \end{equation} Applying our inequalities on the components, along with the Hermiticity of $\rho-\sigma$ (and therefore the reality of its spectrum), we can find an interval that must include any eigenvalues in the above disc: \begin{equation} I_1= (-3\delta-\delta^2, 3\delta). \end{equation} One finds that the remaining three rows can be encompassed by the interval \begin{equation} I_2 = (-\delta-2\delta^2, \delta+3\delta^2), \end{equation} and so all eigenvalues must satisfy \begin{equation} |\lambda| < 3\delta+\delta^2. \end{equation} Thus, we may conclude that the trace distance must satisfy \begin{equation} T < 2(3 \delta + \delta^2), \end{equation} and therefore \begin{equation}\label{eq:x1} S(r\cup b) \leq -x_1 \log\left(\tfrac{1}{3}x_1\right) - (1-x_1)\log(1-x_1)\qquad x_1 =\min\big(\tfrac{3}{4},\, 2(3 \delta + \delta^2)\big). \end{equation} The $3/4$ comes from finding the critical point of the right-hand side of~\eqref{eq:fannes} as a function of $T$. The bound on $S(b)$ can be derived in an analogous fashion. The state $\ket{\psi_{i+1}}$ may be written out as \begin{equation} \begin{gathered} \ket{\psi_{i+1}} = \ket{\hat{0}}\ket{\chi_0}+\ket{\hat{1}}\ket{\chi_1}\\ \ket{\chi_0} = \frac{\alpha_1}{\sqrt{2}}\ket{0}\ket{\Lambda_1} +\frac{\alpha_2}{\sqrt{2}}\ket{0}\ket{\Lambda_2} +\alpha_3\ket{1}\ket{\Lambda_3}\\ \ket{\chi_1} = \frac{\alpha_1}{\sqrt{2}}\ket{1}\ket{\Lambda_1} -\frac{\alpha_2}{\sqrt{2}}\ket{1}\ket{\Lambda_2} +\alpha_4\ket{0}\ket{\Lambda_4}.
\end{gathered} \end{equation} The reduced density matrix can be written as \begin{equation} \rho_b = \begin{pmatrix} \braket{\chi_0|\chi_0} & \braket{\chi_1|\chi_0}\\ \braket{\chi_0|\chi_1} & \braket{\chi_1|\chi_1} \end{pmatrix}. \end{equation} As before we can place bounds on the above components \begin{equation} \begin{aligned} \left|\braket{\chi_0|\chi_0}-\frac{1}{2}\right|&<\delta + \frac{\delta^2}{2}\\ \left|\braket{\chi_1|\chi_1}-\frac{1}{2}\right|&<\delta + \frac{\delta^2}{2}\\ |\braket{\chi_1|\chi_0}|&<\sqrt{2}(\delta + \delta^2). \end{aligned} \end{equation} One can once again use the Fannes--Audenaert inequality along with Gershgorin's theorem to bound $S(b)$, where $\sigma$ is the $\alpha_1=1$ density matrix, $I/2$. One finds the trace distance satisfies \begin{equation} T < (1+\sqrt{2})\delta + (\tfrac{1}{2}+\sqrt{2})\delta^2, \end{equation} and this gives \begin{equation}\label{eq:x2} S(b) \geq\log 2 + x_2\log x_2 + (1-x_2)\log(1-x_2)\qquad x_2 = \min\big(\tfrac{1}{2},\,(1+\sqrt{2})\delta + (\tfrac{1}{2}+\sqrt{2})\delta^2\big). \end{equation} Finally, this allows us to write \begin{equation}\label{eq:k} k_\varepsilon = -x_1\log (\tfrac{1}{3}x_1) - x_2\log x_2 -(1-x_1)\log(1-x_1)-(1-x_2)\log(1-x_2), \end{equation} where we recall that $x_1$ is defined in Equation~\eqref{eq:x1}, $x_2$ in Equation~\eqref{eq:x2}, and $\delta$ in Equation~\eqref{eq:delta-def}. For asymptotically small $\varepsilon$, $k_\varepsilon\sim -(7+\sqrt{2})\varepsilon\log\varepsilon$; and in fact for small but finite $\varepsilon$, $k_\varepsilon< -9\varepsilon\log\varepsilon$. Numerically, one finds that $k_\varepsilon$ first surpasses $\log 2$, thus allowing the entanglement entropy to decrease, at $\varepsilon \approx 0.02$. Furthermore, one finds $k_\varepsilon$ reaches $2\log 2$, thus allowing the maximal marginal decrease of entanglement entropy, at $\varepsilon \approx 0.05$.
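These numerical statements are straightforward to reproduce by evaluating $k_\varepsilon$ directly. A small sketch:

```python
import numpy as np

def k_eps(eps):
    """Evaluate k_eps from Equation (eq:k), with delta, x1, x2 as in the text."""
    delta = eps * np.sqrt(1 - eps**2 / 4)
    x1 = min(0.75, 2 * (3 * delta + delta**2))
    x2 = min(0.5, (1 + np.sqrt(2)) * delta + (0.5 + np.sqrt(2)) * delta**2)
    return (-x1 * np.log(x1 / 3) - (1 - x1) * np.log(1 - x1)
            - x2 * np.log(x2) - (1 - x2) * np.log(1 - x2))

log2 = np.log(2)
# k_eps first allows the entanglement entropy to decrease near eps ~ 0.02 ...
assert k_eps(0.019) < log2 < k_eps(0.020)
# ... and allows the maximal marginal decrease (2 log 2) near eps ~ 0.05
assert k_eps(0.045) < 2 * log2 < k_eps(0.060)
# for small but finite eps, k_eps < -9 eps log eps
for eps in np.linspace(0.005, 0.05, 10):
    assert k_eps(eps) < -9 * eps * np.log(eps)
```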
For larger $\varepsilon$, the inequality with $k_\varepsilon$ given in~\eqref{eq:k} places no restriction on the marginal change in entanglement. Since we are not especially interested in obtaining the tightest possible bound, or even in the precise numerical values above, we may as well write the bound in a slightly less unwieldy form, \begin{equation} \Delta S_i \geq \log 2 + 9\varepsilon\log\varepsilon\qquad \varepsilon\ll 1. \end{equation} This establishes the claim. Let us note that the above bound's asymptotic behavior is weaker than the inequality derived in~\cite{Mathur:2009hf}\footnote{An equivalently strong bound here would be $k_\varepsilon = 2\delta$.} as a consequence of using more general arguments to include arbitrary perturbations. On the other hand, the result presented here is stronger in the sense that \cite{Mathur:2009hf} finds only a leading-order result valid to order $O(\epsilon^2)$, whereas the bound~\eqref{eq:my-bound} with $k_\varepsilon$ given in~\eqref{eq:k} is valid for finite $\varepsilon\in(0,1)$. The above bounds could possibly be strengthened with more work;\footnote{For example, one might make progress by direct computation of $S(r\cup b)$ and $S(b)$ as was performed in~\cite{Mathur:2009hf}, but this would involve solving an eigenvalue problem for a four-dimensional density matrix with arbitrary coefficients.} however, that is irrelevant to the basic claim that small corrections to the low-energy pair creation process cannot restore unitarity. \section{Requirements for Unitarity}\label{sec:unitarity} While it is interesting to think about the different kinds of evolution that one could have, perhaps the most interesting question to ask is what kinds of models are unitary or, equivalently, what sorts of corrections to the Hawking evolution can restore unitarity. Above, we saw that small corrections to the evolution \emph{cannot} restore unitarity; this gives a necessary condition: the corrections must be \emph{large}.
It would also be nice to have some sufficient conditions, since it is clear that not every large correction one could consider leads to unitary evolution. In order for the evolution to be pure, we need the final state (including the auxiliary qubits) to be a direct product of the form \begin{equation} \ket{\psi_n} = \ket{\hat{\phi}}\otimes\ket{\chi}, \end{equation} where $\ket{\hat{\phi}}$ is a state in the $2n$-qubit auxiliary space and $\ket{\chi}$ is the state of the physical radiation qubits. Our first observation is that $\ket{\hat{\phi}}$ should be independent of the initial state. Suppose that this were not true: \begin{equation}\begin{aligned} \ket{\phi_0^{(1)}} &\mapsto \ket{\hat{\phi}^{(1)}}\otimes\ket{\chi^{(1)}}\\ \ket{\phi_0^{(2)}} &\mapsto \ket{\hat{\phi}^{(2)}}\otimes\ket{\chi^{(2)}} \end{aligned}, \end{equation} which seems fine until one considers an initial state which is a superposition of the above two states; the final state is then mixed when one traces out the hatted qubits. (This argument assumes that the $\ket{\chi}$s are linearly independent so that the evolution is invertible.) This is basically a variant of the no-cloning theorem. The next observation is that we need the final radiation state $\ket{\chi}$ to be a unitary transformation of the initial state $\ket{\psi_0}$. Let $F$ be the total map from initial state to the final state, then $F$ is a linear, isometric (norm-preserving) mapping from $n$ hatted qubits to $2n$ hatted plus $n$ unhatted qubits. From the above, unitarity demands that \begin{equation}\label{eq:Funit} F_\text{unitary}: \ket{\hat{\psi}}\mapsto \ket{\hat{\phi}}\otimes \ket{\chi}\qquad \ket{\chi} = U\ket{\hat{\psi}}, \end{equation} where $U$ in the above is a unitary transformation from the initial $n$ hatted qubits to the final $n$ unhatted qubits, and $\ket{\hat{\phi}}$ is fixed. The total map $F$ is just the product of all the $C_i$s and $\hat{U}_i\otimes U_i$s.
All of the hatted qubits have to be zeroed or bleached, and the information stored in the initial matter transferred to the radiation. Let us think about how we can zero or bleach the hatted qubits. We have $n$ steps to project $2n$ qubits to a unique state with the $\hat{P}_i$s; the unitary $\hat{U}$s clearly cannot zero qubits. At each step, we can zero \emph{at most two qubits}. If $C$ bleaches some subspace to a state $\ket{\hat{\alpha}}$, then it may be written as \begin{equation} C = \ket{\hat{\alpha}}\otimes O, \end{equation} for an unspecified operator $O$. If $\ket{\hat{\alpha}}$ is a $p$-qubit state, then $O$ maps $n+i$ qubits to $n+i+2-p$ qubits and must satisfy \begin{equation} O^\dg O = I; \end{equation} this is only possible if $n+i+2-p\geq n+i$, immediately implying $p\leq 2$. Our key observation is that the desire to zero the hatted qubits is in tension with the completeness relation~\eqref{eq:completeness}. Since we only have four $\hat{P}$s, at any given step the \emph{best} we can do is project out a four-dimensional subspace, or two qubits. It is this tension that connects the need to zero qubits with the requirement to have large corrections to the Hawking model. This might help elucidate the results in~\cite{Mathur:2009hf}. Note that this is very much in agreement with the picture presented in Figure~\ref{fig:H}, wherein at each stage there are two new auxiliary qubits. For the state to be pure, these auxiliary qubits must be zeroed. Let us note that there are two different ways to zero two qubits at each step, although the distinction is not actually that sharp when one considers the full $C_i$s. In the burning paper model of Section~\ref{sec:paper} we use only three $\hat{P}$s, which zero one qubit. The three $\hat{P}$s were chosen to ensure that the newly created $\hat{q}$ is also zeroed. The second way is illustrated in Section~\ref{sec:G2}, in which all four $\hat{P}$s are used to zero two old $\hat{q}$s.
One can dress these with various unitary transformations; however, these are the only two qualitative kinds of models that lead to a pure radiation final state. Remember that which $\hat{P}$s get used tells us which pair state is created at the horizon. It is impossible to preferentially use only $\hat{P}_1$ and simultaneously have unitary evolution. As we saw in Section~\ref{sec:G1-broken}, it is possible for the evolution to be pure, but not invertible. In the model in Equation~\eqref{eq:G1-broken}, qubits that were zeroed in previous steps mixed with nonzeroed qubits. When we zero the qubits, we are then thinking of them as auxiliary degrees of freedom that should be erased in the operator-sum description~\eqref{eq:op-sum-rep}. Thus, it does not make physical sense to allow mixing with the auxiliary degrees of freedom if one wants unitary evolution. The requirements outlined above for purity and invertibility should ensure unitary evolution. \section{A One-Parameter Interpolating Model}\label{sec:one-par} We can interpolate between the Hawking model in Section~\ref{sec:hawking} and the unitary model in Section~\ref{sec:G2} via \begin{equation}\begin{aligned}\label{eq:theta-model} \hat{P}_1 &= \cos\theta\,\hat{I} + (1-\cos\theta)\ket{\hat{0}_{2i+1}\hat{0}_{2i+2}}\bra{\hat{0}_{2i+1}\hat{0}_{2i+2}}\\ \hat{P}_2 &= \sin\theta\ket{\hat{0}\hat{0}}\bra{\hat{1}\hat{1}}\\ \hat{P}_3 &= \sin\theta\ket{\hat{0}\hat{0}}\bra{\hat{0}\hat{1}}\\ \hat{P}_4 &= \sin\theta\ket{\hat{0}\hat{0}}\bra{\hat{1}\hat{0}} \end{aligned}\qquad \hat{U}=\hat{I},\end{equation} with $\theta=0$ giving Hawking's evolution and $\theta=\frac{\pi}{2}$ giving the unitary evolution in Equation~\eqref{eq:G2}. (Once again we suppressed subscripts on the qubits after the first line.)
We may write \begin{equation} (C - C^H)^\dg(C - C^H) = 2 \hat{I} - \hat{P}_1-\hat{P}_1^\dg = 2(1-\cos\theta)\big(\hat{I}-\ket{\hat{0}\hat{0}}\bra{\hat{0}\hat{0}}\big), \end{equation} so that one finds \begin{equation} \|C - C^H\| = 2|\sin\tfrac{\theta}{2}|. \end{equation} We clearly see that this is in accord with Mathur's argument and its generalization in Section~\ref{sec:bound}. One of the main results of this paper is the above model, which continuously connects the Hawking model to a unitary model, clearly illustrating that they are far apart in model space. Previous efforts to illuminate Mathur's bound~\cite{Mathur:2011wg, Mathur:2010kx} considered different types of small corrections and showed that they did not significantly affect the entropy of the final state; however, they did not consider corrections that, when made \emph{large}, would give a unitary ``burning paper''-type model. The above model fills this gap. In Figure~\ref{fig:theta}, we plot the second R\'{e}nyi entanglement entropies of the radiation as a function of $\theta$ for three different initial states. Recall that the second R\'{e}nyi entropy is defined as \begin{equation} S_2(\rho_\text{red}) = - \log\tr(\rho^2_{\text{red}}), \end{equation} and is a nonnegative measure of entanglement that vanishes if and only if $\rho_{\text{red}}$ is pure. For computational purposes, however, it is a bit more convenient than the traditional von Neumann entropy. It should also be noted that $S_2$ gives a lower bound for the von Neumann entropy. Moreover, the von Neumann entropy behaves qualitatively similarly.
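Both the completeness relation and the norm formula for this family can be verified numerically at the level of the $4\times 4$ matrices $\hat{P}_i$, using the relation $(C-C^H)^\dg(C-C^H) = 2\hat{I}-\hat{P}_1-\hat{P}_1^\dg$ quoted above (which assumes orthonormal pair states). A small sketch:

```python
import numpy as np

ket = np.eye(4)   # basis |00>, |01>, |10>, |11> on the two targeted qubits

def P_ops(theta):
    """The four P-hat operators of the one-parameter model, as 4x4 matrices."""
    P1 = np.cos(theta) * np.eye(4) + (1 - np.cos(theta)) * np.outer(ket[0], ket[0])
    P2 = np.sin(theta) * np.outer(ket[0], ket[3])
    P3 = np.sin(theta) * np.outer(ket[0], ket[1])
    P4 = np.sin(theta) * np.outer(ket[0], ket[2])
    return P1, P2, P3, P4

for theta in np.linspace(0, np.pi / 2, 25):
    P1, P2, P3, P4 = P_ops(theta)
    # completeness: sum_i P_i^dagger P_i = I for every theta
    total = sum(P.conj().T @ P for P in (P1, P2, P3, P4))
    assert np.allclose(total, np.eye(4))
    # ||C - C^H||^2 is the largest eigenvalue of 2I - P1 - P1^dagger
    M = 2 * np.eye(4) - P1 - P1.conj().T
    norm = np.sqrt(max(np.linalg.eigvalsh(M).max(), 0.0))
    assert np.isclose(norm, 2 * abs(np.sin(theta / 2)))
```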
\begin{figure}[htb] \subfloat[][$\ket{\hat{0}\hat{0}\hat{0}\hat{0}\hat{0}}$]{ \includegraphics[width=5cm]{S200000} } \subfloat[][$\ket{\hat{0}\hat{0}\hat{0}\hat{1}\hat{1}}$]{ \includegraphics[width=5cm]{S200011} } \subfloat[][$\ket{\hat{1}\hat{1}\hat{1}\hat{1}\hat{1}}$]{ \includegraphics[width=5cm]{S211111} } \caption{Here we present the second R\'{e}nyi entropies of the unhatted radiation qubits as a function of $\theta$ for the model in Equation~\eqref{eq:theta-model} with three different initial states. Intermediate steps are dashed curves and the final step is solid; the steps are shown in the color order (red, yellow, green, blue, purple, black). The point $\theta=0$ corresponds to the canonical Hawking evolution, while $\theta=\frac{\pi}{2}$ corresponds to the unitary model in Equation~\eqref{eq:G2}.}\label{fig:theta} \end{figure} In Figure~\ref{fig:theta}, one sees that at $\theta=0$ (Hawking model) the entropy rises by one at each step, except for the final step where it drops down by one. This is an artifact of the way we end the evolution, since the model breaks down on the penultimate step. We chose to just emit the last qubit freely, which makes sense on physical grounds since the black hole should be quite small by that point; one can easily imagine large corrections in the final stage(s) of black hole evaporation. At $\theta=\frac{\pi}{2}$, we have the unitary model, and the entanglement entropy of the radiation has the expected rise and fall. For $\theta$ close to $\theta=0$, however, the entanglement entropy of the radiation never fails to increase (excluding the final step). Of course, a $5$-qubit initial state is not realistic for the macroscopically large black holes we are thinking about; however, it suffices to demonstrate the qualitative behavior, which should not change as $n$ is increased. The author was limited by the computational power required, which grows quite rapidly with $n$.
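For concreteness, here is a minimal sketch of how such R\'{e}nyi entropies can be computed from a state vector (the helper names are our own); it also checks the bound $S_2\leq S_{\text{vN}}$ noted above:

```python
import numpy as np

rng = np.random.default_rng(2)

def reduced(psi, k):
    """Density matrix of the last k qubits of a pure state vector psi."""
    m = psi.reshape(-1, 2 ** k)
    return m.T @ m.conj()

def renyi2(rho):
    """Second Renyi entropy S_2 = -log tr(rho^2), in nats."""
    return float(-np.log(np.trace(rho @ rho).real))

def von_neumann(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

psi = rng.normal(size=32) + 1j * rng.normal(size=32)  # random 5-qubit pure state
psi /= np.linalg.norm(psi)

rho = reduced(psi, 2)
assert 0.0 <= renyi2(rho) <= von_neumann(rho) + 1e-9  # S_2 lower-bounds S_vN
# S_2 vanishes if and only if the reduced state is pure
assert np.isclose(renyi2(np.outer([1.0, 0.0], [1.0, 0.0])), 0.0)
```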
\section{Conclusion}\label{sec:conc} We have presented a very general framework that provides a natural, unifying language to compare information-theoretic models of black hole evaporation. The framework involves describing the dynamics of pure state vectors $\ket{\psi}$ in an ever-enlarging Hilbert space. The only constraint on the models is the completeness relation~\eqref{eq:completeness}. In Section~\ref{sec:quops}, we explain how to interpret this model in terms of potentially mixed evolution in a fixed dimensional Hilbert space: one must trace out some auxiliary degrees of freedom to arrive at a dynamical equation as in Equation~\eqref{eq:op-sum-rep}. Part of a full model, then, involves specifying the auxiliary degrees of freedom. While at intermediate steps this can be ambiguous, by the end of the evolution we are left only with radiation and thus any nonradiation degrees of freedom are by default auxiliary. This excludes remnants or other scenarios where one can identify physical degrees of freedom that the Hawking radiation is entangled with at the end of the evaporation. In Section~\ref{sec:models}, we show how to write a number of interesting models in our unifying notation. Many of them had been introduced and studied previously in the literature. These models illustrate some of the key obstructions and requirements to have unitary evolution. In Section~\ref{sec:unitarity}, we discuss the requirements for unitarity, and summarize a set of sufficient conditions. A key backdrop to our discussion, and indeed the whole paper, is a recent theorem~\cite{Mathur:2009hf} showing that small corrections to the Hawking model \emph{cannot} give unitary evolution. A corollary is that \emph{large} corrections are necessary to have unitary evolution. As we show, this is, however, not a sufficient condition; unsurprisingly, there are many large corrections that fail to give unitary evolution.
One interesting observation is how the unitary requirement that internal qubits be zeroed becomes connected to corrections to the pair creation via the completeness relation~\eqref{eq:completeness}. This may help elucidate the results in~\cite{Mathur:2009hf}. Finally, in Section~\ref{sec:one-par}, we give a one-parameter family of models that continuously interpolates between the Hawking model and a unitary model. In terms of this parameter, one can clearly see that the unitary model is far from the Hawking model, thus illustrating the theorem in~\cite{Mathur:2009hf}. One of the key points emphasized in~\cite{Mathur:2009hf, Mathur:2011uj} is that the nice slicing of the Schwarzschild solution implies at best small corrections to the Hawking model in Equation~\eqref{eq:hawking}, and therefore a loss of unitary evolution. To restore unitarity, large corrections are required of the form discussed here; however, one must show why these corrections arise in the black hole \emph{and not} in all of our earth-based experiments and observations. It seems quite difficult to do this, since in the nice slice construction no geometric quantity is large. The only quantity that seems to be large is the number of degrees of freedom or number of particles required to form the black hole, but this is not a basic geometric quantity. Let us further note that for our discussion there is a factorization of the internal dynamics and the pair creation dynamics. The internal black hole dynamics can be as nonlocal, and scramble as rapidly, as one wants, but whether the evaporation process is unitary or not is (modulo a few caveats mentioned in Sections~\ref{sec:models} and~\ref{sec:unitarity}) entirely determined by the pair creation process. Thus, the discussion in~\cite{Hayden:2007cs, Sekino:2008he, Susskind:2011ap, Lashkari:2011yi} is not directly relevant to our concerns here, although it is important for better understanding black holes.
The pair creation process is localized near the horizon, where for a large black hole the geometry suggests one can trust the semiclassical approximation even if one might doubt its validity deep within the black hole. On the pair creation time scale, however, it is precisely this physics that needs an order unity correction~\cite{Mathur:2009hf}. One other issue that one may wish to raise in our discussion is the issue of conservation laws~\cite{Czech:2011wy, Braunstein:2011gz}. While we hope we have sufficiently addressed the issue of an expanding Hilbert space as raised in~\cite{Czech:2011wy}, one may still be concerned that we haven't discussed conservation of energy (or angular momentum, electric charge, etc.). These issues are rebutted in~\cite{Mathur:2011uj}. Let us note, however, that by not discussing the original spacetime physics and the resulting (at most small) corrections, the fundamental issue has been elided. While the results presented in~\cite{Czech:2011wy, Braunstein:2011gz} are interesting unto themselves, they do not provide a plausible physical mechanism to modify the pair creation process to get the dynamics they suggest. If, as suggested by string theory or other considerations, we think black hole evaporation is a unitary process, then the pair creation process must not strictly adhere to the causal structure of the Schwarzschild nice slicing. There are two obvious frameworks (ignoring the possibility of remnants) to discuss this deviation: fuzzballs or nonlocality. The fuzzball proposal (see~\cite{Skenderis:2008qn, Bena:2007kg, Balasubramanian:2008da, Mathur:2005ai, Mathur:2005zp} for reviews) suggests that the black hole metric is only an effective geometry that approximates $e^{S_\text{BH}}$ microstates. The microstates differ from each other on the horizon scale; thus, large corrections to the Hawking evolution are anticipated, and information is transmitted via local excitations.
How the fuzzball proposal relates to these qubit models is discussed in~\cite{Mathur:2010kx, Mathur:2011uj, Mathur:2009hf, Mathur:2011wg}. The main point is that, since the geometry at the would-be horizon depends on the internal state, there is a physical mechanism to get large corrections to the pair creation process. Since the fuzzball's interior geometry (and the whole causal structure) is quite different from the original black hole solution in which the nice slices were constructed, one should probably not interpret them using the original notion of locality for the internal qubits discussed in Section~\ref{sec:physics}. Moreover, let us note that by adding nontrivial dynamics of the internal fuzzball structure via a $\hat{U}$ to the model in~\eqref{eq:G2}, one can effectively change which qubits get emitted. This is the point referred to at the end of Section~\ref{sec:G2}. There is one explicit family of (non-extremal) fuzzball microstates~\cite{Jejjala:2005yu} for which one can understand the bulk Hawking emission process. As first suggested in~\cite{Chowdhury:2007jx}, the geometry's ergoregion instability~\cite{Cardoso:2007ws} can be interpreted as a Bose-enhanced version of the Hawking instability for the corresponding black hole. This explanation was justified by comparing gravitational emission to the dual CFT emission process for both ergoregion emission from the fuzzballs and Hawking radiation from the corresponding black hole~\cite{Chowdhury:2007jx, Chowdhury:2008bd, Chowdhury:2008uj, Avery:2009tu, Avery:2009xr}. In~\cite{Chowdhury:2007jx}, a toy model was presented for the ergoregion emission, based on the CFT description. The toy model consists of a set of two-level atoms that can spontaneously emit or absorb photons. In the geometric description, the de-exciting atoms correspond to accumulating particles in the ergoregion that decrease the geometry's mass and angular momentum.
Loosely, in our language, the toy model is in the class discussed in Section~\ref{sec:paper}. To properly capture the Bose enhancement, however, is a bit trickier. In~\cite{Giddings:2011ks}, several unitary models of evolution (some of which were discussed here) were presented, motivated by proposed nonlocal physics on the Schwarzschild background. As mentioned in~\cite{Giddings:2009ae}, it is unclear what sets the scale of the proposed nonlocality, so as to ensure it operates in the black hole background but not in everyday low-energy experiments. While the sorts of models discussed here remain too crude to distinguish between nonlocal physics and fuzzball microstates, their utility lies in their generality, which serves to sharpen our information-theoretic understanding of black hole evaporation. One obvious task that remains is to translate Mathur's bound and its generalization in Section~\ref{sec:bound} into a sharper, quantitative statement about the breakdown of the semiclassical limit of quantum gravity. Since the entire discussion has been in a Hamiltonian framework, it would be especially nice to have analogous bounds on the path integral. \begin{acknowledgments} The author is grateful for comments and correspondence related to this work from C.~Asplund, S.~Ghosh, S.~Giddings, and S.~Mathur. The material here especially benefitted from discussions with S.~Kalyana Rama. \end{acknowledgments}
0902.3885
\section{Introduction} \setlength{\baselineskip}{7mm} Hawking radiation from black holes is one of the most important effects in black hole thermodynamics and in quantum effects of gravity \cite{Hawking:1974rv, Hawking:1974sw}. There are several derivations of Hawking radiation, and recently an interesting method was proposed by Robinson and Wilczek \cite{Robinson:2005pd}. They considered the effective chiral theory near the horizon and showed that the gravitational anomaly \cite{AlvarezGaume:1983ig} in this effective theory causes an energy flux at radial infinity which can be identified as Hawking radiation. This effective chiral theory would be related to the effective theory on the membrane in the membrane paradigm \cite{Parikh:1997ma,Parikh:1999mf,Iqbal:2008by,Thorne:1986iy}, and thus this derivation suggests an association between the Hawking effect and the membrane paradigm. This derivation would also connect the Hawking effect with some phenomena in condensed matter physics. This new interpretation of the Hawking effect was modified by Iso, Umetsu and Wilczek \cite{Iso:2006wa,Iso:2006ut}. Furthermore, this method was simplified by using the covariant currents \cite{Banerjee:2007qs}, and the spectra of the thermal distribution functions were also reproduced by considering the higher-spin currents \cite{Iso:2007kt,Iso:2007hd,Iso:2007nc,Iso:2007nf,Bonora:2008nk,Bonora:2008he}. Further developments and the generalization to various black holes were also shown by many authors \cite{Murata:2006pt,Vagenas:2006qb,Setare:2006hq,Xu:2006tq,Iso:2006xj, Jiang:2007wj, Das:2007ru,Jiang:2007pe, Banerjee:2007uc,Huang:2007ed,Gangopadhyay:2007hr, Iso:2008sq, Umetsu:2008cm, Shirasaka:2008yg, Banerjee:2008az,Akhmedova:2008au, Banerjee:2008wq,Morita:2008qn,Papantonopoulos:2008wp,Wei:2009kg}. However, there is one problem with this derivation.
Hirata and Shirasaka \cite{Shirasaka:2008yg} found a constant of integration which had not been considered in the calculation of the gravitational anomaly method. We will show that the flux is not fixed owing to this constant. The purpose of this study is to modify the gravitational anomaly method such that it reproduces the correct fluxes. We will show that in the case of the U(1) current we can derive the flux by considering the chiral current and in the case of the energy-momentum tensor we can derive it by considering the trace anomaly. In these derivations, we will employ the calculation of the fluxes based on the two-dimensional conformal field theory technique \cite{Christensen:1977jc, Iso:2006ut,Iso:2007hd}. In section \ref{sec ambiguity}, we show the ambiguity in the gravitational anomaly method and we argue for the modifications in section \ref{Sec modification}. In section \ref{Sec energy}, we apply this modification to the derivation of the energy flux. Section \ref{Conclusion} contains conclusions and discussions. In Appendix \ref{App-RN}, we summarize the basics of Reissner-Nordstr\"om black holes. \section{Ambiguity in Gravitational Anomaly Method} \label{sec ambiguity} We show that the gravitational anomaly method has an ambiguity and discuss the problem with it. We investigate the derivation of the flux of the U(1) current from a $4$-dimensional Reissner-Nordstr\"om black hole as an example. It will be possible to generalize this argument to other currents and black holes. First we attempt to derive the flux through the gravitational anomaly method \cite{Iso:2006wa,Banerjee:2007qs}. We consider a matter field in the Reissner-Nordstr\"om black hole background. (See Appendix \ref{App-RN} for the Reissner-Nordstr\"om solution.) It is known that the matter field near the horizon can be effectively described as massless fields in two dimensions $(t,r_*)$. 
Then the covariant U(1) current $J^\mu$ satisfies the two-dimensional conservation law \cite{Iso:2007nc} \begin{align} \nabla_\mu J^{\mu} & = -\frac{(c_R-c_L)}{2}\frac{ e^2}{2\pi} \epsilon^{\mu\nu} F_{\mu\nu} \label{U(1) conservation}. \end{align} Here $e$ is the electric charge of the matter and $F_{\mu\nu}$ is the background field strength. $\epsilon^{\mu\nu}$ is the covariant antisymmetric tensor. $c_L$ and $c_R$ are the central charges of the left and right modes, which correspond to the in-going and out-going modes in the black hole background, and $c_L=c_R=1$ ($c_L=c_R=1/2$) if the matter is a real boson (fermion). Note that the central charge of a charged field is twice that of a real field, since it is a complex field. Thus the right-hand side of this equation would vanish in all these cases. In the gravitational anomaly method \cite{Robinson:2005pd, Iso:2006wa}, the in-going modes, which are classically irrelevant to physics outside the horizon, are eliminated near the horizon and we divide the outside of the horizon into two: the near horizon region $(r_+<r<r_++\epsilon)$ and the out region $(r_++\epsilon<r<\infty)$\footnote{Since the two-dimensional description is effective near the horizon only, we cannot take $r$ to be large. However, we use this description even for $r \gg r_+$. It is known that the fluxes which are derived through this approximation are equivalent to the $4$-dimensional fluxes without the gray body factor. }. Here $r_+$ is the radius of the outer horizon and $\epsilon$ is an appropriately small parameter. In the near horizon region, the effective theory is chiral since the in-going modes do not contribute. This means that the current in this region satisfies the conservation equation (\ref{U(1) conservation}) with $c_L=0$ and thus the current is anomalous. On the other hand, in the out region, the effective theory is still non-chiral ($c_L=c_R$).
Then the U(1) current can be described as \begin{align} J^\mu=J^\mu_{(O)}\Theta_+(r)+ J_{(H)}^{\mu} H(r), \end{align} where we have employed the step functions $\Theta_+(r)=\Theta(r-(r_++\epsilon))$ and $H(r)=1-\Theta_+(r)$. $J^\mu_{(O)}$ denotes the current in the out region and $J^\mu_{(H)}$ denotes the current in the near horizon region. These currents satisfy \begin{align} \nabla_\mu J_{(O)}^{\mu} & = 0, \label{conservation JO} \\ \nabla_\mu J_{(H)}^{\mu} & = -c_R\frac{ e^2}{4\pi} \epsilon^{\mu\nu} F_{\mu\nu}, \label{conservation JH} \end{align} respectively. Now we consider the total current $J^\mu_{(total)}$ including the contribution from the near horizon in-going modes. This current should satisfy the conservation equation (\ref{U(1) conservation}) with $c_L=c_R$ and it can be described as \begin{align} J_{(total)}^\mu=J^\mu+K^\mu H(r)+j_{(total)}^\mu. \label{total current} \end{align} Here $j_{(total)}^\mu$ is a possible additional current which satisfies $\nabla_\mu j_{(total)}^\mu=0$ and $K^\mu$ is the contribution of the in-going modes which satisfies \begin{align} \nabla_\mu K^{\mu} & = c_L\frac{ e^2}{4\pi} \epsilon^{\mu\nu} F_{\mu\nu} . \label{conservation K} \end{align} In addition, these currents should satisfy \begin{align} J_{(O)}^{\mu}=J_{(H)}^{\mu}+K^{\mu} \label{coincident} \end{align} at $r=r_++\epsilon$ such that $\nabla_\mu J^\mu_{(total)}=0$. Since the black hole background is static, the current does not depend on time. Then we can solve the equations (\ref{conservation JO}), (\ref{conservation JH}) and (\ref{conservation K}) by integrating them,\footnote{We evaluate these equations by using the Schwarzschild coordinates. These coordinates are not appropriate for the calculation of the Hawking effect and we should employ the tortoise coordinate. Nevertheless, we obtain the same result, and we use the Schwarzschild coordinates since the expressions are simpler. 
} \begin{align} J_{(O)}^{r} & = j_{(O)}^r,\\ J_{(H)}^{r} & = c_R\frac{ e^2}{2\pi} A_t (r)+ j_{(H)}^r, \label{solution J_H}\\ K^{r} & = -c_L\frac{ e^2}{2\pi} A_t (r)+ k^r, \end{align} where we have used $\epsilon^{rt}=-1$. Here $j_{(O)}^r,j_{(H)}^r$ and $k^r$ are integral constants. In particular, $j_{(O)}^r$ corresponds to the flux observed at infinity. The existence of the integral constant $k^r$ was pointed out by Hirata and Shirasaka in \cite{Shirasaka:2008yg}, but they took $k^r=0$ in their calculation. This constant causes the ambiguity, as we argue below. References~\cite{Iso:2006wa, Banerjee:2007qs} impose the following two conditions: \begin{align} J^r=0 \quad \text{at}~ r=r_+, \quad j_{(total)}^r=0. \label{old condition} \end{align} These conditions were supposed to correspond to the Unruh vacuum \cite{Iso:2006ut}, which we will discuss in the next section. Then the integral constants satisfy \begin{align} j_{(H)}^r&=-c_R\frac{ e^2}{2\pi} A_t(r_+),\\ j_{(O)}^r&= -c_R\frac{ e^2}{2\pi} A_t(r_+)+k^r, \end{align} where we have considered the equation (\ref{coincident}). Thus we obtain $J^r=-c_R e^2 A_t(r_+)/2\pi+k^r$ at infinity and the flux is not fixed. The correct flux, which is expected in black hole thermodynamics, is $J^r=-c_R e^2 A_t(r_+)/2\pi$ and it is obvious that $k^r$ causes the ambiguity. Of course, we can remove this ambiguity by imposing the additional condition $k^r=0$, as in \cite{Shirasaka:2008yg}; however, the physical meaning of this condition is not clear. A similar ambiguity appears in the derivation of the energy flux. \section{Modification of Gravitational Anomaly Method} \label{Sec modification} We discuss the modifications of the gravitational anomaly method by considering the chiral current $J^{5\mu}$.
We can solve the anomalous conservation equation of $J^{5\mu}$ in the $(t,r)$ coordinates as we calculated in the previous section, but the light-cone coordinates $(u,v)$ (\ref{light-cone}) are more convenient and we will use them. Before considering the gravitational anomaly method, we review the derivation of the flux based on the two-dimensional conformal field theory technique \cite{Christensen:1977jc, Iso:2006ut,Iso:2007hd} since this derivation illuminates our problem. The two-dimensional chiral current is defined by $J^{5\mu}=\epsilon^{\mu\nu}J_{\nu}$, where the covariant antisymmetric tensor is $\epsilon^{uv}=2e^{-\varphi}$ in the $(u,v)$ coordinates and $\varphi$ appears in the background metric (\ref{light-cone}). $J^{5\mu}$ satisfies the anomalous conservation equation (the chiral anomaly) \cite{Iso:2007nc}, \begin{align} \nabla_\mu J^{5\mu} & = \frac{(c_L+c_R)}{2}\frac{ e^2}{2\pi} \epsilon^{\mu\nu} F_{\mu\nu}. \label{chiral conservation} \end{align} By taking the Lorentz gauge $\partial_u A_v+\partial_v A_u=0$ for the background gauge field, we can solve this equation and (\ref{U(1) conservation}) as \begin{align} J_{u}=j_u+c_R\frac{ e^2}{\pi}A_u, \quad J_{v}= j_v +c_L\frac{ e^2}{\pi}A_{v}. \label{uv current } \end{align} Here $j_u$ and $j_v$ are integral constants. Strictly speaking, $j_u$ and $j_v$ should be holomorphic functions with respect to $u$ and $v$ respectively. However, since the background is time independent, we can take them to be constants. Note that $J_{u}$ $(J_v)$ corresponds to the out-going (in-going) current. We can derive the fluxes by imposing the following boundary conditions: \begin{enumerate} \item Regularity condition: $J_u=0$ at the horizon. \item No in-going flux at infinity: $J_v=0$ at $r=\infty$. \end{enumerate} The first condition means that a free-falling observer does not observe a singular flux at the horizon. It is known that these conditions correspond to the Unruh vacuum.
(Note that the Boulware vacuum corresponds to the condition $J_u=J_v=0$ at $r=\infty$ and the Hartle-Hawking vacuum corresponds to $J_u=J_v=0$ at the horizon \cite{Birrell:1982ix}.) Then the integral constants are fixed as \begin{align} j_u =-c_R\frac{ e^2}{\pi}A_{u}(r_+), \quad j_v=0. \end{align} Thus we obtain the correct flux at the infinity, \begin{align} J^r(r\rightarrow \infty)&=J_u(r\rightarrow \infty)-J_v(r\rightarrow \infty)\nonumber \\ &=-c_R\frac{ e^2}{2\pi}A_{t}(r_+). \label{J^r flux} \end{align} This is the derivation of the flux associated with the U(1) current through the conformal field theory technique. Now we discuss the gravitational anomaly method by considering this derivation. As we argued in the previous section, we take $c_L=0$ in the near horizon region. It implies that the in-going current (\ref{uv current }) is modified as \begin{align} J_{v}&=J_{(O) v}\Theta_+(r)+ J_{(H) v} H(r),\\ J_{(total)v}&=J_v+ K_v H(r)+j_{(total)v}, \\ J_{(O) v}&=j_{(O) v}+c_L\frac{ e^2}{\pi}A_v, \quad J_{(H) v}=j_{(H) v}, \\ K_v& = k_v +c_L \frac{e^2}{\pi}A_v. \end{align} Here $j_{(total)v} ,j_{(O) v},j_{(H) v}$ and $ k_v$ are integral constants. $J_{(H) v}$ is the in-going current in the near horizon and $J_{(O) v}$ is in the out region. $K_v$ denotes the contribution of the in-going modes. Similarly the out-going current becomes \begin{align} J_{u}&=J_{(O) u}\Theta_+(r)+ J_{(H) u} H(r),\\ J_{(total)u}&=J_u+ K_u H(r)+j_{(total)u}, \\ J_{(O) u}&=j_{(O) u}+c_R\frac{ e^2}{\pi}A_u, \quad J_{(H) u}=j_{(H) u}+c_R \frac{e^2}{\pi}A_u, \\ K_u& = k_u . \end{align} Here $j_{(total)u} ,j_{(O) u},j_{(H) u}$ and $ k_u$ are integral constants. Then it is obvious that $J_{(H)u}$ and $J_{(H)v}$ satisfy the equation (\ref{U(1) conservation}) and (\ref{chiral conservation}) with $c_L=0$ in the near horizon region. 
By considering the conservation equations of the total currents, the integral constants satisfy \begin{align} j_{(O) v}= j_{(H) v}+k_v, \quad j_{(O) u}= j_{(H) u}+k_u, \end{align} as in (\ref{coincident}). The relations between the integral constants in the previous section and in this section are as follows: \begin{align} j_{(O) }^r&=j_{(O) u}-j_{(O) v},\quad j_{(H) }^r=j_{(H) u}-j_{(H) v}, \nonumber \\ j_{(total)}^r&=j_{(total)u}-j_{(total)v}, \quad k^r=k_u-k_v. \end{align} In order to derive the flux, we consider the boundary conditions for the currents. In \cite{Iso:2006wa,Iso:2006ut}, $k_\mu$ was not considered, so the other constants were assumed to satisfy $j_{(O)u}=j_{(H)u}$ and $j_{(O)v}=j_{(H)v}$. In this case, $j_{(O)u}$ and $j_{(O)v}$ are not distinguishable from $j_{(total)u}$ and $j_{(total)v}$ respectively, and they effectively took $j^r_{(O)}=j_{(O)u}$, $j^r_{(total)}=-j_{(total)v}$ and $j_{(total)u}=j_{(O)v}=j_{(H)v}=0$ in our notation. Then the conditions in (\ref{old condition}) correspond to the Unruh vacuum. However, we now consider $k_\mu$ and these conditions are not valid. We impose the following conditions for the currents instead of the condition (\ref{old condition}). First, we take \begin{align} j_{(total)u}=j_{(total)v}=0. \label{constraint total} \end{align} The meaning of these conditions is as follows. In the out region, $J_{(O)}^\mu$ is associated with the excitation of the matter field. The observer at infinity observes this excitation and thus the observable must be $J_{(O)}^\mu$ only. Thus we take these conditions and ignore $j_{(total) \mu}$ in our derivation. Second, we take \begin{align} k_u=0. \label{constraint in-going} \end{align} This condition means that $K_u$ does not contribute to the out-going flux since $K_u$ is the contribution from the in-going modes.
In addition to these conditions, we impose the boundary conditions corresponding to the Unruh vacuum: \begin{align} J_{u}=0 \quad \text{at}~ r=r_+, \quad J_{v}=0 \quad \text{at}~r=\infty . \label{boundary Unruh} \end{align} Then we obtain the flux at infinity, \begin{align} j_{(O) }^r=j_{(O) u}=j_{(H) u}=-c_R\frac{ e^2}{\pi}A_u(r_+). \end{align} This equation implies that the origin of the flux at infinity is $j_{(H) u}$ in the near horizon chiral theory. Thus the Hawking effect can be regarded as the contribution of the near horizon anomalies. Note that $k^r$, which causes the ambiguity of the flux in the previous section, has not been fixed. Nevertheless, we obtained the correct flux since we derived the in-going and out-going currents at infinity separately. Here we summarize the derivation of the modified gravitational anomaly method. \begin{enumerate} \item Divide the outside of the horizon into two regions and take $c_L=0$ in the near horizon region and $c_L=c_R$ in the out region. \item Solve the conservation equations (\ref{U(1) conservation}) and (\ref{chiral conservation}) in each region. \item Impose the conditions (\ref{constraint total}) and (\ref{constraint in-going}) on the integral constants. \item Impose the boundary conditions (\ref{boundary Unruh}) corresponding to the Unruh vacuum. \end{enumerate} Through this procedure, we can derive the flux from the anomalies in the near horizon region. In addition, we can easily show that if we impose the boundary conditions corresponding to the Boulware vacuum or the Hartle-Hawking vacuum instead of the Unruh vacuum, the correct flux can be derived through the same procedure\footnote{In the case of the Hartle-Hawking vacuum, the boundary condition of the in-going modes at the horizon is imposed on the total current $J_{(total) v}$ rather than $J_{v}$ in order to reproduce the correct flux. This implies that the effective chiral theory near the horizon is not essential in this case.}.
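As a numerical cross-check of this procedure (our own illustration, not from the paper), the following sketch imposes the Unruh-vacuum boundary conditions and reads off the U(1) flux at infinity. The gauge potential $A_t(r) = -Q/r$ (chosen so that $A_t \to 0$ at infinity) and the units are assumed conventions.

```python
# Impose J_u = 0 at the horizon and J_v = 0 at infinity, then evaluate
# J^r(r -> infinity) = J_u - J_v.
import math

def u1_flux(c_R, e, Q, r_plus):
    A_t = lambda r: -Q / r            # assumed background gauge potential
    A_u = lambda r: A_t(r) / 2.0      # light-cone component (A_{r_*} = 0)
    j_u = -c_R * e**2 / math.pi * A_u(r_plus)   # from J_u = 0 at the horizon
    j_v = 0.0                                   # no in-going flux at infinity
    return j_u - j_v                  # A_u vanishes at infinity

flux = u1_flux(c_R=1.0, e=1.0, Q=2.0, r_plus=4.0)
```

The result agrees with the closed form $J^r = -c_R e^2 A_t(r_+)/2\pi$ quoted in the text.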
\section{Derivation of Energy Flux through Modified Gravitational Anomaly Method} \label{Sec energy} In this section, we consider the derivation of the energy flux through the modified gravitational anomaly method. As in the derivation of the U(1) current, the anomalous conservation equation of the energy-momentum tensor is not sufficient to derive the energy flux at the infinity and we need to consider the trace anomaly equation also. These equations are given by, \begin{align} \nabla^\mu T_{\mu\nu}=& F_{\mu\nu} J^\mu - \frac{c_R-c_L}{96\pi} \epsilon_{\mu \nu} \nabla^\mu R, \\ {T^\mu}_{\mu}=&\frac{c_L+c_R}{48\pi}R, \end{align} where $R$ denotes the two-dimensional Ricci scalar \cite{Iso:2007nc}. We can solve these equations as, \begin{align} T_{uu} &= t_{uu} + 2 A_u j_{u} + \frac{c_R e^2}{\pi} A_u^2 + \frac{c_R}{24\pi} \left(\partial_u^2 \varphi - \frac{1}{2}(\partial_u \varphi)^2\right),\\ T_{vv} &= t_{vv} + 2 A_v j_{v} + \frac{c_L e^2}{\pi} A_v^2 + \frac{c_L}{24\pi} \left(\partial_v^2 \varphi - \frac{1}{2}(\partial_v \varphi)^2\right). \end{align} Here $t_{uu}$ and $t_{vv}$ are integral constants and $\varphi$ is the background gravity (\ref{light-cone}). Now we regard the near horizon theory as chiral and divide the outside of the horizon. Then we can obtain the currents. 
The in-going current becomes \begin{align} T_{vv}=&T_{(O)vv}\Theta_+(r)+ T_{(H) vv} H(r) , \\ T_{(total)vv}=&T_{vv}+K_{vv} H(r)+t_{(total)vv},\\ T_{(O)vv} =& t_{(O)vv} + 2 A_v j_{(O)v} + \frac{c_L e^2}{\pi} A_v^2 + \frac{c_L}{24\pi} \left(\partial_v^2 \varphi - \frac{1}{2}(\partial_v \varphi)^2\right),\\ T_{(H)vv} =& t_{(H)vv} + 2 A_v j_{(H)v} ,\\ K_{vv} =& k_{vv} + 2 A_v k_{v} + \frac{c_L e^2}{\pi} A_v^2 + \frac{c_L}{24\pi} \left(\partial_v^2 \varphi - \frac{1}{2}(\partial_v \varphi)^2\right), \end{align} and the out-going current becomes \begin{align} T_{uu}=&T_{(O)uu}\Theta_+(r)+ T_{(H) uu} H(r) , \\ T_{(total)uu}=&T_{uu}+K_{uu} H(r)+t_{(total)uu},\\ T_{(O)uu} =& t_{(O)uu} + 2 A_u j_{(O)u} + \frac{c_R e^2}{\pi} A_u^2 + \frac{c_R}{24\pi} \left(\partial_u^2 \varphi - \frac{1}{2}(\partial_u \varphi)^2\right),\\ T_{(H)uu} =& t_{(H)uu} + 2 A_u j_{(H)u} + \frac{c_R e^2}{\pi} A_u^2 + \frac{c_R}{24\pi} \left(\partial_u^2 \varphi - \frac{1}{2}(\partial_u \varphi)^2\right),\\ K_{uu} =& k_{uu} + 2 A_u k_{u} . \end{align} Here $t_{(total)vv}$, $t_{(total)uu}$, $t_{(O)vv}$, $t_{(O)uu}$, $t_{(H)vv}$, $t_{(H)uu}$, $k_{vv}$ and $k_{uu}$ are integral constants. These constants satisfy $t_{(O)vv}=t_{(H)vv}+k_{vv}$ and $t_{(O)uu}=t_{(H)uu}+k_{uu}$. By imposing the condition $t_{(total)uu}=t_{(total)vv}=0$ and $k_{uu}=0$ and the boundary conditions corresponding to the Unruh vacuum, we obtain \begin{align} t_{(O)uu}= t_{(H)uu}=&-2 A_u (r_+)j_{(H)u} - \frac{c_R e^2}{\pi} A_u^2(r_+) - \frac{c_R}{24\pi} \left(\partial_u^2 \varphi (r_+)- \frac{1}{2}(\partial_u \varphi(r_+))^2\right) \nonumber \\ =&\frac{c_R}{192\pi}(f'(r_+))^2+\frac{c_R e^2}{\pi}A_u^2(r_+),\\ t_{(O)vv}=&0. \end{align} Then the energy flux at the infinity is given by \begin{align} {T^r}_t(r\rightarrow \infty)=&T_{uu}(r\rightarrow \infty)-T_{vv}(r\rightarrow \infty) \nonumber \\ =&\frac{c_R}{192\pi}(f'(r_+))^2+\frac{c_R e^2}{\pi}A_u^2(r_+). \end{align} This result is coincident with the known result \cite{Iso:2006wa}. 
\section{Conclusions and Discussions} \label{Conclusion} In this Letter, we have discussed the problem of the ambiguity in the gravitational anomaly method. We have shown that, by considering the chiral current and the trace anomaly, the correct fluxes can be derived. Thus we can interpret the origin of the fluxes as the anomalies in the near horizon region. Although we can derive the flux by using the conformal field theory technique without employing the near horizon chiral theory, as we showed in section \ref{Sec modification}, the gravitational anomaly method is attractive since it would relate the Hawking effect to the membrane paradigm and condensed matter physics. Another derivation of the Hawking effect associated with the gravitational anomaly method was proposed by Banerjee et al.\ \cite{Banerjee:2008az,Banerjee:2008wq}. They omitted the division of the exterior of the horizon and applied the anomaly equation (\ref{conservation JH}) in the whole exterior region. If we impose the condition $J^r_{(H)}=0$ at the horizon, we can obtain the flux $j^r_{(H)}$, and they interpreted this flux as the Hawking radiation observed at infinity. If we accept this derivation, the ambiguity which we have discussed in this article does not exist. However, this derivation is not physically correct, since the theory in the region away from the horizon is not anomalous and we cannot use (\ref{conservation JH}) in this region. In addition, the expectation value of the current $J^r_{(H)}$ does not coincide with the correct current at finite $r$ because of the anomalous term $e^2 A_t(r)/2\pi$ in (\ref{solution J_H}). Thus we avoided using the derivation of \cite{Banerjee:2008az,Banerjee:2008wq} in this article. \paragraph{Acknowledgements } I would like to acknowledge useful discussions with T. Hirata, S. Iso, A. Shirasaka and H. Umetsu. I would also especially like to thank G.
Mandal for useful discussions and for several detailed comments on the manuscript.
0902.3834
\section{Introduction} \label{sec:intro} The main goal of experiments which perform ultrarelativistic heavy-ion collisions is to produce and study the properties of a deconfined plasma of quarks and gluons. This new state of matter, the quark-gluon plasma (QGP), is expected to be formed once the temperature of nuclear matter exceeds a critical temperature of $T_C \sim 200$ MeV. Such experiments have already been underway for nearly a decade at the Relativistic Heavy Ion Collider (RHIC) and higher-energy runs are planned at the Large Hadron Collider (LHC). Historically, in order to make phenomenological predictions for experimental observables, fluid hydrodynamics has been used to model the space-time evolution and non-equilibrium properties of the expanding matter. For the description of nuclear matter by fluid hydrodynamics to be valid, the microscopic interaction time scale must be much shorter than the macroscopic evolution time scale. However, the hot and dense matter created in these experiments is rather small in transverse extent and expands very rapidly, causing the range of validity of hydrodynamics to be limited. After the first results from RHIC, it was somewhat of a surprise that ideal hydrodynamics could reproduce the hadron transverse momentum spectra in central and semi-peripheral collisions. This included their anisotropy in non-central collisions, which is measured by the elliptic flow coefficient, $v_2 (p_T)$. Ideal hydrodynamical models were fairly successful in describing the dependence of $v_2$ on the hadron rest mass for transverse momenta up to about 1.5-2 GeV/c \cite{Huovinen:2001cy, Hirano:2002ds,Tannenbaum:2006ch, Kolb:2003dz}. This observation led to the conclusion that the QGP formed at RHIC could have a short thermalization time ($\tau_0 \lesssim 1$ fm/c) and a low shear viscosity.
As a result it was posited that the matter created in the experiment behaves like a nearly perfect fluid starting at very early times after the collision. However, recent results from viscous hydrodynamical simulations which include all 2nd-order transport coefficients consistent with conformal symmetry \cite{Luzum:2008cw} have shown that estimates of the thermalization time are rather uncertain due to poor knowledge of the proper initial conditions, details of plasma hadronization, subsequent hadronic cascade, etc.\footnote{For more about the application of viscous hydrodynamics to heavy-ion phenomenology we refer the reader to Refs.~\cite{Dusling:2007gi,Luzum:2008cw,Song:2008hj,Heinz:2009xj}.}. As a result, it now seems that thermalization times of up to $\tau_0\sim 2$ fm/c are not completely ruled out by RHIC data. Faced with this challenge it has been recently suggested that it may be possible to experimentally constrain $\tau_0$ by making use of high-energy electromagnetic probes such as dileptons \cite{Mauricio:2007vz, Martinez:2008di, Martinez:2008rr, Martinez:2008mc} and photons \cite{Schenke:2006yp, Bhattacharya:2008up, Bhattacharya:2008mv}. As mentioned above, one of the key ingredients necessary to perform any numerical simulation using fluid hydrodynamics is the proper choice of initial conditions at the initially simulated time ($\tau_0$). These initial conditions include the initial fluid energy density $\epsilon$, the initial components of the fluid velocity $u^\mu$ and the initial shear tensor $\Pi^{\mu\nu}$. Once the set of initial conditions is known, it is ``simple'' to follow the subsequent dynamics of the fluid equations in simulations. At the moment there is no first principles calculation that allows one to determine the initial conditions necessary. 
Two different approaches are currently used for numerical simulations of fluids in heavy-ion collisions: Glauber-type \cite{Kolb:2001qz} or Color-Glass-Condensate (CGC) initial conditions\footnote{For a recent review on the initial conditions based on the CGC approach see Ref.~\cite{Lappi:2009mp} and references therein.}. The uncertainty in the initial conditions introduces a systematic theoretical uncertainty when, for example, the transport coefficient $\eta / s$ is extracted from experimental data \cite{Dusling:2007gi,Luzum:2008cw,Song:2008hj,Heinz:2009xj}. This is due to the fact that when the initial energy density profile is fixed using CGC-based initial conditions \cite{Hirano:2005xf, Drescher:2006pi,Lappi:2006xc}, one obtains a larger initial spatial eccentricity and momentum anisotropy when compared with the Glauber model. Moreover, the values of the components of the shear tensor $\Pi^{\mu\nu}$ at $\tau_0$ are also affected by the choice of either CGC or Glauber initial conditions (see the discussion in Sec.~4 of Ref.~\cite{Baier:2006gy}). In the case of Glauber initial conditions the shear tensor is completely unconstrained. In the case of CGC initial conditions there is a prescription for calculating the initial shear; however, with CGC initial conditions the longitudinal pressure is zero due to the assumption of exact boost invariance and the subsequent thermalization of the system could completely change the initial shear obtained in the CGC approximation. Therefore, in both cases it would seem that the initial shear is effectively unconstrained. Given these uncertainties it would be useful to have a method which can help to constrain the allowed initial conditions used in hydrodynamical simulations. In this work we derive general criteria which impose bounds on the initial time $\tau_0$ at which one can apply 2nd-order viscous hydrodynamical modeling of the matter created in ultrarelativistic heavy-ion collisions.
We do this firstly by requiring the positivity of the effective longitudinal pressure and secondly by requiring that the shear tensor be small compared to the isotropic pressure. Based on these requirements we find that, for a given set of transport coefficients, the allowed minimum value of $\tau_{0}$ is non-trivially related to the initial condition for the shear tensor, $\Pi^{\mu\nu} (\tau_0)\equiv \Pi^{\mu\nu}_0$, and the energy density $\epsilon (\tau_0)\equiv\epsilon_0$. To make this explicit we study $0+1$ dimensional 2nd-order viscous hydrodynamics \cite{Muronga:2003ta, Baier:2007ix, Bhattacharyya:2008jc}, where the transport coefficients are either those of a weakly-coupled transport theory \cite{Arnold:2000dr, Arnold:2003zc,York:2008rr} or those obtained from a strongly-coupled ${\cal N}=4$ supersymmetric (SYM) plasma \cite{Baier:2007ix, Bhattacharyya:2008jc}. We then show how the constraints derived from the 0+1 dimensional case can be used to estimate where higher dimensional simulations will cease to be physical/trustworthy. Our technique is complementary to the approach of Molnar and Huovinen \cite{Huovinen:2008te} which uses kinetic theory to assess the applicability of hydrodynamics. In contrast to their work, here we do not invoke any physics other than hydrodynamical evolution itself and merely require that it be reasonably self-consistent. The work is organized as follows: in Sec.~\ref{sec:setup} we review the basic setup of the 2nd-order viscous hydrodynamics formalism and its application to a $0+1$ dimensional boost invariant QGP (either in the weakly or strongly coupled limits). In Sec.~\ref{sec:analyticapproximation} we present an approximate analytical solution to the equations of motion for a $0+1$ dimensional system. In Sec.~\ref{sec:results}, we present our analytical and numerical results in both the strong and weak coupling limits of the $0+1$ dimensional QGP. In Sec.~\ref{sec:conclusions} we present our conclusions.
\section{Basic setup} \label{sec:setup} In this section we briefly review the general framework of the 2nd-order viscous hydrodynamic equations for a conformal fluid, i.e. we will consider just shear viscosity and neglect bulk viscosity. We will also ignore heat conduction. The energy-momentum tensor for a relativistic fluid in the presence of shear viscosity is given by\footnote{The notation we use throughout the text is summarized in Appendix~\ref{app:notations}.}: \begin{equation} \label{tensor} T^{\mu \nu}=\epsilon\, u^\mu\, u^\nu-p \Delta^{\mu \nu}+\Pi^{\mu \nu}, \end{equation} where $\epsilon$ and $p$ are the fluid energy density and pressure, $u^\mu$ is the normalized fluid four-velocity ($u^\mu u_\mu =1$) and $\Pi^{\mu \nu}$ is the shear tensor which has two important properties: (1) $\Pi^\mu_\mu =0$ and (2) $u_\mu\Pi^{\mu \nu}=0$. Requiring conservation of energy and momentum, $D_\mu T^{\mu \alpha}=0$, gives the space-time evolution equations for the fluid velocity and the energy density: \begin{eqnarray} \label{eqsvel+ene} (\epsilon+p)D u^\mu&=&\nabla^\mu p- \Delta^\mu_\alpha D_\beta \Pi^{\alpha \beta}\, , \nonumber\\ D \epsilon &=& - (\epsilon+p) \nabla_\mu u^\mu+\frac{1}{2}\Pi^{\mu \nu} \nabla_{\langle \nu} u_{\mu\rangle}\, , \end{eqnarray} where $D_\mu$ is the geometric covariant derivative, $D\equiv u^\alpha D_\alpha$ is the comoving time derivative in the fluid rest frame and $\nabla^\mu\equiv \Delta^{\mu \alpha} D_\alpha$ is the spatial derivative in the fluid rest frame. The brackets $\langle\ \rangle$ construct terms which are symmetric, traceless, and orthogonal to the fluid velocity (see Appendix~\ref{app:notations} for their definition). To obtain a complete, solvable system of equations, viscous hydrodynamics requires an additional equation of motion for the shear tensor. This is accomplished by expanding the equations of motion to second order in gradients.
It has been found that at zero-chemical potential in a conformal fluid in any curved space-time, the shear tensor satisfies \cite{Baier:2007ix, Bhattacharyya:2008jc}: \begin{eqnarray} \Pi^{\mu\nu} &=& \eta \nabla^{\langle \mu} u^{\nu\rangle} - \tau_\pi \left[ \Delta^\mu_\alpha \Delta^\nu_\beta D\Pi^{\alpha\beta} + \frac 4{3} \Pi^{\mu\nu} (\nabla_\alpha u^\alpha) \right] \nonumber\\ &&\quad + \frac{\kappa}{2}\left[R^{<\mu\nu>}+2 u_\alpha R^{\alpha<\mu\nu>\beta} u_\beta\right]\nonumber\\ && -\frac{\lambda_1}{2\eta^2} {\Pi^{<\mu}}_\lambda \Pi^{\nu>\lambda} +\frac{\lambda_2}{2\eta} {\Pi^{<\mu}}_\lambda \omega^{\nu>\lambda} - \frac{\lambda_3}{2} {\omega^{<\mu}}_\lambda \omega^{\nu>\lambda}\, , \label{pieq} \end{eqnarray} where $\omega_{\mu \nu}=-\nabla_{[\mu} u_{\nu]}$ is the antisymmetric tensor representing the fluid vorticity and $R^{\alpha \mu \nu \beta}$ and $R^{\mu \nu}$ are the Riemann and Ricci tensors, respectively. The coefficients $\tau_\pi,\kappa,\lambda_1,\lambda_2$ and $\lambda_3$ are the transport coefficients consistent with conformal symmetry. \subsection{0+1 Dimensional Conformal 2nd-Order Viscous Hydrodynamics} \label{subsec:hydroeqs} Let us consider a system expanding in a boost invariant manner along the longitudinal (beamline) direction with a uniform energy density in the transverse plane. For this simplest heavy-ion collision model, it is sufficient to consider expansion in flat space. Also for this simple model, there is no fluid vorticity, and the energy density, the shear viscous tensor and the fluid velocity only depend on the proper time $\tau$. For this 0+1 dimensional model the 2nd-order viscous hydrodynamic equations (Eqs.~(\ref{eqsvel+ene}) and (\ref{pieq})) become rather simple in the conformal limit.
In terms of proper time, $\tau = \sqrt{t^2 -z^2}$, and space-time rapidity, $\zeta = {\rm arctanh}(z/t)$, these are given by \cite{Muronga:2003ta,Baier:2007ix}: \begin{eqnarray} \partial_\tau \epsilon&=&-\frac{\epsilon+p}{\tau}+\frac{\Pi}{\tau} \, , \label{0+1eqe}\\ \partial_\tau \Pi &=& -\frac{\Pi}{\tau_\pi} +\frac{4 \eta}{3\, \tau_\pi \tau}-\frac{4}{3\, \tau} \Pi -\frac{\lambda_1}{2\,\tau_\pi\,\eta^2} \left(\Pi\right)^2 \, , \label{0+1eqp} \end{eqnarray} where $\epsilon$ is the fluid energy density, $p$ is the fluid pressure, $\Pi \equiv \Pi^\zeta_\zeta$ is the $\zeta\zeta$ component of the fluid shear tensor, $\eta$ is the fluid shear viscosity, $\tau_\pi$ is the shear relaxation time, and $\lambda_1$ is a coefficient which arises in the complete 2nd-order viscous hydrodynamical equations in either the strongly \cite{Baier:2007ix,Bhattacharyya:2008jc} or weakly \cite{Arnold:2000dr, Arnold:2003zc,Muronga:2003ta,York:2008rr,Betz:2008me} coupled limits. The Navier-Stokes limit is recovered upon taking $\tau_\pi\rightarrow 0$ and $ \lambda_1 \rightarrow 0$ in which case one obtains $\Pi_\text{Navier-Stokes} = 4 \eta/(3 \tau)$. These coupled differential equations are completed by a specification of the equation of state, which relates the energy density and the pressure through $p = p(\epsilon)$, together with initial conditions. For 0+1 dimensional dynamics one must specify the energy density and $\Pi$ at the initial time, $\epsilon_0 \equiv \epsilon(\tau_0)$ and $\Pi_0 \equiv \Pi(\tau_0)$, where $\tau_0$ is the proper-time at which one begins to solve the differential equations.
\subsection{Specification of equation of state and dimensionless variables} \label{subsec:eosscaling} In the following analysis we will assume an ideal equation of state, in which case we have \begin{equation} \label{eqstate} p = \frac{N_{\rm dof}\, \pi^2}{90} T^4\, , \end{equation} where for quantum chromodynamics with $N_c$ colors and $N_f$ quark flavors, $N_{\rm dof} = 2 (N_c^2-1) + 7 N_c N_f/2$ which for $N_c=3$ and $N_f=2$ is $N_{\rm dof} = 37$. The general method used below, however, can easily be extended to a more realistic equation of state. In the conformal limit the trace of the four-dimensional stress tensor vanishes requiring $\epsilon = 3 p$ which, using Eq.~(\ref{eqstate}), allows us to write compactly \begin{equation} \epsilon = (T/\gamma)^4, \hspace{0.5cm} \text{with}\hspace{0.5cm}\gamma \equiv \left( \frac{30}{\pi^2 N_{\rm dof}} \right)^{1/4} \,. \label{energyideal} \end{equation} Likewise we can simplify the expression for the entropy density, $s$, using the thermodynamic relation $T s = \epsilon + p$ to obtain $s = 4 \epsilon / 3 T$ or equivalently \begin{equation} s = \frac{4}{3 \gamma}\, \epsilon^{3/4} \, . 
\label{entropyideal} \end{equation} \begin{table}[t] \begin{tabular}{|c||c|c|} \hline {\bf $\;$ Transport coefficient $\;$} & {\bf $\;$ Weakly-coupled QCD $\;$ } & {\bf $\;$ Strongly-coupled ${\cal N}=4$ SYM $\;$} \\ \hline $\bar{\eta}\equiv \eta/s$ & $\sim 1/(g^4 \log g)$ & $1/(4 \pi)$\\ \hline $\tau_\pi$ & $6 \bar{\eta}/T$ & $\bigl(2-\log 2\bigr)/(2\pi T)$ \\ \hline $\lambda_1$ & $(4.1 \rightarrow 5.2)\,\bar{\eta}^2 s/T$ & $2\,\bar{\eta}^2 s/T$\\ \hline \end{tabular} \caption{Typical values of the transport coefficients for a weakly-coupled QGP \cite{York:2008rr,Arnold:2000dr,Arnold:2003zc} and a strongly-coupled ${\cal N}=4$ SYM plasma \cite{Baier:2007ix,Bhattacharyya:2008jc}.} \label{transcoeff} \end{table} When solving Eqs.~(\ref{0+1eqe}) and (\ref{0+1eqp}) it is important to recognize that the transport coefficients depend on the temperature of the plasma and hence on proper-time. We summarize in Table \ref{transcoeff} the values of the transport coefficients in the strong and weak coupling limits. We point out that these are not universal relations as explained below in Secs.~\ref{strongcouplinglimit} and \ref{weakcouplinglimit}. The reader should note that in either the strong or weak coupling limit the coefficients $\tau_\pi$ and $\lambda_1$ scale as $\tau_\pi\propto T^{-1}$ and $\lambda_1\propto \bar\eta^2 s / T$. This suggests that we can parametrize both coefficients as: \begin{subequations} \label{parametrization} \begin{align} \tau_\pi & = \frac{c_\pi}{T} \, ,\\ \lambda_1 & = c_{\lambda_1} \bar{\eta}^2 \biggl(\frac{s}{T}\biggr)\, , \end{align} \end{subequations} where we have introduced a dimensionless version of the shear viscosity \begin{equation} \bar{\eta}\equiv \eta / s \, .
\end{equation} In our analysis we assume that $\bar\eta$ is independent of time.\footnote{Including a temperature-dependent shear viscosity does not change our observations fundamentally; however, there will be quantitative effects which will be elaborated upon in a forthcoming publication.} The dimensionless numbers $\bar\eta$, $c_\pi$ and $c_{\lambda_1}$ carry all of the information about the particular coupling limit we are considering. Using the ideal gas equation of state [Eqs. (\ref{energyideal}) and (\ref{entropyideal})], the parametrization (\ref{parametrization}) of $\tau_\pi$ and $\lambda_1$ can be rewritten in terms of the energy density $\epsilon$: \begin{subequations} \label{parametrization2} \begin{align} \tau_\pi & = \frac{c_\pi}{\gamma\, \epsilon^{1/4}} \, ,\\ \lambda_1 & = \frac{4}{3\gamma^2}\, c_{\lambda_1}\, \bar{\eta}^2 \, \epsilon^{1/2} \, . \end{align} \end{subequations} To remove the dimensionful scales and rewrite the fluid equations in a more explicit form we define the following dimensionless variables: \begin{subequations} \label{variabledefs} \begin{align} \bar\epsilon &\equiv \epsilon/\epsilon_0 \, , \\ \overline\Pi &\equiv \Pi/\epsilon_0 \, , \\ \bar\tau &\equiv \tau/\tau_0 \, , \end{align} \end{subequations} where $\tau_0$ is the proper-time at which the hydrodynamic evolution equations start to be integrated and $\epsilon_0$ is the energy density at $\tau_0$. 
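As a quick numerical cross check of these relations (our illustration, not part of the derivation above), the following Python fragment evaluates $\gamma$ for $N_{\rm dof} = 37$ and verifies that the ideal equation of state satisfies the thermodynamic identity $Ts = \epsilon + p$:

```python
import math

N_DOF = 37                                   # 2(Nc^2 - 1) + 7 Nc Nf/2 for Nc = 3, Nf = 2
GAMMA = (30.0 / (math.pi**2 * N_DOF))**0.25  # Eq. (energyideal)

def pressure(T):
    """Ideal-gas pressure, Eq. (eqstate); T in GeV, p in GeV^4."""
    return N_DOF * math.pi**2 / 90.0 * T**4

def energy_density(T):
    """Conformal limit, epsilon = (T/gamma)^4 = 3p."""
    return (T / GAMMA)**4

def entropy_density(eps):
    """s = 4 epsilon^{3/4} / (3 gamma), Eq. (entropyideal)."""
    return 4.0 / (3.0 * GAMMA) * eps**0.75

T = 0.350                      # GeV
eps = energy_density(T)
p = pressure(T)
s = entropy_density(eps)
# thermodynamic identity T s = epsilon + p holds to machine precision
assert abs(T * s - (eps + p)) < 1e-12
```

For $N_{\rm dof}=37$ this gives $\gamma \approx 0.535$.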
After replacing the dimensionless variables (\ref{variabledefs}) in the parametrization (\ref{parametrization2}) and Eqs.~(\ref{0+1eqe}) and (\ref{0+1eqp}), we rewrite the fluid equations: \begin{subequations} \begin{eqnarray} && \bar\tau\, \partial_{\bar\tau} \bar\epsilon + \frac{4}{3}\, \bar\epsilon - \overline\Pi = 0 \, , \label{diffeqa} \\ && \overline\Pi + \frac{c_\pi}{\gamma\, k\, \bar\epsilon^{1/4}} \left[ \partial_{\bar\tau} \overline\Pi + \frac{4}{3} \frac{\overline\Pi}{\bar\tau} \right] - \frac{16 \,\bar\eta}{9\, \gamma\, k} \frac{\bar\epsilon^{3/4}}{\bar\tau} + \frac{3\, c_{\lambda_1}}{8} \frac{{\overline\Pi}^2}{\bar\epsilon} = 0 \, , \label{diffeqb} \end{eqnarray} \label{diffeqs} \end{subequations} where $k \equiv \tau_0 \epsilon_0^{1/4}$. Note that in terms of (\ref{variabledefs}) the boundary conditions are specified at $\bar\tau=1$ where $\bar\epsilon=1$ and $\overline\Pi(\bar\tau=1) = \overline\Pi_0$ which is a free parameter. When the hydrodynamical equations are written in the form given above [Eq.~(\ref{diffeqs})] all information about the initial proper-time and energy density is encoded in the parameter $k$ and all information about the equation of state is encoded in the parameter $\gamma$. \subsection{Strong coupling limit} \label{strongcouplinglimit} Motivated and guided by the AdS/CFT correspondence, Baier et al.~\cite{Baier:2007ix} and the Tata group \cite{Bhattacharyya:2008jc} have recently shown that new transport coefficients arise in a complete theory of second order relativistic viscous hydrodynamics. They also estimate their values at infinite 't Hooft coupling for ${\cal N}=4$ SYM theory at finite temperature. Calculations at finite 't Hooft coupling within the same theory have also been carried out \cite{Buchel:2004di,Benincasa:2005qc,Buchel:2008ac, Myers:2008yi,Paulos:2008tn,Buchel:2008bz}.
A remarkable aspect is that, while at first the strong 't Hooft coupling limit of the transport coefficients was expected to be universal \cite{Policastro:2001yc,Son:2007vk}, there is now evidence that these coefficients are not universal \cite{Brigante:2007nu,Brigante:2008gz,Kats:2007mq,Natsuume:2007ty,Buchel:2008vz}. Faced with this complication one is forced to make a choice as to which dual theory to consider. Here we will consider the values obtained in ${\cal N}=4$ SYM at infinite 't Hooft coupling as used in \cite{Baier:2007ix,Bhattacharyya:2008jc} as our typical strong coupling values. One can expect that these coefficients change in strongly-coupled QCD compared to ${\cal N}=4$ SYM theory in the infinite 't Hooft coupling limit. Nevertheless, we take these values over from strongly-coupled ${\cal N}=4$ SYM in order to get a feeling for what to expect in this regime. Expressed in terms of the dimensionless transport coefficients defined above, typical values of the strongly coupled transport coefficients are \begin{equation} \label{stronglimitvalues} \begin{aligned} \bar\eta &= \frac{1}{4 \pi} \, , \\ c_\pi &= \frac{2 - \log 2}{2 \pi} \, ,\\ c_{\lambda_1} &= 2 \, . \end{aligned} \end{equation} \subsection{Weak coupling limit} \label{weakcouplinglimit} Contrary to the case of ${\cal N}=4$ SYM at infinite coupling, in the case of QCD, where there is a running coupling and inherent scale dependence, the various transport coefficients are not fixed numbers but instead depend on the renormalization scale. In this limit the necessary transport coefficients have been calculated completely at leading order \cite{Arnold:2000dr, Arnold:2003zc,York:2008rr}.
Higher-order corrections to some transport coefficients from finite-temperature perturbation theory show poor convergence \cite{CaronHuot:2008uh,CaronHuot:2007gq}, similar to the case of the thermodynamic potential; however, resummation techniques can dramatically extend the range of convergence of finite-temperature perturbation theory in the case of static quantities and can, in the future, also be applied to dynamical quantities\footnote{See Ref.~\cite{Andersen:2004fp} and references therein.}. Until such resummation schemes are carried out for dynamical quantities, the values of the leading-order weak-coupling transport coefficients in Table \ref{transcoeff} can only be considered as rough guides to the values expected phenomenologically. Using this rough guide, the value of $\bar\eta$ from finite-temperature QCD calculations \cite{Arnold:2000dr, Arnold:2003zc} is $\eta / s\thicksim 0.5 \rightarrow 1$ at realistic couplings ($g\sim2\rightarrow3$). In this work we will assume a typical value of $\bar\eta =10/(4 \pi)$ in the weakly-coupled limit in order to compare with the results obtained in the strong coupling limit. In our analysis for the weak coupling limit, we will use \begin{equation} \label{weaklimitvalues} \begin{aligned} \bar\eta &= \frac{10}{4 \pi} \, , \\ c_\pi &= 6 \bar\eta \, , \\ c_{\lambda_1} &= \frac{9}{2} \, . \end{aligned} \end{equation} \subsection{Momentum space anisotropy} \label{sec:momentumanisotropybounds} We introduce the dimensionless parameter, $\Delta$, which measures the degree of momentum-space anisotropy of the fluid as follows \begin{equation} \Delta \equiv \frac{p_T}{p_L} - 1 \, , \end{equation} where $p_T = (T^{xx} + T^{yy})/2$ and $p_L = T^{zz} = - T^\zeta_\zeta$ are the effective transverse and longitudinal pressures, respectively. If $\Delta=0$, the system is locally isotropic.
If $-1 < \Delta < 0$ the system has a local prolate anisotropy in momentum space and if $\Delta >0$ the system has a local oblate anisotropy in momentum space. In Appendix~\ref{app:xideltarelation} we derive the relation between the $\Delta$ parameter defined above and the $\xi$ parameter introduced in Ref.~\cite{Romatschke:2003ms} to quantify the degree of local plasma isotropy. For small values of $\Delta$ the relation is $\Delta = 4\xi/5 + {\cal O}(\xi^2)$. In the 0+1 dimensional model of viscous hydrodynamics one can express the effective transverse pressure as $p_T = p + \Pi/2$ and the effective longitudinal pressure as $p_L = p - \Pi$. In the case of an ideal equation of state, rewriting in terms of our dimensionless variables (\ref{variabledefs}) gives \begin{equation} \Delta = \frac{9}{2} \left(\frac{\overline\Pi}{\bar\epsilon - 3 \overline\Pi}\right) \, . \end{equation} At the initial time $\bar\tau=1$, $\Delta_0 \equiv \Delta(\bar\tau=1)$ is given by \begin{equation} \Delta_0 = \frac{9}{2} \left(\frac{\overline\Pi_0}{1 - 3 \overline\Pi_0}\right) \, . \end{equation} In the limit $\overline\Pi \rightarrow -2\bar\epsilon/3$ we have $\Delta \rightarrow -1$ and in the limit $\overline\Pi \rightarrow \bar\epsilon/3$ we have $\Delta \rightarrow \infty$. Positivity of the longitudinal pressure requires that $\Delta$ remain finite at all times during the evolution of the plasma. Note that requiring positivity is a {\em weak constraint} on the magnitude of $\Delta$ since the formal justification for applying viscous hydrodynamical approximations is the neglect of large gradients and higher-order nonlinear terms. This requires that ${\overline\Pi}$ be small compared to the dimensionless pressure, $\bar{p}$, i.e. $|{\overline\Pi}| \ll \bar{p}$. This can be turned into a quantitative statement by requiring that $-\alpha \, \bar{p} < {\overline\Pi} < \alpha \, \bar{p}$, where $\alpha$ is a positive phenomenological constant which is less than or equal to 1, i.e. $0 < \alpha \leq 1$.
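The pressure decomposition above translates directly into code. A minimal sketch (our illustration, assuming the ideal equation of state; the function name is ours):

```python
def delta(pi_bar, eps_bar):
    """Anisotropy parameter Delta = p_T/p_L - 1 for the ideal EoS,
    using p_T = p + Pi/2 and p_L = p - Pi with p = eps/3."""
    p = eps_bar / 3.0
    p_T = p + pi_bar / 2.0
    p_L = p - pi_bar
    return p_T / p_L - 1.0

assert delta(0.0, 1.0) == 0.0            # isotropic
assert delta(0.1, 1.0) > 0.0             # oblate (Pi > 0)
assert -1.0 < delta(-0.1, 1.0) < 0.0     # prolate (Pi < 0)
# agrees with the compact form (9/2) Pi / (eps - 3 Pi)
assert abs(delta(0.05, 1.0) - 4.5 * 0.05 / (1.0 - 0.15)) < 1e-12
```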
The limit $\alpha \rightarrow 1$ gives the weak constraint of $-3/4 \leq \Delta < \infty$ and for general $\alpha$ requires $\Delta_- \leq \Delta \leq \Delta_+$ where \begin{equation} \Delta_\pm \equiv \pm\frac{3}{2} \left(\frac{\alpha}{1 \mp \alpha}\right) \, . \end{equation} For example, taking $\alpha = 1/3$ gives the constraint $-3/8 \leq \Delta \leq 3/4$. \section{Approximate Analytic Solution of 0+1 Conformal Hydrodynamics} \label{sec:analyticapproximation} In this section we present an approximate analytic solution to the 0+1 dimensional conformal 2nd-order hydrodynamical evolution equations. The approximation used will be to first exactly integrate the differential equation for the energy density (\ref{diffeqa}), thereby expressing the energy density as an integral of the shear. We then insert this integral relation into the equation of motion for the shear itself (\ref{diffeqb}) and expand in $\bar{\eta}$. Explicitly, the solution obtained from the first step is \begin{equation} \bar{\epsilon}(\bar\tau) = \bar\tau^{-4/3} \left[ 1+ \int_1^{\bar\tau} d\bar\tau^\prime \, (\bar\tau^\prime)^{1/3}\, {\overline\Pi}(\bar\tau^\prime) \right] \, . \label{analyticesolution} \end{equation} We then solve the second differential equation for $\overline\Pi$ approximately by dropping the second term in Eq.~(\ref{analyticesolution}) and inserting this into the second equation of (\ref{diffeqs}) to obtain \begin{equation} 27 \, c_{\lambda_1} \, \gamma \, k \, \bar\tau^{10/3} \, \overline\Pi^2 + 72 \, c_\pi \, \bar\tau^{7/3} \, \partial_{\bar\tau}\overline\Pi + ( 72 \, \gamma \, k \, \bar\tau^2 + 96 \, c_\pi \, \bar\tau^{4/3}) \, \overline\Pi = 128 \, \bar\eta \, .
\label{pilodiffeq} \end{equation} This differential equation has a solution of the form \begin{eqnarray} \overline\Pi = && \left(\frac{4}{3 c_{\lambda_1} \bar\tau^{4/3}}\right) \nonumber \\ && \times \, \frac{ {\cal C} \left[ 2 \; {}_1F_1\left( { 1-b \atop 2 } \Big| - a \, \bar\tau^{2/3} \right) + a\, (b-1) \, \bar\tau^{2/3} \; {}_1F_1\left( { 2-b \atop 3 } \Big| - a \, \bar\tau^{2/3} \right) \right] + 2 \, G_{1,2}^{2,0}\left( a \, \bar\tau^{2/3} \Big| {b \atop 0,0} \right) }{ a \, {\cal C} \, \bar\tau^{2/3} \; {}_1F_1\left( { 1-b \atop 2 } \Big| - a \, \bar\tau^{2/3} \right) - G_{1,2}^{2,0}\left( a \, \bar\tau^{2/3} \Big| {b+1 \atop 0,1} \right) } \, , \nonumber \\ \label{analyticpisolution} \end{eqnarray} where ${}_1F_1$ is a confluent hypergeometric function, $G$ is the Meijer G function, $a = 3 \gamma k / (2 c_\pi)$, $b = c_{\lambda_1} \bar\eta / c_\pi$, and ${\cal C}$ is an integration constant which is fixed by the initial condition for $\overline\Pi$ at $\bar\tau=1$. Requiring $\overline\Pi(\bar\tau=1) = \overline\Pi_0$ fixes ${\cal C}$ to be \begin{equation} {\cal C} = \frac{ 8 \, G_{1,2}^{2,0}\left( a \Big| {b \atop 0,0} \right) + 3\, c_{\lambda_1} \, \overline\Pi_0 \, G_{1,2}^{2,0}\left( a \Big| {b+1 \atop 0,1} \right) }{ \left[ 3\, a \, c_{\lambda_1} \, \overline\Pi_0 - 8 \right] {}_1F_1\left( { 1-b \atop 2 } \Big| - a \right) - 4 \, a \, (b-1) \, {}_1F_1\left( { 2-b \atop 3 } \Big| - a \right) } \, . \end{equation} To obtain the proper-time evolution of the energy density one must integrate (\ref{analyticesolution}) using (\ref{analyticpisolution}). This is possible to do analytically but the answer is rather unwieldy and hence not very useful to list explicitly. Below we will use this approximate analytic solution as a cross check for our numerics. In the limit $\bar\eta \rightarrow 0$ this solution becomes an increasingly better approximation and hence represents the leading correction to ideal hydrodynamical evolution in that limit.
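As a consistency check of the approximation step just described, one can verify symbolically that Eq.~(\ref{pilodiffeq}) is exactly Eq.~(\ref{diffeqb}) with $\bar\epsilon = \bar\tau^{-4/3}$ inserted and multiplied through by $72\,\gamma\,k\,\bar\tau^2$. A minimal \texttt{sympy} sketch of this check (our illustration; the symbol names mirror $\bar\eta$, $c_\pi$, $c_{\lambda_1}$, $\gamma$ and $k$ in the text):

```python
import sympy as sp

tau, eta, cpi, cl1, g, k = sp.symbols(
    'tau etabar cpi clambda1 gamma k', positive=True)
Pi = sp.Function('Pi')(tau)

# leading-order (ideal) energy density used in the approximation
eps = tau**sp.Rational(-4, 3)

# Eq. (diffeqb) with the ideal energy density inserted
diffeqb = (Pi
           + cpi / (g * k * eps**sp.Rational(1, 4))
             * (Pi.diff(tau) + sp.Rational(4, 3) * Pi / tau)
           - sp.Rational(16, 9) * eta * eps**sp.Rational(3, 4) / (g * k * tau)
           + sp.Rational(3, 8) * cl1 * Pi**2 / eps)

# Eq. (pilodiffeq), written with everything on the left-hand side
pilo = (27 * cl1 * g * k * tau**sp.Rational(10, 3) * Pi**2
        + 72 * cpi * tau**sp.Rational(7, 3) * Pi.diff(tau)
        + (72 * g * k * tau**2 + 96 * cpi * tau**sp.Rational(4, 3)) * Pi
        - 128 * eta)

# the two expressions differ only by an overall factor of 72 gamma k tau^2
assert sp.simplify(sp.expand(72 * g * k * tau**2 * diffeqb - pilo)) == 0
```

The overall factor $72\,\gamma\,k\,\bar\tau^2$ simply clears the denominators, so the assertion confirms the reduction term by term.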
Note that in the limit $c_{\lambda_1} \rightarrow 0$ and $c_\pi \rightarrow 0$ the differential equation above (\ref{pilodiffeq}) reduces to an algebraic equation \begin{equation} \overline\Pi_\text{Ideal Navier-Stokes} = \frac{16 \bar\eta}{9 \gamma k \bar\tau^2} \, , \end{equation} which, when converted back to dimensionful variables, corresponds to the Navier-Stokes solution under the assumption that $\bar\epsilon = \bar\tau^{-4/3}$. Finally we note that in the large time limit Eq.~(\ref{analyticpisolution}) simplifies to \begin{equation} \lim_{\bar\tau \rightarrow \infty} \overline\Pi = \overline\Pi_\text{Ideal Navier-Stokes} + {\cal O}\left(e^{- a \bar\tau^{2/3}}\right) \, . \end{equation} \section{Results} \label{sec:results} In this section we present the results of numerical integration of Eq.~(\ref{diffeqs}), along with consistency checks obtained by comparing these results with the approximate analytic solution presented in the previous section. \subsection{Time Evolution of $\Delta$} Below we present numerical results for the time evolution of the plasma anisotropy parameter $\Delta$. For purposes of illustration we will hold the initial temperature fixed at $T_0 = 350$ MeV and vary the starting time $\tau_0$. This will allow us to probe different values of $k = \tau_0 \epsilon_0^{1/4} = \tau_0 T_0/\gamma$ in a transparent manner. Note that, by doing this, each curve corresponds to a different initial entropy density; however, this is irrelevant for the immediate discussion since we are not concerned with phenomenological consequences, only with the general mathematical properties of the system of differential equations as one varies the fundamental parameters. In Secs.~\ref{sec:criticalline} and \ref{sec:convergencecriteria} we will present the general results as a function of the dimensionless parameter $k$.
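The numerical integration described above is straightforward to sketch. The following pure-Python fragment (our illustration, not the code used for the figures) evolves the dimensionless Eqs.~(\ref{diffeqs}) with a fixed-step RK4 integrator, using the strong-coupling coefficients of Eq.~(\ref{stronglimitvalues}), $\hbar c \simeq 0.197$ GeV\,fm, $\tau_0 = 1$ fm/c, $T_0 = 350$ MeV and $\Delta_0 = 0$:

```python
import math

HBARC = 0.19733                            # GeV fm
GAMMA = (30.0 / (math.pi**2 * 37))**0.25   # ideal EoS with N_dof = 37

# typical strong-coupling transport coefficients, Eq. (stronglimitvalues)
ETA_BAR = 1.0 / (4.0 * math.pi)
C_PI = (2.0 - math.log(2.0)) / (2.0 * math.pi)
C_L1 = 2.0

def rhs(tau, eps, pi, k):
    """Right-hand sides of the dimensionless Eqs. (diffeqa)-(diffeqb)."""
    deps = (pi - 4.0 * eps / 3.0) / tau
    dpi = (GAMMA * k * eps**0.25 / C_PI
           * (16.0 * ETA_BAR / (9.0 * GAMMA * k) * eps**0.75 / tau
              - pi - 3.0 * C_L1 / 8.0 * pi**2 / eps)
           - 4.0 * pi / (3.0 * tau))
    return deps, dpi

def evolve(k, pi0=0.0, tau_max=10.0, n=20000):
    """Fixed-step RK4 integration starting from tau_bar = 1, eps_bar = 1."""
    h = (tau_max - 1.0) / n
    tau, eps, pi = 1.0, 1.0, pi0
    history = [(tau, eps, pi)]
    for _ in range(n):
        k1 = rhs(tau, eps, pi, k)
        k2 = rhs(tau + h / 2, eps + h / 2 * k1[0], pi + h / 2 * k1[1], k)
        k3 = rhs(tau + h / 2, eps + h / 2 * k2[0], pi + h / 2 * k2[1], k)
        k4 = rhs(tau + h, eps + h * k3[0], pi + h * k3[1], k)
        eps += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        pi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        tau += h
        history.append((tau, eps, pi))
    return history

# tau0 = 1 fm/c and T0 = 350 MeV fix k = tau0 T0 / gamma in hbar*c units
k = 1.0 * 0.350 / (GAMMA * HBARC)
traj = evolve(k)
deltas = [4.5 * p / (e - 3.0 * p) for _, e, p in traj]
```

The resulting $\Delta(\bar\tau)$ rises from zero to a maximum and then decays slowly toward the Navier-Stokes behavior, in qualitative agreement with the figures discussed below.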
\subsubsection{Strong Coupling} \begin{figure*}[t] \begin{center} \includegraphics[width=10cm]{strongcouplingvaryingtau0.eps} \end{center} \vspace{-6mm} \caption{ Result for the proper-time evolution of $\Delta$ obtained by numerical integration of Eq.~(\ref{diffeqs}). Long-dashed, solid, and short-dashed lines correspond to $\tau_0 = \{0.4, 1, 2\}$ fm/c, respectively. Transport coefficients in this case are the typical strong coupling values given in Eq.~(\ref{stronglimitvalues}). The initial temperature, $T_0$, is held fixed at $T_0 = 350$ MeV and it is assumed that $\Delta_0=0$ for this example. } \label{fig:strongcouplingvaryingtau0} \end{figure*} In Fig.~\ref{fig:strongcouplingvaryingtau0} we show our result for the proper-time evolution of the pressure anisotropy parameter, $\Delta$, obtained by numerical integration of Eq.~(\ref{diffeqs}). The transport coefficients in this case are the typical strong coupling values given in Eq.~(\ref{stronglimitvalues}). For purposes of illustration we have held the initial temperature fixed at $T_0 = 350$ MeV and assumed that the initial pressure anisotropy, $\Delta_0$, vanishes, i.e. $\Delta_0=0$. As can be seen from this figure, when the initial value of the pressure anisotropy is taken to be zero it does not remain so. A finite oblate pressure anisotropy is rapidly established due to the intrinsic longitudinal expansion of the fluid. Depending on the initial time at which the hydrodynamic evolution is initialized, $\Delta$ peaks in the range $0.2 \lesssim \Delta \lesssim 1$. \subsubsection{Comparison with analytic approximation} As a cross check of our numerical method, in Fig.~\ref{fig:strongcouplingcompare} we compare the result for $\Delta$ obtained via direct numerical integration of Eq.~(\ref{diffeqs}) and the approximate analytic solution given via Eqs.~(\ref{analyticpisolution}) and (\ref{analyticesolution}).
As can be seen from the figure, the analytic solution provides a reasonable approximation to the true time-evolution of the plasma anisotropy. The parameter $\Delta$ is a particularly sensitive quantity to compare. If one compares the analytic and numerical solutions for the energy density, for example, in the strongly-coupled case there is at most a 1\% deviation between the analytic approximation and our exact numerical integration during the entire 10 fm/c of simulation time. Of course, for larger viscosity the analytic approximation becomes more suspect, but for the weakly-coupled case we find that there is at most an 8\% deviation between the energy densities obtained using our analytic approximation and the exact numerical result. In the limit that $\bar\eta$ goes to zero, the analytic treatment and our numerical integration agree to increasingly high precision. Based on the agreement between the two approaches we are confident in our numerical integration of the coupled differential equations. \begin{figure*}[t] \begin{center} \includegraphics[width=10cm]{strongcouplingcompare.eps} \end{center} \vspace{-6mm} \caption{ Comparison of the result for $\Delta$ as a function of proper time using numerical integration of Eq.~(\ref{diffeqs}) and the approximate analytic solution given via Eqs.~(\ref{analyticpisolution}) and (\ref{analyticesolution}). Transport coefficients in this case are the typical strong coupling values given in Eq.~(\ref{stronglimitvalues}). The initial temperature, $T_0$, is taken to be $T_0 = 350$ MeV, the initial time, $\tau_0$, is taken to be $\tau_0 = 1$ fm/c and it is assumed that $\Delta_0=0$ for this example. } \label{fig:strongcouplingcompare} \end{figure*} \subsubsection{Weak Coupling} \begin{figure*}[t] \begin{center} \includegraphics[width=10cm]{weakcouplingvaryingtau0.eps} \end{center} \vspace{-6mm} \caption{ Result for the proper-time evolution of $\Delta$ obtained by numerical integration of Eq.~(\ref{diffeqs}).
Long-dashed, solid, and short-dashed lines correspond to $\tau_0 = \{0.4, 1, 2\}$ fm/c, respectively. Transport coefficients in this case are the typical weak coupling values given in Eq.~(\ref{weaklimitvalues}). The initial temperature, $T_0$, is held fixed at $T_0 = 350$ MeV and it is assumed that $\Delta_0=0$ for this example. \vspace{5mm} } \label{fig:weakcouplingvaryingtau0} \end{figure*} In Fig.~\ref{fig:weakcouplingvaryingtau0} we show our result for the proper-time evolution of the pressure anisotropy parameter, $\Delta$, obtained by numerical integration of Eq.~(\ref{diffeqs}). The transport coefficients in this case are the typical weak coupling values given in Eq.~(\ref{weaklimitvalues}). For purposes of illustration we have held the initial temperature fixed at $T_0 = 350$ MeV and assumed that the initial pressure anisotropy, $\Delta_0$, vanishes, i.e. $\Delta_0=0$. As can be seen from this figure, as in the strongly coupled case, a finite oblate pressure anisotropy is rapidly established due to the intrinsic longitudinal expansion of the fluid. With weak-coupling transport coefficients a larger pressure anisotropy develops. Depending on the initial time at which the hydrodynamic evolution is initialized, $\Delta$ peaks in the range $1 \lesssim \Delta \lesssim 9$. As can be seen from the $\tau_0 = 0.4$ fm/c result, if the initial simulation time is assumed to be small, then very large pressure anisotropies can develop. In that case, in dimensionful units, the peak of the $\Delta$ evolution occurs at a time of $\tau \sim 2.3$ fm/c.
Such large pressure anisotropies would cast doubt on the applicability of the 2nd-order conformal viscous hydrodynamical equations, since nonconformal 2nd-order terms and higher-order non-linear terms corresponding to 3rd- or higher-order expansions could become important.\footnote{See Ref.~\cite{Betz:2008me} for an example of 2nd-order terms which can appear when conformality is broken.} If, in the weakly coupled case, the initial simulation time $\tau_0$ is taken to be 0.2 fm/c, one would find that $\Delta$ would become infinite during the simulation. This divergence is due to the fact that the longitudinal pressure goes to zero and then becomes negative during some period of the time evolution. \subsection{Negativity of Longitudinal Pressure} \begin{figure*}[t] \begin{center} \includegraphics[width=10cm]{weakcouplingvaryingxi0.eps} \end{center} \vspace{-6mm} \caption{ Result for the proper-time evolution of the ratio of the longitudinal pressure to the isotropic pressure, $p_L/p$, obtained by numerical integration of Eq.~(\ref{diffeqs}). Solid, long-dashed, and short-dashed lines correspond to $\Delta_0 = \{0, -0.5, 10\}$, respectively. Transport coefficients in this case are the typical weak coupling values given in Eq.~(\ref{weaklimitvalues}). The initial temperature, $T_0$, is held fixed at $T_0 = 350$ MeV and it is assumed that $\tau_0=0.2$ fm/c for this example. The dotted grey line indicates $p_L=0$ in order to more easily identify the point in time where the longitudinal pressure becomes negative. } \label{fig:weakcouplingvaryingxi0} \end{figure*} In order to explicitly demonstrate the possibility that $\Delta$ diverges, in Fig.~\ref{fig:weakcouplingvaryingxi0} we have plotted the evolution of the ratio of the longitudinal pressure to the isotropic pressure ($p = \epsilon/3$), $p_L/p$, obtained by numerical integration of Eq.~(\ref{diffeqs}) for different assumed initial pressure anisotropies.
The transport coefficients in this case are the typical weak coupling values given in Eq.~(\ref{weaklimitvalues}). The initial temperature, $T_0$, is held fixed at $T_0 = 350$ MeV and it is assumed that $\tau_0=0.2$ fm/c for this example. As this figure shows, if the initial simulation time is too early, the longitudinal pressure of the system can become negative. The exact point in time at which it becomes negative depends on the assumed initial pressure anisotropy. As the initial pressure anisotropy becomes more prolate, the time over which the longitudinal pressure remains positive is increased. For initially extremely prolate distributions the longitudinal pressure can remain positive during the entire simulation time. In the opposite limit of extremely oblate distributions, the longitudinal pressure can become negative very rapidly and remain so throughout the entire lifetime of the plasma. We note that in the Navier-Stokes limit the initial shear would be $\left(\overline\Pi_0\right)_\text{Navier Stokes} = 16 \bar\eta/(9 \tau_0 T_0)$ which, using the initial conditions indicated in Fig.~\ref{fig:weakcouplingvaryingxi0}, gives $p_{L,0}/p = -11.1$. This means that if one were to use Navier-Stokes initial conditions the system would start with an extremely large negative longitudinal pressure. Using $\tau_0 = 1$ fm/c and $T_0 = 350$ MeV improves the situation somewhat; however, even in that case the initial Navier-Stokes longitudinal pressure remains negative with $p_{L,0}/p = -1.4$. What does a negative longitudinal pressure indicate? From a transport theory point of view it indicates that something is unphysical about the simulation, since in transport theory the pressure components are obtained from moments of the momentum-squared over the energy, e.g., for the longitudinal pressure \begin{equation} p_L = \int\!\frac{d^3p}{(2\pi)^3} \, \frac{p_z^2}{p^0} \, f({\bf p}) \, , \end{equation} where $f({\bf p})$ is the one-particle phase-space distribution function.
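The positivity of this moment can be checked directly in a simple isotropic example. The sketch below (an illustration, not a calculation from the paper) takes a massless Boltzmann distribution $f({\bf p}) = e^{-|{\bf p}|/T}$, for which the integrals give $\epsilon = 3T^4/\pi^2$ and $p_L = \epsilon/3 > 0$ exactly.

```python
import math

T = 0.35  # temperature in GeV (illustrative value)

def moments(n_p=2000, n_c=200, p_max=20.0):
    """Midpoint-rule integration of eps and p_L for f(p) = exp(-p/T), p0 = |p|.

    eps = int d^3p/(2pi)^3 p f,  p_L = int d^3p/(2pi)^3 (p_z^2/p0) f,
    in spherical coordinates with c = cos(theta), so p_z^2/p0 = p c^2.
    """
    dp = p_max * T / n_p
    dc = 2.0 / n_c
    eps = pL = 0.0
    for i in range(n_p):
        p = (i + 0.5) * dp
        w = p * p * math.exp(-p / T) * dp          # p^2 dp times f(p)
        for j in range(n_c):
            c = -1.0 + (j + 0.5) * dc
            phase = w * dc * 2.0 * math.pi / (2.0 * math.pi) ** 3
            eps += phase * p                       # energy weight p0 = p
            pL += phase * p * c * c                # p_z^2 / p0 = p c^2
    return eps, pL

eps, pL = moments()
exact_eps = 3.0 * T ** 4 / math.pi ** 2            # analytic result
ratio = pL / eps                                   # should be 1/3 for isotropy
```

Both pressure components built this way are manifestly non-negative for any non-negative $f({\bf p})$, which is the point of the transport-theory argument.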
Therefore, in transport theory all components of the pressure are positive definite. It is possible to generate negative longitudinal pressure in the case of coherent fields, as in the early-time evolution of the quark-gluon plasma \cite{Romatschke:2006wg,Fries:2007iy,Kovchegov:2007pq,Rebhan:2008uj}; however, such coherent fields are beyond the scope of hydrodynamical simulations, which describe the time evolution of a locally color- and charge-neutral fluid. This fundamental issue aside, the negativity of the longitudinal pressure indicates that the expansion which was used to derive the hydrodynamical equations themselves is breaking down. This expansion implicitly relies on the perturbation described by $\Pi$ being small compared to the isotropic pressure $p$. The point at which the longitudinal pressure goes to zero is the point at which the perturbation, $\Pi$, is equal in magnitude to the background around which one is expanding. This means that the perturbation is no longer a small correction to the system's evolution and that higher order corrections could become important. Therefore, negative longitudinal pressure signals regions of parameter space where one cannot trust 2nd-order viscous hydrodynamical solutions. In the following two subsections we will make this statement quantitative and extract constraints on the initial conditions which allow for 2nd-order viscous hydrodynamical simulations. \subsection{Determining the critical line in initial condition space} \label{sec:criticalline} For a fixed set of transport coefficients given by $\{\bar\eta, c_\pi, c_{\lambda_1}\}$ the only remaining freedom in the hydrodynamical evolution equations (\ref{diffeqs}) comes from the coefficient $\gamma$ (using the assumed ideal equation of state) and from the initial conditions through the dimensionless coefficient $k=\tau_0 \epsilon_0^{1/4}$ and the initial shear $\overline\Pi_0$.
In the next section we will vary these two parameters and determine the values for which one obtains a solution that, at some point during the evolution, has a negative longitudinal pressure. For a given $\overline\Pi_0$ we find that for $k$ below a certain value, the system exhibits a negative longitudinal pressure. We will define this point in $k$ as the ``critical'' value of $k$. Above the critical value of $k$ the longitudinal pressure is positive definite at all times. \subsubsection{Strong Coupling} \label{sec:strongcriticalline} \begin{figure*}[t] \begin{center} \includegraphics[width=10cm]{strongcouplingboundary.eps} \end{center} \vspace{-6mm} \caption{ Critical boundary in $k$ ($k_{\rm critical}$) as a function of the initial shear, $\overline\Pi_0$. Above this line solutions have positive longitudinal pressure at all times. Below this line solutions have negative longitudinal pressure at some point during the evolution. Transport coefficients in this case are the typical strong coupling values given in Eq.~(\ref{stronglimitvalues}). The left limit of the plot region corresponds to $\Delta_0 = -1$ and the right limit to $\Delta_0 = \infty$. } \label{fig:strongcouplingboundary} \end{figure*} In Fig.~\ref{fig:strongcouplingboundary} we plot the critical boundary in $k$ ($k_{\rm critical}$) as a function of the initial value of the shear, $\overline\Pi_0$. Since $k$ is proportional to the assumed initial simulation time $\tau_0$, increasing $k$ with fixed initial energy density corresponds to increasing $\tau_0$. Assuming a fixed initial temperature, for an initially prolate distribution one can start the simulation at earlier times. For an initially oblate distribution, one must start the simulation at later times in order to remain above the critical value of $k$. In general, $k = \tau_0 \epsilon_0^{1/4}$ and our result can be used to set a bound on this product.
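The scan over initial conditions described here can be sketched numerically. As in the earlier illustration, the equations of motion and coefficients are not spelled out in this excerpt, so the code below assumes a standard conformal 0+1 dimensional 2nd-order form with the $\lambda_1$ term dropped, an ideal equation of state with $g_*=37$, and strong-coupling values $\bar\eta = 1/(4\pi)$, $\tau_\pi = 2(2-\ln 2)\,\bar\eta/T$; with these simplifications the resulting critical $k$ will not exactly reproduce the value quoted below, but the bisection procedure itself is the point.

```python
import math

G_STAR = 37.0
ETA_BAR = 1.0 / (4.0 * math.pi)                    # strong-coupling eta/s
GAMMA = (30.0 / (math.pi ** 2 * G_STAR)) ** 0.25   # gamma = T/eps^{1/4}
T0 = 0.35                                          # fixed initial temperature (GeV)
EPS0 = (T0 / GAMMA) ** 4                           # fixed initial energy density

def rhs(tau, eps, Pi):
    """Assumed 0+1d conformal 2nd-order equations (lambda_1 term dropped)."""
    T = GAMMA * eps ** 0.25
    p = eps / 3.0
    s = (eps + p) / T
    eta = ETA_BAR * s
    tau_pi = 2.0 * (2.0 - math.log(2.0)) * ETA_BAR / T
    return (-(eps + p) / tau + Pi / tau,
            -Pi / tau_pi + 4.0 * eta / (3.0 * tau_pi * tau) - 4.0 * Pi / (3.0 * tau))

def goes_negative(k, growth=1.002, n_steps=2600):
    """RK4 on a log-spaced grid from tau0 = k/eps0^{1/4} with Pi0 = 0;
    return True if p_L = p - Pi ever drops below zero."""
    tau = k / EPS0 ** 0.25
    eps, Pi = EPS0, 0.0
    for _ in range(n_steps):
        h = tau * (growth - 1.0)
        k1 = rhs(tau, eps, Pi)
        k2 = rhs(tau + h / 2, eps + h / 2 * k1[0], Pi + h / 2 * k1[1])
        k3 = rhs(tau + h / 2, eps + h / 2 * k2[0], Pi + h / 2 * k2[1])
        k4 = rhs(tau + h, eps + h * k3[0], Pi + h * k3[1])
        eps += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        Pi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        tau += h
        if eps / 3.0 - Pi < 0.0:
            return True
    return False

# bisect for the critical k separating the two behaviors (assumed bracket)
lo, hi = 0.05, 1.5
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if goes_negative(mid):
        lo = mid
    else:
        hi = mid
k_critical = 0.5 * (lo + hi)
```

For $\overline\Pi_0 \neq 0$ one would simply seed the integration with the corresponding initial $\Pi$ and repeat the bisection, tracing out the critical line.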
In the case of typical strong coupling transport coefficients, the critical value of $k$ at $\overline\Pi_0 = 0$ is $k_{\rm critical} (\overline\Pi_0 = 0) = 0.26$. In the case of an ideal QCD equation of state and assuming $\overline\Pi_0 = 0$, the constraint is that $\tau_0 > \gamma \, k_{\rm critical} \, T_0^{-1}$, which is numerically $\tau_0 > 0.14 \, T_0^{-1}$. Assuming an initial time of $\tau_0$ = 1 fm/c = 5.07 ${\rm GeV}^{-1}$, this implies that $T_0 > 28$ MeV. For other initial values of $\overline\Pi_0$ one can use Fig.~\ref{fig:strongcouplingboundary} to determine the constraint. \subsubsection{Weak Coupling} \begin{figure*}[t] \begin{center} \includegraphics[width=10cm]{weakcouplingboundary.eps} \end{center} \vspace{-6mm} \caption{ Critical boundary in $k$ ($k_{\rm critical}$) as a function of the initial shear, $\overline\Pi_0$. Above this line solutions have positive longitudinal pressure at all times. Below this line solutions have negative longitudinal pressure at some point during the evolution. Transport coefficients in this case are the typical weak coupling values given in Eq.~(\ref{weaklimitvalues}). The left limit of the plot region corresponds to $\Delta_0 = -1$ and the right limit to $\Delta_0 = \infty$. } \label{fig:weakcouplingboundary} \end{figure*} In Fig.~\ref{fig:weakcouplingboundary} we plot the critical boundary in $k$ ($k_{\rm critical}$) as a function of the initial value of the shear, $\overline\Pi_0$. Since $k$ is proportional to the assumed initial simulation time $\tau_0$, increasing $k$ with fixed initial energy density corresponds to increasing $\tau_0$. As in the case of strong coupling, for an initially prolate distribution one can start the simulation at earlier times. For an initially oblate distribution, one must start the simulation at later times in order to remain above the critical value of $k$.
In the case of typical weak coupling transport coefficients, the critical value of $k$ at $\overline\Pi_0 = 0$ is $k_{\rm critical} (\overline\Pi_0 = 0) = 0.74$. In the case of an ideal QCD equation of state and assuming $\overline\Pi_0 = 0$, the constraint is that $\tau_0 > \gamma \, k_{\rm critical} \, T_0^{-1}$, which is numerically $\tau_0 > 0.40 \, T_0^{-1}$. Assuming an initial time of $\tau_0$ = 1 fm/c, this implies that $T_0 > 79$ MeV. For other initial values of $\overline\Pi_0$ one can use Fig.~\ref{fig:weakcouplingboundary} to determine the constraint. \subsection{For which initial conditions can one trust 2nd-order viscous hydrodynamical evolution?} \label{sec:convergencecriteria} As mentioned in Sec.~\ref{sec:momentumanisotropybounds}, the requirement that the longitudinal pressure be positive during the simulated time gives only a weak constraint, in the sense that it merely requires $\overline\Pi < \bar{p}$. A stronger constraint can be obtained by requiring instead $- \alpha \, \bar{p} \leq \overline\Pi \leq \alpha \, \bar{p}$ and then using this to constrain the possible initial time and energy density which can be used in hydrodynamical simulations. In the following subsections we will fix $\alpha=1/3$ as our definition of what constitutes a ``large'' correction. For this value of $\alpha$ the initial values of $\overline\Pi_0$ are constrained to lie between $-1/9 \leq \overline\Pi_0 \leq 1/9$. For a given $\overline\Pi_0$ in this range we find that for $k$ below a certain value we cannot satisfy the stronger constraint at all simulated times. We will define this point in $k$ as the ``convergence'' value of $k$, or $k_\text{convergence}$. Above this value of $k=k_\text{convergence}$ the shear satisfies the constraint $- \bar{p}/3 \leq \overline\Pi \leq \bar{p}/3$ at all simulated times and therefore represents a ``reasonable'' simulation.
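The arithmetic converting a bound on $k$ into a bound on the initial temperature, used repeatedly in these subsections, can be checked directly. In the sketch below $\gamma = T/\epsilon^{1/4}$ is evaluated for an ideal equation of state with $g_*=37$ effective degrees of freedom; this particular $g_*$ is an assumed value, chosen because it reproduces the quoted bounds to within a few percent.

```python
import math

G_STAR = 37.0                                       # assumed: gluons + 2 light quark flavors
GAMMA = (30.0 / (math.pi ** 2 * G_STAR)) ** 0.25    # gamma = T/eps^{1/4} ~ 0.54
FM = 5.07                                           # 1 fm/c in GeV^-1

def T0_min(k, tau0=FM):
    """Minimum initial temperature (GeV) implied by tau0 > gamma * k / T0."""
    return GAMMA * k / tau0

# k values quoted in the text, with the temperatures they should imply
bounds = {
    "strong, critical":    T0_min(0.26),   # text: ~28 MeV
    "weak, critical":      T0_min(0.74),   # text: ~79 MeV
    "strong, convergence": T0_min(1.58),   # text: ~167 MeV
    "weak, convergence":   T0_min(10.9),   # text: ~1.16 GeV
}
```

The same one-liner gives the bound for any other point read off the critical or convergence curves.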
\subsubsection{Strong Coupling} \begin{figure*}[t] \begin{center} \includegraphics[width=10cm]{strongcouplingconvergence.eps} \end{center} \vspace{-6mm} \caption{ Convergence boundary in $k$ ($k_{\rm convergence}$) as a function of the initial shear, $\overline\Pi_0$. Above this line solutions satisfy the convergence constraint. Transport coefficients in this case are the typical strong coupling values given in Eq.~(\ref{stronglimitvalues}).} \label{fig:strongcouplingconvergence} \end{figure*} In Fig.~\ref{fig:strongcouplingconvergence} we plot the ``convergence boundary'' in $k$ ($k_{\rm convergence}$) as a function of the initial shear, $\overline\Pi_0$. In the case of typical strong coupling transport coefficients, the convergence value of $k$ at $\overline\Pi_0 = 0$ is $k_{\rm convergence}(\overline\Pi_0 = 0) = 1.58$. In the case of an ideal QCD equation of state and assuming $\overline\Pi_0 = 0$, the constraint is that $\tau_0 > \gamma \, k_{\rm convergence} \, T_0^{-1}$, which is numerically $\tau_0 > 0.85 \, T_0^{-1}$. Assuming an initial time of $\tau_0$ = 1 fm/c, this implies that $T_0 > 167$ MeV. For other initial values of $\overline\Pi_0$ one can use Fig.~\ref{fig:strongcouplingconvergence} to determine the constraint. \subsubsection{Weak Coupling} In Fig.~\ref{fig:weakcouplingconvergence} we plot the ``convergence boundary'' in $k$ ($k_{\rm convergence}$) as a function of the initial shear, $\overline\Pi_0$. In the case of typical weak coupling transport coefficients, the convergence value of $k$ at $\overline\Pi_0 = 0$ is $k_{\rm convergence}(\overline\Pi_0 = 0) = 10.9$. In the case of an ideal QCD equation of state and assuming $\overline\Pi_0 = 0$, the constraint is that $\tau_0 > \gamma \, k_{\rm convergence} \, T_0^{-1}$, which is numerically $\tau_0 > 5.9 \, T_0^{-1}$. Assuming an initial time of $\tau_0$ = 1 fm/c = 5.07 ${\rm GeV}^{-1}$, this implies that $T_0 > 1.16$ GeV.
For other initial values of $\overline\Pi_0$ one can use Fig.~\ref{fig:weakcouplingconvergence} to determine the constraint. \subsection{What does this imply for higher dimensional hydrodynamical simulations?} \label{sec:higherd} If one proceeds to more realistic simulations in higher dimensional boost invariant treatments, e.g.\ 1+1 and 2+1 dimensions, the spatial variation of the initial conditions and the time evolution in the transverse plane have to be taken into account. In addition, new degrees of freedom, such as the initial fluid flow field and additional transport coefficients, arise; however, to a first approximation one can treat these higher dimensional systems as a collection of 0+1 dimensional systems with different initial conditions at each point in the transverse plane. Within this approximation one would quickly find that there are problems with the hydrodynamic treatment at the transverse edges of the simulated region. This happens because as one goes away from the center of the hot and dense matter, the energy density (temperature) drops and, assuming a fixed initial simulation time $\tau_0$, one would find that at a finite distance from the center the condition $k > k_\text{critical}$ would be violated by the initial conditions. In these regions of space, hydrodynamics would then predict an infinitely large anisotropy parameter, $\Delta$, casting doubt on the reliability of the hydrodynamic assumptions. Even worse, at a smaller distance from the center one would cross the ``convergence boundary'' in $k$, $k_\text{convergence}$, and therefore could not fully trust the analytic approximations used in deriving the hydrodynamic equations (conformality, truncation at 2nd order, etc.). \begin{figure*}[t] \begin{center} \includegraphics[width=10cm]{weakcouplingconvergence.eps} \end{center} \vspace{-6mm} \caption{ Convergence boundary in $k$ ($k_{\rm convergence}$) as a function of the initial shear, $\overline\Pi_0$. Above this line solutions satisfy the convergence constraint.
Transport coefficients in this case are the typical weak coupling values given in Eq.~(\ref{weaklimitvalues}).} \label{fig:weakcouplingconvergence} \end{figure*} Of course, an approximation by uncoupled 0+1 systems with different initial conditions would not generate any radial or elliptic flow; however, we find empirically that the picture above holds true in higher-dimensional simulations, justifying the basic logic. For example, using strongly-coupled transport coefficients and assuming an initially isotropic plasma ($\overline\Pi_0=0$), we found in Sec.~\ref{sec:strongcriticalline} that $k_{\rm critical} = 0.26$. In terms of the initial temperature this predicts that, when starting a simulation with $\tau_0=1$ fm/c, one will generate negative longitudinal pressures for any initial temperature $T_0 \lesssim 28$ MeV. We will now compare this prediction with results for the longitudinal pressure extracted from the 2+1 dimensional code of Luzum and Romatschke \cite{Luzum:2008cw,RomatschkeCode}. In Fig.~\ref{fig:hydro1p1} we show fixed-$\tau$ snapshots of the longitudinal pressure. The runs shown in Fig.~\ref{fig:hydro1p1} were performed on a $69^2$ transverse lattice with a lattice spacing of 2 ${\rm GeV}^{-1}$ using Glauber initial conditions starting at $\tau_0$=1 fm/c, an initial central temperature of $T_0 = 350$ MeV, zero initial shear and zero impact parameter. For these runs we have used the realistic QCD equation of state used in Ref.~\cite{Luzum:2008cw}. In the left panel of Fig.~\ref{fig:hydro1p1} the transport coefficients were set to the typical strong coupling values given in Eq.~(\ref{stronglimitvalues}), except with $c_{\lambda_1}=0$ due to the fact that the code used did not include this term in the hydrodynamic equations.
Based on the initial transverse temperature profile and our estimated critical initial temperature, in the strong-coupling case we expect negative longitudinal pressures to be generated at transverse radii $r \gtrsim 10$ fm. As can be seen from the left panel of Fig.~\ref{fig:hydro1p1}, at the edge of the simulated region the longitudinal pressure becomes negative starting already at very early times. The transverse radius at which this occurs is in good agreement with our estimate based on the 0+1 dimensional critical value detailed above. Based on our convergence criterion detailed in Sec.~\ref{sec:convergencecriteria} we found, in the strong-coupling case, that $k_{\rm convergence}(\overline\Pi_0 = 0) = 1.58$. Assuming $\tau_0 = 1$ fm/c, this translates into a minimum initial temperature of 167 MeV. Based on the transverse temperature profile used in the run shown in the left panel of Fig.~\ref{fig:hydro1p1}, this results in a maximum transverse radius of $r \sim 6.8$ fm. At radii larger than this value it is possible that higher order corrections are large and therefore the applicability of 2nd-order viscous hydrodynamics becomes questionable. Since this temperature is greater than the typical freeze-out temperature used, $T_f \sim 150$ MeV, this means that in the strong coupling limit it is relatively safe to use hydrodynamical simulations. However, one should be extremely careful with the transverse edges. The situation is not as promising in the weak-coupling case. To see this explicitly, in the right panel of Fig.~\ref{fig:hydro1p1} we show the longitudinal pressure resulting from a run with weak coupling transport coefficients (\ref{weaklimitvalues}). Based on the initial transverse temperature profile and our estimated critical initial temperature, in the weak-coupling case we expect negative longitudinal pressures to be generated at transverse radii $r \gtrsim 8$ fm.
Comparing this prediction to the results shown in the right panel of Fig.~\ref{fig:hydro1p1}, we see that the situation is even worse than expected. By the final time of 4.5 fm/c the entire central region has very low or negative longitudinal pressure. We note that at that time the radius at which the temperature has dropped below the freeze-out temperature is around 7.3 fm, so the region where the longitudinal pressure is negative (or almost negative) is still in the QGP phase. In terms of convergence, we remind the reader that based on our convergence criterion detailed in Sec.~\ref{sec:convergencecriteria} we found that in the weakly-coupled case $k_{\rm convergence}(\overline\Pi_0 = 0) = 10.9$. Assuming $\tau_0 = 1$ fm/c, we found that the initial central temperature should be greater than 1.16 GeV. As can be seen in Fig.~\ref{fig:hydro1p1}, the corrections to ideal hydrodynamics are sizable, so this again points to the possibility that there are large corrections to the 2nd-order hydrodynamic equations. Based on this, it would be questionable to ever apply 2nd-order viscous hydrodynamics to a weakly-coupled quark-gluon plasma generated in relativistic heavy-ion collisions. At the very least one would need to include nonconformal 2nd-order terms and 3rd-order terms in order to assess their impact. \begin{figure*}[t] \begin{center} \includegraphics[width=8cm]{hydro1p1strong.eps} $\;\;$ \includegraphics[width=8cm]{hydro1p1weak.eps} \end{center} \vspace{-6mm} \caption{ Evolution of the longitudinal pressure in proper-time obtained from the 2+1 dimensional viscous hydrodynamics code of Ref.~\cite{Luzum:2008cw}. The horizontal axis is the distance from the center of the simulated region. In the left panel we show the result obtained using the typical strong coupling values given in Eq.~(\ref{stronglimitvalues}) but with $c_{\lambda_1}=0$.
In the right panel we show the result obtained using the typical weak coupling values given in Eq.~(\ref{weaklimitvalues}) but with $c_{\lambda_1}=0$. The runs shown used Glauber initial conditions with an initial central temperature of $T_0 = 350$ MeV, initial time $\tau_0 = 1$ fm/c and $\Pi_\mu^\nu(\tau_0)=0$. } \label{fig:hydro1p1} \end{figure*} \section{Conclusions and Outlook} \label{sec:conclusions} In this paper we have derived two general criteria that can be used to assess the applicability of 2nd-order conformal viscous hydrodynamics to relativistic heavy-ion collisions. We did this by simplifying to a 0+1 dimensional system undergoing boost invariant expansion and then (a) requiring the longitudinal pressure to be positive during the simulated time or (b) requiring that the convergence criterion $|\Pi| < p/3$ hold during the simulated time. We showed that these requirements lead to a non-trivial relation between the possible initial simulation time $\tau_0$, the initial energy density $\epsilon_0$, and the initial value of the fluid shear tensor, $\Pi_0$. As a cross check of our numerics we presented an approximate analytic solution of 2nd-order conformal viscous hydrodynamical evolution which represents the leading correction to 0+1 dimensional boost-invariant ideal hydrodynamics in the limit $\eta/s \rightarrow 0$. The constraints derived here were then shown to provide guidance for where one might expect 2nd-order viscous hydrodynamics to be a good approximation in higher-dimensional cases. We found that the prediction of our criticality bound was in reasonable agreement with where the longitudinal pressure becomes negative in 2+1 dimensional viscous hydrodynamical simulations. Based on these findings it seems possible to estimate where one obtains convergent and trustworthy 2nd-order viscous hydrodynamical simulations based solely on the initial conditions and an analysis of the hydrodynamical evolution equations themselves.
In closing we mention that another outcome of this work is that we have shown that it is possible to use hydrodynamical simulations to predict the proper-time dependence of the plasma momentum-space anisotropy as quantified by the $\Delta$ or $\xi$ parameters. This can be used as input to calculations of the production of electromagnetic radiation from an anisotropic plasma \cite{Mauricio:2007vz, Martinez:2008di, Martinez:2008rr, Martinez:2008mc, Schenke:2006yp, Bhattacharya:2008up, Bhattacharya:2008mv}, calculations of quarkonium binding/polarization in an anisotropic plasma \cite{Dumitru:2007hy,Dumitru:2009ni}, and also to assess the phenomenological growth rate of plasma instabilities on top of the mean colorless fluid background (see Ref.~\cite{Strickland:2007fm} and references therein). The findings here present a complication in this regard, since phenomenological studies will require knowledge of $\Delta$ in the full transverse plane. As we have shown, 2nd-order hydrodynamical simulations predict that this parameter can become infinite in certain regions. In these regions one would no longer trust the predictions of the hydrodynamical model and additional input would be required. \section*{Acknowledgements} We are extremely grateful to Pasi Huovinen for his careful reading of our manuscript. We also acknowledge conversations with Adrian Dumitru, Berndt M\"uller, Paul Romatschke, and Derek Teaney. M. Martinez thanks J. Aichelin and K. Eskola for assistance provided in order to attend the 18th Jyvaskyla Summer School 2008, where this work was initiated. M. Martinez was supported by the Helmholtz Research School and Otto Stern School of the Goethe-Universit\"at Frankfurt am Main. M. Strickland was supported in part by the Helmholtz International Center for FAIR Landesoffensive zur Entwicklung Wissenschaftlich-\"Okonomischer Exzellenz program.
0902.3252
\section{Introduction} Recently, quantum field theory on noncommutative spaces has been studied extensively; see, e.g., \cite{NCreviews} and references therein. General quantum mechanical arguments indicate that it is not possible to measure a classical background space-time at the Planck scale, due to the effects of the gravitational backreaction \cite{Doplicher}. This has led to the belief that the classical differentiable manifold structure of space-time at the Planck scale should be replaced by some sort of noncommutative structure. The simplest approximation is a flat noncommutative space-time, which can be realized by coordinate operators $\hat{x}^{\mu}$ satisfying $\left[ \hat{x}^{\mu},\hat{x}^{\nu}\right] =i\hbar\theta^{\mu\nu}\,,$ where $\theta^{\mu\nu}$ is the noncommutativity parameter. However, the restriction to flat space-time is not natural, and one must discuss a more general curved noncommutative space-time, in which the commutator of the coordinates depends on the coordinates themselves. Such generalized noncommutative spaces arise, e.g., in the context of string theory due to the presence of a background antisymmetric magnetic $B$-field. The construction of a consistent quantum field theory and gravity on a curved noncommutative space is one of the main open challenges in modern theoretical physics; however, this is not easy because of conceptual and technical problems. To begin with, let us study quantum mechanics (QM) with position-dependent noncommutativity. Usually, noncommutative QM \cite{NCQM} deals with the following commutation relations: \begin{align} \left[ \hat{x}^{i},\hat{x}^{j}\right] & =i\hbar\theta^{ij},\ \label{1}\\ \left[ \hat{x}^{i},\hat{p}_{j}\right] & =i\hbar\delta_{j}^{i},\ \ \label{1a}\\ \left[ \hat{p}_{i},\hat{p}_{j}\right] & =0, \label{1b} \end{align} where $\theta^{ij}$ is a constant antisymmetric matrix.
However, it is not always reasonable to assume that the noncommutativity extends to the whole space with a constant noncommutativity parameter $\theta^{ij}$. One can consider a more general situation of position-dependent, or even local, noncommutativity, in which noncommutativity exists only in some restricted area of space, like, e.g., in the two-dimensional case, \begin{equation} \lbrack\hat{x},\hat{y}]=\frac{i\hbar\theta}{1+\theta\alpha\left( \hat{x}^{2}+\hat{y}^{2}\right) }~. \label{2} \end{equation} The constant $\alpha$ is a parameter which measures the degree of locality: if $\alpha=0$ the noncommutativity is global (\ref{1}-\ref{1b}); if $\alpha\neq0$ the noncommutativity is local. Other examples of position-dependent noncommutativity are the Lie-algebraic case $\left[ \hat{x}^{i},\hat{x}^{j}\right] =i\hbar f_{k}^{ij}\hat{x}^{k}$, in particular the kappa-Poincare noncommutativity \cite{Lukierski}, and the quadratic noncommutative algebra $\left[ \hat{x}^{i},\hat{x}^{j}\right] =i\hbar R_{kl}^{ij}\hat{x}^{k}\hat{x}^{l}$ which appears in the context of quantum groups \cite{Sklyanin}, \cite{Szabo}. The aim of this work is to construct a consistent quantum mechanics with a given position-dependent noncommutativity, \begin{equation} \left[ \hat{x}^{i},\hat{x}^{j}\right] =i\hbar\omega^{ij}\left( \hat{x}\right) ~, \label{3} \end{equation} i.e., to construct the complete algebra of commutation relations, including momenta, which obeys the Jacobi identity. \section{Jacobi identity and position-dependent noncommutativity} Note that in the presence of the position-dependent noncommutativity (\ref{3}), the other commutators $\left[ \hat{x}^{i},\hat{p}_{j}\right] $ and $\left[ \hat{p}_{i},\hat{p}_{j}\right] $ should be changed as well in order to satisfy the Jacobi identity.
For example, consider the identity \begin{equation} \left[ \hat{p}_{k},\left[ \hat{x}^{i},\hat{x}^{j}\right] \right] +\left[ \hat{x}^{j},\left[ \hat{p}_{k},\hat{x}^{i}\right] \right] +\left[ \hat{x}^{i},\left[ \hat{x}^{j},\hat{p}_{k}\right] \right] \equiv0~, \label{4} \end{equation} where the coordinates obey (\ref{3}) and the momenta still obey (\ref{1a}), (\ref{1b}). Then from (\ref{4}) one has \[ \left[ \hat{p}_{k},\omega^{ij}\left( \hat{x}\right) \right] +\left[ \hat{x}^{j},\delta_{k}^{i}\right] +\left[ \hat{x}^{i},\delta_{k}^{j}\right] \equiv0~, \] or \begin{equation} \left[ \hat{p}_{k},\omega^{ij}\left( \hat{x}\right) \right] \equiv0~. \label{5} \end{equation} If we now suppose that \begin{equation} \omega^{ij}\left( \hat{x}\right) =f_{l}^{ij}\hat{x}^{l}~, \label{6} \end{equation} then from (\ref{1a}) and (\ref{5}) it follows that \begin{equation} \left[ \hat{p}_{k},f_{l}^{ij}\hat{x}^{l}\right] =-i\hbar f_{l}^{ij}\delta_{k}^{l}=-i\hbar f_{k}^{ij}\equiv0~. \label{7} \end{equation} Thus, because of the Jacobi identity, the NCQM commutation relations (\ref{1}-\ref{1b}) are valid only for a position-independent parameter $\theta^{ij}$. Otherwise, we should modify (\ref{1a}) and (\ref{1b}) as well, in order to satisfy the Jacobi identity for the full set of coordinates and momenta. The question is how to do this. \section{The model of position-dependent noncommutativity} To answer the question posed at the end of the previous section, let us consider the classical model described by the first-order Lagrangian \begin{equation} L=p_{i}\dot{x}^{i}-H\left( p,x\right) +\left( p_{i}+B_{i}\left( x,\alpha\right) \right) \theta^{ij}\left( \dot{p}_{j}+\dot{B}_{j}\left( x,\alpha\right) \right) /2~, \label{8} \end{equation} where the functions $B_{i}$ depend on the parameter $\alpha$, such that $B_{i}\rightarrow0$ as $\alpha\rightarrow0$, and $H\left( p,x\right) $ is a given function which we will call the Hamiltonian.
This Lagrangian is, in fact, a generalization of the first-order model \cite{Kup} which reproduces, after quantization, the NCQM commutation relations (\ref{1})-(\ref{1b}). Note that first-order Lagrangians have also been used in the context of chiral bosons \cite{CBoson}. For simplicity we consider just the two-dimensional case, $i=1,2,$ $x^{i}=(x,y),\ p_{i}=\left( p_{x},p_{y}\right) ,\ B_{i}=\left( B_{x},B_{y}\right) $ and \begin{equation} \theta^{ij}=\theta\varepsilon^{ij}, \label{9} \end{equation} where $\theta$ is a real number which, as we will see, controls the noncommutativity, and $\varepsilon^{12}=1$. In the limit $\theta\rightarrow0$ the action (\ref{8}) reduces to the usual Hamiltonian action of classical mechanics. The Hamiltonization and canonical quantization of theories with first-order Lagrangians were considered in \cite{GK}, see also \cite{FJ}. Following the general lines of \cite{GK}, we construct the Hamiltonian formulation of (\ref{8}). Let us first rewrite (\ref{8}) as \begin{equation} L=p_{i}\dot{x}^{i}+\frac{\theta}{2}p_{i}\varepsilon^{ij}\dot{p}_{j}+\theta B_{i}\varepsilon^{ij}\dot{p}_{j}+\frac{\theta}{2}B_{j}\varepsilon^{jk}\partial_{i}B_{k}\dot{x}^{i}-H\left( p,x\right) ~.\label{8a} \end{equation} We adopt the notation of \cite{GK}, $\xi^{\mu}=\left( x,y,p_{x},p_{y}\right) ,\ J_{\mu}=\left( J_{i},J_{i+2}\right) $, where \begin{equation} J_{i}=p_{i}+\frac{\theta}{2}B_{j}\varepsilon^{jk}\partial_{i}B_{k},\ \ J_{i+2}=-\frac{\theta}{2}\varepsilon^{ij}\left( p_{j}+2B_{j}\right) ~.\nonumber \end{equation} In this notation (\ref{8a}) has the form \begin{equation} L=J_{\mu}\dot{\xi}^{\mu}-H\left( \xi\right) ~.\label{8b} \end{equation} The Hamiltonization of the first-order Lagrangian (\ref{8b}) leads to a Hamiltonian theory with second-class constraints \begin{equation} \Phi_{\mu}\left( \xi,\pi\right) =\pi_{\mu}-J_{\mu}(\xi)=0~,\label{constr} \end{equation} where $\pi_{\mu}$ are the momenta conjugate to $\xi_{\mu}$.
The constraint bracket is \[ \{\Phi_{\mu},\Phi_{\nu}\}=\ \Omega_{\mu\nu}=\partial_{\mu}J_{\nu}% -\partial_{\nu}J_{\mu}~. \] For the canonical variables $\xi_{\mu}$ the Dirac brackets are% \[ \left\{ \xi^{\mu},\xi^{\nu}\right\} _{D}=\omega_{0}^{\mu\nu},\ \ \omega _{0}^{\mu\nu}=\Omega_{\mu\nu}^{-1}~. \] The explicit form is:% \begin{align} & \left\{ x^{i},x^{j}\right\} _{D}=\theta d\varepsilon^{ij},\label{10}\\ & \left\{ x^{i},p_{j}\right\} _{D}=d\left( \delta_{j}^{i}-\theta \varepsilon^{ik}\partial_{k}B_{j}\right) ,\nonumber\\ & \left\{ p_{i},p_{j}\right\} _{D}=\theta\left( \partial_{2}B_{2}% \partial_{1}B_{1}-\partial_{1}B_{2}\partial_{2}B_{1}\right) d\varepsilon _{ij},\nonumber \end{align} where% \begin{equation} d=\frac{1}{1+\theta\left( \partial_{1}B_{2}-\partial_{2}B_{1}\right) }.\label{11}% \end{equation} It is easy to see that in the commutative limit, $\theta\rightarrow0$, the constructed Dirac brackets (\ref{10}) transform into the canonical Poisson brackets $\left\{ x^{i},x^{j}\right\} =\left\{ p_{i},p_{j}\right\} =0,\ \left\{ x^{i},p_{j}\right\} =\delta_{j}^{i}$, while in the limit $\alpha\rightarrow0$ ($B_{i}\rightarrow0$), (\ref{10}) transform into% \[ \left\{ x^{i},x^{j}\right\} _{D}=\theta\varepsilon^{ij},\ \ \left\{ x^{i},p_{j}\right\} _{D}=\delta_{j}^{i},\ \ \left\{ p_{i},p_{j}\right\} _{D}=0, \] which after quantization reproduce the NCQM commutation relations (\ref{1})-(\ref{1b}). So, in the general case, the vector field $B_{i}$, introduced in order to generalize the previously known model \cite{Kup}, can be interpreted as a correction to the symplectic potential which measures the curvature of the phase space due to noncommutativity. At this point we may ask if it is possible to generalize the above construction to the case of second-order models, i.e., models whose Lagrangians are quadratic in the velocities.
To investigate this possibility we consider the model introduced by Lukierski et al.\ \cite{Lukierski2}:% \begin{equation} L_{LSZ}=\frac{\dot{x}_{i}^{2}}{2}+\frac{\theta}{2}\varepsilon_{ij}\dot{x}% _{i}\ddot{x}_{j}. \label{l1}% \end{equation} Introducing Lagrange multipliers $p_{i}$ and new variables $y_{i}$, one rewrites (\ref{l1}) in an equivalent form:% \begin{equation} L^{\left( 0\right) }=p_{i}\left( \dot{x}_{i}-y_{i}\right) +\frac{y_{i}% ^{2}}{2}+\frac{\theta}{2}\varepsilon_{ij}y_{i}\dot{y}_{j}. \label{l2}% \end{equation} Next, by using the Horvathy-Plyushchay variables \cite{HP} \begin{equation} X_{i}=x_{i}+\theta\varepsilon_{ij}y_{j}-\theta\varepsilon_{ij}p_{j}% ,\ \ Q_{i}=\theta\left( y_{i}-p_{i}\right) , \label{l3}% \end{equation} we represent (\ref{l2}) as% \begin{equation} L^{\left( 0\right) }=L_{ext}^{\left( 0\right) }+L_{int}^{\left( 0\right) }, \label{l4}% \end{equation} where% \begin{align*} L_{ext}^{\left( 0\right) } & =p_{i}\dot{X}_{i}+\frac{\theta}{2}% \varepsilon_{ij}p_{i}\dot{p}_{j}-\frac{1}{2}p_{i}^{2},\\ L_{int}^{\left( 0\right) } & =\frac{1}{2\theta}\varepsilon_{ij}Q_{i}% \dot{Q}_{j}+\frac{1}{2\theta^{2}}Q_{i}^{2}. \end{align*} We see that the Lagrangian (\ref{l4}) separates into two disconnected parts describing the \textquotedblleft external\textquotedblright\ and \textquotedblleft internal\textquotedblright\ degrees of freedom. The Lagrangian $L_{ext}^{\left( 0\right) }$ is exactly the first-order model \cite{Kup} for which we constructed the generalization (\ref{8}). Note that if we now replace $L_{ext}^{\left( 0\right) }$ in (\ref{l4}) by the generalized Lagrangian (\ref{8}) and then invert the transformation (\ref{l3}) (returning from the Horvathy-Plyushchay variables to the original ones), we arrive at a Lagrangian involving time derivatives of the variables $p_{i}$. Thus, the $p_{i}$ are no longer Lagrange multipliers and cannot be eliminated in order to go back to the higher-order model (\ref{l1}).
Therefore, the generalization to the case of arbitrary fields $B_{i}$ is possible only in the first-order model \cite{Kup}. \section{Quantization} After canonical quantization, the Dirac brackets (\ref{10}) determine the commutation relations between the operators of the coordinates and momenta $\hat{\xi}^{\mu}=\left( \hat{x},\hat{y},\hat{p}_{x},\hat{p}_{y}\right) $: \begin{equation} \left[ \hat{\xi}^{\mu},\hat{\xi}^{\nu}\right] =i\hbar\omega^{\mu\nu}\left( \hat{x},\hat{y}\right) ,\label{11q}% \end{equation} and the quantum Hamiltonian $\hat{H}$ is constructed from the classical function $H\left( p,x\right) $, where some ordering must be chosen in order to construct the operators $\omega^{\mu\nu}\left( \hat{x},\hat{y}\right) $ and $\hat{H}$. The most natural choice is the symmetric Weyl ordering prescription, where to each function $f\left( \xi\right) $ on the phase space is associated a symmetrically ordered operator function $\hat{f}\left( \hat{\xi}\right) $ according to the rule% \begin{equation} \hat{f}\left( \hat{\xi}\right) =\int\frac{d^{4}k}{\left( 2\pi\hbar\right) ^{4}}\tilde{f}\left( k\right) e^{-\frac{i}{\hbar}k_{\mu}\hat{\xi}^{\mu}% },\label{12q}% \end{equation} where $\tilde{f}(k)$ is the Fourier transform of $f$. In particular, the function $d\left( x,y\right) $ will determine the position-dependent noncommutativity, $[\hat{x},\hat{y}]=i\hbar\theta d\left( \hat{x},\hat {y}\right) $.
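A minimal sympy sketch (with abstract noncommutative symbols $X,P$ standing in for $\hat{x},\hat{p}$) illustrates how the exponential in (\ref{12q}) produces symmetric ordering: already at second order, the cross term of $(k_{1}X+k_{2}P)^{2}$ is the symmetrized product, so the Weyl operator associated with the classical monomial $xp$ is $(XP+PX)/2$.

```python
import sympy as sp

k1, k2 = sp.symbols('k1 k2')                 # commuting Fourier variables
X, P = sp.symbols('X P', commutative=False)  # abstract operator symbols

# Second-order term of exp(-i k.xi/hbar) is (k1*X + k2*P)^2 / 2!.
# Its k1*k2 part is k1*k2*(X*P + P*X): the symmetrized product.
lhs = sp.expand((k1 * X + k2 * P)**2)
rhs = sp.expand(k1**2 * X**2 + k2**2 * P**2 + k1 * k2 * (X * P + P * X))
assert sp.expand(lhs - rhs) == 0
```

The same mechanism at order $n$ assigns to every classical monomial the average over all operator orderings, which is the content of the Weyl prescription.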
In \cite{KV} it was shown that the Jacobi identity for the operator algebra (\ref{11q}) is equivalent to the following condition% \begin{equation} \left( \xi^{\mu}\star\omega^{\nu\lambda}-\omega^{\nu\lambda}\star\xi^{\mu }\right) +\mbox{cycl}(\mu\nu\lambda)=0~,\label{i1}% \end{equation} where% \begin{equation} f\star g=\sum_{k=0}^{\infty}\hbar^{k}f\star_{k}g=f\cdot g+\frac{i\hbar}% {2}\omega^{\mu\nu}\partial_{\mu}f\partial_{\nu}g+...\label{i2}% \end{equation} is the star product associated with the noncommutative algebra (\ref{11q}) and $\omega^{\nu\lambda}=\omega_{0}^{\nu\lambda}+($quantum corrections$)$. To first order in $\hbar$, the equation (\ref{i1}) is equivalent to the Jacobi identity for the classical matrix $\omega_{0}^{\mu\nu}$:% \begin{equation} \omega_{0}^{\mu\sigma}\partial_{\sigma}\omega_{0}^{\nu\lambda}+\mbox{cycl}(\mu \nu\lambda)=0,\label{i3}% \end{equation} which holds by construction. In the second order, as well as in all even orders, the left-hand side of (\ref{i1}) is identically equal to zero, since% \begin{equation} f\star_{2n}g-g\star_{2n}f=0.\label{i4}% \end{equation} In the third order the condition (\ref{i1}) is not satisfied for $\omega^{\mu\nu}=\omega_{0}^{\mu\nu}$, i.e., it does not follow from the Jacobi identity (\ref{i3}) for $\omega_{0}^{\mu\nu}$.
To solve this problem one can construct a quantum correction to $\omega_{0}$, which has to be an $\hbar^{2}$ correction:% \begin{equation} \omega^{\mu\nu}=\omega_{0}^{\mu\nu}+\hbar^{2}\omega_{2}^{\mu\nu}+O\left( \hbar^{4}\right) \ .\label{i5}% \end{equation} Doing so, the third order of the condition (\ref{i1}) becomes% \begin{equation} \left( \xi^{\mu}\star_{3}\omega_{0}^{\nu\lambda}-\omega_{0}^{\nu\lambda}% \star_{3}\xi^{\mu}\right) +\left( \xi^{\mu}\star_{1}\omega_{2}^{\nu\lambda }-\omega_{2}^{\nu\lambda}\star_{1}\xi^{\mu}\right) +\mbox{cycl}(\mu\nu \lambda)=0~.\label{i6}% \end{equation} A quantum non-Poisson correction $\omega_{2}^{\mu\nu}$ can be found from (\ref{i6}) and has the form:% \begin{equation} \omega_{2}^{\mu\nu}=\frac{1}{48}\partial_{\gamma}\omega_{0}^{\rho\sigma }\partial_{\rho}\omega_{0}^{\gamma\delta}\partial_{\sigma}\partial_{\delta }\omega_{0}^{\mu\nu}-\frac{1}{24}\partial_{\sigma}\partial_{\gamma}\omega _{0}^{\mu\rho}\partial_{\rho}\partial_{\delta}\omega_{0}^{\nu\sigma}\omega _{0}^{\gamma\delta}.\label{i7}% \end{equation} An explicit formula for $\omega_{2}^{\mu\nu}$, taking into account the concrete form (\ref{10}) of $\omega_{0}^{\mu\nu}$, is presented in the appendix. A systematic procedure for the construction of quantum corrections $\omega_{2n}^{\mu\nu }$ to the classical Dirac bracket $\omega_{0}^{\mu\nu}$ was described in \cite{KV}, but explicit calculations were made only up to the fourth order in $\hbar$ and no general formula is yet available. Note that in some particular cases in which there is no ordering problem, e.g., for a linear Poisson structure $\omega^{\mu\nu}$ or if $\omega^{\mu\nu}$ depends only on one of the coordinates, the quantum Dirac brackets $\omega^{\mu\nu}$ coincide with the classical ones $\omega_{0}% ^{\mu\nu}$ (there are no corrections).
In this case, the Jacobi identity for the quantum algebra (\ref{11q}) holds true as a consequence of the Jacobi identity for the matrix $\omega_{0}^{\mu\nu}\left( x,y\right) $. An interesting question is whether it is possible to present an exact formula for the quantized Dirac brackets of the model, or whether one can only obtain a reasonable approximation, expressed as a power series in $\hbar$. To work with operators $\hat{\xi}^{\mu}$ which obey the commutation relations (\ref{11q}) one can use the polydifferential representation of the algebra (\ref{11q}): $\hat{\xi}^{\mu}=\xi^{\mu}+i\hbar/2\omega^{\mu\nu}\partial_{\nu }+...~,$ constructed in \cite{KV}. \section{Definition of $B_{i}$} Suppose that we know the position-dependent noncommutativity from some physical considerations, i.e., the function $d\left( x,y\right) ,$ which is the Weyl symbol of the operator $d\left( \hat{x},\hat {y}\right) $, is given. In order to define the complete algebra (\ref{10}), we need to know the functions $B_{i}$. For that one can use the equation (\ref{11}). However, one cannot determine two functions $B_{x}$ and $B_{y}$ from just one equation (\ref{11}). Therefore, we need to impose one additional condition. We will now consider two different choices of the additional condition. Let us first consider the condition $B_{i}=-\varepsilon^{ij}\partial_{j}\phi$, so that the equation (\ref{11}) becomes% \[ d=\frac{1}{1+\theta\bigtriangleup\phi}~, \] where $\bigtriangleup=\partial_{x}^{2}+\partial_{y}^{2}$. Suppose that the function $d$ has a rotational symmetry, as in the example (\ref{2}), i.e.,% \begin{equation} d=\frac{1}{1+\theta f\left( \alpha\left( x^{2}+y^{2}\right) \right) }~, \label{11a}% \end{equation} where $f$ is some given function, $f\left( 0\right) =const<\infty$. We will also need the antiderivative $F$, $F^{\prime}=f,$ $F\left( 0\right) =const<\infty$. From (\ref{11}) and (\ref{11a}) one finds \begin{equation} \bigtriangleup\phi=f\left( \alpha\left( x^{2}+y^{2}\right) \right) ~.
\label{12}% \end{equation} In polar coordinates $x=r\cos\varphi,\ y=r\sin\varphi$ the equation (\ref{12}) can be written as:% \begin{equation} \frac{1}{r}\partial_{r}r\partial_{r}\phi=f\left( \alpha r^{2}\right) ~, \label{13}% \end{equation} which yields% \begin{equation} \partial_{r}\phi=\frac{F\left( \alpha r^{2}\right) }{2\alpha r}+\frac{c}% {r}~. \label{14}% \end{equation} We fix the constant $c$ from the condition% \begin{equation} \lim_{\alpha\rightarrow0}\partial_{r}\phi=0~, \label{15}% \end{equation} which gives $c=-\frac{F\left( 0\right) }{2\alpha}$, so that% \begin{equation} \partial_{r}\phi=\frac{F\left( \alpha r^{2}\right) -F\left( 0\right) }{2\alpha r}. \label{16}% \end{equation} Then we calculate% \begin{align} & B_{x}=-\partial_{y}\phi=-\left( \sin\varphi\partial_{r}+\frac{1}{r}% \cos\varphi\partial_{\varphi}\right) \phi\left( r\right) =\label{17}\\ & -\sin\varphi\frac{F\left( \alpha r^{2}\right) -F\left( 0\right) }{2\alpha r}~=-y\frac{F\left( \alpha\left( x^{2}+y^{2}\right) \right) -F\left( 0\right) }{2\alpha\left( x^{2}+y^{2}\right) },\nonumber \end{align} and% \begin{align} & B_{y}=\partial_{x}\phi=\left( \cos\varphi\partial_{r}-\frac{1}{r}% \sin\varphi\partial_{\varphi}\right) \phi\left( r\right) =\label{18}\\ & \cos\varphi\frac{F\left( \alpha r^{2}\right) -F\left( 0\right) }{2\alpha r}~=x\frac{F\left( \alpha\left( x^{2}+y^{2}\right) \right) -F\left( 0\right) }{2\alpha\left( x^{2}+y^{2}\right) }.\nonumber \end{align} We see that $B_{i}\rightarrow0$ when $\alpha\rightarrow0$. The second choice is $B_{x}=B_{y}=\chi$. Note that this condition implies that $\left\{ p_{x},p_{y}\right\} _{D}=0$. We consider the more general case% \[ d=\frac{1}{1+\theta g\left( \alpha,x,y\right) }, \] where $g\left( \alpha,x,y\right) $ is an arbitrary function, $g\left( 0,x,y\right) =0$.
The equation (\ref{11}) yields% \[ \left( \partial_{x}-\partial_{y}\right) \chi=g\left( \alpha,x,y\right) ~. \] After the change of variables $\xi=x-y,\ \eta=x+y,$ one has% \[ \partial_{\xi}\chi=\frac{1}{2}g\left( \alpha,\frac{1}{2}\left( \xi+\eta\right) ,\frac{1}{2}\left( \eta-\xi\right) \right) ~. \] The solution of this equation is% \[ \chi=G_{\xi}\left( \xi,\eta\right) +G_{0}\left( \eta\right) ~, \] where% \[ G_{\xi}\left( \xi,\eta\right) =\frac{1}{2}% {\displaystyle\int} d\xi\, g\left( \alpha,\frac{1}{2}\left( \xi+\eta\right) ,\frac{1}{2}\left( \eta-\xi\right) \right) , \] and the function $G_{0}\left( \eta\right) $ can be determined from the condition that $\lim_{\alpha\rightarrow0}\chi=0$. Thus, we have constructed the classical model (\ref{8}) which after quantization leads to the two-dimensional QM with position-dependent noncommutativity $[\hat{x},\hat{y}]=i\hbar\theta d\left( \hat{x},\hat{y}\right) $. To define this model we use the position-dependent noncommutativity itself, which is supposed to be known ab initio, and an additional condition, imposed by hand from some physical considerations. For example, if we want $[\hat {p}_{x},\hat{p}_{y}]=0$, we choose the additional condition $B_{x}=B_{y}$, etc. \section{Local noncommutativity} Let us consider the particular example of local noncommutativity (\ref{2}). In this case the function $d$ is% \[ d=\frac{1}{1+\theta\alpha\left( x^{2}+y^{2}\right) }.
\] The first choice of additional condition ($B_{i}=-\varepsilon^{ij}\partial _{j}\phi$) implies:% \[ B_{x}=-\frac{\alpha}{4}y\left( x^{2}+y^{2}\right) ,\ \ B_{y}=\frac{\alpha }{4}x\left( x^{2}+y^{2}\right) , \] and the Dirac brackets (\ref{10}) are% \begin{align} & \left\{ x,y\right\} _{D}=\theta d~,\ \ \left\{ p_{x},p_{y}\right\} _{D}=\frac{3\theta\alpha^{2}}{16}\left( x^{2}+y^{2}\right) ^{2}% d~,\label{19}\\ & \left\{ x,p_{x}\right\} _{D}=\left[ 1+\frac{\alpha\theta}{4}\left( x^{2}+3y^{2}\right) \right] d~,\ \ \left\{ x,p_{y}\right\} _{D}% =-\frac{\alpha\theta}{2}xyd~,\nonumber\\ & \left\{ y,p_{y}\right\} _{D}=\left[ 1+\frac{\alpha\theta}{4}\left( 3x^{2}+y^{2}\right) \right] d~,\ \ \left\{ y,p_{x}\right\} _{D}% =-\frac{\alpha\theta}{2}xyd.\nonumber \end{align} The second choice means% \[ B_{x}=B_{y}=\frac{\alpha}{3}\left( x^{3}-y^{3}\right) ~, \] and% \begin{align} & \left\{ x,y\right\} _{D}=\theta d~,\ \ \left\{ p_{x},p_{y}\right\} _{D}=0,\label{19a}\\ & \left\{ x,p_{x}\right\} _{D}=\left[ 1+\alpha\theta y^{2}\right] d~,\ \ \left\{ x,p_{y}\right\} _{D}=\alpha\theta y^{2}d~,\nonumber\\ & \left\{ y,p_{y}\right\} _{D}=\left[ 1+\alpha\theta x^{2}\right] d~,\ \ \left\{ y,p_{x}\right\} _{D}=\alpha\theta x^{2}d.\nonumber \end{align} In order to compare the two models we consider the limit $r\rightarrow\infty$. In both cases $\left\{ x,y\right\} _{D}\rightarrow0$, and the Dirac brackets $\left\{ x,p_{x}\right\} _{D},\ \left\{ x,p_{y}\right\} _{D},\ \left\{ y,p_{y}\right\} _{D}$ and $\left\{ y,p_{x}\right\} _{D}$ remain bounded in this limit. However, $\lim_{r\rightarrow\infty}\left\{ p_{x},p_{y}\right\} _{D}=\infty$ in the first model, while $\left\{ p_{x},p_{y}\right\} _{D}=0$ in the second. Since, usually, a non-zero commutator of the momenta means the presence of a magnetic field, it would be difficult to give a physical meaning to the first model at infinity, whereas the second one is free from this difficulty.
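The brackets (\ref{19}) can be cross-checked directly: build the matrix $\Omega_{\mu\nu}=\partial_{\mu}J_{\nu}-\partial_{\nu}J_{\mu}$ for the first choice of $B_{i}$ and invert it numerically. A minimal sketch (assumed parameter values $\theta=\alpha=1$ and a hypothetical sample point $(x,y)=(1,1)$; the ordering of variables is $\xi^{\mu}=(x,y,p_{x},p_{y})$):

```python
import numpy as np

theta, alpha = 1.0, 1.0
x, y = 1.0, 1.0                      # hypothetical sample point
r2 = x**2 + y**2

# First-choice fields B_x = -(alpha/4) y r^2, B_y = (alpha/4) x r^2
# and their exact partial derivatives.
dBx_dx = -(alpha / 4) * 2 * x * y
dBx_dy = -(alpha / 4) * (x**2 + 3 * y**2)
dBy_dx = (alpha / 4) * (3 * x**2 + y**2)
dBy_dy = (alpha / 4) * 2 * x * y

d = 1.0 / (1.0 + theta * alpha * r2)
a = dBx_dx * dBy_dy - dBy_dx * dBx_dy        # enters {p_x, p_y}

# Omega_{mu nu} = d_mu J_nu - d_nu J_mu with J_i, J_{i+2} as in the text.
Omega = np.array([
    [0.0,                theta * a,           -1 - theta * dBy_dx, theta * dBx_dx],
    [-theta * a,         0.0,                 -theta * dBy_dy,     -1 + theta * dBx_dy],
    [1 + theta * dBy_dx, theta * dBy_dy,       0.0,                 theta],
    [-theta * dBx_dx,    1 - theta * dBx_dy,  -theta,               0.0],
])
w = np.linalg.inv(Omega)             # Dirac brackets {xi^mu, xi^nu}_D

# Compare with the closed-form brackets (19).
assert np.isclose(w[0, 1], theta * d)                                       # {x, y}
assert np.isclose(w[0, 2], (1 + alpha * theta / 4 * (x**2 + 3 * y**2)) * d) # {x, p_x}
assert np.isclose(w[0, 3], -alpha * theta / 2 * x * y * d)                  # {x, p_y}
assert np.isclose(w[1, 2], -alpha * theta / 2 * x * y * d)                  # {y, p_x}
assert np.isclose(w[1, 3], (1 + alpha * theta / 4 * (3 * x**2 + y**2)) * d) # {y, p_y}
assert np.isclose(w[2, 3], 3 * theta * alpha**2 / 16 * r2**2 * d)           # {p_x, p_y}
```

Repeating the check at other sample points, or with other choices of $B_{i}$, proceeds in exactly the same way.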
\section{Discussion and conclusions} We have proposed a model of consistent quantum mechanics with position-dependent noncommutativity. Our construction is based on a first-order Lagrangian, which after quantization reproduces the desired commutation relations between the operators of coordinates and momenta. Note that a first-order Lagrangian for the Duval-Horvathy model \cite{Duval} can also lead to position-dependent Dirac brackets \cite{Lukierski1}, see also \cite{Horvathy}, where the corresponding symplectic structure was obtained by means of introducing an interaction with the magnetic field in the model of a nonrelativistic anyon \cite{Horvathy1}. However, the position dependence in this case is due to the presence of a nonconstant magnetic field $B\left( x\right) $. In our model (\ref{8}) the noncommutativity is caused by other factors, and a magnetic field can enter the theory via the Hamiltonian $H\left( x,p\right) $. Also, the possibility of localizing the noncommutativity within the model \cite{Lukierski} meets some difficulties, since the magnetic field $B\left( x\right) $ should go to infinity outside the area of local noncommutativity. A three-dimensional generalization of the model \cite{Lukierski} was considered in \cite{Chaichian}. It should be mentioned that in a particular case of position-dependent noncommutativity, a model of a point particle on kappa-Minkowski space was derived from a first-order Lagrangian in \cite{Ghosh}. In order to obtain phenomenological consequences of such a type of noncommutativity in space it would be interesting to consider some particular physical problems in the presence of this noncommutativity, for example, the scattering of plane waves off the region of local noncommutativity. For that one needs to take the Hamiltonian of a free particle $\hat{H}=\frac{1}{2}\left( \hat{p}% _{x}^{2}+\hat{p}_{y}^{2}\right) $ and to use perturbation theory in $\theta$. Also, it would be interesting to calculate the uncertainty relations.
\section*{Acknowledgements} We are grateful to Dmitri Vassilevich for fruitful discussions. We also thank Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP) and Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq) for partial support. \section{Appendix} Taking into account the concrete form (\ref{10}) of $\omega_{0}^{\mu\nu}$ one can calculate the explicit form of quantum non-Poisson correction $\omega _{2}^{\mu\nu}$, which are listed below with $\mu<\nu$:% \begin{align*} \omega_{2}^{12} & =\frac{\theta^{3}}{24}\left[ \frac{1}{2}\left( \partial_{2}d\right) ^{2}\partial_{1}^{2}d-\partial_{1}d\partial_{2}% d\partial_{1}\partial_{2}d+\frac{1}{2}\left( \partial_{1}d\right) ^{2}\partial_{2}^{2}d\right. \\ & +\left. d\left( \partial_{1}\partial_{2}d\right) ^{2}-d\partial_{2}% ^{2}d\partial_{1}^{2}d\right] ,\\ \omega_{2}^{ij+2} & =\frac{\theta^{2}}{24}\left[ \frac{1}{2}\left( \partial_{2}d\right) ^{2}\partial_{1}^{2}-\partial_{1}d\partial_{2}% d\partial_{1}\partial_{2}+\frac{1}{2}\left( \partial_{1}d\right) ^{2}\partial_{2}^{2}\right] \\ & \times\left( \delta_{j}^{i}d-\theta\varepsilon^{ik}\partial_{k}% B_{j}d\right) -\frac{\theta^{2}}{24}\varepsilon^{im}d\left( \partial _{j}\partial_{1}d\partial_{m}\partial_{2}d-\partial_{j}\partial_{2}% d\partial_{m}\partial_{1}d\right) \\ & +\frac{\theta^{3}}{24}\varepsilon^{im}\varepsilon^{jk}d\left[ \partial _{n}\partial_{1}d\partial_{m}\partial_{2}\left( \partial_{k}B_{n}d\right) -\partial_{n}\partial_{2}d\partial_{m}\partial_{1}\left( \partial_{k}% B_{n}d\right) \right] ,\\ \omega_{2}^{34} & =\frac{\theta^{3}}{24}\left[ \frac{1}{2}\left( \partial_{2}d\right) ^{2}\partial_{1}^{2}-\partial_{1}d\partial_{2}% d\partial_{1}\partial_{2}+\frac{1}{2}\left( \partial_{1}d\right) ^{2}\partial_{2}^{2}\right] \\ & \times\left( \left( \partial_{2}B_{2}\partial_{1}B_{1}-\partial_{1}% B_{2}\partial_{2}B_{1}\right) d\right) -\\ & \frac{\theta}{24}d\left[ \partial_{n}\partial_{1}\left( \delta_{m}% 
^{1}d-\theta\partial_{2}B_{m}d\right) \partial_{m}\partial_{2}\left( \delta_{n}^{2}d+\theta\partial_{1}B_{n}d\right) \right. \\ & -\left. \partial_{n}\partial_{2}\left( \delta_{m}^{1}d-\theta\partial _{2}B_{m}d\right) \partial_{m}\partial_{1}\left( \delta_{n}^{2}% d-\theta\partial_{1}B_{n}d\right) \right] . \end{align*}
\section{Introduction}\label{intro} \setlength{\epigraphwidth}{2.3in} \epigraph{A little learning is a dangerous thing.}{Alexander Pope\\ \textit{An Essay on Criticism}} This is a paper on the value of information. The setting is a two-player sender-receiver, signaling, or communication, game. There is an unknown state of the world, about which the receiver is uninformed. The receiver is faced with a decision problem but has no direct access to information about the state. Instead, there is an informed sender, who, after learning the state, chooses a (possibly costly) action (message)\footnote{Throughout, in order to distinguish the sender's action from the receiver's action, the sender's action is termed a message. In some settings, like cheap talk games, this moniker is literal. In some settings, like e.g. the classic Spence scenario, in which the sender chooses a level of education, message is less fitting as a label. Hence, the reader should keep in mind that the message is simply the sender's action.}, which the receiver observes. We make a minimal number of assumptions. Each player has a von Neumann-Morgenstern utility function that may depend on the message chosen by the sender, the state of the world, and the action chosen by the receiver. Thus, these games include cheap talk games as in Crawford and Sobel (1982) \cite{cs}, and signaling games as in Spence (1978) \cite{spence} or Cho and Kreps (1986) \cite{cho}. Throughout we assume that the numbers of states, messages, and actions are finite. In this setting, we pose a simple question. Is the receiver's maximal equilibrium payoff convex in the prior? That is, restricting attention to the equilibrium that maximizes the receiver's expected payoff, does \textit{ex ante} learning always benefit the receiver? If not, then are there conditions that guarantee this convexity? We show that the answer to the first question is no: the receiver's maximal equilibrium payoff is not generally convex in the prior.
However, there are broad conditions that guarantee convexity. If the game is \hyperlink{simple}{simple}--the sender's message has only instrumental value to the receiver--then the receiver's payoff is convex in the prior provided either \begin{enumerate} \item There are at most two states; or \item The receiver has at most two actions and \begin{enumerate}[i.] \item The game is cheap talk; or \item There are at most two messages; or \item There are at most three states. \end{enumerate} \end{enumerate} If there are three or more messages, four or more states, and the game is not cheap talk, then even if the game is simple and the receiver has just two actions, the receiver's payoff may fail to be convex in the prior. Moreover, if there are three or more states and the receiver has three or more actions, then the receiver's payoff may fail to be convex in the prior, even if the game is simple and cheap talk with transparent motives (a cheap talk game in which every type of sender has the same preferences over the action chosen by the receiver). Furthermore, if the game is non-simple then the receiver's payoff may fail to be convex in the prior, even if there are just two states and two actions. Why is the receiver's payoff convex in the scenarios described above? Why may the payoff fail to be convex otherwise? There is a crucial trade-off inherent in \textit{ex ante} information acquisition: there is an initial gain in information that, all else equal, benefits the receiver. However, all else may not be equal: the initial learning may result in a belief at which the receiver-optimal equilibrium is quite bad for the receiver. Hence, the two forces may push the receiver's welfare in opposite directions, in which case the magnitude of each effect determines whether learning is beneficial.
The conditions described above guarantee that the first effect dominates--even if the resulting beliefs after learning lead to worse equilibria for the receiver, her welfare loss is guaranteed to be less than the welfare gain from the information acquisition itself. If the conditions do not hold, then the first effect may not dominate. Even though the receiver gains information initially, the resulting equilibria may be so bad that the receiver may strictly prefer not to learn. Thanks to the ubiquity of communication games, there are numerous interpretations of \textit{ex ante} information acquisition. In the Spence (1978) \cite{spence} setting, this paper's question becomes, ``when does any test (prior to the sender's education choice) benefit the hiring firm(s)?'' A seminal paper in finance is Leland and Pyle (1977) \cite{lp}, who explore an entrepreneur signaling through his equity retainment decision. There, ``when does any background information or access to the entrepreneur's history benefit a prospective investor?'' In a political economy setting in which an incumbent signals through his policy choice (see e.g. Angeletos, Hellwig, and Pavan 2006, and Caselli, Cunningham, Morelli, and de Barreda 2014) \cite{pavan, ines}, we ask, ``when does any initial news article benefit a representative member of the populace?'' More applications of \textit{ex ante} information acquisition include reports about the state of the economy, in the case of a central bank signaling through its monetary policy (Melosi 2016) \cite{mel}; product reviews, in the case of a firm signaling through advertising (Nelson 1974, and Milgrom and Roberts 1986) \cite{nelson, mil}, or through its warranty offer (Gal-Or 1989) \cite{gal}; and financial reports or audits, in the case of a firm signaling through dividend provision (Bhattacharyya 1980) \cite{bat}. The remainder of Section \ref{intro} discusses related work, and Section \ref{model} describes the formal model. 
Sections \ref{staypositive} and \ref{negative} contain the main results of the paper, Theorems \ref{main1} and \ref{main2}, which provide sufficient conditions for convexity and show that the receiver's payoff may not be convex should those conditions not hold, respectively. Section \ref{conclusion} concludes. \subsection{Related Work} One way to rephrase this paper's research question is, ``if information is free prior to a communication game, then does it benefit the receiver in expectation to acquire it?" Ramsey (1990) \cite{ram} asks this question in the context of a decision problem and answers in the affirmative, and this result also follows from Blackwell (1951, 1953) \cite{blackwell, blackwell2} among many others. There are a number of papers that investigate the value of information in strategic interactions (games). Neyman (1991) \cite{ney} shows that information can only help a player in a game if other players are unaware that she has it. Kamien, Tauman, and Zamir (1990) \cite{kamien} explore an environment in which an outside agent, ``the Maven'', possesses information relevant to an $n$-player game in which he is not a participant. There they look at the outcomes that the maven can induce in the game and how (and for how much) the maven should sell the information. Bassan, Gossner, Scarsini, and Zamir (2003) \cite{bass} establish necessary and sufficient conditions for the value of information to be socially positive in a class of games with incomplete information. In two-player (simultaneous-move) Bayesian games, Lehrer, Rosenberg, and Shmaya (2013) \cite{sh2} forward a notion of equivalence of information structures as those that induce the same distributions over outcomes. They characterize this equivalence for several solution concepts, including Nash equilibrium. 
In a companion paper, they (Lehrer, Rosenberg, and Shmaya 2010) \cite{sh} look at the same set of solution concepts in (two-player) common interest games and characterize which information structures lead to higher (maximal) equilibrium payoffs. Gossner (2000) \cite{GOSS1} compares information structures through their ability to induce correlated equilibrium distributions, and Gossner (2010) \cite{GOSS2} introduces a relationship between ``ability'' and knowledge: not only does more information imply a broader strategy set, but a converse result holds as well. Ui and Yoshizawa (2015) \cite{ui} explore the value of information in (symmetric) linear-quadratic-Gaussian games and provide necessary and sufficient conditions for (public or private) information to increase welfare. Kloosterman (2015) \cite{close} explores (dynamic) Markov games and provides sufficient conditions for the set of strongly symmetric subgame perfect equilibrium payoffs of a Markov game to decrease in size (for any discount factor) as the informativeness of a public signal about the next period's game increases. Gossner and Mertens (2001) \cite{goos}, Lehrer and Rosenberg (2006) \cite{teacher}, P\k{e}ski (2008) \cite{pes}, and De Meyer, Lehrer, and Rosenberg (2010) \cite{meyer} all study the value of information in zero-sum games. In a sense, this paper explores the decision problem faced by the receiver in which the information she obtains is endogenously generated by equilibrium play by the sender. That is, the receiver's problem is one in which \textit{ex ante} information acquisition results in a (possibly) different information generation process at the resulting posterior belief. Outside of that there are no strategic concerns; and the sender is perfectly informed, so there is no learning on his part. 
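The benchmark behind Ramsey's and Blackwell's positive answer for pure decision problems is easy to illustrate: the value function $V(\mu)=\max_{a}\mathbb{E}_{\mu}u(a,\theta)$ is a maximum of linear functions of the belief, hence convex, so any Bayes-plausible spread of posteriors weakly helps the decision maker. A small numerical sketch with a hypothetical two-state, three-action payoff matrix:

```python
import numpy as np

# Hypothetical payoffs u[a, theta] for a two-state decision problem.
u = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.6, 0.6]])

def V(mu):
    """Value of the decision problem at belief mu = Pr(second state)."""
    return (u @ np.array([1 - mu, mu])).max()

mu0 = 0.5
# An experiment is a distribution over posteriors averaging to the prior.
posteriors = np.array([0.2, 0.8])
weights = np.array([0.5, 0.5])
assert np.isclose(weights @ posteriors, mu0)      # Bayes plausibility

value_with_info = weights @ np.array([V(m) for m in posteriors])
assert value_with_info >= V(mu0)                  # information never hurts
```

In the communication game this logic can break because the posterior also changes which equilibria exist in the continuation game, which is precisely the tension studied in this paper.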
Consequently, this paper is more similar in spirit to the original question asked by Ramsey, and we need not concern ourselves with the possible complexity of information structures for multiplayer games of incomplete information. Furthermore, this paper investigates the value of information in communication games, which are by definition games of information transmission. In contrast to the broad class of games of incomplete information, in communication games the transfer of information between sender and receiver is of paramount importance. The main results of this paper pertain to a restriction of that class of games--\hyperlink{simple}{simple} games--in which the sender's message affects the receiver's payoff only through the information that it contains. The paper closest to this one is its companion paper, Whitmeyer (2019) \cite{Whit}, which investigates how a receiver can design an information structure in order to optimally elicit information from a sender in a communication game. There, in a two-player communication game, the receiver may commit \textit{ex ante} to a signal $\pi: M \to \Delta(X)$, where $X$ is a (compact) set of signal realizations. Instead of observing the sender's message, the receiver observes a signal realization correlated with the message. One of the main results of that paper is that in simple two-action games this commitment power guarantees that the value of information is always positive. In contrast to the negative result of the present paper--that in simple two-action games the value of information is not generally positive--the companion paper thus turns the result on its head: information design guarantees a positive value of information. \section{The Model}\label{model} There are two players: an informed sender, $S$; and a receiver, $R$, who share a common prior about the state of the world, $\mu_{0} \in \Delta(\Theta)$, where $\mu_{0}(\theta) = \Pr(\Theta = \theta)$.
There are two stages to the scenario--first, there is a learning stage. \textbf{Stage 1 (Learning Stage):} There is some finite (or at least compact) set of signal realizations $Y$ and a signal, or Blackwell experiment, $\zeta: \Theta \to \Delta(Y)$, whose realization is public. Call $\zeta$ the \hypertarget{ie}{\textcolor{Plum}{Initial Experiment}}. Each signal realization $y$ begets (via Bayes' law) a posterior distribution, $\mu_{y}$; thus, experiment $\zeta$ leads to a distribution over posterior distributions, $P \in \Delta \Delta \left(\Theta\right)$, whose average is the prior distribution: \[\mathbb{E}_{P}\left[\mu\right] \equiv \int_{\Delta\left(\Theta\right)}\mu dP(\mu) = \mu_{0}\] Each posterior is the prior for the ensuing communication game. That is, following each realization of the experiment, the sender and receiver then take part in a second stage, the communication game. \textbf{Stage 2 (Communication Game):} In this stage, $S$ and $R$ share the common prior $\mu_{y}$. The sender has private information, his type (or the state of the world), $\theta \in \Theta$: he observes his type before choosing a message, $m$, from a set of messages $M$. The receiver observes $m$, but not $\theta$, updates her belief about the sender's type using Bayes' law, then chooses a mixture over actions, $A$. We assume that these sets, $M, A$ and $\Theta$, are finite. Each player, $S$ and $R$, has preferences over the message sent, the action taken, and the type of the sender. These are represented by the utility functions\footnote{Since the domain is finite, any $u_{i}$ is continuous.} $u_{i}$, $i \in \left\{S,R\right\}$: $u_{i}: M \times A \times \Theta \to \Re$. Let us revisit the timing.
First, there is an initial experiment, which begets a distribution over (common) posterior beliefs, each of which serves in turn as the (common) prior belief in the ensuing communication game. Second, $S$ observes his private type $\theta \in \Theta$, and chooses a message $m \in M$ to send to $R$. $R$ observes $m$, updates her belief, and chooses action $a \in A$. We extend the utility functions for the players to behavioral strategies. A behavioral strategy for $S$, $\sigma_{\theta}(m)$, is a probability distribution over $M$; it is the probability that a type $\theta$ sender sends message $m$. Similarly, a behavioral strategy for $R$, $\rho(a | m)$, is a probability distribution over $A$; it is the probability that the receiver chooses action $a$ following message $m$. We focus on receiver-optimal Perfect Bayesian Equilibrium (PBE), which we define in the standard manner. Henceforth, by equilibrium or PBE we refer to those particular equilibria, and by receiver's payoff we mean the receiver's payoff in the receiver-optimal PBE. Throughout, we consider various sub-classes of communication games. These sub-classes are defined as follows. \begin{definition} A communication game is \hypertarget{simple}{\textcolor{Plum}{Simple}} if the receiver has preferences over the action taken and the type of the sender, but not over the message chosen by the sender. Equivalently, a game is simple provided the receiver's preferences are represented by the utility function $u_{R}: A \times \Theta \to \Re$. \end{definition} On occasion, we derive results that hold for two other classes of communication games: cheap talk, and cheap talk with transparent motives. We remind ourselves of their definitions: \begin{definition} A communication game is \hypertarget{ct}{\textcolor{Plum}{Cheap Talk}} if the sender has preferences over the action taken by the receiver and the type of the sender, but not over the message he chooses. Namely, for each type, each message is equally costless.
Equivalently, a game is cheap talk provided the sender's preferences are represented by the utility function $u_{S}: A \times \Theta \to \Re$. \end{definition} A subclass of the class of cheap talk games is that of games with transparent motives, a term introduced in Lipnowski and Ravid (2017) \cite{transparent}: \begin{definition} A communication game is \hypertarget{ctwt}{\textcolor{Plum}{Cheap Talk with Transparent Motives}} if the game is cheap talk and the sender's preferences over the action taken by the receiver are independent of his type. Equivalently, a game is cheap talk with transparent motives provided the sender's preferences are represented by the utility function $u_{S}: A \to \Re$. \end{definition} \section{When the Value of Information is Always Positive}\label{staypositive} This section is devoted to establishing the following theorem, which provides sufficient conditions for the value of information to always be positive in communication games. \begin{theorem}\label{main1} In simple communication games, the value of information is always positive for the receiver provided \begin{enumerate} \item There are two states (or fewer); or \item The receiver has two actions (or fewer) and \begin{enumerate}[i.] \item There are three states (or fewer); or \item There are two messages (or fewer); or \item The game is cheap talk. \end{enumerate} \end{enumerate} \end{theorem} To begin, we show that if there are two states of the world (or two types of sender), the receiver's payoff is convex in the prior. Observe that if there is no initial experiment, and the sender and receiver participate in the signaling game with common prior $\mu_{0}$, then there exists a signal or experiment $\eta: \Theta \to \Delta(M)$ that is induced by the optimal equilibrium. This experiment leads to a distribution over posteriors, where the posterior following message $m$ is $\mu_{m}$. Call this experiment the \hypertarget{ne}{\textcolor{Plum}{Null-Optimal Experiment}}.
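The Bayes-plausibility condition from the learning stage, $\mathbb{E}_{P}\left[\mu\right] = \mu_{0}$, can be checked numerically for any given experiment. The following is a minimal Python sketch; the two-state prior and the binary experiment below are illustrative placeholders, not values from the paper:

```python
from fractions import Fraction as F

# Hypothetical two-state prior mu0 and a binary Blackwell experiment
# zeta: Theta -> Delta(Y), specified via likelihoods zeta[theta][y] = P(y | theta).
mu0 = {"theta1": F(1, 3), "theta2": F(2, 3)}
zeta = {"theta1": {"y1": F(3, 4), "y2": F(1, 4)},
        "theta2": {"y1": F(1, 4), "y2": F(3, 4)}}

# Marginal probability of each signal realization y.
p_y = {y: sum(mu0[t] * zeta[t][y] for t in mu0) for y in ("y1", "y2")}

# Posterior mu_y(theta) = mu0(theta) * P(y | theta) / P(y), via Bayes' law.
mu = {y: {t: mu0[t] * zeta[t][y] / p_y[y] for t in mu0} for y in p_y}

# Bayes plausibility: the P-average of the posteriors equals the prior.
avg = {t: sum(p_y[y] * mu[y][t] for y in p_y) for t in mu0}
assert avg == mu0
print(mu["y1"]["theta1"])  # prints 3/5
```

Exact rational arithmetic (`fractions.Fraction`) makes the martingale identity hold exactly rather than up to floating-point error.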
\begin{lemma}\label{prop22} In any \textit{simple} communication game with two states and $n$ actions, the receiver's payoff is convex in the prior. \end{lemma} \begin{proof} We sketch the proof here and leave the details to Appendix \ref{22proof}. The first step is to establish Claim \ref{steak}, which allows us to restrict the number of messages in the game to two without loss of generality. As a result, there are just three cases that we need to consider: first, where the sender types pool in the receiver-optimal equilibrium at belief $\mu_{0}$; second, where one sender type mixes and the other chooses a pure strategy (in the receiver-optimal equilibrium at belief $\mu_{0}$); and third, where both sender types mix. Note that this lemma holds trivially if there exists a separating equilibrium, so we need not consider that case. Next, following any realization of the initial experiment, $y$, there exists a receiver-optimal equilibrium. Equivalently, there exists a signal or experiment $\gamma_{y}: \Theta \to \Delta(M)$ that is induced by the optimal equilibrium. This experiment leads to a distribution over posteriors, where the posterior following message $m$ is $\mu_{m}$. Call this experiment the \hypertarget{se}{\textcolor{Plum}{y-equilibrium experiment}}. Then, we define $\xi$ as the experiment that corresponds to the information ultimately acquired by the receiver following the initial learning and the resulting equilibrium play in the signaling game. All that remains is to show in each of the three cases that the null-optimal experiment, $\eta$, is less Blackwell informative than $\xi$ and so the receiver prefers $\xi$--the receiver prefers any learning. Note that it is possible to ``prove this result without words''; the proof is depicted in Figure \ref{case1}.
In each case, the red point corresponds to the prior, the blue arrows and points to the initial experiment and posteriors, the yellow arrows and points to the null-optimal experiment, the green arrows to the y-equilibrium experiments, and the purple arrows and points to experiment $\xi$. \begin{figure} \begin{center} \includegraphics[scale=.33]{proofohnewoerter.jpg} \end{center} \caption{\label{case1} Lemma \ref{prop22} Proof}\end{figure} \end{proof} Next, we explore convexity when the receiver has only two actions. First, we establish that it is without loss of generality to restrict attention to equilibria in which no type mixes over messages at which the receiver strictly prefers different actions. \begin{lemma}\label{nodivide} In simple games, there exists a receiver-optimal equilibrium in which no type of sender mixes over messages that induce beliefs at which the receiver strictly prefers different actions. \end{lemma} \begin{proof} The proof is left to Appendix \ref{ndproof}. \end{proof} Second, we discover that if there is a receiver-optimal equilibrium at belief $\mu_{0}$ in which at most two messages are used, then any information benefits the receiver. Formally, \begin{lemma}\label{too} Consider any simple communication game. If there is a receiver-optimal equilibrium at belief $\mu_{0}$ in which at most two messages are used, then any initial experiment benefits the receiver. \end{lemma} \begin{proof} The full proof is left to Appendix \ref{tooproof}. \end{proof} Because of the costless nature of messages in cheap talk games, in conjunction with Lemma \ref{nodivide}, it is clear that there must be a receiver-optimal equilibrium at belief $\mu_{0}$ in which at most two messages are used. Accordingly, Lemma \ref{too} implies \begin{corollary} In any $n$ state, two action, simple cheap talk game, the receiver's payoff is convex in the prior.
\end{corollary} From Lemma \ref{prop22} we know that in two state, two action simple communication games, the value of information is always positive for the receiver. Perhaps surprisingly, the value of information is also always positive for the receiver in three state, two action simple communication games. \textit{Viz.}, \begin{lemma}\label{409} In simple communication games, for three states and two actions, the receiver's payoff is convex in the prior. \end{lemma} \begin{proof} Again, we leave the detailed proof to Appendix \ref{409proof} but provide a sketch here. From Lemma \ref{nodivide}, we conclude that there is a receiver-optimal equilibrium at $\mu_{0}$ in which at most three messages are used. If two messages or fewer are used, then from Lemma \ref{too}, we have convexity. Thus, it remains to consider the case in which three messages are used. Fortunately, we show that there is just one such equilibrium that we need to consider. \begin{figure} \centering \includegraphics[scale=.28]{3state3act.png} \caption{Lemma \ref{409} Proof} \label{3ow} \end{figure} As with Lemma \ref{prop22}, it is possible to prove Lemma \ref{409} without words; the proof is depicted in Figure \ref{3ow}. The red point corresponds to the prior, the blue arrows and points to the initial experiment and posteriors, the yellow arrows and points to the null-optimal experiment, the green arrows to the y-equilibrium experiments (or rather experiments that are payoff-equivalent to the y-equilibrium experiments), and the purple arrows and points to experiment $\xi$. \end{proof} \section{When the Value of Information is not Always Positive}\label{negative} This section tempers the optimism inspired by Section \ref{staypositive}.
Namely, we establish Theorem \ref{main2}, which states that if none of the sufficient conditions from Theorem \ref{main1} hold in some communication game, then there may be information that hurts the receiver. \begin{theorem}\label{main2} In the following communication games, \textit{ex ante} information may hurt the receiver: \begin{enumerate} \item Simple games with four or more states, three or more messages, and two actions; \item Simple games with three or more states and actions, and two or more messages; \item Non-simple games with two or more states, actions, and messages. \end{enumerate} \end{theorem} We begin by proving Lemma \ref{410}, the first result listed in Theorem \ref{main2}. \begin{figure} \begin{center} \includegraphics[scale=.7]{fourcountfin.png} \end{center} \caption{\label{4c} Lemma \ref{410} Game}\end{figure} \begin{lemma}\label{410} In simple communication games, for four or more states, three or more messages, and two actions, the receiver's payoff is not generally convex in the prior. \end{lemma} \begin{proof} Proof is via counterexample. There are four states, $\Theta = \left\{\theta_{1}, \theta_{2}, \theta_{3}, \theta_{4}\right\}$, and a belief is a quadruple $(\mu_{1}, \mu_{2}, \mu_{3},\mu_{4})$, where $\mu_{i} \coloneqq \Pr\left(\Theta=\theta_{i}\right)$ for all $i = 1, 2, 3, 4$ and $\mu_{1} + \mu_{2} + \mu_{3} + \mu_{4} = 1$. The belief can be fully described with just three variables; hence, depicting the receiver's payoff as a function of the belief requires four dimensions. This is (rather) difficult to do, so instead we will restrict attention to a family of experiments that involve learning on just one dimension. That is, we fix $\mu_{1} = 1/3$ and $\mu_{3} = 1/8$, and consider only the receiver's payoff as a function of her (prior) belief about states $\theta_{2}$ and $\theta_{4}$.
Learning is on just one dimension, and so (abusing notation) we rewrite the receiver's belief $\mu_{2}$ as $\mu$ and $\mu_{4}$ as $13/24 - \mu$, where $\mu \in [0,13/24]$. In states $\theta_{1}$ and $\theta_{2}$, action $a_{2}$ is the correct action for the receiver; and in states $\theta_{3}$ and $\theta_{4}$, action $a_{1}$ is correct: \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline Action & $\theta_{1}$ & $\theta_{2}$ & $\theta_{3}$ & $\theta_{4}$ \\ $a_{1}$ & $0$ & $0$ & $1$ & $2$ \\ $a_{2}$ & $1$ & $1$ & $0$ & $0$ \\ \hline \end{tabular} \end{center} \medskip Likewise, the sender's state (type)-dependent payoffs from message, action pairs are given as follows: \medskip \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline type & $\theta_{1}$ & $\theta_{2}$ & $\theta_{3}$ & $\theta_{4}$ \\ \hline message & $m_{1}$ \quad $m_{2}$ \quad $m_{3}$ & $m_{1}$ \quad $m_{2}$ \quad $m_{3}$ & $m_{1}$ \quad $m_{2}$ \quad $m_{3}$ & $m_{1}$ \quad $m_{2}$ \quad $m_{3}$\\ \hline $a_{1}$ & $1$ \quad $0$ \quad $0$ & $0$ \quad $1$ \quad $0$ & $0$ \quad $1$ \quad $3$ & $-1$ \quad $2$ \quad $-2$\\ \hline $a_{2}$ & $1$ \quad $0$ \quad $0$ & $0$ \quad $1$ \quad $0$ & $2$ \quad $4$ \quad $0$ & $5/4$ \quad $0$ \quad $-1$\\ \hline \end{tabular} \end{center} \medskip Note that types $\theta_{1}$ and $\theta_{2}$ have messages that are \textit{strictly dominant} ($m_{1}$ and $m_{2}$, respectively), and that $\theta_{4}$ has a message that is \textit{strictly dominated} ($m_{3}$). \begin{figure} \centering \includegraphics[scale=.7]{4dea.png} \caption{Receiver Payoffs (Lemma \ref{410} Proof)} \label{4dconvex} \end{figure} Figure \ref{4dconvex} depicts the receiver's equilibrium payoff as a function of $\mu$, $V^{T}$.\footnote{The super-script $T$ refers to ``transparency.'' This notation is due to the fact that in Whitmeyer (2019) \cite{Whit} we explore the value of information in the case when the receiver can choose the information structure in the ensuing game. 
There, we contrast the value of information in that ``optimal transparency'' setting to the setting with full transparency, the focus of this paper.} Explicitly, that function is \[V^{T}(\mu) = \begin{cases} \frac{37}{24}- 2\mu & \mu \leq \frac{13}{36}\\ \frac{1}{3} + \mu & \frac{13}{36} < \mu \leq \frac{13}{24} \end{cases}\] and its derivation is left to Appendix \ref{410proof}. The receiver's payoff is no longer convex in the belief--in fact, it is not even lower semicontinuous. If $\mu > 13/36$--the receiver becomes too sure that the sender is not type $\theta_{3}$ or $\theta_{4}$--the only equilibria beget the pooling payoff, corresponding to no (or at least no useful) information transmission. The dotted line is a secant line that corresponds to a binary initial experiment that strictly hurts the receiver. \end{proof} With three or more states and actions, information may harm the receiver: \begin{lemma}\label{3by3} If there are at least three states, three actions, and two messages, then the receiver's payoff is not generally convex in the prior. \end{lemma} \begin{proof} Proof is via counterexample. Consider the game depicted in Figure \ref{convexfig}. \begin{figure} \begin{center} \includegraphics[scale=.7]{threecountfin.png} \end{center} \caption{\label{convexfig} Lemma \ref{3by3} Game}\end{figure} There are three types, $\theta_{L}$, $\theta_{M}$, and $\theta_{H}$. Write a belief as a triple $(\mu_{L}, \mu_{M},\mu_{H})$. Note that the game is cheap talk with transparent motives: each type gets utility $1$ if the receiver chooses $l$ or $s$, and $0$ if the receiver chooses $x$.
The receiver's preferences are given as follows: \begin{center} \begin{tabular}{ |c|c|c|c| } \hline Action & $\theta_{L}$ & $\theta_{M}$ & $\theta_{H}$ \\ $l$ & $0$ & $1$ & $2$ \\ $s$ & $13/24$ & $13/24$ & $1$ \\ $x$ & $1$ & $0$ & $1$ \\ \hline \end{tabular} \end{center} Consider the following three beliefs \[\begin{split} \mu_{0} &\coloneqq \left(\frac{1}{4}, \frac{1}{4}, \frac{1}{2}\right), \qquad \mu_{1} \coloneqq \left(\frac{1}{12},\frac{1}{4}, \frac{2}{3}\right), \quad \text{and} \quad \mu_{2} \coloneqq \left(\frac{5}{12},\frac{1}{4}, \frac{1}{3}\right) \end{split}\] and note that $\mu_{0}$ is a convex combination of $\mu_{1}$ and $\mu_{2}$, each with weight $1/2$. That is, for some prior $\mu_{0}$, $\mu_{1}$ and $\mu_{2}$ are the realizations of a binary initial experiment, $\zeta$. We depict the three prior distributions in the $(x,y)$-coordinate plane, where the $x$-axis corresponds to $\mu_{H}$, and the $y$-axis corresponds to $\mu_{M}$. There exist three convex regions of beliefs, $l$, $s$, and $x$, in which actions $l$, $s$, and $x$, respectively, are optimal. These regions and the three beliefs, $\mu_{0}$, $\mu_{1}$, and $\mu_{2}$, are illustrated in Figure \ref{convex1}. \begin{figure} \begin{center} \includegraphics[scale=.7]{bettercountv5.png} \end{center} \caption{\label{convex1} Lemma \ref{3by3} Action Regions}\end{figure} After some effort (relegated to Appendix \ref{3by3proof}), we conclude that at beliefs $\mu_{0}$ and $\mu_{1}$ the receiver-optimal equilibrium is one in which $\theta_{H}$ and $\theta_{L}$ choose different messages, say $g$ and $b$, respectively; and $\theta_{M}$ mixes between those messages ($g$ and $b$). The receiver's payoffs at these beliefs are $67/52$ and $83/52$, respectively. In contrast, at belief $\mu_{2}$, such an equilibrium does not exist. 
Instead, the receiver-optimal equilibrium sees $\theta_{H}$ and $\theta_{M}$ choose different messages, say $g$ and $m$, respectively; and $\theta_{L}$ mix between those messages ($g$ and $m$). The receiver's payoff is $127/132$. The posteriors corresponding to the y-equilibrium experiments for $\mu_{1}$ and $\mu_{2}$ and the null-optimal experiment are depicted in Figure \ref{convex2}, where each $x$ denotes a posterior distribution. \begin{figure} \begin{center} \includegraphics[scale=.7]{threexpv5.png} \end{center} \caption{\label{convex2} Lemma \ref{3by3} Optimal Posteriors}\end{figure} Finally, we can directly calculate and compare the receiver's expected payoff from this information acquisition to her payoff without obtaining the information: \[\frac{67}{52} > \frac{2195}{1716} = \frac{1}{2}\cdot \frac{83}{52} + \frac{1}{2}\cdot \frac{127}{132}\] whence we conclude that the receiver's optimal equilibrium payoff is not convex in the prior. Note that this game is a cheap talk game with transparent motives--even these restrictions are not enough to guarantee convexity. \end{proof} An analog to Figure \ref{case1} is depicted in Figure \ref{big}. There, the red point corresponds to the prior $\mu_{0}$, the blue arrows and points to the initial experiment and posteriors, the yellow arrows and points to the null-optimal experiment, the green arrows to the y-equilibrium experiments, and the purple arrows and points to experiment $\xi$. \begin{figure} \begin{center} \includegraphics[scale=.7]{threebeliefs.png} \end{center} \caption{\label{big} Lemma \ref{3by3} Induced Experiments}\end{figure} What goes wrong when there are three or more states and actions? Recall the two state case. In such a setting, because there are only two states, the actors' beliefs are one dimensional. As a result, any additional information can only shift the belief to the left or right on the one-dimensional simplex of beliefs. 
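Before turning to the intuition, note that the belief decomposition and the payoff comparison in the proof of Lemma \ref{3by3} can be verified with exact rational arithmetic. A Python sketch, using the values reported above:

```python
from fractions import Fraction as F

# Beliefs over (theta_L, theta_M, theta_H) from the proof of Lemma 3by3.
mu0 = (F(1, 4), F(1, 4), F(1, 2))
mu1 = (F(1, 12), F(1, 4), F(2, 3))
mu2 = (F(5, 12), F(1, 4), F(1, 3))

# mu0 is the 1/2-1/2 convex combination of mu1 and mu2, so {mu1, mu2}
# arise as the posteriors of a binary initial experiment at prior mu0.
assert all(m0 == (a + b) / 2 for m0, a, b in zip(mu0, mu1, mu2))

# Receiver-optimal equilibrium payoffs at mu0, mu1, mu2 (from the appendix).
v0, v1, v2 = F(67, 52), F(83, 52), F(127, 132)

# The receiver's expected payoff after the initial learning is strictly lower.
learned = v1 / 2 + v2 / 2
assert learned == F(2195, 1716)
assert v0 > learned  # here, information hurts the receiver
```

The check confirms the inequality $67/52 > 2195/1716$ exactly, with no rounding.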
Moreover, because of the lack of diversity of sender types, the set of possible equilibrium vectors of strategies is quite small (qualitatively)--they either pool, separate, both mix, or only one mixes. Consequently, a change in the prior cannot have too great an effect on the resulting equilibrium distribution of posteriors: as long as the change is in the correct direction (and remember, there are only two possible directions) and/or is sufficiently small, the receiver-optimal equilibrium from the original prior remains feasible, and hence the same vector of posteriors can be generated at equilibrium (albeit with different probabilities). Furthermore, any change in the prior that eliminates the receiver-optimal equilibrium from the original prior must be large, so large that the resulting belief is more extreme than any posterior generated by the original equilibrium. Thus, the beliefs that correspond to $\xi$ must be the same as, or more extreme than, the beliefs from $\eta$, and hence learning must be beneficial. Put another way, there is a trade-off to initial learning--the gain in information from the initial experiment versus the (possibly decreased) gain in information from the receiver-optimal equilibria at the new priors. With two states, the first effect dominates, and makes up for the fact that the gain in information from the equilibria may be diminished. With more than two states, this is no longer true. As in the two state case, initial learning can result in priors for the communication game for which the receiver-optimal equilibrium under the original prior is no longer feasible. However, because the belief space is now multi-dimensional, the beliefs generated by the receiver-optimal equilibrium at these new priors, while more extreme in \textit{some} direction, do not correspond in general to a more informative experiment.
Hence, $\xi$ and $\eta$ may not be Blackwell comparable, in which case comparisons of receiver welfare between the no learning and learning scenarios must rely on the specific details of the initial experiment and payoffs of the game. As in the two state case, there is the same trade-off to initial learning, but now the initial gain may not dominate. In the counterexample constructed in Lemma \ref{3by3}, the proposed initial experiment is harmful, since it involves too much learning about whether the state is $\theta_{L}$. In particular, for belief $\mu_{2}$, the receiver is too confident that the state is $\theta_{L}$, which precludes the existence of an equilibrium in which the receiver can distinguish between the high type and the low type. Instead, the receiver-optimal equilibrium is one in which she can distinguish between the high type and the medium type, which is much less helpful for the receiver. As discussed in Whitmeyer (2019) \cite{Whit}, the value of information may not be positive in this game even when the receiver can choose the optimal information structure in the signaling game. As we discover in Whitmeyer (2019), the equilibria described above at beliefs $\mu_{0}$, $\mu_{1}$, $\mu_{2}$ yield the maximum payoffs to the receiver of any equilibrium under any information structure. Finally, if there are only two states and actions, the receiver's payoff may fail to be convex in the prior if the game is not simple. To wit, \begin{lemma}\label{ls} If the communication game is not simple, then the receiver's payoff is not generally convex in the prior. \end{lemma} \begin{proof} Proof is via counterexample. Consider the modified Beer-Quiche game (\textit{cf.} Cho and Kreps (1987) \cite{cho}) depicted in Figure \ref{nonsimpcount}, in which the receiver now obtains an additional payoff of $1$ if the sender chooses Quiche and the receiver chooses the ``correct'' action (i.e. $F$ if the sender is $\theta_{W}$ and $NF$ if the sender is $\theta_{S}$).
\begin{figure} \centering \begin{tikzpicture}[scale=1.4,font=\footnotesize] \tikzset{ solid node/.style={circle,draw,inner sep=1.5,fill=black}, hollow node/.style={circle,draw,inner sep=1.5}} \tikzstyle{level 1}=[level distance=12mm,sibling distance=25mm] \tikzstyle{level 2}=[level distance=15mm,sibling distance=15mm] \tikzstyle{level 3}=[level distance=17mm,sibling distance=10mm] \node(0)[hollow node]{} child[grow=up]{node[solid node,label=above:{$\theta_{W}$}] {} child[grow=left]{node(1)[solid node]{} child{node[solid node,label=left:{$(0, 1)$}]{} edge from parent node [above]{$F$}} child{node[solid node,label=left:{$(2, 0)$}]{} edge from parent node [below]{$NF$}} edge from parent node [above]{$B$}} child[grow=right]{node(3)[solid node]{} child{node[solid node,label=right:{$(3, 0)$}]{} edge from parent node [below]{$NF$}} child{node[solid node,label=right:{$(1, 2)$}]{} edge from parent node [above]{$F$}} edge from parent node [above]{$Q$}} edge from parent node [right]{$1-\mu$}} child[grow=down]{node[solid node,label=below:{$\theta_{S}$}] {} child[grow=left]{node(2)[solid node]{} child{node[solid node,label=left:{$(1, 0)$}]{} edge from parent node [above]{$F$}} child{node[solid node,label=left:{$(3, 1)$}]{} edge from parent node [below]{$NF$}} edge from parent node [above]{$B$}} child[grow=right]{node(4)[solid node]{} child{node[solid node,label=right:{$(2, 2)$}]{} edge from parent node [below]{$NF$}} child{node[solid node,label=right:{$(0, 0)$}]{} edge from parent node [above]{$F$}} edge from parent node [above]{$Q$}} edge from parent node [right]{$\mu$}}; \draw[dashed,rounded corners=10]($(1) + (-.45,.45)$)rectangle($(2) +(.45,-.45)$); \draw[dashed,rounded corners=10]($(3) + (-.45,.45)$)rectangle($(4) +(.45,-.45)$); \node at ($(1)!.5!(2)$) {$R$}; \node at ($(3)!.5!(4)$) {$R$}; \end{tikzpicture} \caption{Lemma \ref{ls} Game} \label{nonsimpcount} \end{figure} If $\mu \geq 1/2$, the receiver optimal equilibrium is one in which the sender types pool on $Q$. 
The receiver's best response is $NF$ and her expected payoff is $2\mu$. If $\mu < 1/2$, the receiver-optimal equilibrium is one in which $\theta_{W}$ mixes, choosing $B$ with probability $\sigma = \mu/(1-\mu)$, and $\theta_{S}$ chooses $B$. The receiver's expected payoff for $\mu < 1/2$ is thus $\Pr(B)/2 + 2\Pr(Q) = 2 - 3\mu$. Hence, \[V^{T}(\mu) = \begin{cases} 2-3\mu, & \mu < \frac{1}{2}\\ 2 \mu, & \mu \geq \frac{1}{2} \end{cases}\] Figure \ref{nonsimp} depicts $V^{T}$. The receiver's payoff is neither convex nor lower semicontinuous. The dotted line in the figure is a secant line that corresponds to a binary initial experiment that strictly hurts the receiver. \begin{figure} \begin{center} \includegraphics[scale=.7]{nonsimpv3.png} \end{center} \caption{\label{nonsimp} Receiver Payoffs (Lemma \ref{ls} Proof)}\end{figure} \end{proof} \section{Conclusion}\label{conclusion} This paper comprehensively answers the question of when information always benefits the receiver in two-player communication games. As Theorem \ref{main2} illustrates, Theorem \ref{main1} is as strong as possible--should none of its conditions hold, the value of information may be strictly negative. Naturally, there is room for more work on related questions. What can we say, for instance, about the value of information for the sender? Answering such a question would pose a challenge since the proof techniques used in this paper would no longer work. Here we are able to bypass the details of the sender's incentives and work with distributions of beliefs. This allows us to tackle the problem as a decision problem for the receiver, to which we apply Blackwell's theorem. Such an approach would not work when exploring sender welfare because he is not a decision maker.
1612.04176
\section{Introduction} Intentionally transferring energy along with information, using radio frequency (RF) signals, is an attractive means of perpetually and remotely powering energy harvesting sensors that have limited physical accessibility. It is foreseeable that in next-generation wireless systems, a picocell or femtocell base station will be enabled to wirelessly charge low-power communication devices within its range. These base stations themselves could be energy harvesting \textit{green} base stations. Apart from the numerous system design challenges the problem offers, it also opens up a rich set of theoretically motivated research avenues. On this premise, we address the problem of characterizing the fundamental limits of jointly broadcasting data and power over a wireless medium with an energy harvesting transmitter and receivers. \par In this work, we consider the problem of Simultaneous Wireless Information and Power Transfer (SWIPT) over a fading Gaussian broadcast channel (GBC) with an energy harvesting transmitter, ensuring a certain quality of service (QoS) guarantee to the receivers. The QoS parameter we refer to is a \textit{minimum-rate} constraint. For the canonical fading GBC (non energy harvesting), the problem of characterizing the fundamental limits with minimum rate constraints as a means to ensure \textit{fairness} among receivers is a well studied topic (\cite{jindal2003capacity}). In the context of SWIPT, the above constraint has the added advantage that the transmission ensures a \textit{minimum instantaneous RF power} at the receivers at all times (which can potentially be harvested). \par We provide an overview of the related literature so as to elucidate our contributions in its context. The idea of transmitting power using a data symbol that also encapsulates information goes back to \cite{varshney2008transporting}.
In \cite{grover2010shannon}, the optimal trade-off between the achievable rate and the power transferred across a noisy coupled-inductor circuit is discussed. Capacity-energy regions of a discrete memoryless multiple access channel and a multi-hop channel with a single relay are characterized in \cite{fouladgar2012transfer}. Achievable rates over an \textit{uplink} channel wherein the transmitters are powered via RF signals in the \textit{downlink} are provided in \cite{hadzi2014multiple}. In \cite{amor2015feedback}, the authors report a result on feedback enhancing the \textit{rate-energy region} over a \textit{constant gain} multiple access channel with simultaneous transmission of information and power. \par In the context of MIMO systems, \cite{zhang2013mimo} studies SWIPT by a transmitter to two receivers (either \textit{spatially separated} or \textit{co-located}) in which one receiver harvests energy and the other receiver decodes information. A joint transmit beamforming and receive power-splitting design for a downlink SWIPT system is studied in \cite{shi2014joint}. A joint information and power transfer scheme that encodes information in the receive antenna index and power transfer intensity is pursued in \cite{zhang2015energy}. As for the receiver design for SWIPT systems, two practical receiver architectures proposed in the literature are time-switching receivers and power-splitting receivers \cite{zhang2013mimo}. The power splitting can happen non-adaptively or adaptively (\cite{liu2013wireless}, \cite{zhou2013wireless}). For a comprehensive survey of recent advances in the domain of RF energy harvesting networks, see \cite{lu2015wireless}. A survey on SWIPT communication systems can be found in \cite{krikidis2014simultaneous}. \par Existing works, to the best of our knowledge, do not consider the information-theoretic characterization of fundamental limits of SWIPT systems \textit{with an energy harvesting transmitter}.
In the fading GBC setting that we consider, we assume both the transmitter and the receivers can harvest from a perennial ambient source. The receivers treat the transmitter as an RF energy source to meet additional energy requirements, if any. Another novel aspect in the model we propose is the inclusion of minimum-rate constraints in characterizing the fundamental limits of SWIPT systems. \par This paper is organized as follows. In Section \ref{S_Prel}, we present the system model and notation. Section \ref{S_Main} is devoted to explaining the main results of this work. We derive the minimum-rate capacity region of the SWIPT system under consideration, with ideal, time-switching and power-splitting receivers. Numerical results are provided in Section \ref{S_NR}. We conclude in Section \ref{S_Conc}. Proofs are sketched in the Appendices. \section{System Model and Notation} \label{S_Prel} \begin{figure}[h] \begin{center} \includegraphics[scale=0.44]{gbc_fading_EH_block-eps-converted-to.pdf} \caption{A fading GBC with SWIPT.} \label{fig1} \end{center} \end{figure} \subsection{Transmission with QoS Constraints} Consider an energy harvesting transmitter equipped with an energy buffer (we use buffer and battery synonymously) of infinite capacity. The transmitter could well be a \textit{green} base station harnessing renewable energy, such as solar or wind energy. Alternatively, in the context of energy harvesting sensor networks, the transmitter could represent a \textit{fusion centre} to which multiple sensor nodes report. \par We consider a time-slotted system. In slot $k$, let $Y_k^s$ ($s$ indicates sender) denote the energy harvested by the transmitter from a renewable source. We assume the energy harvesting process $\{Y_k^s,~k\geq 1\}$ is stationary and ergodic. Let $\mathbb{E}[Y^s]$ denote the mean value of the energy harvesting process. Let $E_k^s$ denote the energy available in the transmitter's buffer at the beginning of slot $k$.
In the model we consider, the harvested energy $Y_k^s$ can be used in the same slot and the remainder, if any, is stored in the buffer for future use. Let $T_k^s$ denote the energy used up by the transmitter in slot $k$. Thus, $T_k^s \leq \hat{E}_k^s \triangleq E_k^s+Y_k^s$. The energy in the buffer evolves according to $E_{k+1}^s=\hat{E}_{k}^s-T_k^s$. \par The transmitter has $L$ messages to send, denoted by the message vector $\textbf{M} \triangleq (M_1,\hdots,M_L)$, to $L$ distant receivers, where $M_l$ is the message corresponding to receiver $l\in[1:L] \triangleq \{1,2,\hdots,L\}$. Simultaneously, the transmitter is powering each of the receivers. The receivers, in practice, could either be user mobile devices or low power sensor motes. Corresponding to the message vector $\mathbf{M}$, a codeword of length $n$, $\big(X_1'(\mathbf{M}),\hdots,X_n'(\mathbf{M})\big)$, is chosen. Since the transmitter is energy harvesting, the transmitted symbol in slot $k$ could differ from the codeword symbol because of insufficient energy at the transmitter. The channel input symbol in slot $k$ is denoted as $X_k$. We note that the total energy used for transmission in slot $k$ is $T_k^s=X_k^2=\sum\limits_{l=1}^LT_k^s(l)$, where $T_k^s(l)$ is the energy allocated for receiver $l$ in slot $k$. \par On account of the time varying nature of the underlying wireless channel, some users may be cut off from the transmitter for a certain duration of time depending upon the channel conditions. This is because the power allocation strategy which ensures the optimal \textit{long term} data rates will allocate zero transmission power in certain time slots to those users with low channel gains \cite{li2001capacity}. At the same time, it is not desirable to transmit at a target rate irrespective of the channel gains (essentially by a multi-user variant of \textit{channel inversion}), as this reduces the permissible data rates \cite{li2001capacity2}.
An alternative to the above approaches is to transmit at a certain \textit{minimum instantaneous rate} irrespective of the channel conditions (thereby ensuring a certain fairness among receivers) and use the additional power to maximize the long term achievable data rates \cite{jindal2003capacity}. Accordingly, let $\rho(l)$ be the minimum rate of transmission to be ensured to receiver $l$, irrespective of the channel conditions. The model parameter $\bm{\rho}\triangleq \big(\rho(1),\hdots,\rho(L)\big)$ dictates the quality of service guarantee on the joint data and power broadcast. \subsection{The Channel Model} \par The channel from the transmitter to the $l^{\text{th}}$ receiver is a fading channel corrupted by an independent and identically distributed (i.i.d.) additive Gaussian noise process $\{N_k(l),~k\geq 1\}$ at the receiver. We denote the probability density function of $N_k(l)$ (having mean $0$ and variance $\sigma_l^2$) by $\mathcal{N}(0,\sigma_l^2)$. The multiplicative channel gain from the transmitter to the $l^{\text{th}}$ receiver in slot $k$ is denoted as $H_k(l)$. We assume that the fading process $\{\mathbf{H}_k,~k\geq 1\}$ is jointly stationary and ergodic, where $\mathbf{H}_k \triangleq \big(H_k(1),\hdots,H_k(L)\big) \in \mathcal{H}^L$ with stationary distribution $F_{\mathbf{H}}$. Here, $\mathcal{H}\subset \mathbb{R}^+$, the positive real axis, and $\mathcal{H}^L$ is the Cartesian product $\mathcal{H}\times\hdots\times\mathcal{H}$ ($L$ times). In addition, we assume that the channel gains $H_k(l)$ and $H_k(j)$ are statistically independent for $l \neq j$ and are known to all the receivers and the transmitter at time $k$. We consider a block fading channel model wherein the channel gain from the transmitter to receiver $l$ remains fixed for the duration of a \textit{channel coherence time} $T_c(l)$. The codeword length $n$ is assumed to be an integer multiple of the least common multiple of $\{T_c(l),~l \in [1:L]\}$.
If $W_k(l)$ is the channel output at receiver $l$ in slot $k$, then $W_k(l)=\sqrt{H_k(l)}X_k+N_k(l)$. \par Note that ensuring a certain minimum non-zero transmission power to all users in all time slots (dictated by $\bm{\rho}$) potentially requires infinite power if the fading process, with positive probability, can take values arbitrarily close to zero. For the same reason, the zero outage capacity region of a Rayleigh fading GBC is \textit{null} (\cite{li2001capacity2}). To encompass those transmission schemes that require a finite average power and ensure a non-zero minimum rate, we make the assumption that $\mathbb{E}[\frac{1}{H_k(l)}]<\infty$ for all $l$ and for all $k\geq 1$. \subsection{Receiver} \par The receivers for SWIPT serve a dual purpose. There is a communication receiver to receive and decode the incoming data, and a rectenna module to harvest the RF energy. In slot $k$, receiver $l$ harvests $Y_k^r(l)$ ($r$ denotes receiver) from a surrounding perennial source. We assume that $\{Y_k^r(l),~k\geq 1\}$ is a stationary, ergodic process for each $l$ and is independent across receivers. Let $\mathbb{E}[Y^r(l)]$ denote its mean value. The receivers have an energy buffer of infinite capacity. Let $E_k^r(l)$ denote the energy in the $l^{\text{th}}$ receiver's buffer at the beginning of slot $k$. Let $\hat{E}_k^r(l)\triangleq E_k^r(l)+Y_k^r(l)$. There are various sources of energy consumption at the receivers. The front end of the communication receiver requires energy for filtering and other processing operations. This energy requirement, at receiver $l$, is modelled by a stationary, ergodic process $\{T_k^r(l),~k \geq 1\}$. We refer to $\Delta_l \triangleq (\mathbb{E}[T^r(l)]-\mathbb{E}[Y^r(l)])^+$ as the average energy deficit at receiver $l$, where $(x)^+\triangleq\max\{0,x\}$. Receiver $l$ harvests, on average, $\Delta_l$ units of RF energy so as to compensate for the deficit.
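As a concrete illustration of the definitions above, the following minimal Python sketch implements one use of the fading channel and the average energy deficit $\Delta_l$. All numerical values are hypothetical and chosen only for illustration.

```python
import math
import random

def channel_output(h, x, sigma_l2, rng):
    """One use of the fading channel to receiver l: W = sqrt(H)*X + N."""
    return math.sqrt(h) * x + rng.gauss(0.0, math.sqrt(sigma_l2))

def energy_deficit(mean_Tr, mean_Yr):
    """Average energy deficit: Delta_l = (E[T^r(l)] - E[Y^r(l)])^+ ."""
    return max(0.0, mean_Tr - mean_Yr)

rng = random.Random(0)
w = channel_output(h=0.8, x=1.0, sigma_l2=0.8, rng=rng)  # one received sample

# Hypothetical means: E[T^r(l)] = 90 uW, E[Y^r(l)] = 30 uW -> Delta_l = 60 uW
print(energy_deficit(90e-6, 30e-6))
```

The deficit is the average amount of RF energy the receiver must draw from the transmitter's signal to keep its own energy buffer stable.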
\par We now provide a brief description of the various receiver architectures considered in this work. An ideal receiver can harvest the incoming RF energy without \textit{distorting} the noise corrupted data symbol. Thus, the total energy harvested $D_k^r(l)$ at receiver $l$ in slot $k$ is $Y_k^r(l)+\xi_k(l)$, where $\xi_k(l)=\eta H_k(l)X_k^2$. Here, $\eta$ denotes the efficiency factor of the energy harvesting system (\cite{lu2015wireless}). The fundamental limits obtained in \cite{varshney2008transporting}, \cite{fouladgar2012transfer} are achievable only using ideal SWIPT receivers. In contrast, a time-switching receiver harvests RF energy in a slot at the expense of erasing the corresponding noise corrupted data symbol. Let $\mathcal{I}_{l,k}$ denote the indicator function of the event that RF energy is harvested by receiver $l$ in slot $k$. Then, $D_k^r(l)=Y_k^r(l)+\mathcal{I}_{l,k}\xi_k(l)$. A power-splitting receiver \textit{divides} the incoming RF power between the communication module and the rectenna, non-adaptively. We refer to it as the constant fraction power-splitting receiver. If $0 \leq \pi_{\mathcal{E}} \leq 1$ is the fraction of power harvested in every slot, $D_k^r(l)=Y_k^r(l)+\pi_{\mathcal{E}}\xi_k(l)$. \par In general, among the $L$ receivers, some receivers could be ideal and some others could be time-switching or power-splitting. For ease of exposition, however, we derive results assuming that all the receivers are of the same kind. Our proof techniques readily yield the corresponding results for the general case as well. \par In this work, we derive the fundamental limits in the framework propounded in \cite{rajesh2014capacity}. Specifically, the channel input and output processes need not be stationary, since at time $k=0$, the transmitter and the receivers start operating with \textit{arbitrary} initial energy in their buffers.
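The three harvesting models above differ only in how much of the incident RF energy $\xi_k(l)=\eta H_k(l)X_k^2$ reaches the rectenna. A small Python sketch of the per-slot harvested energy $D_k^r(l)$ for each architecture, with hypothetical numerical values:

```python
def incident_rf_energy(h, x, eta=1e-4):
    """xi_k(l) = eta * H_k(l) * X_k^2, the usable RF energy at receiver l."""
    return eta * h * x ** 2

def harvested_energy(arch, Y_r, xi, harvesting=False, pi_E=0.5):
    """Per-slot harvested energy D_k^r(l) for the three receiver models.

    arch: 'ideal', 'ts' (time-switching) or 'ps' (power-splitting);
    harvesting is the TS indicator I_{l,k}; pi_E is the PS split fraction."""
    if arch == 'ideal':
        return Y_r + xi                            # data symbol undistorted
    if arch == 'ts':
        return Y_r + (xi if harvesting else 0.0)   # symbol erased when harvesting
    if arch == 'ps':
        return Y_r + pi_E * xi                     # constant-fraction split
    raise ValueError(f"unknown architecture: {arch}")

xi = incident_rf_energy(h=0.8, x=2.0)              # = 1e-4 * 0.8 * 4 = 3.2e-4
print(harvested_energy('ideal', Y_r=1e-5, xi=xi))  # largest of the three
```

For the same channel realization, the ideal receiver harvests at least as much as the other two while losing no information, which is why it yields the outer-most capacity region.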
We consider \textit{power control policies} (to combat fading) at the transmitter such that the stochastic process $\{T_k^s,~k \geq 1\}$ is an asymptotically mean stationary (AMS), ergodic process \cite{gray2011entropy}. We prove that, for the SWIPT system in place, the \textit{AMS capacity region} is equivalent to the \textit{Shannon capacity region} of a non energy harvesting system with the same average power constraints. \section{Minimum-Rate Capacity Region with Various Receiver Architectures} \label{S_Main} In this section, we derive the minimum-rate capacity regions of the SWIPT system for the three receiver models. We begin with the following definitions. An energy management policy $T_{k}^s$ is called a \textit{Markovian policy} if it is exclusively a function of the variables $\hat{E}_k^s$ and $\mathbf{H}_k$. In this work, we only consider policies that are Markovian. We refer to such policies as Markovian because, if the processes $\{Y_k^s\}$, $\{\textbf{H}_k\}$, $\{Y_k^r(l),l\in[1:L]\}$, $\{T_k^r(l),l\in[1:L]\}$ are i.i.d., adopting Markovian policies makes the joint process $\{\big(Y_k^s,E_k^s,X_k^s,W_k(1),\hdots,W_k(L)\big)\}$ a Markov process. We prove that such policies are \textit{optimal} among the class of AMS, ergodic policies. A rate tuple $\mathbf{R}=\big(R(1),\hdots, R(L) \big)$ is \textit{achievable} if there exist a sequence of $\big((2^{nR(1)},\hdots, 2^{nR(L)}),n\big)$ codes, an encoding function, a power controller, $L$ decoders and energy harvesters such that, for each joint fading state $\mathbf{h}=\big(h(1),\hdots, h(L)\big)$, the instantaneous rate vector $\mathbf{R}(\mathbf{h}) \triangleq \big(R_1(\mathbf{h}),\hdots, R_L(\mathbf{h})\big)$ satisfies $R_l(\mathbf{h})\geq \rho(l)$ and $\mathbb{E}_{\mathbf{H}}[R_l(\mathbf{H})]\geq R(l)$ for all $l$, and the average probability of decoding error (averaged over all possible realizations of codebooks) $P_{e}^{(n)} \rightarrow 0$ as $n \rightarrow \infty$.
\textit{Minimum-rate capacity region} is the closure of the set of all achievable rate vectors. We note that the minimum-rate vector $\bm{\rho}$ should lie within the zero outage capacity region \cite{li2001capacity2} of a fading GBC with the peak power constraint corresponding to the minimum peak power imposed by the energy harvesting process. Since non-zero minimum rates can be ensured only if the energy harvesting process $\{Y_k^s\}$ at the transmitter is such that $Y_k^s>\delta$ a.s. for some small $\delta>0$ for all $k$, we assume the same. \subsection{SWIPT System: Ideal Receivers} \label{SS_ID} We now provide a characterization of the minimum-rate capacity region when all receivers are assumed to be ideal. Let $\Sigma_l(\mathbf{H})\triangleq H(l)T_l'^s(\mathbf{H})$, $\nu_l(\mathbf{H})\triangleq \sigma^2_l+\sum\limits_{j =1}^{L}H(j)T_j'^s(\mathbf{H})\mathds{1}_{\mathcal{E}_{l,j}}$, where $\mathds{1}_{\mathcal{E}_{l,j}}$ is the indicator function corresponding to the event $\mathcal{E}_{l,j} \triangleq \{\sigma_l^2H(j)>\sigma_j^2H(l)\}$, $T_l'^s$ is an energy allocation policy corresponding to receiver $l$ and $\text{SNR}_l(\mathbf{H})\triangleq \Sigma_l(\mathbf{H})/\nu_l(\mathbf{H})$. Define \begin{flalign*} \hspace{-20pt} \mathcal{C}_i(\mathbf{T}'^s) = \Big\{\mathbf{R}:~ & {\rho}(l) \leq {R}(l) \leq \mathbb{E}_\mathbf{H}\Big[\mathbf{C}_{i,l}(\mathbf{H})\Big],~ l \in [1:L] \Big\}. \end{flalign*} Here, $\mathbf{C}_{i,l}(\mathbf{H}) \triangleq \frac{1}{2}\log\big(1+\text{SNR}_l(\mathbf{H})\big)$ and $\mathbf{T}'^s=(T_1'^s,\hdots, T_L'^s)$.
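For a fixed joint fading state, the per-user rate $\mathbf{C}_{i,l}(\mathbf{H})$ can be evaluated directly. The Python sketch below (rates in nats, matching the natural logarithm; the gains and allocations are made up for illustration) builds the interference term $\nu_l$ from the degradation events $\mathcal{E}_{l,j}$:

```python
import math

def rate_ideal(l, H, T, sigma2):
    """C_{i,l}(H) = 0.5 * log(1 + SNR_l(H)) for the ideal-receiver region.

    nu_l collects interference from users j with sigma_l^2 H(j) > sigma_j^2 H(l),
    i.e. users to whom receiver l is degraded (the event is false for j = l)."""
    Sigma_l = H[l] * T[l]
    nu_l = sigma2[l] + sum(
        H[j] * T[j]
        for j in range(len(H))
        if sigma2[l] * H[j] > sigma2[j] * H[l]
    )
    return 0.5 * math.log(1.0 + Sigma_l / nu_l)

# Hypothetical two-user state: equal gains, receiver 1 noisier than receiver 0.
H, T, sigma2 = [1.0, 1.0], [1.0, 1.0], [1.0, 2.0]
print(rate_ideal(0, H, T, sigma2), rate_ideal(1, H, T, sigma2))
```

In this example receiver 0 is the stronger user, so it sees no interference, while receiver 1 treats receiver 0's signal as noise; this matches superposition coding with successive cancellation at the stronger receiver.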
For $\bm{\Delta} \triangleq (\Delta_1,\hdots,\Delta_L)$, let $$\mathcal{T}^s(\bm{\Delta}) \triangleq \Big\{\mathbf{T}'^s: \mathbb{E}_\mathbf{H}\big[\sum\limits_{l=1}^{L}T_l'^s(\mathbf{H})\big]\leq \mathbb{E}[Y^s],~R_l(\mathbf{H}) \stackrel{\text{a.s.}}{\geq} \rho(l),$$ $$\mathbb{E}_\mathbf{H}\big[\eta H(l)T_l'^s(\mathbf{H})\big] \geq \Delta_l,~ l \in [1:L]\Big\}.$$ \begin{thm} \label{cap_region} \emph{(Capacity Region with Ideal Receivers)}: The minimum-rate capacity region is \begin{flalign*} \hspace{35pt}\mathcal{C}_i(\bm{\Delta}) = \overline{\text{Conv}}\Bigg(\bigcup_{\mathbf{T}'^s \in \mathcal{T}^s(\bm{\Delta})} \mathcal{C}_i(\mathbf{T}'^s)\Bigg), \end{flalign*} where $\overline{\text{Conv}}(S)$ is the closure of the convex hull of the set $S$. \end{thm} \begin{proof} See Appendix A. \end{proof} Since the capacity region $\mathcal{C}_i(\bm{\Delta})$ is convex, we can obtain the boundary points of $\mathcal{C}_i(\bm{\Delta})$ by solving the following optimization problem: \begin{flalign*} \hspace{20pt} &\max_{\mathbf{T}'^s(.)} ~ \sum\limits_{l=1}^L \mu(l) \mathbb{E}_{\mathbf{H}}\big[R_l\big(T_l'^s(\mathbf{H})\big)\big],\\ & \text{s.t.}~ \mathbb{E}_\mathbf{H}\big[\sum\limits_{l=1}^{L}T_l'^s(\mathbf{H})\big]\leq \mathbb{E}[Y^s], \\ & ~R_l(\mathbf{H}) \stackrel{\text{a.s.}}{\geq} \rho(l),~\forall ~ l,\\ & \mathbb{E}_\mathbf{H}\big[\eta H(l)T_l'^s(\mathbf{H})\big] \geq {\Delta_l},~\forall ~ l. \end{flalign*} Let $\Pi(.)$ be a permutation function on $[1:L]$ such that $H\big(\Pi(1)\big)/\sigma_{\Pi(1)}^2\geq H\big(\Pi(2)\big)/\sigma_{\Pi(2)}^2\geq \hdots\geq H\big(\Pi(L)\big)/\sigma_{\Pi(L)}^2.$ Also, let $T_{m,l}'$ ($m$ indicates minimum) denote the energy expended for maintaining the minimum rate $\rho(l)$ and $T_{e,l}'$ ($e$ denotes excess) be the excess energy such that $T_{m,l}'+T_{e,l}'=T_{l}'^s$.
Then, with additional algebraic manipulation, it is easy to show that the above optimization is equivalent to the following optimization problem: \begin{flalign*} \hspace{20pt} &\max_{\mathbf{T}'_e} ~ \sum\limits_{l=1}^L \mu(l)\Big[\rho(l) + \mathbb{E}_{\mathbf{H}_{\text{ef}}}\big[C_{l}^{\text{ef}}(\mathbf{H}_{\text{ef}})\big]\Big],\\ & \text{s.t.}~ \mathbb{E}_{\mathbf{H}_{\text{ef}}}\big[\sum\limits_{l=1}^{L}T_{e,l}'(\mathbf{H}_{\text{ef}})\big]\leq \mathbb{E}[Y^s], \\ & \mathbb{E}_{\mathbf{H}_{\text{ef}}}\big[\eta H_{\text{ef}}(l)T_{e,l}'(\mathbf{H}_{\text{ef}})\big] \geq {\Delta_{l,\text{ef}}},~\forall ~ l, \end{flalign*} where $C_{l}^{\text{ef}}(\mathbf{H}_{\text{ef}}) \triangleq \frac{1}{2}\log\big(1+\text{SNR}_{l,\text{ef}}(\mathbf{H}_{\text{ef}})\big),$ $\text{SNR}_{l,\text{ef}}(\mathbf{H}_{\text{ef}})=\Sigma_{l,\text{ef}}(\mathbf{H}_{\text{ef}})/\nu_{l,\text{ef}}(\mathbf{H}_{\text{ef}})$, $\Sigma_{l,\text{ef}}(\mathbf{H}_{\text{ef}})=H_{\text{ef}}\big(\Pi(l)\big)T_{e,\Pi(l)}'(\mathbf{H}_\text{ef})$ and $\nu_{l,\text{ef}}(\mathbf{H}_{\text{ef}})=\sigma_{l,\text{ef}}^2+\sum\limits_{j <l}H_{\text{ef}}\big(\Pi(j)\big)T_{e,\Pi(j)}'(\mathbf{H}_{\text{ef}})$. We refer to $\mathbf{H}_{\text{ef}}$ and $(\sigma_{l,\text{ef}}^2,~l \in [1:L])$ as the effective fading coefficients and noise variances respectively, which can be obtained as in \cite{jindal2003capacity}. {Also, $\mathbf{T}'_e \triangleq (T_{e,1}',\hdots,T_{e,L}')$ and $\Delta_{l,\text{ef}}=\Delta_l- \mathbb{E}_{\mathbf{H}}\big[\eta H(l)T_{m,l}'(\mathbf{H})\big]$. As an example, for the two receiver case, let $q$ denote the probability of the event $\mathcal{E}_{1,2}$ and let $q_c=(1-q)$. Denote, for $l \in \{1,2\}$, $p_l=(e^{2\rho(l)}-1)$.
Then, under the event $\mathcal{E}_{1,2}$, $\sigma_{1,\text{ef}}^2=\sigma_1^2$, $\sigma_{2,\text{ef}}^2=(\sigma_2^2-\sigma_1^2)e^{-2\rho(1)}+\sigma_1^2$, $H_{\text{ef}}(l)=H(l)e^{-2\rho(1)-2\rho(2)},$ $\Delta_{1,\text{ef}}=\Delta_1-\sigma_1^2p_1-\sigma_2^2p_1p_2q_c$, $\Delta_{2,\text{ef}}=\Delta_2-\sigma_2^2p_2-\sigma_1^2p_1p_2q$. Under the complement of the event $\mathcal{E}_{1,2}$, the indices are swapped to obtain the corresponding expressions.} \begin{rem} \label{rem1} As a consequence of Theorem \ref{cap_region}, we can recover various important results as special cases. For instance, the capacity region of a fading GBC with an energy harvesting transmitter, and without power transfer and minimum rate constraints, is readily obtained. We also obtain the capacity of a fading AWGN channel with an energy harvesting transmitter, simultaneously sending delay-sensitive data (at a pre-specified rate $\rho$) and delay-tolerant data. The result can be obtained using the proof of Theorem \ref{cap_region}, but with two separate codebooks (one for each class of data) in conjunction with the rate-splitting argument \cite{rimoldi1996rate}. \end{rem} \subsection{SWIPT System: Time-Switching Receivers} \label{S_TS} In this section, we consider the SWIPT system with time-switching receivers. The corresponding capacity region is referred to as the minimum-rate erasure capacity region. The terminology signifies the fact that harvesting energy from a data bearing symbol (using a time-switching receiver) erases its information content. In the time-switching case, even though no minimum rate can be ensured in the slots in which energy is harvested, the constraint ensures that the receivers can harvest a certain minimum RF power. An important aspect of our model is that, without loss of \textit{optimality}, each receiver can decide when to harvest RF energy independently of the other receivers' decisions, and without the transmitter knowing the same.
The probability with which receiver $l$ decides to harvest in any slot is dictated by $\Delta_l$. Let $\pi_{\mathcal{E}}(l) \triangleq \Delta_l/ \mathbb{E}_\mathbf{H}\big[\eta H(l)T_l'^s(\mathbf{H})\big]$ and denote $\pi^c_{\mathcal{E}}(l)=1-\pi_{\mathcal{E}}(l)$. Let \begin{flalign*} \hspace{-20pt} \mathcal{C}_t^e(\mathbf{T}'^s) \triangleq \Big\{\mathbf{R}:~ & {\rho}(l) \leq {R}(l) \leq \mathbb{E}_\mathbf{H}\Big[\mathbf{C}_{t,l}(\mathbf{H})\Big],~ l \in [1:L] \Big\}, \end{flalign*} where $\mathbf{C}_{t,l}(\mathbf{H}) \triangleq \frac{\pi_{\mathcal{E}}^c(l)}{2}\log\big(1+\text{SNR}_l(\mathbf{H})\big)$. \begin{thm} \label{Th_TSR} \emph{(Capacity Region with Time-Switching Receivers)}: The minimum-rate erasure capacity region is \begin{flalign*} \hspace{35pt}\mathcal{C}_t^e(\bm{\Delta}) = \overline{\text{Conv}}\Bigg(\bigcup_{\mathbf{T}'^s \in \mathcal{T}^s(\bm{\Delta})} \mathcal{C}_t^e(\mathbf{T}'^s)\Bigg). \end{flalign*} \end{thm} \begin{proof} See Appendix B. \end{proof} \subsection{SWIPT System: Power-Splitting Receivers} \label{S_PS} At receiver $l$, let a fraction $\pi_{\mathcal{E}}(l)$ of the energy be harvested in every slot, where $\pi_{\mathcal{E}}(l)$ is defined as in the time-switching case. Let $\tilde{\nu}_l(\mathbf{H})=\sigma^2_l+\sum\limits_{j =1}^{L}\pi_{\mathcal{E}}^c(j)H(j)T_j'^s(\mathbf{H})\mathds{1}_{\tilde{\mathcal{E}}_{l,j}}$, where $\mathds{1}_{\tilde{\mathcal{E}}_{l,j}}$ is the indicator function corresponding to the event $\{\sigma_l^2\pi_{\mathcal{E}}^c(j)H(j)>\sigma_j^2\pi_{\mathcal{E}}^c(l)H(l)\}$. Also, let $ \overset{\sim}{\text{SNR}}_l(\mathbf{H}) =\Sigma_l(\mathbf{H})/\tilde{\nu}_l(\mathbf{H})$.
Define \begin{flalign*} \hspace{-20pt} \mathcal{C}_p(\mathbf{T}'^s) \triangleq \Big\{\mathbf{R}:~ & {\rho}(l) \leq {R}(l) \leq \mathbb{E}_\mathbf{H}\Big[\mathbf{C}_{p,l}(\mathbf{H})\Big],~ l \in [1:L] \Big\}, \end{flalign*} where $\mathbf{C}_{p,l}(\mathbf{H}) \triangleq \frac{1}{2}\log\big(1+\pi_{\mathcal{E}}^c(l)\overset{\sim}{\text{SNR}}_l(\mathbf{H})\big)$. \begin{thm} \label{Th_PSR1} \emph{(Capacity Region with Power-Splitting Receivers)}: The minimum-rate capacity region with constant fraction power-splitting receivers is \begin{flalign*} \hspace{35pt}\mathcal{C}_p(\bm{\Delta}) = \overline{\text{Conv}}\Bigg(\bigcup_{\mathbf{T}'^s \in \mathcal{T}^s(\bm{\Delta})} \mathcal{C}_p(\mathbf{T}'^s)\Bigg). \end{flalign*} \end{thm} \begin{proof} The proof follows from the proof of Theorem \ref{cap_region} with the channel gain from the transmitter to the $l^{\text{th}}$ receiver scaled by a factor $\pi_{\mathcal{E}}^c(l)$. \end{proof} The boundary points of $\mathcal{C}_t^e(\bm{\Delta})$ and $\mathcal{C}_p(\bm{\Delta})$ can be obtained by solving optimization problems similar to that for the ideal case. \begin{rem} In the absence of energy harvesting constraints, it is well known that a GBC and a Gaussian multiple access channel (GMAC) are \textit{duals} of each other \cite{jindal2004duality}. A similar result can be proved for these channels when powered by energy harvesting sources. On account of space constraints, we omit the technical details. Instead, we provide a numerical example in Section \ref{S_NR}. \end{rem} \section{Numerical Results} \label{S_NR} {In this section, we provide numerical examples to compare the minimum-rate capacity region of the SWIPT system for the ideal, time-switching (TS) and power-splitting (PS) receivers. The time slot duration is taken in multiples of $1~\mu$s. We consider a two-user GBC with $\sigma_1^2=0.8$, $\sigma_2^2=1.6$. We assume i.i.d. fading, independent across users.
The fading distribution at each user is chosen such that $\mathbb{E}[H(1)]=0.8$, $\mathbb{E}[H(2)]=0.5$. We consider a \textit{discretized} Rayleigh fading channel, obtained as follows. Fix an appropriate subset of the positive real axis; we choose the interval $[0,10]$ and discretize it in steps of $0.1$. For the channel between the transmitter and receiver $l$, the probability $p_{l}(h)$, $h \in \{0.1,0.2,\hdots,9.9\}$, is chosen such that $p_{l}(h)=\Pr\big(H(l) \in [h-0.1,h]\big)$ and $p_l(10)=\Pr\big(H(l) \geq 9.9\big)$, where $H(l)$ is exponentially distributed. We take $\mathbb{E}[Y^s]=10$~W. Fix $\rho(1)=300$ Kbps, $\rho(2)=150$ Kbps. Let $\mathbb{E}[T^r(1)]=90~\mu$W, $\mathbb{E}[T^r(2)]=50~\mu$W and the energy deficits $\Delta_1=60~\mu\text{W} \approx -12$ dBm, $\Delta_2=30~\mu\text{W}\approx -15$ dBm. The efficiency factor $\eta$ is fixed to $10^{-4}$. } \par{ In Figure \ref{all_rate_reg}, we compare the minimum-rate capacity regions of the SWIPT system with all receivers ideal, all TS and all PS, against the capacity region of the \textit{all ideal receivers} case without minimum rate constraints and the achievable rates without RF power transfer. For the example we consider, RF power transfer with ideal receivers enhances the data rates by 200$\%$ for receiver 1 and about $67\%$ for receiver 2, with respect to those with no RF power transfer. Also, due to the concavity of the achievable rates as a function of the power expended, $\mathcal{C}_t^e(\bm{\Delta}) \subseteq \mathcal{C}_p(\bm{\Delta})$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.33]{1All_CR_151116_1_trim-eps-converted-to.pdf} \caption{ $\mathcal{C}_i(\bm{\Delta})$, $\mathcal{C}_t^e(\bm{\Delta})$ and $\mathcal{C}_p(\bm{\Delta})$ versus the capacity region without minimum rate constraints and the achievable rates without RF power transfer.} \label{all_rate_reg} \end{center} \end{figure} \par { We also compute the capacity region for a more realistic scenario in which receiver 1 is PS and receiver 2 is TS.
We compare it with $\mathcal{C}_t^e(\bm{\Delta})$ and $\mathcal{C}_p(\bm{\Delta})$ in Figure \ref{PS_TS_PSTS_Cap_reg}, for two different values of $\mathbb{E}[Y^s]$. For the same average amount of energy harvested at the transmitter, a relatively wide range of energy deficit values at the receivers can be catered to without \textit{much} rate loss. } \begin{figure}[h] \begin{center} \includegraphics[scale=0.33]{2TSPS_CR_151116_2_trim-eps-converted-to.pdf} \caption{Comparison of capacity regions with receiver architectures all same and all different, for $\mathbb{E}[Y^s]=10$~W and $\mathbb{E}[Y^s]=15$~W. } \label{PS_TS_PSTS_Cap_reg} \end{center} \end{figure} {Next, we fix the value of $\mathbb{E}[Y^s]$ and compare the data rates achievable for various values of the energy deficits at the receivers. In Figure \ref{rr_TS_D}, we exemplify the change in $\mathcal{C}_t^e(\bm{\Delta})$ as a function of $\bm{\Delta}$. A similar plot for $\mathcal{C}_p(\bm{\Delta})$ is provided in Figure \ref{rr_PS_D}.} \begin{figure}[h] \begin{center} \includegraphics[scale=0.33]{3TS_CR_vsDv_151116_1_trim-eps-converted-to.pdf} \caption{Comparison of capacity regions for various values of $\bm{\Delta}$ with all TS receivers. } \label{rr_TS_D} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.33]{4PS_CR_vsDv_151116_1_trim-eps-converted-to.pdf} \caption{Comparison of capacity regions for various values of $\bm{\Delta}$ with all PS receivers.
} \label{rr_PS_D} \end{center} \end{figure} \par {Finally, in Figure \ref{duality_fig}, we obtain the minimum-rate capacity region (with minimum rates as before) of a two-user fading GMAC, with average energies harvested at the transmitters $\mathbb{E}[Y^s(1)]=6$~W and $\mathbb{E}[Y^s(2)]=4$~W, average energy consumed at the receiver (assumed to be ideal) $\mathbb{E}[T^r]=90~\mu$W, energy deficit at the receiver $\Delta=60~\mu$W, and receiver noise variance $\sigma^2=1$, from that of a GBC with $\mathbb{E}[Y^s]=10$~W, $\mathbb{E}[T^r(l)]=90~\mu$W, $\Delta_l=60~\mu$W and $\sigma_l^2=\sigma^2$, $l=1,2$. Even though the capacity regions are readily obtained via duality, the method does not bring out the structure of the corresponding optimal power control policies. \begin{figure}[h] \begin{center} \includegraphics[scale=0.33]{5Duality_151116_3_trim-eps-converted-to.pdf} \caption{The minimum-rate capacity region (in log scale) of a fading GMAC with energy harvesting constraints and SWIPT, via duality.} \label{duality_fig} \end{center} \end{figure} \section{Conclusion} \label{S_Conc} In this work, we considered a fading GBC with an energy harvesting transmitter and receivers. We characterized the minimum-rate capacity region of the channel with SWIPT for the ideal, time-switching and power-splitting receiver architectures. The resultant power control policies are optimal within a general class of permissible policies for energy harvesting SWIPT systems. Also, from our results in this work, we numerically obtained the corresponding capacity regions of a fading GMAC using duality arguments.} \section*{Appendix A} \textit{Proof sketch of Theorem \ref{cap_region}:} {For the sake of clarity and brevity, we explain the proof technique for fading processes with a finite support set $\mathcal{H}$.
The extension to continuous fading distributions can be handled in a standard way as in \cite{jindal2003capacity}.} \\ {Achievability:} \textit{Codebook Generation:} {Fix the power control policy $\mathbf{T}'^s$ obtained by solving the optimization problem in Section \ref{SS_ID} (with $\mathbb{E}[Y^s]$ therein replaced by $\mathbb{E}[Y^s]-\epsilon$, for some small $\epsilon>0$). Fix a message vector $\mathbf{M}$, a blocklength $n$ and a rate vector $\mathbf{R}$. The message vector is divided into independent messages $\mathbf{M}_{\mathbf{h}}$ with rates $\mathbf{R}(\mathbf{h})$ such that $R(l)=\sum_{\mathbf{h}}R_l(\mathbf{h})$, $\mathbf{h}\in\mathcal{H}^L$. Corresponding to each joint fading state $\mathbf{h}$, there exists a unique order in which the channel is \textit{degraded}. That is, the receivers can be ordered according to the increasing values of $h(l)/ \sigma_l^2,$ $l\in[1:L]$, such that the receiver with the lowest value of $h(l)/ \sigma_l^2$ is the \textit{weakest receiver} and that with the highest value is the \textit{strongest}. Accordingly, for each joint fading state (and the corresponding order of degradation), generate an $L$-level superposition codebook as per the \textit{satellization process} (\cite{bergmans1973random}, Section III B). Each of the $2^{nR_l(\mathbf{h})}$ codewords of the $l^{\text{th}}$ satellite codebook is generated i.i.d. according to $\mathcal{N}\big(0,T_l'^s(\mathbf{h})\big)$ and independently of the other codebooks. The superposition codebooks generated are shared with all the receivers.} \\ \textit{Encoding and Signalling Scheme:} {At time $k$, if the joint fading state is $\mathbf{h}_k$, the next untransmitted symbol in the codewords (to each of the receivers) corresponding to message $\mathbf{M}_{\mathbf{h}_k}$ is chosen for transmission. Since the transmitter is energy harvesting, in a given slot $k$, it may not have the required amount of energy $T'^s(\mathbf{h}_k)=\sum\limits_{l=1}^{L}T_l'^s(\mathbf{h}_k)$ in the buffer.
In that case, transmission is done according to the following \textit{truncated policy}:} \begin{displaymath} T_k^s(l) = \left\{ \begin{array}{lr} T_l'^s(\mathbf{h}_k) & : T'^s(\mathbf{h}_k) \leq \hat{E}_k^s,\\ \frac{\hat{E}_k^s}{T'^s(\mathbf{h}_k)}T_l'^s(\mathbf{h}_k) & : T'^s(\mathbf{h}_k) > \hat{E}_k^s. \end{array} \right. \end{displaymath} Since the average power expended at the transmitter, $\mathbb{E}[Y^s]-\epsilon$, is strictly less than the average harvested energy $\mathbb{E}[Y^s]$, $E_k^s \rightarrow \infty$ a.s. as $k \rightarrow \infty$ (Chapter 7, \cite{walrand1988introduction}). Accordingly, $T_k^s(l) \rightarrow T_l'^s $ a.s. as $k \rightarrow \infty$ for each $l$. \subsubsection*{Decoding} {Since the channel gains are known perfectly, receiver $l$ can \textit{demultiplex} its received sequence $w^n(l)$ into subsequences $\{w^{n_{\mathbf{h}}}(l)\}$ such that $n=\sum_{\mathbf{h}}n_{\mathbf{h}}$. Note that, by the law of large numbers, $(n_{\mathbf{h}}/n) \geq (1-\delta)p(\mathbf{h})$ for large $n$ and small $\delta>0$, where $p(\mathbf{h})$ is the probability of the joint fading state $\mathbf{h}$. Hence, for the demultiplexed subsequence corresponding to state $\mathbf{h}$ at each receiver, the decoding operation can be performed using a subcodebook of block length $n(1-\delta)p(\mathbf{h})$. Each receiver adopts successive cancellation decoding. Note that each $\mathbf{h}$ corresponds to a particular channel degradation order.
Successive cancellation decoding corresponding to the degradation order of state $\mathbf{h}$ is performed such that each receiver decodes the codewords (corresponding to message $\mathbf{M}_{\mathbf{h}})$ of all the receivers degraded with respect to it, \textit{subtracts them off}, and then decodes its own codeword.} \subsubsection*{Analysis of Error Events} First note that, by ensuring $\eta\mathbb{E}[H(l)T'^s(l)] \geq \Delta_l$, the total mean energy harvested by an ideal receiver $l$ satisfies $\mathbb{E}[D^r(l)] \geq \mathbb{E}[T^r(l)]$. Thus, the probability of energy outages can be proven to \textit{vanish} asymptotically, as in the transmitter's case. Next, note that AMS, ergodic sequences satisfy the Asymptotic Equipartition Property (AEP) \cite{barron1985strong} under appropriate regularity conditions. These conditions hold for the setting under consideration. In particular, the channel input and output random variables have finite variances. Also, the non energy harvesting channel with average transmitter power constraint equal to $\mathbb{E}[Y^s]$ has a finitely bounded capacity region and is an upper bound to the capacity of the system model under consideration. Hence, the associated mutual information rates in our case are all finite. In addition, the AMS stationary mean is dominated by an i.i.d. Gaussian measure on a suitable Euclidean space. Thus the AEP result in \cite{barron1985strong} can be invoked in our context. Decoding is done with respect to the joint finite dimensional distribution induced by the stationary AMS mean distribution on the channel input and output processes. Taking these facts into consideration, the error event analysis can be performed as in the standard case to obtain the required result.
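The claim that the buffer drifts to infinity under the truncated policy, so that truncation eventually stops occurring, is easy to check numerically. Below is a hypothetical simulation (exponentially distributed harvests, unit mean, and a planned total expenditure $T'^s=\mathbb{E}[Y^s]-\epsilon$; all parameter values are illustrative only):

```python
import random

def simulate_truncated_policy(n, mean_Y=1.0, eps=0.1, seed=1):
    """Spend T' = E[Y^s] - eps per slot when the buffer allows; otherwise
    spend everything available (a sketch of the truncated policy above)."""
    rng = random.Random(seed)
    T_target = mean_Y - eps
    E = 0.0                                       # arbitrary initial buffer energy
    truncations = 0
    for _ in range(n):
        E_hat = E + rng.expovariate(1.0 / mean_Y)  # buffer + harvest Y_k^s
        T = T_target if T_target <= E_hat else E_hat
        truncations += (T < T_target)
        E = E_hat - T                             # buffer evolution E_{k+1}^s
    return E, truncations

E, truncations = simulate_truncated_policy(20000)
# Positive drift of eps per slot: E grows without bound, truncations stay rare.
print(E, truncations)
```

Because the mean drift of the buffer is $+\epsilon$ per slot, almost every sample path spends the planned allocation in all but finitely many slots, which is exactly why the truncated policy achieves the same rates as the unconstrained average-power policy.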
\subsection*{Converse:} To prove the converse, assume that there exist codebooks (satisfying the required constraints; in particular, the minimum rate constraint is equivalent to a time varying constraint on the minimum transmitted power), an encoder and decoders such that $P^{(n)}_e$ (the average probability of decoding error) goes to zero as $n \rightarrow \infty$. For the energy harvesting transmitter, $\frac{1}{n} \sum\limits_{k=1}^{n}T_k^s \leq \frac{1}{n} \sum\limits_{k=1}^{n}Y_k^s\leq \mathbb{E}[Y^s]+\delta_{n_0} $ for $n>n_0$ and an arbitrarily chosen small $\delta_{n_0}>0$. Hence, the rates obtained via any coding and decoding scheme subject to the average power constraint $\mathbb{E}[Y^s]$ alone form an upper bound to the system capacity. This proves the converse. \qed \section*{Appendix B} \textit{Proof of Theorem \ref{Th_TSR}:} At receiver $l$, fix an appropriate $\pi_{\mathcal{E}}(l) \in [0,1]$. In each time slot, receiver $l$ harvests RF energy with probability $\pi_{\mathcal{E}}(l)$. If energy is harvested, the channel output is recorded as an erasure. Thus, the system with time-switching receivers can be equivalently thought of as a fading GBC concatenated with an erasure channel. Encoding is done as in the proof of Theorem \ref{cap_region}. The decoder discards the erasures and performs successive cancellation on the remaining outputs. The erasures are independent of the channel output and the fraction of erasure instances converges almost surely to $\pi_{\mathcal{E}}(l)$. Hence, the achievability follows as in the case of Theorem \ref{cap_region}. \par To prove the converse, we can assume that the encoder has access to non-causal knowledge of the erasure locations. The encoders can choose to send a zero symbol during the erasure instances. The decoder discards the erased channel outputs. This, along with the converse argument in Theorem \ref{cap_region}, proves the converse for the time-switching receiver case. \qed \bibliographystyle{IEEEtran}
\subsection*{Acknowledgements} The author thanks Matthew Daws for various helpful exchanges on operator-space tensor products, and Adam Skalski for comments and corrections on an earlier draft of this note. He also wishes to thank John Rainwater for supplying the key ideas in the proof of Theorem~\ref{t:mainthm}. Rainwater has set an admirable if idiosyncratic example over the years (see \cite{Rain-bio} for further details) but, in these more sober times, he has declined an invitation to be a co-author of this note. Nevertheless, in view of Rainwater's generous habits in acknowledging the input of colleagues and correspondents, it seems only fitting to return the favour. \end{section}
\subsection{Benchmark Families}\label{sec:benchmarkfamilies} The upper part of \Cref{tab:benchmarks} shows the results for live updates from specification patterns introduced by Menghi et al.~\cite{RobotSpecifications}, where \textbf{Reactivity} implements additional interaction with the environment. The specifications define the behavior of a robot that is able to travel between $n$ different locations and needs to satisfy different specifications on the way. Our second set of benchmarks is taken from the annual synthesis competition \textsc{SYNTCOMP}~\cite{SYNTCOMP}. The results for live updates in the reactive synthesis setting are shown in the lower part of \Cref{tab:benchmarks}. \begin{itemize} \item \textbf{Visit}, \textbf{Seq. Visit}, and \textbf{Patrolling} require the robot to visit every location once, in a sequence, and infinitely often, respectively. \item \textbf{Reactivity}. The reactivity specification forces the robot to react to an event after two steps at the latest by driving to a delineated location, e.g., for refueling. The Reactivity specification can be added to arbitrary specifications. \item \textbf{Relay Station}. The running example of this paper. The relay station communicates with $n$ satellites and forwards the message once the clients have acknowledged. \item \textbf{Arbiter}. An arbiter controls the access of multiple clients to a shared resource. It ensures that every request to the resource is eventually granted. We consider three arbiter variants: a simple arbiter (\textbf{s}) only iterating over grants, a full arbiter (\textbf{f}) only granting access if requested beforehand, and a prioritized arbiter (\textbf{p}) that prioritizes the requests of client 0. \item \textbf{ABP}. The alternating bit protocol consists of a receiver \textbf{ABPReceiver} and a transmitter \textbf{ABPTransmitter} specifying the data link layer in the OSI communication network. \item \textbf{Load Balancer}. The load balancer distributes workload over $n$ workers. 
\end{itemize} In addition to the specifications, we denote updates with an increased parameter by \textbf{$n \rightarrow n+1$}. Such updates are of interest if the parameter may change during execution, e.g., increasing the number of clients of an arbiter. \begin{table}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c} \hline \multicolumn{7}{c}{\textit{Robot Specification Patterns}}\\ \hline \textit{Ben.} & \textit{Update} & \textit{\#OM-States} & \textit{\#Fin. Trace} & \textit{Universal} & \textit{Time} $\psi$& \textit{Time Univ.} \\ \hline {Visit} & Seq. Visit & 4 & 4 & \textit{real.} & 0.75&0.75\\ & Patrolling & 6 & 6 & \textit{real.} & 0.68 & 0.68\\ & Seq. Patrolling & 6 & 6 & \textit{real.} & 0.64 & 0.72\\ & Reactivity & 7 & 7 & \textit{real.} & 0.49 & 0.49\\ \hline Seq. Visit & Patrolling & 14 & 14 & \textit{real.} & 0.56& 0.59\\ & Seq. Patrolling & 16 & 16 & \textit{real.} & 0.57 & 0.59\\ & Reactivity & 5 & 5 & \textit{real.} & 0.44& 0.44\\ \hline Patrolling & Ord. 
Visit & 6 & 6 & \textit{real.} & 0.61 & 0.67 \\ & Reactivity & 7 & 7 & \textit{real.} & 0.49 & 0.52\\ \hline \multicolumn{7}{c}{\textit{SYNTCOMP}}\\ \hline Relay Station & 1 $\rightarrow$ 2 & 4 & 4 & \textit{real.} & 16.26 & 17.23\\ & 2 $\rightarrow$ 1 & 19 & 19 & \textit{real.} & 0.61 & 0.61 \\ \hline Arbiter & 2f $\rightarrow$ 3f & 11 & 6 & \textit{unreal.} & 5.30 & - \\ & 2s $\rightarrow$ 2f & 4 & 2 & \textit{unreal.} & 0.56 & - \\ & 2s $\rightarrow$ 4s & 4 & 4 & \textit{real.} & 0.69 & 0.79 \\ & 2s $\rightarrow$ 2p & 13 & 13 & \textit{real.} & 0.46 & 0.48\\ & 2f $\rightarrow$ 2p & 10 & 10 & \textit{real.} & 0.45 & 0.52\\ & 2p $\rightarrow$ 3p & 6& 6 & \textit{real.} & 0.65 & 0.74 \\ \hline ABPReceiver & 1 $\rightarrow$ 2 & 5 & 4 & \textit{unreal.} & 0.55 & - \\ & 2 $\rightarrow$ 3 & 9 & 3& \textit{unreal.} & 0.43 & - \\ \hline ABPTransmitter & 1 $\rightarrow$ 2 & 5 & 5 & \textit{real.} & 2.70 & 2.82\\ \hline Load Balancer & 2 $\rightarrow$ 4 & 7 & 7 & \textit{real.} & 0.72 & 0.75 \\ \end{tabular}} \vspace{0.2cm} \caption{Results of Live Updates for Robot and SYNTCOMP specifications.} \label{tab:benchmarks} \vspace{-.8cm} \end{table} \subsection{Observations} Throughout all experiments, the minor runtime overhead of the universal update synthesis shows that the additional cost of live update correctness is acceptable. The robot specifications provide insight into the obligations raised during execution. Since most of the benchmarks exhibit the same structural behavior, i.e., the robot visits the locations under some restrictions, the universal live updates are realizable. Even when adding requests, e.g., the robot has to refuel within two steps after a request, the live update is realizable by satisfying the open obligations after the update. Changes to the visiting sequence, or reaching a location infinitely often with patrolling, increase the size of the obligation monitor (\textit{\#OM-States}) but do not lead to unrealizability. 
Nevertheless, the sizes of the obligation monitors indicate that tracking the behavior of the system is necessary to obtain the correct obligation. Altogether, our results show that although robot specifications raise obligations, synthesizing correct live updates is often feasible due to the absence of conflicts between the specifications. The most interesting reactive-system benchmarks are the arbiter live updates. Changing a specification to a simple arbiter is realizable since the arbiter does not additionally restrict the behavior. However, live updates to the full arbiter are only possible from some obligation monitor states, shown by the difference between \textit{\#OM-States} and \textit{\#finite trace updates}. Unrealizability follows from obligation states forcing a grant: an unrequested grant by the update system would be spurious. Since the prioritized arbiter does not include non-spuriousness, a live update from and to this arbiter is realizable. The relay station can be universally updated to versions with one more and one fewer base station. Once computed, the obligations can be satisfied in finitely many time steps, and synthesizing a solution that reacts to all obligations is possible. The experiments answer the questions stated at the beginning of this section: specifications that constitute meaningful live updates impose obligations on the update system, as shown by the large number of states of the obligation monitors. Realizability of the update system depends on the restrictiveness of the specification; even if the universal update is unrealizable, our results show that in all benchmarks some finite trace live updates are realizable. \subsection{Model Checking Live Updates} Model checking a transition system TS against an LTL formula $\phi$ corresponds to answering the question whether TS satisfies $\phi$, i.e., $TS \vDash \phi$. For live systems, the evaluation of the update transition system starts with the initial finite execution and switches to the update system afterwards. 
Model checking the update system is therefore a language inclusion check of the traces of the transition system combined with $\trace$ against the \liveltl semantics. \begin{definition}[Model Checking Finite Trace Live Updates]\label{def:modelcheckingfinitetrace} Let $TS_U$ be an update system, $\phi$ be an initial specification, $\psi$ be an update specification, and $\trace$ be a finite trace. The problem of model checking finite trace live updates is defined as $\trace \cdot Traces(TS_U) \subseteq \text{Words}(\phi,\psi, \trace).$ \end{definition} The model checking problem can be split into two separate parts, directly identifying the newly introduced conditions for live systems with the operators $\vDashInitial$ and $\vDashUpdate$. In addition, $TS_U$ combined with $\trace$ needs to satisfy the update semantics of \liveltl. Since both tasks can possibly be performed in isolation from each other, the overhead incurred by the live update semantics under the assumption of an update system already verified against LTL is an interesting topic but is left open for future work. The complexity of the problem is stated w.r.t. the length of the trace and the combination of initial and update formula: \begin{theorem}[Complexity in $\phi$, $\psi$, and $\eta$] The model checking problem for finite trace live updates is \textsc{PSPACE}-complete in $|\phi| + |\psi|$ and \textsc{NL} in $\eta \cdot TS_U$. \end{theorem} The proof is based on model checking the combination of $\eta$ and $TS_U$. The universal live update is verified independently of specific initial traces. The condition is stronger than for finite trace updates, and the number of compatible initial and update systems is smaller. Given that the context is unknown, the executions starting in the initial state of $TS_U$ need to satisfy every possible open obligation. Universal updates are relevant if neither the trace nor the obligation monitor is stored or computed. 
Given the initial system, model checking universal update compatibility has the same complexity as for finite trace updates. \begin{definition}[Model Checking Universal Live Updates]\label{def:modelcheckinguniversal} Let $TS_I$ be an initial system, $TS_U$ be an update system, $\phi$ be an initial specification, and $\psi$ be an update specification. The problem of model checking universal live updates is defined as $\forall \trace \in \text{FinTraces}(TS_I): \trace \cdot Traces(TS_U) \subseteq \text{Words}(\phi,\psi, \trace).$ \end{definition} The implicit update points in $TS_I$ allow for the connection of both transition systems and model checking with a linearly increased formula. \begin{theorem}[Complexity in $\phi + \psi$, and $TS_I \cdot TS_U$]\label{th:modelcheckinguniversal} The~model~checking problem for universal live updates is \textsc{PSPACE}-complete in $|\phi|+|\psi|$ and \textsc{NL} in $TS_I \cdot TS_U$. \end{theorem} The complexity results from encoding the live update in the combined transition system $TS_I \cdot TS_U$ and an adapted formula. Based on the model checking results, we introduce live synthesis, the major contribution of this paper. \subsection{Live Synthesis}\label{sec:livesynthesis} In this section, we introduce the problem of live synthesis and show the complexity of synthesizing live systems. Synthesis of live updates during the runtime of the initial system promises correct-by-construction updates that can substitute the executed system instantaneously. In contrast to model checking, the synthesis procedure returns an implementation or \textit{unrealizable}, proving that the finite trace or initial system and initial specification are incompatible with the update specification. We begin with live updates for an explicit finite trace of the initial system -- the update system needs to react to the explicit context and open obligations. 
The definition follows the model checking problem, but searches for a transition system satisfying the live update. \begin{definition}[Finite Trace Live Synthesis]\label{def:finitetracelivesynthesis} Let $\phi$ be an initial specification, $\psi$ be an update specification, and $\trace$ be a finite trace. The finite trace live synthesis problem is the computation of a transition system $TS$ s.t. $\trace \cdot \text{Traces}(TS) \subseteq \text{Words}(\phi,\psi, \trace).$ \end{definition} We additionally call a live update \textit{realizable} if there exists a transition system that satisfies the finite trace live update. The complexity of the update synthesis is expressed w.r.t. $\phi$ and $\psi$ and aligns with existing LTL synthesis bounds. \begin{theorem}[Complexity in $\phi$ and $\psi$] The finite trace live synthesis problem is \textsc{2EXPTIME}-complete in $|\phi|$ and $|\psi|$. \end{theorem} The proof is subsumed by the proof of \Cref{th:universallivesynthesis}. The universal update is again of interest if the context of the live update is unknown. Synthesizing a transition system that satisfies the universal live update enables the user to plug in the new system at any time step without further analysis. \begin{definition}[Universal Live Synthesis]\label{def:universallivesynthesis} Let $\phi$ be an initial specification, $TS_I$ be an initial system, and $\psi$ be an update specification. The universal live synthesis problem is the computation of a transition system TS s.t. $\forall \trace \in \text{FinTraces}(TS_I): \trace \cdot \text{Traces}(TS) \subseteq \text{Words}(\phi,\psi, \trace).$ \end{definition} Again, we call the problem of the existence of a solution realizability. In general, the universal update induces a conjunction of doubly exponentially many obligations. To avoid the expansion of the update system, we combine the parity games of the initial and update system. 
Again, the initial formula determines the impact on the update system and drives the complexity results. \begin{theorem}[Complexity in $\phi$ and $\psi$]\label{th:universallivesynthesis} The universal update synthesis problem is \textsc{2EXPTIME}-complete in $|\phi|$ and $|\psi|$. \end{theorem} \begin{proof}[Sketch] The hardness proof follows from \Cref{th:liveltlltl}. To show the completeness, we sketch the reduction from \liveltl to LTL. Let $\phi'$ be $\phi$ with release formulas limited to the environment AP $update$. We build the parity game of $\phi' \wedge \LTLeventually(update \wedge \LTLnext \psi)$ (cf.~\cite{paritygames}), where $update$ is enforced to occur only once but will eventually hold. We introduce the following changes to the game: the first part of the game ($\phi'$) is restricted to the edges that can be taken in $TS_I$, and all edges are controlled by the environment. Therefore, the environment can move arbitrarily in the first game and build any possible obligation. Solving the game synthesizes a universal update for the triple $TS_I,\phi, \psi$ regarding the \liveltl semantics. Since the reduction is linear in $|\phi|$ and $|\psi|$, we obtain the complexity results from LTL for $\phi$ and $\psi$. \end{proof} \subsection{LiveLTL} \liveltl is designed according to three aspects: (1) the initial system is not able to enforce new obligations after termination, (2) all obligations stated before termination are satisfied by the update system, and (3) obligations are satisfiable in finite time. This guideline is a trade-off between independence from the previous system and carrying over obligations from the initial specification to the update system. The definition of \liveltl follows the finite trace update structure and builds the language for inputs as a combination of a finite and an infinite trace evaluation. 
The syntax is taken from LTL, and we assume the set of atomic propositions of the initial system to be a subset of the atomic propositions of the update system. As an extension of the semantic operator $\vDash$ of LTL, the operators $\vDashInitial$ and $\vDashUpdate$ form the language for the initial system and the update system, respectively. $\vDashUpdate$ performs an index shift from time-step 0 to the update position and evaluates the update formula with the standard LTL operator; it is defined as $\sigma, i \vDashUpdate \phi \text{ iff }\ \sigma, i+|\trace| \vDash \phi$. Since the update specification is only relevant for the update system, the shift of size $|\trace|$ enables the correct evaluation of the update system's part of the trace. $\vDashInitial$ inserts $|\trace|$ as an upper bound for recurrent formulas, i.e., formulas with the \textit{release} operator: \begin{align*} \sigma, i & \vDashOriginal \LTLtrue && \phantom{t}\sigma, i \nvDashOriginal \LTLfalse\\ \sigma, i & \vDashOriginal a && \text{ iff }\ A_i \vDash a, \text{i.e. } a \in A_i\\ \sigma, i & \vDashOriginal \neg a && \text{ iff }\ A_i \nvDash a, \text{i.e. } a \notin A_i\\ \sigma, i & \vDashOriginal \varphi_1 \wedge \varphi_2 &&\text{ iff }\ \sigma, i \vDashOriginal \varphi_1 \text{ and}\ \sigma, i \vDashOriginal \varphi_2\\ \sigma, i & \vDashOriginal \varphi_1 \vee \varphi_2 &&\text{ iff }\ \sigma, i \vDashOriginal \varphi_1 \text{ or}\ \sigma, i \vDashOriginal \varphi_2\\ \sigma, i & \vDashOriginal \LTLnext \varphi &&\text{ iff }\ {\sigma, i+1 \vDashInitial \varphi}\\ \sigma, i & \vDashOriginal \varphi_1 \LTLuntil \varphi_2 && \text{ iff }\ \exists j, j \geq i.\ \sigma,j \vDashOriginal \varphi_2\ \text{and} \ \forall k,i\leq k < j.\ \sigma, k \vDashOriginal \varphi_1\\ \sigma, i & \vDashOriginal \phi_1 \LTLrelease \phi_2 && \text{ iff }\ \forall j,{ \color{red}|\trace| > j \geq i.}\ \sigma, j \vDashOriginal \phi_2\ \text{or}\\ &&&\phantom{ a}\ \exists k,{ \color{red}|\trace| > k \geq i.}\ (\sigma, k \vDashOriginal \phi_1\wedge \forall l, i \leq l \leq k.\ \sigma, l \vDashOriginal \phi_2) \end{align*} Informally, $\phi_1 \LTLrelease \phi_2$ opens the \textit{obligation} $\phi_2$ in every execution step, which contradicts (1) if evaluated after the update. Since standard LTL semantics allows the specification to open new obligations indefinitely, $\vDashInitial$ is built to limit this behavior to the actual finite execution of the initial system. The definition of $\vDashInitial$ mostly follows the definition of $\vDash$, except for the evaluation of \textit{release} formulas. For all indices greater than or equal to the length of the trace, $\phi_1 \LTLrelease \phi_2$ is immediately satisfied, thus ending the creation of new obligations by the initial implementation. Therefore, the initial operator permits the transfer of finitely satisfiable obligations to the update system (2), but forbids any impact of the initial system after its termination (1). Note that for LTL formulas in PNF, all operators except \textit{release} only specify finite behavior and all open obligations are satisfiable in finite time (3). 
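To make the bounded-release clause concrete, the following sketch evaluates the $\vDashInitial$ semantics on ultimately periodic words $\mathit{stem} \cdot \mathit{loop}^\omega$. The tuple encoding, the helper names, and the one-loop-period search horizon for \textit{until} are our own illustrative assumptions and not part of the paper's formalism.

```python
# Illustrative evaluator for the |=_Initial semantics on ultimately periodic
# words stem . loop^omega.  Formulas in positive normal form, as tuples:
#   ('lit', a, pos)  ('and', f, g)  ('or', f, g)  ('X', f)  ('U', f, g)  ('R', f, g)

def letter(stem, loop, i):
    """The i-th letter (a set of atomic propositions) of stem . loop^omega."""
    return stem[i] if i < len(stem) else loop[(i - len(stem)) % len(loop)]

def holds(phi, stem, loop, i, bound):
    """eta . sigma, i |=_Initial phi, where bound = |eta| caps release formulas."""
    op = phi[0]
    if op == 'lit':
        _, a, pos = phi
        return (a in letter(stem, loop, i)) == pos
    if op == 'and':
        return holds(phi[1], stem, loop, i, bound) and holds(phi[2], stem, loop, i, bound)
    if op == 'or':
        return holds(phi[1], stem, loop, i, bound) or holds(phi[2], stem, loop, i, bound)
    if op == 'X':
        return holds(phi[1], stem, loop, i + 1, bound)
    if op == 'U':
        # One loop period past max(i, |stem|) suffices as a search horizon,
        # since formula values repeat along the loop.
        horizon = max(i, len(stem)) + len(loop)
        for j in range(i, horizon + 1):
            if holds(phi[2], stem, loop, j, bound):
                return all(holds(phi[1], stem, loop, k, bound) for k in range(i, j))
        return False
    if op == 'R':
        # Bounded release: for i >= bound the universal range is empty and the
        # formula is immediately satisfied, mirroring the |=_Initial clause.
        if all(holds(phi[2], stem, loop, j, bound) for j in range(i, bound)):
            return True
        return any(holds(phi[1], stem, loop, k, bound)
                   and all(holds(phi[2], stem, loop, l, bound) for l in range(i, k + 1))
                   for k in range(i, bound))
    raise ValueError(op)

# phi_1 = G(m1 -> X F i1), with G psi = false R psi; the proposition '__ff'
# never occurs in a letter, so it encodes the constants false / true.
FF, TT = ('lit', '__ff', True), ('lit', '__ff', False)
PHI1 = ('R', FF, ('or', ('lit', 'm1', False),
                  ('X', ('U', TT, ('lit', 'i1', True)))))
```

For instance, with the finite trace $\{m_1\}\{\}$ and an update that outputs $i_1$ in every step, `holds(PHI1, [{'m1'}, set()], [{'i1'}], 0, 2)` returns `True`; if the update never outputs $i_1$, the obligation $\LTLeventually i_1$ stays open and the call returns `False`.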
The newly introduced operators are used to define the language of \liveltl. \begin{definition}[Language of \liveltl] Let $\phi,\psi$ be LTL formulas and let $\trace \in (2^{AP})^{*}$. The linear time property induced by $\phi, \psi$, and $\trace$ is \vspace{-0.12cm} \[\text{Words}(\phi,\psi, \trace) = \{\trace \cdot \sigma \in (2^{AP})^{\omega} \mid \trace \cdot \sigma, 0 \vDashInitial \phi\ \wedge\ \trace \cdot \sigma, 0 \vDashUpdate \psi\}.\] \end{definition} \vspace{-0.09cm} The language depends on the initial specification, the update specification, and the finite trace. Evaluating the inclusion of an infinite trace whose first $|\trace|$ elements are fixed combines the operators $\vDashInitial$ and $\vDashUpdate$. The initial \liveltl operator is defined on the syntactic structure of the initial formula and does not resolve syntactic tautologies. Providing formulas without syntactic ambiguity that cannot be dissolved in $|\trace|$ time steps is left to the specifier. The following theorem relates \liveltl and LTL. \begin{theorem}\label{th:liveltlltl} \liveltl and LTL are equally expressive. \end{theorem} The proof is a reduction via encoding the initial trace into the LTL formula. While being equally expressive, \liveltl enables the direct evaluation of the newly introduced live update problems in a given context. Correctness for finite trace live updates follows from standard language inclusion. \begin{definition}[Finite Trace LiveLTL Update]\label{def:finitetraceliveupdateltl} Let $TS_U$ be an update system, $\phi$ be an initial specification, $\psi$ be an update specification, and $\eta$ be a finite trace. $TS_U$ is correct w.r.t. 
finite trace \liveltl if $\trace \cdot \text{Traces}(TS_U) \subseteq \text{Words}(\phi, \psi, \trace).$ \end{definition} \begin{example} Interpreting the running example as a finite trace LiveLTL update, we can obtain the finite trace $\trace = \{m_1, i_0, i_1, r\}, \{i_1\}, \{m_0, m_1\}$ as an execution of the relay station. Evaluating $\trace$ with $\vDashInitial$ shows that $\LTLfinally\,i_0$, $\LTLfinally\,i_1$, and $\LTLfinally\,r$ need to be satisfied by the update system, since both measurements are unanswered and no report was given after both base stations sent their measurements. Note that changing the last trace element to $\{m_0\}$ eliminates the obligations for the base station $i_1$ and the report $r$. \end{example} The finite trace update directly translates to the definition of \liveltl, whereas the universal live update adds a level of quantification. \begin{definition}[Universal LiveLTL Update]\label{def:universalliveupdateltl} Let $TS_I$ be an initial system, $TS_U$ be an update system, $\phi$ be an initial specification, and $\psi$ be an update specification. $TS_{U}$ is correct w.r.t. universal \liveltl if \vspace{-0.1cm} \[\forall \trace \in \text{FinTraces}(TS_I): \trace \cdot \text{Traces}(TS_U) \subseteq \bigcup_{\trace' \in \text{FinTraces}(TS_I)}\text{Words}(\phi, \psi, \trace').\] \end{definition} \vspace{-0.1cm} To satisfy the universal update condition, the update system needs to be robust against every possible obligation of the initial system. We explore the model checking and synthesis problems of \liveltl in \Cref{sec:model-checkingandsynthesis}. \subsection{Obligations} The impact of the initial system on the update system is captured by the operator $\vDashInitial$ and gives rise to a class of temporal properties. We investigate this class and build a monitor that traces the open obligations during the execution of a system. In practice, the explicit update to be performed is unknown during the design of the initial system. 
Therefore, one approach to supporting live updates is to keep track of \textit{open} obligations while the system executes. To characterize the obligations that \liveltl can enforce, we introduce the \textit{obligation property}. \begin{definition}[Obligation Property] A linear time property $P_{obl}$ over AP is called an \textit{obligation} property if every word $\sigma \in P_{obl}$ has a good prefix, i.e., for every $\sigma \in P_{obl}$ there exists a prefix $ \sigma[0, m]$ s.t. $\forall x \in (2^{AP})^\omega:\ \sigma[0,m] \cdot x \in P_{obl}$. Obligation properties coincide with the class of \textit{co-safety} properties. \end{definition} That obligations and co-safety properties describe the same class of languages is a natural consequence of the \liveltl semantics. To obtain the open obligations with constant cost during runtime, the construction of a monitor tracking the obligations provides a space-bounded solution. The monitor is meant to be constructed alongside the initial system. \begin{definition}[Obligation Monitor] Let $\stripoperator : LTL \rightarrow LTL$ be a function syntactically substituting every $\LTLrelease$ by $\LTLtrue$. A deterministic obligation monitor for an LTL formula $\phi$ is the tuple $\obligationmonitor_\phi = (T, t_0, \Upsilon, \af, o)$, where $T = \{ \phi' \mid \omega \in \apstar: \phi' = \af(\phi, \omega)\}$ is the set of states, $t_0 = \stripoperator(\phi)$ is the initial state, $\Upsilon = 2^{AP}$ is the set of directions, $\af$ is the transition function defined over $T$ and $\Upsilon$, and $o(t) = \stripoperator(t)$ is the labeling function. \end{definition} Since the state space of $\obligationmonitor_\phi$ corresponds to the state exploration of $\phi$, converting the formulas to obligations is achieved by $\stripoperator$ and stored in the labeling function. The labels can be interpreted as the obligations that have to be satisfied by the update system if an update is initiated in this state. 
The obligation monitor only tracks states and does not guarantee that every reachable state corresponds to a reachable state of a correct implementation of $\phi$. We justify this property by assuming $TS_I$ is correct. \newsavebox{\boxfinally} \savebox{\boxfinally}{$ \LTLfinally \,$} \begin{figure}[t] \centering \resizebox{.6\textwidth}{!}{ \begin{tikzpicture} \node[draw,fill=blue!20,minimum height=3em,minimum width=6em,rounded corners=2] (N0) at (0,0) {$ \LTLtrue$}; \node[draw,fill=blue!20,minimum height=3em,minimum width=6em,rounded corners=2] (N3) at (4,0) {$\LTLnext$ \usebox{\boxfinally}$i_1$}; \node[draw,fill=blue!20,minimum height=3em,minimum width=6em,rounded corners=2] (N4) at (4,-2.1) {\usebox{\boxfinally}$ i_1 \wedge \LTLnext$ \usebox{\boxfinally}$i_1$}; \node[draw,fill=blue!20,minimum height=3em,minimum width=6em,rounded corners=2] (N5) at (0,-2.1) {\usebox{\boxfinally}$i_1$}; \node[minimum height=0.2em,minimum width=2.1em,anchor=west] (NN) at (N0.west) {}; \node[minimum height=0.2em,minimum width=2.1em,anchor=east] (NX) at (N3.east) {}; \node[circle,inner sep=0pt] (I) at ($ (N0.north west) + (0.2,0.4) $) {}; \path[->,>=stealth] (I) edge ($ (N0.north west) + (0.4,0) $) (NN) edge[loop left] node {$ \neg m_1 $} (N0) (NX) edge[loop right,opacity=0] node[opacity=0] {\phantom{$ m_{0} $}} (N3) (N0) edge node[above] {$ m_{1} $} (N3) (N3) edge[bend right=7] node [left]{$ m_{1} $} (N4) edge[bend right=7] node[above left] {$\neg m_1$}(N5) (N4) edge[loop below] node {$ m_1 \wedge \neg i_1 $} (N4) edge[bend left=7] node[below] {$ \neg m_{1}$} (N5) edge[bend right = 7] node [right]{$m_1 \wedge i_1$}(N3) (N5) edge[loop below] node {$ \neg m_1 \wedge \neg i_1 $} (N5) edge[bend left=7] node[above, yshift=-1] {$ m_1 \wedge \neg i_1 $} (N4) edge [bend right=7] node[below right, xshift=-3pt, yshift=2pt]{$m_1 \wedge i_1$}(N3) edge node[left] {$ \neg m_{1} \wedge i_1 $} (N0) ; \end{tikzpicture} } \caption{The obligation monitor for $\protect\phi_1$ with one base station.} 
\label{fig:minimalOM} \end{figure} \begin{example} \Cref{fig:minimalOM} displays the obligation monitor for $\phi_1 =\LTLglobally(m_1 \rightarrow \LTLnext \LTLeventually i_1)$ of our running example with one base station. The monitor starts in an obligation-free state corresponding to the state before the system is started and contains one direction for every element of $2^{AP}$. Note that we denote directions symbolically. Whenever $m_1$ is received on an edge, the obligation $\LTLnext \LTLeventually i_1$ is raised. From the $\LTLnext \LTLeventually i_1$ state, we differentiate between $m_1$ and $\neg m_1$, leading either to raising $\LTLnext \LTLeventually i_1$ again together with $\LTLeventually i_1$, or to $\LTLeventually i_1$ alone, respectively. Returning to the obligation $\LTLtrue$ is only possible if $i_1$ is set to $\LTLtrue$ and $m_1$ is $\LTLfalse$ in the same step. \end{example} Note that an offset between the initial system and the obligation monitor arises. While transitions of the initial system consider environment inputs and states correspond to system outputs, elements of the state space of the obligation monitor are formulas and the transitions are defined by inputs and outputs combined. Residing in a state in the obligation monitor can be interpreted as taking a transition in the system and not yet reaching the next state. \Cref{fig:minimalOM} shows a monitor for a specification whose implementation is unknown during construction; the obligation monitor over-approximates the reachable states of the implementation. One can limit the reachable states of the monitor to the paths in the transition system. Indeed, with regard to completeness, unreachable obligations need to be eliminated from the obligation monitor during verification. 
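For concreteness, the monitor of \Cref{fig:minimalOM} can be transcribed as a small state machine. The encoding below is a hand-written sketch of the figure's transition structure (the state and helper names are our own); it is not produced by the $\af$ construction.

```python
# Hand transcription of the obligation monitor in Fig. for
# phi_1 = G(m1 -> X F i1); states are named after their obligation labels.
TRUE, XF, F_XF, F = 'true', 'X F i1', 'F i1 & X F i1', 'F i1'

def step(state, m1, i1):
    """One monitor transition on the combined input/output letter (m1, i1)."""
    if state == TRUE:
        return XF if m1 else TRUE
    if state == XF:
        return F_XF if m1 else F
    if state == F_XF:
        if not m1:
            return F
        return XF if i1 else F_XF
    # state == F
    if m1:
        return XF if i1 else F_XF
    return TRUE if i1 else F

def open_obligation(trace, state=TRUE):
    """Run the monitor on a finite trace of (m1, i1) pairs and return the
    obligation an update system would inherit at that point."""
    for m1, i1 in trace:
        state = step(state, m1, i1)
    return state
```

For instance, after the trace $(m_1)(\neg m_1 \wedge i_1)(\neg m_1 \wedge i_1)$ the monitor returns to the obligation-free state, matching the example above.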
\section{Introduction}\label{sec:introduction} \input{InputFiles/introduction} \section{Running Example -- Relay Station}\label{sec:example} \input{InputFiles/running-example} \section{Preliminaries}\label{sec:prelimnaries} \input{InputFiles/preliminaries} \section{Live Updates}\label{sec:liveupdates} \input{InputFiles/liveupdates} \section{A Temporal Language for Live Updates}\label{sec:liveltl} \input{InputFiles/liveltl} \section{Model Checking and Synthesis}\label{sec:model-checkingandsynthesis} \input{InputFiles/model-checkingandsynthesis} \section{Case Study}\label{sec:casestudy} \input{InputFiles/casestudy.tex} \section{Related Work}\label{sec:relatedwork} \input{InputFiles/relatedwork} \section{Conclusion}\label{sec:conclusion} \input{InputFiles/conclusion} \bibliographystyle{splncs04}
\section{Introduction} \label{Intro} A mathematical function `$f$' in the `$x$' domain can be transformed to a function `$F$' in the `$u$' domain using the integral \begin{equation} F(u) = \int_{a}^{b} f(x) \, K(u,x) dx. \end{equation} This process is known as an integral transform, and the function $K(u,x)$ is the kernel of the transformation. The integral transform maps the problem in a given domain to another domain in which it is simpler to solve. The Laplace transform is one of the most widely used integral transforms in physics. It is very useful in solving convolution integral equations and differential equations. The inverse Laplace transform is an equally important transform with several applications. In general an inverse Laplace transform is carried out using a Bromwich contour integral in the complex plane \cite{schiff1999laplace}. This complex-variable technique, however, is used only for convenience. In fact, an inverse Laplace transform based only on real variables was introduced by Post \cite{post1930generalized} and later refined by Widder \cite{widder1934inversion}. This method has been investigated in several works \cite{jagerman1982inversion,al2001inversion,soldatov2000widder} for different applications. The kernel of a Laplace transform is an exponential function of the form $\exp(-st)$. Tsallis introduced \cite{tsallis1988possible} generalizations of the logarithm and the exponential functions as follows: \begin{equation} \ln_{q}(x) \equiv \frac{x^{1-q} - 1} {1-q}; \qquad \exp_{q}(x) \equiv [1 + (1-q) x]^{\frac{1}{1-q}}, \end{equation} where $q \in \mathbb{R}_{+}$ is the generalization parameter. These functions are generally referred to as the Tsallis $q$-logarithm and the Tsallis $q$-exponential and are inverses of each other. The $q$-logarithm and the $q$-exponential functions reduce to the usual logarithm and exponential functions in the $q \rightarrow 1$ limit. 
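As a quick numerical sanity check of these definitions, the pair can be implemented directly. This is a minimal sketch; the function names and the sample points are our own choices.

```python
import math

def ln_q(x, q):
    """Tsallis q-logarithm; reduces to ln(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1 (for 1 + (1-q)x > 0)."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

print(exp_q(ln_q(2.5, 1.5), 1.5))        # inverse pair: recovers 2.5
print(ln_q(2.0, 1.0001), math.log(2.0))  # q -> 1 limit approaches ln 2
```

The first call confirms the inverse relation $\exp_q(\ln_q(x)) = x$, and the second illustrates the $q \rightarrow 1$ limit numerically.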
These generalized functions have been investigated in a wide variety of fields like astrophysics \cite{nakamichi2002non,sakagami2004self}, high energy physics \cite{tsallis2003nonextensive,tsallis2003fluxes}, neutrino physics \cite{luciano2021nonextensive}, mathematical physics \cite{umarov2022mathematical,yamano2002some} and nonequilibrium statistical physics \cite{rajagopal1998equations}. A generalization of the Laplace transform based on the Tsallis $q$-exponential has been given in Refs.~\cite{lenzi1999q,plastino2013tsallis,chung2013q,naik2016q}. Here again the inverse was based on a contour integral in the complex plane. In the present work we extend Post and Widder's method to the generalized Laplace transform based on the Tsallis $q$-exponential. The work is organized as follows: In Section 2, we give a brief introduction to the Laplace transform, the Post-Widder technique and the Tsallis $q$-exponential. The Laplace transform and its inverse based on the type-I kernel are defined in the third section. The properties of the Laplace transform are derived in Section 4. In Section 5 we give the $q$-generalization of the Post-Widder method for the inverse Laplace transform. The Laplace transform and the inverse Laplace transform based on the Widder method for different functions are computed in Section 6. As an application of this method we calculate the density of states of a $D$-dimensional classical ideal gas and a linear harmonic oscillator from the partition function using the Post-Widder-based inverse Laplace transform. \section{A primer on the Laplace transform, the Post-Widder technique and the Tsallis $q$-exponential} \label{primer} In this section we briefly review the concept of the Laplace transform and its inverse. Then we describe the Post-Widder method to calculate the inverse Laplace transform. The Laplace transform of a function $f(t)$, denoted by $\mathcal{L}[f(t)] = F(s)$, is defined as \begin{equation} F(s) = \int_{0}^{\infty} \exp(-st) f(t) dt. 
\label{laplacetransform} \end{equation} Here $f \in S(\mathbb{R})$ where $S(\mathbb{R})$ represents the Schwartz space of functions $f: \mathbb{R} \rightarrow \mathbb{C}$ with $f \in C^{\infty} (\mathbb{R})$, i.e., $f$ is infinitely differentiable on $\mathbb{R}$. In general $s = \sigma + i \tau$ with $\sigma$ and $\tau$ being real numbers. The integral converges when ${\rm Re}[s] = \sigma > 0$ and for $\sigma < 0$, $F(s) =0$. For a Laplace transform $F(s)$, the inverse transform is defined as \begin{equation} f(t) = \frac{1}{2 \pi i} \lim_{\tau \rightarrow \infty} \int_{\sigma - i \tau}^{\sigma+i \tau} \exp(st) F(s) ds. \label{inverselaplacetransform} \end{equation} The direct Laplace transform Eq. (\ref{laplacetransform}) and the inverse Laplace transform Eq. (\ref{inverselaplacetransform}) are inverses of each other for the functions $f \in S(\mathbb{R})$. In order to calculate the inverse Laplace transform we need to perform a Bromwich contour integration over the complex plane. Below we explain an alternative method, introduced by Post and Widder, which performs the Laplace inversion using only real variables. Let us consider the $n^{th}$ derivative of the Laplace transform $F(s)$ with respect to the variable `$s$', \begin{equation} \frac{ {\rm d}^{n}} {{\rm d} s^{n}} F(s) \equiv F^{(n)}(s) = (-1)^{n} \int_{0}^{\infty} t^{n} \exp(-st) f(t) dt. \label{laplacederiv} \end{equation} We use the following three steps: (i) first we use the variable transformation $s =n/x$; (ii) we then multiply the numerator and the denominator by $x^{n}$; and (iii) finally we use the variable change $y= t/x$, and recast the integral in Eq. (\ref{laplacederiv}) to \begin{equation} F^{(n)} \left( \frac{n}{x} \right) = (-1)^{n} x^{n+1} \int_{0}^{\infty} (y e^{-y})^{n} f(xy) dy.
\label{laplaceintegral} \end{equation} Here we would like to point out that the function $ y e^{-y}$ has a single maximum at $y=1$, and for large $n$ the function $(y e^{-y})^{n}$ is sharply peaked at $y=1$; hence Eq. (\ref{laplaceintegral}) can be rewritten as \begin{equation} F^{(n)} \left( \frac{n}{x} \right) \approx (-1)^{n} x^{n+1} f(x) \int_{0}^{\infty} (y e^{-y})^{n} dy. \end{equation} Evaluating the integral we get \begin{equation} F^{(n)} \left( \frac{n}{x} \right) \approx (-1)^{n} n! \left( \frac{x}{n} \right)^{n+1} f(x). \label{laplaceintresult} \end{equation} Since the peak of the integrand lies at $y = t/x = 1$, we identify $x=t$ and hence $s = n/x = n/t$, so in the limit $n \rightarrow \infty$ we can rewrite Eq. (\ref{laplaceintresult}) as \begin{equation} f(t) = \lim_{n \to \infty} \frac{(-1)^{n}}{n!} \, s^{n+1} F^{(n)} (s) |_{s=n/t}. \end{equation} Thus we observe that the Laplace transform and the inverse Laplace transform can be expressed as functions of real variables alone. The rate of convergence of this sequence is at least $1/n$. The Laplace transform exists for a piecewise continuous function of exponential order. Similarly, the $q$-Laplace transform has been defined for a piecewise continuous function of $q$-exponential order. The $q$-exponential based Laplace transform can be defined using three different types of kernels as noted in Ref. \cite{lenzi1999q} and these kernels are \begin{eqnarray} K_{{\rm I}} (q;s,t) &=& \exp_{q}(-st), \\ K_{{\rm II}}(q;s,t) &=& [\exp_{q}(-t)]^{s}, \\ K_{{\rm III}}(q;s,t) &=& [\exp_{q}(t)]^{-s}. \end{eqnarray} Of these three kernels, the Laplace transform has been defined using the first and the second kernels in Refs. \cite{lenzi1999q,chung2013q,naik2016q}. In the extensive limit ($q \rightarrow 1$), both these generalized Laplace transforms reduce to the ordinary Laplace transform.
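The classical Post-Widder sequence above is simple to verify numerically. The sketch below (an illustrative Python fragment of ours, not taken from the references) uses $F(s) = 1/(s+a)$, for which the derivative $F^{(n)}(s) = (-1)^{n}\, n!/(s+a)^{n+1}$ is known in closed form; the $n^{th}$ approximant then reduces to $(s/(s+a))^{n+1}$ at $s=n/t$ and converges to the exact inverse $f(t) = e^{-at}$ at the stated $1/n$ rate:

```python
import math

def post_widder_approx(t, a, n):
    """n-th Post-Widder approximant (-1)^n/n! * s^(n+1) * F^(n)(s) at s = n/t
    for F(s) = 1/(s+a).  With the closed-form derivative
    F^(n)(s) = (-1)^n n!/(s+a)^(n+1), the approximant simplifies to
    (s/(s+a))^(n+1)."""
    s = n / t
    return (s / (s + a)) ** (n + 1)

exact = math.exp(-1.0)            # exact inverse f(t) = e^(-a t) at a = t = 1
for n in (10, 100, 1000):
    print(n, abs(post_widder_approx(1.0, 1.0, n) - exact))  # error ~ 1/n
```

Closed-form derivatives are used here only for illustration; in practice the $n^{th}$ derivative must be supplied analytically or by high-order differencing.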
For the generalized Laplace transform based on the first kernel \cite{naik2016q}, the inverse Laplace transform has not been defined. However, for the case where the Laplace transform was defined using the second kernel, the inverse transform was defined using the complex integration method. \section{Laplace and inverse transforms based on the type-I kernel} \label{typeIkernel} The Laplace transform due to the type-I kernel, which was introduced in \cite{naik2016q}, is examined in this section. For this generalization the inverse Laplace transform has not been introduced so far, and we restrict ourselves to the $q < 1$ case here. Below we define the inverse Laplace transform and prove its inverse nature. A function `$f$' is said to be of $q$-exponential order `$c$', if there exists $c$, $M >0$, $T>0$, such that $|f(t)| \leq M \exp_{q}(ct)$ $\forall$ $t>T$. If a function is piecewise continuous and is of $q$-exponential order $c$, then $F_{q}(s) = L_{q} [f(t)]$ exists for $s>c$ and $\lim_{s \rightarrow \infty} F_{q}(s) =0$. Under these conditions, the $q$-Laplace transform is defined as \begin{equation} L_{q} [f(t)](s) = F_{q}(s) = \int_{0}^{\infty} f(t) \exp_{q}(-st) dt. \label{laplacetypeI} \end{equation} The corresponding inverse Laplace transform is defined as \begin{equation} L_{q}^{-1} [ F_{q}(s) ] (t) = f(t) = \frac{2-q}{2 \pi i} \int_{c-i \infty}^{c+i\infty} F_{q}(s) [\exp_{q}(-st)]^{2q-3} ds, \end{equation} where in the limit $q \rightarrow 1$, we have $[\exp_{q}(-st)]^{2q-3} \rightarrow \exp(st) $. Here $c$ is a real constant that exceeds the real part of all the singularities of $F_{q}(s)$. To prove the inverse relationship between $L_{q}$ and $L_{q}^{-1}$, we verify the following two identities: \begin{eqnarray} f(t) &=& L_{q}^{-1} [ L_{q} [ f(t) ] ], \label{Identity1} \\ F_{q}(s) &=& L_{q} [ L_{q}^{-1} [ F_{q} (s) ] ].
\label{Identity2} \end{eqnarray} Using the definition of the inverse Laplace transform we can write Eq. (\ref{Identity2}) as \begin{eqnarray} \fl L_{q} [ L_{q}^{-1} [ F_{q} (s) ] ] &=& \int_{0}^{\infty} dt \exp_{q}(-st) \; L_{q}^{-1} [ F_{q} (s) ] \nonumber \\ &=& \int_{0}^{\infty} dt \exp_{q}(-st) \; \frac{(2-q)}{2 \pi i } \; \int_{c-i \infty}^{c+i \infty} ds^{\prime} \; F_{q}(s^{\prime}) [ \exp_{q}(-s^{\prime} t) ]^{2q-3}. \end{eqnarray} We can rewrite the above expression as \begin{equation} \fl L_{q} [ L_{q}^{-1} [ F_{q} (s) ] ] = \frac{(2-q)}{2 \pi i } \; \int_{c-i \infty}^{c+i \infty} ds^{\prime} \; F_{q}(s^{\prime}) \int_{0}^{\infty} dt \exp_{q}(-st) \; [ \exp_{q}(-s^{\prime} t) ]^{2q-3}. \label{laplacelaplaceinverse} \end{equation} The second integral converges when ${\rm Re}[s^{\prime}] = c < {\rm Re}[s]$, and the resulting solution is \begin{equation} \mathcal{I}_{q} (s,s^{\prime}) = \int_{0}^{\infty} dt \exp_{q}(-st) \; [ \exp_{q}(-s^{\prime} t) ]^{2q-3} = \frac{1}{(2-q)} \; \frac{1}{(s-s^{\prime})}. \end{equation} Substituting this in Eq. (\ref{laplacelaplaceinverse}) we get \begin{equation} L_{q} [ L_{q}^{-1} [ F_{q} (s) ] ] = \frac{1}{2 \pi i } \; \int_{c-i \infty}^{c+i \infty} ds^{\prime} \frac{F_{q}(s^{\prime})} {(s-s^{\prime})}. \label{laplapinverse} \end{equation} The function $F_{q} (s)$ is an entire function and does not have any poles in the complex plane. The integrand $F_{q}(s^{\prime})/(s-s^{\prime})$ has a simple pole at $s=s^{\prime}$. To evaluate the integral we close the contour with the straight line ${\rm Re}[s^{\prime}] = c$ and an arc enclosing the pole of the integrand. The solution of the integral is \begin{equation} \int_{c-i \infty}^{c+i \infty} ds^{\prime} \frac{F_{q}(s^{\prime})} {(s-s^{\prime})} = 2 \pi i F_{q} (s). \end{equation} Substituting this in Eq. (\ref{laplapinverse}) we can observe that $L_{q} [ L_{q}^{-1} [ F_{q} (s) ] ] = F_{q} (s)$. Next we verify the first identity Eq.
(\ref{Identity1}) as follows: \begin{equation} L_{q}^{-1} [ L_{q} [ f(t) ] ] \equiv L_{q}^{-1} [F_{q}(s)] = L_{q}^{-1} \left[ \int_{0}^{\infty} dt \exp_{q}(- st) f(t) \right]. \end{equation} Using the definition of the inverse Laplace transform \begin{eqnarray} \fl L_{q}^{-1} [ L_{q} [ f(t) ] ] = \frac{2-q}{2 \pi i} \int_{c-i \infty}^{c+i \infty} [ \exp_{q}(-s t) ]^{2q-3} \int_{0}^{\infty} dt^{\prime} f(t^{\prime}) \exp_{q}(-s t^{\prime}) \; ds. \label{lapinverselap} \end{eqnarray} Rewriting the above equation (\ref{lapinverselap}) we arrive at \begin{equation} \fl L_{q}^{-1} [ L_{q} [ f(t) ] ] = \frac{2-q}{2 \pi i} \; \int_{0}^{\infty} dt^{\prime} f(t^{\prime}) \int_{c-i \infty}^{c+i \infty} [ \exp_{q}(-s t) ]^{2q-3} \exp_{q}(-s t^{\prime}) \; ds. \end{equation} Using the definition of the Dirac delta function based on the $q$-exponential described in Refs. \cite{mamode2010integral,jauregui2010new} we get \begin{equation} L_{q}^{-1} [ L_{q} [ f(t) ] ] = \int_{0}^{\infty} dt^{\prime} f(t^{\prime}) \delta(t-t^{\prime}) \equiv f(t). \end{equation} Thus, in the present section, we have introduced the inverse of the Laplace transform defined using the type-I kernel, via the complex integration technique. \section{Properties of $q$-Laplace transform} \label{properties} In this section, we list some of the properties of the $q$-Laplace transform: \begin{enumerate} \item ${\rm I}^{st}$ Identity on Limits: \begin{equation} \lim_{s \rightarrow \infty} s L_{q}[f(t)] = \lim_{t \rightarrow 0} \frac{f(t)}{1+(1-q)}, \end{equation} {\it Proof:} Let us consider a general convergent function which can be expressed in terms of a power series $f(t) = \sum_{n=0}^{\infty} a_{n} t^{n}$. The $q$-Laplace transform of this general function is % \begin{equation} L_{q} [f(t)] = \int_{0}^{\infty} \left(\sum_{n=0}^{\infty} a_{n} t^{n} \right) [1-(1-q)st]^{\frac{1}{1-q}} \, dt .
\label{lc11} \end{equation} % Since the function $f(t)$ is a convergent function we can rewrite Eq. (\ref{lc11}) as % \begin{equation} L_{q} [f(t)] = \sum_{n=0}^{\infty} a_{n} \int_{0}^{\infty} t^{n} [1-(1-q)st]^{\frac{1}{1-q}} dt . \label{lc12} \end{equation} % To solve Eq. (\ref{lc12}) we use the integration by parts method choosing $u = t^{n}$ and $dV = [1-(1-q)st]^{\frac{1}{1-q}} dt$. The resulting solution reads: % \begin{eqnarray} s L_{q} [f(t)] &=& - \frac{1}{1+(1-q)} \bigg( f(t) [1- (1-q) st]^{\frac{1}{1-q} + 1} \big|_{0}^{\infty} \nonumber \\ & & - \int_{0}^{\infty} [1-(1-q) st] \left [ \frac{d}{dt} f(t) \right] \exp_{q} (-st) dt \bigg) . \label{lc13} \end{eqnarray} On applying the limits corresponding to the integration in Eq. (\ref{lc13}) we get % \begin{eqnarray} s L_{q} [f(t)] &=& \frac{f(0)}{1+(1-q)} + \frac{1}{1+(1-q)} \nonumber \\ & & \int_{0}^{\infty} [1-(1-q)st] \; \frac{d f(t)}{dt} [1 - (1-q) st]^{\frac{1}{1-q}} dt. \label{lc14} \end{eqnarray} % Under the limiting condition $s \rightarrow \infty$, Eq. (\ref{lc14}) gives \begin{equation} \lim_{s \rightarrow \infty} s L_{q}[f(t)] = \lim_{s \rightarrow \infty} \frac{f(t)}{1+(1-q)} . \end{equation} We can observe that the RHS is independent of $s$ and can be expressed as a limiting value of the parameter $t$, and this gives us \begin{equation} \lim_{s \rightarrow \infty} s L_{q}[f(t)] = \lim_{t \rightarrow 0} \frac{f(t)}{1+(1-q)}. \end{equation} Thus we prove the first identity on the limits of a Laplace transform. \item ${\rm II}^{nd}$ Identity on Limits: \begin{equation} \lim_{s \rightarrow 0} s L_{q}[f(t)] = \lim_{t \rightarrow \infty} \frac{f(t)}{1+(1-q)}. \end{equation} {\it Proof:} To prove this limit let us consider Eq.
(\ref{lc13}) and evaluate the limit $s \rightarrow 0$; the resulting expression is % \begin{eqnarray} \lim_{s \rightarrow 0} s L_{q} [f(t)] &=& \lim_{s \rightarrow 0} \frac{f(0)}{1+(1-q)} + \frac{1}{1+(1-q)} \int_{0}^{\infty} \frac{df}{dt} dt \nonumber \\ &=& \frac{f(0)}{1+(1-q)} + \frac{f(t)}{1+(1-q)} \bigg|_{0}^{\infty}. \end{eqnarray} % Applying the limits we get % \begin{equation} \lim_{s \rightarrow 0} s L_{q} [f(t)] = \frac{f(\infty)}{1+(1-q)}. \end{equation} % The RHS of the equation can be rewritten as % \begin{equation} \lim_{s \rightarrow 0} s L_{q} [f(t)] = \lim_{t \rightarrow \infty} \frac{f(t)}{1+(1-q)}. \end{equation} % Thus we prove the second identity on the Laplace transform. \item Scaling \begin{equation} L_{q} [ f(at) ] = \frac{1}{a} F_{q} (s/a). \end{equation} {\it Proof:} Let us consider the Laplace transform of a function $f(at)$ \begin{equation} L_{q} [ f(at) ] = \int_{0}^{\infty} dt [1-(1-q) st]^{\frac{1}{1-q}} f(at). \end{equation} Substituting $at = x$, we get \begin{equation} L_{q} [ f(at) ] = \frac{1}{a} \, \int_{0}^{\infty} dx \bigg[1-(1-q) \frac{sx}{a} \bigg]^{\frac{1}{1-q}} f(x) = \frac{1}{a} F_{q}(s/a). \end{equation} Hence the scaling relation is proved. \item Shifting \begin{equation} F_{q}(s-s_{0}) = L_{q} \left[ f(t) \exp_{q} \left[ \frac{s_{0} t}{1-(1-q) s t} \right] \right] . \end{equation} {\it Proof:} Let us consider the definition of the Laplace transform % \begin{equation} F_{q}(s) = \int_{0}^{\infty} \exp_{q}(-st) f(t) dt, \end{equation} % and introduce a shift in `$s$' as `$s \rightarrow s-s_{0}$', which yields % \begin{equation} F_{q}(s-s_{0}) = \int_{0}^{\infty} \exp_{q}(-(s-s_{0})t) f(t) dt. \end{equation} % This can be rewritten as % \begin{equation} F_{q}(s-s_{0}) = \int_{0}^{\infty} \exp_{q}(-st) \exp_{q} \bigg( \frac{s_{0} t}{1-(1-q)st} \bigg) f(t) dt. \end{equation} % Hence we have proved % \begin{equation} F_{q}(s-s_{0}) = L_{q} \bigg[ f(t) \exp_{q} \bigg( \frac{s_{0} t}{1-(1-q)st} \bigg) \bigg].
\end{equation} \item $q$-translation \begin{equation} \fl L_{q} [f(t)] \big[ \exp_{q}(-st_{0}) \big]^{1+(1-q)} = L_{q} \bigg[ f\left(\frac{t-t_{0}}{1 - (1-q) s t_{0}} \right) \Theta \left( \frac{t - t_{0}}{1-(1-q)s t_{0}} \right) \bigg]. \end{equation} {\it Proof:} The RHS of the above identity gives % \begin{eqnarray} I &\equiv& L_{q} \bigg[ f\left(\frac{t^{\prime}-t_{0}}{1 - (1-q) s t_{0}} \right) \Theta \left( \frac{t^{\prime} - t_{0}}{1-(1-q)s t_{0}} \right) \bigg] \nonumber \\ &=& \int dt^{\prime} f\left(\frac{t^{\prime}-t_{0}}{1 - (1-q) s t_{0}} \right) \Theta \left( \frac{t^{\prime} - t_{0}}{1-(1-q)s t_{0}} \right) \exp_{q}(-s t^{\prime}), \nonumber \label{qtrans1} \end{eqnarray} % where $\Theta(x)$ is the Heaviside step function such that $\Theta(x) = 0$ when $x<0$ and $\Theta(x) = 1$ for $x \geq 0$. Using the scaling $t = \frac{t^{\prime} - t_{0}}{1-(1-q)s t_{0}}$, we can rewrite the integral as % \begin{equation*} I = \int_{- \alpha}^{\infty} \; dt \exp_{q}(-st) \; f(t) [\exp_{q}(-s t_{0})]^{2-q} \; \Theta(t), \end{equation*} % where $\alpha = \frac{t_{0}}{1-(1-q)s t_{0}}$. On applying the Heaviside step function we get % \begin{equation} I = L_{q}[f(t)] [\exp_{q}(-s t_{0})]^{2-q}. \end{equation} % Hence the $q$-translation identity has been proved. \end{enumerate} The properties of linearity and $q$-convolution have been established in Ref. \cite{naik2016q} and here we state them just for the sake of completeness. The two properties of linearity and $q$-convolution are \begin{enumerate} \item Linearity: \begin{equation} L_{q} [a_{1} f_{1}(t) + a_{2} f_{2}(t) ] = a_{1} L_{q} [ f_{1}(t) ] + a_{2} L_{q} [ f_{2}(t) ]. \end{equation} \item $q$-convolution: \\ Let $f(t)$ and $g(t)$ be two positive scalar functions of `$t$', and $F_{q}(s)$ and $G_{q}(s)$ be their $q$-Laplace transforms, then \begin{equation} L_{q}[f(t) \ast g(t)] = F_{q}(s) \ast G_{q} (s), \end{equation} where $f(t) \ast g(t) = \int_{0}^{t} f(\tau) g(t-\tau) d \tau$.
\end{enumerate} \noindent{\it Laplace transform of $q$-derivatives and $q$-integrals:} The properties of the $q$-Laplace transform based on derivatives and integrals have been derived in Ref. \cite{naik2016q}. For the sake of completeness, we give below the expression for the generalized Laplace transform of a derivative function \begin{eqnarray} \fl L_{q} \left[ \frac{{\rm d}^{n}} {{\rm d} t^{n} } f(t) \right] &=& - \left(f^{(n-1)}(t) + (1-\delta_{1n}) \sum_{\ell=1}^{n-1} Q_{\ell -1} (q) s^{\ell} f^{(n-\ell-1)}(t) \right) \Bigg\vert_{t=0} \nonumber \\ \fl & & + Q_{n-1}(q) s^{n} L_{\frac{a_{n+1}}{a_{n}}} (f(t)) (a_{n} s), \end{eqnarray} where $Q_{n}(q) = \prod_{j=0}^{n} a_{j}$ with $a_{j} = jq - (j-1)$ and $\delta_{1n}$ is the Kronecker delta. The Laplace transform of the integral reads: \begin{equation} L_{q} \left[ \int_{0}^{t} f(x) dx \right] = \frac{2-q}{s} L_{\frac{1}{2-q}} \left[ f(t) \right] \left(s(2-q)\right). \label{integrallaplacetransform} \end{equation} In the present work, we derive the $q$-Laplace transform properties for the $q$-calculus of functions. The concept of $q$-calculus was first introduced in \cite{borges2004possible}, where the authors defined the derivative and integral based on a $q$-deformation. Many such derivatives and integrals were investigated in subsequent works \cite{umarov2022mathematical,lenzi1999q,naik2016q}. For the present work we use the $q$-derivative defined in \cite{chakrabarti2010nonextensive} and its corresponding $q$-deformed integral. The relevant $q$-derivative and $q$-integral operators are \begin{equation} \fl D_{q}(s) = \frac{1}{1-(1-q) \left( s \frac{d}{ds} \right)} \frac{d}{ds}; \qquad \int d_{q} x = \int dx \left[1-(1-q) x \frac{d}{dx}\right]. \end{equation} \begin{enumerate} \item Derivative of Laplace transform: \begin{equation} D_{q}^{n}(s) \{ L_{q} [f(t)] \} (s) = L_{q}[(-t)^{n} f(t)].
\end{equation} \noindent{\it Proof:} Let us consider the first derivative of the $q$-Laplace transform \begin{eqnarray} D_{q} \{ L_{q} [f(t)] \} (s) &=& D_{q}(s) \left \{ \int_{0}^{\infty} dt \exp_{q}(-st) f(t) \right \} \nonumber \\ &=& \int_{0}^{\infty} dt \, D_{q}(s) \exp_{q}(-st) f(t) \nonumber \\ &=& L_{q}[(-t) f(t)]. \end{eqnarray} The second derivative of the $q$-Laplace transform is \begin{equation} D_{q}^{2} \{ L_{q} [f(t)] \} (s) = L_{q}[(-t)^{2} f(t)]. \end{equation} For the $n^{th}$ $q$-derivative we get \begin{equation} D_{q}^{n} \{ L_{q} [f(t)] \} (s) = L_{q}[(-t)^{n} f(t)]. \end{equation} \item Integral of a Laplace transform: \begin{equation} \int_{s}^{\infty} d_{q}s^{\prime} \, F_{q}(s^{\prime}) = L_{q} \left[ \frac{f(t)}{t} \right]. \end{equation} Let us consider the definition of the generalized Laplace transform and apply the $q$-integral operator \begin{equation} \int_{s}^{\infty} d_{q} s^{\prime} \, F_{q}(s^{\prime}) = \int_{s}^{\infty} d_{q} s^{\prime} \int_{0}^{\infty} \exp_{q}(-s^{\prime}t) f(t) dt. \end{equation} Rearranging the order of the integrals and evaluating the $q$-integral with respect to `$s^{\prime}$' gives \begin{equation} \int_{s}^{\infty} d_{q} s^{\prime} \, F_{q}(s^{\prime}) = \int_{0}^{\infty} \frac{1}{t} \exp_{q}(-st) f(t) dt = L_{q} \left[ \frac{f(t)}{t} \right]. \end{equation} Hence proved. \end{enumerate} \section{Post-Widder's method of inverse Laplace transform} \label{postwiddersmethod} The inverse Laplace transform can be calculated using a Bromwich contour integral over the complex plane. As an alternative, we introduce a $q$-deformed generalization of Post-Widder's method to compute the inverse Laplace transform using only real variables. To derive the $q$-Widder formula we first rewrite the type-I kernel as $K_{q}(s,t) = \exp\left( \frac{1}{1-q} \ln (1-(1-q)st) \right)$.
We then take the $k^{th}$ derivative of the Laplace transform, scaling the function $f(t)$ to $f(\xi_{m} t)$; the resulting expression is \begin{equation} \fl F_{q}^{(k)} (s) = \int_{0}^{\infty} dt [-(1-q)t]^{k} \frac{\Gamma \left( \frac{2-q}{1-q} \right)}{\Gamma \left( \frac{2-q}{1-q} - k \right)} (1-(1-q)st)^{\frac{1}{1-q} - k} f(t \xi_{m}). \end{equation} Through a change of variables $t=xy$ and $s=k/x$ we get: \begin{equation} \fl F_{q}^{(k)} \left( \frac{k}{x} \right) = [-(1-q)]^{k} x^{k+1} \frac{\Gamma \left( \frac{2-q}{1-q} \right)}{\Gamma \left( \frac{2-q}{1-q} - k \right)} \int_{0}^{\infty} (1-(1-q) ky)^{\frac{1}{1-q} - k } y^{k} dy f(x y \xi_{m}). \end{equation} The function $y^{k} (1-(1-q)k y)^{\frac{1}{1-q} - k}$ has a single maximum which is sharply peaked at $y=1$. Hence we can replace the function $f(\xi_{m} xy)$ by $f(\xi_{m}x)$ and get \begin{equation} \fl F_{q}^{(k)} \left( \frac{k}{x} \right) = f(\xi_{m}x) [-(1-q)]^{k} x^{k+1} \frac{\Gamma \left( \frac{2-q}{1-q} \right)} {\Gamma \left( \frac{2-q}{1-q} - k \right)} \int_{0}^{\infty} (1-(1-q) ky)^{\frac{1}{1-q} - k } y^{k} dy. \label{kderiveqn} \end{equation} The solution of the integral in the above equation is \begin{equation} \int_{0}^{\infty} (1-(1-q) ky)^{\frac{1}{1-q} - k } y^{k} dy = \frac{\Gamma(k+1) \Gamma \left(\frac{2-q}{1-q} -k \right)} {\Gamma \left( \frac{3-2q}{1-q} \right) [(1-q)k]^{k+1}}. \end{equation} Replacing the value of the integral in Eq. (\ref{kderiveqn}) and simplifying we have \begin{equation} F_{q}^{(k)} \left( \frac{k}{x} \right) = (-1)^{k} f(x \xi_{m}) x^{k+1} \frac{\Gamma(k+1)}{k^{k+1} (2-q) }, \end{equation} where $\xi_{m} = \left[ \frac{1+(1-q)}{Q_{m}(2-q)} \right]^{\frac{1}{m-1}}$ is valid for $m \geq 2$ and ill defined for $m=1$, and the polynomial $Q_{m}(q) = \prod_{j=1}^{m} (1-(1-q)j)$ is defined only for $m \geq 1$.
Substituting $t=\xi_{m} x$, the $q$-deformed Widder's formula for the inverse Laplace transform is \begin{equation} f(t) = \lim_{k \rightarrow \infty} \frac{(-1)^{k}} {\Gamma(k+1)} F_{q}^{(k)} (s) (2-q) s^{k+1} |_{s=\frac{k \xi_{m}} {t}}. \label{qwiddersfinalformula} \end{equation} \section{Inverse Laplace transform for some elementary functions} In this section we evaluate the Laplace transform of some elementary functions and then compute their inverse transform using Post-Widders method. \vspace{0.2cm} \noindent {\bf Algebraic function:} \\ Let us consider an algebraic function $f(t)=t^{m-1}$, the $q$-Laplace transform of this function is \begin{equation} L_{q}\left[t^{m-1}\right] = \displaystyle \int_{0}^{\infty} dt \exp_{q}(-st) t^{m-1}. \label{qlap_alg} \end{equation} Evaluating the Laplace transform we get \begin{equation} F_{q}(s) = \frac{\Gamma(m)}{Q_{m}(2-q)} \frac{1}{s^{m}}\;, \qquad \mbox{for}\; m \geq 2. \end{equation} The inverse of the $q$-Laplace transform can be computed using a $q$-version of the Widder's formula. For this we calculate the $k^{th}$-derivative of $F_{q}(s)$ \begin{equation} F_{q}^{k}(s) = \frac{1}{Q_{m}(2-q)} (-1)^{k} \frac{\Gamma(m+k)}{s^{m+k}}. \label{algkderiv} \end{equation} Substituting Eq. (\ref{algkderiv}) in the $q$-Widder's formula we get \begin{eqnarray} \fl f(t) &=& \displaystyle \lim_{k \rightarrow \infty} \frac{(-1)^{k}}{\Gamma(k+1)} \left( \frac{1}{Q_{m}(2-q)} (-1)^{k} \frac{\Gamma(m+k)}{s^{m+k}}\right) Q_{1}(2-q) s^{k+1}\Biggr|_{s = \frac{k \xi_{m}}{t}}, \nonumber \\ \fl &=& \displaystyle \lim_{k \rightarrow \infty} \frac{(-1)^{k}}{\Gamma(k+1)} \left( \frac{1}{Q_{m}(2-q)} (-1)^{k} \frac{\Gamma(m+k)}{s^{m-1}}\right) Q_{1}(2-q) \Biggr|_{s = \frac{k \xi_{m}}{t}}. \end{eqnarray} where, $\xi_{m} = \left(\frac{Q_{1}(2-q)}{Q_{j}(2-q)}\right)^{\frac{1}{j-1}}$ and the subscript `$j$' depends on the exponent of `$s$', since the exponent of `$s$' is `$m-1$' the value of `$j$' is `$m$'. 
Substituting the value of `$s$' we get: \begin{eqnarray} f(t) &=& \frac{Q_{1}(2-q)}{Q_{m}(2-q)} \frac{t^{m-1}}{\xi_{m}^{m-1}} \left(\displaystyle \lim_{k \rightarrow \infty} \frac{ k^{1- m} \Gamma(m+k)}{\Gamma(k+1)} \right). \label{q_ilap_ag_wid} \end{eqnarray} On substituting the limits and simplifying Eq. (\ref{q_ilap_ag_wid}) we recover the algebraic function $f(t) = t^{m-1}$. This validates the Post-Widder method of computing the inverse Laplace transform for an algebraic function. \vspace{0.2cm} \noindent {\bf Exponential functions:} \\ {\it Regular exponential:} Let us consider a generalized form of the exponential function $f(t) = \exp\left(\pm \sigma_{\epsilon}\; t\right)$ where $\sigma_{\epsilon} = i^{\epsilon} \alpha$ with $\alpha \in \mathbb{R}$ and $\epsilon = 0\; \mbox{or}\; 1$. We can write this definition in the following way: \begin{eqnarray} f(t) = \exp\left(\pm \sigma_{\epsilon}\; t\right) = \left\{ \begin{array}{ll} \exp(\pm \alpha t) & \mbox{for}\; \epsilon = 0 \\ \exp(\pm i \alpha t) & \mbox{for}\; \epsilon = 1 \\ \end{array} \right. \label{defn_class_exp} \end{eqnarray} In terms of hypergeometric functions, the classical exponential given in Eq. (\ref{defn_class_exp}) can be expressed as: \begin{equation} f(t) = \exp\left(\pm \sigma_{\epsilon}\; t\right) = {}_{0}F_{0}\left(\;;\;;\pm \sigma_{\epsilon}\; t\right). \end{equation} The $q$-Laplace transform of $f(t)$ gives: \begin{eqnarray} L_{q}\left[{}_{0}F_{0}\left(\;;\;;\pm \sigma_{\epsilon}\; t\right)\right] &=& \displaystyle \int_{0}^{\infty} dt \; \exp_{q}(- s t) \;{}_{0}F_{0} \left(\;;\;;\pm \sigma_{\epsilon} t \right) \nonumber\\ &=& \frac{1}{s\; Q_{1}(2-q)}\; {}_{1}F_{1} \left(1;\frac{3-2q}{1-q};\pm \frac{\sigma_{\epsilon}}{(1-q) s}\right). \label{lap_trfm_class_exp} \end{eqnarray} To find the $q$-inverse Laplace transform using a Widder-like formula for the Tsallis $q$-case, we take the $k^{th}$ derivative of $F_{q}(s)$ given by Eq.
(\ref{lap_trfm_class_exp}), which is \begin{equation} F_{q}^{(k)}(s) = \frac{1}{Q_{1}(2-q)} \left[ \displaystyle \sum_{n=0}^{\infty} \frac{1}{\left(\frac{3-2q}{1-q}\right)_{n}} \frac{\left(\pm \frac{\sigma_{\epsilon}}{1-q}\right)^{n}}{n!} (-1)^{k} \frac{\Gamma(n+k+1)}{s^{n+k+1}}\right]. \label{expkthderiv} \end{equation} Substituting Eq. (\ref{expkthderiv}) in the $q$-Widder formula, Eq. (\ref{qwiddersfinalformula}), and simplifying it we get: \begin{equation} \fl f(t) = \displaystyle \sum_{n=0}^{\infty} \frac{1}{\left(\frac{3-2q}{1-q}\right)_{n}} \frac{1}{n!} \left(\pm \frac{\sigma_{\epsilon} t}{(1-q) \xi_{m}}\right)^{n} \left[ \lim_{k \rightarrow \infty} k^{1-n} \left(1 + \frac{n}{k}\right) \frac{\Gamma(k+n)}{\Gamma(k+1)} \right]. \label{qwidd_class_exp_lim} \end{equation} Applying the limits in Eq. (\ref{qwidd_class_exp_lim}) and simplifying the expression we recover the initial function $f(t) = \exp(\pm \sigma_{\epsilon} t)$, validating the Post-Widder method. \vspace{0.2cm} \noindent{\it Generalized exponential:} In analogy with the exponential, we can write the generalized exponential as $f(t) = \exp_{q^{\prime}}\left(\pm \sigma_{\epsilon}\; t\right)$ where $\sigma_{\epsilon} = i^{\epsilon} \alpha$ with $\alpha \in \mathbb{R}$ and $\epsilon = 0\; \mbox{or}\; 1$. This definition of $f(t)$ can be written as \begin{eqnarray} f(t) = \exp_{q^{\prime}}\left(\pm \sigma_{\epsilon}\; t\right) = \left\{ \begin{array}{ll} \exp_{q^{\prime}}(\pm \alpha t) & \mbox{for}\; \epsilon = 0 \\ \exp_{q^{\prime}}(\pm i \alpha t) & \mbox{for}\; \epsilon = 1 \\ \end{array} \right. \label{defn_q_exp} \end{eqnarray} The generalized exponential given in equation (\ref{defn_q_exp}) can be written in terms of hypergeometric functions \begin{equation} f(t) = \exp_{q^{\prime}}\left(\pm \sigma_{\epsilon}\; t\right) = {}_{1}F_{0}\left(-\frac{1}{1-q^{\prime}};\;;\mp (1-q^{\prime})\sigma_{\epsilon}\; t\right).
\end{equation} Applying the $q$-Laplace transform to $f(t)$ we get: \begin{eqnarray} L_{q}[f(t)] &=& \displaystyle \int_{0}^{\infty} dt \; \exp_{q}(- s t)\; {}_{1}F_{0}\left(-\frac{1}{1-q^{\prime}};\;; \mp (1-q^{\prime})\sigma_{\epsilon}\; t\right) \nonumber \\ &=& \frac{1}{s\; Q_{1}(2-q)} \;{}_{2}F_{1} \left(1, - \frac{1}{1-q^{\prime}};\frac{3-2q}{1-q};\mp \frac{(1-q^{\prime})\sigma_{\epsilon}}{(1-q)s}\right). \nonumber \label{lap_trfm_q_exp} \end{eqnarray} To find the inverse $q$-Laplace transform using Widder's method, we compute the $k^{th}$ derivative of $F_{q}(s)$ given in the previous expression. The derivative so calculated is \begin{equation} F_{q}^{(k)}(s,q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} \frac{\left( - \frac{1}{1-q^{\prime}}\right)_{n} \left( \mp (1-q^{\prime}) \sigma_{\epsilon}\right)^{n}}{Q_{n+1}(2-q) \;\; n!} \frac{(-1)^{k} \Gamma(n+k+1)}{s^{n+k+1}}. \label{qwiddersqexponential} \end{equation} Thus, to find the $q$-Widder formula in this case we substitute Eq. (\ref{qwiddersqexponential}) in Eq. (\ref{qwiddersfinalformula}) to get: \begin{equation} \fl f(t;q^{\prime}) = Q_{1}(2-q) \left( \displaystyle \sum_{n=0}^{\infty} \frac{\left( - \frac{1}{1-q^{\prime}}\right)_{n} \left( \mp (1-q^{\prime}) \sigma_{\epsilon}\;t\;\right)^{n}}{Q_{n+1}(2-q)\; \xi_{m}^{n} \;\; n!} \left[\displaystyle \lim_{k \rightarrow \infty} \frac{(n+k) (k)_{n}}{k^{2+n}} \right] \right). \label{qwidd_q_exp_lim} \end{equation} To get the inverse $q$-transform we apply the limits in the above expression and simplify using the Pochhammer identities, and this gives us the generalized exponential function defined in Eq. (\ref{defn_q_exp}). This result confirms the use of the Post-Widder method.
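The $q$-deformed formula, Eq. (\ref{qwiddersfinalformula}), can likewise be checked numerically. The following sketch (our own illustrative Python fragment, assuming $q<1$ and $m \geq 2$) evaluates the finite-$k$ approximant for the algebraic case $F_{q}(s) = \Gamma(m)/(Q_{m}(2-q)\, s^{m})$ using the closed-form derivative of Eq. (\ref{algkderiv}); the approximant converges to $f(t) = t^{m-1}$ as $k$ grows:

```python
import math

def q_widder_approximant(t, m, q, k):
    """Finite-k q-Widder approximant for F_q(s) = Gamma(m)/(Q_m(2-q) s^m),
    whose exact inverse is f(t) = t^(m-1).  Uses the closed-form derivative
    F_q^(k)(s) = (-1)^k Gamma(m+k)/(Q_m(2-q) s^(m+k)).  Assumes q < 1, m >= 2."""
    # Q_m(2-q) = prod_{j=1}^{m} (1 + (1-q) j)
    Qm = math.prod(1.0 + (1.0 - q) * j for j in range(1, m + 1))
    xi = ((2.0 - q) / Qm) ** (1.0 / (m - 1))   # xi_m, using Q_1(2-q) = 2-q
    s = k * xi / t
    # (-1)^k/Gamma(k+1) * F_q^(k)(s) * (2-q) s^(k+1), evaluated in log form
    log_val = math.lgamma(m + k) - math.lgamma(k + 1) + (1 - m) * math.log(s)
    return (2.0 - q) / Qm * math.exp(log_val)

# approximants for m = 3, q = 0.5, t = 2 approach f(2) = 2^2 = 4
for k in (10, 100, 10000):
    print(k, q_widder_approximant(2.0, 3, 0.5, k))
```

Working with $\log \Gamma$ avoids the overflow that $\Gamma(m+k)$ would cause at the large $k$ needed for convergence.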
\vspace{0.2cm} \noindent {\bf Gaussian function:} \\ {\it Regular Gaussian:} The $q$-Laplace transform of a Gaussian function $f(t) = \exp(- \alpha t^{2})$ is \begin{equation} L_{q}\left[\exp\left(- \alpha t^{2}\right)\right] = \displaystyle \int_{0}^{\infty} dt\; \exp_{q}\left(- s t\right) \exp\left( - \alpha t^{2}\right), \label{qlap_class_gauss} \end{equation} and can be written in terms of the hypergeometric function as follows: \begin{equation} \fl F_{q}(s) = \frac{1}{s\; Q_{1}(2-q)}\; {}_{2}F_{2} \left(1, \frac{1}{2};\frac{3-2q}{2(1-q)}, \frac{4-3q}{2(1-q)}; - \frac{\alpha}{(1-q)^{2} s^{2}} \right). \qquad \end{equation} The $q$-inverse Laplace transform can be found using Widder's method, and for this purpose we evaluate the $k^{th}$ derivative of $F_{q}(s)$, which is \begin{equation} \fl F_{q}^{(k)}(s) = \frac{1}{Q_{1}(2-q)}\displaystyle \sum_{n=0}^{\infty} \frac{\left(\frac{1}{2}\right)_{n} \left(- \frac{\alpha}{(1-q)^{2}}\right)^{n}} {\left(\frac{3-2q}{2(1-q)}\right)_{n}\left(\frac{4-3q}{2(1-q)}\right)_{n}} \frac{(-1)^{k} (2n+k)!}{(2n)!} \frac{1}{s^{2n+k+1}}. \qquad \label{kderivgaussian} \end{equation} Using Eq. (\ref{kderivgaussian}) in Eq. (\ref{qwiddersfinalformula}) we can derive the Widder formula for the Gaussian function as \begin{equation} \fl f(t) = \displaystyle \left[\displaystyle \sum_{n=0}^{\infty} \frac{\left(\frac{1}{2}\right)_{n} \left(- \frac{\alpha}{(1-q)^{2}}\right)^{n}} {\left(\frac{3-2q}{2(1-q)}\right)_{n}\left(\frac{4-3q}{2(1-q)}\right)_{n}} \left(\lim_{k \rightarrow \infty} \frac{(-1)^{k}}{\Gamma(k+1)} \frac{(-1)^{k} (2n+k)!}{k^{2n}} \right) \frac{t^{2n}}{(2n)! \xi_{m}^{2n}}\right]. \label{qinv_widder_gauss} \end{equation} Applying the limiting value for $k$ as \begin{equation*} \displaystyle \lim_{k \rightarrow \infty} \frac{(-1)^{k}}{\Gamma(k+1)} \frac{(-1)^{k} (2n+k)!}{k^{2n}} = \displaystyle \lim_{k \rightarrow \infty} \frac{\Gamma(2n+k+1)}{\Gamma(k+1)} k^{1- (2n+1)} \rightarrow 1.
\end{equation*} Using the Pochhammer identity $\left(\frac{3-2q}{2(1-q)}\right)_{n}\left(\frac{4-3q}{2(1-q)}\right)_{n} = \frac{1}{4^{n}} \frac{Q_{2n+1}(2-q)}{(1-q)^{2n} Q_{1}(2-q)}$, and the factorial identity $(2n)! = 4^{n} \; n!\;\left(\frac{1}{2}\right)_{n}$, we can simplify Eq. (\ref{qinv_widder_gauss}). This yields the Gaussian function $f(t) = \exp(-\alpha t^{2})$, confirming the usefulness of the Post-Widder method. \vspace{0.2cm} \noindent{\it Generalized Gaussian:} The $q$-Laplace transform of a generalized Gaussian function $f(t) = \exp_{q^{\prime}}(- \alpha t^{2})$ is: \begin{equation} L_{q}\left[\exp_{q^{\prime}}\left( - \alpha t^{2} \right)\right] = \displaystyle \int_{0}^{\infty} dt \; \exp_{q}\left( - s t \right) \exp_{q^{\prime}}\left( - \alpha t^{2} \right), \label{qlap_qwidd_qguss} \end{equation} which on integration gives the hypergeometric function \begin{equation} \fl F_{q}(s;q^{\prime}) = \frac{1}{s Q_{1}(2-q)} {}_{3}F_{2} \left(1,\frac{1}{2},-\frac{1}{1-q^{\prime}}; \frac{3-2q}{2(1-q)}, \frac{4-3q}{2(1-q)}; \frac{(1-q^{\prime})\alpha}{(1-q)^{2} s^{2}}\right). \end{equation} The inverse of the $q$-Laplace transform can be computed using Widder's method, and to find it we take the $k^{th}$ derivative of the function $F_{q}(s)$ as \begin{equation} F_{q}^{(k)}(s;q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} \frac{\left(-\frac{1}{1-q^{\prime}}\right)_{n}}{Q_{2n+1}(2-q)} \frac{((1-q^{\prime}) \alpha)^{n}}{n!} (-1)^{k} \frac{\Gamma(k+2n+1)}{s^{2n+k+1}} . \label{kderivgengaussian} \end{equation} Substituting Eq. (\ref{kderivgengaussian}) in the $q$-Widder formula in Eq. (\ref{qwiddersfinalformula}) we get: \begin{equation} \fl f(t,q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} \frac{ Q_{1}(2 - q) \left(-\frac{1}{1-q^{\prime}}\right)_{n}}{Q_{2n+1}(2-q)} \frac{((1-q^{\prime}) \alpha t^{2})^{n}}{n! \;\; \xi_{m}^{2n}} \left[ \lim_{k\rightarrow \infty} \frac{\Gamma(k+2n+1)}{k^{2n} \Gamma(k+1)}\right] .
\label{qinv_qwidd_01} \end{equation} On applying the appropriate limits and using the Pochhammer identities, we recover the generalized Gaussian function, indicating that the Post-Widder's method works. \vspace{0.2cm} \noindent {\bf Circular functions:} \\ The circular functions (both cosine and sine) can be expressed in terms of the hypergeometric function and written in a compact form as follows: \begin{equation} f(t) = (\alpha t)^{\delta} {}_{0}F_{1}\left(\; ; \frac{1}{2} + \delta; - \frac{1}{4} \alpha^{2} t^{2}\right) = \left\{ \begin{array}{ll} \cos \alpha t & \mbox{for}\; \delta = 0 \\ \sin \alpha t & \mbox{for}\; \delta = 1 \\ \end{array} \right. \label{defn_q_exp1} \end{equation} The $q$-Laplace transform of Eq. (\ref{defn_q_exp1}) is \begin{eqnarray} \fl F_{q}(s) &=& \frac{1}{s} \left(\frac{\alpha}{s}\right)^{\delta} \frac{1}{Q_{\delta+1}(2-q)} \nonumber \\ \fl & & \times {}_{1}F_{2}\left( 1; \frac{1}{2}\left( \frac{3-2q}{1-q} + \delta \right), \frac{1}{2}\left( \frac{4-3q}{1-q} + \delta \right); - \frac{\alpha^{2}}{4 (1-q)^{2} s^{2}} \right) \label{defn_class_circular} \\ \fl &=& \left\{ \begin{array}{ll} L_{q}\left[\cos \alpha t\right] & \mbox{for}\; \delta = 0 \\ L_{q}\left[\sin \alpha t\right] & \mbox{for}\; \delta = 1 \\ \end{array} \right. \nonumber \end{eqnarray} For calculating the inverse $q$-Laplace transform, we find the $k^{th}$ derivative of Eq. (\ref{defn_class_circular}) as follows: \begin{eqnarray} \fl F_{q}^{(k)}(s) = \frac{1}{Q_{\delta+1}(2-q)}\displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{n+k} \Gamma(2n+\delta+k+1) \alpha^{2n+\delta}}{(1-q)^{2n} \left( \frac{3-2q}{1-q} + \delta \right)_{2n} (2n+\delta)! } \right) \frac{1}{s^{2n+\delta + k + 1}}. \label{kderiv_circfunc} \end{eqnarray} \noindent{(i)} $\cos \alpha t$: For $\delta = 0$, the $k^{th}$ derivative $F_{q}^{(k)}(s)$ can be written from Eq.
(\ref{kderiv_circfunc}) as \begin{eqnarray} \fl F_{q}^{(k)}(s) = \frac{1}{Q_{1}(2-q)}\displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{n+k} \Gamma(2n+k+1) \alpha^{2n}}{(1-q)^{2n} \left( \frac{3-2q}{1-q} \right)_{2n} (2n)! } \right) \frac{1}{s^{2n + k + 1}}. \qquad \label{kderiv_cosfunc} \end{eqnarray} To calculate the inverse Laplace transform we use the $q$-generalization of Widder's method, and for this we substitute Eq. (\ref{kderiv_cosfunc}) in Eq. (\ref{qwiddersfinalformula}) to obtain \begin{equation} \fl f(t) = \displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{n} (\alpha t)^{2n}}{(1-q)^{2n} \left( \frac{3-2q}{1-q} \right)_{2n} (2n)! \xi^{2n}} \right) \left[ \displaystyle \lim_{k \rightarrow \infty} \frac{ k^{1-(2n+1)}\Gamma(2n+k+1)}{\Gamma(k+1) } \right]. \end{equation} Under the limiting conditions of $k$ and using the proper Pochhammer identities we get \begin{equation} f(t) = \displaystyle \sum_{n=0}^{\infty} \frac{\left(- \;\alpha^{2} t^{2}\right)^{n}}{(2n)!} = \cos \alpha t. \end{equation} \noindent{(ii)} $\sin \alpha t$: Considering $\delta =1$, the $k^{th}$ derivative $F_{q}^{(k)}(s)$ can be written as \begin{equation} \fl F_{q}^{(k)}(s) = \frac{1}{Q_{2}(2-q)}\displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{n+k} \Gamma(2n+k+2) \alpha^{2n+1}}{(1-q)^{2n} \left( \frac{4-3q}{1-q} \right)_{2n} (2n+1)! } \right) \frac{1}{s^{2n + k + 2}}. \label{kderivsin} \end{equation} Substituting Eq. (\ref{kderivsin}) in $q$-Widder's formula in Eq. (\ref{qwiddersfinalformula}) we get: \begin{equation} \fl f(t) = \displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{n} (\alpha t)^{2n+1}}{(1-q)^{2n} \left( \frac{4-3q}{1-q} \right)_{2n} (2n+1)! \xi^{2n+1}} \right) \left[ \displaystyle \lim_{k \rightarrow \infty} \frac{ k^{1-(2n+2)}\Gamma(2n+k+2)}{\Gamma(k+1) } \right].
\end{equation} On application of the limiting value of $k$ and using the Pochhammer identities we get: \begin{equation} f(t) = \displaystyle \sum_{n=0}^{\infty} (-1)^{n}\frac{\left(\alpha t\right)^{2n+1}}{(2n+1)!} = \sin \alpha t. \end{equation} Thus we prove that the inverse of the $q$-Laplace transform works. \vspace{0.2cm} \noindent {\bf Generalized circular functions:} \\ Using the generalized exponential a set of circular functions has been defined in Ref. \cite{borges1998q}. In terms of the hypergeometric functions these generalized circular functions can be written in a compact form as \begin{eqnarray} \fl f(t,q^{\prime}) &=& (\alpha t)^{\delta} {}_{2}F_{1}\left(\frac{1}{2}\left(- \frac{1}{1-q^{\prime}} + \delta\right), \frac{1}{2}\left( - \frac{q^{\prime}}{1-q^{\prime}} + \delta\right); \frac{1}{2} + \delta; - (1-q^{\prime})^{2} \alpha^{2} t^{2}\right), \nonumber \\ \fl &=& \left\{ \begin{array}{ll} \cos_{q^{\prime}} \alpha t & \mbox{for}\; \delta = 0 \\ \sin_{q^{\prime}} \alpha t & \mbox{for}\; \delta = 1 \\ \end{array} \right. \label{defn_q_circular} \end{eqnarray} The $q$-Laplace transform of the generalized circular function is \begin{eqnarray} \fl F_{q}(s;q^{\prime}) &=& \frac{1}{s} \left(\frac{\alpha}{s}\right)^{\delta} \frac{1}{Q_{\delta+1}(2-q)} \times \nonumber \\ \fl & & {}_{3}F_{2}\left( 1, \frac{ - a + \delta}{2}, \frac{- a + 1 + \delta}{2}; \frac{b + \delta}{2}, \frac{b + 1 + \delta}{2}; - \frac{(1-q^{\prime})^{2}\alpha^{2}}{(1-q)^{2}s^{2}} \right), \nonumber \\ \fl &=& \left\{ \begin{array}{ll} L_{q}\left[\cos_{q^{\prime}} \left(\alpha t\right) \right] & \mbox{for}\; \delta = 0 \\ L_{q}\left[\sin_{q^{\prime}} \left(\alpha t\right) \right] & \mbox{for}\; \delta = 1 \\ \end{array} \right. \label{L_defn_q_circular} \end{eqnarray} where we have used the relations: \begin{equation} a = \frac{1}{1-q^{\prime}}; \qquad b = \frac{1}{1-q} + 2.
\label{hypergeometricfactors} \end{equation} The $k^{th}$-derivative of equation (\ref{L_defn_q_circular}) is calculated to find the inverse $q$-Laplace transform using Widder's method. The $k^{th}$ derivative is \begin{eqnarray} \fl F_{q}^{(k)}(s;q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{n+k} \Gamma(2n+\delta+k+1) \left(- \frac{1}{1-q^{\prime}} + \delta \right)_{2n} \frac{(1-q^{\prime})^{2n}}{(1-q)^{2n}} \alpha^{2n+\delta}} {Q_{\delta+1}(2-q) \left( \frac{3-2q}{1-q} + \delta \right)_{2n} (2n+\delta)! \; s^{2n+\delta + k + 1} } \right). \end{eqnarray} \noindent{(i)} $\cos_{q^{\prime}} \alpha t$: For $\delta=0$, the $k^{th}$-derivative $F_{q}^{(k)}(s;q^{\prime})$ can be written as \begin{eqnarray} \fl F_{q}^{(k)}(s;q^{\prime}) = \frac{(-1)^{k}}{Q_{1}(2-q)} \displaystyle \sum_{n=0}^{\infty} \frac{(-1)^{n} \left(- \frac{1}{1-q^{\prime}} \right)_{2n} \Gamma(2n+k+1) \frac{(1-q^{\prime})^{2n}}{(1-q)^{2n}} \alpha^{2n}}{ \left( \frac{3-2q}{1-q} \right)_{2n} (2n)!\; s^{2n + k + 1}}. \end{eqnarray} Substituting in the formula for the Post-Widder's method in Eq. (\ref{qwiddersfinalformula}) we get: \begin{equation} \fl f(t;q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} \left(\frac{ \left(- \frac{1}{1-q^{\prime}} \right)_{2n} \left( \frac{ (1-q^{\prime}) \, \alpha t }{1-q} \right)^{2n} } {(-1)^{n} \left( \frac{3-2q}{1-q} \right)_{2n} \xi^{2n} \; (2n)! } \right) \left[ \displaystyle \lim_{k \rightarrow \infty} \frac{ k^{1-(2n+1)}\Gamma(2n+k+1)}{\Gamma(k+1) } \right]. \end{equation} Applying the appropriate limits and also using Pochhammer identities we get \begin{eqnarray} f(t;q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} (-1)^{n} \frac{ \left(- \frac{1}{1-q^{\prime}} \right)_{2n} \left( (1-q^{\prime})\;\alpha t\right)^{2n}}{(2n)!} = \cos_{q^{\prime}} \left(\alpha t\right). \end{eqnarray} \noindent{(ii)} $\sin_{q^{\prime}} \alpha t$: The choice of $\delta=1$ gives the generalized sine function.
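The series recovered above for $\cos_{q^{\prime}}(\alpha t)$ can be cross-checked numerically against Borges' definition $\cos_{q^{\prime}}(x) = \mathrm{Re}\,\exp_{q^{\prime}}(ix)$, with $\exp_{q}(z) = [1+(1-q)z]^{1/(1-q)}$. A minimal sketch (the value $q^{\prime}=0.5$ is an illustrative choice for which the series terminates):

```python
import cmath, math

def exp_q(z, q):
    # Tsallis q-exponential (principal branch); assumes 1 + (1-q) z != 0
    return (1 + (1 - q) * z) ** (1 / (1 - q))

def cos_q_series(x, q, nmax=40):
    # series obtained from the q-Widder inversion:
    #   sum_n (-1)^n (-1/(1-q))_{2n} ((1-q) x)^{2n} / (2n)!
    a = -1 / (1 - q)
    total = 0.0
    for n in range(nmax):
        poch = 1.0
        for j in range(2 * n):        # Pochhammer symbol (a)_{2n}
            poch *= (a + j)
        total += (-1) ** n * poch * ((1 - q) * x) ** (2 * n) / math.factorial(2 * n)
    return total

q, x = 0.5, 0.7
lhs = cos_q_series(x, q)
rhs = exp_q(1j * x, q).real           # cos_q(x) = Re exp_q(i x)
assert abs(lhs - rhs) < 1e-9
```

For $q^{\prime}=0.5$ both sides reduce to $1 - (q^{\prime}x)^{2}$, so the agreement is exact up to rounding.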
The $k^{th}$-derivative $F_{q}^{(k)}(s;q^{\prime})$ of this function is \begin{equation} \fl F_{q}^{(k)}(s;q^{\prime}) = \frac{(-1)^{k}}{Q_{2}(2-q)}\displaystyle \sum_{n=0}^{\infty} \frac{(-1)^{n} \left(- \frac{q^{\prime}}{1-q^{\prime}} \right)_{2n} \Gamma(2n+k+2) \frac{(1-q^{\prime})^{2n}}{(1-q)^{2n}} \alpha^{2n+1}}{ \left( \frac{4-3q}{1-q} \right)_{2n} (2n+1)!\; s^{2n + k + 2}} . \end{equation} Under the appropriate limits and using the proper Pochhammer identities we get \begin{equation} \fl f(t,q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} (-1)^{n+1} \left(- \frac{1}{1-q^{\prime}} \right)_{2n+1} \frac{ \left( (1-q^{\prime})\;\alpha t\right)^{2n+1}}{(2n+1)!} = \sin_{q^{\prime}} \left(\alpha t\right) . \end{equation} Thus the $q$-Laplace transform and the inverse $q$-Laplace transform computed for $\sin_{q^{\prime}} \alpha t$ give consistent results, proving that the Post-Widder's method works. \vspace{0.2cm} \noindent {\bf Hyperbolic functions:} \\ The hyperbolic functions $\cosh \alpha t$ and $\sinh \alpha t$ can be expressed in terms of the hypergeometric function and can be written in compact form as: \begin{equation} f(t) = (\alpha t)^{\delta} {}_{0}F_{1}\left(\; ; \frac{1}{2} + \delta; \frac{1}{4} \alpha^{2} t^{2}\right) = \left\{ \begin{array}{ll} \cosh \alpha t & \mbox{for}\; \delta = 0 \\ \sinh \alpha t & \mbox{for}\; \delta = 1 \\ \end{array} \right. \label{defn_class_hyperb} \end{equation} The $q$-Laplace transform of Eq. (\ref{defn_class_hyperb}) is: \begin{eqnarray} \fl F_{q}(s) &=& \frac{1}{s} \left(\frac{\alpha}{s}\right)^{\delta} \frac{1}{Q_{\delta+1}(2-q)} {}_{1}F_{2}\left( 1; \frac{b+\delta}{2}, \frac{b+1+\delta}{2}; \frac{\alpha^{2}}{4 (1-q)^{2} s^{2}} \right) \nonumber \\ \fl &=& \left\{ \begin{array}{ll} L_{q}[\cosh \alpha t] & \mbox{for}\; \delta = 0 \\ L_{q}[\sinh \alpha t] & \mbox{for}\; \delta = 1 \\ \end{array} \right. \label{defn_hyperbolic} \end{eqnarray} where the factor $b$ used in Eq. (\ref{defn_hyperbolic}) is defined in Eq.
(\ref{hypergeometricfactors}). The inverse $q$-Laplace transform can be calculated using the Post-Widder's method. For this we compute the $k^{th}$-derivative of Eq. (\ref{defn_hyperbolic}): \begin{eqnarray} \fl F_{q}^{(k)}(s) = \frac{1}{Q_{\delta+1}(2-q)}\displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{k} \Gamma(2n+\delta+k+1) \alpha^{2n+\delta}}{(1-q)^{2n} \left( \frac{3-2q}{1-q} + \delta \right)_{2n} (2n+\delta)! } \right) \frac{1}{s^{2n+\delta + k + 1}}. \end{eqnarray} \noindent{(i)} $\cosh \alpha t$: For the $\cosh \alpha t$ ($\delta=0$) function, the $k^{th}$-derivative is \begin{eqnarray} F_{q}^{(k)}(s) = \frac{1}{Q_{1}(2-q)}\displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{k} \Gamma(2n+k+1) \alpha^{2n}}{(1-q)^{2n} \left( \frac{3-2q}{1-q} \right)_{2n} (2n)! } \right) \frac{1}{s^{2n + k + 1}}. \end{eqnarray} Substituting in the $q$-generalization of Widder's formula, and applying the appropriate limits and Pochhammer identities, we get \begin{equation} f(t) = \displaystyle \sum_{n=0}^{\infty} \frac{\left(\;\alpha^{2} t^{2}\right)^{n}}{(2n)!} = \cosh \alpha t, \end{equation} which confirms the $q$-Laplace transform as well as the generalized Widder's method defined in our work. \noindent{(ii)} $\sinh \alpha t$: For the choice $\delta=1$ the $k^{th}$-derivative of $F_{q}^{(k)} (s)$ is \begin{eqnarray} \fl F_{q}^{(k)}(s) = \frac{1}{Q_{2}(2-q)}\displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{k} \Gamma(2n+k+2) \alpha^{2n+1}}{(1-q)^{2n} \left( \frac{4-3q}{1-q} \right)_{2n} (2n+1)! } \right) \frac{1}{s^{2n + k + 2}}. \end{eqnarray} Applying the Pochhammer identities and taking the limit $k \rightarrow \infty$ in the equation above gives \begin{eqnarray} f(t) = \displaystyle \sum_{n=0}^{\infty} \frac{\left(\alpha t\right)^{2n+1}}{(2n+1)!} = \sinh \alpha t. \end{eqnarray} These results confirm that the $q$-Laplace transform as well as the inverse transform work as expected.
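For orientation, in the ordinary limit $q \rightarrow 1$ the inversion used throughout this section reduces to the classical Post-Widder formula $f(t) = \lim_{k\rightarrow\infty} \frac{(-1)^{k}}{k!} F^{(k)}(k/t)\,(k/t)^{k+1}$. A small numerical sketch for $F(s) = 1/(s+a)$, whose approximants have a closed form (the function name and parameter values are illustrative):

```python
import math

def post_widder_exp(a, t, k):
    # k-th Post-Widder approximant for F(s) = 1/(s+a): the k-th derivative
    # is (-1)^k k!/(s+a)^(k+1), and at s = k/t the approximant simplifies to
    # f_k(t) = (1 + a t / k)^(-(k+1))  ->  exp(-a t)  as  k -> infinity
    return (1 + a * t / k) ** (-(k + 1))

a, t = 1.3, 0.8
exact = math.exp(-a * t)
errs = [abs(post_widder_exp(a, t, k) - exact) for k in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]
assert errs[2] < 1e-3
```

The error decays like $1/k$, which is the well-known (slow) convergence rate of the Post-Widder scheme.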
\vspace{0.2cm} \noindent {\bf Generalized hyperbolic functions:} \\ The generalized hyperbolic functions {\it viz} the $\cosh_{q^{\prime}} (\alpha t)$ and $\sinh_{q^{\prime}} (\alpha t)$ can be expressed in terms of the hypergeometric function and can be written in a compact form as \begin{eqnarray} \fl f(t,q^{\prime}) &=& (\alpha t)^{\delta} {}_{2}F_{1}\left(\frac{1}{2}\left(- \frac{1}{1-q^{\prime}} + \delta\right), \frac{1}{2}\left( - \frac{q^{\prime}}{1-q^{\prime}} + \delta\right); \frac{1}{2} + \delta; (1-q^{\prime})^{2} \alpha^{2} t^{2}\right) \nonumber \\ \fl &=& \left\{ \begin{array}{ll} \cosh_{q^{\prime}} \alpha t & \mbox{for}\; \delta = 0 \\ \sinh_{q^{\prime}} \alpha t & \mbox{for}\; \delta = 1 \\ \end{array} \right. \label{defn_q_hyperb} \end{eqnarray} The $q$-Laplace transform of the $q$-hyperbolic function is \begin{eqnarray} \fl F_{q}(s;q^{\prime}) &=& \frac{1}{s} \left(\frac{\alpha}{s}\right)^{\delta} \frac{1}{Q_{\delta+1}(2-q)} \times \nonumber \\ \fl & & {}_{3}F_{2}\left( 1, \frac{- a + \delta}{2}, \frac{-(a -1) + \delta}{2}; \frac{b+\delta}{2}, \frac{b+1+\delta}{2} ; \frac{(1-q^{\prime})^{2}\alpha^{2}}{(1-q)^{2}s^{2}} \right) \nonumber \\ \fl &=& \left\{ \begin{array}{ll} L_{q}\left[\cosh_{q^{\prime}} \left(\alpha t\right) \right] & \mbox{for}\; \delta = 0 \\ L_{q}\left[\sinh_{q^{\prime}} \left(\alpha t\right) \right] & \mbox{for}\; \delta = 1 \\ \end{array} \right. \label{L_defn_q_hyperbolic} \end{eqnarray} where the factors used in the hypergeometric function are defined in Eq. (\ref{hypergeometricfactors}). The $k^{th}$-derivative of $F_{q}^{(k)}(s)$ is \begin{eqnarray} \fl F_{q}^{(k)}(s;q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} \left( \frac{(-1)^{k} \Gamma(2n+\delta+k+1) \left(- \frac{1}{1-q^{\prime}} + \delta \right)_{2n} \frac{(1-q^{\prime})^{2n}}{(1-q)^{2n}} \alpha^{2n+\delta}}{ Q_{\delta+1}(2-q) \left( \frac{3-2q}{1-q} + \delta \right)_{2n} (2n+\delta)! s^{2n+\delta + k + 1}} \right). 
\end{eqnarray} For the case {\it (i)} $\cosh_{q^{\prime}} (\alpha t)$ we choose $\delta =0$. Calculating the $k^{th}$ derivative of $F_{q}(s;q^{\prime})$ for this choice, substituting it in the $q$-Widder's formula, applying the appropriate limits and using the Pochhammer identities, we get: \begin{equation} f(t,q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} \frac{ \left(- \frac{1}{1-q^{\prime}} \right)_{2n} \left( (1-q^{\prime})\;\alpha t\right)^{2n}}{(2n)!} = \cosh_{q^{\prime}} \left(\alpha t\right). \end{equation} By choosing $\delta=1$ we obtain the results corresponding to the case {\it (ii)} $\sinh_{q^{\prime}} (\alpha t)$ by repeating the procedure used in the earlier cases. The final result reads: \begin{equation} f(t,q^{\prime}) = \displaystyle \sum_{n=0}^{\infty} \left(- \frac{q^{\prime}}{1-q^{\prime}} \right)_{2n} (1-q^{\prime})^{2n} \frac{ (\alpha t)^{2n+1}}{(2n+1)!} = \sinh_{q^{\prime}} \left(\alpha t\right). \end{equation} Thus we compute the generalized Laplace transform and its inverse using the Post-Widder's method and observe consistent results. \section{Applications in statistical mechanics} \label{smapplications} The Laplace transform has several applications in physics. Notable among these is its use to interrelate the partition function and the density of states. In the canonical ensemble, which describes a closed system, the partition function is the fundamental quantity from which all the thermodynamic quantities are computed. In the microcanonical ensemble, which describes an isolated system, the thermodynamic quantities are calculated from the density of states. In both the canonical and the microcanonical ensemble, the number of particles remains fixed. The density of states is the number of states of a given energy in a system, and the partition function is the average of the density of states weighted by a Boltzmann distribution.
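The statement that the partition function is the Laplace transform of the density of states can be made concrete with a quick numerical sketch: for a power-law density of states $g(E) = c\,E^{\nu-1}$ one has $Z(\beta) = \int_{0}^{\infty} g(E)\,e^{-\beta E}\,dE = c\,\Gamma(\nu)/\beta^{\nu}$. The values of $c$, $\nu$ and $\beta$ below are illustrative:

```python
import math
import mpmath as mp

# canonical partition function as the Laplace transform of the density of
# states, for a power-law g(E) = c * E^(nu - 1)
nu, c, beta = 2.5, 1.0, 1.7
Z_num = mp.quad(lambda E: c * E ** (nu - 1) * mp.exp(-beta * E), [0, mp.inf])
Z_exact = c * math.gamma(nu) / beta ** nu     # closed-form Laplace transform
assert abs(Z_num - Z_exact) < 1e-8
```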
Hence these two quantities, {\it viz.} the partition function and the density of states, can be connected by a Laplace transform \cite{greiner2012thermodynamics}. To investigate the statistical mechanics of systems involving long-range interactions, a generalization of Boltzmann-Gibbs statistics was proposed by Tsallis \cite{tsallis1988possible} using the deformed $q$-exponential. When the average of the density of states is weighted by the Tsallis statistics, i.e., using a deformed $q$-exponential, we can connect the partition function to the density of states using a $q$-deformed Laplace transform. While both the partition function and the density of states can be calculated from first principles, the density of states, which completely characterizes an isolated system, is generally hard to compute. To overcome this one can calculate the partition function in the canonical ensemble and use the inverse Laplace transform to obtain the density of states. In our case we start from the partition function in Tsallis statistics and apply the inverse $q$-Laplace transform to obtain the density of states. \noindent{\it Classical Ideal gas: }\\ As an illustration let us consider a classical ideal gas in $D$ dimensions with the Hamiltonian \cite{greiner2012thermodynamics,prato1995generalized,chakrabarti2008rigid,abe1999thermodynamic} \begin{equation} H=\displaystyle \sum_{i=1}^{DN} \frac{p_{i}^{2}}{2m}, \label{Idealgas} \end{equation} where $p_{i}$ and $m$ are the momenta and mass of the gas particles. The nonextensive statistical mechanics begins with the introduction of a generalization of the entropy. The probability distribution for the microstates can be obtained by optimizing the entropy using physically appropriate constraints. In the course of the development of this field, three different ways of optimization emerged.
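The Gaussian momentum integrals that appear when this Hamiltonian is averaged with the $q$-exponential weight have a closed form in terms of Gamma functions. A numerical sketch for a single momentum degree of freedom with $q<1$, where the weight has compact support (the values of $q$, $\beta$, $m$ are illustrative):

```python
import math
import mpmath as mp

q, beta, m = 0.5, 1.0, 1.0
a = 1 / (1 - q)
pmax = math.sqrt(2 * m / ((1 - q) * beta))    # support cutoff for q < 1
lhs = mp.quad(lambda p: max(1 - (1 - q) * beta * p ** 2 / (2 * m), 0) ** a,
              [-pmax, pmax])
# closed form of int exp_q(-beta p^2/2m) dp for one degree of freedom
rhs = math.sqrt(2 * math.pi * m / beta) * math.gamma(a + 1) / \
      ((1 - q) ** 0.5 * math.gamma(a + 1.5))
assert abs(lhs - rhs) < 1e-8
```

For $q=0.5$ both sides equal $32/15$, and the same Gamma-function structure appears, degree by degree, in the full $DN$-dimensional partition function.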
Of these three methods, the third constraint method \cite{tsallis1998role,tsallis2009escort}, known as the escort probability distribution, is the one presently accepted. Using this constraint leads to implicit thermodynamic relations which are hard to solve. To overcome this one can solve the problem in the second constraint approach, and the results obtained can be transformed to those in the third constraint using a transformation rule based on the inverse temperature. Since we are interested in demonstrating the usefulness of the Laplace transform, we prefer to avoid the complications arising with the third constraint and instead use the second constraint formulation. But we would like to note that any result we obtain in the second constraint can be transformed to the third constraint. The partition function of the system in the second constraint formulation is \begin{equation} Z_{q}(\beta) = \frac{1}{h^{DN} N!} \displaystyle \int_{0}^{\infty} \; d\bar{x}\;d\bar{p}\; \exp_{q}\left( - \beta \displaystyle \sum_{i=1}^{DN} \frac{p_{i}^{2}}{2m} \right), \label{partitionfunctionidealgas} \end{equation} where $\bar{x}$ and $\bar{p}$ are $\left(x_{1},x_{2},\ldots,x_{DN} \right)$, $\left(p_{1},p_{2},\ldots,p_{DN}\right)$. On calculating the integral we get: \begin{equation} \fl Z_{q}(\beta) = \frac{V^{N}(2\pi m)^{\frac{DN}{2}}}{h^{DN} N!} \left(\frac{\Gamma\left(\frac{1}{1-q} +1 \right)}{(1-q)^{\frac{DN}{2}}\; \Gamma\left(\frac{1}{1-q} + \frac{DN}{2} + 1\right)}\right) \frac{1}{\beta^{\frac{DN}{2}}}. \label{pfidealgas} \end{equation} Using the inverse of the $q$-Laplace transform one can calculate the density of states. Here we use the $q$-generalization of Widder's method to get: \begin{equation} g(E) = \displaystyle \lim_{k \rightarrow \infty} \frac{(-1)^{k}}{\Gamma(k+1)} Z_{q}^{(k)}(\beta) \beta^{k+1} Q_{1}(2-q) \Biggr|_{\beta = \frac{k \xi_{m}}{E}}. \label{dosidealgas} \end{equation} Substituting for the partition function Eq.
(\ref{pfidealgas}) in the expression for the density of states Eq. (\ref{dosidealgas}), we apply the limiting value for $\beta$ and simplify the expression to get: \begin{equation} g(E) = \frac{V^{N}(2\pi m)^{\frac{DN}{2}}}{h^{DN} N!\left(\frac{DN}{2}-1\right)!} E^{\frac{DN}{2}-1}, \label{dosfinal} \end{equation} which is the density of states of the classical ideal gas \cite{greiner2012thermodynamics}. A system is in thermodynamic equilibrium when it is in thermal, mechanical and chemical equilibrium with its surroundings, and consequently we need three macroscopic variables to describe the system. Each macroscopic variable can be either an extensive variable, dependent on the number of particles, or an intensive variable, which is independent of the particle number. The energy ($E$) of a system is an example of an extensive variable and the temperature ($T$) is an example of an intensive variable. For each of the thermal, mechanical and chemical equilibria we can choose either of the two variables, and hence we have in total eight different ensembles. Of the eight ensembles there are four in which the temperature is fixed, and these are referred to as the isothermal ensembles, and four in which the heat function is fixed, and these are known as the adiabatic ensembles. The four isothermal ensembles are characterized by the partition function, while the four adiabatic ensembles are characterized by the density of states. The density of states is the number of states with the same value of the heat function in a system, and the partition function is the average of the density of states weighted by a distribution, in general a Boltzmann distribution. The eight ensembles can be grouped into four pairs, with each pair comprising an isothermal ensemble and an adiabatic ensemble. The partition function of an isothermal ensemble can be related to the density of states of the adiabatic ensemble through a Laplace transform.
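The pair of results Eqs. (\ref{pfidealgas}) and (\ref{dosfinal}) can be cross-checked in the forward direction: applying the $q$-Laplace transform to the recovered $g(E) \propto E^{DN/2-1}$ must reproduce the $\beta^{-DN/2}$ partition function with the same Gamma-function prefactor (the overall constant $V^{N}(2\pi m)^{DN/2}/h^{DN}N!$ cancels on both sides). A sketch with $\nu = DN/2 = 2$ and illustrative $q$, $\beta$:

```python
import math
import mpmath as mp

# forward check: q-Laplace transform of g(E) ~ E^(nu-1)/Gamma(nu)
# reproduces Gamma(a+1)/((1-q)^nu Gamma(a+nu+1)) beta^(-nu), a = 1/(1-q)
q, beta, nu = 0.5, 1.0, 2
a = 1 / (1 - q)
Emax = 1 / ((1 - q) * beta)               # cutoff of exp_q(-beta E) for q < 1
lhs = mp.quad(lambda E: (1 - (1 - q) * beta * E) ** a *
              E ** (nu - 1) / math.gamma(nu), [0, Emax])
rhs = math.gamma(a + 1) / ((1 - q) ** nu * math.gamma(a + nu + 1) * beta ** nu)
assert abs(lhs - rhs) < 1e-10
```

For these values both sides equal $1/3$ exactly, since the integrand is a polynomial on its support.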
In the discussion above we saw the correspondence between the microcanonical and the canonical ensemble. Similarly, the constant pressure ensemble and the isoenthalpic-isobaric ensemble are also related through a Laplace transform. A theory of the thermodynamic formulation of all the eight ensembles is given in Ref. \cite{chandrashekar2011class} for the generalized statistical mechanics based on the Tsallis $q$-exponential. \noindent{\it Linear harmonic oscillator:} \\ Next let us consider a system of $N$ non-interacting $D$-dimensional harmonic oscillators. The Hamiltonian of the oscillators is \cite{greiner2012thermodynamics} \begin{equation} H=\displaystyle \sum_{i=1}^{DN} \left(\frac{p_{i}^{2}}{2m} + \frac{1}{2} m \omega^{2} x_{i}^{2} \right), \label{Hamiltonianharmonicoscillator} \end{equation} where $x_{i}$ and $p_{i}$ are the position and momentum coordinates of the oscillators. There are three different approaches to calculate the probability distribution; in the current work we use the second constraint approach. The partition function of the system in the second constraint formulation is \begin{equation} Z_{q}(\beta) = \frac{1}{h^{DN}} \displaystyle \int_{0}^{\infty} \; d\bar{x}\;d\bar{p}\; \exp_{q} \left( - \beta \displaystyle \sum_{i=1}^{DN}\left(\frac{p_{i}^{2}}{2m} + \frac{1}{2} m \omega^{2} x_{i}^{2} \right) \right), \end{equation} where $\bar{x}$ and $\bar{p}$ are $\left(x_{1},x_{2},\ldots,x_{DN} \right)$, $\left(p_{1},p_{2},\ldots,p_{DN}\right)$. Evaluating the integral we get: \begin{equation} Z_{q}(\beta) = \frac{1}{\left(\hbar\omega\right)^{DN}} \left(\frac{\Gamma\left(\frac{1}{1-q} +1 \right)}{(1-q)^{DN}\; \Gamma\left(\frac{1}{1-q} + DN + 1\right)}\right) \frac{1}{\beta^{DN}}. \label{partitionfunctionoscillator} \end{equation} To calculate the thermodynamics of a set of isolated harmonic oscillators we use the microcanonical ensemble, and so we need to find the density of states.
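The closed form Eq. (\ref{partitionfunctionoscillator}) can be checked by direct phase-space integration. For a single one-dimensional oscillator ($DN=1$) the Gamma-function prefactor collapses to $1/(2-q)$, so $Z_{q}(\beta) = 1/[\hbar\omega\beta(2-q)]$. A sketch with the illustrative choices $\hbar = m = \omega = 1$ and $q<1$ (sharp cutoff of the weight):

```python
import math
import mpmath as mp

q, beta, m, w, hbar = 0.5, 1.0, 1.0, 1.0, 1.0   # illustrative values
a = 1 / (1 - q)
H = lambda x, p: p ** 2 / (2 * m) + m * w ** 2 * x ** 2 / 2
R = math.sqrt(2 / ((1 - q) * beta))             # support radius (m = w = 1)
pmax = lambda x: math.sqrt(max(R * R - x * x, 0.0))
inner = lambda x: mp.quad(lambda p: max(1 - (1 - q) * beta * H(x, p), 0) ** a,
                          [-pmax(x), pmax(x)])
Z_num = mp.quad(inner, [-R, R]) / (2 * math.pi * hbar)
Z_exact = 1 / (hbar * w * beta * (2 - q))       # DN = 1 case of the formula
assert abs(Z_num - Z_exact) < 1e-6
```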
The density of states can be computed from the partition function using the inverse of the $q$-Laplace transform. Instead of using the regular complex-analysis-based approach to calculate the inverse transform, we use the $q$-generalization of Widder's method introduced in the present work. Using this method the density of states obtained is \begin{eqnarray} g(E) = \displaystyle \lim_{k \rightarrow \infty} \frac{(-1)^{k}}{\Gamma(k+1)} Z_{q}^{(k)}(\beta) \beta^{k+1} Q_{1}(2-q) \Biggr|_{\beta = \frac{k \xi}{E}}. \label{densityofstates} \end{eqnarray} Substituting the partition function Eq. (\ref{partitionfunctionoscillator}) in the expression for the density of states Eq. (\ref{densityofstates}), substituting the value of $\beta$ and applying the limiting value of $k$, we get \begin{equation} g(E) = \frac{1}{\left(\hbar \omega\right)^{DN}\left(DN-1\right)!} E^{DN-1}, \end{equation} which is the expression for the density of states of a system of $N$ non-interacting $D$-dimensional oscillators in the microcanonical ensemble \cite{greiner2012thermodynamics}. Thus we used the inverse Laplace transform based on the generalized Post-Widder's method to transform the partition functions of a non-relativistic classical ideal gas and of linear harmonic oscillators to their respective densities of states. \section{Conclusion} \label{conclusion} A generalization of the Laplace transform based on the Tsallis $q$-exponential is defined in the present work based on the kernel $K_{I} (q;s,t) = \exp_{q}(-st)$. While this definition was already introduced in Ref. \cite{naik2016q}, an inverse transformation was not defined. In our work we define the inverse transform using the complex integration method. We also provide an alternative method, based on Post-Widder's method, to compute the inverse of the transform using only real variables. Several properties of the Laplace transform and the inverse transform are verified.
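The Widder-type limit that extracts $g(E) \propto E^{DN-1}$ from a $\beta^{-DN}$ partition function can be illustrated in the ordinary $q \rightarrow 1$ case, where the $k$-th derivative of $Z(\beta) = c/\beta^{\nu}$ is known in closed form and the classical Post-Widder approximants converge to $g(E) = c\,E^{\nu-1}/\Gamma(\nu)$. A sketch (values of $c$, $\nu$, $E$ are illustrative):

```python
import math

# classical (q -> 1) Post-Widder recovery of the density of states from a
# power-law partition function Z(beta) = c / beta^nu, using the exact
# derivative Z^(k)(beta) = c (-1)^k (nu)_k beta^(-nu - k)
def g_k(E, c, nu, k):
    # approximant g_k(E) = c (nu)_k k^(1-nu) E^(nu-1) / k!,
    # evaluated in log space to avoid factorial overflow
    logf = (math.lgamma(nu + k) - math.lgamma(nu)
            - math.lgamma(k + 1) + (1 - nu) * math.log(k))
    return c * math.exp(logf) * E ** (nu - 1)

c, nu, E = 1.0, 2.5, 1.3
exact = c * E ** (nu - 1) / math.gamma(nu)      # g(E) = c E^(nu-1)/Gamma(nu)
errs = [abs(g_k(E, c, nu, k) - exact) for k in (10, 100, 10000)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-3
```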
Then we compute the Laplace transforms of several functions, namely algebraic functions, Gaussian functions, $q$-Gaussian functions, circular functions, $q$-circular functions, hyperbolic functions, and $q$-hyperbolic functions. On the resulting expressions we use the inverse Laplace transform based on the Post-Widder's method to recover the original functions. Finally, we show applications of these methods in statistical mechanics, where we compute the densities of states of a $D$-dimensional classical ideal gas and of linear harmonic oscillators from their partition functions. \section*{Acknowledgements} Md.~Manirul Ali was supported by the Centre for Quantum Science and Technology, Chennai Institute of Technology, India, vide funding number CIT/CQST/2021/RD-007. R. Chandrashekar was supported in part by a seed grant from IIT Madras to the Centre for Quantum Information, Communication and Computing. \section*{References} \bibliographystyle{iopart-num}
\section{Introduction} \label{sec:intro} In epidemiology, observational studies are often used to investigate the relation between an exposure and a health outcome of interest. However, several potential biases might jeopardize our inference and conclusions \cite{greenland2005multiple}. Selection bias arises when the selected population is not representative of the target population of interest. As a consequence of selection bias, the association between exposure and outcome in the selected population differs from the association in the target population \cite{hernan2004structural}. In case-control studies, causal conclusions are more likely to be subject to selection bias than in other epidemiologic studies \cite{geneletti2009adjusting}. In a case-control study that recruits all (or most) of the diseased subjects and a small fraction of non-diseased subjects, the famous doctrine is that the selection of controls should not depend on their exposure status \cite{huang2015bounding}. Failing to satisfy this can lead to biased results. Previously, many researchers have discussed selection bias (e.g. \cite{mezei2006selection,ding2016sharp}). Some researchers derived the bias analytically \cite{nguyen2016collider}, and some proposed methods to recover or adjust for selection bias (e.g. \cite{bareinboim2015recovering,didelez2010graphical,yanagawa1984case,greenland2003quantifying,valeri2016estimating,bareinboim2012controlling}). We advance the literature by establishing qualitative relations between the exposure-outcome association in the selected population and that in the target population. In this paper, we first consider the setting of case-control studies with three variables (i.e., a binary exposure, a binary outcome and a binary indicator of selection), and then comment on the setting with covariates.
Based on a decomposition of the odds ratio in the selected population, we show that if the exposure and the outcome affect the selection indicator in the same direction and have non-positive interaction on the risk ratio, odds ratio or risk difference scale, the odds ratio in the selected population is smaller than or equal to the true odds ratio in the target population. This relation can help us to draw qualitative conclusion about the true odds ratio. Compared with previous literature, we do not need prior quantitative knowledge of some unknown parameters, which are required in the sensitivity analysis and the adjustment methods. In contrast, we require some prior qualitative knowledge of the selection mechanism, and obtain the qualitative relation between the observed odds ratio and the true odds ratio. \section{Main results for the directions of selection bias for the odds ratio} We first introduce the notation. Let $E$ be a binary exposure variable with $E=1$ for treatment and $E=0$ for control, and $D$ be a binary outcome variable with $D=1$ if disease is present and $D=0$ otherwise. Let $S$ be the binary indicator of selection with $S=1$ if selected. For any binary variables $A$ and $B$ and a general variable $C$, we define \begin{eqnarray*} &&\textsc{OR}_{AB\mid C=c}=\frac{P(A=1,B=1\mid C=c)P(A=0,B=0\mid C=c)}{P(A=1,B=0\mid C=c)P(A=0,B=1\mid C=c)},\\ &&\textsc{RR}_{AB\mid C=c}=\frac{P(B=1\mid A=1,C=c)}{P(B=1\mid A=0, C=c)},\\ &&\textsc{RD}_{AB\mid C=c}=P(B=1\mid A=1,C=c)-P(B=1\mid A=0, C=c), \end{eqnarray*} as the odds ratio, risk ratio and risk difference of two random variables $A$ and $B$ conditional on $C=c$, respectively. For simplicity, we consider the setting without covariates and comment on the setting with covariates later. We are concerned about the true odds ratio, $\textsc{OR}_{ED}$, in the target population. However, from the selected population, we can estimate only the odds ratio conditional on $S=1$, $\textsc{OR}_{ED\mid S=1}$. 
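The three association measures defined above can be computed directly from the four joint cell probabilities. A small self-contained sketch (the cell values are illustrative):

```python
def odds_ratio(p11, p10, p01, p00):
    # OR_AB from joint cells P(A=1,B=1), P(A=1,B=0), P(A=0,B=1), P(A=0,B=0)
    return (p11 * p00) / (p10 * p01)

def risk_ratio(p11, p10, p01, p00):
    # RR_AB = P(B=1 | A=1) / P(B=1 | A=0)
    return (p11 / (p11 + p10)) / (p01 / (p01 + p00))

def risk_difference(p11, p10, p01, p00):
    # RD_AB = P(B=1 | A=1) - P(B=1 | A=0)
    return p11 / (p11 + p10) - p01 / (p01 + p00)

# toy joint distribution of (E, D)
cells = (0.2, 0.3, 0.1, 0.4)
assert abs(odds_ratio(*cells) - (0.2 * 0.4) / (0.3 * 0.1)) < 1e-12
assert abs(risk_ratio(*cells) - 2.0) < 1e-12
assert abs(risk_difference(*cells) - 0.2) < 1e-12
```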
In general, $\textsc{OR}_{ED}$ and $\textsc{OR}_{ED\mid S=1}$ are different, and they are related by an interaction measure between $E$ and $D$ on $S.$ On the risk ratio scale, the multiplicative interaction of exposure and outcome on the selection indicator \cite{vanderweele2015explanation} is defined as \begin{eqnarray*} \text{Inter}_{\RR} = \frac{P(S=1\mid D=1,E=1)P(S=1\mid D=0,E=0)}{P(S=1\mid D=1,E=0)P(S=1\mid D=0,E=1)}. \end{eqnarray*} The following result shows a well known relation between $\textsc{OR}_{ED\mid S=1}$ and $\textsc{OR}_{ED}$ \cite{kleinbaum1982epidemiologic,greenland1996basic,rothman2008modern,greenland2009bayesian}. \setcounter{result}{-1} \renewcommand {\theresult} {\arabic{result}} \begin{result} We have \begin{eqnarray} \label{formula:OR} \textsc{OR}_{ED\mid S=1} =\textsc{OR}_{ED} \times \textup{Inter}_{\RR}. \end{eqnarray} \end{result} Formula \eqref{formula:OR} states that the odds ratio in the selected population equals the true odds ratio multiplied by the interaction, on the risk ratio scale, of the exposure and outcome on the selection indicator. Berkson \cite{berkson1946limitations} gave numerical examples to show that the association between two diseases in the hospital population (selected population) is unrepresentative of that in the target population. In his examples, the two diseases are independent in the target population, but are positively associated in the selected population. With some abuse of notation, we let $E$ and $D$ indicate the occurrences of the two diseases respectively. Because $E$ and $D$ are independent in the target population, $\textsc{OR}_{ED} =1$, and thus according to \eqref{formula:OR}, $\textsc{OR}_{ED\mid S=1}=\text{Inter}_{\RR} $, i.e., the odds ratio in the selected population equals the multiplicative interaction of $E$ and $D$ on selection. 
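The decomposition $\textsc{OR}_{ED\mid S=1} = \textsc{OR}_{ED} \times \text{Inter}_{\RR}$ is an algebraic identity, since $P(E,D\mid S=1)$ is proportional to $P(E,D)\,P(S=1\mid D,E)$, and it can be checked mechanically. A sketch with illustrative probabilities:

```python
def odds_ratio(p11, p10, p01, p00):
    return (p11 * p00) / (p10 * p01)

# joint distribution P(E=e, D=d), keyed (e, d)
P = {(1, 1): 0.10, (1, 0): 0.25, (0, 1): 0.15, (0, 0): 0.50}
# selection probabilities P(S=1 | D=d, E=e), keyed (d, e)
sel = {(1, 1): 0.9, (1, 0): 0.8, (0, 1): 0.2, (0, 0): 0.1}

or_target = odds_ratio(P[1, 1], P[1, 0], P[0, 1], P[0, 0])
# P(E, D | S=1) is proportional to P(E, D) * P(S=1 | D, E)
Q = {(e, d): P[e, d] * sel[d, e] for (e, d) in P}
or_selected = odds_ratio(Q[1, 1], Q[1, 0], Q[0, 1], Q[0, 0])
inter_rr = (sel[1, 1] * sel[0, 0]) / (sel[1, 0] * sel[0, 1])
assert abs(or_selected - or_target * inter_rr) < 1e-12
```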
Berkson's choices of selection probabilities make $\text{Inter}_{\RR} >1$, which results in positive associations between $E$ and $D$ in the selected population. Note that the relation $\textsc{OR}_{ED\mid S=1} = \text{Inter}_{\RR}$ is also the fundamental identity in case-only designs for identifying gene-environment interactions \cite{piegorsch1994non,yang1999case}. If $P(S=1\mid D=d,E=e)$ is constant in $d$ or $e$, then $\text{Inter}_{\RR}=1$ and hence $\textsc{OR}_{ED\mid S=1} = \textsc{OR}_{ED} $. This is related to the collapsibility conditions for the odds ratio \cite{didelez2010graphical,bareinboim2012controlling,whittemore1978collapsibility, guo1995collapsibility, xie2008some}, i.e., if $D \ind S\mid E$ or $E \ind S\mid D$, then $\textsc{OR}_{ED\mid S=s} = \textsc{OR}_{ED} $ for $s=0,1$. Therefore, the odds ratio in the selected population will be equal to the odds ratio in the target population under either of the following two scenarios: (a) the probability of being selected is dependent only on the subjects' outcome status, but the exposure does not directly affect the subjects' selection or inclusion probabilities (Figure \ref{fig::2}); (b) the probability of being selected is dependent only on the subjects' exposure status, but the outcome does not directly affect the subjects' selection or inclusion probabilities (Figure \ref{fig::3}). If the study recruits all of the diseased subjects as cases, and the selection of non-diseased subjects is independent of their exposure status, then condition (a) holds because $P(S=1\mid D=1,E=e)=1$ and $S \ind E \mid D=0$. Thus, the odds ratio in the selected population equals the odds ratio in the target population, which justifies the doctrine mentioned in Section 1.
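A Berkson-style toy example, with $E \ind D$ in the target population and hypothetical selection probabilities whose multiplicative interaction exceeds one, reproduces the spurious positive association (the numbers below are illustrative, not Berkson's original figures):

```python
def odds_ratio(p11, p10, p01, p00):
    return (p11 * p00) / (p10 * p01)

# E and D independent in the target population, so OR_ED = 1
pE, pD = 0.3, 0.2
P = {(e, d): (pE if e else 1 - pE) * (pD if d else 1 - pD)
     for e in (0, 1) for d in (0, 1)}
assert abs(odds_ratio(P[1, 1], P[1, 0], P[0, 1], P[0, 0]) - 1.0) < 1e-12

# hypothetical P(S=1 | D=d, E=e), keyed (d, e), with Inter_RR > 1
sel = {(0, 0): 0.1, (1, 0): 0.2, (0, 1): 0.2, (1, 1): 0.9}
Q = {(e, d): P[e, d] * sel[d, e] for (e, d) in P}
or_selected = odds_ratio(Q[1, 1], Q[1, 0], Q[0, 1], Q[0, 0])
assert or_selected > 1.0           # spurious association among the selected
```

Here $\text{Inter}_{\RR} = (0.9 \times 0.1)/(0.2 \times 0.2) = 2.25$, and the selected odds ratio equals exactly that value.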
\begin{figure}[htp] \centering \subfigure[]{$$ \xymatrix{ \label{fig::2} E \ar[r] & D \ar[d]\\ & S \\ } $$}\qquad \quad \subfigure[]{$$ \xymatrix{ \label{fig::3} E \ar[r] \ar[rd] & D \\ &S \\ } $$} \qquad \quad \subfigure[]{ \label{fig::1} $$ \xymatrix{ E \ar[r] \ar[rd]_{+(-)} & D \ar[d]^{+(-)}\\ & S\\ } $$} \caption{Illustrative directed acyclic graphs.} \end{figure} If the collapsibility conditions, $D\ind S\mid E$ and $E\ind S\mid D$, do not hold but there is no interaction of $E$ and $D$ on $S$ on the risk ratio scale, we still have $\text{Inter}_{\RR}=1$, which immediately gives the following result. \begin{result} \label{re:rr} If there is no interaction of $E$ and $D$ on $S$ on the risk ratio scale, i.e., $\textup{Inter}_{\RR}=1$, then $\textsc{OR}_{ED\mid S=1} = \textsc{OR}_{ED} $. \end{result} However, the equality $\textsc{OR}_{ED\mid S=1} = \textsc{OR}_{ED} $ does not hold if the no-interaction assumption holds on other scales (e.g., odds ratio, risk difference). Fortunately, in these cases, we can obtain the directions of the selection bias under certain monotonicity conditions. We first give the result on the odds ratio scale. \begin{result} \label{re:or} Suppose that there is no interaction of $E$ and $D$ on $S$ on the odds ratio scale, i.e., $\textsc{OR}_{ES\mid D=1}=\textsc{OR}_{ES\mid D=0}$. (a) If $P(S=1\mid D=d,E=e)$ is non-increasing or non-decreasing in both $d$ and $e$, then $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $. (b) If $P(S=1\mid D=d,E=e)$ has opposite monotonicity in $d$ and $e$, then $\textsc{OR}_{ED\mid S=1} \geq \textsc{OR}_{ED} $. \end{result} The condition that $P(S=1\mid D=d,E=e)$ is non-increasing or non-decreasing in both $d$ and $e$ means that $E$ and $D$ affect $S$ in the same direction. As illustrated in Figure \ref{fig::1}, intuitively this means that the edge from $E$ to $S$ and the edge from $D$ to $S$ have the same sign.
For a more formal discussion of signed directed acyclic graphs, please see VanderWeele and Robins \cite{vanderweele2010signed}. In case-control studies, the proportion of selected units among cases will be larger than that among noncases, thus $P(S=1\mid D=d,E=e)$ is increasing in $d$. If $P(S=1\mid D=d,E=e)$ is increasing in $e$, i.e., given the outcome status, exposed units are more likely to be selected, then $\textsc{OR}_{ED\mid S=1} $ is a lower bound of the true odds ratio; if $P(S=1\mid D=d,E=e)$ is decreasing in $e$, i.e., given the outcome status, exposed units are less likely to be selected, then $\textsc{OR}_{ED\mid S=1} $ is an upper bound of the true odds ratio. The assumption that there is no interaction of $E$ and $D$ on $S$ on the odds ratio scale is equivalent to a logistic model for $S$ without interaction of $D$ and $E$: \begin{eqnarray} \label{eqn:logit} \text{logit}\{P(S=1\mid D=d, E=e)\}=\beta_0+\beta_1d+\beta_2e. \end{eqnarray} From \eqref{eqn:logit}, we can easily see that $P(S=1\mid D=1,E=e)$ and $P(S=1\mid D=0,E=e)$ must have the same monotonicity in $e$. Therefore, to determine whether $\textsc{OR}_{ED\mid S=1}$ is a lower or upper bound, we only need to know the monotonicity of $P(S=1\mid D=d,E=e)$ in $e$ for either $d=1$ or $d=0$, i.e., the sign of $\beta_2$. The no-interaction assumption on the odds ratio scale has many other equivalent forms \cite{yanagawa1984case}. First, it is equivalent to $\textsc{OR}_{DS\mid E=1}=\textsc{OR}_{DS\mid E=0}$, both being equal to $e^{\beta_1}$ under \eqref{eqn:logit}, i.e., the odds ratios between $S$ and $D$ in the treatment and control groups are the same. Second, it is equivalent to $\textsc{OR}_{ED\mid S=1}=\textsc{OR}_{ED\mid S=0}$, i.e., the odds ratios between $D$ and $E$ for the selected and unselected units are the same.
According to the second equivalent form, however, we cannot obtain $\textsc{OR}_{ED\mid S=1} = \textsc{OR}_{ED} $ even if $\textsc{OR}_{ED\mid S=1}=\textsc{OR}_{ED\mid S=0}$, because the odds ratio is not collapsible \cite{guo1995collapsibility, xie2008some}. When the no-interaction assumption holds on the risk difference scale, the direction of selection bias remains the same as that in Result \ref{re:or}. \begin{result} \label{re:rd} Suppose that there is no interaction of $E$ and $D$ on $S$ on the risk difference scale, i.e., $\textsc{RD}_{ES\mid D=1}=\textsc{RD}_{ES\mid D=0}$ or equivalently $\textsc{RD}_{DS\mid E=1}=\textsc{RD}_{DS\mid E=0}$. (a) If $P(S=1\mid D=d,E=e)$ is non-increasing or non-decreasing in both $d$ and $e$, then $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $. (b) If $P(S=1\mid D=d,E=e)$ has opposite monotonicity in $d$ and $e$, then $\textsc{OR}_{ED\mid S=1} \geq \textsc{OR}_{ED} $. \end{result} Again, we can understand Results \ref{re:or} and \ref{re:rd} intuitively. If $E$ and $D$ affect $S$ in the same direction and they have no interaction on $S$, then conditioning on $S=1$ will introduce a spurious negative association between $E$ and $D$, which further decreases the association between $E$ and $D$ in the selected population compared to the target population. Furthermore, we can make the no-interaction assumptions and monotonicity assumptions in our results more plausible by including observed covariates $C$. In this case, the relations between the odds ratios in the selected population and the target population hold conditional on $C$. In many cases, it is possible that the no-interaction assumptions fail. However, we can still obtain the directions of the selection bias if we have some qualitative knowledge of the interaction. For example, if there is a non-positive interaction of $E$ and $D$ on $S$ on the risk ratio scale, then $\text{Inter}_{\RR}\leq 1$ and hence $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $.
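The direction claimed in Result \ref{re:or} can likewise be probed numerically. The sketch below (with an arbitrary, hypothetical grid of coefficients, not from any data) evaluates $\text{Inter}_{\RR}$ under the no-interaction logistic model \eqref{eqn:logit} and confirms that $\beta_1\beta_2 \geq 0$ forces $\text{Inter}_{\RR} \leq 1$, while $\beta_1\beta_2 \leq 0$ forces $\text{Inter}_{\RR} \geq 1$:

```python
# Sketch check of Result 2: under a logistic selection model without an
# E-D interaction term, Inter_RR <= 1 whenever beta1*beta2 >= 0, and
# Inter_RR >= 1 whenever beta1*beta2 <= 0. Coefficients are arbitrary.
import itertools
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def inter_rr(b0, b1, b2):
    """Inter_RR for P(S=1 | D=d, E=e) = expit(b0 + b1*d + b2*e)."""
    s = {(d, e): expit(b0 + b1 * d + b2 * e)
         for d, e in itertools.product((0, 1), repeat=2)}
    return (s[1, 1] * s[0, 0]) / (s[1, 0] * s[0, 1])

grid = [-2.0, -0.5, 0.5, 2.0]
for b0, b1, b2 in itertools.product(grid, repeat=3):
    if b1 * b2 >= 0:   # same direction: selected OR under-estimates
        assert inter_rr(b0, b1, b2) <= 1 + 1e-12
    else:              # opposite directions: selected OR over-estimates
        assert inter_rr(b0, b1, b2) >= 1 - 1e-12
```

The loop is a brute-force version of the algebraic identity $1/A - 1/B = e^{-\beta_0}(1-e^{-\beta_1})(1-e^{-\beta_2})$ used in the Appendix proof.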
The following result shows the directions of selection bias when $E$ and $D$ have interaction on $S$. \begin{result} \label{re:interaction} (a) If there is a non-positive interaction of $E$ and $D$ on $S$ on the odds ratio (risk ratio, risk difference) scale, and $P(S=1\mid D=d,E=e)$ is non-increasing or non-decreasing in both $d$ and $e$, then $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $. (b) If there is a non-negative interaction of $E$ and $D$ on $S$ on the odds ratio (risk ratio, risk difference) scale, and $P(S=1\mid D=d,E=e)$ has opposite monotonicity in $d$ and $e$, then $\textsc{OR}_{ED\mid S=1} \geq \textsc{OR}_{ED} $. \end{result} Note that we do not have general results when $P(S=1\mid D=d, E=e)$ is non-increasing or non-decreasing in both $d$ and $e$ and there is a positive interaction of $E$ and $D$ on $S$ on the risk ratio scale. The conditions in Results \ref{re:rr}--\ref{re:interaction} are sufficient but not necessary. We give an example to illustrate this. \begin{example} Suppose that $P(S=1\mid D=0,E=0)=0.8$, $P(S=1\mid D=0,E=1)=0.6$, $P(S=1\mid D=1,E=0)=0.4$ and $P(S=1\mid D=1,E=1)=0.1$. We see that $P(S=1\mid D=d,E=e)$ is decreasing in both $d$ and $e$. Because \begin{eqnarray*} &&P(S=1\mid D=1,E=1)+P(S=1\mid D=0,E=0) \\ &<& P(S=1\mid D=1,E=0)+P(S=1\mid D=0,E=1), \end{eqnarray*} the interaction of $D$ and $E$ on $S$ on the risk difference scale is negative, so the conditions in Result 4(a) hold. Thus, according to Result 4(a), $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $. From the values of $P(S=1\mid D=d,E=e)$, we have $\textsc{OR}_{ED\mid S=1}= \text{Inter}_{\RR} \times \textsc{OR}_{ED} =\textsc{OR}_{ED}/3 < \textsc{OR}_{ED} $, which is consistent with Result 4(a). If we change $P(S=1\mid D=1,E=1)=0.1$ to $P(S=1\mid D=1,E=1)=0.25$, then the interaction of $D$ and $E$ on $S$ on the risk difference scale is positive and hence the conditions in Result 4(a) fail.
However, we can still obtain that $\textsc{OR}_{ED\mid S=1}= \text{Inter}_{\RR} \times \textsc{OR}_{ED} =5/6\cdot \textsc{OR}_{ED} < \textsc{OR}_{ED} $. Thus, the conditions in Result 4(a) are not necessary. \end{example} \section{Illustration} We illustrate the applicability of our results with a real data example from a case-control study of sudden infant death syndrome \cite{kraus1989risk}. The exposure is the mother's report of antibiotic use during pregnancy ($E=1$ for yes and 0 for no) and the outcome is subsequent sudden infant death syndrome ($D=1$ for yes and 0 for no). The goal is to obtain the odds ratio of these two variables, but we can calculate only the odds ratio in the selected population, $\textsc{OR}_{ED\mid S=1}=1.42$. Greenland \cite{greenland2014sensitivity} suggested conducting sensitivity analysis by viewing $\text{Inter}_{\RR}$ as a sensitivity parameter, i.e., if we specify the value or range of $\text{Inter}_{\RR}$, then we can divide the point estimate and confidence limits of $\textsc{OR}_{ED\mid S=1}$ by $\text{Inter}_{\RR}$ to obtain those of $\textsc{OR}_{ED}$. If we have the qualitative knowledge that antibiotic use during pregnancy and sudden infant death syndrome both increase the selection probability and that they have a non-positive interaction, then according to Result 4, we can conclude that $\textsc{OR}_{ED} \geq \textsc{OR}_{ED\mid S=1}=1.42$, i.e., the mother's antibiotic use during pregnancy and sudden infant death syndrome are positively associated. \section{Discussion} Recoding $D$ or $E$ can change the sign of the interaction and the monotonicity of $P(S=1\mid D=d,E=e)$. We obtain Results 2(b), 3(b) and 4(b) by recoding $D$ or $E$ in Results 2(a), 3(a) and 4(a), respectively, and thus our results cover the general cases obtained by recoding $D$ or $E$.
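The sensitivity analysis suggested by Greenland in the illustration above can be sketched in a few lines. Only the observed $\textsc{OR}_{ED\mid S=1}=1.42$ comes from the study; the grid of $\text{Inter}_{\RR}$ values is hypothetical:

```python
# Sketch of the Inter_RR sensitivity analysis for the sudden infant
# death syndrome example: the observed OR_{ED|S=1} = 1.42 is divided by
# candidate values of Inter_RR to recover OR_{ED}. The grid of Inter_RR
# values is hypothetical, chosen only for illustration.
or_selected = 1.42

for inter_rr in (0.7, 0.8, 0.9, 1.0, 1.1):
    or_target = or_selected / inter_rr
    print(f"Inter_RR = {inter_rr:.1f} -> OR_ED = {or_target:.2f}")

# Under the qualitative assumption of Result 4(a) (both E and D increase
# selection, non-positive interaction), Inter_RR <= 1, so OR_ED >= 1.42.
assert all(or_selected / i >= or_selected for i in (0.7, 0.8, 0.9, 1.0))
```

The same division applies to the confidence limits of $\textsc{OR}_{ED\mid S=1}$, yielding interval estimates of $\textsc{OR}_{ED}$ for each posited $\text{Inter}_{\RR}$.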
Our results can be helpful in settings where prior quantitative knowledge of $\text{Inter}_{\RR}$ is hard to obtain, but qualitative knowledge of the selection mechanism is relatively easy to obtain. For example, in case-control studies, $P(S=1\mid D=d,E=e)$ is often increasing in both $d$ and $e$, and thus the only condition we need is the sign of the interaction of $E$ and $D$ on $S$ on the odds ratio (risk ratio, risk difference) scale. \section*{Appendix} \noindent {\it Proof of Result 0.} The result is known, but we give a simple proof for completeness. By definition, \begin{eqnarray*} \textsc{OR}_{ED\mid S=1} &=& \frac{P(D=1 \mid E=1,S=1)P(D=0 \mid E=0,S=1)}{P(D=1 \mid E=0,S=1)P(D=0 \mid E=1,S=1)}\\ &=& \frac{P(D=1,S=1 \mid E=1)P(D=0,S=1 \mid E=0)}{P(D=1,S=1 \mid E=0)P(D=0,S=1 \mid E=1)}\\ &=& \frac{P(S=1\mid D=1,E=1)P(S=1\mid D=0,E=0)}{P(S=1\mid D=1,E=0)P(S=1\mid D=0,E=1)}\\ && \times \frac{P(D=1 \mid E=1)P(D=0 \mid E=0)}{P(D=1 \mid E=0)P(D=0 \mid E=1)}\\ &=&\textsc{OR}_{ED} \times \text{Inter}_{\RR}. \end{eqnarray*} \hfill\ensuremath{\square} \vspace{5mm} \noindent {\it Proof of Result 1.} From $\textsc{RR}_{ES\mid D=1}=\textsc{RR}_{ES\mid D=0}$, we have $\text{Inter}_{\RR}=\textsc{RR}_{ES\mid D=1}/\textsc{RR}_{ES\mid D=0}=1$. Therefore, $\textsc{OR}_{ED\mid S=1} = \textsc{OR}_{ED}$. \hfill\ensuremath{\square} \vspace{5mm} \noindent {\it Proof of Result 2.} Because all the variables are binary, we can assume the saturated logistic model, \begin{eqnarray*} \text{logit}\{P(S=1 \mid D=d,E=e)\}=\beta_0+\beta_1d+\beta_2e+\beta_3de, \end{eqnarray*} where $e^{\beta_2}=\textsc{OR}_{ES\mid D=0}$ and $e^{\beta_2+\beta_3}=\textsc{OR}_{ES\mid D=1}$. From $\textsc{OR}_{ES\mid D=1}=\textsc{OR}_{ES\mid D=0}$, we know $\beta_3=0$ and hence the logistic model \begin{eqnarray*} \text{logit}\{P(S=1 \mid D=d,E=e)\}=\beta_0+\beta_1d+\beta_2e \end{eqnarray*} does not have the interaction term between $d$ and $e.$ Define $\text{expit}(x)=1/(1+e^{-x})$.
We have $\text{Inter}_{\RR}=A/B$, where \begin{eqnarray*} &&A=P(S=1 \mid E=0,D=0)P(S=1 \mid E=1,D=1)\\ &=&\text{expit}(\beta_0)\text{expit}(\beta_0+\beta_1+\beta_2)=\frac{1}{1+e^{-\beta_0}+e^{-(\beta_0+\beta_1+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}}, \end{eqnarray*} and \begin{eqnarray*} &&B=P(S=1 \mid E=0,D=1)P(S=1 \mid E=1,D=0)\\ &=&\text{expit}(\beta_0+\beta_1)\text{expit}(\beta_0+\beta_2)=\frac{1}{1+e^{-(\beta_0+\beta_1)}+e^{-(\beta_0+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}}. \end{eqnarray*} Because \begin{eqnarray*} &&1/A-1/B\\ &=&1+e^{-\beta_0}+e^{-(\beta_0+\beta_1+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)} \\ &&-\left\{1+e^{-(\beta_0+\beta_1)}+e^{-(\beta_0+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}\right\}\\ &=&e^{-\beta_0}(1-e^{-\beta_1})(1-e^{-\beta_2}), \end{eqnarray*} the relative magnitude of $A$ and $B$ depends on the signs of $\beta_1$ and $\beta_2$. If $P(S=1\mid D=d,E=e)$ is non-increasing or non-decreasing in both $d$ and $e$, then $\beta_1\beta_2 \geq 0$, $A\leq B$ and thus $\textsc{OR}_{ED\mid S=1} =\textsc{OR}_{ED} \times \text{Inter}_{\RR} \leq \textsc{OR}_{ED}$. If $P(S=1\mid D=d,E=e)$ has opposite monotonicity in $d$ and $e$, then $\beta_1\beta_2 \leq 0$, $A\geq B$ and thus $\textsc{OR}_{ED\mid S=1} =\textsc{OR}_{ED} \times \text{Inter}_{\RR} \geq \textsc{OR}_{ED}$. \hfill\ensuremath{\square} \vspace{5mm} \noindent {\it Proof of Result 3.} We can assume a saturated model on the linear probability scale \begin{eqnarray} \label{eqn:linear} P(S=1 \mid D=d,E=e)=\gamma_0+\gamma_1d+\gamma_2e+\gamma_3de, \end{eqnarray} where $\gamma_2=\textsc{RD}_{ES\mid D=0}$ and $\gamma_2+\gamma_3=\textsc{RD}_{ES\mid D=1}$.
From $\textsc{RD}_{ES\mid D=1}=\textsc{RD}_{ES\mid D=0}$, we know $\gamma_3=0$ and hence the linear probability model \begin{eqnarray*} P(S=1 \mid D=d,E=e)=\gamma_0+\gamma_1d+\gamma_2e \end{eqnarray*} has no interaction term between $d$ and $e.$ Thus, \begin{eqnarray*} \text{Inter}_{\RR} = \frac{\gamma_0(\gamma_0+\gamma_1+\gamma_2)}{(\gamma_0+\gamma_1)(\gamma_0+\gamma_2)}= 1-\frac{\gamma_1\gamma_2}{(\gamma_0+\gamma_1)(\gamma_0+\gamma_2)} \end{eqnarray*} depends on the signs of $\gamma_1$ and $\gamma_2$, because $\gamma_0+\gamma_1>0$ and $\gamma_0+\gamma_2>0$. If $P(S=1\mid D=d,E=e)$ is non-increasing or non-decreasing in both $d$ and $e$, then $\gamma_1 \gamma_2 \geq 0$, $\text{Inter}_{\RR} \leq 1$ and hence $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $. If $P(S=1\mid D=d,E=e)$ has opposite monotonicity in $d$ and $e$, then $\gamma_1\gamma_2 \leq 0$, $\text{Inter}_{\RR} \geq 1$ and hence $\textsc{OR}_{ED\mid S=1} \geq \textsc{OR}_{ED} $. \hfill\ensuremath{\square} \vspace{5mm} \noindent {\it Proof of Result 4.} First, we prove the result on the odds ratio scale. Similar to the proof of Result 2, we assume the saturated logistic model \begin{eqnarray*} \text{logit}\{P(S=1 \mid D=d,E=e)\}=\beta_0+\beta_1d+\beta_2e+\beta_3de \end{eqnarray*} and obtain $\text{Inter}_{\RR}=A'/B'$, where \begin{eqnarray*} &&A'=P(S=1 \mid E=0,D=0)P(S=1 \mid E=1,D=1)\\ &=&\text{expit}(\beta_0)\text{expit}(\beta_0+\beta_1+\beta_2+\beta_3)=\frac{1}{1+e^{-\beta_0}+e^{-(\beta_0+\beta_1+\beta_2+\beta_3)}+e^{-(2\beta_0+\beta_1+\beta_2+\beta_3)}}, \end{eqnarray*} and \begin{eqnarray*} &&B'=P(S=1 \mid E=0,D=1)P(S=1 \mid E=1,D=0)\\ &=&\text{expit}(\beta_0+\beta_1)\text{expit}(\beta_0+\beta_2)=\frac{1}{1+e^{-(\beta_0+\beta_1)}+e^{-(\beta_0+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}}. \end{eqnarray*} If there is a non-positive interaction of $E$ and $D$ on $S$ on the odds ratio scale, and $P(S=1\mid D=d,E=e)$ is non-increasing or non-decreasing in both $d$ and $e$, then $\beta_3\leq 0$ and $\beta_1\beta_2\geq 0$.
Thus, \begin{eqnarray*} &&1/A'-1/B'\\ &=&1+e^{-\beta_0}+e^{-(\beta_0+\beta_1+\beta_2+\beta_3)}+e^{-(2\beta_0+\beta_1+\beta_2+\beta_3)} \\ &&-\left\{1+e^{-(\beta_0+\beta_1)}+e^{-(\beta_0+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}\right\}\\ &\geq& 1+e^{-\beta_0}+e^{-(\beta_0+\beta_1+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}-\left\{1+e^{-(\beta_0+\beta_1)}+e^{-(\beta_0+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}\right\}\\ &=&e^{-\beta_0}(1-e^{-\beta_1})(1-e^{-\beta_2}) \geq 0, \end{eqnarray*} implying $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $. If there is a non-negative interaction of $E$ and $D$ on $S$ on the odds ratio scale, and $P(S=1\mid D=d,E=e)$ has opposite monotonicity in $d$ and $e$, then $\beta_3\geq 0$ and $\beta_1\beta_2\leq 0$. Thus, \begin{eqnarray*} &&1/A'-1/B'\\ &=&1+e^{-\beta_0}+e^{-(\beta_0+\beta_1+\beta_2+\beta_3)}+e^{-(2\beta_0+\beta_1+\beta_2+\beta_3)} \\ &&-\left\{1+e^{-(\beta_0+\beta_1)}+e^{-(\beta_0+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}\right\}\\ &\leq& 1+e^{-\beta_0}+e^{-(\beta_0+\beta_1+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}-\left\{1+e^{-(\beta_0+\beta_1)}+e^{-(\beta_0+\beta_2)}+e^{-(2\beta_0+\beta_1+\beta_2)}\right\}\\ &=&e^{-\beta_0}(1-e^{-\beta_1})(1-e^{-\beta_2}) \leq 0, \end{eqnarray*} implying $\textsc{OR}_{ED\mid S=1} \geq \textsc{OR}_{ED} $. Second, we prove the result on the risk ratio scale. If there is a non-negative interaction of $E$ and $D$ on $S$ on the risk ratio scale, then $\text{Inter}_{\RR} \geq 1$ and $\textsc{OR}_{ED\mid S=1} \geq \textsc{OR}_{ED} $. If there is a non-positive interaction of $E$ and $D$ on $S$ on the risk ratio scale, then $\text{Inter}_{\RR} \leq 1$ and $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $. Third, we prove the result on the risk difference scale.
Similar to the proof of Result 3, we assume the saturated linear model \eqref{eqn:linear} and obtain \begin{eqnarray*} \text{Inter}_{\RR} = \frac{\gamma_0(\gamma_0+\gamma_1+\gamma_2+\gamma_3)}{(\gamma_0+\gamma_1)(\gamma_0+\gamma_2)}= 1-\frac{\gamma_1\gamma_2}{(\gamma_0+\gamma_1)(\gamma_0+\gamma_2)}+\frac{\gamma_0\gamma_3}{(\gamma_0+\gamma_1)(\gamma_0+\gamma_2)}. \end{eqnarray*} From \eqref{eqn:linear}, $\gamma_0 =P(S=1 \mid D=0,E=0) \geq 0$. If there is a non-positive interaction of $E$ and $D$ on $S$ on the risk difference scale, and $P(S=1\mid D=d,E=e)$ is non-increasing or non-decreasing in both $d$ and $e$, then $\gamma_3\leq 0$ and $\gamma_1\gamma_2\geq 0$. Thus, \begin{eqnarray*} \text{Inter}_{\RR} \leq 1-\frac{\gamma_1\gamma_2}{(\gamma_0+\gamma_1)(\gamma_0+\gamma_2)} \leq 1, \end{eqnarray*} implying $\textsc{OR}_{ED\mid S=1} \leq \textsc{OR}_{ED} $. If there is a non-negative interaction of $E$ and $D$ on $S$ on the risk difference scale, and $P(S=1\mid D=d,E=e)$ has opposite monotonicity in $d$ and $e$, then $\gamma_3\geq 0$ and $\gamma_1\gamma_2\leq 0$. Thus, \begin{eqnarray*} \text{Inter}_{\RR} \geq 1-\frac{\gamma_1\gamma_2}{(\gamma_0+\gamma_1)(\gamma_0+\gamma_2)} \geq 1, \end{eqnarray*} implying $\textsc{OR}_{ED\mid S=1} \geq \textsc{OR}_{ED} $. \hfill\ensuremath{\square} \section*{Acknowledgements} The authors thank Dr. Elias Bareinboim at the Department of Computer Science of Purdue University and Dr. Tyler VanderWeele at the Harvard T.H. Chan School of Public Health for helpful comments. The editor and two reviewers made useful suggestions.
\section{Introduction} For every group $G$ there is a functor $\mathbb{A} \colon \mathrm{Or}G \to \mathrm{Spectra}$ from the orbit category of $G$ to the category of spectra sending $G/H$ to (a spectrum weakly equivalent to) the non-connective $A$-theory spectrum $\mathbb{A}(BH)$. For any such functor $\mathbb{F} \colon \mathrm{Or}G \to \mathrm{Spectra}$, a $G$--homology theory $H_\mathbb{F}$ can be constructed via \[H_\mathbb{F}(X):= \mathrm{Map}_G(\_,X_+) \wedge_{\mathrm{Or}G} \mathbb{F},\] see Davis and Lück \cite{Davis-Lueck(1998)}. We will denote its homotopy groups by $H_n^G(X;\mathbb{F}):=\pi_nH_\mathbb{F}(X)$. The assembly map for the family of virtually cyclic subgroups (in $A$--theory) is the map \begin{equation*} H_n^G(E_{\mathcal{V}\mathcal{C} yc}G;\mathbb{A})\to H^G_n(\mathrm{pt};\mathbb{A})\cong A_n(BG) \end{equation*} induced by the map $E_{\mathcal{V}\mathcal{C} yc} G \to \mathrm{pt}$. Here, $E_{\mathcal{V}\mathcal{C} yc}G$ denotes the classifying space for the family of virtually cyclic subgroups, see L\"uck \cite{Lueck(2005)}. The assembly map can more generally be defined with coefficients, cf.~\cite[Conjecture~7.1]{UW}. In this note, we consider the \emph{$A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products}, which predicts for a discrete group $G$ that the assembly map with coefficients is an isomorphism for every wreath product $G \wr F$ of $G$ with a finite group $F$. Our main result is the following: \begin{theorem}\label{maintheorem} Let $G$ be a virtually solvable group. Then $G$ satisfies the Farrell--Jones Conjecture for $A$--theory with coefficients and finite wreath products. 
\end{theorem} Using this, we can adapt previous work by R\"uping \cite{Rueping(2016)} and Kammeyer, L\"uck and R\"uping \cite{KLR(2016)} to $A$--theory: \begin{corollary}\label{cor:s-arithmetic} The $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products holds for subgroups of $\mathrm{GL}_n(\mathbb{Q})$ or $\mathrm{GL}_n(F(t))$, where $F$ is a finite field. In particular, the conjecture holds for $S$--arithmetic groups. \end{corollary} \begin{proof} The proof works as the one of \cite[Theorem~8.13]{Rueping(2016)}: Since the conjecture is inherited under directed colimits \cite[Theorem~1.1(ii)]{ELPUW}, it suffices to consider linear groups over localizations at finitely many primes. Then \cite[Proposition~2.2]{Rueping(2016)} together with \cite[Corollary~6.6]{ELPUW} shows that such a group satisfies the conjecture relative to a certain family of subgroups, all whose members in turn satisfy the conjecture relative to the class of virtually solvable groups \cite[Theorem~8.12]{Rueping(2016)}. The corollary follows from \Cref{maintheorem} together with the Transitivity Principle \cite[Proposition~11.2]{UW}. \end{proof} \begin{corollary}\label{cor:lattices} The $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products holds for arbitrary lattices in almost connected Lie groups. More generally, it holds for lattices $\Gamma$ in second countable, locally compact Hausdorff groups $G$ whose group of path components $\pi_0(G)$ is discrete and satisfies the $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products. \end{corollary} \begin{proof} In \cite{KLR(2016)}, it is shown that a class of groups satisfying the list of properties from \cite[Theorem~2]{KLR(2016)} also contains the groups considered in the corollary. 
The statement of \cite[Theorem~2]{KLR(2016)} holds for the class of groups satisfying the $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products by \cite[Theorem~1.1]{ELPUW}, \Cref{maintheorem} and \Cref{cor:s-arithmetic}. \end{proof} As explained in \cite[Section~3]{ELPUW}, the analogous statements of \Cref{maintheorem}, \Cref{cor:s-arithmetic} and \Cref{cor:lattices} for (topological, PL or smooth) Whitehead spectra and pseudoisotopy spectra also hold true. \begin{remark} We have been informed that Thomas Farrell and Xiaolei Wu have independently obtained a proof of \Cref{maintheorem}. \end{remark} \subsection*{Acknowledgements} This paper was conceived and written during the Junior Trimester Program ``Topology" at the Hausdorff Research Institute for Mathematics (HIM) in Bonn. The first author was financially supported by the Max Planck Society. \section{Dress--Farrell--Hsiang--Jones groups} The proof of the $A$--theoretic Farrell--Jones Conjecture for solvable groups relies on a concoction of the Farrell--Hsiang method \cite{UW} and transfer reducibility \cite{ELPUW} which mimics the combination of the methods from \cite{Bartels-Lueck(2012b)} and \cite{Bartels-Lueck(2012), Wegner(2012)} in \cite{Wegner(2015)}. \begin{definition}\label{def:dress.groups} Let $F$ be a finite group. We call $F$ a \emph{Dress group} if there exists a normal series $P \unlhd H \unlhd F$ such that $P$ is a $p$--group for some prime $p$, $H/P$ is cyclic and $F/H$ is a $q$--group for some prime $q$. \end{definition} We refer to \cite[Definition~2.7]{Wegner(2015)} and \cite[Definition~2.12]{Wegner(2015)} for the definitions of ``homotopy coherent $G$--action" and ``controlled domination". \begin{definition}\label{def:dfhj.group} Let $G$ be a discrete group and let $S \subseteq G$ be a finite and symmetric generating set of $G$ which contains the trivial element. Let $\mathcal{F}$ be a family of subgroups of $G$. 
Then $G$ is a \emph{Dress--Farrell--Hsiang--Jones group with respect to $\mathcal{F}$}, or \emph{DFHJ-group (with respect to $\mathcal{F}$)} for short, if there exists $N \in \mathbb{N}$ such that for every $n \in \mathbb{N}$ there is a homomorphism $\pi \colon G \to F$ to a finite group with the property that for every Dress subgroup $D \leq F$ there exist \begin{enumerate} \item\label{it:dfhj.group1} a compact, contractible metric space $X_D$ such that for every $\epsilon > 0$ there is an $\epsilon$--controlled domination of $X_D$ by an at most $N$--dimensional finite simplicial complex; \item\label{it:dfhj.group2} a homotopy coherent $G$--action $\Gamma_D$ on $X_D$; \item\label{it:dfhj.group3} a $\pi^{-1}(D)$--simplicial complex $\Sigma_D$ of dimension at most $N$ whose isotropy is contained in $\mathcal{F}$; \item\label{it:dfhj.group4} a $\pi^{-1}(D)$--equivariant map $f_D \colon G \times X_D \to \Sigma_D$ such that \begin{itemize} \item for all $g \in G$, $x \in X_D$ and $s \in S^n$ \[ d^{l^1}\big( f_D(g,x),f_D(gs^{-1},\Gamma_D(s,x)) \big) \leq \frac{1}{n}, \] \item for all $g \in G$, $x \in X_D$ and $s_0,\ldots,s_n \in S^n$ \[ \textup{diam}\big\{ f_D(g,\Gamma_D(s_n,t_n,\ldots,s_0,x)) \,\big|\, t_1,\ldots,t_n \in [0,1] \big\} \leq \frac{2}{n}. \] \end{itemize} \end{enumerate} \end{definition} \begin{remark}\label{rem:dfhj.transfer-reducible.dfh} \ \begin{enumerate} \item If $G$ is homotopy transfer reducible with respect to $\mathcal{F}$ \cite[Definition~6.2]{ELPUW}, then it is a DFHJ-group with respect to $\mathcal{F}$: Choose the finite quotient to be trivial for all $n$. \item If $G$ is a Dress--Farrell--Hsiang group with respect to $\mathcal{F}$ \cite[Definition~7.3]{UW}, then it is a DFHJ-group with respect to $\mathcal{F}$: Choose the transfer space $X_D$ to be a point for all $n$ and $D$. 
\end{enumerate} \end{remark} \begin{remark}\label{rem:comparison.contracting.conditions} Condition \eqref{it:dfhj.group4} in \Cref{def:dfhj.group} looks a bit different than \cite[Definition~4.1]{Wegner(2015)}. The difference lies mostly in notation. As we argue in the proof of \Cref{prop:gw.is.dfhj} below, the condition in \cite[Definition~4.1]{Wegner(2015)} implies ours. Conversely, the proof showing the existence of the functor $F$ in diagram~\eqref{eq:outline.of.proof} (cf.~\cite[Lemma~6.11]{ELPUW}) shows that condition~\eqref{it:dfhj.group4} also yields the condition in \cite[Definition~4.1]{Wegner(2015)}, up to some constants. \end{remark} \begin{theorem}\label{thm:afjc.for.dfhjgroups} Suppose that $G$ is a DFHJ-group with respect to a family $\mathcal{F}$ of subgroups of $G$. Then the $A$--theoretic isomorphism conjecture with coefficients relative $\mathcal{F}$ holds for $G$. \end{theorem} The remainder of this section is dedicated to a proof of \Cref{thm:afjc.for.dfhjgroups} and is modelled on \cite[Section~4.2]{Wegner(2015)}. Just like the proofs in \cite{UW, ELPUW}, we show that the fiber of the assembly map is weakly contractible. This uses the fact that this fiber can be modelled by the $K$--theory of certain categories of controlled retractive spaces, whose definition we recall next (cf.~also \cite[Sections~2 and 3]{UW}). A \emph{coarse structure} is a triple $\mathfrak{Z}=(Z, \mathfrak{C}, \mathfrak{S})$ such that $Z$ is a Hausdorff $G$--space, $\mathfrak{C}$ is a collection of reflexive, symmetric and $G$--invariant relations on $Z$ which is closed under taking finite unions and compositions, and $\mathfrak{S}$ is a collection of $G$--invariant subsets of $Z$ which is closed under taking finite unions. See \cite[Definition~3.23]{UW} for the notion of a \emph{morphism of coarse structures}. Fix a coarse structure $\mathfrak{Z}$. 
A \emph{labeled $G$--CW--complex relative $W$}, see \cite[Definition~2.3]{UW}, is a pair $(Y, \kappa)$, where $Y$ is a free $G$--CW--complex relative $W$ together with a $G$--equivariant function $\kappa \colon \cells Y \rightarrow Z$. Here, $\cells Y$ denotes the (discrete) set of relative cells of $Y$. A \emph{$\mathfrak{Z}$--controlled map} $f \colon (Y_1,\kappa_1) \rightarrow (Y_2, \kappa_2)$ is a $G$--equivariant, cellular map $f \colon Y_1 \rightarrow Y_2$ relative $W$ such that for all $k \in \mathbb{N}$ there is some $C \in \mathfrak{C}$ for which \[ (\kappa_2,\kappa_1)(\{(e_2,e_1) \mid e_1 \in \cells_k Y_1, e_2\in \cells Y_2, \langle f(e_1) \rangle \cap e_2 \neq \emptyset\}) \subseteq C \] holds. A \emph{$\mathfrak{Z}$--controlled $G$--CW--complex relative $W$} is a labeled $G$--CW--complex $(Y,\kappa)$ relative $W$, such that the identity is a $\mathfrak{Z}$--controlled map and for all $k \in \mathbb{N}$ there is some $S \in \mathfrak{S}$ such that \[ \kappa(\cells_k Y)\subseteq S. \] A \emph{$\mathfrak{Z}$--controlled retractive space relative $W$} is a $\mathfrak{Z}$--controlled $G$--CW--complex $(Y,\kappa)$ relative $W$ together with a $G$--equivariant retraction $r \colon Y \to W$, i.e.~a left inverse to the structural inclusion $W \hookrightarrow Y$. The $\mathfrak{Z}$--controlled retractive spaces relative $W$ form a category $\mathcal{R}^G(W,\mathfrak{Z})$ in which \emph{morphisms} are $\mathfrak{Z}$--controlled maps which additionally respect the chosen retractions. The category of controlled $G$--CW--complexes (relative $W$) and controlled maps admits a notion of \textit{controlled homotopies}, see \cite[Definition~2.5]{UW}, via the objects $(Y \leftthreetimes [0,1], \kappa \circ pr_Y)$, where $Y \leftthreetimes [0,1]$ denotes the reduced product which identifies $W \times [0,1] \subseteq Y \times [0,1]$ to a single copy of $W$ and $pr_Y \colon \cells Y \leftthreetimes [0,1] \rightarrow \cells Y$ is the canonical projection.
In particular, we obtain a notion of \emph{controlled homotopy equivalence} (or \emph{$h$--equivalence}). A $\mathfrak{Z}$--controlled retractive space $(Y, \kappa)$ is called \textit{finite} if it is finite-dimensional, the image of $Y \backslash W$ under the retraction meets the orbits of only finitely many path components of $W$ and for each $z \in Z$ there is some open neighborhood $U$ of $z$ such that $\kappa^{-1}(U)$ is finite, see \cite[Definition~3.3]{UW}. A $\mathfrak{Z}$--controlled retractive space $(Y, \kappa)$ is called \textit{finitely dominated}, if there are a finite $\mathfrak{Z}$--controlled, retractive space $D$, a morphism $p \colon D \rightarrow Y$ and a $\mathfrak{Z}$--controlled map $i \colon Y \rightarrow D$ such that $p \circ i$ is controlled homotopic to $\textup{id}_Y$. The finite, respectively finitely dominated, $\mathfrak{Z}$--controlled retractive spaces form full subcategories $\mathcal{R}^G_f(W,\mathfrak{Z}) \subset \mathcal{R}^G_{fd}(W,\mathfrak{Z}) \subset \mathcal{R}^G(W,\mathfrak{Z})$. All three of these categories support a Waldhausen category structure in which inclusions of $G$--invariant subcomplexes up to isomorphism are the cofibrations and controlled homotopy equivalences are the weak equivalences, see \cite[Corollary~3.22]{UW}. Let $X$ be a $G$--CW--complex and let $M$ be a metric space with free, isometric $G$--action. Define $\mathfrak{C}_{bdd}(M)$ to be the collection of all subsets $C \subset M \times M$ which are of the form \begin{equation*} C = \{ (m,m') \in M \times M \mid d(m,m') \leq \alpha \} \end{equation*} for some $\alpha \geq 0$. 
Define further $\mathfrak{C}_{Gcc}(X)$ to be the collection of all $C \subset (X \times [1,\infty[) \times (X \times [1,\infty[)$ which satisfy the following: \begin{enumerate} \item For every $x \in X$ and every $G_x$--invariant open neighborhood $U$ of $(x,\infty)$ in $X \times [1,\infty]$, there exists a $G_x$--invariant open neighborhood $V \subset U$ of $(x,\infty)$ such that $(((X \times [1,\infty[) \setminus U) \times V) \cap C = \emptyset$. \item Let $p_{[1,\infty[} \colon X \times [1,\infty[ \to [1,\infty[$ be the projection map. Equip $[1,\infty[$ with the Euclidean metric. Then there exists some $B \in \mathfrak{C}_{bdd}([1,\infty[)$ such that $C \subset p^{-1}_{[1,\infty[}(B)$. \item $C$ is symmetric, $G$--invariant and contains the diagonal. \end{enumerate} Next define $\mathfrak{C}(M,X)$: Let $p_M \colon M \times X \times [1,\infty[ \to M$ and $p_{X \times [1,\infty[} \colon M \times X \times [1,\infty[ \to X \times [1,\infty[$ denote the projection maps. Then $\mathfrak{C}(M,X)$ is the collection of all subsets $C \subset (M \times X \times [1,\infty[)^2$ which are of the form \begin{equation*} C = p_M^{-1}(B) \cap p_{X \times [1,\infty[}^{-1}(C') \end{equation*} for some $B \in \mathfrak{C}_{bdd}(M)$ and $C' \in \mathfrak{C}_{Gcc}(X)$. Finally, define $\mathfrak{S}(M,X)$ to be the collection of all subsets $S \subset M \times X \times [1,\infty[$ which are of the form $S = K \times [1,\infty[$ for some $G$--compact subset $K \subset M \times X$. All these data combine to a coarse structure \begin{equation*} \mathbb{J}(M,X) := (M \times X \times [1,\infty[, \mathfrak{C}(M,X), \mathfrak{S}(M,X)) \end{equation*} which serves to define the ``obstruction category" $(\mathcal{R}^G_f(W,\mathbb{J}(G,\EGF{G}{\mathcal{F}})),h)$, cf.~\cite[Example~2.2 and Definition~6.1]{UW}.
The spectrum $\mathbb{F}(G,W,\EGF{G}{\mathcal{F}})$ alluded to above is the non-connective $K$--theory spectrum of $\mathcal{R}^G_f(W,\mathbb{J}(G,\EGF{G}{\mathcal{F}}))$ with respect to the $h$--equivalences, cf.~\cite[Section~5]{UW}. By \cite[Corollary~6.11]{UW}, a group $G$ satisfies the Farrell--Jones Conjecture with coefficients in $A$--theory with respect to $\mathcal{F}$ if and only if $\mathbb{F}(G,W,\EGF{G}{\mathcal{F}})$ is weakly contractible for every free $G$--CW--complex $W$. Suppose now that $G$ is a DFHJ-group. By definition, there exists some $N \in \mathbb{N}$ such that for every $n \in \mathbb{N}$ there is a homomorphism $\pi_n \colon G \to F_n$ to a finite group with the property that for every Dress subgroup $D \leq F_n$ there exist \begin{enumerate} \item a compact, contractible metric space $X_{n,D}$ such that for every $\epsilon > 0$ there is an $\epsilon$--controlled domination of $X_{n,D}$ by an at most $N$--dimensional finite simplicial complex; \item a homotopy coherent $G$--action $\Gamma_{n,D}$ on $X_{n,D}$; \item a $\pi_n^{-1}(D)$--simplicial complex $\Sigma_{n,D}$ of dimension at most $N$ whose isotropy is contained in $\mathcal{F}$; \item a $\pi_n^{-1}(D)$--equivariant map $f_{n,D} \colon G \times X_{n,D} \to \Sigma_{n,D}$ such that \begin{itemize} \item for all $g \in G$, $x \in X_{n,D}$ and $s \in S^n$ \[ d^{l^1}\big( f_{n,D}(g,x),f_{n,D}(gs^{-1},\Gamma_{n,D}(s,x)) \big) \leq \frac{1}{n}, \] \item for all $g \in G$, $x \in X_{n,D}$ and $s_0,\ldots,s_n \in S^n$ \[ \textup{diam}\big\{ f_{n,D}(g,\Gamma_{n,D}(s_n,t_n,\ldots,s_0,x)) \,\big|\, t_1,\ldots,t_n \in [0,1] \big\} \leq \frac{2}{n}. \] \end{itemize} \end{enumerate} Assume we have chosen all of this. 
Then the proof is organized around the following diagram, in which we abbreviate $E := \EGF{G}{\mathcal{F}}$ (further explanations follow below): \begin{equation}\label{eq:outline.of.proof} \begin{tikzpicture} \matrix (m) [matrix of math nodes, column sep=2em, row sep=2em, text depth=.5em, text height=1em, ampersand replacement=\&] {(\mathcal{R}^G_f(W, \mathbb{J}(G,E)),h) \& (\mathcal{R}^G_{fd}(W, \mathbb{J}(G,E)),h) \\ (\mathcal{R}^G_f(W, \mathbb{J}((G)_n,E)),h) \& (\mathcal{R}^G_{fd}(W, \mathbb{J}((G)_n,E)),h^{fin}) \\ (\mathcal{R}^G_f(W, \mathbb{J}((S_n \times G)_n,E)), h) \& \\ (\mathcal{R}^G_{fd}(W, \mathbb{J}((X_n \times G)_n,E)), h^{fin}) \& (\mathcal{R}_{fd}^G(W, \mathbb{J}((\Sigma_n \times G)_n, E)), h^{fin})\\}; \path[->] (m-1-1) edge node[above]{$i$} (m-1-2) (m-1-1) edge node[left]{$\Delta_f$} (m-2-1) (m-1-2) edge node[left]{$\Delta_{fd}$} (m-2-2) (m-2-1) edge node[above]{$j$} (m-2-2) (m-3-1) edge node[right]{$P_S$} (m-2-1) (m-4-1) edge node[below right]{$P_X$} (m-2-2) edge node[above]{$F$} (m-4-2) (m-4-2) edge node[right]{$P_\Sigma$} (m-2-2); \path[dashed][->] (m-1-1.180) edge[bend right=30] node[left]{$\textup{trans}_1$} (m-3-1.170) (m-3-1) edge node[left]{$\textup{trans}_2$} (m-4-1); \end{tikzpicture} \end{equation} Diagram~\eqref{eq:outline.of.proof} involves some additional notation which we explain first. Suppose that $(M_n)_n$ is a sequence of metric spaces with a free, isometric $G$--action. Let $X$ be a $G$--CW--complex. 
Following \cite[Section~7]{UW}, define the coarse structure \begin{equation*} \mathbb{J}((M_n)_n,X) := \big( \coprod_n M_n \times X \times [1,\infty[, \mathfrak{C}((M_n)_n,X), \mathfrak{S}((M_n)_n,X) \big) \end{equation*} as follows: Members of $\mathfrak{C}((M_n)_n,X)$ are of the form $C = \coprod_n C_n$ with $C_n \in \mathfrak{C}(M_n,X)$, and we additionally require that $C$ satisfies the \emph{uniform metric control condition}: There is some $\alpha > 0$, independent of $n$, such that for all $((m,x,t)$, $(m',x',t')) \in C$ we have $d(m,m') < \alpha$. Members of $\mathfrak{S}((M_n)_n,X)$ are sets of the form $T = \coprod_n T_n$ with $T_n \in \mathfrak{S}(M_n,X)$. The resulting category $\mathcal{R}^G(W,\mathbb{J}((M_n)_n,X))$ is canonically a subcategory of the product category $\prod_n \mathcal{R}^G(W,\mathbb{J}(M_n,X))$. Some instances of the category $\mathcal{R}^G(W,\mathbb{J}((M_n)_n,X))$ we consider in diagram \eqref{eq:outline.of.proof} come equipped with another notion of weak equivalence: Let $(Y_n)_n$ be an object of $\mathcal{R}^G(W, \mathbb{J}((M_n)_n,E))$. For $\nu \in \mathbb{N}$, we denote by $(-)_{n > \nu}$ the endofunctor which sends $(Y_n)_n$ to the sequence $(\widetilde{Y}_n)_n$ with $\widetilde{Y}_n = \ast$ for $n \leq \nu$ and $\widetilde{Y}_n = Y_n$ for $n > \nu$. A morphism $(f_n)_n \colon (Y_n)_n \rightarrow (Y_n')_n$ is an \emph{$h^{fin}$--equivalence} if there is some $\nu \in \mathbb{N}$ such that $(f_n)_{n>\nu}\colon (Y_n)_{n>\nu} \rightarrow (Y_n')_{n>\nu}$ is an $h$--equivalence. Next, we define the families of metric spaces that we plug into the coarse structure $\mathbb{J}(-,E)$. As a shorthand, we denote the preimage $\pi_n^{-1}(D)$ of any Dress group $D \leq F_n$ by $\overline{D}$. \begin{enumerate} \item The family $(G)_n$ is the constant family in which we equip each component with the word metric on $G$ with respect to $S$. \item Let $\mathcal{D}r_n$ denote the family of Dress subgroups of $F_n$. 
Then define the $G$--space $S_n := \coprod_{D \in \mathcal{D}r_n} G/\overline{D}$. We equip $S_n \times G$ with the diagonal $G$--action and the quasi-metric $d_{S_n}$ given by \[ d_{S_n}((g_1\overline{D}, g_2), (h_1\overline{D'}, h_2)) := \begin{cases} d_G(g_2,h_2) & \overline{D} = \overline{D'}, g_1\overline{D} = h_1\overline{D}, \\ \infty & \text{otherwise.} \end{cases} \] \item The space $X_n$ is defined to be $\coprod_{D \in \mathcal{D}r_n} X_{n,D} \times G/\overline{D}$. Define for each $D \in \mathcal{D}r_n$ the constant $\Lambda_{n,D}$ as in \cite[Section~6]{ELPUW}. We equip $X_n \times G$ with the $G$--action $\gamma \cdot (x, g_1\overline{D}, g_2) := (x, \gamma g_1\overline{D}, \gamma g_2)$ and the metric $d_{X_n}$ given by \begin{align*} &d_{X_n}((x, g_1\overline{D}, g_2),(y, h_1\overline{D'}, h_2)) \\ & := \begin{cases} d_G(g_2,h_2) + d_{\Gamma_{n,D},S^n,n,\Lambda_{n,D}}((x,g_2), (y,h_2)) & \overline{D} = \overline{D'}, g_1\overline{D} = h_1\overline{D} \\ \infty & \text{otherwise,} \\ \end{cases} \end{align*} where we use the metric $d_{\Gamma_{n,D},S^n,n,\Lambda_{n,D}}$ defined in \cite[Definition~2.9]{Wegner(2015)}. \item Finally, $\Sigma_n$ is defined to be the $G$--simplicial complex $\coprod_{D \in \mathcal{D}r_n} G \times_{\overline{D}} \Sigma_{n,D}$, equipped with the metric $n \cdot d^{\ell^1}$, where $d^{\ell^1}$ denotes the $\ell^1$--metric of a simplicial complex. \end{enumerate} When crossing one of the above metric spaces with the group $G$, we regard the resulting space as a metric space by equipping it with the sum of the given metric and the word metric on $G$. This defines all categories appearing in diagram~\eqref{eq:outline.of.proof}. Let us now define the functors connecting these categories. The functors $i$ and $j$ are the exact inclusion functors from finite to finitely dominated objects. The functors $\Delta_f$ and $\Delta_{fd}$ are the diagonal functors sending a given object $Y$ to the constant sequence $(Y)_n$. 
Note that $j \circ \Delta_f = \Delta_{fd} \circ i$. The functors $P_S$, $P_X$ and $P_\Sigma$ are induced by the projection maps from $S_n \times G$, $X_n \times G$ and $\Sigma_n \times G$ to $G$. The functor $F$ is induced by the sequence of maps $(f_n \colon X_n \times G \to \Sigma_n \times G)_n$, which we define by \[ f_n(x, g_1\overline{D}, g_2) := (g_1, f_{n,D}(g_1^{-1}g_2, x)). \] The formula secretly uses the identification $G/\overline{D} \times G \cong G \times_{\overline{D}} G$. Using the contracting properties from \Cref{def:dfhj.group}~\eqref{it:dfhj.group4}, one checks that the functor $F$ is well-defined, the proof being completely analogous to \cite[Lemma~6.11]{ELPUW}. Moreover, $P_X = P_\Sigma \circ F$. We make the following claims: \begin{proposition}\label{prop:properties.of.main.diagram} \ \begin{enumerate} \item\label{it:properties.of.main.diagram1} After applying $K$--theory, the dashed arrow $\textup{trans}_1$ exists such that $K_m(\Delta_f) = K_m(P_S) \circ \textup{trans}_1$. \item\label{it:properties.of.main.diagram2} After applying $K$--theory, the dashed arrow $\textup{trans}_2$ exists such that $K_m(j \circ P_S) = K_m(P_X) \circ \textup{trans}_2$. \item\label{it:properties.of.main.diagram3} The $K$--theory of $(\mathcal{R}_{fd}^G(W, \mathbb{J}((\Sigma_n \times G)_n, E)), h^{fin})$ is trivial. \item\label{it:properties.of.main.diagram4} $K_m(\Delta_{fd} \circ i)$ is injective for all $m$. \end{enumerate} \end{proposition} \Cref{thm:afjc.for.dfhjgroups} follows from \Cref{prop:properties.of.main.diagram} by an easy diagram chase. \begin{proof}[Proof of \Cref{prop:properties.of.main.diagram}] Claim~\eqref{it:properties.of.main.diagram1} is an immediate consequence of \cite[Proposition~9.2]{UW}. Claim~\eqref{it:properties.of.main.diagram3} is established in \cite[Section~10]{UW}. Claim~\eqref{it:properties.of.main.diagram4} is \cite[Lemma~6.12]{ELPUW}. So all that is left to show is claim~\eqref{it:properties.of.main.diagram2}. 
The map $\textup{trans}_2$ arises as a slight modification of the transfer constructed in \cite[Section~7]{ELPUW}, whose notation we will also use in the following discussion. Let $\mathcal{R}^G_f(W,\mathbb{J}((S_n \times G)_n, E))_{\alpha,d}$ denote the subcategory of $\mathcal{R}^G_f(W,\mathbb{J}((S_n \times G)_n, E))$ containing only those objects $(Y_n,\kappa_n)_n$ such that $Y_n$ has dimension at most $d$ and is $\alpha$--controlled over $S_n \times G$, together with morphisms $(\phi_n \colon (Y_n,\kappa_n) \to (Y_n',\kappa_n'))_n$ which are \emph{cellwise $0$--controlled} in the following sense: Each $\phi_n$ is a regular map (i.e.\ it maps open cells onto open cells), and for each cell $c \in \cells Y_n$, we have $\kappa_n'(\phi_n(c)) = \kappa_n(c)$. Note that such morphisms automatically satisfy the uniform metric control condition. Arguing as in \cite[Section~7.1]{ELPUW}, we observe that it suffices to construct compatible transfers on each $\mathcal{R}^G_f(W,\mathbb{J}((S_n \times G)_n, E))_{\alpha,d}$ individually. Let $(Y_n,\kappa_n)_n$ be an object in $\mathcal{R}^G_f(W,\mathbb{J}((S_n \times G)_n, E))_{\alpha,d}$. By the definition of the metric $d_{S_n}$, the complex $Y_n$ decomposes $G$--equivariantly as $Y_n = \coprod_{D \in \mathcal{D}r_n} Y_{n,D}$, with $Y_{n,D}$ living over the metric component $G/\overline{D} \times G$. Let $\kappa_{n,D}$ denote the restriction of $\kappa_n$ to the set of cells of $Y_{n,D}$. Then define \[ \textup{trans}^{\alpha,d}_n(Y_n) := \coprod_{D \in \mathcal{D}r_n} \textup{trans}^{\alpha,d}_{X_{n,D}}(Y_{n,D}), \] cf.~\cite[Definition~7.9]{ELPUW}. The control map $\textup{trans}^{\alpha,d}_n(\kappa_n)$ of $\textup{trans}^{\alpha,d}_n(Y_n)$ is defined as in {\it loc.~cit.} (formula directly before Lemma~7.10), replacing $G$ by $S_n \times G$. 
Then the obvious analog of \cite[Lemma~7.10]{ELPUW} holds, so that \[ \textup{trans}^{\alpha,d}((Y_n,\kappa_n)_n) := (\textup{trans}^{\alpha,d}_n(Y_n),\textup{trans}^{\alpha,d}_n(\kappa_n))_n \] is indeed an object in $\mathcal{R}^G_{fd}(W,\mathbb{J}((X_n \times G)_n,E))$. By the obvious analog of \cite[Lemma~7.11]{ELPUW}, $\textup{trans}^{\alpha,d}$ defines a functor \[ \textup{trans}^{\alpha,d} \colon \mathcal{R}^G_f(W,\mathbb{J}((S_n \times G)_n, E))_{\alpha,d} \to \mathcal{R}^G_{fd}(W,\mathbb{J}((X_n \times G)_n,E)). \] Since we leave the $S_n \times G \times E \times [1,\infty[$--component of each $\kappa_n$ unchanged, the rest of \cite[Section~7]{ELPUW} carries over to show the existence of the map $\textup{trans}_2$, and thus claim~\eqref{it:properties.of.main.diagram2}. \end{proof} \begin{remark}\label{rem:deloopings} In fact, the discussion we have given so far only establishes the vanishing of $K_m(\mathcal{R}^G_f(W,\mathbb{J}(G,E)),h)$ for $m > 0$. In order to show vanishing in all degrees, we need to consider appropriate deloopings constructed by introducing another metric coordinate $\mathbb{R}^k$. Since this coordinate remains unchanged throughout, the previous discussion applies verbatim. Cf.~also \cite[Section~9]{UW} and the discussion in Section~6 of \cite{ELPUW}. \end{remark} \section{Proof of the main theorem} As in \cite[Section~3]{Wegner(2015)}, the first step in proving \Cref{maintheorem} lies in reducing the general theorem to some special cases. For any non-zero algebraic number $w$, set $G_w := \mathbb{Z}[w,w^{-1}] \rtimes_{\cdot w} \mathbb{Z}$. \begin{lemma}\label{lem:reduction.to.gw} If $G_w$ satisfies the $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products for every non-zero algebraic number $w$, then so does every virtually solvable group. \end{lemma} \begin{proof} We claim that the arguments in \cite[Section~3]{Wegner(2015)} carry over to $A$--theory. 
Indeed, the argument relies only on the following statements about the Farrell--Jones Conjecture with coefficients and finite wreath products: \begin{enumerate} \item The class of groups satisfying the conjecture has the following closure properties \cite[Theorem~1.1(ii)]{ELPUW}: \begin{itemize} \item If a group satisfies the conjecture, so does every subgroup. \item If two groups satisfy the conjecture, so do their direct and free products. \item If $\{ G_i \}_{i \in I}$ is a directed system of groups satisfying the conjecture, so does the colimit. \item If $p \colon G \twoheadrightarrow Q$ is an epimorphism, and $Q$ as well as every preimage $p^{-1}(C)$ of virtually cyclic subgroups of $Q$ satisfy the conjecture, so does $G$. \end{itemize} \item The following groups satisfy the conjecture: \begin{itemize} \item Semidirect products $A \rtimes \mathbb{Z}$ with $A$ torsion abelian: This case follows from the case of hyperbolic groups \cite[Theorem~1.1(i)]{ELPUW}, cf.~\cite[Lemma~4.1]{Farrell-Linnell(2003)}. \item The wreath product $\mathbb{Z} \wr \mathbb{Z}$: This is, for example, a directed colimit of $\mathrm{CAT}(0)$--groups, and hence satisfies the conjecture by \cite[Theorem~1.1(i)]{ELPUW}. Alternatively, one can argue as in \cite[Lemma~4.3]{Farrell-Linnell(2003)}. \item Virtually abelian groups \cite[Corollary~11.11]{UW}, \cite[Theorem~1.1(i)]{ELPUW}. \end{itemize} \end{enumerate} For details, we refer to \cite[Section~3]{Wegner(2015)}. \end{proof} If $w$ is a root of unity, $G_w$ is a virtually abelian group (cf.~\cite[Lemma~5.32]{Wegner(2015)}) and satisfies the $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products by \cite[Corollary~11.11]{UW}. So we may assume that $w$ is not a root of unity in the sequel. We recall some notation from \cite[Section~5]{Wegner(2015)}. In what follows, we fix a non-zero algebraic number $w$ which is not a root of unity. Let $\mathcal{O}$ be the ring of integers in $\mathbb{Q}(w)$. 
Define the ring $\mathcal{O}_w$ to be \[ \mathcal{O}_w := \{ x \in \mathbb{Q}(w) \mid v_\mathfrak{p}(x) \geq 0 \text{ for all prime ideals $\mathfrak{p} \subset \mathcal{O}$ with $v_\mathfrak{p}(w) = 0$} \}, \] so that $\mathcal{O} \subseteq \mathcal{O}_w$ and $w$, $w^{-1} \in \mathcal{O}_w$. For $s \in \mathbb{N}$ we define $t_w(s) \geq 0$ to be the number determined by \[ t_w(s)\mathbb{Z} = \{ z \in \mathbb{Z} \mid w^z \equiv 1 \mod s\mathcal{O}_w \}. \] \begin{lemma}\label{lem:images.of.dress.groups} Let $q_1$, $q_2$ be prime numbers satisfying $q_1 \neq q_2$ and $v_\mathfrak{p}(w) = 0$ for all prime factors $\mathfrak{p}$ of $q_1$ or $q_2$ in $\mathcal{O}$. Let $m_1, m_2$ be natural numbers. Consider the finite group $F := ( \mathcal{O}_w / q_1^{m_1} q_2^{m_2} \mathcal{O}_w ) \rtimes \mathbb{Z} / t_w(q_1^{m_1} q_2^{m_2}) \mathbb{Z}$. For every Dress group $D \leq F$, there exists $i \in \{ 1,2 \}$ such that the image of $D$ under the canonical projection $\eta_i \colon F \twoheadrightarrow \mathcal{O}_w / q_i^{m_i} \mathcal{O}_w \rtimes \mathbb{Z} / t_w(q_i^{m_i}) \mathbb{Z}$ is hyperelementary. \end{lemma} \begin{proof} Let $D$ be a Dress subgroup of $F$. Then $D$ fits into a normal series $P \unlhd H \unlhd D$ such that $P$ is a $p$--group, $D/H$ is a $p'$--group, $P$ is normal in $D$, $H/P$ is cyclic and $\abs{H/P}$ is coprime to both $p$ and $p'$ \cite[Lemma~5.1]{Winges(2015)}. The prime $p$ cannot be $q_1$ and $q_2$ at the same time; without loss of generality, assume that $p \neq q_1$. Set $t := t_w(q_1^{m_1}q_2^{m_2})$ and $t_1 := t_w(q_1^{m_1})$. Consider the normal subgroup $N := q_1^{m_1} \mathcal{O}_w / q_1^{m_1}q_2^{m_2} \mathcal{O}_w \rtimes t_1 \mathbb{Z} / t\mathbb{Z}$ and let $\eta_1$ denote the projection map \[ \eta_1 \colon F \twoheadrightarrow F / N \cong \mathcal{O}_w / q_1^{m_1} \mathcal{O}_w \rtimes \mathbb{Z} / t_1\mathbb{Z}. \] Then $\eta_1(P) \cap \mathcal{O}_w / q_1^{m_1} \mathcal{O}_w = \{ 0 \}$ since the latter is a $q_1$--group and $p \neq q_1$. 
Hence, $\eta_1(P)$ is mapped isomorphically to a subgroup of $\mathbb{Z} / t_1\mathbb{Z}$ by the projection map $\mathcal{O}_w / q_1^{m_1} \mathcal{O}_w \rtimes \mathbb{Z} / t_1\mathbb{Z} \twoheadrightarrow \mathbb{Z}/t_1\mathbb{Z}$. So $\eta_1(P)$ is cyclic. Since $p$ is coprime to $\abs{ H/P }$ and $H/P$ is cyclic, the image $\eta_1(H)$ is also cyclic. It follows that $\eta_1(D)$ is hyperelementary. \end{proof} \begin{proposition}\label{prop:gw.is.dfhj} Let $w \neq 0$ be an algebraic number which is not a root of unity. Then $G_w = \mathbb{Z}[w, w^{-1}] \rtimes \mathbb{Z}$ is a DFHJ--group with respect to the family of virtually abelian subgroups. \end{proposition} \begin{proof} Let $N$ be the natural number determined by \cite[Proposition~5.26]{Wegner(2015)}. Let $S \subseteq G_w$ be a finite, symmetric generating set containing the trivial element. In the proof of \cite[Proposition~5.33]{Wegner(2015)}, it is shown that for every $n \in \mathbb N$ and for every sufficiently large prime number $q$ (depending on $n$) there is a natural number $m \in \mathbb N$ such that for every hyperelementary subgroup \[ H \leq F_n := \mathcal{O}_w / q^m \mathcal{O}_w \rtimes \mathbb{Z} / t_w(q^m)\mathbb{Z} \] there exist \begin{enumerate} \item a compact, contractible metric space $X_{n,H}$ such that for every $\epsilon > 0$ there is an $\epsilon$--controlled domination of $X_{n,H}$ by an at most $N$--dimensional finite simplicial complex;\footnote{In the proof of \cite[Proposition~5.33]{Wegner(2015)} the space $X_{n,H}$ is denoted by $X_w^R$.} \item a homotopy coherent $G_w$--action $\Psi_{n,H}$ on $X_{n,H}$; \item a positive real number $\Lambda_{n,H}$; \item an $\alpha_n^{-1}(H)$--simplicial complex $E_{n,H}$ of dimension at most $N$ whose isotropy groups are virtually cyclic or abelian; \item an $\alpha_n^{-1}(H)$--equivariant map $f_{n,H} \colon G_w \times X_{n,H} \to E_{n,H}$ such that \[ n \cdot d^{l^1}\big( f_{n,H}(g,x),f_{n,H}(h,y) \big) \leq 
d_{\Psi_{n,H},S^n,n,\Lambda_{n,H}}\big( (g,x),(h,y) \big) \] for all $(g,x), (h,y) \in G_w \times X_{n,H}$ with $h^{-1}g \in S^n$. \end{enumerate} Here, $\alpha_n \colon G_w \to F_n$ denotes the composition of the inclusion $G_w \hookrightarrow \mathcal{O}_w \rtimes \mathbb{Z}$ with the quotient map $\mathcal{O}_w \rtimes \mathbb{Z} \twoheadrightarrow F_n$. The metric $d_{\Psi_{n,H},S^n,n,\Lambda_{n,H}}$ on $G_w \times X_{n,H}$ is defined in \cite[Definition~2.9]{Wegner(2015)}. It has the property \[ d_{\Psi_{n,H},S^n,n,\Lambda_{n,H}}\big( (g,x),(g (s_n \cdots s_0)^{-1},\Psi_{n,H}(s_n,t_n,\ldots,s_0,x)) \big) \leq 1 \] for all $g \in G_w$, $x \in X_{n,H}$ and $s_0,\ldots,s_n \in S^n$. Hence, \[ d^{l^1}\big( f_{n,H}(g,x),f_{n,H}(gs^{-1},\Psi_{n,H}(s,x)) \big) \leq \frac{1}{n} \] for all $g \in G_w$, $x \in X_{n,H}$ and $s \in S^n$, and \[ \textup{diam}\big\{ f_{n,H}(g,\Psi_{n,H}(s_n,t_n,\ldots,s_0,x)) \,\big|\, t_1,\ldots,t_n \in [0,1] \big\} \leq \frac{2}{n} \] for all $g \in G_w$, $x \in X_{n,H}$ and $s_0,\ldots,s_n \in S^n$. Now let us come to the actual proof. For a given $n \in \mathbb N$ we choose two distinct (large) prime numbers $q_1$, $q_2$ with appropriate natural numbers $m_1, m_2 \in \mathbb N$ (as described above). Consider the finite group \[ F := \mathcal{O}_w / q_1^{m_1} q_2^{m_2} \mathcal{O}_w \rtimes \mathbb{Z} / t_w(q_1^{m_1} q_2^{m_2})\mathbb{Z}. \] Let $D \leq F$ be a Dress subgroup. By \Cref{lem:images.of.dress.groups}, there exists $i \in \{1,2\}$ such that $\eta_i(D)$ is hyperelementary. We have a finite group $F_n := \mathcal{O}_w / q_i^{m_i} \mathcal{O}_w \rtimes \mathbb{Z} / t_w(q_i^{m_i})\mathbb{Z} = \im(\eta_i)$ with a hyperelementary subgroup $H := \eta_i(D) \leq F_n$. As mentioned at the beginning of the proof, we obtain a homotopy coherent $G_w$--action $\Psi_{n,H}$ on a metric space $X_{n,H}$, an $\alpha_n^{-1}(H)$--simplicial complex $E_{n,H}$ and an $\alpha_n^{-1}(H)$--equivariant map $f_{n,H}$ with the properties described above. 
We define $\pi \colon G_w \to F$ as the composition of the inclusion $G_w \hookrightarrow \mathcal{O}_w \rtimes \mathbb{Z}$ with the quotient map $\mathcal{O}_w \rtimes \mathbb{Z} \twoheadrightarrow F$. Then $\pi^{-1}(D)$ is a subgroup of $\alpha_n^{-1}(H)$. We finally set $X_D := X_{n,H}$, $\Gamma_D := \Psi_{n,H}$, $\Sigma_D := E_{n,H}$, $f_D := f_{n,H}$. \end{proof} Since virtually abelian groups satisfy the $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products, \Cref{maintheorem} follows from \Cref{lem:reduction.to.gw}, \Cref{prop:gw.is.dfhj} and \Cref{thm:afjc.for.dfhjgroups} together with the Transitivity Principle \cite[Proposition~11.2]{UW} in view of the following: \begin{lemma}\label{lem:virtually.dfhj} Suppose that $G$ is a DFHJ-group with respect to the family of all subgroups which satisfy the $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products. Let $F$ be a finite group. Then $G \wr F$ is a DFHJ-group with respect to the family of all subgroups which satisfy the $A$--theoretic Farrell--Jones Conjecture with coefficients and finite wreath products. \end{lemma} \begin{proof} The proof is analogous to that of \cite[Lemma~4.3]{Wegner(2015)}, replacing ``hyperelementary" by ``Dress" and using the fact that the collection of Dress groups is also closed under taking subgroups and quotients.\end{proof} \bibliographystyle{alpha}
2209.12877
\section{Introduction} \label{sec:intro} The central problem in Boolean function complexity is to understand exactly how hard it is to compute explicit functions. The hardness naturally depends on the computation model to be used, and depending on the model, several complexity measures for functions have been studied extensively in the literature. To name a few -- size and depth for circuits and formulas, size and width for branching programs, query complexity, communication complexity, length for span programs, and so on. All of these are measures of the computational hardness of a function. There are also several ways to understand hardness of a function intrinsically, independent of a computational model. For instance, the sensitivity of a function, its certificate complexity, the sparsity of its Fourier spectrum, its degree and approximate degree, stability, and so on. Many bounds on computational measures are obtained by directly relating them to appropriate intrinsic complexity measures. See \cite{Jukna-BFCbook} for a wonderful overview of this area. Formal definitions of relevant measures appear in \cref{sec:prelim}. Every Boolean function $f$ can be computed by a simple decision tree (simple in the sense that each node queries a single variable), which is one of the simplest computation models for Boolean functions. The most interesting and well-studied complexity measure in the decision tree model is the minimal depth $\textrm{Depth}(f)$, measuring the query complexity of the function. This measure is known to be polynomially related to several intrinsic measures: sensitivity, block sensitivity, certificate complexity. But there are also other measures which reveal information about the function. The minimal size of a decision tree, $\textrm{DTSize}(f)$, is one such measure, which measures the storage space required to store the function as a tree, and has received some attention in the past. 
A measure which has received relatively less attention is the minimal rank of a decision tree computing the function, first defined and studied in \cite{EH-IC1989}; see also \cite{ABDORU10}. In general, the rank of a rooted tree (also known as its Strahler number, or Horton-Strahler number, or tree dimension) measures its branching complexity, and is a tree measure that arises naturally in a wide array of applications; see for instance \cite{EsparzaLS-LATA14}. The rank of a Boolean function $f$, denoted $\textrm{Rank}(f)$, is the minimal rank of a decision tree computing it. The original motivation for considering rank of decision trees was from learning theory -- an algorithm, proposed in \cite{EH-IC1989}, and later simplified in \cite{Blum92}, shows that constant-rank decision trees are efficiently learnable in Valiant's PAC learning framework \cite{Valiant-CACM84}. Subsequently, the rank measure has played an important role in understanding the decision tree complexity of search problems over relations \cite{pudlak2000lower,esteban2003combinatorial,Kullmann-ECCC-99} -- see more in the Related Work part below. The special case when the relation corresponds to a Boolean function is exactly the rank of the function. However, there is very little work focussing on the context of, and exploiting the additional information from, this special case. This is precisely the topic of this paper: we study how the rank of Boolean functions relates to other measures. In contrast with $\textrm{Depth}(f)$, $\textrm{Rank}(f)$ is not polynomially related to sensitivity or to certificate complexity $\textrm{C}(f)$, although it is bounded above by $\textrm{Depth}(f)$. Hence it can reveal additional information about the complexity of a function over and above that provided by $\textrm{Depth}$. For instance, from several viewpoints, the $\mbox{{\sc Parity}}_n$ function is significantly harder than the $\mbox{{\sc And}}_n$ function. 
But both of them have the same $\textrm{Depth}$, $n$. However, $\textrm{Rank}$ does reflect this difference in hardness, with $\textrm{Rank}(\mbox{{\sc And}}_n)=1$ and $\textrm{Rank}(\mbox{{\sc Parity}}_n)=n$. On the other hand, rank is also already known to characterise the logarithm of decision tree size (\textrm{DTSize}), up to a $\log n$ multiplicative factor. Thus lower bounds on rank give lower bounds on the space required to store a decision tree explicitly. (However, the $\log n$ factor is crucial; there is no dimension-free characterisation. Consider e.g.\ $\log \textrm{DTSize}(\mbox{{\sc And}}_n) = \Theta(\log n)$.) Our main findings can be summarised as follows: \begin{enumerate} \item $\textrm{Rank}(f)$ is equal to the value of the Prover-Delayer game of Pudl{\'a}k and Impagliazzo \cite{pudlak2000lower} played on the corresponding relation $R_f$ (\cref{thm:game-rank}). (This is implicit in earlier literature \cite{Kullmann-ECCC-99,esteban2003combinatorial}.) \item While $\textrm{Rank}$ alone cannot give upper bounds on $\textrm{Depth}(f)$, $\textrm{Depth}(f)$ is bounded above by the product of $\textrm{Rank}(f)$ and $1+\log\textrm{spar}(f)$ (\cref{thm:rank-sparsity-depth}). \item $\textrm{Rank}(f)$ is bounded between the minimum certificate complexity of $f$ at any point, and $(\textrm{C}(f)-1)^2+1$; \cref{thm:rank-cert-bounds}. The upper bound (\cref{lem:rank-cert}) is an improvement on the bound inherited from $\textrm{Depth}(f)$, and is obtained by adapting that construction. \item For a composed function $f\circ g$, $\textrm{Rank}(f\circ g)$ is bounded above and below by functions of $\textrm{Depth}(f)$ and $\textrm{Rank}(g)$; \cref{thm:compose-rank-bounds}. The main technique in both bounds (\cref{thm:compose-rank-ub,thm:compose-rank-lb}) is to use weighted decision trees, as was used in the context of depth \cite{Montanaro-cj14}. 
For iterated composed functions, these bounds can be applied recursively (\cref{corr:iterated-rank}), and can be used to easily recover known bounds on $\textrm{Rank}$ for some functions (\cref{corr:examples}). \item The measures $\textrm{Rank}$ and $\log\textrm{DTSize}$ for simple decision trees sandwich the query complexity in the more general decision tree model where each node queries a conjunction of literals (\cref{thm:simple-conj-relation}). \item In the relation between $\textrm{Rank}(f)$ and $\textrm{DTSize}(f)$ from \cite{EH-IC1989}, the upper bound on $\log\textrm{DTSize}$ is asymptotically tight for the $\mbox{{\sc Tribes}}$ function (\cref{sec:rank-size}). \end{enumerate} By calculating the exact rank for specific functions, we show that all the bounds we obtain on rank are tight. We also describe optimal strategies for the Prover and Delayer, for those more familiar with that setting. \paragraph*{Related work.} A preliminary version of this paper, with some proofs omitted or only briefly sketched, appears in the proceedings of the FSTTCS 2021 conference \cite{DM21}. In \cite{KS-JCSS04} (Corollary 12), non-trivial learnability of $s$-term DNFs is demonstrated. The crucial result that allows this learning is the transformation of the DNF expression into a polynomial threshold function of not too large degree. An important tool in the transformation is the rank of a hybrid kind of decision tree; in these trees, each node queries a single variable, while the subfunctions at the leaves, though not necessarily constant, have somewhat small degree. The original DNF is first converted to such a hybrid tree with a bound on its rank, and this is exploited to achieve the full conversion to low-degree polynomial threshold functions. This generalises an approach credited in \cite{KS-JCSS04} to Satya Lokam. 
In \cite{ABDORU10}, a model called $k^+$-decision trees is considered, and the complexity is related to both simple decision tree rank and to communication complexity. In particular, Theorems 7 and 8 from \cite{ABDORU10} imply that communication complexity lower bounds with respect to any variable partition (see \cite{KN-CC-book}) translate to decision tree rank lower bounds, and hence by \cite{EH-IC1989} to decision tree size lower bounds. In \cite{TV97}, the model of linear decision trees is considered (here each node queries not a single variable but a linear threshold function of the variables), and for such trees of bounded rank computing the inner product function, a lower bound on depth is obtained. Thus for this function, in this model, there is a trade-off between rank and depth. In \cite{UT2015}, rank of linear decision trees is used in obtaining non-trivial upper bounds on depth-2 threshold circuit size. In \cite{pudlak2000lower}, a 2-player game is described, on an unsatisfiable formula $F$ in conjunctive normal form, that constructs a partial assignment falsifying some clause. The players are referred to in subsequent literature as the Prover and the Delayer. The value of the game, $\textrm{Value}(F)$, is the maximum $r$ such that the Delayer can score at least $r$ points no matter how the Prover plays. It was shown in \cite{pudlak2000lower} that the size of any tree-like resolution refutation of $F$ is at least $2^{\textrm{Value}(F)}$. Subsequently, the results of \cite{Kullmann-ECCC-99,esteban2003combinatorial} yield the equivalence $\textrm{Value}(F) = \textrm{Rank}(F)$, where $\textrm{Rank}(F)$ is defined to be the minimal rank of the tree underlying a tree-like resolution refutation of $F$. (Establishing this equivalence uses refutation-space and tree pebbling as intermediaries.) 
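For tiny functions, the value of the Prover-Delayer game described above can be computed directly by minimax, which also makes the equivalence with rank concrete. The following sketch is our own illustration (all names are ours, and $f$ is assumed to be given as a Python callable on $\{0,1\}^n$), not an algorithm from the cited works:

```python
from functools import lru_cache
from itertools import product

# Minimax value of the Prover-Delayer game on a Boolean function f:
# the Prover queries a variable; the Delayer answers 0, answers 1, or
# defers (scoring one point, after which the Prover picks the answer).
# Play ends once the value of f is determined. Exponential; tiny n only.

def game_value(f, n):
    @lru_cache(maxsize=None)
    def go(fixed):                       # fixed: sorted ((index, bit), ...)
        vals = {f(x) for x in product((0, 1), repeat=n)
                if all(x[i] == b for i, b in fixed)}
        if len(vals) == 1:
            return 0                     # f is determined: game over
        used = {i for i, _ in fixed}
        options = []
        for i in range(n):               # Prover minimises over queries ...
            if i in used:
                continue
            v0 = go(tuple(sorted(fixed + ((i, 0),))))
            v1 = go(tuple(sorted(fixed + ((i, 1),))))
            # ... Delayer maximises: answer 0, answer 1, or defer
            options.append(max(v0, v1, 1 + min(v0, v1)))
        return min(options)
    return go(())

and4 = lambda x: int(all(x))
parity4 = lambda x: sum(x) % 2
print(game_value(and4, 4), game_value(parity4, 4))   # 1 4
```

The computed values agree with $\textrm{Rank}(\mbox{{\sc And}}_4)=1$ and $\textrm{Rank}(\mbox{{\sc Parity}}_4)=4$; indeed, $\max(r_0,r_1,1+\min(r_0,r_1))$ is exactly the Strahler recursion for the rank of a tree whose subtrees have ranks $r_0$ and $r_1$, which is one way to see the rank-value equivalence for Boolean functions.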
The relevance here is because there is an immediate, and well-known, connection to decision trees for search problems over relations: tree-like resolution refutations are decision trees for the corresponding search CNF problem. (See Lemma 7 in \cite{BIW04}). Note that the size lower bound from \cite{pudlak2000lower}, and the rank-value equivalence from \cite{Kullmann-ECCC-99,esteban2003combinatorial}, hold for the search problem over arbitrary relations, not just search CNF. (See e.g.\ Exercise 14.16 in \cite{Jukna-BFCbook} for the size bound.) In particular, for a Boolean function $f$, it holds for the corresponding canonical relation $R_f$ defined in \cref{sec:prelim}. Similarly, the value of an asymmetric variant of this game is known to characterise the size of a decision tree for the search CNF problem \cite{BGL13}, and this too holds for general relations and Boolean functions. \paragraph*{Organisation of the paper.} After presenting basic definitions and known results in \cref{sec:prelim}, we describe the Prover-Delayer game from \cite{pudlak2000lower} in \cref{sec:game}, and observe that its value equals the rank of the function. We also describe the asymmetric game from \cite{BGL13}. We compute the rank of some simple functions in \cref{sec:simple-calc}. In \cref{sec:rank-rels}, we describe the relation between rank, depth, Fourier sparsity, and certificate complexity. In \cref{sec:rank-composed}, we present results concerning composed functions. In \cref{sec:application} we give two applications. Firstly, using our rank lower bound result, we prove the tight $\log$ size lower bound. Secondly, we prove a query lower bound in the $\mbox{{\sc Conj}}$ decision tree model. In \cref{sec:rank-size} we examine the size-rank relationship for the $\mbox{{\sc Tribes}}$ function. The bounds in \cref{sec:simple-calc,sec:rank-rels,sec:rank-composed,sec:rank-size} are all obtained by direct inductive arguments/decision tree constructions. 
They can also be stated using the equivalence of the game value and rank -- while this does not particularly simplify the proofs, it changes the language of the proofs and may be more accessible to the reader already familiar with that setting. Hence we include such game-based arguments for our results in \cref{sec:game-proofs}. \section{Preliminaries} \label{sec:prelim} \paragraph*{Decision trees} For a Boolean function $f: \boolfn{n}$, a decision tree computing $f$ is a binary tree with internal nodes labelled by the variables and the leaves labelled by $\{0,1\}$. To evaluate a function on an unknown input, the process starts at the root of the decision tree and works down the tree, querying the variables at the internal nodes. If the value of the query is $0$, the process continues in the left subtree; otherwise it proceeds in the right subtree. The label of the leaf so reached is the value of the function on that particular input. A decision tree is said to be reduced if no variable is queried more than once on any root-to-leaf path. Without loss of generality, any decision tree can be reduced, so in our discussion, we will only consider reduced decision trees. The depth $\textrm{Depth}(T)$ of a decision tree $T$ is the length of the longest root-to-leaf path, and its size $\textrm{DTSize}(T)$ is the number of leaves. The decision tree complexity or the depth of $f$, denoted by $\textrm{Depth}(f)$, is defined to be the minimum depth of a decision tree computing $f$. Equivalently, $\textrm{Depth}(f)$ can also be seen as the minimum number of worst-case queries required to evaluate $f$. The size of a function $f$, denoted by $\textrm{DTSize}(f)$, is defined similarly, i.e.\ as the minimum size of a decision tree computing $f$. Since decision trees can be reduced, $\textrm{Depth}(f) \le n$ and $\textrm{DTSize}(f) \le 2^n$ for every $n$-variate function $f$. A function is said to be evasive if its depth is maximal, $\textrm{Depth}(f)=n$.
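As a concrete illustration of these definitions (a sketch of our own, not part of the paper; the nested-tuple encoding and function names are purely illustrative), a decision tree can be represented with leaves as bits and internal nodes as triples, and $\textrm{Depth}(T)$ and $\textrm{DTSize}(T)$ computed recursively:

```python
# A leaf is 0 or 1; an internal node is (i, left, right), meaning:
# query x_i, descend left on answer 0 and right on answer 1.

def evaluate(tree, x):
    """Evaluate the tree on input x (a tuple of bits)."""
    while not isinstance(tree, int):
        i, left, right = tree
        tree = right if x[i] else left
    return tree

def depth(tree):
    """Length of the longest root-to-leaf path."""
    if isinstance(tree, int):
        return 0
    _, left, right = tree
    return 1 + max(depth(left), depth(right))

def size(tree):
    """Number of leaves."""
    if isinstance(tree, int):
        return 1
    _, left, right = tree
    return size(left) + size(right)

# A reduced tree computing AND_2: query x_0; if 0 answer 0, else query x_1.
and2 = (0, 0, (1, 0, 1))
assert all(evaluate(and2, x) == (x[0] & x[1])
           for x in [(0, 0), (0, 1), (1, 0), (1, 1)])
assert depth(and2) == 2 and size(and2) == 3
```

The $\mbox{{\sc And}}_2$ example matches the general bounds above: its depth is $n=2$ (the function is evasive) and its size $3$ is below the trivial bound $2^n=4$.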
\paragraph*{Weighted decision trees} Weighted decision trees describe query complexity in settings where querying different input bits can have differing costs, and they arise naturally in recursive constructions. Formally, these are defined as follows: Let $w_i$ be the cost of querying variable $x_i$. For a decision tree $T$, its weighted depth with respect to the weight vector $[w_1,\ldots,w_n]$, denoted by $\textrm{Depth}_w(T,[w_1,w_2,...,w_n] )$, is the maximal sum of weights of the variables specified by the labels of nodes of $T$ on any root-to-leaf path. The weighted decision tree complexity of $f$, denoted by $\textrm{Depth}_w(f,[w_1,w_2,...,w_n] )$, is the minimum weighted depth of a decision tree computing $f$. Note that $\textrm{Depth}(f)$ is exactly $\textrm{Depth}_w(f,[1,1,\ldots ,1])$. The following fact is immediate from the definitions. \begin{fact}\label{fact:wtd-dec-tree} For any reduced decision tree $T$ computing an $n$-variate function, weights $w_1, \ldots, w_n$, and $i\in [n]$, \[ \textrm{Depth}_w(T,[w_1,\ldots , w_{i-1}, w_i+1, w_{i+1}, \ldots ,w_n] ) \le \textrm{Depth}_w(T,[w_1,w_2,...,w_n] ) +1. \] \end{fact} \paragraph*{Certificate Complexity} The certificate complexity of a function $f$, denoted $\textrm{C}(f)$, measures the number of variables that need to be assigned in the worst case to fix the value of $f$. More precisely, for a Boolean function $f:\boolfn{n}$ and an input $a\in \boolset{n}$, an $f$-certificate of $a$ is a subset $S \subseteq \{1,...,n\}$ such that the value of $f(a)$ can be determined by just looking at the bits of $a$ in set $S$. Such a certificate need not be unique. Let $\textrm{C}(f,a)$ denote the minimum size of an $f$-certificate for the input $a$. That is, \[\textrm{C}(f,a) = \min\left\{ |S| \mid S\subseteq [n]; \forall a'\in \{0,1\}^n, \left[\left(a'_j=a_j ~\forall j\in S\right) \implies f(a')=f(a)\right]\right\}. \] Using this definition, we can define several measures.
\begin{align*} \textrm{For ~}b\in\{0,1\},~~ \textrm{C}_b(f) & = \max\{ \textrm{C}(f,a) \mid a\in f^{-1}(b)\} \\ \textrm{C}(f) & = \max\{ \textrm{C}(f,a) \mid a\in \{0,1\}^n\} = \max\{\textrm{C}_0(f),\textrm{C}_1(f) \}\\ \textrm{C}_{avg}(f) & = 2^{-n}\sum_{a\in \{0,1\}^n} \textrm{C}(f,a) \\ \textrm{C}_{\min}(f) & = \min\{ \textrm{C}(f,a) \mid a\in \{0,1\}^n\} \end{align*} \paragraph*{Composed functions} For Boolean functions $f, g_1,g_2,\ldots ,g_n$ of arity $n, m_1, m_2, \ldots , m_n$ respectively, the composed function $f\circ(g_1,g_2,...,g_n)$ is a function of arity $\sum_i m_i$, and is defined as follows: for $a^i\in \{0,1\}^{m_i}$ for each $i\in [n]$, $f\circ(g_1,g_2,...,g_n)(a^1,a^2,...,a^n)=f(g_1(a^1),g_2(a^2),\ldots ,g_n(a^n))$. We call $f$ the outer function and $g_1,\ldots ,g_n$ the inner functions. For functions $f:\boolfn{n}$ and $g:\boolfn{m}$, the composed function $f\circ g$ is the function $f\circ (g,g,\ldots ,g):\boolfn{mn}$. The composed function $\mbox{{\sc Or}}_n\circ\mbox{{\sc And}}_m$ has a special name, $\mbox{{\sc Tribes}}_{n,m}$, and when $n=m$, we simply write $\mbox{{\sc Tribes}}_n$. Its dual is the function $\mbox{{\sc And}}_n\circ\mbox{{\sc Or}}_m$ that we denote $\mbox{{\sc Tribes}}^d_{n,m}$. (The dual of $f(x_1, \ldots, x_n)$ is the function $\neg f(\neg x_1, \ldots, \neg x_n)$.) For any function $f:\boolfn{n}$, that we will call the base function, the iterated composed function $f^{\otimes k}:\boolfn{n^k}$ is recursively defined as $f^{\otimes 1}=f$, $f^{\otimes k}=f\circ f^{\otimes (k-1)}$. The iterated composed functions for the base functions $\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2$ and $\mbox{{\sc Maj}}_3$ will interest us later. \paragraph*{Symmetric functions} A Boolean function is symmetric if its value depends only on the number of ones in the input, and not on the positions of the ones.
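To make the certificate measures concrete, the following brute-force Python sketch (our own, purely illustrative, and feasible only for small $n$) computes $\textrm{C}(f,a)$ by searching for a smallest certificate, and from it the measures $\textrm{C}_0$ and $\textrm{C}_1$:

```python
from itertools import combinations, product

def certificate(f, n, a):
    """C(f, a): size of a smallest S such that fixing a on S determines f."""
    for k in range(n + 1):
        for S in combinations(range(n), k):
            # S is a certificate iff f is constant on the subcube agreeing
            # with a on the coordinates in S.
            if all(f(x) == f(a)
                   for x in product((0, 1), repeat=n)
                   if all(x[j] == a[j] for j in S)):
                return k

def C_b(f, n, b):
    """C_b(f): worst-case certificate size over inputs in f^{-1}(b)."""
    return max(certificate(f, n, a)
               for a in product((0, 1), repeat=n) if f(a) == b)

AND3 = lambda x: int(all(x))
# For AND_n: any 0-input is certified by a single 0-bit, while the all-ones
# input needs all n bits -- matching C_0 = 1 and C_1 = n.
assert C_b(AND3, 3, 0) == 1
assert C_b(AND3, 3, 1) == 3
```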
\begin{proposition}\label{prop:symm_evasive} For every non-constant symmetric Boolean function $f: \boolfn{n}$, \begin{enumerate} \item $f$ is evasive (has $\textrm{Depth}(f)=n$). (See e.g.\ Lemma 14.19 in \cite{Jukna-BFCbook}.) \item Hence, for any weights $w_i$, $\textrm{Depth}_w(f,[w_1,w_2,...,w_n] )=\sum_i w_i$. \end{enumerate} \end{proposition} For a symmetric Boolean function $f: \boolfn{n}$, let $f_0,f_1,...,f_n\in \{0,1\}$ denote the values of the function $f$ on inputs of Hamming weight $0,1,...,n$ respectively. The $\textrm{Gap}$ of $f$ is defined as the length of the longest interval (minus one) on which $f_i$ is constant. That is, \[\textrm{Gap}(f) = \max_{0\leq a \leq b \leq n} \{b-a: f_a=f_{a+1}=...=f_b \}.\] Analogously, $\textrm{Gap}_{\min}(f)$ is the length of the shortest maximal constant interval (minus one); that is, setting $f_{-1}\neq f_0$ and $f_{n+1}\neq f_{n}$ for boundary conditions, \[\textrm{Gap}_{\min}(f)=\min_{0\leq a \leq b \leq n} \{b-a: f_{a-1}\neq f_a=f_{a+1}=...=f_b\neq f_{b+1} \}.\] \paragraph*{Fourier Representation of Boolean functions} We include here some basic facts about Fourier representation relevant to our work. For a wonderful comprehensive overview of this area, see \cite{o2014analysis}. Consider the inner product space of functions $\mathcal{V}=\{f: \boolset{n}\longrightarrow \mathbb{R}\}$ with the inner product defined as \[ \langle f,g \rangle = \frac{1}{2^n} \sum_{x\in \boolset{n}}f(x)g(x). \] For $S\subseteq [n]$, the function $\chi_{S}: \boolset{n}\longrightarrow \{-1,1\}$ defined by $\chi_{S}(x)=(-1)^{\sum_{i\in S}x_i}$ is the $\pm 1$ parity of the bits in $S$ and is therefore referred to as a parity function. The set of all parity functions $\{\chi_S: S\subseteq[n]\}$ forms an orthonormal basis for $\mathcal{V}$. Thus, every function $f\in \mathcal{V}$, in particular every Boolean function, has a unique representation $f=\sum_{S\subseteq[n]}\hat{f}(S)\chi_{S}$.
The coefficients $\{\hat{f}(S): S\subseteq [n]\}$ are called the Fourier coefficients (the Fourier spectrum) of $f$. The Fourier sparsity of $f$, denoted by $\textrm{spar}(f)$, is the number of non-zero Fourier coefficients in the expansion of $f$, i.e.\ $\lvert \{S \subseteq [n]: \hat{f}(S)\neq 0 \}\rvert$. It will be convenient for us to disregard the Fourier coefficient of the empty set. We therefore define $\sparn(f) = \lvert \{S \subseteq [n]: S\neq \emptyset; \hat{f}(S)\neq 0 \}\rvert$. For every $f$, $0\le \sparn(f) \le \textrm{spar}(f) \le \sparn(f)+1$, and only the constant functions have $\sparn=0$. Sparsity is related to decision tree complexity; large sparsity implies large depth. \begin{proposition}[see Proposition 3.16 in \cite{o2014analysis}]\label{prop:depth-sparsity} For a Boolean function $f:\boolfn{n}$, $\log \textrm{spar}(f)\le \log \textrm{DTSize}(f)+ \textrm{Depth}(f)\le 2\textrm{Depth}(f)$. \end{proposition} In our discussion, we will be interested in the effect of restrictions on the Fourier representation of a function. Of particular interest to us will be restrictions to subcubes. A subcube is the set of all inputs consistent with a partial assignment of the $n$ bits. Formally, a subcube $J$ is a partial assignment (to some of the $n$ variables) defined by $(S,\rho)$, where $S\subseteq [n]$ is the set of input bits fixed by $J$ and $\rho: S \longrightarrow \{0,1\}$ is the map according to which the bits in $S$ are fixed. A subcube is a special type of affine subspace; hence, inheriting notation from subspaces, for $J=(S,\rho)$, the cardinality of $S$ is called the co-dimension of $J$, and is denoted by $\textrm{co-dim}(J)$. A function $f:\boolfn{n}$ restricted to $J=(S,\rho)$ is the function $f|J:\{0,1\}^{n- |S|} \longrightarrow \{0,1\}$ obtained by fixing the variables in $S$ according to $\rho$. The following result quantifies the effect of subcube restriction on the Fourier spectrum.
\begin{restatable}{theorem}{thmrestsparsity}[\cite{ShpilkaTalVolk17,MandeSanyal-FSTTCS20}] \label{thm:rest-sparsity} Let $f$ be any Boolean function $f:\boolfn{n}$. Fix any $S \subseteq [n]$, $S \neq \emptyset$. If $f|(S,\rho)$ is a constant for some $\rho: S \longrightarrow \{0,1\}$, then for every $\rho': S \longrightarrow \{0,1\}$, $\sparn(f|(S,\rho')) \le \sparn(f)/2$. \end{restatable} This theorem follows from \cite{ShpilkaTalVolk17} (in the proof of Theorem 1.7 there) and \cite{MandeSanyal-FSTTCS20} (see the discussion in Sections 2.1 and 3.1 there). Both papers consider affine subspaces, of which subcubes are a special case. Since the result is not explicitly stated in this form in either paper, for completeness we give a proof for the subcube case in the appendix. The subcube kill number of $f$, denoted by $\textrm{K}(f)$, is the co-dimension of a largest subcube on which $f$ is constant, and is defined as \[ \textrm{K}(f)=\min \{\textrm{co-dim}(J) \mid f|J \text{ is constant} \}. \] \paragraph*{Decision Tree Rank} For a rooted binary tree $T$, the rank of the tree is the rank of its root node, where the rank of each node of the tree is defined recursively as follows: For a leaf node $u$, $\textrm{Rank}(u)=0$. For an internal node $u$ with children $v,w$, \[ \textrm{Rank}(u) = \left\{ \begin{array}{ll} \textrm{Rank}(v) + 1 & \textrm{~~if~} \textrm{Rank}(v)=\textrm{Rank}(w) \\ \max\{\textrm{Rank}(v),\textrm{Rank}(w)\} & \textrm{~~if~} \textrm{Rank}(v)\neq \textrm{Rank}(w) \\ \end{array} \right. \] The following proposition lists some known properties of the rank function for binary trees. \begin{proposition}\label{prop:prop_rank_tree} For any binary tree $T$, \begin{enumerate} \item \label{item-rank-size} (Rank and Size relationship): $\textrm{Rank}(T) \le \log(\textrm{DTSize}(T)) \le \textrm{Depth}(T)$. \item \label{item-monotonicity} (Monotonicity of the Rank): Let $T'$ be any subtree of $T$, and let $T''$ be an arbitrary binary tree of higher rank than $T'$.
If $T'$ is replaced by $T''$ in $T$, then the rank of the resulting tree is not less than the rank of $T$. \item \label{item-leaf-depth-rank} (Leaf Depth and Rank): If all leaves in $T$ have depth at least $r$, then $\textrm{Rank}(T)\ge r$. \end{enumerate} \end{proposition} For a Boolean function $f$, the rank of $f$, denoted $\textrm{Rank}(f)$, is the minimum rank of a decision tree computing $f$. From \cref{prop:prop_rank_tree}(\ref{item-monotonicity}), we see that the rank of a subfunction of $f$ (a function obtained by assigning values to some variables of $f$) cannot exceed the rank of the function itself. \begin{proposition}\label{prop:rank_subfn} (Rank of a subfunction): Let $f_S$ be a subfunction obtained from $f$ by fixing the values of the variables in some set $S\subseteq [n]$. Then $\textrm{Rank}(f_S) \le \textrm{Rank}(f)$. \end{proposition} The following rank and size relationship is known for Boolean functions. \begin{proposition}[Lemma 1 \cite{EH-IC1989}]\label{prop:rank_size} For a non-constant Boolean function $f: \boolfn{n}$, $$\textrm{Rank}(f)\le \log \textrm{DTSize}(f) \le \textrm{Rank}(f)\log \left(\frac{e n}{\textrm{Rank}(f)}\right).$$ It follows that $\textrm{Rank}(f) \in \Theta(\log \textrm{DTSize}(f))$ if and only if $\textrm{Rank}(f) = \Omega(n)$. However, even when $\textrm{Rank}(f)\in o(n)$, it characterizes $\log\textrm{DTSize}(f)$ up to a logarithmic factor, since for every $f$, $\textrm{Rank}(f) \in \Omega(\log\textrm{DTSize}(f)/\log n)$. For symmetric functions, $\textrm{Rank}$ is completely characterized in terms of $\textrm{Gap}$. \begin{proposition}[Lemma C.6 \cite{ABDORU10}]\label{lem:ABDORU} For a symmetric Boolean function $f: \boolfn{n}$, $\textrm{Rank}(f) = n - \textrm{Gap}(f)$.
\end{proposition} \begin{remark}\label{rem:measures-neg-dual} For (simple) deterministic possibly weighted decision trees, each of the measures \textrm{DTSize}, \textrm{Depth}, and \textrm{Rank}, is the same for a Boolean function $f$, its complement $\neg f$, and its dual $f^d$. \end{remark} \paragraph*{Relations and Search problems} A relation $R \subseteq X \times W$ is said to be $X$-complete, or just complete, if its projection to $X$ equals $X$. That is, for every $x\in X$, there is a $w\in W$ with $(x,w)\in R$. For an $X$-complete relation $R$, where $X$ is of the form $\boolset{n}$ for some $n$, the search problem $\textrm{SearchR}$ is as follows: given an $x\in X$, find a $w\in W$ with $(x,w)\in R$. A decision tree for $\textrm{SearchR}$ is defined exactly as for Boolean functions; the only difference is that leaves are labelled with elements of $W$, and we require that for each input $x$, if the unique leaf reached on $x$ is labelled $w$, then $(x,w) \in R$. The rank of the relation, $\textrm{Rank}(R)$, is the minimum rank of a decision tree solving the $\textrm{SearchR}$ problem. A Boolean function $f:\boolfn{n}$ naturally defines a complete relation $R_f$ over $X=\boolset{n}$ and $W=\{0,1\}$, with $R_f = \{(x,f(x)) \mid x\in X \}$, and $\textrm{Rank}(f) = \textrm{Rank}(R_f)$. \section{Game Characterisation for Rank} \label{sec:game} In this section we observe that the rank of a Boolean function is characterised by the value of a Prover-Delayer game introduced by Pudl{\'a}k and Impagliazzo in \cite{pudlak2000lower}. As mentioned in \cref{sec:intro}, the game was originally described for searchCNF problems on unsatisfiable clause sets. The appropriate analogue for a Boolean function $f$, or its relation $R_f$, and even for arbitrary $X$-complete relations $R\subseteq X\times W$, is as follows: The game is played by two players, the Prover and the Delayer, who construct a (partial) assignment $\rho$ in rounds. Initially, $\rho$ is empty.
In each round, the Prover queries a variable $x_i$ not set by $\rho$. The Delayer responds with a bit value $0$ or $1$ for $x_i$, or defers the choice to the Prover. In the latter case, the Prover can choose the value for the queried variable, and the Delayer scores one point. The game ends when there is a $w\in W$ such that for all $x$ consistent with $\rho$, $(x,w)\in R$. (Thus, for a Boolean function $f$, the game ends when $f\vert_\rho$ is a constant function.) The value of the game, $\textrm{Value}(R)$, is the maximum $k$ such that the Delayer can always score at least $k$ points, no matter how the Prover plays. \begin{theorem}[implied from \cite{pudlak2000lower,Kullmann-ECCC-99,esteban2003combinatorial}] \label{thm:game-rank} For any $X$-complete relation $R \subseteq X \times W$, where $X = \boolset{n}$, $\textrm{Rank}(R) = \textrm{Value}(R)$. In particular, for a Boolean function $f: \boolfn{n}$, $\textrm{Rank}(f) = \textrm{Value}(R_f)$. \end{theorem} The proof of the theorem follows from the next two lemmas. \begin{lemma}[implicit in \cite{Kullmann-ECCC-99}]\label{lem:game-rank-ub} For an $X$-complete relation $R \subseteq \boolset{n}\times W$, in the Prover-Delayer game, the Prover has a strategy which restricts the Delayer's score to at most $\textrm{Rank}(R)$ points. \end{lemma} \begin{proof} The Prover chooses a decision tree $T$ for $\textrm{SearchR}$ and queries variables starting from the root and working down the tree. If the Delayer responds with a $0$ or a $1$, the Prover descends into the left or right subtree respectively. If the Delayer defers the decision to the Prover, then the Prover sets the variable to that value for which the corresponding subtree has smaller rank (breaking ties arbitrarily), and descends into that subtree. We claim that such a ``tree-based'' strategy restricts the Delayer's score to $\textrm{Rank}(T)$ points. The proof is by induction on $\textrm{Depth}(T)$. \begin{enumerate} \item Base Case: $\textrm{Depth}(T)=0$.
This means that $\exists w\in W$, $X\times \{w\}\subseteq R$. Hence the game terminates with the empty assignment and the Delayer scores 0. \item Induction Step: $\textrm{Depth}(T)\ge 1$. Let $x_i$ be the variable at the root node, and let $T_0$ and $T_1$ be the left and right subtrees. The Prover queries the variable $x_i$. Note that for all $b$, $\textrm{Depth}(T_b) \le \textrm{Depth}(T)-1$, and $T_b$ is a decision tree for the search problem on $R_{i,b}\triangleq \{ (x,w)\in R \mid x_i=b\} \subseteq X_{i,b} \times W$, where $X_{i,b} = \{x\in X\mid x_i=b\}$. If the Delayer responds with a bit $b$, then by induction, the subsequent score of the Delayer is limited to $\textrm{Rank}(T_b) \le \textrm{Rank}(T)$. Since the current round does not increase the score, the overall Delayer score is limited to $\textrm{Rank}(T)$. If the Delayer defers the decision to the Prover, the Delayer gets one point in the current round. Subsequently, by induction, the Delayer's score is limited to $\min(\textrm{Rank}(T_0), \textrm{Rank}(T_1))$; by definition of rank, this is at most $\textrm{Rank}(T)-1$. So the overall Delayer score is again limited to $\textrm{Rank}(T)$. \end{enumerate} In particular, if the Prover chooses a rank-optimal tree $T_R$, then the Delayer's score is limited to $\textrm{Rank}(T_R) = \textrm{Rank}(R)$ as claimed. \end{proof} \begin{lemma}[implicit in \cite{esteban2003combinatorial}]\label{lem:game-rank-lb} For an $X$-complete relation $R \subseteq \boolset{n}\times W$, in the Prover-Delayer game, the Delayer has a strategy which always scores at least $\textrm{Rank}(R)$ points. \end{lemma} \begin{proof} The Delayer strategy is as follows: When variable $x_i$ is queried, the Delayer responds with $b\in\{0,1\}$ if $\textrm{Rank}(R_{i,b}) > \textrm{Rank}(R_{i,1-b})$, and otherwise defers. We show that the Delayer can always score $\textrm{Rank}(R)$ points using this strategy. The proof is by induction on the number of variables $n$.
Note that if $\textrm{Rank}(R)=0$, then there is nothing to prove. If $\textrm{Rank}(R)\ge1$, then the Prover must query at least one variable. \begin{enumerate} \item Base Case: $n=1$. If $\textrm{Rank}(R)=1$, then the Prover must query the variable, and the Delayer strategy defers the choice, scoring one point. \item Induction Step: $n>1$. Let $x_i$ be the first variable queried by the Prover. If $\textrm{Rank}(R_{i,0}) = \textrm{Rank}(R_{i,1})$, then the Delayer defers, scoring one point in this round. Subsequently, suppose the Prover sets $x_i$ to $b$. The game is now played on $R_{i,b}$, and by induction, the Delayer can subsequently score at least $\textrm{Rank}(R_{i,b})$ points. But also, because of the equality, we have $\textrm{Rank}(R) \le 1+\textrm{Rank}(R_{i,b})$, as witnessed by a decision tree that first queries $x_i$ and then uses rank-optimal trees on each branch. Hence the overall Delayer score is at least $\textrm{Rank}(R)$. If $\textrm{Rank}(R_{i,b}) > \textrm{Rank}(R_{i,1-b})$, then the Delayer chooses $x_i=b$ and the subsequent game is played on $R_{i,b}$. The subsequent (and hence overall) score is, by induction, at least $\textrm{Rank}(R_{i,b})$. But $\textrm{Rank}(R) \le \textrm{Rank}(R_{i,b})$, as witnessed by a decision tree that first queries $x_i$ and then uses rank-optimal trees on each branch. \end{enumerate} \end{proof} \cref{lem:game-rank-ub,lem:game-rank-lb} give us a way to prove rank upper and lower bounds for Boolean functions. In a Prover-Delayer game for $R_f$, exhibiting a Prover strategy which restricts the Delayer to at most $r$ points gives an upper bound of $r$ on $\textrm{Rank}(f)$. Similarly, exhibiting a Delayer strategy which scores at least $r$ points irrespective of the Prover strategy gives a lower bound of $r$ on $\textrm{Rank}(f)$. In \cite{BGL13}, an asymmetric version of this game is defined.
In each round, the Prover queries a variable $x$; the Delayer specifies values $p_0,p_1 \in [0,1]$ adding up to $1$; the Prover picks a value $b$; and the Delayer adds $\log \frac{1}{p_b}$ to their score. Let $\textrm{ASym-Value}$ denote the maximum score the Delayer can always achieve, independent of the Prover moves. Note that $\textrm{ASym-Value}(R) \ge \textrm{Value}(R)$; an asymmetric-game Delayer can mimic a symmetric-game Delayer by using $p_b=1$ for choice $b$ and $p_0=p_1=1/2$ for deferring. As shown in \cite{BGL13}, for the search CNF problem, the value of this asymmetric game is exactly the optimal leaf-size of a decision tree. We note below that this holds for the $\textrm{SearchR}$ problem more generally. \begin{proposition}[implicit in \cite{BGL13}] \label{prop:game-size} For any $X$-complete relation $R \subseteq X \times W$, where $X = \boolset{n}$, $\log\textrm{DTSize}(R) = \textrm{ASym-Value}(R)$. In particular, for a Boolean function $f: \boolfn{n}$, $\log\textrm{DTSize}(f) = \textrm{ASym-Value}(R_f)$. \end{proposition} (In \cite{BGL13}, the bounds have $\log (S/2)$; this is because $S$ there counts all nodes in the decision tree, while here we count only leaves.) Thus we have the relationship \[\textrm{Rank}(f) = \textrm{Value}(R_f) \le \textrm{ASym-Value}(R_f) = \log \textrm{DTSize}(f).\] This relationship explains where the slack may lie in the inequalities from \cref{prop:rank_size} relating $\textrm{Rank}(f)$ and $\log \textrm{DTSize} (f)$. The symmetric game focuses on the more complex subtree, ignoring the contribution from the less complex subtree (unless both are equally complex), and thus characterizes rank. The asymmetric game takes a weighted contribution of both subtrees and is thus able to characterize size. \section{The Rank of some natural functions} \label{sec:simple-calc} For symmetric functions, rank can be easily calculated using \cref{lem:ABDORU}.
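As an illustration of this calculation (a Python sketch of our own, not from the paper), one can compute $\textrm{Gap}(f)$ directly from the value vector $(f_0,\ldots,f_n)$ of a symmetric function and then apply $\textrm{Rank}(f)=n-\textrm{Gap}(f)$; the checks below match the values for $\mbox{{\sc And}}_n$, $\mbox{{\sc Parity}}_n$, and $\mbox{{\sc Maj}}_{2k+1}$:

```python
def gap(values):
    """Gap(f) for symmetric f given by its value vector (f_0, ..., f_n):
    the length of the longest constant interval, minus one."""
    best = run = 0
    for i in range(1, len(values)):
        run = run + 1 if values[i] == values[i - 1] else 0
        best = max(best, run)
    return best

def symmetric_rank(values):
    """Rank(f) = n - Gap(f) for a symmetric Boolean function f."""
    n = len(values) - 1
    return n - gap(values)

n = 5
AND_vals = [0] * n + [1]                            # f_k = 1 iff k = n
PARITY_vals = [k % 2 for k in range(n + 1)]
MAJ_vals = [int(k > n // 2) for k in range(n + 1)]  # Maj_5, i.e. 2k+1 with k=2
assert symmetric_rank(AND_vals) == 1
assert symmetric_rank(PARITY_vals) == n
assert symmetric_rank(MAJ_vals) == n // 2 + 1       # k + 1
```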
In \cref{tab:tabulation} we tabulate various measures for some standard symmetric functions. As can be seen from the $\mbox{{\sc Or}}_n$ and $\mbox{{\sc And}}_n$ functions, the $\textrm{Rank}(f)$ measure is not polynomially related to the measures $\textrm{Depth}(f)$ or certificate complexity $\textrm{C}(f)$. \begin{table}[h] \[\begin{array}{|c|c|c|c|c|c|c|} \hline f & \textrm{Depth} & \textrm{C}_0 & \textrm{C}_1 & \textrm{C} & \textrm{Gap} & \textrm{Rank} \\ \hline 0 ~\textrm{or}~ 1 & 0 & 0 & 0 & 0 & n & 0 \\ \hline \mbox{{\sc And}}_n & n & 1 & n & n & n-1 & 1 \\ \hline \mbox{{\sc Or}}_n & n & n & 1 & n & n-1 & 1 \\ \hline \mbox{{\sc Parity}}_n & n & n & n & n & 0 & n \\ \hline \mbox{{\sc Maj}}_{2k} & 2k & k & k+1 & k+1 & k & k \\ \hline \mbox{{\sc Maj}}_{2k+1} & 2k+1 & k+1 & k+1 & k+1 & k & k+1 \\ \hline \begin{array}{c}\mbox{{\sc Thr}}^k_n \\(k \ge 1) \end{array} & n & n-k+1 & k & \max \begin{Bmatrix}n-k+1,\\k\end{Bmatrix} & \max\begin{Bmatrix}k-1,\\ n-k \end{Bmatrix} & n-\textrm{Gap} \\ \hline \end{array}\] \caption{Some simple symmetric functions and their associated complexity measures} \label{tab:tabulation} \end{table} For two composed functions that will be crucial in our discussions in \cref{sec:rank-rels}, we can directly calculate the rank as described below. (The rank can also be calculated using \cref{thm:game-rank}, see \cref{sec:game-proofs}, or using \cref{thm:compose-rank-bounds}, which is much more general. We show these specific bounds here since we use them in \cref{sec:rank-rels}.) \begin{theorem}\label{thm:rank-tribes} For every $n\ge 1$, \begin{enumerate} \item $\textrm{Rank}(\mbox{{\sc Tribes}}_{n,m}) = \textrm{Rank}(\mbox{{\sc Tribes}}^d_{n,m}) = n$ for $m\ge 2$. \item $\textrm{Rank}(\mbox{{\sc And}}_n \circ \mbox{{\sc Parity}}_m) = n(m-1) +1$ for $m\ge 1$. \end{enumerate} \end{theorem} We prove this theorem by proving each of the lower and upper bounds separately in a series of lemmas below.
The lemmas use the following property about the rank function. \begin{proposition}\label{prop:compose_rank_dt} (Composition of Rank): Let $T$ be a rooted binary tree with depth $\ge 1$, rank $r$, and with leaves labelled by $0$ and $1$. Let $T_0, T_1$ be arbitrary rooted binary trees of ranks $r_0, r_1$ respectively. For $b\in\{0,1\}$, attach $T_b$ to each leaf of $T$ labelled $b$, to obtain a rooted binary tree $T'$ of rank $r'$. \begin{enumerate} \item $r' \le r + \max\{r_0,r_1\}$. Furthermore, if $T$ is a complete binary tree, and if $r_0=r_1$, then this is an equality; $r'=r+r_0$. \item If every non-trivial subtree (more than one leaf) of $T$ has both a $0$ leaf and a $1$ leaf, then $r'\ge r + \max\{r_0,r_1\} -1$. If, furthermore, $T$ is a complete binary tree, then this is an equality when $r_0\neq r_1$. \end{enumerate} \end{proposition} \begin{proof} The upper bound on $r'$ follows from the definition of rank when $r_0=r_1$, in which case it also gives equality for complete $T$. When $r_0\neq r_1$, it follows from \cref{prop:prop_rank_tree}(\ref{item-monotonicity}). For non-trivially labelled $T$, we establish the lower bound by induction on $d=\textrm{Depth}(T)$. In the base case $d=1$, $T$ has one 0-leaf and one 1-leaf, and $r=1$. By definition of rank, $r'$ satisfies the claimed inequality. For the inductive step, let $\textrm{Depth}(T)= k > 1$. Let $v$ be the root of $T$, and let $T_\ell$, $T_r$ be its left and right sub-trees respectively, with ranks $r_\ell$ and $r_r$ respectively. Both $\textrm{Depth}(T_\ell)$ and $\textrm{Depth}(T_r)$ are at most $k-1$, and at least one of these is exactly $k-1 \ge 1$. Also, at least one of $r_\ell, r_r$ is non-zero. Let $T'_\ell$ be the tree obtained by replacing the 0 and 1 leaves of $T_\ell$ by $T_0$ and $T_1$ respectively; let its rank be $r'_\ell$. Similarly construct $T'_r$, with rank $r'_r$. Then $T'$ has root $v$ with left and right subtrees $T'_\ell$ and $T'_r$.
If $r_\ell=0$, then $r=r_r$ and $\textrm{Depth}(T_r) = k-1\ge 1$. By the induction hypothesis, $r_r + \max\{r_0,r_1\} -1 \le r'_r$. Since $r' \ge r'_r$, the claimed bound follows. If $r_r=0$, a symmetric argument applies. If both $r_\ell, r_r$ are positive, then by the induction hypothesis, $r_\ell + \max\{r_0,r_1\} -1 \le r'_\ell$ and $r_r + \max\{r_0,r_1\} -1 \le r'_r$. If $r_\ell=r_r$ then $r=r_\ell+1$, and by definition of rank, $r' \ge 1 + \min\{r'_\ell, r'_r\} \ge r_\ell + \max\{r_0,r_1\} = r+ \max\{r_0,r_1\} -1$, as claimed. On the other hand, if $r_\ell\neq r_r$, then $r = \max\{r_\ell,r_r\}$, and by definition of rank, $r' \ge \max\{r'_\ell, r'_r\} \ge \max\{r_\ell,r_r\} + \max\{r_0,r_1\} -1 = r+ \max\{r_0,r_1\} -1$, as claimed. For a complete binary tree $T$ satisfying the labelling requirements, $r_\ell=r_r=r-1$. The same arguments, simplified to this situation, show the claimed equality. \end{proof} We first establish the bounds for $\mbox{{\sc Tribes}}^d_{n,m} = \bigwedge_{i\in [n]} \bigvee_{j\in [m]} x_{i,j}$. \begin{lemma}\label{lem:rank-tribes-ub} For every $n,m \ge 1$, $\textrm{Rank}(\mbox{{\sc Tribes}}^d_{n,m}) \le n$. \end{lemma} \begin{proof} We show the bound by giving a recursive construction and bounding the rank by induction on $n$. In the base case, $n=1$: $\mbox{{\sc Tribes}}^d_{1,m} = \mbox{{\sc Or}}_m$, which has rank 1. For the inductive step, $n> 1$. For $j\le n$, let $T_{j,m}$ denote the recursively constructed trees for $\mbox{{\sc Tribes}}^d_{j,m}$. Take the tree $T$ which is $T_{1,m}$ on variables $x_{n,j}$, $j\in[m]$. Attach the tree $T_{n-1,m}$ on variables $x_{i,j}$ for $i\in[n-1]$, $j\in[m]$, to all the 1-leaves of $T$, to obtain $T_{n,m}$. It is straightforward to see that this tree computes $\mbox{{\sc Tribes}}^d_{n,m}$. Using \cref{prop:compose_rank_dt} and induction, we obtain $\textrm{Rank}(T_{n,m}) \le \textrm{Rank}(T_{1,m}) + \textrm{Rank}(T_{n-1,m}) \le 1 + (n-1) = n$.
\end{proof} \begin{remark}\label{rem:and-composed-f-ub} More generally, this construction shows that $\textrm{Rank}(\mbox{{\sc And}}_n\circ f) \le n\textrm{Rank}(f)$. \end{remark} \begin{lemma}\label{lem:rank-tribes-lb} For every $n\ge 1$ and $m \ge 2$, $\textrm{Rank}(\mbox{{\sc Tribes}}^d_{n,m}) \ge n$. \end{lemma} \begin{proof} We prove this by induction on $n$. The base case, $n=1$, is straightforward: $\mbox{{\sc Tribes}}^d_{1,m}$ is the function $\mbox{{\sc Or}}_m$, whose rank is $1$. For the inductive step, let $n> 1$, and consider any decision tree $Q$ for $\mbox{{\sc Tribes}}^d_{n,m}$. Without loss of generality (by renaming variables if necessary), let $x_{1,1}$ be the variable queried at the root node. Let $Q_0$ and $Q_1$ be the left and the right subtrees of $Q$. Then $Q_0$ computes the function $\mbox{{\sc And}}_n\circ(\mbox{{\sc Or}}_{m-1}, \mbox{{\sc Or}}_{m},..., \mbox{{\sc Or}}_{m})$, and $Q_1$ computes $\mbox{{\sc Tribes}}^d_{n-1,m}$, on appropriate variables. For $m\ge 2$, $\mbox{{\sc Tribes}}^d_{n-1,m}$ is a sub-function of $\mbox{{\sc And}}_n\circ(\mbox{{\sc Or}}_{m-1}, \mbox{{\sc Or}}_{m},..., \mbox{{\sc Or}}_{m})$, and so \cref{prop:rank_subfn} implies that $\textrm{Rank}(Q_0)\ge \textrm{Rank}(\mbox{{\sc And}}_n\circ(\mbox{{\sc Or}}_{m-1}, \mbox{{\sc Or}}_{m},..., \mbox{{\sc Or}}_{m})) \ge \textrm{Rank}(\mbox{{\sc Tribes}}^d_{n-1,m})$. By induction, $\textrm{Rank}(Q_1) \ge \textrm{Rank}(\mbox{{\sc Tribes}}^d_{n-1,m})\ge n-1$. Hence, by definition of rank, $\textrm{Rank}(Q) \ge 1+\min\{\textrm{Rank}(Q_0),\textrm{Rank}(Q_1)\} \ge n$. Since this holds for every decision tree $Q$ for $\mbox{{\sc Tribes}}^d_{n,m}$, we conclude that $\textrm{Rank}(\mbox{{\sc Tribes}}^d_{n,m})\ge n$, as claimed. \end{proof} Next, we establish the bounds for $\mbox{{\sc And}}_n \circ \mbox{{\sc Parity}}_m = \bigwedge_{i\in [n]} \bigoplus_{j\in [m]} x_{i,j}$. The upper bound below is slightly better than the one implied by \cref{rem:and-composed-f-ub}.
\begin{lemma}\label{lem:and-parity-ub} For every $n,m \ge 1$, $\textrm{Rank}(\mbox{{\sc And}}_n \circ \mbox{{\sc Parity}}_m) \le n(m-1) +1$. \end{lemma} \begin{proof} Recursing on $n$, we construct decision trees $T_{n,m}$ for $\mbox{{\sc And}}_n \circ \mbox{{\sc Parity}}_m$, as in \cref{lem:rank-tribes-ub}. By induction on $n$, we bound the rank, additionally using the fact that the rank-optimal decision tree for $\mbox{{\sc Parity}}_m$ is a complete binary tree. Base Case: $n=1$. $\mbox{{\sc And}}_1 \circ \mbox{{\sc Parity}}_m = \mbox{{\sc Parity}}_m$. From \cref{tab:tabulation}, $\textrm{Rank}(\mbox{{\sc Parity}}_m)=m$; let $T_{1,m}$ be the optimal decision tree computing $\mbox{{\sc Parity}}_m$. Inductive Step: $n> 1$. For $j\le n$, let $T_{j,m}$ denote the recursively constructed trees for $\mbox{{\sc And}}_j\circ\mbox{{\sc Parity}}_{m}$. Take the tree $T$ which is $T_{1,m}$ on variables $x_{n,j}$, $j\in[m]$. Attach the tree $T_{n-1,m}$ on variables $x_{i,j}$ for $i\in[n-1]$, $j\in[m]$, to all the 1-leaves of $T$, to obtain $T_{n,m}$. It is straightforward to see that this tree computes $\mbox{{\sc And}}_n\circ\mbox{{\sc Parity}}_{m}$. By induction, $1 \le \textrm{Rank}(T_{n-1,m}) \le (n-1)(m-1)+1$. Since we do not attach anything to the 0-leaves of $T_{1,m}$ (or equivalently, we attach a rank-0 tree to these leaves), and since $T_{1,m}$ is a complete binary tree, the second statement in \cref{prop:compose_rank_dt} yields $\textrm{Rank}(T_{n,m}) = \textrm{Rank}(T_{1,m}) + \textrm{Rank}(T_{n-1,m}) -1$. Hence $\textrm{Rank}(T_{n,m}) \le n(m-1)+1$, as claimed. \end{proof} \begin{lemma}\label{lem:and-parity-lb} For every $n, m_1, m_2, \ldots , m_n\ge 1$, and functions $g_1, g_2, \ldots , g_n$ with $g_i \in \{\mbox{{\sc Parity}}_{m_i}, \neg\mbox{{\sc Parity}}_{m_i}\}$ for each $i$, $\textrm{Rank}(\mbox{{\sc And}}_n \circ (g_1,g_2,...,g_n)) \ge (\sum_{i=1}^n (m_i-1))+1$. In particular, $\textrm{Rank}(\mbox{{\sc And}}_n\circ \mbox{{\sc Parity}}_m)\ge n(m-1)+1$.
\end{lemma} \begin{proof} We proceed by induction on $n$. Let $h$ be the function $\mbox{{\sc And}}_n \circ (g_1,g_2,...,g_n)$. Base Case: $n=1$. $h = g_1$. Note that for all functions $f$, $\textrm{Rank}(f)=\textrm{Rank}(\neg~f)$. So $\textrm{Rank}(h)=\textrm{Rank}(\mbox{{\sc Parity}}_{m_1}) = m_1$. Inductive Step: $n>1$. We proceed by induction on $M=\sum_{i=1}^n m_i$. \begin{enumerate} \item Base Case: $M=n$. Each $m_i$ is equal to $1$. So $h$ is the conjunction of $n$ literals on distinct variables. (A literal is a variable or its negation.) Hence $\textrm{Rank}(h) = \textrm{Rank}(\mbox{{\sc And}}_n) = 1$. \item Inductive Step: $M>n>1$. Consider any decision tree $Q$ computing $h$. Without loss of generality (by renaming variables if necessary), let $x_{1,1}$ be the variable queried at the root node. Let $Q_0$ and $Q_1$ be the left and the right subtrees of $Q$. For $b\in\{0,1\}$, let $g_{1b}$ denote the function $g_1$ restricted to $x_{1,1}=b$. Then $Q_b$ computes the function $\mbox{{\sc And}}_n\circ(g_{1b},g_2, \ldots , g_n)$ on appropriate variables. If $m_1=1$, then the functions $g_{10},g_{11}$ are constant functions, one 0 and the other 1. So one of $Q_0,Q_1$ computes the constant function $0$, and the other computes $\mbox{{\sc And}}_{n-1}\circ (g_2,...,g_n)$. Using induction on $n$, we conclude \[\textrm{Rank}(Q) \ge \textrm{Rank}(\mbox{{\sc And}}_{n-1}\circ (g_2,...,g_n)) \ge \left[\sum_{i=2}^n(m_i-1)\right]+1 = \left[\sum_{i=1}^n(m_i-1)\right]+1. \] For $m_1\ge 2$, $\{g_{10},g_{11}\}=\{\mbox{{\sc Parity}}_{m_1-1},\neg \mbox{{\sc Parity}}_{m_1-1}\}$. So one of $Q_0,Q_1$ computes $\mbox{{\sc And}}_{n}\circ (\mbox{{\sc Parity}}_{m_1-1},g_2,...,g_n)$, and the other computes $\mbox{{\sc And}}_{n}\circ (\neg \mbox{{\sc Parity}}_{m_1-1},g_2,...,g_n)$. Using induction on $M$, we obtain \[ \textrm{Rank}(Q) \ge 1 + \min_b \textrm{Rank}(Q_b) \ge 1 + (m_1-2)+\left[\sum_{i=2}^n (m_i-1)\right] + 1 = \left[\sum_{i=1}^n (m_i-1)\right] + 1.
\] Since this holds for every decision tree $Q$ for $h$, the induction step is proved. \end{enumerate} \end{proof} \section{Relation between Rank and other measures} \label{sec:rank-rels} \subsection{Relating Rank to Depth and Sparsity} \label{sec:rank-depth} From \cref{prop:prop_rank_tree,prop:rank_size}, we know that $\textrm{Depth}(f)$ is at least $\textrm{Rank}(f)$. In the other direction, the $\mbox{{\sc And}}$ function with rank $1$ and depth $n$ shows that $\textrm{Depth}(f)$ cannot be bounded from above by any function of $\textrm{Rank}(f)$ alone. Similarly, we know from \cref{prop:depth-sparsity} that $\textrm{Depth}(f)$ is bounded from below by $\log\textrm{spar}(f)/2$, and yet, as witnessed by the $\mbox{{\sc Parity}}$ function with depth $n$ and sparsity 1, it cannot be bounded from above by any function of $\log\textrm{spar}(f)$ alone. We show in this section that a combination of these two measures does bound $\textrm{Depth}(f)$ from above. Thus, in analogy to \cref{prop:depth-sparsity,prop:rank_size}, we see where $\textrm{Depth}(f)$ is sandwiched: \[ \max\{\textrm{Rank}(f),\log\textrm{spar}(f)/2\} \le \textrm{Depth}(f) \le \textrm{Rank}(f) (1+\log\textrm{spar}(f))\] To establish the upper bound, we first observe that subcube kill number is bounded above by rank. \begin{lemma}\label{lemma:kill} For every Boolean function $f$, $\textrm{K}(f) \le \max_{\text{subcube } J } \textrm{K}(f|J) \le \textrm{Rank}(f)$. \end{lemma} \begin{proof} The first inequality holds since $f=f|J$ for the unique subcube $J$ of codimension 0. Next, note that showing $\textrm{K}(g)\le \textrm{Rank}(g)$ for every boolean function $g$ suffices to prove the lemma. This is because if we prove this, then for every subcube $J$, $\textrm{K}(f|J) \le \textrm{Rank}(f|J) \le \textrm{Rank}(f)$; the latter inequality follows by monotonicity of rank (\cref{prop:rank_subfn}). 
To show $\textrm{K}(g)\le \textrm{Rank}(g)$, observe the following property of rank, which follows from the definition: for every internal node $v$ in a tree, at least one of its children has rank strictly less than the rank of $v$. Now, let $T$ be a rank-optimal tree for $g$, of rank $r$. Starting from the root in $T$, traverse down the tree in the direction of smaller rank until a leaf is reached. Using the above property, we see that we reach a leaf node in $T$ at depth at most $r$. The variables queried on the path leading to the leaf node, and their settings consistent with the path, give a subcube of co-dimension at most $r$. On this subcube, since a decision tree leaf is reached, $g$ becomes constant, proving the claim. \end{proof} Combining \cref{lemma:kill} and \cref{thm:rest-sparsity}, we show the following. \begin{theorem} \label{thm:rank-sparsity-depth} For every Boolean function $f: \boolfn{n}$, $$ \textrm{Depth}(f) \le \textrm{Rank}(f)(1 + \log (\textrm{spar}(f))).$$ The inequality is tight as witnessed by the $\mbox{{\sc Parity}}$ function. \end{theorem} \begin{proof} Recall that $\sparn(f)$ refers to the number of non-zero Fourier coefficients in the expansion of $f$ apart from $\hat{f}(\emptyset)$. We prove the theorem by induction on $\sparn(f)$. When $\sparn(f)<1$, $f$ is a constant function with $\textrm{Depth}(f)=\textrm{Rank}(f)=0$, and the inequality holds. Now assume that $\sparn(f) \ge 1$. We give a recursive construction of a decision tree for $f$. Choose a subcube $J=(S,\rho)$ of minimum co-dimension $|S|=\textrm{K}(f)$ on which $f$ becomes constant. By \cref{lemma:kill}, $|S| \le \textrm{Rank}(f)$. Start by querying all the variables indexed in $S$ in any order. When the outcome of all these queries matches $\rho$, the function becomes a constant and the tree terminates at this leaf. On any other outcome $\rho'$, the function is restricted to the subcube $J'=(S,\rho')$, and by \cref{thm:rest-sparsity}, $\sparn(f|J') \le \sparn(f)/2$.
Proceed recursively to build the decision tree of $f|J'$, which is then attached to this leaf. Each stage in the recursion makes at most $\textrm{Rank}(f)$ queries and halves the sparsity $\sparn$. After at most $1+\log \sparn(f)$ stages, the sparsity of the restricted function drops below 1 and the function becomes a constant. Thus the overall depth of the entire tree is bounded by $\textrm{Rank}(f)\cdot(1+ \log \sparn(f))$, which is at most $\textrm{Rank}(f)\cdot(1+ \log \textrm{spar}(f))$, as claimed. \end{proof} \subsection{Relation between Rank and Certificate Complexity} \label{subsec:rank-cert} Certificate complexity and decision tree complexity are known to be related as follows. \begin{proposition}[\cite{blum1987generic},\cite{hartmanis1991one},\cite{tardos1989query}, see also Theorem 14.3 in \cite{Jukna-BFCbook}]\label{prop:depth-cert} For every boolean function $f: \boolfn{n}$, $$\textrm{C}(f) \le \textrm{Depth}(f) \le \textrm{C}_0(f)\textrm{C}_1(f)$$ \end{proposition} Both these inequalities are tight; the first for the $\mbox{{\sc Or}}$ and $\mbox{{\sc And}}$ functions, and the second for the $\mbox{{\sc Tribes}}_{n,m}$ and $\mbox{{\sc Tribes}}^d_{n,m}$ functions. (For $\mbox{{\sc Tribes}}^d_{n,m}$, $\textrm{C}_0(\mbox{{\sc Tribes}}^d_{n,m}) = m$, $\textrm{C}_1(\mbox{{\sc Tribes}}^d_{n,m}) = n$ and $\textrm{Depth}(\mbox{{\sc Tribes}}^d_{n,m})=nm$, see e.g.\ Exercise 14.1 in \cite{Jukna-BFCbook}.) Since $\textrm{Rank} \le \textrm{Depth}$, the same upper bound holds for $\textrm{Rank}$ as well. But, as a bound on $\textrm{Rank}$, it is far from tight for the $\mbox{{\sc Tribes}}_{n,m}$ function. In fact, the upper bound can be improved in general. Adapting the construction given in the proof of \cref{prop:depth-cert} slightly, we show the following.
\begin{lemma}\label{lem:rank-cert} For every Boolean function $f: \boolfn{n}$, $$ \textrm{Rank}(f) \le (\textrm{C}_0(f)-1)(\textrm{C}_1(f)-1) + 1$$ Moreover, the inequality is tight as witnessed by the $\mbox{{\sc And}}$ and $\mbox{{\sc Or}}$ functions. \end{lemma} \begin{proof} The inequality holds trivially for constant functions since for such functions, $\textrm{Rank}=\textrm{C}_0=\textrm{C}_1=0$. So assume $f$ is not constant. The proof is by induction on $\textrm{C}_1(f)$. Base Case: $\textrm{C}_1(f)=1$. Let $S\subseteq [n]$ be the set of indices that are 1-certificates for some $a\in f^{-1}(1)$. We construct a decision tree by querying all the variables indexed in $S$. For each such query, one outcome immediately leads to a 1-leaf (by definition of certificate), and we continue along the other outcome. If all variables indexed in $S$ are queried without reaching a 1-leaf, the restricted function is 0 everywhere and so we create a 0-leaf. This gives a rank-1 decision tree computing $f$. For the inductive step, assume $\textrm{Rank}(g) \le (\textrm{C}_0(g)-1)(\textrm{C}_1(g)-1) + 1$ is true for all $g$ with $\textrm{C}_1(g)\le k$. Let $f$ satisfy $\textrm{C}_1(f)=k+1$. Pick an $a\in f^{-1}(0)$ and a minimum-size 0-certificate $S$ for $a$. Without loss of generality, assume that $S=\{x_1,x_2,\ldots,x_\ell\}$ for some $\ell=|S|\le \textrm{C}_0(f)$. Now, take a complete decision tree $T_0$ of depth $\ell$ on these $\ell$ variables. Each of its leaves corresponds to the unique input $c=(c_1,c_2,...,c_\ell) \in \{0,1\}^\ell$ reaching this leaf. At each such leaf, attach a minimal rank decision tree $T_c$ for the subfunction $f_c\triangleq f(c_1,c_2,...,c_\ell,x_{\ell+1},...,x_n)$. This gives a decision tree $T$ for $f$. We now analyse its rank. For at least one input $c$, we know that $f_c$ is the constant function $0$. For all leaves where $f_c$ is not 0, $\textrm{C}_0(f_c) \le \textrm{C}_0(f)$ since certificate size cannot increase by assigning some variables.
Further, $\textrm{C}_1(f_c) \le \textrm{C}_1(f)-1$; this is because of the well-known fact (see e.g.\ \cite{Jukna-BFCbook}) that every 0-certificate and every 1-certificate for $f$ share at least one variable, and $T_0$ has queried all variables from a 0-certificate. Hence, by induction, for each $c$ with $f_c\neq 0$, $\textrm{Rank}(T_c) \leq (\textrm{C}_0(f_c)-1)(k-1) +1\leq (\textrm{C}_0(f)-1)(k-1) +1$. Thus $T$ is obtained from a rank-$\ell$ tree $T_0$ (with $\ell \le \textrm{C}_0(f)$) by attaching a tree of rank 0 to at least one leaf, and attaching trees of rank at most $(\textrm{C}_0(f)-1)(k-1) +1$ to all leaves. From \cref{prop:compose_rank_dt}, we conclude that $\textrm{Rank}(f)\leq \textrm{Rank}(T)\le ((\textrm{C}_0(f)-1)(k-1) +1)+(\ell-1) \leq (\textrm{C}_0(f)-1)(\textrm{C}_1(f)-1) + 1$. \end{proof} From \cref{thm:rank-tribes}, we see that the lower bound on $\textrm{Depth}$ in \cref{prop:depth-cert} does not hold for $\textrm{Rank}$; for $m>n$, $\textrm{Rank}(\mbox{{\sc Tribes}}^d_{n,m})=n < m = \textrm{C}(\mbox{{\sc Tribes}}^d_{n,m})$. However, $\min\{\textrm{C}_0(\mbox{{\sc Tribes}}^d_{n,m}),\textrm{C}_1(\mbox{{\sc Tribes}}^d_{n,m})\} = n = \textrm{Rank}(\mbox{{\sc Tribes}}^d_{n,m})$. Further, for all the functions listed in \cref{tab:tabulation}, $\textrm{Rank}(f)$ is at least as large as $\min\{\textrm{C}_0(f),\textrm{C}_1(f)\}$. However, even this is not a lower bound in general. \begin{restatable}{lemma}{lemMinCNotLB}\label{lem:minC-not-a-lb-for-rank} $\min\{\textrm{C}_0(f),\textrm{C}_1(f)\}$ is not a lower bound on $\textrm{Rank}(f)$; for the symmetric function $f=\mbox{{\sc Maj}}_n \vee \mbox{{\sc Parity}}_n$, when $n>4$, $\textrm{Rank}(f) < \min\{\textrm{C}_0(f),\textrm{C}_1(f)\}$. \end{restatable} \begin{proof} Let $f$ be the function $\mbox{{\sc Maj}}_n \vee \mbox{{\sc Parity}}_n$, for $n>4$. Then $f(0^n)=0$ and $\textrm{C}_0(f,0^n)=n$, and $f(10^{n-1})=1$ and $\textrm{C}_1(f,10^{n-1})=n$.
Also, $f$ is symmetric, with $\textrm{Gap}(f)=n/2$, so by \cref{lem:ABDORU}, $\textrm{Rank}(f)=n/2$. \end{proof} The average certificate complexity is also not directly related to rank. \begin{lemma}\label{lem:avgC-not-a-lb-for-rank} Average certificate complexity is neither an upper bound nor a lower bound on the rank of a function; there exist functions $f$ and $g$ such that $\textrm{Rank}(f) < \textrm{C}_{avg}(f)$ and $\textrm{C}_{avg}(g) < \textrm{Rank}(g)$. \end{lemma} \begin{proof} Let $f$ be the $\mbox{{\sc And}}_n$ function for $n\ge 2$; we know that $\textrm{Rank}(f)=1$. Since the $1$-certificate has length $n$ and all minimal $0$-certificates have length $1$, the average certificate complexity of $f$ is $\textrm{C}_{avg}(f)=2^{-n}\cdot n + (1-2^{-n})\cdot 1= 1+ 2^{-n}(n-1)$. Consider $g=\mbox{{\sc Tribes}}^d_{n,2}$ for $n>2$. By \cref{thm:rank-tribes}, $\textrm{Rank}(g)=n$. Since $|g^{-1}(1)|=3^n$ and each minimal 1-certificate has length $n$, and since $|g^{-1}(0)|=4^n-3^n$ and each minimal 0-certificate has length $2$, we see that \[\textrm{C}_{avg}(g) = \left(\frac{3}{4}\right)^n \cdot n + \left[1- \left(\frac{3}{4}\right)^n\right] \cdot 2 < n =\textrm{Rank}(g).\] For a larger gap between $\textrm{Rank}$ and $\textrm{C}_{avg}$, consider the function $h=\mbox{{\sc And}}_n\circ \mbox{{\sc Parity}}_n$. From \cref{thm:rank-tribes}, $\textrm{Rank}(h) = n(n-1)+1$. There are $2^{(n-1)n}$ 1-inputs, and all the $1$-certificates have length $n^2$. Also, all minimal $0$-certificates have length $n$. Hence $\textrm{C}_{avg}(h)= 2^{-n}n^2 + (1-2^{-n})n = n + o(1)$. \end{proof} What can be shown in terms of certificate complexity and rank is the following: \begin{lemma}\label{lem:mincert-rank} For every Boolean function $f$, $\textrm{C}_{\min}(f) \le \textrm{Rank}(f)$. This is tight for $\mbox{{\sc Or}}_n$. \end{lemma} \begin{proof} Let $T$ be a rank-optimal decision tree for $f$.
Since the variables queried in any root-to-leaf path in $T$ form a $0$-certificate or a $1$-certificate for $f$, the depth of each leaf in $T$ must be at least $\textrm{C}_{\min}(f)$. By \cref{prop:prop_rank_tree}(\ref{item-leaf-depth-rank}), $\textrm{Rank}(f) = \textrm{Rank}(T)\ge \textrm{C}_{\min}(f)$. \end{proof} \cref{lem:rank-cert} and \cref{lem:mincert-rank} give these bounds sandwiching $\textrm{Rank}(f)$: \begin{theorem}\label{thm:rank-cert-bounds} $\textrm{C}_{\min}(f) \le \textrm{Rank}(f) \le (\textrm{C}_0(f)-1)(\textrm{C}_1(f)-1) + 1 \le (\textrm{C}(f)-1)^2+1$. \end{theorem} As mentioned in \cref{lem:ABDORU}, for symmetric functions the rank is completely characterised in terms of $\textrm{Gap}$ of $f$. How does $\textrm{Gap}$ relate to certificate complexity for such functions? It turns out that certificate complexity is characterised not by $\textrm{Gap}$ but by $\textrm{Gap}_{\min}$. Using this relation, the upper bound on $\textrm{Rank}(f)$ from \cref{lem:rank-cert} can be improved for symmetric functions to $\textrm{C}(f)$. \begin{lemma}\label{lem:symm-cert} For every symmetric Boolean function $f$ on $n$ variables, $\textrm{C}(f)=n-\textrm{Gap}_{\min}(f)$ and $n-\textrm{C}(f)+1 \le \textrm{Rank}(f) \le \textrm{C}(f)$. Both the inequalities on rank are tight for $\mbox{{\sc Maj}}_{2k+1}$. \end{lemma} \begin{proof} We first show $\textrm{C}(f)=n-\textrm{Gap}_{\min}(f)$. Consider any interval $[a,b]$ such that $f_{a-1}\neq f_a=f_{a+1}=...=f_b\neq f_{b+1}$. Let $x$ be any input with Hamming weight in the interval $[a,b]$. We show that $\textrm{C}(f,x)=n-(b-a)$. \begin{enumerate} \item Pick any $S\subseteq[n]$ containing exactly $a$ bit positions where $x$ is 1, and exactly $n-b$ bit positions where $x$ is 0. Any $y$ agreeing with $x$ on $S$ has Hamming weight in $[a,b]$, and hence $f(y)=f(x)$. Thus $S$ is a certificate for $x$. Hence $\textrm{C}(f,x)\le n-(b-a)$. \item Let $S\subseteq [n]$ be any certificate for $x$.
Suppose $S$ contains fewer than $a$ bit positions where $x$ is 1. Then there is an input $y$ that agrees with $x$ on $S$ and has Hamming weight exactly $a-1$. (Flip some of the 1s from $x$ that are not indexed in $S$.) So $f(y) \neq f(x)$, contradicting the fact that $S$ is a certificate for $x$. Similarly, if $S$ contains fewer than $n-b$ bit positions where $x$ is 0, then there is an input $z$ that agrees with $x$ on $S$ and has Hamming weight exactly $b+1$. So $f(z) \neq f(x)$, contradicting the fact that $S$ is a certificate for $x$. Thus any certificate for $x$ must have at least $a+(n-b)$ positions; hence $\textrm{C}(f,x) \ge n-(b-a)$. \end{enumerate} Since the argument above works for any interval $[a,b]$ where $f$ is constant, we conclude that $\textrm{C}(f) = n - \textrm{Gap}_{\min}(f)$. Next, observe that $\textrm{Gap}(f) + \textrm{Gap}_{\min}(f) \leq n-1$. Hence, \[n-\textrm{C}(f)+1 = \textrm{Gap}_{\min}(f)+1 \le n-\textrm{Gap}(f)=\textrm{Rank}(f) \le n-\textrm{Gap}_{\min}(f) = \textrm{C}(f). \] As seen from \cref{tab:tabulation}, these bounds on $\textrm{Rank}$ are tight for $\mbox{{\sc Maj}}_{2k+1}$. \end{proof} Even for the (non-symmetric) functions in \cref{thm:rank-tribes}, $\textrm{Rank}(f) \le \textrm{C}(f)$. However, this is not true in general. \iffalse The proof is deferred to \cref{sec:rank-composed}, where we develop techniques to bound the rank of composed functions. We also give, in \cref{sec:game-proofs}, a proof based on the Prover-Delayer game characterisation from \cref{thm:game-rank}. \fi \begin{lemma}\label{lem:cert-not-ub} Certificate complexity does not always bound $\textrm{Rank}$ from above; for $k\ge1$ and $n=4^k$ the function $f = (\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k}$ on $n$ variables has $\textrm{Rank}(f) = \Omega(\textrm{C}(f)^2)$. \end{lemma} This lemma shows that the relation between rank and certificate complexity (from \cref{lem:rank-cert}) is optimal up to constant factors.
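For small symmetric functions, the characterisation $\textrm{C}(f)=n-\textrm{Gap}_{\min}(f)$ of \cref{lem:symm-cert} can be checked by brute force. The sketch below (Python, for illustration only; all names are ours) searches over all subsets for a smallest certificate of each input.

```python
from itertools import combinations

def cert_size(tt, n, x):
    """C(f, x): size of a smallest certificate of input x for the function
    with truth table tt (entry j = value on the input with binary expansion j)."""
    fx = tt[x]
    for k in range(n + 1):
        for S in combinations(range(n), k):
            mask = sum(1 << (n - 1 - i) for i in S)
            # S certifies x if every y agreeing with x on S has f(y) = f(x)
            if all(tt[y] == fx for y in range(2 ** n)
                   if (y & mask) == (x & mask)):
                return k

def certificate_complexity(tt, n):
    return max(cert_size(tt, n, x) for x in range(2 ** n))

# Maj_5 (value 1 iff Hamming weight >= 3) is constant on the weight
# intervals [0,2] and [3,5], so Gap_min = 2 and the lemma predicts
# C = 5 - 2 = 3; Parity_5 has Gap_min = 0, so C = 5.
maj5 = tuple(int(bin(x).count("1") >= 3) for x in range(32))
parity5 = tuple(bin(x).count("1") % 2 for x in range(32))
```

The search always terminates, since the full variable set certifies every input.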
The proof of the lemma is deferred to the end of \cref{subsec:size-lb}, before which we develop techniques to bound the rank of composed functions. \section{Rank of Composed and Iterated Composed functions} \label{sec:rank-composed} In this section we study the rank of composed functions. For composed functions $f\circ g$, the decision tree complexity $\textrm{Depth}$ is known to behave very nicely. \begin{proposition}[\cite{Montanaro-cj14}]\label{prop:depth-compose} For Boolean functions $f,g$, $\textrm{Depth}(f\circ g)=\textrm{Depth}(f)\textrm{Depth}(g)$. \end{proposition} We want to explore how far something similar can be deduced about $\textrm{Rank}(f\circ g)$. The first thing to note is that a direct analogue in terms of $\textrm{Rank}$ alone is ruled out. \begin{lemma}\label{lem:compose-rank-example} For general Boolean functions $f$ and $g$, $\textrm{Rank}(f\circ g)$ cannot be bounded by any function of $\textrm{Rank}(f)$ and $\textrm{Rank}(g)$ alone. \end{lemma} \begin{proof} Let $f=\mbox{{\sc And}}_n$ and $g=\mbox{{\sc Or}}_n$. Then $\textrm{Rank}(f)=\textrm{Rank}(g)=1$. But $\textrm{Rank}(f\circ g) = \textrm{Rank}(\mbox{{\sc Tribes}}^d_n) = n$, as seen in \cref{thm:rank-tribes}. \end{proof} For $f\circ g$, let $T_f$, $T_g$ be decision trees for $f$, $g$ respectively. One way to construct a decision tree for $f\circ g$ is to start with $T_f$, inflate each internal node $u$ of $T_f$ into a copy of $T_g$ on the appropriate inputs, and attach the left and the right subtree of $u$ as appropriate at the leaves of this copy of $T_g$. By \cref{prop:depth-compose}, the decision tree thus obtained for $f\circ g$ is optimal for $\textrm{Depth}$ if one starts with depth-optimal trees $T_f$ and $T_g$ for $f$ and $g$ respectively. In terms of rank, we can also show that the rank of the decision tree so constructed is bounded above by $\textrm{Depth}(T_f)\textrm{Rank}(T_g)$; choosing a depth-optimal $T_f$, this bound becomes $\textrm{Depth}_w(f,[r,r,\ldots , r])$, where $r=\textrm{Rank}(T_g)$.
(This is the construction used in the proofs of \cref{lem:rank-tribes-ub,lem:and-parity-ub}, where further properties of the $\mbox{{\sc Parity}}$ function are used to show that the resulting tree's rank is even smaller than $\textrm{Depth}(f)\textrm{Rank}(g)$.) In fact, we show below (\cref{thm:compose-rank-ub}) that this holds more generally, when different functions are used in the composition. While this is a relatively straightforward generalisation here, it is necessary to consider such compositions for the lower bound we establish further on in this section. \begin{restatable}{theorem}{thmComposeRankUb}\label{thm:compose-rank-ub} For non-constant boolean functions $g_1, \ldots , g_n$ with $\textrm{Rank}(g_i)=r_i$, and for $n$-variate non-constant boolean function $f$, $$\textrm{Rank}(f\circ(g_1,g_2,...,g_n)) \le \textrm{Depth}_w(f,[r_1,r_2,...,r_n] ).$$ \end{restatable} \begin{proof} Let $h$ denote the function $f\circ(g_1,g_2,...,g_n)$. For $i\in[n]$, let $m_i$ be the arity of $g_i$. We call $x_{i,1}, x_{i,2}, \ldots , x_{i,m_i}$ the $i$th block of variables of $h$; $g_i$ is evaluated on this block. Let $T_f$ be any decision tree for $f$. For each $i\in [n]$, let $T_{g_i}$ be a rank-optimal tree for $g_i$. Consider the following recursive construction of a decision tree $T_{h}$ for $h$. \begin{enumerate} \item Base Case: $\textrm{Depth}(T_f)=0$. Then $f$ and $h$ are the same constant function, so set $T_h=T_f$. \item Recursion Step: $\textrm{Depth}(T_f)\ge 1$. Let $x_{i}$ be the variable queried at the root node of $T_f$, and let $T_0$ and $T_1$ be the left and the right subtree of $T_f$, computing functions $f_0,f_1$ respectively. For notational convenience, we still view $f_0,f_1$ as functions on $n$ variables, although they do not depend on their $i$th variable. Recursively construct, for $b\in\{0,1\}$, the trees $T'_b$ computing $f_b\circ(g_1,\ldots,g_{i-1},b,g_{i+1},\ldots,g_n)$ on the variables $x_{k,\ell}$ for $k\neq i$.
Starting with the tree $T_{g_i}$ on the $i$th block of variables, attach tree $T'_b$ to each leaf labeled $b$ to obtain the tree $T_h$. \end{enumerate} From the construction, it is obvious that $T_{h}$ is a decision tree for $f\circ (g_1,\ldots ,g_n)$. It remains to analyse the rank of $T_{h}$. Proceeding by induction on $\textrm{Depth}(T_f)$, we show that $\textrm{Rank}(T_{h}) \le D_w(T_f,[r_1,r_2,...,r_n] )$. \begin{enumerate} \item Base Case: $\textrm{Depth}(T_f)=0$. Then $T_h=T_f$, so $\textrm{Rank}(T_{h}) = D_w(T_f,[r_1,r_2,...,r_n] )=0$. \item Induction: $\textrm{Depth}(T_f) \ge 1$. \begin{align*} \textrm{Rank}(T_{h}) &\le \textrm{Rank}(T_{g_i}) + \max\{\textrm{Rank}(T_0'),\textrm{Rank}(T_1')\} \quad (\textrm{by \cref{prop:compose_rank_dt}})\\ &= r_i + \max_{b\in\{0,1\}}\{\textrm{Rank}(T_b')\} \\ &\le r_i + \max_{b\in\{0,1\}}\{D_w(T_b,[r_1,r_2,...,r_n])\} \quad (\textrm{by induction})\\ &= D_w(T_f,[r_1,r_2,...,r_n] ) \quad \text{by definition of $D_w$} \end{align*} \end{enumerate} Picking $T_f$ to be a tree for $f$ that is optimal with respect to weights $[r_1,r_2,...,r_n]$, we obtain $\textrm{Rank}(h) \le \textrm{Rank}(T_{h}) \le D_w(T_f,[r_1,r_2,...,r_n] ) = D_w(f,[r_1,r_2,...,r_n] )$. \end{proof} The really interesting question, however, is whether we can show a good lower bound for the rank of a composed function. This will help us understand how good the upper bound in \cref{thm:compose-rank-ub} is. To begin with, note that for non-constant Boolean functions $f,g$, both $f$ and $g$ are sub-functions of $f\circ g$. Hence \cref{prop:rank_subfn} implies the following. \begin{proposition}\label{prop:compose-rank-max-lb} For non-constant boolean functions $f,g$, $$\textrm{Rank}(f\circ g) \ge \max\{\textrm{Rank}(f),\textrm{Rank}(g)\}.$$ \end{proposition} A better lower bound in terms of weighted depth complexity of $f$ is given below. This generalises the lower bounds from \cref{lem:rank-tribes-lb,lem:and-parity-lb}.
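The weighted depth $\textrm{Depth}_w$ appearing in these bounds admits a direct recursive computation: it is $0$ for constant functions, and otherwise $\min_i\big(w_i+\max_{b}\textrm{Depth}_w(f|_{x_i=b},\cdot)\big)$ over all variables $i$. The sketch below (Python, for illustration only; all names are ours) implements this recursion on truth tables, which may help in evaluating the bound for small examples.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def depth_w(tt, weights):
    """Weighted depth Depth_w(f, [w_1, ..., w_n]); tt is the truth table of f
    (entry j = value on the input whose binary expansion is j), with one
    weight per variable (variable i is bit n-1-i of the index)."""
    if len(set(tt)) == 1:
        return 0                      # constant functions need no queries
    n = len(weights)
    costs = []
    for i in range(n):                # try querying each variable first
        bit = n - 1 - i
        lo = tuple(v for j, v in enumerate(tt) if not (j >> bit) & 1)
        hi = tuple(v for j, v in enumerate(tt) if (j >> bit) & 1)
        rest = weights[:i] + weights[i + 1:]
        costs.append(weights[i] + max(depth_w(lo, rest), depth_w(hi, rest)))
    return min(costs)

# AND_3 is evasive, so its weighted depth equals the total weight;
# Maj_3 with unit weights gives its ordinary depth, 3.
and3 = (0, 0, 0, 0, 0, 0, 0, 1)
maj3 = tuple(int(bin(x).count("1") >= 2) for x in range(8))
```

With unit weights this is just $\textrm{Depth}(f)$; for instance, $\textrm{Depth}_w(\mbox{{\sc And}}_3,[1,2,3])=6$.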
The proofs of those lemmas crucially used nice symmetry properties of the inner function, whereas the bound below applies for any non-constant inner function. It is significantly weaker than the bound from \cref{lem:rank-tribes-lb} but matches that from \cref{lem:and-parity-lb}. \begin{restatable}{theorem}{thmComposeRankLb}\label{thm:compose-rank-lb} For non-constant boolean functions $g_1, \ldots , g_n$ with $\textrm{Rank}(g_i)=r_i$, and for $n$-variate non-constant boolean function $f$, \begin{align*} \textrm{Rank}(f\circ(g_1,g_2,...,g_n)) & \ge \textrm{Depth}_w(f,[r_1-1,r_2-1,...,r_n-1] ) + 1 \\ & \ge \textrm{Depth}_w(f,[r_1,r_2,...,r_n] ) - (n-1). \end{align*} \end{restatable} \begin{proof} The second inequality above is straightforward: let $T$ be a decision tree for $f$ that is optimal with respect to weights $r_1-1,\ldots ,r_n-1$. Since $T$ can be assumed to be reduced, repeated application of \cref{fact:wtd-dec-tree} shows that the depth of $T$ with respect to weights $r_1,\ldots ,r_n$ increases by at most $n$. Thus $\textrm{Depth}_w(f,[r_1,\ldots,r_n]) \le \textrm{Depth}_w(T,[r_1,\ldots,r_n]) \le \textrm{Depth}_w(T,[r_1-1,\ldots,r_n-1])+n=\textrm{Depth}_w(f,[r_1-1,\ldots,r_n-1])+n$, giving the claimed inequality. We now turn our attention to the first inequality, which is not so straightforward. We prove it by induction on $n$. Let $h$ denote the function $f\circ(g_1,g_2,...,g_n)$. For $i\in[n]$, let $m_i$ be the arity of $g_i$. We call $x_{i,1}, x_{i,2}, \ldots , x_{i,m_i}$ the $i$th block of variables of $h$; $g_i$ is evaluated on this block. In the base case, $n=1$. Since $f$ is non-constant, $f$ can either be $x$ or $\neg x$; accordingly, $h$ is either $g_1$ or $\neg g_1$. So $D_w(f,[r_1-1])=r_1-1$ and $\textrm{Rank}(h)=\textrm{Rank}(g_1)=r_1$, and the inequality holds. For the inductive step, when $n>1$, we proceed by induction on $M=\sum_{i=1}^n m_i$. In the base case, $M=n$, and each $m_i$ is equal to $1$. 
Since all $g_i$'s are non-constant, $r_i=1$ for all $i$. So $D_w(f,[r_1-1,r_2-1,...,r_n-1])+1 = D_w(f,[0,0,...,0])+1=1$. Since all $r_i$'s are $1$, each $g_i$ is either $x_{i,1}$ or $\neg x_{i,1}$. Thus $h$ is the same as $f$ up to renaming of literals. Hence $\textrm{Rank}(h)=\textrm{Rank}(f)\ge 1$. For the inductive step, $M>n>1$. Take a rank-optimal decision tree $T_h$ for $h$. We want to show that $\textrm{Depth}_w(f,[r_1-1,\ldots, r_n-1]) \le \textrm{Rank}(T_h)-1$. Without loss of generality, let $x_{1,1}$ be the variable queried at the root. Let $T_0$ and $T_1$ be the left and the right subtree of $T_h$. For $b\in\{0,1\}$, let $g_1^b$ be the subfunction of $g_1$ when $x_{1,1}$ is set to $b$. Note that $T_b$ computes $h_b\triangleq f\circ(g_1^b,g_2,...,g_n)$, a function on $M-1$ variables. We would like to use induction to deduce information about $\textrm{Rank}(T_b)$. However, $g_1^b$ may be a constant function, and then induction does not apply. So we do a case analysis on whether or not $g_1^0$ and $g_1^1$ are constant functions; this case analysis is lengthy and tedious but most cases are straightforward. \begin{itemize} \item Case 1: Both $g_1^0$ and $g_1^1$ are constant functions. Since $g_1$ is non-constant, $g_1^0 \neq g_1^1$, and $r_1=\textrm{Rank}(g_1)=1$. Assume that $g_1^0=0$ and $g_1^1=1$; the argument for the other case is identical. For $b\in\{0,1\}$, let $f_b$ be the function $f(b,x_2,\ldots,x_n)$; then $h_b=f_b\circ(g_2,\ldots ,g_n)$. View $f_0,f_1$ as functions on $n-1$ variables. \begin{itemize} \item Case 1a: Both $f_0$ and $f_1$ are constant functions. Then $f$ is either $x_1$ or $\neg x_1$, so $\textrm{Depth}_w(f,[r_1-1,r_2-1,...,r_n-1]) = \textrm{Depth}_w(f,[0,r_2-1,...,r_n-1]) = 0$. Also, in this case, $h$ is either $x_{1,1}$ or $\neg x_{1,1}$, so $\textrm{Rank}(h)=1$. Hence the inequality holds. \item Case 1b: Exactly one of $f_0$ and $f_1$ is a constant function; without loss of generality, let $f_0$ be a constant function.
First, observe that for any weights $w_2,\ldots,w_n$, $D_w(f,[0,w_2,...,w_n]) \le D_w(f_1,[w_2,...,w_n])$: we can obtain a decision tree for $f$ witnessing this by first querying $x_1$, making the $x_1=0$ child a leaf labeled $f_0$, and attaching the optimal tree for $f_1$ on the $x_1=1$ branch. Second, note that since $f_1$ and all $g_i$ are non-constant, so is $h_1$. Now \begin{align*} \textrm{Rank}(h)&=\textrm{Rank}(h_1) && \text{since $\textrm{Rank}(h_0)=0$} \\ &\ge D_w(f_1,[r_2-1,...,r_n-1])+1 && \text{by induction hypothesis on $n$}\\ &\ge D_w(f,[0,r_2-1,...,r_n-1])+1 && \text{by first observation above}\\ &= D_w(f,[r_1-1,r_2-1,...,r_n-1])+1 && \text{since $r_1=1$} \end{align*} \item Case 1c: Both $f_0$ and $f_1$ are non-constant functions. \begin{align*} \textrm{Rank}(h) &\ge \max(\textrm{Rank}(h_0),\textrm{Rank}(h_1))\\ &\ge \max_{b\in\{0,1\}}\{D_w(f_b,[r_2-1,...,r_n-1])\} + 1 && \text{by induction hypothesis on $n$}\\ &\ge D_w(f,[0,r_2-1,...,r_n-1]) + 1 && \text{by def.\ of weighted depth}\\ &&& \text{of a tree querying $x_1$ first} \\ &= D_w(f,[r_1-1,r_2-1,...,r_n-1]) + 1 && \text{since $r_1=1$} \end{align*} \end{itemize} \item Case 2: One of $g_1^0$ and $g_1^1$ is a constant function; assume without loss of generality that $g_1^0$ is constant. In this case, we can conclude that $\textrm{Rank}(g_1) = \textrm{Rank}(g_1^1)$: $\textrm{Rank}(g_1^1) \le \textrm{Rank}(g_1)$ by \cref{prop:rank_subfn}, and $\textrm{Rank}(g_1) \le \textrm{Rank}(g_1^1)$ as witnessed by a decision tree for $g_1$ that queries $x_{1,1}$ first, sets the $x_{1,1}=0$ branch to a leaf labeled $g_1^0$, and attaches an optimal tree for $g_1^1$ on the other branch. Now \begin{align*} \textrm{Rank}(h)&\ge \textrm{Rank}(h_1)\\ &\ge D_w(f,[\textrm{Rank}(g_1^1)-1,r_2-1,...,r_n-1])+1 && \text{by induction on $M$}\\ &= D_w(f,[r_1-1,r_2-1,...,r_n-1])+1 && \text{since $\textrm{Rank}(g_1^1)=\textrm{Rank}(g_1)$} \end{align*} \item Case 3: Both $g_1^0$ and $g_1^1$ are non-constant functions.
Let $r_1^b=\textrm{Rank}(g_1^b)\ge 1$. A decision tree for $g_1$ that queries $x_{1,1}$ first and then uses optimal trees for $g_1^0$ and $g_1^1$ has rank $R \ge r_1$ and witnesses that $1+\max\{r_1^0,r_1^1\} \ge R \ge r_1$. (Note that $R$ may be more than $r_1$, since a rank-optimal tree for $g_1$ may not query $x_{1,1}$ first.) \begin{itemize} \item Case 3a: $\max_b\{r_1^b\} = r_1-1$. Then $R=1+\max\{r_1^0,r_1^1\}$, which can only happen if $r_1^0=r_1^1$, and hence $r_1^0=r_1^1=r_1-1$. We can further conclude that $r_1 \ge 2$. Indeed, if $r_1=1$, then $r_1-1=r_1^0=r_1^1=0$, contradicting the fact that we are in Case 3. For $b\in\{0,1\}$, \begin{align*} \textrm{Rank}(h_b) &= \textrm{Rank}(f\circ(g_1^b,g_2,\ldots ,g_n))\\ &\ge \textrm{Depth}_w(f,[r_1^b-1, r_2-1, \ldots, r_n-1])+1 \quad \text{by induction on $M$} \\ &= \textrm{Depth}_w(f,[r_1-2,r_2-1,\ldots,r_n-1])+1 \quad \text{since $r_1-1= r_1^b$}.\\ \text{Hence~} \textrm{Rank}(h) &\ge 1 + \min_b \textrm{Rank}(h_b) \\ &\ge \textrm{Depth}_w(f,[r_1-2,r_2-1,\ldots,r_n-1])+2 \quad \text{derivation above} \\ &\ge \textrm{Depth}_w(f,[r_1-1, r_2-1, \ldots, r_n-1])+1 \quad \text{by \cref{fact:wtd-dec-tree}} \end{align*} \item Case 3b: $\max_b\{r_1^b\} > r_1-1$. So $\max_b\{r_1^b\} \ge r_1$. \begin{align*} \textrm{Rank}(h) &\ge \max_b \textrm{Rank}(h_b) \\ &\ge \max_b \textrm{Depth}_w(f,[r_1^b-1,r_2-1,\ldots,r_n-1])+1 \quad \text{by induction on $M$} \\ &\ge \textrm{Depth}_w(f,[r_1-1,r_2-1,\ldots,r_n-1])+1 \quad \text{since $\max_b\{r_1^b\} \ge r_1$} \end{align*} \end{itemize} \end{itemize} This completes the inductive step for $M>n>1$ and completes the entire proof. \end{proof} From \cref{thm:rank-tribes,thm:compose-rank-ub,thm:compose-rank-lb}, we obtain the following: \begin{theorem}\label{thm:compose-rank-bounds} For non-constant boolean functions $f,g$, \[ \textrm{Depth}(f) (\textrm{Rank}(g)-1) +1 \le \textrm{Rank}(f\circ g) \le \textrm{Depth}(f) \textrm{Rank}(g). 
\] Both inequalities are tight; the first for $\mbox{{\sc And}}_n \circ \mbox{{\sc Parity}}_m$ and the second for $\mbox{{\sc Tribes}}_n$ and $\mbox{{\sc Tribes}}^d_n$. \end{theorem} It is worth noting that in the above bounds, the role of $\textrm{Rank}$ and $\textrm{Depth}$ cannot be exchanged. With $f=\mbox{{\sc And}}_n$ and $g=\mbox{{\sc Parity}}_n$, $\textrm{Rank}(f)\textrm{Depth}(g)=n < n(n-1)+1 \le \textrm{Rank}(f\circ g)$, and $\textrm{Rank}(g\circ f) \le n < n(n-1)+1 = \textrm{Rank}(g)(\textrm{Depth}(f)-1)+1 = \textrm{Depth}(f)(\textrm{Rank}(g)-1)+1$. Since any non-constant symmetric function is evasive (\cref{prop:symm_evasive}), from \cref{thm:compose-rank-ub,thm:compose-rank-lb}, we obtain the following: \begin{corollary} For non-constant boolean functions $g_1, \ldots , g_n$ with $\textrm{Rank}(g_i)=r_i$, and for $n$-variate symmetric non-constant boolean function $f$, $$\sum_i r_i - (n-1) \le \textrm{Rank}(f\circ(g_1,g_2,...,g_n))\le \sum_i r_i .$$ \end{corollary} \iffalse Using \cref{thm:compose-rank-bounds}, we can now complete the proof of \cref{lem:cert-not-ub}. \begin{proof}(of \cref{lem:cert-not-ub}) Consider the composed function $f=\mbox{{\sc Maj}}_{2k+1}\circ\mbox{{\sc Maj}}_{2k+1}$. Note that from the lower bound in \cref{thm:compose-rank-bounds}, and the entries in \cref{tab:tabulation}, $\textrm{Rank}(\mbox{{\sc Maj}}_{2k+1}\circ\mbox{{\sc Maj}}_{2k+1}) \ge (2k+1)k+1$. On the other hand, it is straightforward to verify that $\textrm{C}(f)=(k+1)^2$. Thus for $k> 1$, $\textrm{Rank}(f) > \textrm{C}(f)$. \end{proof} \fi For iterated composed functions, we obtain the following corollary. \begin{corollary}\label{corr:iterated-rank} For $k\ge 1$ and non-constant boolean function $f$, \[ \textrm{Depth}(f)^{k-1} (\textrm{Rank}(f)-1) +1 \le \textrm{Rank}(f^{\otimes k}) \le \textrm{Depth}(f)^{k-1} \textrm{Rank}(f). \] \end{corollary} \begin{proof} The result follows from \cref{thm:compose-rank-bounds} by applying induction on $k$.
The base case, $k=1$, is straightforward. For the induction step, $k>1$, applying the recursive definition of iterated composed functions, we have \begin{align*} \textrm{Rank}(f^{\otimes k}) &= \textrm{Rank}(f\circ f^{\otimes (k-1)})\\ &\ge \textrm{Depth}(f)(\textrm{Rank}(f^{\otimes (k-1)}) -1)+1 \quad \text{by \cref{thm:compose-rank-bounds}} \\ &\ge \textrm{Depth}(f)(\textrm{Depth}(f)^{k-2}(\textrm{Rank}(f)-1))+1 \quad \text{by induction on $k$}\\ &= \textrm{Depth}(f)^{k-1}(\textrm{Rank}(f)-1)+1. \\ \textrm{Rank}(f^{\otimes k}) &= \textrm{Rank}(f\circ f^{\otimes (k-1)})\\ &\le \textrm{Depth}(f)\textrm{Rank}(f^{\otimes (k-1)}) \quad \text{by \cref{thm:compose-rank-bounds}} \\ &\le \textrm{Depth}(f)(\textrm{Depth}(f)^{k-2}\textrm{Rank}(f)) \quad \text{by induction on $k$}\\ &= \textrm{Depth}(f)^{k-1}\textrm{Rank}(f). \end{align*} \end{proof} \section{Applications}\label{sec:application} In this section, we give some applications of our results and methods. We first show how to obtain tight lower bounds on $\log \textrm{DTSize}$ for composed functions using the rank lower bound from \cref{thm:compose-rank-bounds}. Next, we relate rank to query complexity in more general decision trees, namely $\mbox{{\sc Conj}}$ decision trees, and show that rank (for ordinary decision trees) characterizes query complexity in this model up to $\log n$ factors. \subsection{Tight lower bounds for $\log\textrm{DTSize}$ for Composed functions} \label{subsec:size-lb} It was shown in \cite{EH-IC1989} that every boolean function $f$ in $n$ variables has a decision tree of size at most $\exp(O(\log n \log^2 N(f)))$, where $N(f)$ is the total number of monomials in the minimal $\textrm{DNF}$ for $f$ and $\neg f$. Later, in \cite{jukna1999p}, this relation was proved to be optimal up to a $\log n$ factor.
To prove this, the authors of \cite{jukna1999p} showed that iterated $\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2$ and iterated $\mbox{{\sc Maj}}_3$ on $n=4^k$ and $n=3^k$ variables require decision trees of size $\exp(\Omega(\log^{\log_2 3} N))$ and $\exp(\Omega(\log^{2} N))$ respectively. It is easy to show that $N((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k})$ and $N(\mbox{{\sc Maj}}_3^{\otimes k})$ are $\exp(O(n^{1/\log_2 3}))$ and $\exp(O(n^{1/2}))$ respectively. So showing optimality essentially boiled down to showing that the decision tree size of iterated $\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2$ and iterated $\mbox{{\sc Maj}}_3$ on $n$ variables is $\exp(\Omega(n))$. This was established in \cite{jukna1999p} using spectral methods. We recover these size lower bounds using our rank lower bound for composed functions. \begin{corollary}\label{corr:examples} For $k\ge 1$ and $n=3^k$, $$\log \textrm{DTSize}(\mbox{{\sc Maj}}_3^{\otimes k}) \ge \textrm{Rank}(\mbox{{\sc Maj}}_3^{\otimes k}) \ge n/3 +1.$$ For $k\ge 1$ and $n=4^k$, $$\log \textrm{DTSize}((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k}) \ge \textrm{Rank}((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k}) \ge n/4+1.$$ \end{corollary} \begin{proof} The $\mbox{{\sc Maj}}_3$ function has depth $3$ and rank $2$. Applying \cref{corr:iterated-rank}, we see that $\textrm{Rank}(\mbox{{\sc Maj}}_3^{\otimes k})\ge 3^{k-1}+1=n/3 +1$. Since rank is a lower bound on $\log \textrm{DTSize}$ (\cref{prop:rank_size}), we get the desired size lower bound for iterated $\mbox{{\sc Maj}}_3$. The $\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2$ function has depth $4$ and rank $2$. Again applying \cref{corr:iterated-rank}, we get $\log \textrm{DTSize}((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k})\ge \textrm{Rank}((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k})\ge 4^{k-1}+1=n/4+1$, giving the size lower bound for iterated $\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2$.
\end{proof} The size lower bound for iterated $\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2$ on $n$ variables from \cite{jukna1999p}, in conjunction with the rank-size relation from \cref{prop:rank_size}, implies that the rank of the iterated function is $\Omega(n)$. \cref{corr:examples} demonstrates that these tight rank and size lower bounds can be recovered simultaneously using \cref{corr:iterated-rank}. Recently (after the preliminary version of our work appeared), in \cite{cornelissen2022improved}, the rank of the iterated $\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2$ function on $n$ variables was revisited, in the context of separating rank from randomised rank. Using the Prover-Delayer game-based characterisation of rank from \cref{thm:game-rank}, it was shown there that this function has rank exactly $(n+2)/3$. While an $\Omega(n)$ bound is now easy to obtain as in \cref{corr:examples}, getting the exact constants required significantly more work. The arguments given in \cite{jukna1999p} and \cite{cornelissen2022improved} are tailored to the specific functions being considered, and do not work in general. On the other hand, our rank lower bound from \cref{thm:compose-rank-bounds} implies size lower bounds for composed functions in general. For completeness, we state our rank lower bound of \cref{thm:compose-rank-bounds} in terms of size. \begin{corollary} For $k\ge 1$ and non-constant boolean functions $f$ and $g$, \[ \log \textrm{DTSize}(f\circ g) \ge \textrm{Depth}(f) (\textrm{Rank}(g)-1) +1 . \] \[ \log \textrm{DTSize}(f^{\otimes k}) \ge \textrm{Depth}(f)^{k-1} (\textrm{Rank}(f)-1) +1. \] \end{corollary} Using \cref{corr:examples}, we can now complete the proof of \cref{lem:cert-not-ub}. \begin{proof}(of \cref{lem:cert-not-ub}) From \cref{corr:examples}, we know that $\textrm{Rank}((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k}) \ge n/4+1$.
It is easy to see that $\textrm{C}_0((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2))=\textrm{C}_1((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2))=2$, and that for $k>1$, $\textrm{C}_0((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k}) = 2 \textrm{C}_0((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes (k-1)})$ and $\textrm{C}_1((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k}) = 2 \textrm{C}_1((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes (k-1)})$. Thus $\textrm{C}_0((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k})=\textrm{C}_1((\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k})= 2^k=\sqrt{n}$. Hence, for $f=(\mbox{{\sc And}}_2\circ \mbox{{\sc Or}}_2)^{\otimes k}$, $\textrm{Rank}(f)\ge \frac{\textrm{C}_0(f)}{2}\cdot\frac{\textrm{C}_1(f)}{2} + 1=\textrm{C}(f)^2/4 +1$. \end{proof} \subsection{$\mbox{{\sc Conj}}$ decision trees} In this section, we consider a generalization of the ordinary decision tree model, namely $\mbox{{\sc Conj}}$ decision trees. In the $\mbox{{\sc Conj}}$ decision tree model, each query is a conjunction of literals, where a literal is a variable or its negation. (In \cite{KS-JCSS04}, such a tree where each conjunction involves at most $k$ literals is called a $k$-decision tree; thus in that notation these are $n$-decision trees. A 1-decision tree is a simple decision tree.) A model essentially equivalent to $\mbox{{\sc Conj}}$ decision trees, $(\mbox{{\sc And}},\mbox{{\sc Or}})$-decision trees, was investigated in \cite{benasher1995decision} for determining the complexity of $\mbox{{\sc Thr}}_{n}^{k}$ functions. $(\mbox{{\sc And}},\mbox{{\sc Or}})$-decision trees query either an $\mbox{{\sc And}}$ of a subset of variables or an $\mbox{{\sc Or}}$ of a subset of variables.
It was noted in \cite{benasher1995decision} that the $(\mbox{{\sc And}},\mbox{{\sc Or}})$ query model is related to computation using Ethernet channels, and a tight query lower bound was shown in this model for $\mbox{{\sc Thr}}_{n}^{k}$ functions for all $k\ge 1$. It is easy to see that the $(\mbox{{\sc And}},\mbox{{\sc Or}})$-query model is equivalent to the $\mbox{{\sc Conj}}$ query model up to a factor of $2$. Let $\textrm{Depth}_{\bar{\wedge}}(f)$ and $\textrm{Depth}_{\wedge,\vee}(f)$ denote the query complexity of a function $f$ in the $\mbox{{\sc Conj}}$ and $(\mbox{{\sc And}},\mbox{{\sc Or}})$ query models respectively. Then \begin{proposition} For every boolean function $f$, \[ \textrm{Depth}_{\bar{\wedge}}(f)\le \textrm{Depth}_{\wedge,\vee}(f) \le 2\textrm{Depth}_{\bar{\wedge}}(f).\] \end{proposition} Such a connection is not obvious for the rank of $\mbox{{\sc Conj}}$ and $(\mbox{{\sc And}},\mbox{{\sc Or}})$ decision trees. Recently, in \cite{knop2021log}, a monotone version of $\mbox{{\sc Conj}}$ decision trees called $\mbox{{\sc And}}$ decision trees was studied, where queries are restricted to $\mbox{{\sc And}}$s of variables, not literals. To emphasize the difference, we refer to these trees as monotone $\mbox{{\sc And}}$ trees. Understanding monotone $\mbox{{\sc And}}$ decision trees in \cite{knop2021log} led to the resolution of the log-rank conjecture for the class of $\mbox{{\sc And}}$ functions (any function composed with the 2-bit $\mbox{{\sc And}}$ function), up to a $\log n$ factor. As remarked in \cite{knop2021guest}, understanding these more general decision tree models has shed new light on central topics in communication complexity, including restricted cases of the log-rank conjecture and the query-to-communication lifting methodology. In this section, we show that simple decision tree rank characterizes the query complexity in the $\mbox{{\sc Conj}}$ decision tree model, up to a $\log n$ factor.
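On very small functions, the relation between rank, $\mbox{{\sc Conj}}$ query complexity, and ordinary depth can be checked exhaustively (note that single-variable queries are conjunctions, so $\textrm{Depth}_{\bar{\wedge}}(f)\le\textrm{Depth}(f)$ always). The following Python sketch, which is our illustration and not part of the formal development, computes the three measures for $\mbox{{\sc Maj}}_3$ by brute force; a subfunction is represented by the set of inputs still consistent with the answers received so far:

```python
from functools import lru_cache
from itertools import combinations, product

N = 3
def f(x):                                    # Maj_3 on 3 bits
    return 1 if sum(x) >= 2 else 0

INPUTS = tuple(product((0, 1), repeat=N))

# A conjunction query is a tuple of (index, sign) literals;
# an input satisfies it iff x[i] == sign for every literal.
QUERIES = [tuple(zip(idx, signs))
           for r in range(1, N + 1)
           for idx in combinations(range(N), r)
           for signs in product((0, 1), repeat=r)]

def satisfies(x, q):
    return all(x[i] == s for i, s in q)

@lru_cache(maxsize=None)
def conj_depth(state):                       # Depth in the Conj query model
    if len({f(x) for x in state}) == 1:
        return 0
    best = len(INPUTS)
    for q in QUERIES:
        yes = frozenset(x for x in state if satisfies(x, q))
        no = state - yes
        if yes and no:                       # skip queries that do not split
            best = min(best, 1 + max(conj_depth(yes), conj_depth(no)))
    return best

@lru_cache(maxsize=None)
def rank(state):                             # ordinary decision-tree rank
    if len({f(x) for x in state}) == 1:
        return 0
    best = len(INPUTS)
    for i in range(N):
        parts = [frozenset(x for x in state if x[i] == b) for b in (0, 1)]
        if all(parts):
            r0, r1 = rank(parts[0]), rank(parts[1])
            best = min(best, r0 + 1 if r0 == r1 else max(r0, r1))
    return best

@lru_cache(maxsize=None)
def depth(state):                            # ordinary decision-tree depth
    if len({f(x) for x in state}) == 1:
        return 0
    best = len(INPUTS)
    for i in range(N):
        parts = [frozenset(x for x in state if x[i] == b) for b in (0, 1)]
        if all(parts):
            best = min(best, 1 + max(depth(parts[0]), depth(parts[1])))
    return best

FULL = frozenset(INPUTS)
```

The search over all conjunction queries is exponential and only feasible at toy sizes; it is intended purely as a sanity check that $\textrm{Rank}(f) \le \textrm{Depth}_{\bar{\wedge}}(f) \le \textrm{Depth}(f)$, with $\textrm{Rank}(\mbox{{\sc Maj}}_3)=2$ and $\textrm{Depth}(\mbox{{\sc Maj}}_3)=3$.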
Formally, \begin{theorem} \label{thm:simple-conj-relation} For every boolean function $f$, \[ \textrm{Rank}(f) \le \textrm{Depth}_{\bar{\wedge}}(f)\le \log \textrm{DTSize}(f)\le \textrm{Rank}(f)\log \left(\frac{e n}{\textrm{Rank}(f)}\right).\] Consequently, if $\textrm{Rank}(f)=\Theta(n)$, then so is $\textrm{Depth}_{\bar{\wedge}}(f)$. \end{theorem} \begin{proof} First, we show that $\textrm{Rank}(f) \le \textrm{Depth}_{\bar{\wedge}}(f)$, via a straightforward construction and its analysis. Let $T_f$ be a depth-optimal $\mbox{{\sc Conj}}$ decision tree for $f$ of depth $d$. We give a recursive construction of an ordinary decision tree $T$ for $f$ of rank at most $d$. In the base case, when $\textrm{Depth}(T_f)=0$, set $T=T_f$. In the recursion step, $\textrm{Depth}(T_f)\ge 1$. Let $Q$ be the literal-conjunction queried at the root node of $T_f$, and let $T_0$ and $T_1$ be the left and right subtrees of $T_f$, computing $f_0$ and $f_1$ respectively. Recursively construct ordinary decision trees $T_0'$ and $T_1'$ for $f_0$ and $f_1$. Let $T_Q$ be the ordinary decision tree obtained by querying the variables in $Q$ one by one to evaluate the query $Q$. Note that $T_Q$ evaluates the $\mbox{{\sc And}}$ function on the literals in $Q$; hence it has rank $1$, and has exactly one leaf labelled $1$. Attach $T_0'$ to each leaf labelled $0$ in $T_Q$, and $T_1'$ to the unique leaf labelled $1$ in $T_Q$, to obtain $T$. From the construction, it is clear that $T$ evaluates $f$. To analyse the rank of $T$, proceed by induction on $\textrm{Depth}(T_f)$. \begin{enumerate} \item Base Case: $\textrm{Depth}(T_f)=0$. Trivially true as $T=T_f$ with rank $0$. \item Induction: $\textrm{Depth}(T_f)\ge 1$.
\begin{align*} \textrm{Rank}(T) &\le \textrm{Rank}(T_{Q}) + \max\{\textrm{Rank}(T_0'),\textrm{Rank}(T_1')\} \quad (\textrm{by \cref{prop:compose_rank_dt}})\\ &= 1 + \max_{b\in\{0,1\}}\{\textrm{Rank}(T_b')\} \\ &\le 1 + \max_{b\in\{0,1\}}\{\textrm{Depth}_{\bar{\wedge}}(f_b)\} \quad (\textrm{by induction})\\ &\le 1 + (\textrm{Depth}(T_f)-1)= \textrm{Depth}(T_f). \end{align*} \end{enumerate} Next, we show that $\textrm{Depth}_{\bar{\wedge}}(f)\le \log \textrm{DTSize}(f)$. The main idea is that an ordinary decision tree of size $s$ can be balanced using $\mbox{{\sc Conj}}$ queries into a $\mbox{{\sc Conj}}$ decision tree of depth $O(\log s)$. Let $T_f$ be a size-optimal simple tree for $f$ of size $s$. Associate with each node $v$ of $T_f$ the subcube $J_v$ containing all the inputs that reach node $v$. The root node is associated with the whole cube $\boolset{n}$. For a node $v$, $J_v$ is defined by the variables queried on the path leading to the node $v$. The recursive construction of a $\mbox{{\sc Conj}}$ decision tree $T$ of depth at most $2 \log_{3/2} s$ proceeds as follows. If $s=1$, set $T=T_f$. Otherwise, in the recursion step, $s> 1$. Find a node $v$ in $T_f$ such that the number of leaves in the subtree rooted at $v$, denoted by $T_v$, is in the range $[s/3,2s/3)$. (Such a node necessarily exists, and can be found by starting at the root and traversing down to the child node with more leaves until the number of leaves in the subtree rooted at the current node satisfies the condition.) Let $J_v=(S,\rho)$ be the subcube associated with $v$, and let $Q$ be the $\mbox{{\sc Conj}}$ query testing membership in $J_v$; $Q=(\bigwedge_{i\in S: \rho(i)=1} x_i)( \bigwedge_{i\in S: \rho(i)=0} \neg x_i)$. Note that $v$ cannot be the root node of $T_f$. Let $u$ be the sibling of $v$ in $T_f$, and let $T_u$ be the subtree rooted at $u$. Let $w$ be the parent of $v$ and $u$ in $T_f$.
Let $T'_f$ be the tree obtained from $T_f$ by removing the entire subtree $T_v$, removing the query at $w$, and attaching the subtree $T_u$ at $w$. For all inputs not in $J_v$, $T'_f$ and $T_f$ compute the same value. $T$ starts by querying $Q$. If $Q$ evaluates to $1$, proceed by recursively constructing the $\mbox{{\sc Conj}}$ decision tree for $T_v$. If $Q$ evaluates to $0$, proceed by recursively constructing the $\mbox{{\sc Conj}}$ decision tree for $T'_f$. The correctness of $T$ is obvious. It remains to estimate the depth of $T$. Let $D(s)$ be the number of queries made by the constructed $\mbox{{\sc Conj}}$ decision tree. By construction, we have $D(s)\le 1+ D(2s/3)$, giving $D(s)\le 2\log_{3/2}s$ and thereby proving our claim. The last inequality about size and rank, $\log \textrm{DTSize}(f) \le \textrm{Rank}(f)\log \left(\frac{e n}{\textrm{Rank}(f)}\right)$, comes from \cref{prop:rank_size}. \end{proof} \section{Tightness of Rank and Size relation for $\mbox{{\sc Tribes}}$} \label{sec:rank-size} In \cref{prop:rank_size}, we saw a relation between rank and size. As remarked there, the relation is essentially tight: whenever $\textrm{Rank}(f)=\Omega(n)$, the two bounds are within a constant factor of each other. The function $f=\mbox{{\sc Parity}}_n$ witnesses the tightness of both inequalities. Since $\textrm{Rank}(\mbox{{\sc Parity}})=n$, \cref{prop:rank_size} tells us that $\log\textrm{DTSize}(\mbox{{\sc Parity}})$ lies in the range $[n, n \log e]$, and we know that $\log \textrm{DTSize}(\mbox{{\sc Parity}})=n$. For the $\mbox{{\sc Tribes}}_n$ function, which has $N=n^2$ variables, we know from \cref{thm:rank-tribes} that $\textrm{Rank}(\mbox{{\sc Tribes}}_n)=n\in o(N)$. Thus \cref{prop:rank_size} tells us that $\log\textrm{DTSize}(\mbox{{\sc Tribes}}_n)$ lies in the range $[n, n \log (en)]$. (See also Exercise 14.9 \cite{Jukna-BFCbook} for a direct argument showing $n \le \log\textrm{DTSize}(\mbox{{\sc Tribes}}_n)$).
But that still leaves a $(\log (en))$-factor gap between the two quantities. We show that the true value is closer to the upper end. To do this, we establish a stronger size lower bound for decision trees computing $\mbox{{\sc Tribes}}^d_n$. \begin{lemma}\label{lem:size-lb-tribes} For every $n,m \ge 1$, every decision tree for $\mbox{{\sc Tribes}}^d_{n,m}$ has at least $m^n$ $1$-leaves and $n$ $0$-leaves. \end{lemma} \begin{proof} Recall that $\mbox{{\sc Tribes}}^d_{n,m} = \bigwedge_{i\in [n]} \bigvee_{j\in [m]} x_{i,j}$. We call $x_{i,1}, x_{i,2}, \ldots , x_{i,m}$ the $i$th block of variables. We consider two special kinds of input assignments: 1-inputs of minimum Hamming weight, call this set $S_1$, and 0-inputs of maximum Hamming weight, call this set $S_0$. Each $a\in S_1$ has exactly one 1 in each block; hence $|S_1|=m^n$. Each $b\in S_0$ has exactly $m$ zeroes, all in a single block; hence $|S_0|=n$. We show that in any decision tree $T$ for $\mbox{{\sc Tribes}}^d_{n,m}$, all the inputs in $S=S_1\cup S_0$ go to pairwise distinct leaves. Since all inputs in $S_1$ must go to $1$-leaves of $T$, and all inputs of $S_0$ must go to $0$-leaves, this will prove the claimed statement. Let $a,b$ be distinct inputs in $S_1$. Then there is some block $i\in[n]$ where they differ. In particular, there is a unique $j\in[m]$ where $a_{i,j}=1$, and at this position, $b_{i,j}=0$. The decision tree $T$ must query variable $x_{i,j}$ on the path followed by $a$, since otherwise it will reach the same $1$-leaf on the input $a'$ that differs from $a$ at only this position, contradicting the fact that $\mbox{{\sc Tribes}}^d_{n,m}(a')=0$. Since $b_{i,j}=0$, the path followed by $b$ in $T$ will diverge from that of $a$ at this query, if it has not already diverged before that. So $a,b$ reach different $1$-leaves. Let $a,b$ be distinct inputs in $S_0$. Let $i$ be the unique block where $a$ has all zeroes; $b$ has all 1s in this block.
On the path followed by $a$, $T$ must query all variables from this block, since otherwise it will reach the same $0$-leaf on an input $a''$ that differs from $a$ only at an unqueried position in block $i$, contradicting $\mbox{{\sc Tribes}}^d_{n,m}(a'')=1$. Since $a$ and $b$ differ everywhere on this block, $b$ does not follow the same path as $a$, so they go to different leaves of $T$. \end{proof} We thus conclude that the second inequality in \cref{prop:rank_size} is also asymptotically tight for the $\mbox{{\sc Tribes}}^d_n$ function. The size lower bound from \cref{lem:size-lb-tribes} can also be obtained by specifying a good Delayer strategy in the asymmetric Prover-Delayer game and invoking \cref{prop:game-size}; see \cref{sec:game-proofs}. \section{Proofs using Prover-Delayer Games}\label{sec:game-proofs} In this section we give Prover-Delayer game based proofs of our results. \paragraph*{Prover strategy for $\mbox{{\sc Tribes}}_{n,m}$, proving \cref{lem:rank-tribes-ub}} We give a Prover strategy which restricts the Delayer to $n$ points, proving the upper bound on $\textrm{Rank}(\mbox{{\sc Tribes}}_{n,m})$. Whenever the Delayer defers a decision, the Prover chooses $1$ for the queried variable. The Prover queries the variables $x_{i,j}$ in row-major order. In each row of variables, the Prover queries variables until some variable is set to 1 (either by the Delayer or by the Prover). Once a variable is set to 1, the Prover moves to the next row of variables. This Prover strategy allows the Delayer to defer a decision for at most one variable per row; hence the Delayer's score at the end is at most $n$. \paragraph*{Delayer strategy for $\mbox{{\sc Tribes}}_{n,m}$, proving \cref{lem:rank-tribes-lb}} We give a Delayer strategy which always scores at least $n$ points, proving the lower bound. On a query $x_{i,j}$, the Delayer defers the decision to the Prover unless all other variables in row $i$ have already been queried. In that case, the Delayer responds with a $1$.
Note that with this strategy, the Delayer ensures that the game ends with function value $1$. (No row has all variables set to $0$.) Observe that to certify a $1$-input of the function, the Prover must query at least one variable in each row. Since $m\ge 2$, the Delayer gets to score at least one point per row, and thus has a score of at least $n$ at the end of the game. \paragraph*{Prover strategy for $\mbox{{\sc And}}_n \circ \mbox{{\sc Parity}}_m$, proving \cref{lem:and-parity-ub}} We give a Prover strategy which restricts Delayer to $n(m-1)+1$ points. The Prover queries variables in row-major order. If on query $x_{i,j}$ the Delayer defers a decision to the Prover, the Prover chooses arbitrarily unless $j=m$. If $j=m$, then the Prover chooses a value which makes the parity of the variables in row $i$ evaluate to $0$. Let $j$ be the first row such that the Delayer defers the decision on $x_{j,m}$ to the Prover. (If there is no such row, set $j=n$.) With the strategy above, the Prover will set $x_{j,m}$ in such a way that the parity of the variables in $j$-th row evaluates to $0$, making $f$ evaluate to $0$ and ending the game. The Delayer scores at most $m-1$ points per row for rows before this row $j$, and at most $m$ points in row $j$. Hence the Delayer's score is at most $(j-1)(m-1)+m$ points. Since $j\le n$, the Delayer is restricted to $n(m-1)+1$ points at the end of the game. \paragraph*{Delayer strategy for $\mbox{{\sc And}}_n \circ \mbox{{\sc Parity}}_m$, proving \cref{lem:and-parity-lb}} We give a Delayer strategy which always scores at least $n(m-1)+1$ points. On query $x_{i,j}$, if this is the last un-queried variable, or if there is some un-queried variable in the same $i$-th row, the Delayer defers the decision to the Prover. Otherwise the Delayer responds with a value that makes the parity of the variables in row $i$ evaluate to $1$. This strategy forces the Prover to query all variables to decide the function. 
The Delayer picks up $m-1$ points per row, and an additional point on the last query, giving a total score of $n(m-1)+1$ points. \paragraph*{Prover strategy in terms of certificate complexity, proving \cref{lem:rank-cert}} We give a Prover strategy which restricts the Delayer to $(\textrm{C}_0(f)-1)(\textrm{C}_1(f)-1)+1$ points. Let $\tilde{f}$ be the function obtained by assigning values to the variables queried so far. As long as $\textrm{C}_1(\tilde{f})>1$, the Prover picks an $a\in \tilde{f}^{-1}(0)$ and its $0$-certificate $S$, and queries all the variables in $S$ one by one. If at any point the Delayer defers a decision to the Prover, the Prover chooses the value according to $a$. When $\textrm{C}_1(\tilde{f})$ becomes $1$, the Prover picks an $a\in \tilde{f}^{-1}(1)$ and its $1$-certificate $\{i\}$ and queries the variable $x_i$. If the Delayer defers the decision, the Prover chooses $a_i$. The above strategy restricts the Delayer to $(\textrm{C}_0(f)-1)(\textrm{C}_1(f)-1) + 1$ points; the proof is essentially the same as that of \cref{lem:rank-cert}. \iffalse \paragraph*{Delayer strategy for $f = \mbox{{\sc Maj}}_{2k+1}\circ \mbox{{\sc Maj}}_{2k+1}$, proving \cref{lem:cert-not-ub}} The following Delayer strategy always scores $(k+1)^2+k^2$ points, greater than $C(f)=(k+1)^2$. At an intermediate stage of the game, say that a row is $b$-determined if the variables that are already set in this row already fix the value of $\mbox{{\sc Maj}}_{2k+1}$ on this row to be $b$, and is determined if it is $b$-determined for some $b$. Let $M_b$ be the number of $b$-determined rows. If the game has not yet ended, then $M_0\le k$ and $M_1\le k$. On query $x_{i,j}$, let $n_0,n_1$ be the number of variables in row $i$ already set to $0$ and to $1$ respectively. The Delayer defers the decision if \begin{itemize} \item row $i$ is already determined, or \item $n_0=n_1 < k$, or \item $n_0=n_1=k$ and $M_0=M_1$.
\end{itemize} Otherwise, if $n_0\neq n_1$, then the Delayer chooses the value $b$ where $n_b < n_{1-b}$. If $n_0=n_1=k$, then the Delayer chooses the value $b$ where $M_b < M_{1-b}$. This strategy ensures that at all stages until the game ends, $|M_0-M_1|\le 1$, and furthermore, in all rows that are not yet determined, $|n_0-n_1|\le 1$. Thus a row is determined only after all variables in the row are queried, and the Delayer gets a point for every other query, making a total of $k$ points per determined row. Further, for $k+1$ rows, the Delayer also gets an additional point on the last queried variable. The game cannot conclude before all $2k+1$ rows are determined, so the Delayer scores at least $(k+1)^2+k^2$ points. \fi \paragraph*{Prover and Delayer strategies for composed functions, proving \cref{thm:compose-rank-bounds}} For showing the upper bound, the Prover strategy is as follows: the Prover chooses a depth-optimal tree $T_f$ for $f$ and moves down this tree. Let $X^i$ denote the $i$th block of variables; i.e.\ the set of variables $x_{i,1}, x_{i,2}, \ldots , x_{i,m}$. The Prover queries variables blockwise, choosing to query variables from a particular block according to $T_f$. If $x_i$ is the variable queried at the current node of $T_f$, the Prover queries variables from $X^i$ following the optimal Prover strategy for the function $g$, until the value of $g(X^i)$ becomes known. At this point, the Prover moves to the corresponding subtree in $T_f$. For the lower bound, the Delayer strategy is as follows: when variable $y_k$ is queried, the Delayer responds with $b\in\{0,1\}$ if $\textrm{Rank}(h_{k,b}) > \textrm{Rank}(h_{k,1-b})$, and otherwise defers. Here $h_{k,b}$ is the sub-function of $h$ when $y_k$ is set to $b$. The proof that the above strategies give the claimed bounds is essentially what constitutes the proof of \cref{thm:compose-rank-bounds}.
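For the symmetric game on $\mbox{{\sc Tribes}}_{n,m}$, the Prover and Delayer strategies described earlier can be played against each other mechanically. The following Python sketch, which is our illustration with both strategies hard-coded as in the text, simulates the game and confirms that the Delayer scores exactly $n$ points:

```python
def tribes(assign, n, m):
    """Value of Tribes_{n,m} under a partial assignment, or None if undetermined."""
    val = 1
    for i in range(n):
        row = [assign.get((i, j)) for j in range(m)]
        if 1 in row:
            continue                        # this OR-row is satisfied
        if all(v == 0 for v in row):
            return 0                        # a fully-zero row: function is 0
        val = None                          # row still undetermined
    return val

def prover_query(assign, n, m):
    """Row-major: next unset variable in the first row with no 1 set yet."""
    for i in range(n):
        if 1 in (assign.get((i, j)) for j in range(m)):
            continue
        for j in range(m):
            if assign.get((i, j)) is None:
                return (i, j)

def delayer_answer(assign, q, m):
    """Defer (None) unless q is the last unset variable in its row; then answer 1."""
    i, _ = q
    unset = [j for j in range(m) if assign.get((i, j)) is None]
    return 1 if unset == [q[1]] else None

def play_game(n, m):
    assign, score = {}, 0
    while tribes(assign, n, m) is None:
        q = prover_query(assign, n, m)
        a = delayer_answer(assign, q, m)
        if a is None:                       # deferred: Prover picks 1, Delayer scores
            a, score = 1, score + 1
        assign[q] = a
    return tribes(assign, n, m), score
```

With both strategies in play, the Prover's first query in each row is deferred and answered with $1$, so the game ends with function value $1$ after exactly one scoring query per row, e.g. `play_game(3, 4)` returns `(1, 3)`.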
\paragraph*{Delayer strategy in the asymmetric game for $\mbox{{\sc Tribes}}^d_n$, proving \cref{lem:size-lb-tribes}} We give a Delayer strategy in the asymmetric Prover-Delayer game which scores at least $n\log n$ points. On query $x_{i,j}$, the Delayer responds with $(p_0,p_1)=(1-\frac{1}{k}, \frac{1}{k})$, where $k$ is the number of free variables in row $i$ at the time of the query. We show that the strategy above scores at least $n\log n$ points. The game can end in two possible ways: \begin{enumerate} \item Case 1: The Prover concludes with function value $0$. In this case, the Prover must have queried all variables in some row, say the $i$-th row, and chosen $0$ for all of them. For the last variable queried in the $i$-th row, the Delayer would have responded with $(p_0,p_1)=(0,1)$, and hence scored $\infty$ points in that round and in the game. \item Case 2: The Prover concludes with function value $1$. In this case, the Prover must have set a variable to $1$ in each row. We show that the Delayer scores at least $\log n$ points per row. Pick a row arbitrarily, and let $k$ be the number of free variables in the row when the first variable in that row is set to $1$. The Prover sets $n-k$ variables in this row to $0$ before setting the first variable to $1$. For $b\in\{0,1\}$, let $p_{b,j}$ denote the $p_b$ response of the Delayer when there are $j$ free variables in the row. That is, $p_{0,j}=1-\frac{1}{j}=\frac{j-1}{j}$ and $p_{1,j}=\frac{1}{j}$. The contribution of this row to the overall score is at least \[ \log \frac{1}{p_{0,n}} + \log \frac{1}{p_{0,{n-1}}} + \ldots + \log \frac{1}{p_{0,k+1}} + \log \frac{1}{p_{1,k}} = \log \left(\frac{1}{p_{0,n}}\cdot\frac{1}{p_{0,n-1}}\cdots\frac{1}{p_{0,k+1}}\cdot\frac{1}{p_{1,k}}\right) =\log n . \] Since each row contributes at least $\log n$ points, the Delayer scores at least $n\log n$ points at the end of the game.
\end{enumerate} \section{Conclusion} \label{sec:concl} The main thesis of this paper is that the minimal rank of a decision tree computing a Boolean function is an interesting measure for the complexity of the function, since it is not related to other well-studied measures in a dimensionless way. Whether bounds on this measure can be further exploited in algorithmic settings like learning or sampling remains to be seen. \section{Acknowledgments} The authors thank Anna G\'al and Srikanth Srinivasan for interesting discussions about rank at the Dagstuhl seminar 22371. \bibliographystyle{plain}
\makeatother \newcommand{\bbm}[1]{\left[\begin{matrix} #1 \end{matrix}\right]} \newcommand{\sbm}[1]{\left[\begin{smallmatrix} #1 \end{smallmatrix}\right]} \section{Introduction} \label{sec1} \setcounter{equation}{0} \vspace{-1mm} \ \ \ Consider an unstable finite-dimensional linear plant. Suppose that this plant is driven via an actuator with stable PDE (partial differential equation) dynamics which is not influenced by the plant dynamics (i.e. the actuator is sufficiently strong). Then the actuator-plant model is a cascade interconnection of a PDE system driven by an input and an ODE (ordinary differential equation) system driven by the output of the PDE system. Similarly, suppose that the plant output is measured using a sensor with stable PDE dynamics which does not influence the plant dynamics. Then the plant-sensor model is a cascade interconnection of an ODE system whose output drives the PDE system, whose output is in turn measured. In this paper, we address the problem of designing state and output feedback control laws for stabilizing the former interconnection and the problem of designing an observer for the latter interconnection. Motivated by applications in chemical process control, combustion systems, traffic flow and water channel flow, the above stabilization and estimation problems have been solved for many specific one-dimensional PDE models by Krstic and coauthors using the backstepping method, see \cite{LiKr:2010}, \cite{Kri:2009}, \cite{Kri:2009a}, \cite{Kri:2010}, \cite{KrSm:2008}, \cite{SuKr:2010}. In \cite{KrSm:2008}, the actuator dynamics and sensor dynamics, which are pure delays, are compensated by first modeling them using first-order hyperbolic PDEs and then solving the above problems via the backstepping approach. In \cite{Kri:2009} the PDE model for the actuator and the sensor is a 1D diffusion equation, while in \cite{Kri:2009a} it is a 1D wave equation.
In both these works, the output of the PDE is the boundary value of its state (Dirichlet measurement). The results in \cite{Kri:2009} and \cite{Kri:2009a} were extended in \cite{SuKr:2010} by studying interconnections in which the output of the PDE is the boundary value of the spatial derivative of its state (Neumann measurement). The ODE plants in \cite{Kri:2009}, \cite{Kri:2009a}, \cite{KrSm:2008} and \cite{SuKr:2010} have a single input and a single output. The paper \cite{LiKr:2010} considers plants with multiple inputs and outputs, with the actuator and sensor models being a set of 1D wave PDEs. The controllers that solve the stabilization problem in the above works are of the state feedback form. Recently, a dynamic output feedback controller was proposed in \cite{SaGaKr:2018} for solving the stabilization problem when the PDE (actuator) is either a first-order hyperbolic equation or a 1D diffusion equation. In \cite{ZhGuWu:16}, combining the backstepping approach with the active disturbance rejection control method, an output feedback controller has been developed for stabilizing a wave PDE and ODE cascade system subject to boundary disturbance. In this paper, we will solve the aforementioned stabilization problem for PDE-ODE (actuator-plant) cascade systems and the estimation problem for ODE-PDE (plant-sensor) cascade systems using the Sylvester equation. To explain our approach to solving the stabilization problem, let us suppose that the actuator model is also an ODE. Then the cascade system can be written as \vspace{-1mm} \begin{equation} \label{plant-act} \dot w(t) = E w(t) + F C z(t), \qquad \dot z(t) = A z(t) + B u(t), \vspace{-1mm} \end{equation} where $w(t)\in\rline^n$ and $z(t)\in\rline^p$ are the states of the plant and the actuator, $u(t)\in\rline^m$ is the input and $C z(t)\in\rline^q$ is the actuator output and $E\in \rline^{n\times n}$, $A\in \rline^{p\times p}$, $B\in \rline^{p\times m}$, $C\in\rline^{q\times p}$ and $F\in\rline^{n\times q}$. 
Under the state transformation $\bbm{w & z} \to \bbm{p=w+\Pi z & z}$, where $\Pi\in\rline^{n\times p }$ is a solution to the Sylvester \vspace{-1mm} equation $$ E \Pi = \Pi A +FC, \vspace{-1mm} $$ the state matrix of the cascade system \eqref{plant-act} becomes diagonal: \vspace{-1mm} $$ \bbm{\dot p(t) \\ \dot z(t)} = \bbm{E & 0 \\ 0 & A} \bbm{p(t) \\ z(t)} + \bbm{\Pi B \\ B} u(t). \vspace{-1mm}$$ Suppose that $A$ is Hurwitz (i.e. the actuator model is stable) and the pair $(E,\Pi B)$ is stabilizable, so that $E+\Pi B K$ is Hurwitz for some $K\in\rline^{m\times n}$. Then the control law $u=K p$ stabilizes the above system, i.e. $u=Kw+K\Pi z$ is a stabilizing state feedback control law for the cascade system \eqref{plant-act}. In Section \ref{sec3}, we apply the above approach of diagonalizing the state matrix of the cascade system, to solve the stabilization problem for PDE actuator models belonging to the class of regular linear systems (RLSs). This approach reduces the stabilization problem to a problem of solving an appropriate Sylvester equation with unbounded operators and then stabilizing a finite-dimensional system, see Theorem \ref{th:act:stab}. In Section \ref{sec4}, we use an analogous approach to reduce the estimation problem for the ODE-PDE cascade system, when the PDE system is a RLS, to a problem of solving an appropriate Sylvester equation with unbounded operators and then constructing an estimator for a finite-dimensional system, see Theorem \ref{th:sen:det}. Sylvester equations with unbounded operators play a central role in the state space approach to the output regulation of RLSs, see \cite{linreg}, \cite{Deu:11}, \cite{NaGiWe:14}, \cite{Pau:16}, \cite{XuDu:17} and references therein. This is due to the natural occurrence of ODE-PDE (exosystem-plant) cascade systems in the output regulation problem for RLSs. 
In fact, the Sylvester equation based diagonalization approach for stabilizing PDE-ODE cascade systems discussed in the previous paragraph was used in \cite[Theorem 13]{HaPo:10} to design observer-based controllers for solving the output regulation problem. In \cite{HaPo:10}, it is assumed that the control and observation operators of the PDE plant are bounded and the eigenvalues of the state matrix of the exosystem are on the imaginary axis. By relaxing the first assumption, the controller design technique and the associated diagonalization approach in \cite{HaPo:10} were generalized in \cite[Theorem 15]{Pau:16} by allowing the PDE plant to be any RLS with possibly unbounded control and observation operators. Furthermore, the Sylvester equation based diagonalization approach for building observers for ODE-PDE cascade systems, referred to as the `analogous approach' in the previous paragraph, is used implicitly in the controller design in \cite[Theorem 12]{Pau:16}. Recently, this `analogous approach' was used directly in \cite{XuDu:17} to construct observers for ODE-PDE (exosystem-plant) cascade systems, assuming that the control operator for the PDE plant is bounded and the eigenvalues of the state matrix of the exosystem are simple and lie on the imaginary axis. This work highlighted the advantage of the diagonalization approach by explicitly demonstrating how it simplifies the estimation problem for ODE-PDE cascade systems to an estimation problem for ODE systems, and thereby inspired the developments in the current work. In this paper, we use the Sylvester equation based diagonalization approach to present a unified framework for constructing output feedback controllers for stabilizing PDE-ODE cascade systems in Section \ref{sec3} and observers for ODE-PDE cascade systems in Section \ref{sec4}. We let the PDE system be any stable (or easily stabilizable) RLS and the ODE system need not be marginally stable (unlike in the regulator theory). 
We derive necessary and sufficient conditions for verifying the solvability of the stabilization and estimation problems. We prove that the controller solving the stabilization problem is robust to certain unbounded perturbations of the PDE. The regularity assumption on the PDE system can be relaxed, see Remarks \ref{rm:act:nonreg} and \ref{rm:sen:nonreg}. Using these results we can solve the robust stabilization and estimation problems for (almost) all the PDE-ODE and ODE-PDE cascade systems which have been considered in the literature using the backstepping approach, see Section \ref{sec6} for a detailed discussion. In Section \ref{sec5}, we illustrate the results in Section \ref{sec3} using a 1D diffusion equation and the results in Section \ref{sec4} using a 1D wave equation. We remark that for 1D constant coefficient PDEs, it is straightforward to solve the Sylvester equation and construct the desired controllers and observers, see Remark \ref{rm:sylsol}. The current paper is a significantly expanded version of the conference paper \cite{Nat:19}. In \cite{Nat:19} only the stabilization problem was considered, for which only a state feedback controller was developed under an assumption that is hard to verify. The proofs in \cite{Nat:19} were either shortened or omitted due to space constraints and the robustness of the controller was also not studied. {\em Notation}: Define $\cline^-_\o=\{s\in\cline\big| \Re s<\o\}$ and $\cline^+_\o=\{s\in\cline\big| \Re s>\o\}.$ The closures of $\cline^-_\o$ and $\cline^+_\o$ in $\cline$ are denoted by $\overline{\cline^-_\o}$ and $\overline{\cline^+_\o}$, respectively. When $\o=0$, we drop the subscript. Let $X$ and $Y$ be Hilbert spaces. Then $\Lscr(X,Y)$, written as $\Lscr(X)$ if $X=Y$, denotes the space of bounded linear operators from $X$ to $Y$. The space of $X$-valued locally square integrable functions on $[0,\infty)$ is denoted as $L^2_{\rm loc}([0,\infty);X)$.
For each $\alpha\in\rline$, the space $L^2_\alpha([0,\infty);X)$ $=\{ u\in L^2_{\rm loc}([0,\infty);X) \big| \int_0^\infty e^{-2\alpha t}\|u(t)\|^2\dd t <\infty\}$ is a Hilbert space with norm being the square root of the integral in the expression. For a linear operator $A:D(A)\subset X\to X$, where $D(A)$ is the domain of $A$, let $\sigma(A)$ be its spectrum and $\rho(A)$ its resolvent set. For a Banach space $X$, $H^\infty(X)$ is the Banach space of $X$-valued bounded analytic functions on $\cline^+$ with the sup norm. Let $I_X$, or just $I$ when $X$ is clear, denote the identity operator on the space $X$. \vspace{-5mm} \section{Regular linear systems} \label{sec2} \setcounter{equation}{0} \vspace{-1mm} \ \ \ In this section, we summarize some results on regular linear systems and their feedback interconnections. For more details, see \cite{obs_book}, \cite{Wei:94a}, \cite{Wei:94b} and \cite{WeCu:97}. Let $Z$, $U$ and $Y$ be Hilbert spaces. Let $A$ be the generator of a strongly continuous semigroup $\tline$ on $Z$ with growth bound $\o_\tline$. The semigroup $\tline$ (or equivalently $A$) is exponentially stable if $\o_\tline<0$. For some $\beta\in\rho(A)$, let $Z_1$ be the domain of $A$ with the norm $\|z\|_1=\|(\beta I - A)z\|$ and let $Z_{-1}$ be the completion of $Z$ with respect to the norm $\|z\|_{-1}=\|(\beta I - A)^{-1}z\|$. Let $B\in\Lscr(U,Z_{-1})$ be an admissible control operator for $\tline$. Let $C\in\Lscr(Z_1,Y)$ be an admissible observation operator for $\tline$ and let $C_\L$ be its $\L$-extension with respect to $A$. Then for each $\alpha> \o_\tline$ there exist $K_\alpha, M_\alpha \geq 0$ such that \vspace{-1mm} \begin{equation} \label{eq:Best} \|(sI-A)^{-1}B\|_{\Lscr(U,Z)} \m\leq\m \frac{K_\alpha}{\sqrt{\Re s-\alpha}} \FORALL s\in\cline_\alpha^+, \vspace{-2mm} \end{equation} \begin{equation} \label{eq:Cest} \|C(sI-A)^{-1}\|_{\Lscr(Z,Y)} \m\leq\m \frac{M_\alpha}{\sqrt{\Re s-\alpha}} \FORALL s\in\cline_\alpha^+.
\vspace{-1mm} \end{equation} If (i) $C_\L(sI-A)^{-1}B$ exists for each $s\in\rho(A)$ and (ii) $\sup_{s\in\cline_\alpha^+} \|C_\L(sI-A)^{-1} B\|_{\Lscr(U,Y)} <\infty$ for any $\alpha>\o_\tline$, then the triple $(A,B,C)$ is said to be regular. The regular linear system (RLS) $\Sigma$ corresponding to a regular triple $(A,B,C)$ and a feedthrough operator $D\in \Lscr(U,Y)$ is the pair of equations \vspace{-1mm} \begin{align} \dot z(t) &= A z(t) + B u(t), \label{eq:RLSstate}\\[0.5ex] y(t) &= C_\L z(t) + D u(t). \label{eq:RLSoutput}\\[-4ex]\nonumber \end{align} The operators $(A,B,C,D)$ are the generating operators (GOs) of $\Sigma$, $A$ is the state operator and $Z$, $U$ and $Y$ are the state, input and output spaces, respectively. The RLS $\Sigma$ is exponentially stable if $A$ is exponentially stable. For each initial state $z(0)\in Z$ and input $u\in L^2_{\rm loc}([0,\infty);U)$, the state trajectory $z$ of $\Sigma$ (or \eqref{eq:RLSstate}) is \vspace{-1mm} \begin{equation*} z(t) \m=\m \tline_t z(0) + \int_0^t \tline_{t-\tau}B u(\tau)\dd \tau \FORALL t\geq0. \vspace{-1mm} \end{equation*} This trajectory is the unique function in $C([0,\infty);Z) \cap H^1_{\rm loc} ([0,\infty); Z_{-1})$ which satisfies \eqref{eq:RLSstate} in $Z_{-1}$ for almost all $t\geq0$. Moreover, $z(t)\in D(C_\L)$ for almost all $t\geq0$ and \eqref{eq:RLSoutput} defines an output $y\in L^2_{\rm loc}([0,\infty);Y)$. The transfer function of $\Sigma$ is \vspace{-1mm} \begin{equation} \label{eq:rlstf} \GGG(s) = C_\L(sI-A)^{-1}B+D \FORALL s\in\cline_{\o_\tline}^+. \vspace{-1mm} \end{equation} For each $\o>\o_\tline$, the map $\GGG:\cline_{\o}^+\to\Lscr(U,Y)$ is bounded. If $u\in L^2_\alpha([0,\infty);U)$, then the output $y\in L^2_\gamma([0,\infty);Y)$ for each $\gamma>\max\{ \alpha,\o_\tline\}$ and $\hat y(s)=C(sI-A)^{-1}z(0)+\GGG(s)\hat u(s)$ for all $s\in\cline_{\gamma}^+$.
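When $Z$ is finite-dimensional every triple $(A,B,C)$ is regular and the state trajectory formula above can be checked directly. The sketch below (in Python, with illustrative matrices and a constant input, none taken from the paper) evaluates the convolution integral by quadrature and compares it with the closed-form expression $z(t)=e^{At}z(0)+A^{-1}(e^{At}-I)Bu_0$, valid for a constant input $u_0$ and invertible $A$.

```python
# Check of z(t) = T_t z(0) + \int_0^t T_{t-s} B u(s) ds with T_t = e^{At},
# for an illustrative finite-dimensional system and constant input u0.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
z0 = np.array([1.0, -1.0])
u0 = np.array([0.5])                 # constant input u(t) = u0
t = 1.5

# Variation-of-constants formula, integral evaluated by the trapezoid rule.
taus = np.linspace(0.0, t, 4001)
kernel = np.stack([expm(A * (t - s)) @ (B @ u0) for s in taus])
z_mild = expm(A * t) @ z0 + trapezoid(kernel, taus, axis=0)

# Closed form for constant input (A invertible):
# z(t) = e^{At} z0 + A^{-1} (e^{At} - I) B u0.
z_exact = expm(A * t) @ z0 + np.linalg.solve(A, (expm(A * t) - np.eye(2)) @ (B @ u0))
assert np.allclose(z_mild, z_exact, atol=1e-6)
```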
An operator $P\in\Lscr(Y,U)$ is an {\em admissible feedback operator} for the transfer function $\GGG$ in \eqref{eq:rlstf} if $[I_Y-P\GGG(s)]^{-1}$ exists and is bounded on $\cline_\alpha^+$ for some $\alpha\in\rline$. \begin{definition} \label{def:stab} The pair $(A,B)$ is {\em stabilizable} if there exists an admissible observation operator $K\in\Lscr(Z_1,U)$ for $\tline$ such that $(A,B,K)$ is a regular triple, $I\in\Lscr(U)$ is an admissible feedback operator for $K_\L(sI-A)^{-1}B$ and $A+BK_\L$ is the generator of an exponentially stable semigroup on $Z$. \end{definition} For any $K$ satisfying the conditions in the above definition, $u=K_\L z$ is called a stabilizing state feedback control law for \eqref{eq:RLSstate}. For each initial state $z(0)\in Z$, this control law defines a $u\in L^2([0,\infty);U)$ which ensures that the state trajectory $z$ of \eqref{eq:RLSstate} converges to zero. Suppose that $K\in\Lscr(Z,U)$, so that $K_\L=K$. Then, using \eqref{eq:Best}, it follows that $K$ satisfies all the conditions in the definition, except that the semigroup generated by $A+BK$ may not be exponentially stable. In particular, for some $\alpha\in\rline$ and each initial state $z(0)\in Z$ there exists a unique state trajectory $z\in L^2_\alpha([0,\infty);Z)$ of \eqref{eq:RLSstate} with $u=K z$. The operator $A+BK$ is exponentially stable if this trajectory satisfies $\|z(t)\|\leq M e^{-\o t}\|z(0)\|$ for some $M,\o>0$ and each $t\geq0$. \begin{definition} \label{def:det} The pair $(C,A)$ is {\em detectable} if there exists an admissible control operator $L\in\Lscr(Y,Z_{-1})$ for $\tline$ such that $(A,L,C)$ is a regular triple, $I\in\Lscr(Y)$ is an admissible feedback operator for $C_\L(sI-A)^{-1}L$ and $A+LC_\L$ is the generator of an exponentially stable semigroup on $Z$. \end{definition} For $L$ as in the definition, since $(A,B,C)$ is a regular triple, the triple $(A+LC_\L,[B\ \ L],C)$ is regular.
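In the finite-dimensional case, Definitions \ref{def:stab} and \ref{def:det} reduce to the classical stabilizability and detectability notions, which can be checked with the Hautus test used repeatedly later in this paper. The following sketch (illustrative matrices, not from the paper) performs the rank tests and constructs stabilizing gains $K$ and $L$ via algebraic Riccati equations.

```python
# Finite-dimensional illustration of stabilizability and detectability
# via the Hautus rank test; all matrices are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[1.0, 1.0], [0.0, -1.0]])   # one unstable mode at s = 1
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])

def hautus_stabilizable(A, B):
    # (A, B) is stabilizable iff rank [sI - A, B] = n for every
    # eigenvalue s of A with Re s >= 0.
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.hstack([s * np.eye(n) - A, B])) == n
        for s in np.linalg.eigvals(A) if s.real >= 0
    )

assert hautus_stabilizable(A, B)
assert hautus_stabilizable(A.T, C.T)      # (C, A) detectable (dual test)

# Stabilizing gains from algebraic Riccati equations:
K = -B.T @ solve_continuous_are(A, B, np.eye(2), np.eye(1))
L = -(solve_continuous_are(A.T, C.T, np.eye(2), np.eye(1)) @ C.T)
assert np.linalg.eigvals(A + B @ K).real.max() < 0    # state feedback
assert np.linalg.eigvals(A + L @ C).real.max() < 0    # output injection
```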
The state equation \vspace{-1mm} \begin{equation} \label{eq:act:observer} \dot{\hat z}(t) = (A+LC_\L) \hat z(t) - L y(t) + (B+LD) u(t) \vspace{-1mm} \end{equation} is called an {\em observer} for \eqref{eq:RLSstate}-\eqref{eq:RLSoutput} and for every initial state $z(0)$ of \eqref{eq:RLSstate} and $\hat z(0)$ of \eqref{eq:act:observer} and $u\in L^2_{\rm loc}([0,\infty);U)$, we have $\|z(t)-\hat z(t)\|\leq Me^{-\o t} \|z(0)-\hat z(0)\|$ for some constants $M,\o>0$ and all $t\geq0$. For $k=1,2$, let $\Sigma_k$ be a RLS with state space $Z_k$, input space $U_k$, output space $Y_k$, input $u_k$, output $y_k$ and transfer function $\GGG_k$. Suppose that $Y_1=U_2$, $Y_2=U_1$, the identity operator $I_{U_1}$ is an admissible feedback operator for $\GGG_2\GGG_1$ and $I-D_2D_1$ is invertible. Then the feedback interconnection in Figure 1 is a RLS, denoted as $\Sigma_{fb}$, with state space $Z_1\times Z_2$, input space $U_1$, output space $Y_2$, input $v$ and output $y_2$. If the state operator of the RLS $\Sigma_{fb}$ is exponentially stable, then we call $\Sigma_2$ a {\em stabilizing output feedback controller} for $\Sigma_1$. \vspace{-4mm} $$\hspace{40mm}\parbox{5in}{\includegraphics[scale=0.14]{feedback.eps}}$$ \centerline{ \parbox{5.2in}{Figure 1. Feedback interconnection of regular linear systems $\Sigma_1$ and $\Sigma_2$.\vspace{1mm}}} \section{ODE plant with PDE actuator} \label{sec3} \setcounter{equation}{0} \vspace{-1mm} \ \ \ Consider a PDE-ODE cascade system in which the output of the PDE system drives the ODE system. The ODE models the plant dynamics, while the PDE models the actuator dynamics.
The state dynamics of the cascade system is described by the following differential equations: for $t>0$ \vspace{-1mm} \begin{align} \dot w(t) &= E w(t) + FC_\L z(t), \label{eq:act:ode}\\[0.5ex] \dot z(t) &= A z(t) + B u(t), \label{eq:act:pde}\\[-4ex]\nonumber \end{align} where $w(t)\in\rline^n$ is the plant state, $z(t)\in Z$ is the actuator state, $Z$ is a Hilbert space, $u(t)\in\rline^m$ is the input, $E\in \rline^{n\times n}$, $F\in\rline^{n\times q}$, $A$ is the generator of a strongly continuous semigroup $\tline$ on $Z$, $B\in \Lscr(\rline^m,Z_{-1})$ is an admissible control operator for $\tline$, $C\in\Lscr(Z_1,\rline^q)$ is an admissible observation operator for $\tline$ and $(A,B,C)$ is a regular triple. The admissibility of $B$ is not essential and can be relaxed, see Remark \ref{rm:act:nonreg}. The output $y$ of the plant takes values in $\rline^p$ and is given by \vspace{-1mm} \begin{equation} \label{eq:act:output} y(t) = G w(t) + H C_\L z(t),\qquad t\geq0, \vspace{-1mm} \end{equation} where $G\in\rline^{p\times n}$ and $H\in\rline^{p\times q}$. For the PDE system (actuator), the output is $C_\L z$ and transfer function is \vspace{-2mm} \begin{equation} \label{eq:act:RLStf} \GGG(s) = C_\L(sI-A)^{-1}B \FORALL s\in\cline^+_{\o_\tline}. \vspace{-2mm} \end{equation} The combined state space for the plant and actuator is $Z_{cs}=\rline^n\times Z$ and the state, control and observation operators for the combined dynamics (with input $u$, state $[w\ \ z]^\top$ and output $y$) are \vspace{-1mm} \begin{equation} \label{eq:act:cas} A_{cs} = \bbm{E & FC_\L \\ 0 & A}, \qquad B_{cs} = \bbm{0\\ B}, \qquad C_{cs} = \bbm{G & HC_\L}. 
\vspace{-1mm} \end{equation} From the feedback theory for regular linear systems \cite[Lemma 5.1]{WeCu:97} it follows that the cascade system \eqref{eq:act:ode}-\eqref{eq:act:output} is a RLS, denoted as $\Sigma_{cs}$, with generating operators $(A_{cs}, B_{cs}, C_{cs},0)$, input space $\rline^m$, state space $Z_{cs}$ and output space $\rline^p$. We suppose, with no loss of generality, that $E$ is of the form \vspace{-2mm} \begin{equation} \label{eq:act:Edecom} E = \bbm{E_1 & 0\\ 0 & E_2}, \vspace{-2mm} \end{equation} where $E_1\in\rline^{n_1\times n_1}$, $\sigma(E_1) \subset \overline{\cline^+}$, $E_2\in\rline^{n_2\times n_2}$ and $\sigma(E_2) \subset {\cline^-}$. The corresponding partitions of $w$, $F$ and $G$ are \vspace{-2mm} \begin{equation} \label{eq:act:part} w=\bbm{w_1\\ w_2},\qquad F = \bbm{F_1 \\ F_2}, \qquad G=\bbm{G_1 & G_2}. \vspace{-2mm} \end{equation} We will derive a stabilizing state feedback control law $u=K_{cs}[w\ \ z]^\top$ for \eqref{eq:act:ode}-\eqref{eq:act:pde} in Theorem \ref{th:act:stab}. A stabilizing output feedback controller for $\Sigma_{cs}$ is presented in Theorem \ref{th:act:det}. These results can be extended to derive stabilizing controllers for the system \eqref{eq:act:ode}-\eqref{eq:act:pde} modified to include a term $J u$ in \eqref{eq:act:ode}, see Remark \ref{rm:act:notcas}. We will need the following two assumptions. \begin{assumption} \label{as:act:exp} The semigroup $\tline$ (or equivalently $A$) is exponentially stable. \vspace{-1mm} \end{assumption} Assumption \ref{as:act:exp} is made to simplify the presentation and it is no more restrictive than the requirement that the pair $(A,B)$ be stabilizable. Indeed, if $A$ is not stable and $(A,B)$ is stabilizable, consider the cascade interconnection of \eqref{eq:act:ode} and \vspace{-1mm} \begin{equation} \label{eq:act:repde} \dot z(t) = (A+BK_\L) z(t) + B u(t), \vspace{-1mm} \end{equation} where $K$ is as in Definition \ref{def:stab}.
A state feedback control law $K_1w+K_{2\L}z+K_\L z$ is stabilizing for \eqref{eq:act:ode}-\eqref{eq:act:pde} if and only if $K_1 w+K_{2\L}z$ is a stabilizing state feedback control law for \eqref{eq:act:ode}, \eqref{eq:act:repde}. Hence when $A$ is not stable, we can work with \eqref{eq:act:ode}, \eqref{eq:act:repde} for which Assumption \ref{as:act:exp} holds, instead of \eqref{eq:act:ode}-\eqref{eq:act:pde}. We remark that when $B$ is bounded, Assumption \ref{as:act:exp} is no more conservative than the natural assumption that the system \eqref{eq:act:ode}-\eqref{eq:act:pde} is stabilizable. This follows from the observation that the stabilizability of \eqref{eq:act:ode}-\eqref{eq:act:pde} implies the optimizability of $(A,B)$, which then implies the stabilizability of $(A,B)$ \cite{Cur-Zw} (for unbounded $B$ the latter implication is not known \cite{WeRe:00}). In the particular case in which the unstable subspace of $A$ is finite-dimensional, we can combine it with the unstable subspace of $E$, redefine $A$, $B$, $C$, $E$ and $F$ suitably and then work with \eqref{eq:act:ode}-\eqref{eq:act:pde} (with redefined operators) for which Assumption \ref{as:act:exp} holds, see Example 5.1 for an illustration of this approach. \begin{assumption} \label{as:act:stab} $v^\top F_1\GGG(\l)\neq0$ for each eigenvalue $\l\in\sigma(E_1)$ and nonzero vector $v\in\rline^{n_1}$ satisfying $v^\top E_1=\l v^\top$. \vspace{-1mm} \end{assumption} Note that $\GGG(\l)$ exists for all $\l\in\sigma(E_1)$ since $\sigma(E_1) \subset \cline^+_{ \o_\tline}\subset\rho(A)$ by Assumption \ref{as:act:exp}. Assumption \ref{as:act:stab} implies that $v^\top F_1\neq0$ for each left eigenvector $v^\top$ of $E_1$, which in turn implies via the Hautus test that the pair $(E_1,F_1)$ is stabilizable. In Proposition \ref{pr:act:stabcon} we will show that when $A$ is exponentially stable, the pair $(A_{cs},B_{cs})$ is stabilizable if and only if Assumption \ref{as:act:stab} holds. 
So when $A$ is not exponentially stable, in light of the discussion below Assumption \ref{as:act:exp}, the pair $(A_{cs},B_{cs})$ is stabilizable if (and also {\em only if} when $B$ is bounded) $(A,B)$ is stabilizable and for some $K$ as in Definition \ref{def:stab}, Assumption \ref{as:act:stab} holds with $\GGG_K(\l)=C_\L(\l I-A-BK_\L)^{-1}B$ in place of $\GGG(\l)$. If $\GGG(\l)$ exists for a $\l\in\sigma(E_1)$, then it is easy to check that $ \GGG_K(\l)=\GGG(\l)(I+K_\L(\l I-A-BK_\L)^{-1}B)$ and $\GGG(\l)=\GGG_K(\l)(I-K_\L(\l I-A)^{-1}B), $ which implies that $v^\top F_1\GGG(\l)\neq0$ if and only if $v^\top F_1\GGG_K(\l)\neq0$. Therefore, if $\GGG(\l)$ exists for each $\l\in\sigma(E_1)$, the pair $(A_{cs},B_{cs})$ is stabilizable if $(A,B)$ is stabilizable and Assumption \ref{as:act:stab} holds, see Example 5.1 for an illustration. Next we present a result on the existence of solutions to Sylvester equations with unbounded operators. This result has been established in \cite{Pau:16} assuming that $\sigma(\Escr)$ lies on the imaginary axis. \begin{framed} \vspace{-3mm} \begin{lemma} \label{pr:act:syl} Let $\Ascr$ be the generator of an exponentially stable strongly continuous semigroup $\sline$ on a Hilbert space $X$. Let $\Escr\in\rline^{n\times n}$ be such that $\sigma(\Escr)\subset \overline{\cline^+}$. Let $Q\in \Lscr(X_1,\rline^n)$ be an admissible observation operator for $\sline$. Then there exists a linear map $\Pi:\Ascr D(Q_\L)\to\rline^n$ with $\Pi\in \Lscr(X,\rline^n)$ such that \vspace{-2mm} \begin{equation} \label{eq:act:syl} \Escr\Pi x = \Pi \Ascr x + Q_\L x \FORALL x\in D(Q_\L). \vspace{-3mm} \end{equation} Furthermore, if $P\in\Lscr(\rline^m,X_{-1})$ is an admissible control operator for $\sline$ and $(\Ascr,P,Q)$ is a regular triple, then $\Pi P\in \Lscr(\rline^m,\rline^n)$. 
\vspace{-3mm} \end{lemma} \end{framed} \vspace{-3mm} \begin{proof} Observe that $e^{-\Escr t}$ can be written as follows: \vspace{-1mm} \begin{equation} \label{eq:act:matexp} e^{-\Escr t} = \sum_{k=1}^{v}\sum_{j=0}^{r} E_{kj} e^{-\l_k t} \frac{t^j}{j!}, \vspace{-1mm} \end{equation} where each $E_{kj}\in\rline^{n\times n}$ is a constant matrix and $\l_k\in\overline{\cline^+}$ is an eigenvalue of $\Escr$. Taking the derivative of \eqref{eq:act:matexp} with respect to $t$ gives \vspace{-1mm} $$ -\Escr \sum_{k=1}^{v}\sum_{j=0}^{r} E_{kj} e^{-\l_k t} \frac{t^j}{j!} = \sum_{k=1}^{v}\sum_{j=0}^{r}\left(-E_{kj}\l_k + E_{k\, j+1} \right) e^{-\l_k t} \frac{t^j}{j!}, \vspace{-1mm}$$ where $E_{k\, r+1}=0$ by definition. Comparing the coefficients of $e^{-\l_k t}t^j$ on both sides it then follows that for $k\in\{1,2,\ldots v\}$ and $j\in\{0,1,\ldots r\}$, \vspace{-1mm} \begin{equation} \label{eq:act:matexpid} \Escr E_{kj} = \l_k E_{kj} - E_{k\,j+1}. \vspace{-1mm} \end{equation} Define $\Pi\in\Lscr(X,\rline^n)$ as follows: \vspace{-1mm} \begin{equation} \label{eq:act:Pisum} \Pi = \sum_{k=1}^v\sum_{j=0}^r E_{kj} Q_\L (\l_k-\Ascr)^{-1-j}. \vspace{-1mm} \end{equation} Then $\Pi$ maps $\Ascr D(Q_\L)$ to $\rline^n$ and solves \eqref{eq:act:syl}. Indeed, for any $x\in D(Q_\L)$, \vspace{-1mm} \begin{align*} \Pi\Ascr x &= \sum_{k=1}^v\sum_{j=0}^r \l_k E_{kj} Q_\L (\l_k-\Ascr)^{-1-j}x - E_{kj} Q_\L (\l_k-\Ascr)^{-j}x \\ &= -\sum_{k=1}^v E_{k0}Q_\L x + \sum_{k=1}^v\sum_{j=0}^r (\l_k E_{kj}-E_{k\, j+1}) Q_\L (\l_k-\Ascr)^{-1-j}x .\\[-4ex] \end{align*} Using $ \sum_{k=1}^{v} E_{k0} = I$, which follows by letting $t=0$ in \eqref{eq:act:matexp}, and \eqref{eq:act:matexpid} it follows that the expression on the last line is $\Escr\Pi x - Q_\L x$. Finally, if $(\Ascr,P,Q)$ is a regular triple, then by definition $Q_\L (sI-\Ascr)^{-1}P\in\Lscr(\rline^m,\rline^n)$ for each $s\in\rho(\Ascr)$. 
This, the fact that $\sigma(\Escr)\subset\rho(\Ascr)$ and the expression for $\Pi$ in \eqref{eq:act:Pisum} imply that $\Pi P\in\Lscr(\rline^m,\rline^n)$. \end{proof} Next we present a stabilizing state feedback control law for the PDE-ODE system \eqref{eq:act:ode}-\eqref{eq:act:pde}. Recall the notation $E_1$, $E_2$, $F_1$, $F_2$, $w_1$ and $w_2$ from \eqref{eq:act:Edecom} and \eqref{eq:act:part}. \vspace{-1mm} \begin{framed} \vspace{-3mm} \begin{theorem} \label{th:act:stab} Consider the PDE-ODE cascade system \eqref{eq:act:ode}-\eqref{eq:act:pde}. Suppose that Assumption \ref{as:act:exp} holds. Define \vspace{-2mm} $$ A_1 = \bbm{E_2 & F_2 C_\L\\ 0 & A}, \qquad B_1 = \bbm{\ 0_{n_2\times m}\\ \!\!\!\!\!\!\!\!\!B}, \qquad C_1 = \bbm{0_{q\times n_2} & C}. \vspace{-2mm}$$ Then $A_1$ is the generator of an exponentially stable strongly continuous semigroup $\sline$ on $X=\rline^{n_2}\times Z$, the control operator $B_1\in\Lscr (\rline^m,X_{-1})$ and the observation operator $C_1\in\Lscr(X_1,\rline^q)$ are admissible for $\sline$ and the triple $(A_1,B_1,C_1)$ is regular. There exists $\Pi:A_1 D(C_{1\L})\to\rline^{n_1}$ with $\Pi\in \Lscr(X, \rline^{n_1})$ such that \vspace{-2mm} \begin{equation} \label{eq:act:sylth} E_1 \Pi x = \Pi A_1 x + F_1 C_{1\L} x \FORALL x\in D(C_{1\L}) \vspace{-2mm} \end{equation} and $\Pi B_1\in\Lscr(\rline^m,\rline^{n_1})$. Suppose that Assumption \ref{as:act:stab} also holds. Then the pair $(E_1,\Pi B_1)$ is stabilizable. Let $K\in \rline^{m\times n_1}$ be such that $E_1+\Pi B_1 K$ is Hurwitz. Then $u = Kw_1+K\Pi [w_2\ \ z]^\top$ is a stabilizing state feedback control law for \eqref{eq:act:ode}-\eqref{eq:act:pde}. Moreover, for all $\delta\in\rline$ sufficiently small, this control law also stabilizes the perturbed RLS \vspace{-4mm} \begin{align} \dot w(t) &= E w(t) + FC_\L z(t), \label{eq:act:Rode}\\[0.5ex] \dot z(t) &= (A+\delta A) z(t) + B u(t).
\label{eq:act:Rpde}\\[-5ex]\nonumber \end{align} \end{theorem}\vspace{-3mm} \end{framed} \vspace{-4mm} \begin{proof} The semigroup generated by $A_1$ on $X$ is $\sline_t=\sbm{e^{E_2 t} & \star\\ 0 & \tline_t}$ for all $t\geq0$, where $\star$ denotes an entry whose exact form is not needed here. Since $\sigma(E_2) \subset\cline^-$ and $\tline$ is exponentially stable, $\sline$ is exponentially stable. All this and the admissibility of $B_1$ and $C_1$ and the regularity of the triple $(A_1,B_1,C_1)$ follow from the feedback theory for RLSs \cite[Lemma 5.1]{WeCu:97}. Since $(A_1,B_1,C_1)$ is regular and $F_1$ is a bounded map, we can conclude that $F_1C_1$ is an admissible observation operator for $\sline$, its $\L$-extension is $F_1C_{1\L}$ with $D(F_1C_{1\L})=D(C_{1\L})$ and $(A_1,B_1,F_1C_1)$ is regular. Hence applying Lemma \ref{pr:act:syl} with $\Escr=E_1$, $\Ascr=A_1$, $Q=F_1C_1$ and $P=B_1$, we get that there exists $\Pi\in\Lscr(X, \rline^{n_1})$ which solves \eqref{eq:act:sylth} and $\Pi B_1\in\Lscr(\rline^m, \rline^{n_1})$. From \eqref{eq:act:Pisum} we have $\Pi = \sum_{k=1}^v\sum_{j=0}^r E_{kj} F_1C_{1\L} (\l_k-A_1)^{-1-j}$ for some matrices $E_{kj}$ and $\l_k\in\sigma(E_1)$. Suppose that Assumption \ref{as:act:stab} holds. Then the pair $(E_1,\Pi B_1)$ is stabilizable. Indeed, if not, then via the Hautus test there exists a $\l\in\sigma(E_1)$ and non-zero $v\in\rline^{n_1}$ such that \vspace{-1mm} \begin{equation} \label{eq:act:Hau} v^\top E_1 = \l v^\top, \qquad v^\top \Pi B_1=0. \vspace{-1mm} \end{equation} Since $(A_1,B_1,C_1)$ is a regular triple, $(\l I - A_1)^{-1}B_1 \rline^m \subset D(C_{1\L})$. Choosing $x=(\l I-A_1)^{-1} B_1 u_1$ in \eqref{eq:act:sylth} with $u_1\in\rline^m$ and then applying $v^\top$ from the left to both sides of the resulting expression, we get using $C_{1\L}(\l I-A_1)^{-1}B_1=\GGG(\l)$ and the first equation in \eqref{eq:act:Hau} that \vspace{-2mm} \begin{equation} \label{eq:act:vPiB1} v^\top\Pi B_1 u_1 = v^\top F_1 \GGG(\l)u_1 \FORALL u_1\in \rline^m.
\vspace{-1mm} \end{equation} Using the second equation in \eqref{eq:act:Hau} it follows from \eqref{eq:act:vPiB1} that $v^\top F_1\GGG(\l)=0$, which contradicts Assumption \ref{as:act:stab}. Hence the pair $(E_1,\Pi B_1)$ is stabilizable. Fix $K\in\Lscr(\rline^{n_1},\rline^m)$ such that $E_1+\Pi B_1 K$ is Hurwitz. Define $K_{cs}\in\Lscr(Z_{cs},\rline^m)$ by $K_{cs}[w\ \ z]^\top=Kw_1+K\Pi z_1$, where $z_1=[w_2\ \ z]^\top$. Recall that \eqref{eq:act:ode}-\eqref{eq:act:pde} can be written as $\dot \nu = A_{cs} \nu + B_{cs} u$, where $\nu = [w \ \ z]^\top$. Since $K_{cs}$ is bounded, it follows from the discussion below Definition \ref{def:stab} that for some $\alpha\in\rline$ and each initial state $[w(0)\ \ z(0)]^\top\in Z_{cs}$ there exists a unique state trajectory $[w \ \ z]^\top\in L^2_\alpha([0,\infty);Z_{cs})$ of \eqref{eq:act:ode}-\eqref{eq:act:pde} with $u=K_{cs}[w\ \ z]^\top$. Since $(A,B,C)$ is a regular triple, we have $C_\L z \in L^2_\gamma([0,\infty); \rline^q)$ for some $\gamma>\alpha$. Along this state trajectory, $w_1$ and $z_1$ satisfy \vspace{-1mm} $$ \dot w_1(t) = E_1 w_1(t) + F_1 C_{1\L} z_1(t),\qquad \dot z_1(t) = A_1 z_1(t) + B_1 (Kw_1(t)+K\Pi z_1(t)) \vspace{-1mm}$$ in $\rline^{n_1}\times X_{-1}$, for almost all $t\geq0$. Note that $C_{1\L}=\bbm{0&C_\L}$ and hence $C_{1\L} z_1=C_\L z\in L^2_\gamma([0,\infty); \rline^q)$. Define $p_1=w_1+\Pi z_1$. Taking the Laplace transform of the above equations we get that for all $s\in\cline^+_{\max\{0,\gamma\}}\cap\rho(E_1)$ \vspace{-1mm} \begin{align} \hat z_1(s) &= (sI-A_1)^{-1}z_1(0)+(sI-A_1)^{-1}B_1K\hat p_1(s), \nonumber\\[0.5ex] \hat p_1(s) &= (sI-E_1)^{-1}\big[\m F_1C_{1\L}+ (sI-E_1)\Pi\m\big] (sI-A_1)^{-1}\big[\m B_1 K\hat p_1(s) + z_1(0)\m\big] \nonumber\\ &\qquad + (sI-E_1)^{-1} w_1(0). \label{eq:act:p1hat} \end{align} Here {\em hat} denotes the Laplace transform. From \eqref{eq:act:sylth}, we have $F_1C_{1\L}+ (sI-E_1)\Pi=\Pi(sI-A_1)$.
Using this in \eqref{eq:act:p1hat} we get \vspace{-1mm} $$ \hat p_1(s) = (sI-E_1)^{-1}\Pi B_1K\hat p_1(s) +(sI-E_1)^{-1} p_1(0). \vspace{-1mm} $$ Hence $p_1$ satisfies the ODE \vspace{-2mm} \begin{equation} \label{eq:act:peqn} \dot p_1(t) = (E_1+\Pi B_1K) p_1(t). \vspace{-1mm} \end{equation} The above equation can also be derived by proving that $\dd (\Pi z_1(t)) /\dd t=\Pi(\dd z_1(t)/\dd t)$. Hence along the trajectory $[w \ \ z]^\top$, the transformed state $[p_1\ \ z_1]^\top$ satisfies \vspace{-1mm} \begin{equation} \label{eq:act:pz} \bbm{\dot p_1(t)\\ \dot z_1(t)} = \bbm{E_1+\Pi B_1 K & 0\\ B_1 K & A_1}\bbm{p_1(t)\\z_1(t)}, \vspace{-1mm} \end{equation} with $p_1(0)=w_1(0)+\Pi z_1(0)$ and $z_1(0)=[w_2(0)\ \ z(0)]^\top$. Since $E_1+\Pi B_1 K$ and $A_1$ are both exponentially stable, it follows from the feedback theory of RLSs \cite[Lemma 5.1]{WeCu:97} that $\sbm{E_1+\Pi B_1 K & 0\\ B_1 K & A_1}$ is the generator of an exponentially stable strongly continuous semigroup on $\rline^{n_1}\times X$. Hence there exist $M_1,\o>0$ such that \vspace{-1mm} \begin{equation} \label{eq:act:pzdec} \|p_1(t)\|+\|z_1(t)\| \leq M_1 e^{-\o t}(\|p_1(0)\|+\|z_1(0)\|) \FORALL t\geq0, \vspace{-1mm} \end{equation} which implies that there exist $M,\o>0$ such that \vspace{-1mm} \begin{equation} \label{eq:act:wzdec} \|w(t)\|+\|z(t)\| \leq M e^{-\o t}(\|w(0)\|+\|z(0)\|) \FORALL t\geq0. \vspace{-1mm} \end{equation} It now follows from the discussion below Definition \ref{def:stab} that $K_{cs}[w\ \ z]^\top$ is a stabilizing state feedback control law for \eqref{eq:act:ode}-\eqref{eq:act:pde}, i.e. $A_{cs}+B_{cs}K_{cs}$ is exponentially stable. We will now establish the robustness claim in the theorem. For each $\delta\in(-1,\infty)$ define $A_1^\delta$ and $A_{cs}^\delta$ similarly to $A_1$ and $A_{cs}$, but with $A+\delta A$ in place of $A$.
Then $A_1^\delta$ is the generator of an exponentially stable semigroup $\sline^\delta$ given by $\sline^\delta_t =\sline_{(1+\delta)t}$ for all $t\geq0$. For any $\l\in\cline^+$ and integer $k\geq1$, the triangular structure of $A_1$ and $C_{1\L}=\bbm{0&C_\L}$ imply that $C_{1\L}(\l-A_1)^{-k}=[\ 0 \ \ C_{\L}(\l-A)^{-k}\ ]$. Using this and the expression for $\Pi$ we get \vspace{-3mm} \begin{align} \Pi (A_1^\delta-A_1) &= \sum_{k=1}^v\sum_{j=0}^r E_{kj} F_1 C_{1\L} (\l_k-A_1)^{-1-j} (A_1^\delta-A_1)\nonumber\\ &= \delta \sum_{k=1}^v\sum_{j=0}^r E_{kj} \bbm{0 & F_1 C_\L (\l_k-A)^{-1-j}A}. \label{eq:act:piadm}\\[-5ex]\nonumber \end{align} The admissibility of $[0 \ \ C_\L]$ for $\sline$ and \eqref{eq:act:piadm} imply that $C_2=\Pi (A_1^\delta-A_1)/\delta$ is an admissible observation operator for $\sline^\delta$ and $C_{2\L}=C_2$. The regularity of the triple $(A_1^\delta,B_1,C_2)$ follows from the regularity of the triple $(A_1,B_1,C_1)$. The system \eqref{eq:act:Rode}-\eqref{eq:act:Rpde} can be written as $\dot \nu = A_{cs}^\delta \nu + B_{cs} u$, where $\nu = [w \ \ z]^\top$, and $B_{cs}$ is admissible for the semigroup generated by $A_{cs}^\delta$ \cite[Lemma 5.1]{WeCu:97}. Since $K_{cs}$ is bounded, it follows from the discussion below Definition \ref{def:stab} that for each initial state $[w(0)\ \ z(0)]^\top\in Z_{cs}$ there exists a unique state trajectory $[w \ \ z]^\top$ for \eqref{eq:act:Rode}-\eqref{eq:act:Rpde} with input $u=K_{cs}[w\ \ z]^\top$. By adapting the arguments used to derive \eqref{eq:act:pz}, we get that along this state trajectory the transformed state $[p_1=w_1+\Pi z_1\ \ z_1]^\top$ satisfies \vspace{-1mm} \begin{equation} \label{eq:act:p1z1} \bbm{\dot p_1(t)\\ \dot z_1(t)} = \bbm{E_1+\Pi B_1 K & \Pi (A_1^\delta-A_1)\\ B_1 K & A_1^\delta}\bbm{p_1(t)\\z_1(t)}, \vspace{-1mm} \end{equation} with $p_1(0)=w_1(0)+\Pi z_1(0)$ and $z_1(0)=[w_2(0)\ \ z(0)]^\top$.
Consider the RLS $\Sigma_1^\delta$ with GOs $(A_1^\delta,B_1,C_2,0)$ and transfer function $\GGG_1^\delta$ and the RLS $\Sigma_2^\delta$ with GOs $(E_1+\Pi B_1 K, \delta I_{\rline^{n_1}}, K, 0)$ and transfer function $\GGG_2^\delta$. Since $\Sigma_1^\delta$ and $\Sigma_2^\delta$ are exponentially stable, their positive feedback interconnection $\Sigma_{fb}^\delta$ is also an exponentially stable RLS if $(I-\GGG_1^\delta\GGG_2^\delta)^{-1}\in H^\infty (\Lscr(\rline^{n_1}))$ \cite[Proposition 4.6]{WeCu:97}. The exponential stability of $\Sigma_1^\delta$ implies that $\GGG_1^0\in H^\infty (\Lscr(\rline^m,\rline^{n_1}))$ and we have $\GGG_1^\delta(s)=(1+\delta)^{-1}\GGG_1^0(s(1+\delta)^{-1})$. Therefore, for $\delta$ belonging to any compact subset of $(-1,\infty)$, $\|\GGG_1^\delta\|_{ H^\infty(\Lscr(\rline^m,\rline^{n_1}))}$ can be bounded by a constant independent of $\delta$. In addition, $\lim_{\delta \to 0}\|\GGG_2^\delta\|_{H^\infty(\Lscr(\rline^{n_1},\rline^m))}=0$. Therefore $(I-\GGG_1^\delta\GGG_2^\delta )^{-1}\in H^\infty (\Lscr(\rline^{n_1}))$ for all $\delta$ sufficiently small. Consequently $\sbm{E_1+\Pi B_1 K & \Pi (A_1^\delta-A_1)\\ B_1 K & A_1^\delta}$, being the state operator of $\Sigma_{fb}^\delta$, is exponentially stable. It now follows from \eqref{eq:act:p1z1} that $[p_1\ \ z_1]^\top$ satisfies an estimate of the form \eqref{eq:act:pzdec} and so $[w\ \ z]^\top$ satisfies an estimate of the form \eqref{eq:act:wzdec}. Hence, according to the discussion below Definition \ref{def:stab}, for $\delta$ small $K_{cs}[w\ \ z]^\top$ is a stabilizing state feedback control law for \eqref{eq:act:Rode}-\eqref{eq:act:Rpde}, i.e. $A_{cs}^\delta+ B_{cs}K_{cs}$ is exponentially stable. \vspace{-1mm} \end{proof} Theorem \ref{th:act:stab} shows that Assumption \ref{as:act:stab} is sufficient for the existence of a stabilizing control law for the PDE-ODE system \eqref{eq:act:ode}-\eqref{eq:act:pde}. 
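The construction in Theorem \ref{th:act:stab} can be illustrated numerically by replacing the PDE actuator with a finite-difference discretization of a 1D diffusion equation, with the measurement taken at an interior grid point so that all operators are matrices and the admissibility and regularity requirements are trivially met. In this sketch the stable plant block $E_2$ is absent ($n_2=0$, so $A_1=A$, $B_1=B$ and $C_1=C$); the plant block $E_1$ and the remaining data are illustrative, and the sketch is only a finite-dimensional caricature of the theorem, not the infinite-dimensional argument.

```python
# Finite-difference caricature of the steps in Theorem th:act:stab:
# a discretized diffusion actuator (Dirichlet Laplacian on (0,1), input
# entering at the right boundary node, interior point measurement)
# drives a 2x2 anti-stable plant block E1. All data are illustrative.
import numpy as np
from scipy.linalg import solve_sylvester, solve_continuous_are

N = 30
h = 1.0 / (N + 1)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2          # exponentially stable
B = np.zeros((N, 1)); B[-1, 0] = 1.0 / h**2         # boundary input
C = np.zeros((1, N)); C[0, N // 2] = 1.0            # interior measurement

E1 = np.array([[0.0, 1.0], [-1.0, 0.0]])            # sigma(E1) = {i, -i}
F1 = np.array([[0.0], [1.0]])

# Step 1: solve E1 Pi = Pi A + F1 C (spectra are disjoint, so solvable).
Pi = solve_sylvester(E1, -A, F1 @ C)

# Step 2: stabilize (E1, Pi B) -- here by an LQR gain.
K = -(Pi @ B).T @ solve_continuous_are(E1, Pi @ B, np.eye(2), np.eye(1))

# Step 3: u = K w1 + K Pi z makes the cascade exponentially stable.
Acl = np.block([[E1, F1 @ C], [B @ K, A + B @ K @ Pi]])
assert np.linalg.eigvals(Acl).real.max() < 0
```

Since $p_1=w_1+\Pi z$ decouples the closed loop, the final assertion amounts to $E_1+\Pi B_1K$ being Hurwitz together with the stability of the discretized diffusion semigroup.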
The next proposition establishes that this assumption is also necessary. \vspace{-1mm} \begin{framed} \vspace{-3mm} \begin{proposition} \label{pr:act:stabcon} Consider the PDE-ODE system \eqref{eq:act:ode}-\eqref{eq:act:pde}.\! Let Assumption \ref{as:act:exp} hold. Then the pair $(A_{cs}, B_{cs})$ is stabilizable if and only if Assumption \ref{as:act:stab} holds. \end{proposition} \vspace{-3mm} \end{framed} \vspace{-7mm} \begin{proof} Suppose that Assumption \ref{as:act:stab} holds. We have shown in Theorem \ref{th:act:stab} that $(A_{cs}, B_{cs})$ is stabilizable and found $K_{cs}$ such that $A_{cs}+B_{cs}K_{cs}$ is exponentially stable. Conversely, suppose that the pair $(A_{cs},B_{cs})$ is stabilizable. If Assumption \ref{as:act:stab} does not hold, then there exists a non-zero $v\in\rline^{n_1}$ such that $v^\top E_1=\l v^\top$ for some $\l\in\sigma(E_1)$ and $v^\top F_1 \GGG(\l)=0$. It now follows from \eqref{eq:act:vPiB1} that $v^\top\Pi B_1=0$ which, via the Hautus test, implies that the pair $(E_1,\Pi B_1)$ is not stabilizable. Consequently there exists a $p_0\in \rline^{n_1}$ such that the state trajectory of \vspace{-2mm} \begin{equation} \label{eq:act:p1contr} \dot p_1(t) = E_1 p_1(t) + \Pi B_1 u(t), \qquad p_1(0)=p_0, \vspace{-2mm} \end{equation} satisfies \vspace{-2mm} \begin{equation} \label{eq:act:limp1} \liminf_{t\to\infty}\|p_1(t)\|>0 \FORALL u\in L^2([0,\infty); \rline^m). \vspace{-1mm} \end{equation} On the other hand, since the pair $(A_{cs},B_{cs})$ is stabilizable, there exists a $\tilde u\in L^2([0,\infty); \rline^m)$ such that the state trajectory $[w\ \ z]^\top$ of \eqref{eq:act:ode}-\eqref{eq:act:pde} for the input $u=\tilde u$ and initial state $w(0)=[w_1(0)\ \ w_2(0)]^\top=[p_0\ \ 0]^\top$ and $z(0)=0$ satisfies $\lim_{t\to\infty} (\|w(t)\|+\|z(t)\|) = 0$, see comment below Definition \ref{def:stab}. 
Via arguments similar to those used to derive \eqref{eq:act:peqn}, it can be shown that along this trajectory $p_1$ defined as $w_1+\Pi\,[w_2 \ \ z]^\top$ solves \eqref{eq:act:p1contr} with $u=\tilde u$. Clearly $\lim_{t\to\infty} \|p_1(t)\| = 0$ (as $w(t), z(t)$ decay to 0), which contradicts \eqref{eq:act:limp1}. So Assumption \ref{as:act:stab} must hold. \vspace{-1mm} \end{proof} The next theorem presents an observer-based stabilizing output feedback controller $\Sigma_c$ for the PDE-ODE cascade system \eqref{eq:act:ode}-\eqref{eq:act:output}. Recall that this system is a RLS, denoted as $\Sigma_{cs}$, with GOs $(A_{cs},B_{cs},C_{cs},0)$ introduced in \eqref{eq:act:cas}. \begin{assumption} \label{as:act:det} The pair $(G,E)$ is detectable. \end{assumption} From \eqref{eq:act:Edecom} and \eqref{eq:act:part} it follows that Assumption \ref{as:act:det} is equivalent to the detectability of the pair $(G_1,E_1)$ and if $E_1+L_1G_1$ is Hurwitz, then so is $E+L G$, where $L=[L_1 \ \ 0]^\top$. Recall the control law $u=K w_1+K\Pi z_1$ proposed in Theorem \ref{th:act:stab} which can be written as $u = K_1 w+K_2 z$ with $K_1\in\Lscr(\rline^n, \rline^m)$ and \vspace{-1mm} $K_2\in\Lscr(Z,\rline^m)$. \begin{framed} \vspace{-3mm} \begin{theorem} \label{th:act:det} Consider the PDE-ODE cascade system \eqref{eq:act:ode}-\eqref{eq:act:output}. Suppose that Assumptions \ref{as:act:exp}, \ref{as:act:stab} and \ref{as:act:det} hold. Let $L_1\in\Lscr(\rline^p,\rline^{n_1})$ be such that $E_1+L_1G_1$ is Hurwitz. Define $L=[L_1 \ \ 0]^\top\in\Lscr(\rline^p,\rline^n)$. Let $u= K_1 w+K_2 z$ be the stabilizing state feedback control law for \eqref{eq:act:ode}-\eqref{eq:act:pde} proposed in Theorem \ref{th:act:stab}. 
Then the quadruple of operators $(A_c,B_c,C_c,D_c)$ defined as \vspace{-1mm} $$ A_c = \bbm{E+LG & (F+LH)C_\L \\ BK_1 & A+B K_2}, \quad B_c = \bbm{ -L\\0}, \quad C_c = \bbm{K_1 & K_2}, \quad D_c =0, \vspace{-1mm}$$ are the GOs of a RLS $\Sigma_c$ with input space $\rline^p$, state space $Z_{cs}$ and output space $\rline^m$, and $\Sigma_c$ is a stabilizing output feedback controller for $\Sigma_{cs}$. For each $\delta\in(-1,\infty)$, let $\Sigma_{cs}^\delta$ be the RLS with GOs $(A_{cs}^\delta, B_{cs}, C_{cs}, 0)$, where $A_{cs}^\delta$ is defined similarly to $A_{cs}$ but with $A+\delta A$ in place of $A$. Then, for all $\delta\in\rline$ sufficiently small, $\Sigma_c$ is a stabilizing output feedback controller for $\Sigma_{cs}^\delta$. \end{theorem} \vspace{-3mm} \end{framed} \vspace{-6mm} \begin{proof} Let $A_{cs}'=\sbm{E+LG & (F+LH)C_\L \\ 0 & A}$. Since $A_{cs}'$ has the same triangular structure as $A_{cs}$ with matrices $E+LG$ and $F+LH$ in place of $E$ and $F$, we can conclude using the regularity of $(A,B,C)$ that $A_{cs}'$, like $A_{cs}$, is the generator of a semigroup on $Z_{cs}$ and $B_{cs}$ is an admissible control operator for this semigroup. This and the boundedness of $K_{cs}=[K_1\ \ K_2]$ imply, see discussion below Definition \ref{def:stab}, that $A_c=A_{cs}'+B_{cs}K_{cs}$ is the generator of a semigroup on $Z_{cs}$. Consequently, noting that $B_c$ and $C_c$ are bounded operators, we get that $(A_c,B_c,C_c,D_c)$ are the GOs of a RLS $\Sigma_c$.
This RLS can be written as follows: for $t>0$ \vspace{-1mm} \begin{align} \dot{\tilde w}(t) &= (E+LG) \tilde w(t) + (F+LH)C_\L \tilde z(t)-L\tilde u(t), \label{eq:act:obsode}\\[0.5ex] \dot{\tilde z}(t) &= (A+BK_2) \tilde z(t) + BK_1\tilde w(t), \label{eq:act:obspde}\\[0.5ex] \tilde y(t) &= K_1 \tilde w(t) + K_2 \tilde z(t), \label{eq:act:obsoutput}\\[-4ex] \nonumber \end{align} where $[\tilde w(t)\ \ \tilde z(t)]\in Z_{cs}$, $\tilde u(t)\in\rline^p$ and $\tilde y(t)\in\rline^m$ are the state, input and output. The transfer functions of $\Sigma_{cs}$ and $\Sigma_c$ are $\GGG_{cs}=C_{cs}(sI-A_{cs})^{-1}B_{cs}$ and $\GGG_c=C_c(sI-A_c)^{-1}B_c$. Since $B_c$ and $C_c$ are bounded it follows using \eqref{eq:Best} or \eqref{eq:Cest} that $\lim_{\Re s\to\infty} \|\GGG_c(s)\|_{\Lscr(\rline^p,\rline^m)}=0$. Therefore $\lim_{\Re s\to\infty} \|\GGG_c(s) \GGG_{cs}(s)\|_{\Lscr(\rline^m)}=0$ and so $I$ is an admissible feedback operator for $\GGG_c\GGG_{cs}$. Clearly, $I-D_cD_{cs}$ is invertible. Hence the positive feedback interconnection of $\Sigma_{cs}$ and $\Sigma_c$ (i.e. $\Sigma_1=\Sigma_{cs}$ and $\Sigma_2=\Sigma_c$ in Figure 1) is a RLS denoted as $\Sigma_{fb}$. Thus for each initial state $[w(0) \ \ z(0)]^\top$ of \eqref{eq:act:ode}-\eqref{eq:act:pde} and $[\tilde w(0)\ \ \tilde z(0)]^\top$ of \eqref{eq:act:obsode}-\eqref{eq:act:obspde}, there exist unique state trajectories $[w \ \ z]^\top$ of \eqref{eq:act:ode}-\eqref{eq:act:pde} and $[\tilde w\ \ \tilde z]^\top$ of \eqref{eq:act:obsode}-\eqref{eq:act:obspde} with $\tilde u=G w+HC_\L z$ and $u=K_1\tilde w+K_2\tilde z$. We will prove the exponential stability of $\Sigma_{fb}$ by showing that \vspace{-1mm} \begin{equation} \label{eq:act:plantobs} \|[w(t)\ \ z(t) \ \ \tilde w(t) \ \ \tilde z(t)]^\top \|_{Z_{cs} \times Z_{cs}} \leq Me^{-\o t} \|[w(0) \ \ z(0) \ \ \tilde w(0) \ \ \tilde z(0)]^\top\|_{Z_{cs}\times Z_{cs}} \vspace{-1mm} \end{equation} for some $M,\o>0$ and all $t\geq0$. 
Define $e_w=\tilde w-w$ and $e_z=\tilde z-z$. Then from \eqref{eq:act:ode}, \eqref{eq:act:pde}, \eqref{eq:act:obsode} and \eqref{eq:act:obspde} we get that for almost all $t\geq0$, \vspace{-2mm} \begin{equation} \label{eq:act:wzewez} \bbm{\dot {\tilde w}(t)\\ \dot {\tilde z}(t) \\ \dot e_w(t)\\ \dot e_z(t)} = \bbm{E & FC_\L & LG & LHC_\L \\ BK_1 & A+BK_2 & 0 & 0 \\ 0 & 0 & E+LG & (F+LH)C_\L\\ 0 & 0 & 0 & A} \bbm{\tilde w(t) \\ \tilde z(t) \\ e_w(t) \\ e_z(t)}. \vspace{-2mm} \end{equation} Observe that $A_{cs}+B_{cs}K_{cs}=\sbm{E & FC_\L \\ BK_1 & A+BK_2}$ is exponentially stable, see discussion below \eqref{eq:act:wzdec}, and that the exponential stability of $E+LG$ and $A$ implies that $A_{cs}'=\sbm{E+LG & (F+LH)C_\L \\ 0 & A}$ is also exponentially stable. It now follows, using \cite[Lemma 5.1]{WeCu:97}, that the semigroup generated by the state operator in \eqref{eq:act:wzewez} is exponentially stable and there exist $M_0,\o>0$ such that for all $t\geq0$, \vspace{-1mm} $$ \|[\tilde w(t)\ \ \tilde z(t) \ \ e_w(t) \ \ e_z(t)]^\top \|_{Z_{cs} \times Z_{cs}} \leq M_0e^{-\o t} \|[\tilde w(0) \ \ \tilde z(0) \ \ e_w(0) \ \ e_z(0)]^\top\|_{Z_{cs}\times Z_{cs}}. \vspace{-1mm} $$ Since $w=\tilde w-e_w$ and $z=\tilde z-e_z$, the estimate in \eqref{eq:act:plantobs} follows and therefore $\Sigma_c$ is a stabilizing output feedback controller for $\Sigma_{cs}$. Next we will establish the robustness claim in the theorem. For each $\delta\in(-1,\infty)$, using the arguments presented above \eqref{eq:act:plantobs}, we get that the feedback interconnection of $\Sigma_{cs}^\delta$ and $\Sigma_c$ is a RLS, denoted as $\Sigma_{fb}^\delta$. So for each initial state $[w(0) \ \ z(0)]^\top$ of \eqref{eq:act:Rode}-\eqref{eq:act:Rpde} and $[\tilde w(0)\ \ \tilde z(0)]^\top$ of \eqref{eq:act:obsode}-\eqref{eq:act:obspde}, there exist unique state trajectories $[w \ \ z]^\top$ of \eqref{eq:act:Rode}-\eqref{eq:act:Rpde} and $[\tilde w\ \ \tilde z]^\top$ of \eqref{eq:act:obsode}-\eqref{eq:act:obspde} with $\tilde u=G w+HC_\L z$ and $u=K_1\tilde w+K_2\tilde z$.
We will prove the exponential stability of $\Sigma_{fb}^\delta$ for small $\delta$ by proving that these state trajectories satisfy \eqref{eq:act:plantobs} for some $M,\o>0$. Let $z_a$ be the state trajectory of \vspace{-1mm} \begin{equation} \label{eq:act:auxstate} \dot z_a(t) = A z_a(t) + BK_1 \tilde w(t) + BK_2 \tilde z(t), \qquad z_a(0)=z(0). \end{equation} Write $\tilde w$ as $[\tilde w_1\ \ \tilde w_2]^\top$, where $\tilde w_1\in \rline^{n_1}$ and $\tilde w_2\in\rline^{n_2}$. Recall $K$, $\Pi$, $X$, $A_1$ and $B_1$ from Theorem \ref{th:act:stab}. Define $\tilde z_1 = [\tilde w_2\ \ \tilde z]^{\top}$, $\tilde p_1 = \tilde w_1 + \Pi \tilde z_1$, $e_w=\tilde w-w$, $e_z=\tilde z-z_a$, $q_1 = [\tilde p_1 \ \ \tilde z_1 \ \ e_w\ \ e_z]^\top$ and $q_2 = [z\ \ z_a]^\top$. Define $$ \Ascr_1=\bbm{E_1+\Pi B_1 K & 0 & L_1 G & L_1HC_\L \\ B_1 K & A_1 & 0 & 0 \\ 0 & 0 & E+LG & (F+LH)C_\L \\ 0 & 0 & 0 & A}, \qquad \Bscr_1= \bbm{L_1H\\0\\ F+LH\\0},$$ $$ \Cscr_1 = \bbm{K & 0 & 0 & 0}, \quad \Ascr_2^\delta=\bbm{A+\delta A & 0\\ 0 & A}, \quad \Bscr_2= \bbm{B\\B},\quad \Cscr_2 = \bbm{-C_\L & C_\L}.$$ Then from \eqref{eq:act:Rode}, \eqref{eq:act:Rpde}, \eqref{eq:act:obsode}, \eqref{eq:act:obspde} and \eqref{eq:act:auxstate} it follows that for almost all $t\geq0$ \begin{equation} \label{eq:act:extended} \bbm{\dot q_1(t) \\ \dot q_2(t)} = \bbm{\Ascr_1 & \Bscr_1\Cscr_2 \\ \Bscr_2\Cscr_1 & \Ascr_2^\delta} \bbm{q_1(t) \\ q_2(t)}. \end{equation} Since $\sbm{E_1+\Pi B_1 K & 0 \\ B_1 K & A_1 }$ and $\sbm{E+LG & (F+LH)C_\L \\ 0 & A}$ are exponentially stable, $\Ascr_1$ is the generator of an exponentially stable semigroup on $V=\rline^{n_1}\times X \times \rline^n\times Z$. (In fact, $\Ascr_1$ and the state operator in \eqref{eq:act:wzewez} are similar via a bounded transformation.) Clearly $\Bscr_1\in\Lscr(\rline^q,V)$ and $\Cscr_1\in\Lscr(V,\rline^m)$. From these it follows that $(\Ascr_1,\Bscr_1,\Cscr_1,0)$ are the GOs of an exponentially stable RLS $\Sigma_1$.
From the regularity of $(A,B,C)$ and Assumption \ref{as:act:exp}, it follows that $(\Ascr_2^\delta,\Bscr_2,\Cscr_2,0)$ are the GOs of an exponentially stable RLS $\Sigma_2^\delta$. The transfer function $\GGG_1$ of $\Sigma_1$ is in $H^\infty(\Lscr( \rline^q, \rline^m))$ and for all $s\in\overline{\cline^+}$, $$ \GGG_1(s)=K(sI-E_1-\Pi B_1K)^{-1}L_1[G(sI-E-LG)^{-1}(F+LH)+H]. $$ The transfer function $\GGG_2^\delta$ of $\Sigma_2^\delta$ is in $H^\infty(\Lscr(\rline^m,\rline^q))$ and for all $s\in\overline{\cline^+}$, \begin{align*} \GGG_2^\delta(s) &= C_\L(sI-A)^{-1}B - C_\L(sI-A-\delta A)^{-1}B \\ &= \delta C_\L(sI-A-\delta A)^{-1}B - \delta s C_\L(sI-A)^{-1}(sI-A-\delta A)^{-1}B. \end{align*} Since all the operators (matrices) in the expression for $\GGG_1$ are bounded, it follows that $\lim_{|s|\to\infty,\, s\in \overline{\cline^+}} \|\GGG_1(s)\|=0$. From the expression for $\GGG_2^\delta$, using \eqref{eq:Best} and \eqref{eq:Cest}, we have $\lim_{\delta\to 0}\sup_{s\in S} \|\GGG_2^\delta(s)\|=0$ for any compact subset $S$ of $\overline{\cline^+}$ and, furthermore, $\sup_{\delta\in \Delta} \|\GGG_2^\delta\|_{H^\infty}<\infty$ for any compact subset $\Delta$ of $(-1,\infty)$. Consequently, for all $\delta$ sufficiently small, $\|\GGG_1\GGG_2^\delta\|_{H^\infty}<1$ and so $(I-\GGG_1 \GGG_2^\delta)^{-1} \in H^\infty(\Lscr(\rline^m))$. Thus the positive feedback interconnection of $\Sigma_1$ and $\Sigma_2^\delta$ is an exponentially stable RLS \cite[Proposition 4.6]{WeCu:97} and its state operator is the state operator in \eqref{eq:act:extended}. So the state trajectory $[q_1\ \ q_2]$ of \eqref{eq:act:extended} converges to zero exponentially, implying that the state trajectories $[w \ \ z]^\top$ of \eqref{eq:act:Rode}-\eqref{eq:act:Rpde} and $[\tilde w\ \ \tilde z]^\top$ of \eqref{eq:act:obsode}-\eqref{eq:act:obspde} satisfy \eqref{eq:act:plantobs} for some $M,\o>0$. Hence $\Sigma_{fb}^\delta$ is exponentially stable, i.e. 
$\Sigma_c$ is a stabilizing output feedback controller for $\Sigma_{cs}^\delta$ for small $\delta$. \end{proof} The following remark discusses how the controller design techniques proposed in this section can be applied to the PDE-ODE cascade system \eqref{eq:act:ode}-\eqref{eq:act:output} when $B\in\Lscr(\rline^m, Z_{-1})$ is not an admissible control operator for $\tline$. \begin{remark} \label{rm:act:nonreg} In the PDE-ODE cascade system \eqref{eq:act:ode}-\eqref{eq:act:output}, suppose that the control operator $B\in\Lscr(\rline^m,Z_{-1})$ is not admissible for $\tline$. However, let $\GGG$ as defined in \eqref{eq:act:RLStf} exist and be bounded on $\cline^+_\o$ for each $\o>\o_\tline$. Let Assumptions \ref{as:act:exp}, \ref{as:act:stab} and \ref{as:act:det} hold. To apply Theorems \ref{th:act:stab} and \ref{th:act:det} to \eqref{eq:act:ode}-\eqref{eq:act:output}, introduce a stable first-order filter in cascade with the PDE system \eqref{eq:act:pde}, i.e. $u$ in \eqref{eq:act:pde} is obtained as follows: \vspace{-1mm} \begin{equation} \label{eq:act:Bfilt} \dot x_u(t) = -x_u(t) + v(t), \qquad u(t)=x_u(t), \vspace{-1mm} \end{equation} where $x_u(t), v(t)\in\rline^m$. Via integration by parts we get \vspace{-1mm} \begin{equation*} \label{eq:act:intp} \int_0^t \tline_{t-\tau}B x_u(\tau)\dd\tau = \tline_t A^{-1}B x_u(0) - A^{-1}Bx_u(t) - \int_0^t \tline_{t-\tau} A^{-1} B (x_u(\tau)-v(\tau)) \dd\tau. \vspace{-1mm} \end{equation*} Consider the operators $\Ascr=\sbm{A & B\\ 0 & -I}$, $\Bscr=\sbm{0\\I}$ and $\Cscr=\bbm{C_\L & 0}$. Using the above integral expression it follows that $\Ascr$ is the generator of a strongly continuous semigroup $\sline$ on $Z\times\rline^m$ defined as $\sline_t=\sbm{\tline_t &\ \ \int_0^t \tline_{t-\tau}B e^{-\tau} \dd\tau \\ 0 & e^{-t} I}$ for $t\geq0$. Since $\Bscr$ is bounded, it is an admissible control operator for $\sline$. Since $C$ is admissible for $\tline$, $\Cscr$ is an admissible observation operator for $\sline$.
Furthermore, $\Gscr(s)=\Cscr_\L(sI-\Ascr)^{-1} \Bscr=\GGG(s)/(s+1)$ and so $(\Ascr,\Bscr,\Cscr)$ is a regular triple. Consider the PDE-ODE cascade system \eqref{eq:act:ode}-\eqref{eq:act:output} along with the filter \eqref{eq:act:Bfilt}. This system can be written (with input $v$) as \vspace{-1mm} \begin{align} \dot w(t) &= E w(t) + F\Cscr_\L z_c(t), \label{eq:act:odeB} \\[0.5ex] \dot z_c(t) &= \Ascr z_c(t) + \Bscr v(t), \label{eq:act:pdeB} \\[0.5ex] y(t) &= G w(t) + H \Cscr_\L z_c(t), \label{eq:act:outputB} \end{align} where $z_c(t)=[z(t)\ \ x_u(t)]^\top$. The PDE-ODE cascade system \eqref{eq:act:odeB}-\eqref{eq:act:outputB} satisfies all the hypotheses stated at the beginning of this section. Assumptions \ref{as:act:exp}, \ref{as:act:stab} and \ref{as:act:det} also hold for it (this follows from the fact that they hold for \eqref{eq:act:ode}-\eqref{eq:act:output}). Applying Theorems \ref{th:act:stab} and \ref{th:act:det} we obtain state feedback and output feedback controllers which stabilize \eqref{eq:act:odeB}-\eqref{eq:act:outputB}. Clearly, the cascade interconnection of any of these controllers with the filter \eqref{eq:act:Bfilt} is a stabilizing controller for \eqref{eq:act:ode}-\eqref{eq:act:output} (here stabilizing means that the state trajectories of \eqref{eq:act:ode}-\eqref{eq:act:output} in $Z_{cs}$ and the state trajectories of the controller converge to zero exponentially for any initial state). It is also a stabilizing controller for the perturbed system \eqref{eq:act:Rode}-\eqref{eq:act:Rpde}, \eqref{eq:act:output} (this follows via small changes to the robustness arguments in the proofs of Theorems \ref{th:act:stab}, \ref{th:act:det}). \hfill$\square$ \end{remark} In \cite{Kri:2009} and \cite{SaGaKr:2018}, the actuator is modeled as a 1D diffusion equation with Dirichlet boundary control. This model can be written as an abstract linear system with state space $L^2(0,1)$, input space $\rline$ and output space $\rline$.
Its state, control, observation and feedthrough operators are defined as follows: $A=\frac{\partial^2}{\partial x^2}$ with $D(A)=\{f\in H^2(0,1) \big| f'(0)=0, f(1)=0\}$, $B=\delta'(1)$ (derivative of Dirac pulse at $x=1$), $Cz=z(0)$ for all $z\in D(A)$ and $D=0$. Its transfer function is $\GGG(s)=1/\cosh(\sqrt s)$. These operators satisfy the hypotheses in Remark \ref{rm:act:nonreg}. Hence for the PDE-ODE cascade systems in \cite{Kri:2009} and \cite{SaGaKr:2018}, stabilizing controllers can be designed using the approach described in the remark. Suppose \eqref{eq:act:ode} has an additional term $Ju$, i.e. the plant dynamics is governed by \begin{equation} \label{eq:act:Jode} \dot w(t) = E w(t) + FC_\L z(t) + Ju(t), \vspace{-1mm} \end{equation} where $J\in\rline^{n\times m}$. Let $[J_1 \ \ J_2]^\top$ be the partitioning of $J$ corresponding to \eqref{eq:act:Edecom}. \begin{assumption} \label{as:act:stabnotcas} $v^\top F_1\GGG(\l)+v^\top J_1\neq0$ for each eigenvalue $\l\in\sigma(E_1)$ and nonzero vector $v\in\rline^{n_1}$ satisfying $v^\top E_1=\l v^\top$. \end{assumption} \begin{remark} \label{rm:act:notcas} Theorem \ref{th:act:stab}, Proposition \ref{pr:act:stabcon} and Theorem \ref{th:act:det} continue to hold if we replace \eqref{eq:act:ode} and \eqref{eq:act:Rode} with \eqref{eq:act:Jode}, Assumption \ref{as:act:stab} with Assumption \ref{as:act:stabnotcas}, $\Pi B_1$ with $\Pi B_1+J_1$ and let $B_{cs} = \sbm{J\\ B}$, $B_1 = \sbm{J_2\\ B}$, $A_c = \sbm{E+LG+JK_1 & (F+LH)C_\L+JK_2 \\ BK_1 & A+B K_2}$. This claim can be proved easily by mimicking the proofs in this section. Hence the results in this section can be used to construct stabilizing controllers for the RLS described by \eqref{eq:act:Jode}, \eqref{eq:act:pde} and \eqref{eq:act:output}. This remark is useful when Assumption \ref{as:act:exp} does not hold, but the unstable subspace of $A$ is finite-dimensional, see Example 5.1.
\hfill $\square$ \end{remark} \section{ODE plant with PDE sensor} \label{sec4} \setcounter{equation}{0} \ \ \ Consider an ODE-PDE cascade system in which the output of the ODE system drives the PDE system. The ODE models the plant dynamics, while the PDE models the sensor dynamics. The state dynamics of the cascade system is described by the following differential equations: for $t>0$ \vspace{-2mm} \begin{align} \dot w(t) &= E w(t) + F u(t), \label{eq:sen:ode}\\[0.5ex] \dot z(t) &= A z(t) + B(G w(t)+H u(t)), \label{eq:sen:pde}\\[-4.5ex] \nonumber \end{align} where $w(t)\in\rline^n$ is the plant state, $z(t)\in Z$ is the sensor state, $Z$ is a Hilbert space, $u(t)\in\rline^m$ is the input, $E\in \rline^{n\times n}$ is as in \eqref{eq:act:Edecom}, $F\in \rline^{n\times m}$, $G\in\rline^{q\times n}$, $H\in\rline^{q\times m}$, $A$ is the generator of a strongly continuous semigroup $\tline$ on $Z$ and $B\in \Lscr(\rline^q,Z_{-1})$ is an admissible control operator for $\tline$. The admissibility assumption can be relaxed, see Remark \ref{rm:sen:nonreg}. The output $y$ of the sensor takes values in $\rline^p$ and is given by \vspace{-2mm} \begin{equation} \label{eq:sen:output} y(t) = C_\L z(t),\qquad t\geq0, \vspace{-2mm} \end{equation} where $C\in\Lscr(Z_1,\rline^p)$ is an admissible observation operator for $\tline$. We suppose that the triple $(A,B,C)$ is regular. For the PDE system (sensor), the transfer function $\GGG$ is given in \eqref{eq:act:RLStf}. The combined state space for the plant and sensor is $Z_{cs}=\rline^n\times Z$ and the state, control and observation operators for the combined dynamics (with input $u$, state $[w\ \ z]^\top$ and output $y$) are \vspace{-2.5mm} \begin{equation*} \label{eq:sen:cas} A_{cs} = \bbm{E & 0 \\ BG & A}, \qquad B_{cs} = \bbm{F\\ BH}, \qquad C_{cs} = \bbm{0 & C_\L}. 
\vspace{-2.5mm} \end{equation*} From the feedback theory for RLSs it follows that the cascade system \eqref{eq:sen:ode}-\eqref{eq:sen:output} is a RLS, denoted as $\Sigma_{cs}$, with GOs $(A_{cs}, B_{cs}, C_{cs},0)$, input space $\rline^m$, state space $Z_{cs}$ and output space $\rline^p$. In Theorem \ref{th:sen:det}, we present an observer for \eqref{eq:sen:ode}-\eqref{eq:sen:output}. This result can be extended easily to a setting in which the output \eqref{eq:sen:output} also contains a term $J w$, see Remark \ref{rm:sen:notcas}. Since the assumptions and results in this section are dual to those in Section \ref{sec3}, we will keep our discussions about them brief. \begin{assumption} \label{as:sen:exp} The semigroup $\tline$ (or equivalently $A$) is exponentially stable. \vspace{-1mm} \end{assumption} In the context of observer design for \eqref{eq:sen:ode}-\eqref{eq:sen:output}, Assumption \ref{as:sen:exp} is no more restrictive than requiring the pair $(C,A)$ to be detectable. In case this assumption does not hold and the unstable subspace of $A$ is finite-dimensional, we can combine it with the unstable subspace of $E$, redefine $A$, $B$, $C$, $E$, $F$ and $G$ suitably and work with \eqref{eq:sen:ode}-\eqref{eq:sen:output} (with redefined operators) for which Assumption \ref{as:sen:exp} holds, also see Remark \ref{rm:sen:notcas}. Recall the partitioning of $G$ in \eqref{eq:act:part}. \begin{assumption} \label{as:sen:det} $\GGG(\l)G_1 v\neq0$ for each eigenvalue $\l\in\sigma(E_1)$ and nonzero vector $v\in\rline^{n_1}$ satisfying $E_1 v=\l v$. \end{assumption} When $A$ is exponentially stable, the pair $(C_{cs},A_{cs})$ is detectable if and only if Assumption \ref{as:sen:det} holds, see Proposition \ref{pr:sen:detcon}. When $A$ is not exponentially stable, but $\GGG(\l)$ exists for each $\l\in\sigma(E_1)$, the pair $(C_{cs},A_{cs})$ is detectable if $(C,A)$ is detectable and Assumption \ref{as:sen:det} holds. 
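To see what Assumption \ref{as:sen:det} rules out, consider a minimal scalar illustration (constructed here purely for exposition, not drawn from the cited works): let the plant and sensor be \vspace{-1mm}
$$ \dot w(t) = w(t), \qquad \dot z(t) = -z(t) + G_1 w(t), \qquad y(t) = z(t), \vspace{-1mm}$$
so that $E_1=1$, $A=-1$, $B=1$, $C=1$ and $\GGG(s)=(s+1)^{-1}$. For the eigenvector $v=1$ of $E_1$ we get $\GGG(1)G_1 v=G_1/2$, so Assumption \ref{as:sen:det} holds if and only if $G_1\neq0$. When $G_1=0$ the unstable plant mode never reaches the sensor, $A_{cs}=\sbm{1 & 0\\ 0 & -1}$ and $C_{cs}=\bbm{0 & 1}$, and the pair $(C_{cs},A_{cs})$ is clearly not detectable.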
The next result follows from \cite[Lemma III.4]{NaGiWe:14}. \vspace{-1mm} \begin{framed} \vspace{-3mm} \begin{lemma} \label{pr:sen:syl} Let $\Ascr$ be the generator of an exponentially stable strongly continuous semigroup $\sline$ on a Hilbert space $X$. Let $\Escr\in\rline^{n\times n}$ be such that $\sigma(\Escr)\subset \overline{\cline^+}$. Recall the expression for $e^{-\Escr t}$ from \eqref{eq:act:matexp}. Let $\Bscr\in \Lscr(\rline^n,X_{-1})$. Then $\Pi\in\Lscr(\rline^n, X)$ defined as \vspace{-2.5mm} \begin{equation} \label{eq:sen:Pisum} \Pi = \sum_{k=1}^v\sum_{j=0}^r (\l_k-\Ascr)^{-1-j} \Bscr E_{kj} \vspace{-2.5mm} \end{equation} solves the Sylvester equation \vspace{-3mm} \begin{equation} \label{eq:sen:syl} \Pi \Escr = \Ascr \Pi + \Bscr. \vspace{-2mm} \end{equation} \end{lemma} \vspace{-4mm} \end{framed} \vspace{-5mm} \begin{proof} From the proof of Lemma III.4 in \cite{NaGiWe:14} we get that $\Pi\in\Lscr(\rline^n,X)$ defined as \vspace{-2.5mm} \begin{equation} \label{eq:sen:NGW} \Pi w = \int_0^\infty \sline_t \Bscr e^{-\Escr t}w \dd t \FORALL w\in\rline^n \vspace{-2.5mm} \end{equation} solves \eqref{eq:sen:syl}. Substituting for $e^{-\Escr t}$ from \eqref{eq:act:matexp} into \eqref{eq:sen:NGW} and then using the integral expression for the powers of the resolvent operator, it is easy to verify that $\Pi$ in \eqref{eq:sen:NGW} can equivalently be expressed via the formula in \eqref{eq:sen:Pisum}. \vspace{-1mm} \end{proof} We now present an observer for the ODE-PDE cascade system \eqref{eq:sen:ode}-\eqref{eq:sen:output}. Recall the notation $E_1$, $E_2$, $F_1$, $F_2$, $G_1$, $G_2$, $w_1$ and $w_2$ from \eqref{eq:act:Edecom}, \eqref{eq:act:part}. Define \vspace{-1mm} $z_1 = [w_2 \ \ z]^\top$. \begin{framed} \vspace{-3mm} \begin{theorem} \label{th:sen:det} Consider the cascade system \eqref{eq:sen:ode}-\eqref{eq:sen:output}. Suppose that Assumption \ref{as:sen:exp} holds.
Define \vspace{-1mm} $$ A_1 = \bbm{E_2 & 0\\ BG_2 & A}, \qquad B_1 = \bbm{0_{n_2\times q}\\ B}, \qquad C_1 = \bbm{0_{p\times n_2} & C}. \vspace{-1mm}$$ Then $A_1$ is the generator of an exponentially stable strongly continuous semigroup $\sline$ on $X=\rline^{n_2}\times Z$, the control operator $B_1\in\Lscr(\rline^q, X_{-1})$ and the observation operator $C_1\in\Lscr(X_1,\rline^p)$ are admissible for $\sline$ and the triple $(A_1,B_1,C_1)$ is regular. There exists $\Pi\in\Lscr(\rline^{n_1},X)$ such that \vspace{-1mm} \begin{equation} \label{eq:sen:sylth} \Pi E_1 w_1 = A_1\Pi w_1 + B_1G_1 w_1 \FORALL w_1\in\rline^{n_1} \end{equation} and $C_{1\L}\Pi\in\Lscr(\rline^{n_1},\rline^p)$. Suppose that Assumption \ref{as:sen:det} also holds. Then the pair $(C_{1\L}\Pi,E_1)$ is detectable. Fix $L\in \rline^{n_1\times p}$ such that $E_1+LC_{1\L}\Pi$ is Hurwitz. Let $\Pi=[\Pi_1 \ \ \Pi_2]^\top$, where $\Pi_1\in\Lscr(\rline^{n_1},\rline^{n_2})$ and $\Pi_2\in\Lscr( \rline^{n_1},Z)$. Define $\tilde L = [L \ \ \Pi_1 L]^\top$. Then \vspace{-1mm} \begin{equation} \label{eq:sen:obs} \bbm{\dot{\tilde w} \\ \dot{\tilde z}} = \bbm{E & \tilde L C_\L \\ B G & A+\Pi_2 L C_\L}\bbm{\tilde w \\ \tilde z} - \bbm{\tilde L \\ \Pi_2 L} y + \bbm{F\\B H}u \vspace{-1mm} \end{equation} is an observer for $\Sigma_{cs}$. \end{theorem} \vspace{-3mm} \end{framed} \vspace{-4mm} \begin{proof} The exponential stability of the semigroup $\sline$ generated by $A_1$ and the regularity of the triple $(A_1,B_1,C_1)$ can be established like in the proof of Theorem \ref{th:act:stab}. Since $B_1$ is admissible for $\sline$, so is $B_1G_1$. Applying Lemma \ref{pr:sen:syl} with $\Escr=E_1$, $\Ascr=A_1$ and $\Bscr=B_1G_1$, we get that there exists a $\Pi\in\Lscr(\rline^{n_1},X)$ which solves \eqref{eq:sen:sylth}. It follows from the regularity of the triple $(A_1,B_1,C_1)$ and the expression for $\Pi$ in \eqref{eq:sen:Pisum} that $C_{1\L}\Pi\in\Lscr(\rline^{n_1},\rline^p)$.
Suppose that Assumption \ref{as:sen:det} holds. Then the pair $(C_{1\L}\Pi,E_1)$ is detectable. Indeed, if not, then via the Hautus test there exists a $\l\in\sigma(E_1)$ and a non-zero $v\in\rline^{n_1}$ such that \vspace{-1mm} \begin{equation} \label{eq:sen:hau} E_1 v= \l v, \qquad C_{1\L}\Pi v=0. \vspace{-1mm} \end{equation} Choosing $w_1=v$ in \eqref{eq:sen:sylth} and then applying $C_{1\L}(\l I-A_1)^{-1}$ from the left to both sides of the resulting expression, we get using the first expression in \eqref{eq:sen:hau} and $C_{1\L}(\l I-A_1)^{-1}B_1=\GGG(\l)$ that \vspace{-1mm} \begin{equation} \label{eq:sen:hausyl} C_{1\L}\Pi v = \GGG(\l)G_1 v . \vspace{-1mm} \end{equation} Using the second expression in \eqref{eq:sen:hau} it follows from \eqref{eq:sen:hausyl} that $\GGG(\l)G_1 v =0$, which contradicts Assumption \ref{as:sen:det}. Hence the pair $(C_{1\L}\Pi,E_1)$ is detectable. Fix $L\in \rline^{n_1\times p}$ such that $E_1+LC_{1\L}\Pi$ is Hurwitz. As in the statement of the theorem, let $\Pi=[\Pi_1 \ \ \Pi_2]^\top$ and $\tilde L = [L \ \ \Pi_1 L]^\top$. Define $L_{cs}=[\tilde L\ \ \Pi_2 L]^\top\in\Lscr(\rline^p,Z_{cs})$. Since $L_{cs}$ is bounded, it is an admissible control operator for the semigroup generated by $A_{cs}$ and $\GGG_{L}(s)=C_{cs,\L} (sI-A_{cs})^{-1} L_{cs}$ exists for all $s\in\rho(A_{cs})$. From \eqref{eq:Cest}, $\lim_{\Re s\to\infty} \|\GGG_{L}(s)\|_{\Lscr(\rline^p)}=0$, which implies that $(A_{cs}, L_{cs}, C_{cs})$ is a regular triple and $I$ is an admissible feedback operator for $\GGG_{L}$. To establish that \eqref{eq:sen:obs} is an observer for $\Sigma_{cs}$, according to Definition \ref{def:det} and the discussion below it, we only need to show that $A_{cs}+L_{cs}C_{cs}$ is exponentially stable, i.e. 
for each $[e_w(0)\ \ e_z(0)]^\top\in Z_{cs}$ the state trajectory of \vspace{-2mm} \begin{equation} \label{eq:sen:ewez} \bbm{\dot e_w(t)\\ \dot e_z(t)} = \bbm{E & \tilde L C_\L \\ B G & A+\Pi_2L C_\L}\bbm{e_w(t)\\e_z(t)} \vspace{-1mm} \end{equation} satisfies the following estimate for some $M,\o>0$: \vspace{-1mm} \begin{equation} \label{eq:sen:ewezdec} \|e_w(t)\|+\|e_z(t)\| \leq M e^{-\o t}(\|e_w(0)\|+\|e_z(0)\|) \FORALL t\geq0. \end{equation} Let $e_w=[e_{w1}\ \ e_{w2}]^\top$ with $e_{w1}\in\rline^{n_1}$ and $e_{w2}\in\rline^{n_2}$. Define $e_{z1}=[e_{w2}\ \ e_z]^\top-\Pi e_{w1}$. Then along the trajectory of \eqref{eq:sen:ewez} we get that for almost all $t\geq0$ $$ \bbm{\dot e_{w1}(t)\\ \dot e_{z1}(t)} = \bbm{E_1+LC_{1\L}\Pi & L C_{1\L} \\ 0 & A_1}\bbm{e_{w1}(t) \\ e_{z1}(t)}. $$ From the exponential stability of $E_1+LC_{1\L}\Pi$ and $A_1$ and the upper triangular form of the state operator, we get that $ \|e_{w1}(t)\|+\|e_{z1}(t)\| \leq M_1 e^{-\o t}(\|e_{w1}(0)\|+\|e_{z1}(0)\|)$ for some $M_1,\o>0$ and all $t\geq0$, from which \eqref{eq:sen:ewezdec} follows. \vspace{-2mm} \end{proof} Theorem \ref{th:sen:det} shows that Assumption \ref{as:sen:det} is sufficient for the existence of an observer for the ODE-PDE system \eqref{eq:sen:ode}-\eqref{eq:sen:output}. The next proposition establishes that this assumption is also necessary. \vspace{-1mm} \begin{framed} \vspace{-3mm} \begin{proposition} \label{pr:sen:detcon} Consider the cascade system \eqref{eq:sen:ode}-\eqref{eq:sen:output}. Let Assumption \ref{as:sen:exp} hold. Then the pair $(C_{cs}, A_{cs})$ is detectable if and only if Assumption \ref{as:sen:det} holds. \end{proposition} \vspace{-3mm} \end{framed} \vspace{-5mm} \begin{proof} Suppose that Assumption \ref{as:sen:det} holds. We have shown in Theorem \ref{th:sen:det} that $(C_{cs},A_{cs})$ is detectable and found $L_{cs}$ such that $A_{cs}+L_{cs}C_{cs}$ is exponentially stable. Conversely, suppose that the pair $(C_{cs},A_{cs})$ is detectable.
If Assumption \ref{as:sen:det} does not hold, then there exists a non-zero $v\in\rline^{n_1}$ such that $E_1 v=\l v$ for some $\l\in\sigma(E_1)$ and $\GGG(\l)G_1 v=0$. It now follows from \eqref{eq:sen:hausyl} that $C_{1\L} \Pi v=0$. Define $V=[v\ \ \Pi v]^\top$. Noting that $A_{cs} = \sbm{E_1 & 0 \\ B_1G_1 & A_1}$ and $C_{cs} = \bbm{0 & C_{1\L}}$, it is easy to verify using \eqref{eq:sen:sylth} that $V\in \Dscr(A_{cs})$, $A_{cs}V=\l V$ and $C_{cs}V=0$. Hence for any $L_{cs}\in \Lscr(\rline^p,Z_{cs,-1})$ we have $(A_{cs}+L_{cs}C_{cs}) V=\l V$ which, along with $\Re\lambda\geq0$, implies that $A_{cs}+L_{cs}C_{cs}$ is not exponentially stable, which in turn contradicts the detectability of the pair $(C_{cs},A_{cs})$. Hence Assumption \ref{as:sen:det} must hold. \vspace{-1mm} \end{proof} The next remark discusses the construction of an observer for \eqref{eq:sen:ode}-\eqref{eq:sen:output} when the control operator $B\in\Lscr(\rline^q,Z_{-1})$ is not admissible for $\tline$. \begin{remark} \label{rm:sen:nonreg} In the cascade system \eqref{eq:sen:ode}-\eqref{eq:sen:output}, suppose that $B\in\Lscr(\rline^q,Z_{-1})$ is not admissible for $\tline$. However, let $\GGG$ in \eqref{eq:act:RLStf} exist and be bounded on $\cline^+_\o$ for each $\o>\o_\tline$ and let Assumptions \ref{as:sen:exp} and \ref{as:sen:det} hold. Then, via arguments similar to those used in Remark \ref{rm:act:nonreg} to show that $(\Ascr,\Bscr,\Cscr)$ is a regular triple, we can establish that $A_1$ is the generator of an exponentially stable semigroup and $(A_{cs},L_o,C_{cs})$ is a regular triple for any $L_o\in\Lscr(\rline^p,Z_{cs})$ (the role of the first-order filter in the arguments in Remark \ref{rm:act:nonreg} will be played by the ODE system in the arguments here). Clearly $B_1\in\Lscr(\rline^q, \rline^{n_2}\times Z_{-1})$ and $C_{1\L}(sI-A_1)^{-1}B_1$ (being equal to $\GGG(s)$) exists if $\Re s>\o_\tline$. 
Let $\Pi$ solve \eqref{eq:sen:sylth} and define $L_{cs}$ as in the proof of Theorem \ref{th:sen:det}. Then, as in that proof, we can show that $I$ is an admissible feedback operator for $C_{cs,\L} (sI-A_{cs})^{-1} L_{cs}$ and $A_{cs}+L_{cs}C_{cs}$ is exponentially stable, i.e. the pair $(C_{cs},A_{cs})$ is detectable. In addition, if $H=0$, then \eqref{eq:sen:obs} is an observer for \eqref{eq:sen:ode}-\eqref{eq:sen:output}. \hfill$\square$ \end{remark} The sensor model in \cite{Kri:2009} is the 1D diffusion equation described below Remark \ref{rm:act:nonreg}, and for it all the hypotheses in the above remark (including $H=0$) hold. Suppose that we modify \eqref{eq:sen:output} to include an additional term $Jw$, i.e. \vspace{-1.5mm} \begin{equation} \label{eq:sen:Joutput} y(t) = C_\L z(t) + Jw(t), \vspace{-1.5mm} \end{equation} where $J\in\rline^{p\times n}$. Let $[J_1 \ \ J_2]$ be the partitioning of $J$ corresponding to \eqref{eq:act:Edecom}. \begin{assumption} \label{as:sen:detnotcas} $C_\L(\l I -A)^{-1} B G_1 v+ J_1 v\neq0$ for each eigenvalue $\l\in\sigma(E_1)$ and nonzero vector $v\in\rline^{n_1}$ satisfying $E_1 v=\l v$. \end{assumption} \begin{remark} \label{rm:sen:notcas} Theorem \ref{th:sen:det} and Proposition \ref{pr:sen:detcon} continue to hold if we replace \eqref{eq:sen:output} with \eqref{eq:sen:Joutput} provided we replace Assumption \ref{as:sen:det} with Assumption \ref{as:sen:detnotcas}, $C_{1\L}\Pi$ with $C_{1\L}\Pi+J_1$ and let $C_{cs}=[J\ \ C]$ and $C_1=[J_2\ \ C]$ and change $\sbm{E & \tilde L C_\L \\ B G & A+\Pi_2 L C_\L}$ to $\sbm{E+\tilde L J & \tilde L C_\L \\ B G+\Pi_2 L J & A+\Pi_2 L C_\L}$. This claim can be established easily by mimicking the proofs in this section. This remark, like Remark \ref{rm:act:notcas}, is useful when Assumption \ref{as:sen:exp} does not hold, but the unstable subspace of $A$ is finite-dimensional.
In this case, if we adopt the approach of redefining operators discussed below Assumption \ref{as:sen:exp}, a $Jw$ term will typically appear in \eqref{eq:sen:output} after the redefinition. \hfill$\square$ \vspace{-2mm} \end{remark} \section{Illustrative examples} \label{sec5} \setcounter{equation}{0} \vspace{-2mm} \ \ \ In Example 5.1, we illustrate the results in Section \ref{sec3} by constructing a robust output feedback controller for stabilizing an unstable plant driven by an unstable actuator modeled as a 1D diffusion equation. In Example 5.2, we illustrate the results in Section \ref{sec4} by constructing an observer for an unstable plant with a stable sensor modeled as a 1D wave equation. \begin{example} Let the plant \eqref{eq:act:ode} and its output \eqref{eq:act:output} be determined by the matrices \vspace{-2mm} $$ E = \bbm{0 & 1\\ -1 & 0}, \quad F = \bbm{0 \\ 1}, \quad G = \bbm{1 & 0}, \quad H = 0. \vspace{-1mm} $$ Since $E$ has no stable eigenvalues, $E_1=E$ and $F_1=F$. Let the actuator dynamics be governed by the diffusion PDE \vspace{-1.5mm} \begin{align} z_t(x,t) &= z_{xx}(x,t) \FORALL x\in(0,1), \FORALL t>0,\nonumber\\ z_x(0,t) &= 0, \qquad z_x(1,t)=u(t),\label{eq:ex1:actuator} \\[-4ex]\nonumber \end{align} where the function $z(\cdot,t)$ is the state and $u(t)\in\rline$ is the input to the actuator. The plant is driven by the actuator output $z(0,t)\in \rline$. The actuator dynamics can be written as an abstract evolution equation of the form \eqref{eq:act:pde} on the state space $Z=L^2(0,1)$ with state operator $A$ defined as $A\phi= \phi_{xx}$ for all $\phi\in D(A)$, where $D(A) = \{\phi\in H^2(0,1) \big| \phi_x(0)= \phi_x(1)=0\},$ and control operator $B=\delta_1$, where $\delta_1$ is the Dirac pulse at $x=1$. The observation operator $C$ for the actuator output is defined as $C\phi=\phi(0)$ for all $\phi\in D(A)$. 
The operator $A$ has eigenvalues $\l_n=-n^2\pi^2$, $n\geq0$, with corresponding eigenfunctions $\phi_n(x)=\sqrt{2}\cos n\pi x$ for $n\geq1$, $\phi_0=1$, which form an orthonormal basis in $L^2(0,1)$ \cite[Example 2.3.7]{Cur-Zw}. Hence $A$ is a Riesz spectral operator and it generates a semigroup $\tline$ on $Z$. The admissibility of $B\in\Lscr(U,Z_{-1})$ and $C\in\Lscr(Z_1,\rline)$ for $\tline$ and the regularity of the triple $(A,B,C)$ follow from \cite{BGSW}, see also \cite[Example VI.1]{NaGiWe:14}. The actuator transfer function, see \eqref{eq:act:RLStf}, is $\GGG(s)= 1{\big /}(\sqrt{s}\sinh\sqrt{s})$ for $\Re s >0$. While $A$ is not stable, the pair $(A,B)$ is stabilizable. Indeed, $A+B K_\L$ is stable for $K$ defined as $K\phi=-\phi(1)$ for all $\phi\in D(A)$ \cite[Example VI.1]{NaGiWe:14}. Furthermore, $\GGG(\l)$ exists for each $\l\in\sigma(E_1)$ and Assumption \ref{as:act:stab} holds. It follows from the discussions below Assumptions \ref{as:act:exp} and \ref{as:act:stab} that the pair $(A_{cs},B_{cs})$ is stabilizable. The unstable subspace $Z_u$ of $A$ is the span of $\phi_0$ and its stable subspace $Z_s$ is the orthogonal complement of $\phi_0$ in $L^2(0,1)$. Since $Z_u$ is finite-dimensional, as suggested below Assumption \ref{as:act:exp}, we will combine it with the unstable subspace of $E$, redefine the operators suitably so that Assumptions \ref{as:act:exp} and \ref{as:act:stab} hold for the redefined operators and finally design a stabilizing output-feedback controller for the above interconnection using Theorem \ref{th:act:det} and Remark \ref{rm:act:notcas}. The restriction of the actuator dynamics in \eqref{eq:ex1:actuator} to $Z_u$, obtained by taking the inner product of \eqref{eq:act:pde} with $\phi_0$, is $\dot z_u(t) = u(t)$ and its restriction to $Z_s$ is $\dot z_s(t) = A_s z_s(t) + B_s u(t)$. Here $A_s$ is the restriction of $A$ to $Z_s$ and $B_s=(B-\phi_0)$.
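Several quantities in this example can be cross-checked numerically. The following plain-Python sketch (our illustration, not part of the design procedure) compares the closed form $\GGG(s)=1/(\sqrt{s}\sinh\sqrt{s})$ with the truncated spectral series $1/s+\sum_{n\geq1}2(-1)^n/(s+n^2\pi^2)$ implied by the eigendata above, recomputes $\Pi B$ from \eqref{eq:ex1:Pi} (the value $[0.019\ \ {-0.165}\ \ 0]^\top$ quoted later in this example) and checks, via the Routh-Hurwitz criterion, that $E+(\Pi B+J)K$ and $E+LG$ are Hurwitz for the gains $K$ and $L$ given later in the example.

```python
import cmath
import math

def gamma(s):
    # Closed-form actuator transfer function 1/(sqrt(s)*sinh(sqrt(s))).
    r = cmath.sqrt(s)
    return 1.0 / (r * cmath.sinh(r))

def gamma_series(s, N=5000):
    # Spectral expansion: phi_n(1)*phi_n(0) = 2*(-1)^n for n >= 1, 1 for n = 0.
    tot = 1.0 / s
    for n in range(1, N + 1):
        tot += 2.0 * (-1) ** n / (s + (n * math.pi) ** 2)
    return tot

# Pi*B via eq. (ex1:Pi): C_Lambda(sI - A)^{-1}B = Gamma(s) - 1/s at s = -i, +i.
vm = gamma(-1j) - 1.0 / (-1j)
vp = gamma(1j) - 1.0 / (1j)
PiB = [(0.5 * (1j * vm - 1j * vp)).real, (0.5 * (vm + vp)).real, 0.0]

E = [[0.0, 1.0, 0.0], [-1.0, 0.0, 1.0], [0.0, 0.0, 0.0]]
J = [0.0, 0.0, 1.0]
G = [1.0, 0.0, 0.0]
K = [2.522, -1.361, -3.273]
L = [-3.0, -1.75, -0.75]

def rank1_update(A, col, row):
    # A + col * row^T for 3x3 A.
    return [[A[i][j] + col[i] * row[j] for j in range(3)] for i in range(3)]

def is_hurwitz3(M):
    # det(sI - M) = s^3 + a2*s^2 + a1*s + a0; Routh-Hurwitz for a cubic.
    a2 = -(M[0][0] + M[1][1] + M[2][2])
    a1 = (M[1][1] * M[2][2] - M[1][2] * M[2][1]
          + M[0][0] * M[2][2] - M[0][2] * M[2][0]
          + M[0][0] * M[1][1] - M[0][1] * M[1][0])
    a0 = -(M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
           - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
           + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

print(abs(gamma(1.0) - gamma_series(1.0)))  # small truncation error
print(PiB)                                  # ≈ [0.019, -0.165, 0.0]
print(is_hurwitz3(rank1_update(E, [PiB[i] + J[i] for i in range(3)], K)),
      is_hurwitz3(rank1_update(E, L, G)))   # both True
```

For $E+LG$ the characteristic polynomial works out to $(s+0.5)(s+1)(s+1.5)$, so the gain $L$ places the observer-type poles at $-0.5$, $-1$ and $-1.5$.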
Clearly $A_s$ is exponentially stable and the regularity of the triple $(A_s,B_s,C)$ follows from the regularity of $(A,B,C)$. Combining the unstable part of the actuator dynamics with the plant dynamics, the new finite-dimensional dynamics is given by \eqref{eq:act:Jode} and output is given by \eqref{eq:act:output}, where \vspace{-2mm} $$ E = \bbm{0 & 1 & 0\\ -1 & 0 & 1 \\ 0 & 0 & 0}, \quad F = \bbm{0\\ 1\\ 0}, \quad J =\bbm{0\\ 0\\ 1}, \quad G = \bbm{1 & 0 & 0}, \quad H = 0. \vspace{-2mm} $$ This dynamics is driven by the stable part of the actuator dynamics which, after redefining $A$ and $B$ to be $A_s$ and $B_s$, is given by \eqref{eq:act:pde}. Clearly, for the redefined operators, $E_1=E$, $F_1=F$, $J_1=J$, $G_1=G$, $A_1=A$, $B_1=B$ and $C_1=C$, Assumption \ref{as:act:exp} holds and, since $(A_{cs},B_{cs})$ is stabilizable, Assumption \ref{as:act:stabnotcas} must also hold according to Proposition \ref{pr:act:stabcon} and Remark \ref{rm:act:notcas}. It is easy to verify that Assumption \ref{as:act:det} is satisfied. We will apply Theorem \ref{th:act:det}, taking into account Remark \ref{rm:act:notcas}, to design a robust stabilizing output feedback controller. In what follows, we work with the redefined operators. Using Lemma \ref{pr:act:syl}, \eqref{eq:act:matexp} and \eqref{eq:act:Pisum}, it follows after a simple calculation that \vspace{-2mm} \begin{equation} \label{eq:ex1:Pi} \Pi= \frac{1}{2}\bbm{i\\ 1 \\ 0 }C_\L (-iI-A)^{-1} + \frac{1}{2}\bbm{-i\\ 1 \\ 0}C_\L (iI-A)^{-1} \vspace{-2mm} \end{equation} solves \eqref{eq:act:sylth}. From the Riesz spectral property of $A$ we have $(\l I-A)^{-1}z=\sum_{n=1}^\infty \frac{\langle z, \phi_n \rangle}{\l-\l_n}\phi_n$ for all $z\in Z_s$ and $\l\in \rho(A)$. This series converges in $Z_1$. Hence we can compute $C(\l I-A)^{-1}z$ by applying $C$ to each term of the series. 
Using this, it follows from \eqref{eq:ex1:Pi} that \vspace{-4mm} \begin{equation} \label{eq:ex1:Piseries} \Pi z = -\sqrt{2} \sum_{n=1}^\infty \frac{\langle z, \phi_n \rangle}{1+\l_n^2}\bbm{1\\ \l_n\\0}. \vspace{-2mm} \end{equation} Noting that $C_\L(sI-A)^{-1}B=\GGG(s)-1/s$, we get from \eqref{eq:ex1:Pi} after a simple calculation that $ \Pi B = \bbm{0.019 & -0.165 & 0}^\top.$ Let $K=\bbm{2.522&-1.361&-3.273}$ and $L=\bbm{-3 & -1.75 & -0.75}^\top$ so that $E+(\Pi B+J)K$ and $E+LG$ are Hurwitz. By definition $K_1=K$ and $K_2=K\Pi$. The RLS $\Sigma_c$ with GOs $(A_c,B_c,C_c, D_c)$, where $B_c$, $C_c$ and $D_c$ are as in Theorem \ref{th:act:det} and $A_c$ is as in Remark \ref{rm:act:notcas}, is the required robust stabilizing output feedback controller. We have validated this controller by numerically implementing the closed loop of the actuator-plant cascade system and the controller. In our simulation, the initial condition for the plant is $[1\ \ 1]^\top$. All the other initial conditions are zero. To implement $K_2$, we approximate $\Pi$ by truncating the series in \eqref{eq:ex1:Piseries} after 10 terms. Figure 2 shows the plant state trajectory. \vspace{-14mm} $$\includegraphics[scale=0.9]{example1.eps}$$ \centerline{ \parbox{5.7in}{\vspace{1mm} Figure 2. The controller designed for the actuator-plant cascade system in Example 5.1 ensures that the plant state $w=[w_1\ \ w_2]^\top$ converges to zero exponentially. \vspace{-2mm}}} \end{example} \begin{example} Let the plant in \eqref{eq:sen:ode} be determined by the matrices $E$ and $F$ defined in Example 5.1. The plant output which drives the sensor is $\bar Gw$, where $\bar G=[1\ \ 0]$. Let the sensor dynamics be governed by the wave PDE \vspace{-2mm} \begin{align} \bar z_{tt}(x,t) &= \bar z_{xx}(x,t) \FORALL x\in(0,1), \FORALL t>0,\nonumber\\[0.5ex] \bar z_x(0,t) &= \bar z_t(0,t), \qquad \bar z(1,t)=\bar G w(t). \label{eq:ex2:sensor} \\[-4.8ex]\nonumber \end{align} The sensor output is $\bar z(0,t)$.
A similar sensor model is considered in \cite{Kri:2009a}, where the stabilizing term $\bar z_t(0,t)$ is a part of the observer rather than the sensor model. In both cases, the resulting observer error dynamics to be stabilized is the same. It is difficult to formulate the above sensor dynamics directly as an abstract evolution equation. Hence we introduce the transformation $z(x,t)=\bar z(x,t)-x^2 \bar G w(t)$. Then $z$ satisfies the wave PDE \vspace{-2.5mm} \begin{align} z_{tt}(x,t) &= z_{xx}(x,t)+(2\bar G-x^2\bar GE^2)w(t) - x^2\bar GEF u(t) \quad \forall x\in(0,1), \quad \forall\ t>0,\nonumber\\[0.5ex] z_x(0,t) &= z_t(0,t), \qquad z(1,t)=0, \label{eq:ex2:sensormod}\\[-4.8ex]\nonumber \end{align} which we regard as the sensor dynamics for observer design. The output of this sensor is $z(0,t)$, which is the same as $\bar z(0,t)$. The dynamics in \eqref{eq:ex2:sensor} and \eqref{eq:ex2:sensormod} are equivalent under some regularity assumptions on their solutions; such an assumption is implicit in the observer design in \cite{Kri:2009a}. For instance, for any $C^1$ input $u$, $z$ is a classical solution of \eqref{eq:ex2:sensormod} if and only if $\bar z(x,t)=z(x,t)+x^2\bar G w(t)$ is a classical solution of \eqref{eq:ex2:sensor}. Also, the mild solution of \eqref{eq:ex2:sensormod} in $Z=H^1_0(0,1)\times L^2(0,1)$ can be shown to yield a weak solution of \eqref{eq:ex2:sensor}. An observer built for the plant-sensor system by regarding \eqref{eq:ex2:sensormod} as the sensor dynamics is also an observer for the plant-sensor system in which the sensor dynamics is \eqref{eq:ex2:sensor}. To be precise, it will generate exponentially accurate estimates of $w$ and $\bar z(x,t)-x^2 \bar G w(t)$. We illustrate this below in our simulation. \vspace{-1mm} Let $G=\sbm{2\bar G\\ -\bar G E^2}$ and $H=\sbm{0\\-\bar G E F}$. Let $Z=H^1_0(0,1)\times L^2(0,1)$, where $H^1_0(0,1) = \{f\in H^1(0,1) \big| f(1)=0\}$. 
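For completeness, the computation behind \eqref{eq:ex2:sensormod} is short. For a classical solution, using $\dot w=Ew+Fu$ and hence $\ddot w=E^2w+EFu+F\dot u$, the transformation $z(x,t)=\bar z(x,t)-x^2\bar G w(t)$ gives
\begin{align*}
z_{tt} &= \bar z_{xx}-x^2\bar G E^2 w-x^2\bar G EFu-x^2\bar G F\dot u,\\
z_{xx} &= \bar z_{xx}-2\bar G w,
\end{align*}
so that $z_{tt}=z_{xx}+(2\bar G-x^2\bar G E^2)w-x^2\bar G EFu-x^2\bar G F\dot u$. For the matrices in this example $\bar G F=0$, so the $\dot u$ term vanishes and the first line of \eqref{eq:ex2:sensormod} follows. The boundary conditions transform as $z(1,t)=\bar z(1,t)-\bar G w(t)=0$ and, since $x^2$ and its derivative vanish at $x=0$, $z_x(0,t)=\bar z_x(0,t)=\bar z_t(0,t)=z_t(0,t)$.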
Define $A$ by $A\sbm{f \\ g}=\sbm{g \\ f_{xx}}$ for all $(f,g)\in D(A)$, where $D(A) = \{(f,g)\in H^2(0,1)\cap H^1_0(0,1)\times H^1_0(0,1) \big| f_x(0)= g(0)\}$. Define $B\in\Lscr(\rline^2,Z)$ by $B\sbm{a\\b}=(0,a+b x^2)\in Z$. Define $C\in\Lscr(Z,\rline)$ by $C\sbm{f \\ g}=f(0)$ for all $(f,g)\in Z$. It is well-known that $A$ generates an exponentially stable semigroup $\tline$ on $Z$. Since $B$ and $C$ are bounded, they are admissible for $\tline$ and the triple $(A,B,C)$ is regular. With these operators $A$, $B$, $C$, $G$ and $H$ the sensor dynamics \eqref{eq:ex2:sensormod} can be formulated as an abstract evolution equation of the form \eqref{eq:sen:pde} on $Z$ with output \eqref{eq:sen:output}. Next we design an observer for \eqref{eq:sen:ode}-\eqref{eq:sen:output} determined by the above operators. \vspace{-1mm} For each $s\in\overline{\cline^+}$ and $\sbm{f \\ g}\in Z$, we compute $(sI-A)^{-1}\sbm{f \\ g}$ by solving the ODE $(sI-A)\sbm{\phi \\ \psi}=\sbm{f \\ g}$ to get \vspace{-2mm} \begin{align} &(sI-A)^{-1} \bbm{f \\ g}(x)=\bbm{\phi(x) \\ \psi(x)} \nonumber \\ &= \bbm{p \cosh sx + \frac{q\sinh sx}{s} - \int_0^x \frac{\sinh s(x-y)}{s}[sf(y)+g(y)]\dd y \\ p s\cosh sx + q\sinh sx - \int_0^x \sinh s(x-y)[sf(y)+g(y)]\dd y -f(x)}, \label{eq:ex2:sI-A} \\[-5ex]\nonumber \end{align} where $p$ and $q$ are such that $\phi_x(0)=\psi(0)$ and $\psi(1)=0$. From this expression we get that the sensor transfer function, see \eqref{eq:act:RLStf}, is given by \vspace{-2mm} $$ \GGG(s)= \bbm{\dfrac{\cosh s -1}{s^2(\sinh s+ \cosh s)} & \dfrac{2\cosh s -2-s^2}{s^4(\sinh s+ \cosh s)}} \FORALL s\in \overline{\cline^+}.\vspace{-2mm} $$ We have $E_1=E$ and so $G_1=G$. It is easy to see that Assumptions \ref{as:sen:exp} and \ref{as:sen:det} hold. Using \eqref{eq:sen:Pisum} and \eqref{eq:ex2:sI-A}, it follows after a lengthy calculation that $ \Pi= \bbm{\cos(x-1)-x^2 & \sin(x-1) \\ -\sin(x-1) & \cos(x-1)-x^2} $ solves \eqref{eq:sen:sylth}.
Clearly $C\Pi = \bbm{\cos 1 & -\sin 1}$. Let $L=\bbm{-2.462 & 1.984}^\top$ so that $E+LC\Pi$ is Hurwitz. Then \eqref{eq:sen:obs} (with $\tilde L = L$, $\Pi_2=\Pi$) is an observer for the plant-sensor system with the sensor dynamics in \eqref{eq:ex2:sensormod}. We have validated this observer numerically on the plant-sensor system in which the sensor dynamics is governed by \eqref{eq:ex2:sensor}. In our simulation, $u(t)=\sin 5t$ in \eqref{eq:sen:ode}. The initial condition for the plant is $[-1\ \ 2]^\top$. All other initial conditions are zero. Figure 3 shows the estimation error in the plant state. \vspace{-6mm} $$\includegraphics[scale=0.9]{example2.eps}$$ \centerline{ \parbox{5.7in}{\vspace{1mm} Figure 3. The errors $e_1=w_1-\hat w_1$ and $e_2=w_2-\hat w_2$ between the plant state and its estimate generated by the observer converge to zero exponentially. \vspace{1.5mm}}} \end{example} \begin{remark} \label{rm:sylsol} A key step in the controller/observer design approach presented in this work is solving a Sylvester equation with unbounded operators for $\Pi$ and then computing $\Pi B_1$ (for controller design) or $C_{1\L} \Pi$ (for observer design). These operators can be constructed by first computing the resolvent $(\l I-A_1)^{-1}$ for each $\l\in\sigma(E_1)$ (this follows from the expressions in \eqref{eq:act:Pisum} and \eqref{eq:sen:Pisum}). Also, using the resolvent $(\l I-A)^{-1}$ for each $\l\in\sigma(E_1)$, we can verify the solvability of the stabilization and estimation problems. When the PDE is a 1D system with constant parameters, the resolvent can be computed easily by solving a linear ODE with constant coefficients, as in Example 5.2. Developing numerical techniques for computing the resolvent and the operators $\Pi$, $\Pi B_1$ and $C_{1\L} \Pi$ for higher-dimensional PDEs and PDEs with spatially-varying coefficients is a topic for future research.
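As a minimal instance of the computations just described, the observer design of Example 5.2 can be sanity-checked in a few lines of plain Python (our illustration; the numbers are taken from the example above):

```python
import math

# C*Pi is the first row of Pi evaluated at x = 0:
# [cos(0-1) - 0^2, sin(0-1)] = [cos 1, -sin 1].
CPi = [math.cos(1.0), -math.sin(1.0)]

E = [[0.0, 1.0], [-1.0, 0.0]]
L = [-2.462, 1.984]

# M = E + L*(C*Pi); a 2x2 matrix is Hurwitz iff trace < 0 < det.
M = [[E[i][j] + L[i] * CPi[j] for j in range(2)] for i in range(2)]
trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace, det)  # ≈ -3 and ≈ 2, i.e. eigenvalues of E + L*C*Pi near -1, -2
```

The gain $L$ quoted in the example thus (up to rounding) places the error-dynamics poles at $-1$ and $-2$.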
\hfill$\square$ \vspace{-2mm} \end{remark} \section{Conclusions and future work} \label{sec6} \setcounter{equation}{0} \vspace{-1mm} \ \ \ We have presented a Sylvester equation based framework for stabilizing PDE-ODE cascade systems and constructing observers for ODE-PDE cascade systems. Using this framework we can solve the PDE-ODE stabilization and ODE-PDE estimation problems for several PDE models, which have been solved in the literature via the backstepping approach. To be specific, applying Theorem \ref{th:act:stab} we can solve the robust state feedback PDE-ODE stabilization problem considered in \cite{KrSm:2008} for a transport equation, in \cite{Kri:2009} for a diffusion equation (see also Remark \ref{rm:act:nonreg}) and in \cite{Kri:2009a} and \cite{LiKr:2010} for a wave equation. We remark that in the case of the wave equation, we must first stabilize it using the control law in \cite{SmKr:2009} and then apply Theorem \ref{th:act:stab}. Using Theorem \ref{th:act:det}, we can solve the robust output feedback PDE-ODE stabilization problems considered in \cite{SaGaKr:2018} for a transport equation and a diffusion equation. Finally applying Theorem \ref{th:sen:det}, we can solve the ODE-PDE estimation problem considered in \cite{KrSm:2008} for a transport equation, in \cite{Kri:2009} for a diffusion equation (see also Remark \ref{rm:sen:nonreg}) and in \cite{Kri:2009a} and \cite{LiKr:2010} for a wave equation (see Example 5.2). In the case of Neumann interconnections considered in \cite{SuKr:2010}, we can recover some of the results. We can solve the state feedback PDE-ODE stabilization problem considered in \cite{SuKr:2010} for a wave equation by first stabilizing the wave equation via boundary damping and then using Theorem \ref{th:act:stab}. 
However, the interconnections in \cite{SuKr:2010} containing heat equations cannot be studied in the framework of this paper because in their formulation as an abstract evolution equation, the control and observation operators are not admissible for the semigroup generated by the state operator. It may be possible to circumvent this admissibility problem by introducing two stable first-order filters in the spirit of Remarks \ref{rm:act:nonreg} and \ref{rm:sen:notcas}. Note that such admissibility problems and transformations like the one used in Example 5.2 are not discussed in the backstepping literature since they implicitly work only with smooth solutions. We have also presented simple necessary and sufficient conditions for ascertaining the solvability of the stabilization problem for PDE-ODE cascade systems and estimation problem for ODE-PDE cascade systems. To use these conditions, it is enough to find the value of the transfer function of the PDE system at the unstable eigenvalues of the ODE system. The results in this work, unlike the backstepping results, apply to interconnections containing multi-input multi-output systems, higher-dimensional PDEs and PDEs with spatially-varying coefficients. An important direction for future work is developing an abstract framework, similar to the one in this paper, for studying stabilization problems for coupled PDE-ODE systems. The motivation for this comes from the backstepping works on coupled PDE-ODE systems such as \cite{DeGeKe:18a}, \cite{TaXi:2011}, in which these systems are transformed into PDE-ODE cascade systems. Understanding the transformations they propose in an abstract setting will permit us to develop stabilizing controllers for a class of coupled PDE-ODE systems. Another direction for future research is using the Sylvester equation based approach for adaptive control of PDE-ODE cascade systems. \vspace{-4mm} \input{references} \end{document}
\twocolumn[ \begin{@twocolumnfalse} \noindent\LARGE{\textbf{Thermotropic interface and core relaxation dynamics of \mbox{liquid crystals} in silica glass nanochannels: A dielectric spectroscopy study}} \vspace{1.6cm} \noindent\large{\textbf{Silvia Ca{\l}us,\textit{$^{a}$} Lech Borowik,\textit{$^{a}$} Andriy V. Kityk,$^{\ast}$\textit{$^{a}$} Manfred Eich, \textit{$^{b}$} Mark Busch\textit{$^{c}$} and Patrick Huber$^{\ast}$\textit{$^{c}$}}}\vspace{0.5cm} \noindent \normalsize{We report dielectric relaxation spectroscopy experiments on two rod-like liquid crystals of the cyanobiphenyl family (5CB and 6CB) confined in tubular nanochannels with 7 nm radius and 340 micrometer length in a monolithic, mesoporous silica membrane. The measurements were performed on composites for two distinct regimes of fractional filling: monolayer coverage at the pore walls and complete filling of the pores. For the layer coverage a slow surface relaxation dominates the dielectric properties.
For the entirely filled channels the dielectric spectra are governed by two thermally-activated relaxation processes with considerably different relaxation rates: A slow relaxation in the interface layer next to the channel walls and a fast relaxation in the core region of the channel filling. The strengths and characteristic frequencies of both relaxation processes have been extracted and analysed as a function of temperature. Whereas the temperature dependence of the static capacitance reflects the effective (average) molecular ordering over the pore volume and is well described within a Landau-de Gennes theory, the extracted relaxation strengths of the slow and fast relaxation processes provide an access to distinct local molecular ordering mechanisms. The order parameter in the core region exhibits a bulk-like behaviour with a strong increase in the nematic ordering just below the paranematic-to-nematic transition temperature $T_{PN}$ and subsequent saturation during cooling. By contrast, the surface ordering evolves continuously with a kink near $T_{PN}$. A comparison of the thermotropic behaviour of the monolayer with the complete filling reveals that the molecular order in the core region of the pore filling affects the order of the peripheral molecular layers at the wall. 
} \vspace{0.5cm} \end{@twocolumnfalse} ] \section{Introduction} \footnotetext{\textit{$^{a}$~Faculty of Electrical Engineering, Czestochowa University of Technology, 42-200 Czestochowa, Poland, E-mail: andriy.kityk@univie.ac.at}} \footnotetext{\textit{$^{b}$~Institute of Optical and Electronic Materials, Hamburg University of Technology (TUHH), D-21073 Hamburg-Harburg, Germany}} \footnotetext{\textit{$^{c}$~Institute of Materials Physics and Technology, Hamburg University of Technology (TUHH), D-21073 Hamburg-Harburg, Germany, E-mail: patrick.huber@tuhh.de}} Composites of liquid crystals (LCs) and monolithic, mesoporous solids, optically transparent porous silica in particular, are promising hybrid materials for organic electronics and the emerging field of nano photonics \cite{Martin1994, Schmidt-Mende2001, Bisoyi2011, Abdulhalim2012, Duran2012, Kumar2014}. They can be easily prepared by melt infiltration, profit from the mechanical stability of the porous solid template and from the large variation in electrical and optical properties offered by the plethora of different liquid crystalline/mesoporous host combinations nowadays available. Moreover, they allow one to explore the thermodynamics, structure and transport characteristics of liquid crystalline systems in restricted geometries, and thus phenomenologies which are of high interest both in nanoscience and nanotechnology \cite{Huber2015}. Most prominently, the paranematic phase has been widely explored in theoretical \cite{Sheng1976, Poniewierski1987, Kutnjak2003, Kutnjak2004, Karjalainen2013, Karjalainen2015} and experimental\cite{Yokoyama1988, Bellini1992, Kralj1998, Kityk2008, Schoenhals2010, Grigoriadis2011, Calus2014} studies at planar surfaces and in porous media. It is characterised by a residual nematic order and thus the absence of a ''true'' isotropic liquid state at high temperatures. 
The evolution of the orientational order parameter from this pre-ordered state to the nematic phase can be described by a ''nematic ordering field'', $\sigma$ within a Landau-de-Gennes free energy approach \cite{Sheng1976, Poniewierski1987, Kutnjak2003, Kutnjak2004}. The strong first order \textit{I-N} transition is replaced by a weak first order or continuous paranematic-to-nematic (\textit{P-N}) transition at a temperature $T_{PN}$ and may also be accompanied by pre-transitional phenomena in the molecular orientational distribution \cite{Eich1984}. For tubular pore geometry, this effective field is strongly dependent both on the average pore radius $R$ ($\propto R^{-1}$) and on the LC-wall interaction. Optical polarimetry \cite{Kityk2008,Kityk2010,Calus2012,Calus2014, Huber2015} provides arguably the most accurate insights in the order parameter behaviour in the vicinity of the paranematic-to-nematic transition \cite{Kutnjak2003,Kutnjak2004}. Note, however, that this technique and most other experimental methods probe an effective (averaged) molecular ordering. By contrast computer simulations on LCs in thin film and pore geometry can give spatially resolved information \cite{Gruhn1997, Gruhn1998, Barmes2004, Care2005, Binder2008, Ji2009, Ji2009b, Mazza2010, Pizzirusso2012, Roscioni2013,Karjalainen2013, Cetinkaya2013, Schulz2014, Karjalainen2013, Busselez2014, Karjalainen2015}. These studies indicate pronounced spatial heterogeneities, encompassing interface-induced molecular layering and radial gradients both in the orientational order and reorientational dynamics \cite{Li2009, Mazza2010}. 
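For orientation, the Landau-de Gennes description invoked above can be summarised by one commonly used scalar free energy density (the form below is generic; the coefficients are illustrative and not the parametrisation fitted later),
\begin{equation*}
f(Q)=a\,(T-T^{*})\,Q^{2}-b\,Q^{3}+c\,Q^{4}-\sigma Q, \qquad a,b,c>0,
\end{equation*}
where $Q$ is the scalar orientational order parameter, $T^{*}$ the supercooling limit of the isotropic phase and $\sigma$ the nematic ordering field; for tubular pores $\sigma\propto R^{-1}$, as noted above. For $\sigma=0$ minimisation of $f$ yields the strongly first-order bulk \textit{I-N} transition, whereas a sufficiently large $\sigma$ removes the discontinuity and leaves a residual paranematic order $Q>0$ at all temperatures.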
\begin{figure}[tbp] \center \epsfig{file=structure.EPS, angle=0, width=0.9\columnwidth}\caption{Sketch of the capacitor geometry for the dielectric relaxation experiments: Gold electrodes were evaporated onto the mesoporous silica membrane, so that the sample forms a simple parallel circuit both in the regime of monomolecular layer coverage (a) and for the completely filled channels (b). (c) Illustration of two regions with distinct molecular mobilities for completely filled nanochannels: the interface layer next to the channel wall with a slow dipolar relaxation and the core region characterised by fast relaxation dynamics.} \label{fig1} \end{figure} Dielectric relaxation spectroscopy has proven particularly suitable for obtaining such local information on confined LCs \cite{Cramer1997, Hourri2001, Frunza2001, Leys2005, Sinha2005, Leys2008, Frunza2008, Bras2008, Jasiurkowska2012, Calus2015}. A number of studies on LCs in porous media clearly document that the rate of dipolar relaxations (and thus orientational and translational mobility) usually differs significantly between the molecules in the pore wall proximity and the ones in the channel centre \cite{Cramer1997, Hourri2001, Frunza2001, Leys2005, Sinha2005, Leys2008, Frunza2008, Bras2008, Jasiurkowska2012}. In dielectric studies Aliev \textit{et~al.~} \cite{Sinha1997, Sinha1998,Aliev2005} found a slow surface mobility in comparison to the dynamics in the pore centre for liquid crystals confined in tortuous and tubular mesopores. Moreover, a significant broadening of the dielectric spectra was observed and traced by the authors to inhomogeneous couplings of the molecules to the pore walls and coupling variations among the molecules themselves \cite{Aliev2010}. It turned out that the rate of dipolar relaxation differs by about two orders of magnitude between the molecules located at the host-guest interface and the ones located in the core region of the pore filling, see sketch in Fig.~1c.
Therefore, both relaxation processes can be discriminated and the corresponding dipolar relaxation strengths then provide access to local molecular ordering. Employing this distinct dynamical behaviour, we could recently show for 7CB, a prominent member of the rod-like cyanobiphenyl nematogens, that the ordering in the core region is reminiscent of the bulk behaviour \cite{Calus2015}. It is characterised by an abrupt increase in the nematic ordering in the transition region. By contrast, the surface ordering exhibits a continuous thermotropic evolution of nematic order with a gradual change in slope and an asymptotic vanishing in the paranematic phase. In this article we extend our previous dielectric spectroscopy studies to two other members of the cyanobiphenyl family, i.e. $n$CB ($n$=5,6) embedded in silica membranes with parallel aligned nanochannels. Overall, we find a very similar behaviour of the effective average as well as local order parameters as in the case of 7CB. As we will outline below, it is again possible to describe the experimental observations in a phenomenological manner by applying a Landau-de Gennes model. The analogous findings and semi-quantitative descriptions document that the dielectric spectroscopy technique along with the phenomenological approach and key physical principles outlined in Ref. \cite{Calus2015} seems to be applicable to confined nematics in general. \section{Experimental} The nematic LCs 5CB and 6CB have been purchased from Merck, Germany. The porous silica ($p$SiO$_2$) membranes were obtained by electrochemical anodic etching of highly $p$-doped $\langle$100$\rangle$ silicon wafers which have been subjected to thermal oxidation for 12 h at $T$=800 $^o$C under standard atmosphere. The resulting array of channels is aligned along the [100] crystallographic direction, i.e. perpendicular to the membrane surface.
The average channel radius is $R=6.6 \pm$0.5 nm (porosity $P=55 \pm$2\%) as determined by recording volumetric N$_2$-sorption isotherms at $T$=77~K. For the dielectric measurements, gold electrodes have been deposited onto the porous membrane. All measurements were performed on samples cut from a monolithic porous membrane of $d=$340 $\mu$m thickness. Two samples with electrode area of 97 mm$^2$ (geometric capacitance $C_0=\varepsilon_0 S/d=$ 2.52 pF) and 104 mm$^2$ ($C_0=$ 2.71 pF) have been filled by nematic LCs 5CB and 6CB, respectively. The entirely filled samples (fractional filling $f=$1.0) have been obtained by capillary imbibition of the LC melt \cite{Gruener2011}. To prepare the partially filled samples with layer coverage we immersed them into a binary LC/cyclohexane solution (5 vol. \%) for about 20-30 minutes. After evaporation of the high-vapour-pressure solvent (cyclohexane), the low-vapour-pressure LC remained in the porous matrix. The final filling fraction $f$ and the complete evaporation of the solvent have been verified by comparing the sample weight before and during the adsorption procedure \cite{Huber2013}. The obtained filling fractions $f=0.16\pm0.01$ (for 5CB in $p$SiO$_2$) and $f=0.18\pm0.01$ (for 6CB in $p$SiO$_2$) correspond approximately to one monomolecular layer. \begin{figure*}[htbp] \center \epsfig{file=Disperse.EPS, angle=0,width=1.5\columnwidth} \caption{\footnotesize Frequency dispersion of the imaginary capacitance $C''(\nu)$ in 5CB-$p$SiO$_2$ and 6CB-$p$SiO$_2$ nanocomposites for five selected temperatures. Panels (a) and (c) correspond to partially filled matrices 5CB-$p$SiO$_2$ ($f=0.16$) and 6CB-$p$SiO$_2$ ($f=0.18$), respectively, in the regime of monomolecular layer coverage. Panels (b) and (d) correspond to nanocomposites with entirely filled pores ($f=1.0$). Insets in panels (a)-(d) display Cole-Cole plots for three selected temperatures. Symbols in the panels and insets are the measured data points.
Solid lines are the best fits based on the Cole-Cole relaxation model (see Eq.(1)). The geometric capacitance, $C_0$, equals 2.52 pF (5CB-$p$SiO$_2$) and 2.71 pF (6CB-$p$SiO$_2$).} \label{fig2} \end{figure*} Dielectric spectra have been recorded in the frequency range from 0.5 kHz to 15 MHz using the impedance/gain-phase analyser Solartron-1260A. The measurements have been performed at selected temperatures between 292~K and 324~K and controlled with an accuracy of 0.01~K by a LakeShore 340 temperature controller. Note that both the entirely filled samples and the partially filled samples with monolayer coverage form simple parallel electrical circuits, as schematically sketched in Figs.~1a and 1b, respectively. The advantage of the simple geometry is evident: Since the channels are all arranged parallel to the external electric field, the effective complex permittivity of the composite, $\varepsilon^*$, is given by the permittivities of the components $\varepsilon^*_{SiO_2}$ = 3.8 (silica substrate), $\varepsilon^*_{nCB}$ (nematic LC) and $\varepsilon_{v}$ = 1 (vacuum permittivity) weighted with the corresponding volume fractions: $\varepsilon^* = (1-P)\varepsilon^*_{SiO_2} + P [f\varepsilon^*_{nCB} + (1-f)\varepsilon_{v}]$. In the frequency range 0.0005-15 MHz $\varepsilon^*_{SiO_2}\approx \varepsilon'_{SiO_2}$ and it does not exhibit any frequency dispersion. Thus, the relaxation behaviour observed in the effective permittivity, $\varepsilon^*(\nu)$, corresponds to that of the confined LC. For more complicated pore geometries, e.g. for partial fillings in the capillary condensation regime, depolarisation effects may lead to phase shifts of the internal electric field compared to the external field, which considerably complicates the analysis -- see also Ref.~\cite{Calus2015}. Of course, the channel walls of the present matrix are rough, and the radius is likely to vary along the long channel axis, which may result in deviations from the ideal parallel geometry.
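The geometric quantities quoted above are easy to cross-check. A short Python sketch (our illustration; the LC permittivity value in the example call is an arbitrary placeholder):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def geometric_capacitance(area_mm2, thickness_um):
    # C0 = eps0 * S / d for the parallel-plate membrane geometry.
    return EPS0 * (area_mm2 * 1e-6) / (thickness_um * 1e-6)

c0_5cb = geometric_capacitance(97.0, 340.0)   # ≈ 2.5e-12 F, cf. 2.52 pF above
c0_6cb = geometric_capacitance(104.0, 340.0)  # ≈ 2.7e-12 F, cf. 2.71 pF above

def eps_effective(eps_lc, P=0.55, f=1.0, eps_sio2=3.8, eps_vac=1.0):
    # Parallel-circuit mixing rule for channels aligned with the field.
    return (1.0 - P) * eps_sio2 + P * (f * eps_lc + (1.0 - f) * eps_vac)

print(c0_5cb, c0_6cb, eps_effective(10.0))
```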
Nevertheless, the parallel-circuit assumption is expected to be a good approximation. \section{Results and discussion} \subsection{Molecular mobility probed by dielectric spectroscopy} In Fig.~2 we display the frequency dispersion of the imaginary capacitance $C''(\nu)=\varepsilon''(\nu)C_0$ (symbols) at five selected temperatures, $T$, for entirely and partially filled nanoporous silica membranes. In order to analyse the observed relaxation behaviour in more detail, we resort to a representation of the dielectric data in so-called Cole-Cole diagrams, i.e. we plot the imaginary part $C''(\nu)$ on the vertical axis and the real part $C'(\nu)$ on the horizontal axis with the frequency $\nu$ as the independent parameter, see insets in Fig.~2. In such a diagram, a material with a single relaxation frequency, as typical of the classical Debye relaxator, appears as a semicircle with its center lying on the horizontal axis ($C''=0$) and the peak of the loss factor occurring at the angular frequency 1/$\tau$, where $\tau$ is a measure of the mobility of the molecules (dipoles). It characterises the time required for the polarisation of a system aligned by an electric field to decay to $1/e$ of its initial value after the field is removed (or the time required for the dipoles to become oriented in an electric field). A material with multiple relaxation frequencies appears as a semicircle (symmetric distribution) or an arc (nonsymmetric distribution) with its center lying below the horizontal axis. In that sense, the Cole-Cole representation allows one to geometrically illustrate and analyse the relaxation behaviour of a given system in a quite simple manner.
\cite{Cole1941, Kremer2002} An analysis of the experimental data shows that the dielectric dispersion of the complex capacitance $C^*(\omega)$ can be well described by two Cole-Cole processes: \begin{eqnarray} C^*(\omega)&=&\varepsilon^*(\omega) C_0=\nonumber \\ &=&C_{\infty}+\frac{\Delta C_1}{1+(\textrm{i}\omega \tau_1)^{1-\alpha_1}}+\frac{\Delta C_2}{1+(\textrm{i}\omega \tau_2)^{1-\alpha_2}}, \quad \label{eq1} \end{eqnarray} where $\omega=2\pi\nu$ is the angular frequency, $C_{\infty}=\varepsilon_{\infty}C_0$ is the high-frequency limit of the capacitance expressed via the high-frequency permittivity $\varepsilon_{\infty}$, $\Delta C_1=\Delta \varepsilon_1 C_0$ and $\Delta C_2=\Delta \varepsilon_2 C_0$ are the capacitance relaxation strengths expressed via the dielectric relaxation strengths $\Delta\varepsilon_1$ and $\Delta \varepsilon_2$ of the slow process I and the fast process II, respectively, and $\tau_1$ and $\tau_2$ are the mean relaxation times of the corresponding processes. The Cole-Cole exponents $\alpha_1$ and $\alpha_2$ ($0\le\alpha_1,\alpha_2<1$) characterise the distribution of the relaxation times; the limit $\alpha_i \rightarrow 0$ corresponds to a Debye process with a single relaxation time. Solid lines in Fig.~2 are the best fits as obtained by a simultaneous analysis of the measured real and imaginary capacitance based on Eq.~(1). The deviations of the experimental data points from the fitting curves at low frequencies originate in ionic dc-conductivity. For this reason the low-frequency region was always excluded from the fitting analysis. This fitting procedure has been applied to the measured dispersion curves in the entire temperature range. In the case of the entirely filled samples, 5CB-$p$SiO$_2$ and 6CB-$p$SiO$_2$ ($f=1.0$), uncertainties regarding the extracted fit parameters can occur at higher temperatures, particularly above the paranematic-to-nematic transition point, $T_{PN}$.
Then the maximum of the imaginary part, $C''(\nu)$, shifts beyond the upper limit of the frequency window ($\nu_m>$15 MHz) employed in the dielectric measurements, so that only a part of the left wing of the relaxation band is observed. Fortunately, this problem can be resolved by following the route described in detail in Ref.~\cite{Calus2015}, where one takes into account that the capacitance relaxation strength $\Delta C_2$ can alternatively be determined by employing Eq.~(1) in its static limit ($\nu \rightarrow 0$): \begin{equation} \Delta C_2 =C_{\mathrm{st}}-C_{\infty}-\Delta C_1. \label{eq2} \end{equation} Here the static capacitance, $C_{\mathrm{st}}$, is determined directly from the Cole-Cole plot by its extrapolation to the low-frequency region. Obviously, $C_{\mathrm{st}}$ is practically independent of the other extracted fit parameters. $C_{\infty}$, on the other hand, is strongly dominated by the electronic polarisability, which is only weakly temperature dependent. Estimations show that its contribution to the temperature changes of the static capacitance, similarly as in the case of 7CB-$p$SiO$_2$ \cite{Calus2015}, does not exceed 5\% in the entire $T$-range, i.e. it remains within the error of the fitting analysis. In the fitting procedure the best parameter sets have been determined by a minimisation of the difference between the $\Delta C_2$ values extracted in the two alternative ways: (i) direct fits of the measured dispersions $C'(\nu)$ and $C''(\nu)$ and (ii) extraction of the $C_{\mathrm{st}}$-value from the Cole-Cole plots and subsequent application of Eq.~(2). The combination of these procedures ensures the unambiguity of the extracted fit parameters. The corresponding temperature dependences are displayed in Figs.~3-6. The interpretation of the dispersion curves is similar to that reported in Ref.~\cite{Calus2015} as well as in a series of previous dielectric studies \cite{Hourri2001,Leys2005,Sinha2005,Leys2008,Bras2008,Kityk2014,Wallacher2004b}.
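The two-process Cole-Cole model of Eq.~(1) and the static-limit relation of Eq.~(2) can be sketched as follows. All parameter values in this snippet are hypothetical placeholders, not the fitted values of the present study.

```python
import numpy as np

def cole_cole(nu, C_inf, dC1, tau1, a1, dC2, tau2, a2):
    """Complex capacitance C*(omega) of Eq. (1): two Cole-Cole processes
    on top of the high-frequency limit C_inf (frequencies nu in Hz)."""
    w = 2.0 * np.pi * np.asarray(nu, dtype=float)
    term1 = dC1 / (1.0 + (1j * w * tau1) ** (1.0 - a1))
    term2 = dC2 / (1.0 + (1j * w * tau2) ** (1.0 - a2))
    return C_inf + term1 + term2

# Hypothetical parameter set (capacitances in pF, relaxation times in s).
p = dict(C_inf=3.0, dC1=2.0, tau1=1e-4, a1=0.45, dC2=1.5, tau2=1e-7, a2=0.10)

# In the static limit (nu -> 0) Eq. (1) gives C_st = C_inf + dC1 + dC2,
# which is exactly the content of Eq. (2): dC2 = C_st - C_inf - dC1.
C_st = cole_cole(1e-6, **p).real
dC2_from_static_limit = C_st - p["C_inf"] - p["dC1"]
```

Evaluating the model at a quasi-static frequency recovers $\Delta C_2$ from Eq.~(2) to within numerical precision, which mirrors the consistency check used in the fitting procedure.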
For the entirely filled samples ($f=1$) the slow relaxation corresponds to the rotational dynamics of the molecules in direct contact with the pore walls (the so-called surface or, more strictly speaking, interface relaxation), see sketch in Fig.~1c. The fast relaxation process originates in the molecular rotational dynamics in the core region (the so-called core relaxation). It is obvious that for the partially filled samples, i.e. in the regime of monomolecular layer coverage, the dielectric relaxation is strongly dominated by the surface relaxation. Nevertheless, a weak contribution of the fast relaxation is also present. Its presence is indicated by the slightly asymmetric shape of the Cole-Cole plots observed at higher temperatures. The corresponding relaxation strength, $\Delta C_2$ (see Fig.~6, $f=0.16$ (5CB) and $f=0.18$ (6CB)), decreases with decreasing temperature and practically vanishes below 290~K. We believe that due to increasing intermolecular spacings at higher temperatures some molecules are pushed into the next molecular layer, resulting in a considerably faster dipolar relaxation. The ``fast'' relaxation process in the monomolecular layer regime, on the other hand, differs qualitatively from the one observed in the core region of entirely filled matrices. In the first case the molecules are located at the boundary to the empty pore volume. In the second one they are located in the core region of the pore filling and, accordingly, are influenced by the collective molecular ordering resulting from the paranematic-to-nematic transition. Therefore, the smooth temperature variations of all Cole-Cole parameters observed in the monomolecular layer regime contrast with the ones observed for the entirely filled matrices. Roughly speaking, the monomolecular layer regime does not show a phase transition. This is evident if one compares e.g. the temperature dependences of the static capacitance, $C_{\mathrm{st}}(T)$, for both regimes of pore filling, see Fig.~5.
\begin{figure}[tbp] \begin{center} \center \epsfig{file=alpha.EPS, angle=0, width=0.99\columnwidth} \caption{\footnotesize Cole-Cole parameters of the slow relaxation process I, $\alpha_1$, and of the fast relaxation process II, $\alpha_2$, vs $T$ as extracted within the fitting procedure for 5CB-$p$SiO$_2$ (a) and 6CB-$p$SiO$_2$ (b) nanocomposites in the regimes with entirely ($f=1.0$) and partially ($f=0.16$ (5CB-$p$SiO$_2$); $f=0.18$ (6CB-$p$SiO$_2$)) filled channels.} \label{fig3} \end{center} \end{figure} \begin{figure}[tbp] \begin{center} \center \epsfig{file=tau.EPS, angle=0, width=0.99\columnwidth} \caption{\footnotesize Relaxation times of the slow relaxation process I, $\tau_1$, and of the fast relaxation process II, $\tau_2$, vs $T^{-1}$ as extracted within the fitting procedure for 5CB-$p$SiO$_2$ (a) and 6CB-$p$SiO$_2$ (b) nanocomposites in the regimes with entirely ($f=1.0$) and partially ($f=0.16$ (5CB-$p$SiO$_2$); $f=0.18$ (6CB-$p$SiO$_2$)) filled channels.} \label{fig4} \end{center} \end{figure} For the nanocomposites with entirely filled matrices ($f=$1.0) the Cole-Cole parameters, $\alpha_1$ and $\alpha_2$ (see Fig.~3), exhibit opposite tendencies in their temperature evolution. The surface relaxation (process I) is characterised by a rather broad distribution of the relaxation times at room temperature ($\alpha_1 \sim$ 0.48-0.50), which becomes somewhat narrower at high temperatures, $\alpha_1$ approaching a magnitude of 0.35-0.40 in the paranematic phase. The dipolar relaxation in the core region (process II), on the other hand, is characterised by a narrow distribution of the relaxation rates at room temperature ($\alpha_2 \sim$ 0.05-0.06); $\alpha_2$ rises to about 0.12-0.14 in the paranematic state, with a characteristic, step-like change in the vicinity of the paranematic-to-nematic transition. Similar changes in the phase transition region are observed in the temperature evolution of other relaxation parameters, $\tau_2(T)$ (Fig.~4) and $\Delta C_2(T)$ (Fig.~6).
This means that the orientational ordering in the core region affects the relaxation parameters. For comparison, the surface relaxation parameters, $\alpha_1$, $\tau_1$ and $\Delta C_1$, exhibit here only smooth temperature variations. At this stage, one can already conclude that the molecular orderings in the core and surface regions of the pore filling are expected to be considerably different. This issue will be discussed in detail below. For the pores with monomolecular layer coverage, the parameter $\alpha_1$ exhibits a temperature variation quite similar to the one found for the entirely filled samples. The corresponding $\alpha_1(T)$-curves are slightly shifted down, see Fig.~3. \begin{figure}[tbp] \begin{center} \center \epsfig{file=Cst.EPS, angle=0, width=0.99\columnwidth} \caption{\footnotesize Static capacitance, $C_{\mathrm{st}}$, as extracted within the fitting procedure (symbols) for 5CB-$p$SiO$_2$ ($f=1.0$(a), $f=0.16$(b)) and 6CB-$p$SiO$_2$ ($f=1.0$(c), $f=0.18$(d)) nanocomposites. Solid blue lines are the best fits of the $C_{\mathrm{st}}(T)$-dependence ($f=$1.0) based on the KKLZ approach. Dash-dot blue lines indicate the isotropic baselines, $C_t^{\mathrm{(iso)}}(T)$, obtained within the same fitting procedure.} \label{fig5} \end{center} \end{figure} The relaxation times of the surface and core processes are displayed in Fig.~4. The fast relaxation in the core region exhibits Arrhenius-like behaviour ($\tau_2=\tau_{o2}\exp(E_{a2}/k_{\mathrm{B}}T)$) with somewhat different activation energies in the nematic ($T \ll T_{PN}$) and paranematic ($T \gg T_{PN}$) phases. An exception is the vicinity of the paranematic-to-nematic transition, where smeared, step-like variations are observed. They may be attributable to a specific behaviour of the attempt relaxation time, $\tau_{o2}$, presumably because of changes in the activation entropy caused by the orientational molecular ordering at the phase transformation.
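The Arrhenius analysis underlying this discussion amounts to a linear fit of $\ln\tau$ versus $1/T$, the slope yielding the activation energy and the intercept the attempt time. A minimal sketch with synthetic data follows; the activation energy and attempt time used here are assumed, illustrative values, not the ones extracted from the measurements.

```python
import numpy as np

kB = 1.380649e-23             # Boltzmann constant, J/K

# Synthetic relaxation times obeying tau = tau0 * exp(Ea / (kB * T));
# Ea and tau0 below are assumed, illustrative values.
Ea = 0.6 * 1.602176634e-19    # 0.6 eV expressed in J
tau0 = 1e-16                  # attempt time, s
T = np.linspace(295.0, 320.0, 6)
tau = tau0 * np.exp(Ea / (kB * T))

# A linear fit of ln(tau) versus 1/T recovers the Arrhenius parameters:
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
Ea_fit = slope * kB           # activation energy from the slope
tau0_fit = np.exp(intercept)  # attempt time from the intercept
```

With real data, piecewise fits below and above $T_{PN}$ would give the two different activation energies mentioned in the text.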
The surface relaxation time, $\tau_1$, on the other hand, exhibits only a smooth temperature variation accompanied by a gradual change in slope, but with no special features in the vicinity of $T_{PN}$. This process is much slower because the viscosity in the surface layer is considerably larger than in the bulk, which hinders the rotational dynamics of the molecules \cite{Sinha1998}. Both relaxation times rise with increasing orientational ordering, which is typical of rotational dynamics around the short molecular axis \cite{Haws1989,Diez2006}. \subsection{Thermotropic nematic order: Landau-de Gennes Analysis and Surface/Core Partitioning} In the Landau-de~Gennes theory the nematic (orientational) order parameter is defined as $Q=\frac{1}{2}\langle3\cos^2\theta-1\rangle$, where $\theta$ is the angle between the axis of the molecule and the director, $\vec{n}$, and the brackets denote an average over the ensemble of molecules. The size of an ensemble domain is an important issue in such a description, particularly when the order parameter is spatially inhomogeneous. Smaller sizes are appropriate to describe local properties of the molecular ordering. Larger ones are more suitable for a characterisation of an effective (averaged) value of the order parameter. Its appropriate choice is thus a matter of the specific system under consideration and/or the approach used for its description. Specifically, the surface layer and the core region of the systems studied here may be considered as two regions with homogeneous order behaviour, characterised by two distinct, local order parameters, $Q_s$ and $Q_c$, respectively. The effective (averaged) order parameter of the pore filling, $\bar{Q}$, can then be expressed as a superposition of the elementary contributions: $\bar{Q}=\bar{Q_s}+ \bar{Q_c}$, where $\bar{Q_s}=w_sQ_s$ and $\bar{Q_c}=w_cQ_c$, and $w_s$ and $w_c$ ($w_s+w_c=1$) are the weight factors (volume fractions) of the surface and core components, respectively.
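Under the simplifying assumption of an annular surface layer of uniform thickness in a long cylindrical channel, the weight factors follow directly from the cross-section areas. The sketch below illustrates the superposition $\bar{Q}=w_sQ_s+w_cQ_c$; the layer thickness and the local order parameters are illustrative values, not quantities extracted from the present measurements.

```python
def layer_weights(R=6.6, t=0.5):
    """Volume fractions of an annular surface layer (assumed uniform
    thickness t) and the cylindrical core in a channel of radius R (both
    in nm); for a long channel the fractions scale with the cross-section
    areas, i.e. with R^2."""
    w_core = ((R - t) / R) ** 2
    return 1.0 - w_core, w_core

w_s, w_c = layer_weights()
Q_s, Q_c = 0.3, 0.6            # hypothetical local order parameters
Q_bar = w_s * Q_s + w_c * Q_c  # effective order parameter of the pore filling
```

By construction $w_s+w_c=1$, so $\bar{Q}$ always lies between the two local order parameters.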
Although quite simplistic, this approach has an evident practical benefit: All three quantities, i.e. $\bar{Q}$, $\bar{Q_s}$ and $\bar{Q_c}$, can be extracted independently from the experiment. Following the approach developed in Ref.~\cite{Calus2015}, which is based on the Maier and Meier equation \cite{Maier1961}, the excess change of the static capacitance is proportional to $\bar{Q}$: \begin{equation} C_{\mathrm{st}}-C_t^{\mathrm{(iso)}}\propto \bar{Q},\label{eq3} \end{equation} where $C_t^{\mathrm{(iso)}}(T)$ is the \emph{isotropic baseline} of the static capacitance. In the case of the bulk nematic LC, $C_t^{\mathrm{(iso)}}(T)$ represents the bare temperature dependence of the static capacitance in the isotropic phase, whereas its extrapolation to the nematic phase gives an isotropic baseline relative to which the excess contribution due to orientational ordering is counted. Similarly, the excess changes of the capacitance relaxation strengths $\Delta C_1$ and $\Delta C_2$ are proportional to $\bar{Q_s}$ and $\bar{Q_c}$, respectively: \begin{eqnarray} \Delta C_1(T) - \Delta C_1^{\mathrm{(iso)}}(T)\propto \bar{Q_s}(T), \nonumber \\ \Delta C_2(T) - \Delta C_2^{\mathrm{(iso)}}(T)\propto \bar{Q_c}(T), \label{eq4} \end{eqnarray} where $\Delta C_1^{\mathrm{(iso)}}(T)$ and $\Delta C_2^{\mathrm{(iso)}}(T)$ are the corresponding isotropic baselines. The factor of proportionality depends on the molecular dipole moment and its orientation with respect to the long principal axis of the molecule, the molecular number density, the internal field factors and the temperature, but it is identical in Eqs.~(3) and (4).
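Because the proportionality factor is identical in Eqs.~(3) and (4), it cancels when all three quantities are normalised to $\bar{Q}$ at a common reference temperature. A schematic sketch of this deconvolution with synthetic excess capacitances (arbitrary, illustrative values):

```python
import numpy as np

# Synthetic excess capacitances standing in for the left-hand sides of
# Eqs. (3) and (4); values are illustrative only (arbitrary units).
T = np.array([290.0, 295.0, 300.0, 305.0])
exc_surface = np.array([0.40, 0.35, 0.28, 0.20])  # dC1 - dC1_iso ~ Q_bar_s
exc_core = np.array([0.60, 0.50, 0.30, 0.05])     # dC2 - dC2_iso ~ Q_bar_c
exc_total = exc_surface + exc_core                # C_st - C_iso  ~ Q_bar

# The common proportionality factor cancels upon normalisation to the
# total excess at the reference temperature (290 K here).
norm = exc_total[0]
Q_bar = exc_total / norm
Q_bar_s = exc_surface / norm
Q_bar_c = exc_core / norm
```

The normalised curves satisfy $\bar{Q}=\bar{Q_s}+\bar{Q_c}$ at every temperature, which is the consistency relation used for Fig.~7.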
\begin{figure}[tbp] \begin{center} \epsfig{file=dC.EPS, angle=0, width=0.99\columnwidth} \caption{\footnotesize Capacitance relaxation strengths of the slow relaxation process I, $\Delta C_1$, and of the fast relaxation process II, $\Delta C_2$, vs $T$ as extracted within the fitting procedure for 5CB-$p$SiO$_2$ (panel (a), $f=1.0$, $f=0.16$) and 6CB-$p$SiO$_2$ (panel (b), $f=1.0$, $f=0.18$) nanocomposites. Dash-dot blue lines indicate the baselines of the isotropic state, see labels.} \label{fig6} \end{center} \end{figure} In Fig.~5 we display the temperature dependences of the static capacitance for the composites 5CB-$p$SiO$_2$ and 6CB-$p$SiO$_2$. In the monomolecular layer regime [5CB-$p$SiO$_2$ ($f=0.16$), panel (b); 6CB-$p$SiO$_2$ ($f=0.18$), panel (d)] only a gradual increase of $C_{\mathrm{st}}$ is observed upon cooling, indicating a weak orientational ordering, but no hints of a phase transformation. For the entirely filled samples [5CB-$p$SiO$_2$ ($f=1.0$), panel (a); 6CB-$p$SiO$_2$ ($f=1.0$), panel (c)], on the other hand, pronounced changes of the $C_{\mathrm{st}}(T)$-dependences with a characteristic kink at $T_{PN}$ are observed for both nanocomposites. A typical feature of these dependences is their \emph{continuous} character with a precursor behaviour quite similar to that observed in recent optical birefringence studies \cite{Kityk2008,Kityk2010,Calus2012,Calus2014}. \begin{figure*}[tbp] \begin{center} \center \epsfig{file=OP.EPS, angle=0, width=1.99\columnwidth} \caption{\footnotesize The effective (averaged) orientational order parameter, $\bar{Q}$, vs $T$ and its deconvolution into the elementary contributions (averaged local order parameters) describing the molecular ordering in the surface ($\bar{Q_s}(T)$) and core ($\bar{Q_c}(T)$) regions for 5CB-$p$SiO$_2$ ($f=1.0$, panel (a)) and 6CB-$p$SiO$_2$ ($f=1.0$, panel (b)) nanocomposites. For convenience all order parameters are normalised to the value of $\bar{Q}$ at 290~K.
The relation $\bar{Q}=\bar{Q_s}+\bar{Q_c}$ holds for each $T$.}\label{fig7} \end{center} \end{figure*} Whereas the molecular ordering in the bulk, i.e. the isotropic-to-nematic phase transition, can be described by a Landau-de Gennes theory, Kutnjak, Kralj, Lahajnar, and Zumer \cite{Kutnjak2004,Kutnjak2003} have extended the corresponding phenomenological approach (hereafter denoted as the KKLZ model) towards nematic phases in cylindrical confinement. In principle, this model is applicable to entirely filled channels only, and its description depends on the anchoring conditions at the channel walls. The untreated silica channels enforce planar anchoring \cite{DrevensekOlenik2003} with a preferred orientation of the LC molecules along the long channel axes. A key point of the KKLZ theory is a bilinear coupling between the order parameter and the nematic ordering field, $\sigma$, which results in a residual nematic ordering (paranematic state) for $T>T_{PN}$ instead of an isotropic one. Omitting a description of the KKLZ model and the details of the fitting procedure, which can be found in Refs.~\cite{Kityk2008,Kityk2010,Calus2012,Calus2014}, we present here only the main results of this analysis. Figs.~5a and~5c display the best fits of the measured $C_{\mathrm{st}}(T)$-dependences along with the isotropic baselines, $C_t^{\mathrm{(iso)}}(T)$, obtained within the same fitting procedure. The difference between the bulk and effective transition temperatures, $\Delta T^*$, which calibrates the temperature scale of the KKLZ model, was taken equal to 3.4~K (5CB) and 3.2~K (6CB) based on the results of recent optical polarimetric studies \cite{Calus2012}. The best fits yield $\sigma$-values of 0.91 (5CB-$p$SiO$_2$) and 1.10 (6CB-$p$SiO$_2$), which in both cases corresponds to a supercritical regime.
The magnitudes of the critical radius, $R_c=2R\sigma$, equal 12.0~nm (5CB-$p$SiO$_2$) and 14.5~nm (6CB-$p$SiO$_2$), respectively, which within the experimental error ($\pm$1.0 nm) is in agreement with the values obtained in the optical birefringence studies: 12.1~nm and 14.0~nm \cite{Calus2012}, respectively. Obviously, the KKLZ approach gives an adequate description of the paranematic-to-nematic transition. Deviations of the experimental data points from the fit curves observed at lower temperatures originate in the saturation of the order parameter, which is not appropriately described by the KKLZ model, since it is limited to a fourth-order expansion of the free energy. In accordance with Eq.~(3) the difference $C_{\mathrm{st}}(T)- C_t^{\mathrm{(iso)}}(T)$ is proportional to $\bar{Q}(T)$. Normalised to the $\bar{Q}$-value at $T=290$ K, the $\bar{Q}(T)$-dependences of the 5CB-$p$SiO$_2$ and 6CB-$p$SiO$_2$ nanocomposites are displayed in the upper panels of Figs.~7a and 7b, respectively. Using the $\Delta C_1(T)$ and $\Delta C_2(T)$ dependences presented in Fig.~6 for entirely filled matrices ($f=1.0$) and Eqs.~(4), the effective (averaged) order parameter $\bar{Q}$ can be deconvoluted into the elementary contributions characterising the molecular ordering in the surface ($\bar{Q_s}$) and core ($\bar{Q_c}$) regions. A principal challenge is the determination of the isotropic baselines, $\Delta C_i^{\mathrm{(iso)}}(T)$ ($i=1,2$). We consider first $\Delta C_2^{\mathrm{(iso)}}(T)$. Above $T_{PN}$, $\Delta C_2(T)$ saturates rather quickly to a nearly temperature-independent value. A linear extrapolation of this dependence below $T_{PN}$, as displayed in Fig.~6, represents the isotropic baseline in the confined nematic phase. This approach is, however, not applicable to the $\Delta C_1^{\mathrm{(iso)}}(T)$-baseline, since $\Delta C_1(T)$ exhibits an evidently nonlinear asymptotic behaviour extending far above $T_{PN}$.
Fortunately, with the two other isotropic baselines at hand, it can be calculated as $\Delta C_1^{\mathrm{(iso)}}(T)= C_t^{\mathrm{(iso)}}(T)-C_{\infty}-\Delta C_2^{\mathrm{(iso)}}(T)$, see the labeled dash-dotted lines in Fig.~6. The difference $\Delta C_i(T)- \Delta C_i^{\mathrm{(iso)}}(T)$ ($i=1,2$) is proportional to the elementary contributions $\bar{Q_s}$ and $\bar{Q_c}$. Normalised again to the $\bar{Q}$-value at $T=290$~K, the $\bar{Q_s}(T)$ and $\bar{Q_c}(T)$ dependences are displayed in the lower panels of Figs.~7a and 7b for 5CB-$p$SiO$_2$ ($f=1.0$) and 6CB-$p$SiO$_2$ ($f=1.0$), respectively. The variations of $\bar{Q_s}(T)$ and $\bar{Q_c}(T)$ in the region of $T_{PN}$ are considerably different. The molecular ordering in the core region is reminiscent of the evolution of the bulk order parameter. It exhibits a strong increase of the nematic ordering just below $T_{PN}$ and a subsequent saturation at $T\ll T_{PN}$. The surface ordering, on the other hand, exhibits a continuous change with a smeared kink near $T_{PN}$ and an asymptotic decrease above this temperature. Moreover, the molecular ordering in the core region has an influence on the ordering in the periphery, particularly in the molecular layer located next to the pore walls. This can be inferred by comparing the $\Delta C_1(T)$-dependences measured for the complete and partial pore fillings, see Fig.~6. In particular, a change in slope below $T_{PN}$ is quite obvious only for the samples with entirely filled pores. Presumably, the molecular ordering in the core region is transferred to the periphery via intermolecular interactions. \section{Conclusion} In conclusion, we reported a dielectric study on the calamitic nematic liquid crystals 5CB and 6CB confined in cylindrical nanochannels of monolithic silica membranes. The measurements have been performed on composites with two distinct regimes of pore filling: monomolecular layer coverage and entirely filled pores.
Whereas for the composites with monomolecular layer coverage a slow surface relaxation dominates the dielectric properties, the dielectric spectra of the nanocomposites with entirely filled pores can be well described by two Cole-Cole processes with well separated relaxation times. The fast relaxation is associated with the rotational dynamics of the molecules in the core region of the pore filling. The slow relaxation originates from a surface LC layer next to the pore walls. In the regime of monomolecular layer coverage the static capacitance exhibits a smooth temperature behaviour with no hints of a phase transformation in the entire temperature region. This is in contrast to the matrices with entirely filled pores. The static capacitance of such samples clearly demonstrates anomalous behaviour with a precursor character caused by the orientational molecular ordering due to the paranematic-to-nematic transition. For the entire pore filling it can be well described by a phenomenological Landau-de Gennes theory (KKLZ approach). The corresponding analysis yields nematic ordering fields, $\sigma$, of 0.91 (5CB-$p$SiO$_2$) and 1.10 (6CB-$p$SiO$_2$), which in both cases correspond to a supercritical regime and are in good agreement with recent optical birefringence studies \cite{Calus2012}. Whereas the changes of the static capacitance characterise the behaviour of the effective (averaged) order parameter of the pore filling, the analysis of the relaxation strengths of the slow and fast relaxation processes provides access to the local molecular ordering in different parts of the pore filling. Thus, the molecular ordering in the surface or interface layer next to the pore walls and in the core region can be resolved. The molecular ordering in the core region is reminiscent of that in the bulk.
It exhibits a strong increase of the nematic ordering just below the paranematic-to-nematic transition point, $T_{PN}$, and a subsequent saturation upon further cooling. The surface ordering, on the other hand, exhibits a continuous evolution with a smeared kink near $T_{PN}$ and an asymptotic decrease above this temperature. The molecular ordering in the core region of the pore filling influences the peripheral molecular layers, presumably via intermolecular interactions. Overall, the dielectric spectroscopy study on the calamitic nematic liquid crystals 5CB and 6CB confined to cylindrical nanochannels of monolithic silica membranes reveals features analogous to those found for 7CB in nanoporous silica substrates \cite{Calus2015}. Therefore, we believe that the dielectric spectroscopy technique along with the phenomenological approach and the key physical principles outlined here and in Ref.~\cite{Calus2015} are applicable to the important class of the cyanobiphenyl family and to confined nematics in general. An inhomogeneous orientational behaviour has also been found in molecular dynamics simulations for rod-like LCs interacting with Gay-Berne potentials for confinement in nanochannels \cite{Ji2009, Guegan2007, Lefort2008} and in slit-pore geometry \cite{Gruhn1997, Gruhn1998, Barmes2004}. The qualitative results of these studies agree with the findings presented here. Unfortunately, no partial fillings, in particular no monolayer coverages, have been explored in these simulations; this may be an interesting task for the future. Moreover, in agreement with experimental studies \cite{Kityk2008, Kityk2010, Gear2015}, molecular dynamics studies suggest a significant influence of interface roughness and/or microstructure on the anchoring strength and thus on the nematic-isotropic transition \cite{Cheung2006, Roscioni2013}, an interplay which may be explorable in the near future given the availability of hierarchically tailorable porous solids \cite{Kuester2014}.
\section{Acknowledgement} This work has been supported by the Polish National Science Centre (NCN) under the Project ``Molecular Structure and Dynamics of Liquid Crystals Based Nanocomposites'' (Decision No. DEC-2012/05/B/ST3/02782). The German Research Foundation (DFG) funded the research through Grant No. Hu850/3 and within the collaborative research initiative ``Tailor-made Multi-Scale Materials Systems'' (SFB 986, project area B and project C2), Hamburg. \footnotesize{ \bibliographystyle{rsc} \providecommand*{\mcitethebibliography}{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{66} \providecommand*{\natexlab}[1]{#1} \providecommand*{\mciteSetBstSublistMode}[1]{} \providecommand*{\mciteSetBstMaxWidthForm}[2]{} \providecommand*{\mciteBstWouldAddEndPuncttrue} {\def\unskip.}{\unskip.}} \providecommand*{\mciteBstWouldAddEndPunctfalse} {\let\unskip.}\relax} \providecommand*{\mciteSetBstMidEndSepPunct}[3]{} \providecommand*{\mciteSetBstSublistLabelBeginEnd}[3]{} \providecommand*{\unskip.}}{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem} {(\emph{\alph{mcitesubitemcount}})} \mciteSetBstSublistLabelBeginEnd{\mcitemaxwidthsubitemform\space} {\relax}{\relax} \bibitem[Martin(1994)]{Martin1994} C.~R. Martin, \emph{Science}, 1994, \textbf{266}, 1961--1966\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Schmidt-Mende \emph{et~al.}(2001)Schmidt-Mende, Fechtenkotter, Mullen, Moons, Friend, and MacKenzie]{Schmidt-Mende2001} L.~Schmidt-Mende, A.~Fechtenkotter, K.~Mullen, E.~Moons, R.~H. Friend and J.~D.
MacKenzie, \emph{Science}, 2001, \textbf{293}, 1119--1122\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bisoyi and Kumar(2011)]{Bisoyi2011} H.~K. Bisoyi and S.~Kumar, \emph{Chem. Soc. Rev.}, 2011, \textbf{40}, 306--319\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Abdulhalim(2012)]{Abdulhalim2012} I.~Abdulhalim, \emph{J. Nanophotonics}, 2012, \textbf{6}, 061001\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Duran \emph{et~al.}(2012)Duran, Hartmann-Azanza, Steinhart, Gehrig, Laquai, Feng, Muellen, Butt, and Floudas]{Duran2012} H.~Duran, B.~Hartmann-Azanza, M.~Steinhart, D.~Gehrig, F.~Laquai, X.~Feng, K.~Muellen, H.-J. Butt and G.~Floudas, \emph{ACS Nano}, 2012, \textbf{6}, 9359--9365\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kumar(2014)]{Kumar2014} S.~Kumar, \emph{Npg Asia Materials}, 2014, \textbf{6}, e82\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Huber(2015)]{Huber2015} P.~Huber, \emph{J. Phys. : Cond. Matt.}, 2015, \textbf{27}, 103102\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Sheng(1976)]{Sheng1976} P.~Sheng, \emph{Phys. Rev. Lett.}, 1976, \textbf{37}, 1059--1062\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Poniewierski and Sluckin(1987)]{Poniewierski1987} A.~Poniewierski and T.~J. 
\end{mcitethebibliography}} \end{document}
1508.03510
\section{Calculation of the theoretical PQPD} To describe the reconstructed PQPD $W(S_1,S_2,S_3)$ we calculate the theoretical distribution for our case. Thus we consider a horizontally polarized weak coherent state and only single-photon and no-photon detection events. In this case the density operator reads $$ \hat{\rho} = \left[p_0\ket{0}\bra{0}_{H}+p_1\ket{1}\bra{1}_H\right]\otimes\ket{0}\bra{0}_V. \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} \label{rho} $$ As in the experimental data processing, the starting point is the definition of the PQPD and of the polarization characteristic function [see Eqs. (1) and (2)]. The derivation of the theoretical distribution requires the use of spherical coordinates $(\lambda,\xi,\rho)$ that differ from the coordinates $(\lambda,\alpha,\beta)$ used in the experimental data processing: $$ \begin{array}{c} u_1=\lambda\cos\xi,\qquad u_2=\lambda\sin\xi\cos\rho,\\ u_3=\lambda\sin\xi\sin\rho. \end{array} \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ The polarization characteristic function for the density operator (\ref{rho}) consists of the no-photon and single-photon parts~\cite{Chekhova2013}: $$ \chi_{\xi\rho}(\lambda) = p_0 + p_1(\cos\lambda + i\sin\lambda\cos\xi). \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ The no-photon part of the characteristic function can be directly integrated in Eq. (1). The resulting no-photon PQPD $W_0$ is a $\delta$-peak at the origin of the Stokes space, $$ W_0(S_1,S_2,S_3) = p_0\delta(S_1)\delta(S_2)\delta(S_3). \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} \label{W_0} $$ The derivation of the single-photon PQPD $W_1$ is more involved. As in the derivation of Eq. (8), we pass from the Cartesian coordinates $(u_1,u_2,u_3)$ to the spherical ones $(\lambda,\xi,\rho)$ and perform the integration over $\lambda$.
Then the single-photon distribution takes the form $$ \begin{array}{l} W_1(S_1,S_2,S_3)=\\[3pt] \displaystyle-\frac{p_1}{2(2\pi)^2}\int_0^{2\pi}d\rho\int_0^{\pi}d\xi\sin\xi(1+\cos\xi)\delta^{(2)}(S_{\xi\rho}-1), \end{array} \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} \label{W1_1} $$ where an arbitrary Stokes observable $S_{\xi\rho}$ is defined as $$ S_{\xi\rho}=S_1\cos\xi+(S_2\cos\rho+S_3\sin\rho)\sin\xi. \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ It is possible to perform the integration over $\rho$ in Eq. (\ref{W1_1}) using the spherical coordinates $(S,\theta,\phi)$ defined in Eqs. (14). In this case $$ S_{\xi\rho}=S(\cos\xi\cos\theta+\sin\xi\sin\theta\cos\bar{\rho}), \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ where $\bar{\rho}=\rho-\phi$. Using the Leibniz integral rule, Eq. (\ref{W1_1}) is transformed into $$ W_1(S,\theta,\phi)=-\frac{p_1}{2(2\pi)^2}\left.\frac{\partial^2I_\xi}{(\partial y)^2}\right|_{y=1}, \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} \label{W1_2} $$ where $$ I_\xi=\int_0^{\pi}d\xi\,\frac{1+\cos\xi}{S\sin\theta}I_{\bar\rho}, \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} \label{I_xi} $$ $$ I_{\bar\rho}=\lim_{\kappa\to0}\int_0^{2\pi}d\bar\rho\,\delta_\kappa(P-\cos\bar\rho), \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ and $$ P=\frac{y-S\cos\xi\cos\theta}{S\sin\xi\sin\theta}. \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ Here we use the rectangular approximation [see Eq. (11)] for the Dirac delta function $$ \delta_\kappa(x)=\frac{1}{\kappa}\Pi\left(\frac{x}{\kappa}\right). \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ After integrating over $\bar\rho$ and simplifying the result by setting $\kappa\to0$, we obtain $$ I_{\bar\rho}=\lim_{\kappa\to0}\left\{ \begin{array}{cl} \displaystyle\frac{2}{\sqrt{1-P^2}},&|P|<1-\frac{\kappa}{2},\\[3pt] \frac{2\sqrt{2}}{\kappa}\sqrt{1-|P|+\frac{\kappa}{2}},&||P|-1|\le\frac{\kappa}{2},\\[3pt] 0,&|P|>1+\frac{\kappa}{2}.
\end{array} \right. \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ Note that the second case leads to a singularity $\propto1/\sqrt{\kappa}$. However, because the width of the corresponding integration segment is proportional to $\kappa$, it gives no contribution to the integral $I_\xi$. Therefore we obtain $$ I_{\bar\rho}=\frac{2}{\sqrt{1-P^2}}\,\Pi\left(\frac{P}{2}\right). \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} $$ Note that if $S<y$ then $P>1$ independently of $\xi$ and $\theta$, thus $I_{\bar\rho}=0$ and $I_\xi=0$. If $S>y$ then $I_{\bar\rho}\ne0$ only if $|P|<1$. The last condition restricts the limits of integration in $I_\xi$. Taking this into account and integrating over $\xi$ in Eq. (\ref{I_xi}) we obtain $$ I_\xi=\frac{2\pi}{S^2}(S+y\cos\theta)H(S-y), \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} \label{I_xi_2} $$ where $H(x)$ is the Heaviside step function. Thus, the single-photon PQPD is obtained by taking the derivative of $I_\xi$ in Eq. (\ref{W1_2}), $$ W_1(S,\theta,\phi)=\frac{p_1\cos\theta}{4\pi S^2}\delta(S-1)-\frac{p_1(1+\cos\theta)}{4\pi S}\delta'(S-1). \refstepcounter{suppeq} \eqno{(\rm S\arabic{suppeq})} \label{W_1_3} $$ Here we used the fact that $\frac{d}{dx}\left[xH(x)\right]=H(x)$. This expression can be obtained using any approximation for the Heaviside step function. In our opinion this result is more physical than the one obtained by a formal derivation. The final theoretical distribution (13) is obtained by summing the no-photon $W_0$ (\ref{W_0}) and the single-photon $W_1$ (\ref{W_1_3}) PQPD. \section{Experiment} \paragraph{PQPD reconstruction.} \begin{figure} \includegraphics[width=8.5cm]{Setup.jpg} \caption{Left: experimental setup. A weak coherent state is prepared by attenuating the second harmonic of a Nd:YAG laser (Nd:YAG~2$\omega$) with neutral density filters (NDF).
A standard setup for polarization tomography consists of a quarter- and a half-wave plate ($\lambda/4$ and $\lambda/2$), a polarizing beam splitter, and two detectors (D$_1$ and D$_2$). We use a Glan-Taylor prism (GP) as a polarizing beam splitter and two avalanche photodiodes as detectors. Right: the points at which tomographic measurements are performed are shown on the Poincar\'e sphere.} \label{Setup} \end{figure} A standard setup for polarization tomography (see Fig.~\ref{Setup}) consists of a quarter- and a half-wave plate ($\lambda/4$ and $\lambda/2$), a polarizing beam splitter and two detectors (D$_1$ and D$_2$). For each pair of settings of the quarter-wave ($\tilde{\beta}$) and half-wave ($\tilde{\alpha}$) plates, such a setup measures a different arbitrary Stokes operator $\hat{S}_{\alpha\beta}=\hat{n}_{1}-\hat{n}_{2}$. The operators $\hat{n}_{1,2}$ correspond to the photon numbers in the mode transmitted or reflected by the polarizing beam splitter and are measured by D$_{1}$ or D$_{2}$, respectively. The angles $\alpha\in[0,2\pi]$ and $\beta\in[-\pi/2,\pi/2]$ that define a point on the Poincar\'e sphere (see Fig.~\ref{Setup}) are determined by the settings of the wave plates, \begin{equation} \alpha=4\tilde{\alpha}-2\tilde{\beta},\quad\beta=2\tilde{\beta}. \end{equation} An arbitrary Stokes operator $\hat{S}_{\alpha\beta}$ can be represented in Cartesian coordinates $(\hat{S}_1,\hat{S}_2,\hat{S}_3)$ as \begin{equation} \hat{S}_{\alpha\beta}=(\hat{S}_1\cos\alpha+\hat{S}_2\sin\alpha)\cos\beta+\hat{S}_3\sin\beta. \end{equation} It is clear that this operator possesses inversion symmetry $\hat{S}_{(\alpha+\pi)(-\beta)}=-\hat{S}_{\alpha\beta}$, thus measurements on only half of the Poincar\'e sphere suffice for the full reconstruction of any state. In the experiment, for each point on the Poincar\'e sphere (for each $\alpha$ and $\beta$), acquisition of many $S_{\alpha\beta}$ values is needed.
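As a concrete numerical check of the relations above, the following sketch maps hypothetical wave-plate settings to Poincar\'e-sphere angles and verifies the inversion symmetry of the arbitrary Stokes observable (the plate settings and Stokes components are arbitrary illustrative values, not experimental data):

```python
import numpy as np

def sphere_angles(hwp, qwp):
    # Poincare-sphere angles from the half-wave (hwp) and quarter-wave (qwp)
    # plate settings: alpha = 4*hwp - 2*qwp, beta = 2*qwp
    return 4 * hwp - 2 * qwp, 2 * qwp

def stokes(S, alpha, beta):
    # Arbitrary Stokes observable for Cartesian components S = (S1, S2, S3)
    S1, S2, S3 = S
    return (S1 * np.cos(alpha) + S2 * np.sin(alpha)) * np.cos(beta) + S3 * np.sin(beta)

S = (0.3, -0.7, 0.5)                   # an arbitrary point in Stokes space
alpha, beta = sphere_angles(0.4, 0.2)  # hypothetical plate settings (radians)

# Inversion symmetry: S_{(alpha+pi)(-beta)} = -S_{alpha beta}
assert np.isclose(stokes(S, alpha + np.pi, -beta), -stokes(S, alpha, beta))
```

The symmetry holds identically in $\alpha$ and $\beta$, which is why the tomographic scan can be restricted to one hemisphere.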
From these values we calculate the probabilities $W_{\alpha\beta}(n)$ that $S_{\alpha\beta}$ is equal to $n$. From these probabilities we recover the polarization characteristic function $\chi_{\alpha\beta}(\lambda)$ in spherical coordinates ($\lambda,\alpha,\beta$) \cite{Chekhova2013}: \begin{equation} \chi_{\alpha\beta}(\lambda)=\sum_{n=-\infty}^{\infty}W_{\alpha\beta}(n)e^{i\lambda n},\quad\lambda\in[0,\infty). \end{equation} These spherical coordinates ($\lambda,\alpha,\beta$) are related to the Cartesian ones ($u_1,u_2,u_3$) by the following transformations: \begin{equation} \begin{array}{c} u_1=\lambda\cos\alpha\cos\beta,\qquad u_2=\lambda\sin\alpha\cos\beta,\\ u_3=\lambda\sin\beta. \end{array} \end{equation} Thus, using these transformations, Eq. (\ref{W_basis}) can be rewritten as \begin{eqnarray} W(S_1,S_2,S_3)&=&-\frac{1}{(2\pi)^2}\int_0^{2\pi}d\alpha\int_0^{\pi/2}d\beta\cos\beta\nonumber\\ &\times&\sum_{n=-\infty}^{\infty}W_{\alpha\beta}(n)\delta^{(2)}(S_{\alpha\beta}-n), \label{W_exp_eq} \end{eqnarray} where $\delta^{(2)}(x)$ is the second derivative of the Dirac delta function. Here we exploit the symmetry of $\hat{S}_{\alpha\beta}$ and perform the integration over the radial coordinate $\lambda$. As a result, we obtain the equation for reconstructing the PQPD $W(S_1,S_2,S_3)$ from the experimentally measured probabilities $W_{\alpha\beta}(n)$. The reconstruction of the PQPD $W_\epsilon(S_1,S_2,S_3)$ from the experimentally acquired data set using Eq.\,\eqref{W_exp_eq} requires an approximation $\delta_\epsilon(x)$ for the Dirac delta function $\delta(x)$, where $\epsilon$ is the smoothing parameter. We choose the Gaussian approximation, \begin{equation} \delta_\epsilon(x)=\frac{1}{2\epsilon\sqrt{\pi}}e^{-x^2/4\epsilon^2}, \label{Gauss_app} \end{equation} and similarly for the derivatives of $\delta(x)$. The smoothing parameter $\epsilon$ should be chosen based on the following considerations.
On the one hand, it has to be small enough to represent all features of the PQPD, but on the other hand, small values of $\epsilon$ lead to numerous artifacts in the reconstructed distribution (the so-called reconstruction noise). \paragraph{Experiment and data processing.} We have performed the polarization tomography of a horizontally polarized weak coherent state $\left|\gamma\right>$. This state was produced by strongly attenuating a coherent beam at a wavelength of 532~nm generated by a pulsed Nd:YAG~laser (Nd:YAG~2$\omega$) with a pulse duration of 10~ns and a repetition rate of 10~kHz (see Fig.~\ref{Setup}). Attenuation (or any other linear losses) does not change the statistical properties of a coherent state: the state remains coherent, but the mean number of photons $|\gamma|^2$ is reduced. The attenuation to a single-photon level was performed by a neutral density filter (NDF). The attenuation was chosen such that the probability of single-photon detection events $p_1\approx|\gamma|^2$ was equal to 0.189. In this case $p_1$ was at least one order of magnitude larger than the probabilities of two-photon and higher-order detection events. Therefore we ignored such events and considered only single-photon and no-photon detection events (with the probability $p_0$). We used avalanche photodiodes as single-photon detectors (D$_1$ and D$_2$). The points $(\alpha_k,\beta_l)$ on the Poincar\'e sphere where tomographic measurements were performed cover the upper hemisphere ($\beta\ge0$) with a step of $8^\circ$ (see Fig.~\ref{Setup}). These points were accessed by different combinations of the settings for the quarter- and half-wave plates with steps of $4^\circ$ and $2^\circ$, respectively (and for $\tilde{\beta}=45^\circ$, the `north' pole of the Poincar\'e sphere was accessed). For each point from this discrete set we have calculated the experimental probabilities $\tilde{W}_{\alpha_k\beta_l}(n)$, where $n\in\{-1,0,1\}$.
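As a quick numerical sanity check of this approximation (a sketch with the experimental value $\epsilon=0.02$; the integration grid is arbitrary), one can verify that $\delta_\epsilon$ has unit area while its second derivative, which enters the reconstruction formula, integrates to zero:

```python
import numpy as np

eps = 0.02  # smoothing parameter used in the experiment

def delta_eps(x):
    # Gaussian approximation of the Dirac delta function
    return np.exp(-x**2 / (4 * eps**2)) / (2 * eps * np.sqrt(np.pi))

def delta_eps_dd(x):
    # Its second derivative, needed for delta''(S_ab - n) in the reconstruction
    return delta_eps(x) * (x**2 / (4 * eps**4) - 1 / (2 * eps**2))

x = np.linspace(-0.5, 0.5, 200001)
dx = x[1] - x[0]
assert abs(delta_eps(x).sum() * dx - 1.0) < 1e-6  # unit area
assert abs(delta_eps_dd(x).sum() * dx) < 1e-6     # integrates to zero
```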
The full experimental dataset $\{\tilde{W}_{\alpha_k\beta_l}(n)\}$ is not suitable for the final integration over $\alpha$ and $\beta$ in Eq. (\ref{W_exp_eq}), because it is defined on a discrete set $\{\alpha_k,\beta_l\}$. Thus it should be interpolated by a continuous function. The interpolated function $W_{\alpha\beta}(n)$ is given by the convolution sum of the data points $\tilde{W}_{\alpha_k\beta_l}(n)$ with the interpolation kernel $u(\alpha,\beta)$, \begin{equation} W_{\alpha\beta}(n)=\sum_{\alpha_k,\beta_l}\tilde{W}_{\alpha_k\beta_l}(n)u(\alpha-\alpha_k,\beta-\beta_l). \end{equation} Various interpolation kernels can be used. The simplest one is a rectangular function $u(\alpha,\beta)=\Pi(\alpha)\Pi(\beta)$, where \begin{equation} \Pi(x) = \left\{ \begin{array}{rl} 1,& |x|<1/2,\\ 0,& |x|\ge1/2.\\ \end{array} \right. \end{equation} Integrating the interpolated function (e.g. as part of the Fourier or Radon transform) then gives exactly the same result as replacing the integration by a summation. Such a replacement has always been used for reconstruction in polarization tomography \cite{Marquardt2007,Agafonov2012,Kanseri2012,Mueller2012}. Unfortunately, with this interpolation, the transformations are accompanied by rather high noise. One can overcome this problem by collecting more experimental points $(\alpha_k,\beta_l)$ or by using different interpolation kernels. Interpolation methods are well-developed for image resampling \cite{Maeland1988, Parker1983}. It has been shown that several interpolation kernels can suppress the reconstruction noise by more than 30~dB compared with the rectangular-function kernel. In our case the probabilities $W_{\alpha\beta}(n)$ cannot be negative; hence we needed a non-negative kernel. We chose to use a positive cubic spline kernel $u(\alpha,\beta)=u(\alpha)u(\beta)$ \cite{Maeland1988}, where \begin{equation} u(x) = \left\{ \begin{array}{cl} 2|x|^3-3|x|^2+1,& |x|\le1,\\ 0,& |x|>1.\\ \end{array} \right.
\end{equation} This kernel suppresses the noise very well and is at the same time quite simple. For each interval between the data points, e.g. $(x_k,x_{k+1})$, the interpolation requires only the experimental data from the endpoints of the interval ($x_k$ and $x_{k+1}$). Hence this kernel has the same simplicity as the linear interpolation kernel, but performs better. \begin{figure} \includegraphics[width=8.5cm]{W23_exp.jpg} \caption{Cross-sections of the reconstructed PQPD $W_\epsilon(S_1,S_2,S_3)$ (with $\epsilon=0.02$) along the $(S_2,S_3)$ plane at $S_1=1$ (a), $S_1=0.5$ (b), $S_1=0$ (c,d), $S_1=-0.5$ (e), $S_1=-1$ (f) and $S_1=-1.5$ (g). In panel (d), the same color is used for values larger than 5 to highlight the jump at $S=1$.} \label{Exp_S1} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{W23_theor.jpg} \caption{Cross-sections of the theoretical PQPD $W_\epsilon(S_1,S_2,S_3)$ smoothed by $\epsilon=0.02$ along the $(S_2,S_3)$ plane at $S_1=1$ (a), $S_1=0.5$ (b), $S_1=0$ (c,d), $S_1=-0.5$ (e), $S_1=-1$ (f) and $S_1=-1.5$ (g). In panel (d), the same color is used for values larger than 5 to highlight the jump at $S=1$.} \label{The_S1} \end{figure} \paragraph{Results.} Using this interpolation and the approximation (\ref{Gauss_app}) with $\epsilon=0.02$, we have reconstructed the PQPD $W_\epsilon(S_1,S_2,S_3)$. Its cross-sections along the $(S_2,S_3)$ plane at different values of $S_1$ are shown in Fig.~\ref{Exp_S1}. In general, each distribution contains a central peak at the origin of the Stokes space ($S=\sqrt{S_1^2+S_2^2+S_3^2}=0$) and a jump from negative values to positive ones at $S=1$. The central peak, which appears because of the no-photon detection events, is more than two orders of magnitude higher than the jump, which originates from the single-photon ones. At values $S>1$ only reconstruction noise remains (Fig.~\ref{Exp_S1}g).
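The interpolation itself can be sketched in one dimension as follows; the grid and data values below are hypothetical and only illustrate that the kernel reproduces the data points exactly (since $u(0)=1$ and $u(\pm1)=0$) while preserving non-negativity:

```python
import numpy as np

def u(x):
    # Positive cubic spline kernel: u(x) = 2|x|^3 - 3|x|^2 + 1 for |x| <= 1
    ax = np.abs(x)
    return np.where(ax <= 1, 2 * ax**3 - 3 * ax**2 + 1, 0.0)

def interpolate(xq, xk, wk):
    # Convolution sum of the data points wk on the uniform grid xk,
    # evaluated at the query points xq
    h = xk[1] - xk[0]
    return sum(w * u((xq - x0) / h) for x0, w in zip(xk, wk))

xk = np.linspace(0.0, 1.0, 11)   # hypothetical measurement grid
wk = np.cos(np.pi * xk) ** 2     # hypothetical non-negative probabilities
xq = np.linspace(0.0, 1.0, 101)

assert np.allclose(interpolate(xk, xk, wk), wk)  # data points are reproduced
assert (interpolate(xq, xk, wk) >= 0).all()      # positivity is preserved
```

Because $u'(\pm1)=0$ as well, the interpolated curve is smooth at the grid points, which is what suppresses the reconstruction noise relative to the rectangular kernel.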
The reconstructed distribution $W_\epsilon(S_1,S_2,S_3)$ is in agreement with the theoretical one that is derived for our case (single-photon and no-photon detection events) in spherical coordinates $(S,\theta,\phi)$ \cite{SeeSupp}: \begin{eqnarray} W(S,\theta,\phi)&=&p_0\delta_3(S)+\frac{p_1\cos\theta}{4\pi S^2}\delta(S-1)\nonumber\\ &-&\frac{p_1(1+\cos\theta)}{4\pi S}\delta'(S-1), \label{Theor_W} \end{eqnarray} where $\delta_3(S) = \delta(S_1)\delta(S_2)\delta(S_3)$, $\delta'(x)$ is the first derivative of the Dirac delta function, and \begin{equation} \begin{array}{c} S_1=S\cos\theta,\qquad S_2=S\sin\theta\cos\phi,\\ S_3=S\sin\theta\sin\phi. \label{S_cart_sph} \end{array} \end{equation} From these formulas we have calculated the theoretical PQPD $W_\epsilon(S_1,S_2,S_3)$ for the same probabilities of single-photon ($p_1=0.189$) and no-photon detection events ($p_0=0.811$) as in the experimental case. We used the same approximation (\ref{Gauss_app}) and the same value of the smoothing parameter $\epsilon=0.02$. The same cross-sections are shown for both distributions (Fig.~\ref{The_S1}). The experimental and theoretical distributions are almost indistinguishable. The only differences are caused by the reconstruction noise (Fig.~\ref{Exp_S1}g) and imperfections of the half- and quarter-wave plates (Fig.~\ref{Exp_S1}f). \begin{figure} \includegraphics[width=8.5cm]{W_123.jpg} \caption{Cross-sections of the experimental (left) and theoretical (right) PQPD $W_\epsilon(S_1,S_{23},\phi)$ (with $\epsilon=0.02$) at $\phi=0$. In all figures, the same color is used for values larger than 10 to highlight the jump at $S=1$.} \label{Exp_The_Phi} \end{figure} It is clear that the distribution $W_\epsilon(S_1,S_2,S_3)$ possesses rotational symmetry in the $(S_2,S_3)$ plane. Thus it is convenient to use cylindrical coordinates $(S_1,S_{23},\phi)$, with the radial coordinate $S_{23}=\sqrt{S_2^2+S_3^2}=S\sin\theta$, instead of the Cartesian ones $(S_1,S_2,S_3)$.
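As a numerical illustration of the theoretical distribution, the following sketch (with the experimental values $p_1=0.189$ and $\epsilon=0.02$) evaluates the smoothed single-photon part on the positive $S_1$ axis ($\theta=0$) and confirms the jump from negative to positive values across $S=1$; the no-photon peak is negligible there and is omitted:

```python
import numpy as np

p1, eps = 0.189, 0.02   # single-photon probability and smoothing parameter

def d(x):               # smoothed Dirac delta (Gaussian approximation)
    return np.exp(-x**2 / (4 * eps**2)) / (2 * eps * np.sqrt(np.pi))

def dp(x):              # its first derivative
    return -x / (2 * eps**2) * d(x)

def W(S, theta):
    # Smoothed single-photon part of the theoretical PQPD; the no-photon
    # peak p0*delta_3(S) is negligible near S = 1 and is left out here
    return (p1 * np.cos(theta) / (4 * np.pi * S**2) * d(S - 1)
            - p1 * (1 + np.cos(theta)) / (4 * np.pi * S) * dp(S - 1))

# The distribution jumps from negative to positive values across S = 1:
assert W(1 - 2 * eps, 0.0) < 0 < W(1 + 2 * eps, 0.0)
```

The negativity just inside $S=1$ comes entirely from the $\delta'(S-1)$ term, i.e. from the discreteness of the Stokes observables.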
Due to this symmetry, up to experimental imperfections, a cross-section at a single angle $\phi$ (e.g. $\phi=0$) presents all features of the PQPD (Fig.~\ref{Exp_The_Phi}). \paragraph{Conclusion.} We have experimentally demonstrated the full reconstruction of the PQPD with photon-number resolving detectors. As a result, we observed the intrinsic negativity of the PQPD originating from the discrete nature of the Stokes observables. This feature has never been observed before, because previous experiments were performed with photon-number averaging detectors. For our reconstruction we have developed a procedure that yields a high-quality PQPD from a relatively small dataset. PQPD reconstruction with photon-number resolving detectors is very promising, since novel detectors of this kind can resolve up to tens of photons with more than 90\% quantum efficiency \cite{Fukuda2011,Miki2014,Allman2015}. Such detectors can advance this direction of polarization tomography and make it a useful tool for quantum state characterization. We acknowledge the financial support of the Russian Foundation for Basic Research grants 14-02-31030 and 14-02-00399. The work of F.~Ya.~Khalili was supported by LIGO NSF grant PHY-1305863.
\section{Introduction}\label{se:introduction} Nonlinear hyperbolic conservation laws on networks have recently attracted a lot of interest in various fields: car traffic \cite{C-G-P,garavello-piccoli_ARModel_2006,gp-book,Holden-Risebro_siam_1995}, gas dynamics~\cite{MR2418671,MR2223073,MR2219276,MR2247787,MR2311526, MR2377285,colombo-guerra-herty-sachers, MR2438778,ColomboMarcellini,MR2441091}, irrigation channels~\cite{MR2357767,MR2164806,MR2055319,MR1920161} and supply chains~\cite{MR2357763,MR2318380}. A network is modeled by a graph: a finite collection of arcs connected together by vertices. On each arc we consider a scalar conservation law; for instance, one may think of the Lighthill-Whitham-Richards model for car traffic~\cite{Lighthill-Whitham_1955,Richards_1956}. However, our results apply to the other application domains as well. It is easy to check that the dynamics at nodes is not uniquely determined by imposing the conservation of mass through vertices. Hence, to completely describe the evolution of the network load, the first step is to appropriately define the concept of solution at a vertex.\\ As in the classical theory of conservation laws, this problem is equivalent to prescribing the solution of Riemann problems (now at vertices). More precisely, a Riemann problem at a vertex is simply a Cauchy problem with constant initial conditions in each arc of the vertex. The map which associates the solution to each Riemann problem at a vertex $J$ is called a Riemann solver at $J$. As in the case of the real line, one has to resort to the concept of weak solutions in the sense of distributions, and there are infinitely many Riemann solvers producing weak solutions. First, one uses entropy-type conditions inside the arcs, as on the real line. Then, in order to select a particular solution (i.e. a Riemann solver) at the vertex, one has to impose some additional conditions.
In~\cite{C-G-P}, for example, the authors required some rules about the distribution of the fluxes in the arcs and a maximization condition; see also~\cite{da-m-p,marigo-piccoli_2008_T_junction}. It is then natural to ask whether entropy-like conditions can be imposed also at the vertex and not only inside the arcs. In this paper, we focus on a single vertex $J$, composed of $n$ incoming and $m$ outgoing arcs, and we extend the Kru\v{z}kov~\cite{MR0267257} entropy-type conditions. More precisely, we propose two different entropy conditions for the admissibility of solutions, called, respectively, (E1) and (E2). The condition~(E1) is stronger than~(E2): indeed, the first requires the Kru\v{z}kov entropy condition to hold for all entropies, while the second requires it only for the particular Kru\v{z}kov entropy corresponding to the sonic point. It is interesting to note that the entropy condition~(E1) imposes strong restrictions both on Riemann solvers and on the geometry of the vertex. Indeed, Riemann solvers satisfying~(E1) can exist only in the case of vertices with the same number of incoming and outgoing arcs. We then test our conditions on Riemann solvers considered in the literature. First, we prove that the Riemann solver introduced in~\cite{da-m-p} for data networks satisfies~(E2) and, in special situations, also~(E1).\\ Then we show that the Riemann solvers defined in~\cite{C-G-P,marigo-piccoli_2008_T_junction} do not satisfy~(E2). However, at least for the Riemann solver in~\cite{C-G-P}, the entropy condition and the maximization procedure agree on a particular set, over which the maximization is taken. Roughly speaking, the solver respects the entropy condition once the traffic distribution is imposed. The paper is organized as follows. Section~\ref{se:def} introduces the basic definitions of networks and of solutions. Section~\ref{se:Riemann_problem} deals with the solution of the Riemann problem at the vertex $J$.
Moreover, we introduce the entropy conditions (E1) and (E2) for Riemann solvers at $J$. In Section~\ref{se:RS_E1}, we determine which Riemann solvers satisfy the entropy condition (E1). The paper ends with Section~\ref{se:examples}, which considers the Riemann solvers $\mathcal{RS}_1$, $\mathcal{RS}_2$ and $\mathcal{RS}_3$, introduced respectively in \cite{C-G-P,da-m-p,marigo-piccoli_2008_T_junction}, and analyzes which entropy conditions these Riemann solvers satisfy. \section{Basic Definitions and Notations}\label{se:def} Consider a node $J$ with $n$ incoming arcs $I_1,\ldots,I_n$ and $m$ outgoing arcs $I_{n+1},\ldots,I_{n+m}$. We model each incoming arc $I_i$ ($i\in\{1,\ldots,n\}$) of the node with the real interval $I_i=]-\infty,0]$ and each outgoing arc $I_j$ ($j\in\{n+1,\ldots,n+m\}$) of the node with the real interval $I_j=[0,+\infty[$. On each arc $I_l$ ($l\in\{1,\ldots,n+m\}$), the traffic evolution is given by \begin{equation} \label{eq:LWR} (\rho_l)_t+f(\rho_l)_x=0, \end{equation} where $\rho_l=\rho_l(t,x)\in[0,\rho_{max}]$ is the {\em density}, $v_l=v_l(\rho_l)$ is the {\em average velocity} and $f(\rho_l)=v_l(\rho_l)\,\rho_l$ is the {\em flux}. Hence the network load is described by a finite collection of functions $\rho_l$ defined on $[0,+\infty[\times I_l$. For simplicity, we put $\rho_{max}=1$. On the flux $f$ we make the following assumption: \begin{itemize} \item[{\bf (${\cal F}$)}] $f : [0,1] \rightarrow \mathbb R$ is a piecewise smooth concave function satisfying \begin{enumerate} \item $f(0)=f(1)=0$; \item there exists a unique $\sigma\in]0,1[$ such that $f$ is strictly increasing in $[0,\sigma[$ and strictly decreasing in $]\sigma,1]$. \end{enumerate} \end{itemize} \begin{definition} \label{deftau} Let $\tau:[0,1] \rightarrow [0,1]$ be the map such that: \begin{enumerate} \item $f(\tau(\rho))=f(\rho)$ for every $\rho\in[0,1]$; \item $\tau(\rho) \not= \rho$ for every $\rho\in[0,1]\setminus\{\sigma\}$.
\end{enumerate} \end{definition} \begin{definition} A function $\rho_l\in C([0,+\infty[;L^1_{loc}(I_l))$ is an entropy-admissible solution to~(\ref{eq:LWR}) in the arc $I_l$ if, for every $k \in [0,\rho_{max}]$ and every smooth, positive $\tilde\varphi:[0,+\infty[\times I_l\to\mathbb R$ with compact support in $]0,+\infty[\times \left(I_l\setminus\{0\}\right)$, \begin{equation} \label{eq:entsol-oneroad} \int_0^{+\infty}\int_{I_l}\Big( |\rho_l -k|{\frac{\partial \tilde\varphi} {\partial t}} + \sgn(\rho_l-k)(f(\rho_l)- f(k)) {\frac{\partial \tilde\varphi} {\partial x}}\Big)dxdt \geq 0. \end{equation} \end{definition} \begin{definition}\label{def:weak_solution} A collection of functions $\rho_l\in C([0,+\infty[;L^1_{loc}(I_l))$ ($l\in\{1,\ldots,n+m\}$) is a weak solution at $J$ if \begin{enumerate} \item for every $l\in\{1,\ldots,n+m\}$, the function $\rho_l$ is an entropy-admissible solution to~(\ref{eq:LWR}) in the arc $I_l$; \item for every $l\in\{1,\ldots,n+m\}$ and for a.e. $t>0$, the function $x\mapsto\rho_l(t,x)$ has a version with bounded total variation; \item for a.e. $t>0$, it holds \begin{equation}\label{eq:RH} \sum\limits_{i=1}^n f(\rho_i (t, 0-)) = \sum\limits_{j=n +1}^{n+m}f(\rho_j (t, 0+))\,, \end{equation} where $\rho_l$ stands for the version with bounded total variation. \end{enumerate} \end{definition} \section{The Riemann Problem at $J$}\label{se:Riemann_problem} Given $\rho_{1,0},\ldots,\rho_{n+m,0}\in[0,1]$, a Riemann problem at $J$ is a Cauchy problem at $J$ with constant initial data on each arc, i.e. \begin{equation} \label{eq:RPatJ} \left\{ \begin{array}{ll} \begin{array}{l} \frac\partial{\partial t}\rho_l+\frac\partial{\partial x}f(\rho_l)=0, \vspace{.2cm}\\ \rho_l(0,\cdot)=\rho_{l,0}, \end{array} & l\in\{1,\ldots,n+m\}. \end{array} \right. \end{equation} Now, we give some definitions for later use. The first one is the definition of a Riemann solver, a map assigning a solution to the Riemann problem~\eqref{eq:RPatJ}.
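Before turning to Riemann solvers, the standing assumption (${\cal F}$) and the map $\tau$ of Definition~\ref{deftau} can be made concrete with a small Python sketch; the LWR flux $f(\rho)=\rho(1-\rho)$ used here is only an illustrative choice, for which $\sigma=1/2$ and $\tau(\rho)=1-\rho$:

```python
def f(rho):
    # LWR flux f(rho) = rho * (1 - rho): concave, f(0) = f(1) = 0,
    # strictly increasing on [0, 1/2[ and strictly decreasing on ]1/2, 1].
    return rho * (1.0 - rho)

sigma = 0.5  # the unique maximum point of f on [0, 1]

def tau(rho):
    # tau(rho) is the point with f(tau(rho)) = f(rho) and tau(rho) != rho
    # for rho != sigma; for this symmetric flux, tau(rho) = 1 - rho.
    return 1.0 - rho

# Check the defining properties of tau on a grid of densities.
grid = [k / 100.0 for k in range(101)]
assert all(abs(f(tau(r)) - f(r)) < 1e-12 for r in grid)
assert all(tau(r) != r for r in grid if r != sigma)
```

For a non-symmetric concave flux, $\tau$ would have to be computed numerically, but the two defining properties above are unchanged.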
\begin{definition}\label{def:Riemann_solver} A Riemann solver $\mathcal{RS}$ is a function \begin{equation*} \begin{array}{rccc} \mathcal{RS}: & [0,1]^{n+m} & \longrightarrow & [0,1]^{n+m}\\ & (\rho_{1,0},\ldots,\rho_{n+m,0}) & \longmapsto & (\bar\rho_1,\ldots,\bar\rho_{n+m}) \end{array} \end{equation*} satisfying \begin{enumerate} \item \label{enum:1_def_RS} $\sum_{i=1}^nf(\bar\rho_i)=\sum_{j=n+1}^{n+m}f(\bar\rho_j)$; \item \label{enum:2_def_RS} for every $i\in\{1,\ldots,n\}$, the classical Riemann problem \begin{equation*} \left\{ \begin{array}{l} \rho_t+f(\rho)_x=0,\hspace{1cm}x\in\mathbb R,\, t>0,\vspace{.2cm}\\ \rho(0,x)=\left\{ \begin{array}{ll} \rho_{i,0}, & \textrm{ if } x<0,\\ \bar\rho_i, & \textrm{ if } x>0, \end{array} \right. \end{array} \right. \end{equation*} is solved with waves with negative speed; \item \label{enum:3_def_RS} for every $j\in\{n+1,\ldots,n+m\}$, the classical Riemann problem \begin{equation*} \left\{ \begin{array}{l} \rho_t+f(\rho)_x=0,\hspace{1cm}x\in\mathbb R,\, t>0,\vspace{.2cm}\\ \rho(0,x)=\left\{ \begin{array}{ll} \bar\rho_j, & \textrm{ if } x<0,\\ \rho_{j,0}, & \textrm{ if } x>0, \end{array} \right. \end{array} \right. \end{equation*} is solved with waves with positive speed. \end{enumerate} \end{definition} We now introduce the concepts of equilibrium and consistency for Riemann solvers. The fixed points of a Riemann solver are called equilibria, while a Riemann solver satisfies the consistency condition when its image is contained in the set of its equilibria. \begin{definition}\label{def:equilibrium} We say that $(\rho_{1,0},\ldots,\rho_{n+m,0})$ is an equilibrium for the Riemann solver $\mathcal{RS}$ if \begin{equation*} \mathcal{RS}(\rho_{1,0},\ldots,\rho_{n+m,0})=(\rho_{1,0},\ldots,\rho_{n+m,0}).
\end{equation*} \end{definition} \begin{definition}\label{def:consistency} We say that a Riemann solver $\mathcal{RS}$ satisfies the consistency condition if, for every $(\rho_{1,0},\ldots,\rho_{n+m,0})\in[0,1]^{n+m}$, the state $\mathcal{RS}(\rho_{1,0},\ldots,\rho_{n+m,0})$ is an equilibrium for $\mathcal{RS}$. \end{definition} We now introduce the concepts of entropy functions and the admissible entropy conditions (E1) and (E2) for Riemann solvers. We are essentially extending the Kru\v zkov entropy condition to the case of a node; see~\cite{MR0267257}. \begin{definition} The function $\mathcal F:[0,1]^{n+m}\times[0,1]\to\mathbb R$, defined by \begin{eqnarray} \label{eq:entropy_flux_function} \mathcal F(\rho_1,\ldots,\rho_{n+m},k) & = & \sum_{i=1}^n\sgn(\rho_i- k)\left(f(\rho_i)-f(k)\right)\\ & & -\sum_{j=n+1}^{n+m}\sgn(\rho_j- k) \left(f(\rho_j)-f(k)\right),\nonumber \end{eqnarray} is called the entropy-flux function. \end{definition} \begin{definition}\label{def:entropy_RS_E1} A Riemann solver $\mathcal{RS}$ satisfies the entropy condition (E1) if, for every initial condition $(\rho_{1,0},\ldots,\rho_{n+m,0})$ and for every $k\in[0,1]$, we have \begin{equation} \label{eq:entropy_RS_E1} \mathcal F(\bar\rho_1,\ldots,\bar\rho_{n+m},k)\ge0, \end{equation} where $(\bar\rho_1,\ldots,\bar\rho_{n+m}) =\mathcal{RS}(\rho_{1,0},\ldots,\rho_{n+m,0})$. \end{definition} \begin{remark} If $k=0$, then equation~(\ref{eq:entropy_RS_E1}) becomes $\sum_{i=1}^nf(\bar\rho_i)\ge\sum_{j=n+1}^{n+m}f(\bar\rho_j)$.\\ If $k=1$, then equation~(\ref{eq:entropy_RS_E1}) becomes $\sum_{i=1}^nf(\bar\rho_i)\le\sum_{j=n+1}^{n+m}f(\bar\rho_j)$. Therefore the entropy condition (E1) implies the conservation identity $\sum_{i=1}^nf(\bar\rho_i)=\sum_{j=n+1}^{n+m}f(\bar\rho_j)$.
\end{remark} \begin{definition}\label{def:entropy_RS_E2} A Riemann solver $\mathcal{RS}$ satisfies the entropy condition (E2) if, for every initial condition $(\rho_{1,0},\ldots,\rho_{n+m,0})$, we have \begin{equation} \label{eq:entropy_RS_E2} \mathcal F(\bar\rho_1,\ldots,\bar\rho_{n+m},\sigma)\ge0, \end{equation} where $(\bar\rho_1,\ldots,\bar\rho_{n+m}) =\mathcal{RS}(\rho_{1,0},\ldots,\rho_{n+m,0})$. \end{definition} \begin{remark} The entropy condition~(\ref{eq:entropy_RS_E1}) can be deduced in the following way. Fix, for every $l \in \{ 1, \ldots, n+m \}$, a smooth function $\varphi_l: [0, +\infty [ \times I_l \to [0, +\infty[$ with support contained in $[0, +\infty [ \times [-M,M]$ for some $M > 0$ and assume that $\varphi_{l'}(t,0)=\varphi_{l''}(t,0)$ for every $t \ge 0$ and $l', l'' \in \{ 1, \ldots, n+m \}$. Applying the divergence theorem to the inequality \begin{equation*} \sum_{l=1}^{n+m} \int_0^{+\infty} \int_{I_l} \left[ \abs{\bar \rho_l- k} \varphi_{l,t} + \sgn(\bar \rho_l- k) \left( f(\bar \rho_l) - f( k) \right) \varphi_{l,x} \right] dx dt \ge 0, \end{equation*} where $(\bar \rho_1, \ldots, \bar \rho_{n+m})$ is an equilibrium at $J$, we deduce~\eqref{eq:entropy_RS_E1}. Obviously, these kinds of entropies are not justified by physical considerations. \end{remark} Finally, let us introduce sets $\Omega_l$ and $\Phi_l$, related to the points~\ref{enum:2_def_RS} and~\ref{enum:3_def_RS} of Definition~\ref{def:Riemann_solver}. \begin{enumerate} \item For every $i\in\{1,\ldots,n\}$ define \begin{equation} \label{eq:omega_i} \Omega_i=\left\{ \begin{array}{ll} [0,f(\rho_{i,0})], & \textrm{ if } 0\le\rho_{i,0}\le\sigma, \vspace{.2cm}\\ { }[0,f(\sigma)], & \textrm{ if } \sigma\le\rho_{i,0}\le1, \end{array} \right. \end{equation} and \begin{equation} \label{eq:Phi_i} \Phi_i=\left\{ \begin{array}{ll} \{\rho_{i,0}\}\cup]\tau(\rho_{i,0}),1], & \textrm{ if } 0\le\rho_{i,0}\le\sigma,\vspace{.2cm}\\ { }[\sigma,1], & \textrm{ if } \sigma \le \rho_{i,0} \le 1. 
\end{array} \right. \end{equation} \item For every $j\in\{n+1,\ldots,n+m\}$ define \begin{equation} \label{eq:omega_j} \Omega_j=\left\{ \begin{array}{ll} [0,f(\sigma)], & \textrm{ if } 0\le\rho_{j,0}\le\sigma, \vspace{.2cm}\\ { }[0,f(\rho_{j,0})], & \textrm{ if } \sigma\le\rho_{j,0}\le1, \end{array} \right. \end{equation} and \begin{equation} \label{eq:Phi_j} \Phi_j=\left\{ \begin{array}{ll} { }[0,\sigma], & \textrm{ if } 0\le\rho_{j,0}\le\sigma,\vspace{.2cm}\\ \{\rho_{j,0}\}\cup[0,\tau(\rho_{j,0})[, & \textrm{ if } \sigma\le\rho_{j,0}\le1. \end{array} \right. \end{equation} \end{enumerate} The following Proposition links the previous sets with Definition~\ref{def:Riemann_solver}. \begin{prop}\label{prop:stati_ammissibili} The following statements hold. \begin{enumerate} \item For every $i\in\{1,\ldots,n\}$, an element $\bar\gamma$ belongs to $\Omega_i$ if and only if there exists $\bar\rho_i\in[0,1]$ such that $f(\bar\rho_i)=\bar\gamma$ and point~\ref{enum:2_def_RS} of Definition~\ref{def:Riemann_solver} is satisfied. \item For every $j\in\{n+1,\ldots,n+m\}$, an element $\bar\gamma$ belongs to $\Omega_j$ if and only if there exists $\bar\rho_j\in[0,1]$ such that $f(\bar\rho_j)=\bar\gamma$ and point~\ref{enum:3_def_RS} of Definition~\ref{def:Riemann_solver} is satisfied. \end{enumerate} \end{prop} The proof is trivial and hence omitted. The main result of this Section is that, if $n \ne m$, then no Riemann solver $\mathcal{RS}$ at $J$ satisfies the entropy condition (E1). We first need the following result. \begin{prop}\label{prop:n_ne_m} Fix a node $J$ with $n$ incoming arcs and $m$ outgoing arcs and a Riemann solver $\mathcal{RS}$ satisfying the entropy condition (E1). Denote by $(\bar\rho_1,\ldots,\bar\rho_{n+m})$ the image under $\mathcal{RS}$ of the initial condition $(\rho_{1,0},\ldots,\rho_{n+m,0})$. \begin{enumerate} \item If $n>m$, then $\min\left\{\bar\rho_1,\ldots,\bar\rho_{n}\right\}=0$.
\item If $n<m$, then $\max\left\{\bar\rho_{n+1},\ldots,\bar\rho_{n+m}\right\}=1$. \end{enumerate} \end{prop} \begin{proof} Consider first the case $n>m$. Suppose by contradiction that $\min\left\{\bar\rho_1,\ldots,\bar\rho_{n}\right\}>0$. Define the set $\mathcal J=\left\{j\in\{n+1,\ldots,n+m\}\,:\bar\rho_j=0\right\}$ and fix $0<k<\min\left\{\bar\rho_l\,:l\in\{1,\ldots,n+m\}\setminus \mathcal J\right\}$. Thus, the entropy inequality $\entropy{\bar \rho_1, \ldots, \bar \rho_{n+m}, k}$ becomes \begin{displaymath} \sum_{i=1}^n\left[f(\bar\rho_i)-f(k)\right]\ge \sum_{j\in\{n+1,\ldots,n+m\}\setminus \mathcal J}\left[f(\bar\rho_j)-f(k)\right] +\sum_{j\in \mathcal J}f(k). \end{displaymath} By point~\ref{enum:1_def_RS} of Definition~\ref{def:Riemann_solver}, we deduce that \begin{displaymath} -nf(k)\ge -(m-\#(\mathcal J))f(k)+\#(\mathcal J)f(k), \end{displaymath} where $\# (\mathcal J)$ denotes the cardinality of $\mathcal J$; thus $(m-n-2\#(\mathcal J))f(k)\ge0$, which is a contradiction, since $n>m$ and $f(k)>0$. Consider now the situation $n<m$. By contradiction we assume that $\max\left\{\bar\rho_{n+1},\ldots,\bar\rho_{n+m}\right\}<1$. Define the set $\mathcal I=\left\{i\in\{1,\ldots,n\}\,:\bar\rho_i=1\right\}$ and fix $\max\left\{\bar\rho_l\,:l\in\{1,\ldots,n+m\}\setminus \mathcal I\right\}<k<1$. Thus, the entropy inequality $\entropy{\bar \rho_1, \ldots, \bar \rho_{n+m}, k}$ becomes \begin{displaymath} \sum_{i\in\{1,\ldots,n\}\setminus \mathcal I}\left[f(k)-f(\bar\rho_i)\right] -\sum_{i\in \mathcal I}f(k)\ge \sum_{j=n+1}^{n+m}\left[f(k)-f(\bar\rho_j)\right]. \end{displaymath} By point~\ref{enum:1_def_RS} of Definition~\ref{def:Riemann_solver}, we deduce that $(n-2\#(\mathcal I)-m)f(k)\ge0$, which is a contradiction, since $n<m$ and $f(k)>0$. \end{proof} \begin{theorem} \label{thm:n_ne_m} Fix a node $J$ with $n$ incoming arcs and $m$ outgoing arcs and suppose that $n\ne m$. Then no Riemann solver $\mathcal{RS}$ at $J$ satisfies the entropy condition (E1). \end{theorem} \begin{proof} Suppose, by contradiction, that there exists a Riemann solver $\mathcal{RS}$ at $J$ satisfying the entropy condition (E1).
Assume $n>m$ and consider an initial condition $(\rho_{1,0},\ldots,\rho_{n+m,0})$ satisfying $\rho_{i,0}\ne0$ for every $i\in\{1,\ldots,n\}$. If $(\bar\rho_1,\ldots,\bar\rho_{n+m})=\mathcal{RS}(\rho_{1,0},\ldots,\rho_{n+m,0})$, then, by Proposition~\ref{prop:n_ne_m}, there exists $i_1\in\{1,\ldots,n\}$ such that $\bar\rho_{i_1}=0$, which is a contradiction, since the wave $(\rho_{i_1,0},\bar\rho_{i_1})$ does not have negative speed. Assume now $n<m$ and consider an initial condition $(\rho_{1,0},\ldots,\rho_{n+m,0})$ satisfying $\rho_{j,0}\ne1$ for every $j\in\{n+1,\ldots,n+m\}$. By Proposition~\ref{prop:n_ne_m}, if $(\bar\rho_1,\ldots,\bar\rho_{n+m})=\mathcal{RS}(\rho_{1,0},\ldots,\rho_{n+m,0})$, then there exists $j_1\in\{n+1,\ldots,n+m\}$ such that $\bar\rho_{j_1}=1$, which is a contradiction, since the wave $(\bar\rho_{j_1},\rho_{j_1,0})$ does not have positive speed. \end{proof} \section{Riemann solvers satisfying (E1)}\label{se:RS_E1} In this Section we determine which Riemann solvers satisfy the entropy condition (E1), in the sense of Definition~\ref{def:entropy_RS_E1}, for nodes with $n = m \in \{ 1, 2 \}$. In the case $n \ne m$, Theorem~\ref{thm:n_ne_m} implies that no Riemann solver satisfies (E1). Moreover, if $n = m = 1$, then there exists exactly one Riemann solver at $J$ satisfying (E1), while if $n = m = 2$, then there exist infinitely many Riemann solvers satisfying (E1); see Sections~\ref{sse:n=m=1} and~\ref{sse:n=m=2}. We do not treat the case $n = m > 2$, due to the large number of different cases. \subsection{Nodes with $n = m =1$} \label{sse:n=m=1} In this subsection, we fix a node $J$ with one incoming and one outgoing arc. The following result holds.
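Its content can also be explored numerically. The following Python sketch evaluates the entropy-flux function $\mathcal F$ of~(\ref{eq:entropy_flux_function}) on a grid of values of $k$, using the illustrative LWR flux $f(\rho)=\rho(1-\rho)$ (an assumption made only here; the text allows any flux satisfying (${\cal F}$)): a flux-matching state with $\bar\rho_1<\bar\rho_2$ passes the sampled (E1) test, while the reversed pair fails it.

```python
def f(rho):
    # Illustrative LWR flux (sigma = 1/2); any flux satisfying (F) would do.
    return rho * (1.0 - rho)

def sgn(x):
    return (x > 0) - (x < 0)

def entropy_flux(incoming, outgoing, k):
    # F(rho_1, ..., rho_{n+m}, k): incoming terms minus outgoing terms.
    return (sum(sgn(r - k) * (f(r) - f(k)) for r in incoming)
            - sum(sgn(r - k) * (f(r) - f(k)) for r in outgoing))

def satisfies_E1(incoming, outgoing, n_grid=200):
    # Condition (E1) sampled on a grid of k in [0, 1] (up to rounding).
    return all(entropy_flux(incoming, outgoing, k / n_grid) >= -1e-12
               for k in range(n_grid + 1))

# rho_1 = 0.3 < rho_2 = 0.7 with f(0.3) = f(0.7): admissible;
# the reversed pair rho_1 = 0.7 > rho_2 = 0.3 violates (E1) (e.g. at k = 1/2).
ok = satisfies_E1([0.3], [0.7])
bad = satisfies_E1([0.7], [0.3])
```

Here `ok` evaluates to `True` and `bad` to `False`, in accordance with the characterization of (E1)-admissible states at a $1\times1$ node given below.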
\begin{prop}\label{prop:1x1} A Riemann solver $\mathcal{RS}$ at $J$ satisfies the entropy condition (E1) if and only if, for every initial datum $(\rho_{1,0},\rho_{2,0})$, the image $(\bar\rho_1,\bar\rho_2)=\mathcal{RS}(\rho_{1,0},\rho_{2,0})$ satisfies either \begin{equation}\label{eq:cond1_E1} \bar\rho_1=\bar\rho_2 \end{equation} or \begin{equation}\label{eq:cond2_E1} \bar\rho_1<\bar\rho_2\quad\textrm{ and }\quad f(\bar\rho_1)=f(\bar\rho_2). \end{equation} \end{prop} \begin{proof} Consider first a Riemann solver $\mathcal{RS}$ satisfying the entropy condition (E1). By point~\ref{enum:1_def_RS} of Definition~\ref{def:Riemann_solver}, it is clear that $f(\bar\rho_1)=f(\bar\rho_2)$. Assume by contradiction that $\bar\rho_1>\bar\rho_2$. Since $f(\bar\rho_1)=f(\bar\rho_2)$, we easily deduce that $\bar\rho_2<\sigma<\bar\rho_1$. Putting $k=\sigma$ in equation~(\ref{eq:entropy_RS_E1}) we derive \begin{displaymath} f(\bar\rho_1)-f(\sigma)\ge f(\sigma)-f(\bar\rho_2), \end{displaymath} which is, by the assumptions, equivalent to $f(\bar\rho_1)\ge f(\sigma)$, and so we get a contradiction, since $\bar\rho_1\ne\sigma$ implies $f(\bar\rho_1)<f(\sigma)$. Consider now a Riemann solver $\mathcal{RS}$ such that, for every initial datum $(\rho_{1,0},\rho_{2,0})$, the image $(\bar\rho_1,\bar\rho_2)=\mathcal{RS}(\rho_{1,0},\rho_{2,0})$ satisfies either~(\ref{eq:cond1_E1}) or~(\ref{eq:cond2_E1}). It is trivial to prove that (E1) holds. \end{proof} \begin{theorem} There exists a unique Riemann solver $\mathcal{RS}$ at $J$ satisfying the entropy condition (E1). This Riemann solver satisfies the consistency condition and coincides both with the Riemann solver introduced in~\cite{C-G-P} for traffic and with the Riemann solver introduced in~\cite{da-m-p}. \end{theorem} \begin{proof} Fix an initial datum $(\rho_{1,0},\rho_{2,0})$. We show that there exists a unique $(\bar\rho_1,\bar\rho_2)$ which is the image of an entropy-admissible Riemann solver. If $\rho_{1,0}=\rho_{2,0}$, then we claim that $\bar\rho_1=\bar\rho_2=\rho_{1,0}$.
Assume by contradiction that $\bar\rho_1\ne\bar\rho_2$. In this case either $\bar\rho_1<\sigma<\bar\rho_2$ or $\bar\rho_2<\sigma<\bar\rho_1$. By Proposition~\ref{prop:1x1}, the only possibility is $\bar\rho_1<\sigma<\bar\rho_2$. By Proposition~\ref{prop:stati_ammissibili}, either $\bar\rho_1=\rho_{1,0}$ or $\bar\rho_2=\rho_{2,0}$. In the first case $\bar\rho_2=\tau(\rho_{2,0})$, while in the second one $\bar\rho_1=\tau(\rho_{1,0})$. Neither case is possible. Assume now that $\rho_{1,0}\ne\rho_{2,0}$. We distinguish several possibilities. \begin{enumerate} \item $\max\{\rho_{1,0},\rho_{2,0}\}\le\sigma$. By Proposition~\ref{prop:stati_ammissibili}, we deduce that $\bar\rho_2\in[0,\sigma]$. Moreover, by Proposition~\ref{prop:1x1}, we deduce that $\bar\rho_1=\rho_{1,0}$; hence $\bar\rho_2=\bar\rho_1=\rho_{1,0}$. This solution respects all the properties of Definition~\ref{def:Riemann_solver} and the entropy condition~(\ref{eq:entropy_RS_E1}). \item $\min\{\rho_{1,0},\rho_{2,0}\}\ge\sigma$. By Proposition~\ref{prop:stati_ammissibili}, we deduce that $\bar\rho_1\in[\sigma,1]$. Moreover, by Proposition~\ref{prop:1x1}, we deduce that $\bar\rho_2=\rho_{2,0}$; hence $\bar\rho_2=\bar\rho_1=\rho_{2,0}$. This solution respects all the properties of Definition~\ref{def:Riemann_solver} and the entropy condition~(\ref{eq:entropy_RS_E1}). \item $\rho_{1,0}<\sigma<\rho_{2,0}$.
By Proposition~\ref{prop:stati_ammissibili}, we deduce that $\bar\rho_1=\rho_{1,0}$ or $\bar\rho_1>\sigma$ and that $\bar\rho_2=\rho_{2,0}$ or $\bar\rho_2<\sigma$.\\ If $f(\rho_{1,0})=f(\rho_{2,0})$, then, by Proposition~\ref{prop:1x1}, the only possibility is that $\bar\rho_1=\rho_{1,0}$ and $\bar\rho_2=\rho_{2,0}$.\\ If $f(\rho_{1,0})>f(\rho_{2,0})$, then, by Proposition~\ref{prop:1x1}, the only possibility is that $\bar\rho_1=\bar\rho_2=\rho_{2,0}$.\\ Finally, if $f(\rho_{1,0})<f(\rho_{2,0})$, then, by Proposition~\ref{prop:1x1}, the only possibility is that $\bar\rho_1=\bar\rho_2=\rho_{1,0}$.\\ In all the cases, the solution respects all the properties of Definition~\ref{def:Riemann_solver} and the entropy condition~(\ref{eq:entropy_RS_E1}). \item $\rho_{2,0}<\sigma<\rho_{1,0}$. By Proposition~\ref{prop:stati_ammissibili}, we deduce that $\bar\rho_1\ge\sigma$ and $\bar\rho_2\le\sigma$. By Proposition~\ref{prop:1x1}, the only possibility is that $\bar\rho_1=\bar\rho_2=\sigma$. The solution respects all the properties of Definition~\ref{def:Riemann_solver} and the entropy condition~(\ref{eq:entropy_RS_E1}). \end{enumerate} The proof is completed. \end{proof} \begin{remark} In~\cite{g-n-p-t}, the authors described all the Riemann solvers, with suitable properties, for nodes $J$ with $n = m = 1$. The unique Riemann solver $\mathcal{RS}$ satisfying (E1) corresponds to the Riemann solver generated by the set $X = \{ f(\sigma) \}$ and described in Section~3.1 of~\cite{g-n-p-t}. \end{remark} \begin{remark} \label{rmk:f1f2} One can try to generalize the entropy condition (E1), at least for nodes with $n = m = 1$, to the case of fluxes depending on the arcs. Unfortunately this is not a trivial problem. Consider indeed the following example. 
Let $f_1: [0,1] \to \mathbb R$, $f_2: [0,1] \to \mathbb R$ be two fluxes satisfying $(\mathcal F)$ and assume that: \begin{enumerate} \item $f_1$ is the flux in the arc $I_1$; \item $f_2$ is the flux in the arc $I_2$; \item $\sigma = \frac12$ is the point of maximum for both $f_1$ and $f_2$; \item $f_1(\rho) < f_2(\rho)$ for every $\rho \in ]0,1[$. \end{enumerate} Choose $0 < \bar \rho_2 < \bar \rho_1 < \frac12$ such that $f_1(\bar \rho_1) = f_2(\bar \rho_2)$ and take $k \in [\bar \rho_2, \bar \rho_1]$; see Figure~\ref{fig:remark_f1f2}. Then, the entropy condition~\eqref{eq:entropy_RS_E1} becomes \begin{displaymath} f_1 (\bar \rho_1) - f_1(k) \ge f_2(k) - f_2(\bar \rho_2), \end{displaymath} which is equivalent to $f_1(k) + f_2(k) \le f_1 (\bar \rho_1) + f_2 (\bar \rho_2)$. The last inequality does not hold for $k = \bar \rho_1$ and for all $k \in [\bar \rho_2, \bar \rho_1]$ near to $\bar \rho_1$. \begin{figure} \centering \begin{psfrags} \psfrag{0}{$0$} \psfrag{1}{$1$} \psfrag{s}{$\frac12$} \psfrag{f1}{$f_1$} \psfrag{f2}{$f_2$} \psfrag{r1}{$\bar \rho_1$} \psfrag{r2}{$\bar \rho_2$} \psfrag{rho}{$\rho$} \includegraphics[width=6cm]{f1f2.eps} \end{psfrags} \caption{The situation in the example of Remark~\ref{rmk:f1f2}.} \label{fig:remark_f1f2} \end{figure} \end{remark} \subsection{Nodes with $n = m = 2$} \label{sse:n=m=2} Consider a Riemann solver $\mathcal{RS}$ for a node $J$ with two incoming and two outgoing arcs. In this subsection, we assume that $(\bar\rho_1,\bar\rho_2,\bar\rho_3,\bar\rho_4)$ denotes an equilibrium for $\mathcal{RS}$. Recall that the equilibrium must satisfy $\f 1+\f 2=\f 3+\f 4$. By symmetry, we may assume also that \begin{description} \item[(H1)] $\bar\rho_1 \le \bar\rho_2$ and $\bar\rho_3 \le \bar\rho_4$. \end{description} The results of this subsection are summarized in Table~\ref{tab:equilibrium}. \begin{prop}\label{prop:0bad} Assume (H1) and that every $\bar\rho_l$ ($l\in\{1,2,3,4\}$) is a good datum. 
\begin{enumerate} \item If $\mathcal{RS}$ satisfies the entropy condition (E1), then $\bar\rho_1=\bar\rho_2=\bar\rho_3=\bar\rho_4=\sigma$. \item If $\bar\rho_1=\bar\rho_2=\bar\rho_3=\bar\rho_4=\sigma$, then $\mathcal F(\bar\rho_1,\bar\rho_2,\bar\rho_3,\bar\rho_4,k)=0$, for every $k\in[0,1]$. \end{enumerate} \end{prop} \begin{proof} Since all the data are good, we have $\bar\rho_3\le\bar\rho_4\le\sigma\le\bar\rho_1\le\bar\rho_2$. If $k\in[\bar\rho_3,\bar\rho_4]$, then the entropy condition (E1) becomes \begin{displaymath} \f1+\f2-2\ff k\ge \f4-\f3, \end{displaymath} which is equivalent to $\ff k\le\f3$. This implies that $\f4=\f3$ and so $\bar\rho_3=\bar\rho_4$. If $k\in[\bar\rho_1,\bar\rho_2]$, then in the same way we deduce that $\bar\rho_1=\bar\rho_2$. Finally, if $k\in[\bar\rho_4,\bar\rho_1]$, then (\ref{eq:entropy_RS_E1}), coupled with the previous results, becomes \begin{displaymath} 2\f1-2\ff k\ge2\ff k-2\f4, \end{displaymath} which is equivalent to $\ff k\le\f1$. Therefore $\bar\rho_1=\sigma$ and the conclusion follows. \end{proof} \begin{prop}\label{prop:1bad} Assume (H1) and that the equilibrium $(\bar\rho_1,\bar\rho_2,\bar\rho_3,\bar\rho_4)$ for $\mathcal{RS}$ consists of three good data and one bad datum. \begin{enumerate} \item Assume that the bad datum is in an incoming arc, say $\bar\rho_1<\sigma$.\\ If $\mathcal{RS}$ satisfies (E1), then $\bar\rho_2=\sigma$ and both $\bar\rho_3$ and $\bar\rho_4$ belong to $[\bar\rho_1,\sigma]$.\\ If $\bar\rho_2=\sigma$ and both $\bar\rho_3$ and $\bar\rho_4$ belong to $[\bar\rho_1,\sigma]$, then $\entropy{\brho_1,\brho_2,\brho_3,\brho_4,k}$ for every $k\in[0,1]$.
\item Assume that the bad datum is in an outgoing arc, say $\bar \rho_4 > \sigma$.\\ If $\mathcal{RS}$ satisfies (E1), then $\bar\rho_3 = \sigma$ and both $\bar\rho_1$ and $\bar\rho_2$ belong to $[\sigma,\bar\rho_4]$.\\ If $\bar\rho_3 = \sigma$ and both $\bar\rho_1$ and $\bar\rho_2$ belong to $[\sigma,\bar\rho_4]$, then $\entropy{\brho_1,\brho_2,\brho_3,\brho_4,k}$ for every $k \in [0,1]$. \end{enumerate} \end{prop} \begin{proof} First assume that the bad datum is in an incoming arc and the Riemann solver satisfies the entropy condition (E1). Without loss of generality, suppose that $\bar\rho_1<\sigma$, $\bar\rho_2\ge\sigma$ and $\bar\rho_3\le\bar\rho_4\le\sigma$. We have three possibilities. \begin{description} \item[(a)] $\bar\rho_1\le\bar\rho_3\le\bar\rho_4$. \item[(b)] $\bar\rho_3\le\bar\rho_1\le\bar\rho_4$. \item[(c)] $\bar\rho_3\le\bar\rho_4\le\bar\rho_1$. \end{description} Consider the case \textbf{(a)}. If $k\in[\bar\rho_1,\bar\rho_3]$, then the entropy condition (E1) becomes \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$, which is true.\\ If $k\in[\bar\rho_4,\bar\rho_2]$, then the entropy condition (E1) becomes \begin{displaymath} \f2-\f1\ge2\ff k-\f3-\f4, \end{displaymath} equivalent to $\f2\ge\ff k$, which implies that $\bar\rho_2=\sigma$.\\ If $k\in[\bar\rho_3,\bar\rho_4]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f4-\f3, \end{displaymath} equivalent to $\ff\sigma\ge\f4$, which is true. Consider the case \textbf{(b)}. If $k \in [\bar\rho_3,\bar\rho_1]$, then the entropy condition (E1) reads \begin{displaymath} \f1+\f2-2\ff k\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\ff k$. This implies that $\bar\rho_3=\bar\rho_1$ and so we are in the case \textbf{(a)}. Consider the case \textbf{(c)}. If $k\in[\bar\rho_3,\bar\rho_4]$, then the entropy condition (E1) becomes \begin{displaymath} \f1+\f2-2\ff k\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\ff k$. 
This implies that $\bar\rho_3=\bar\rho_4$.\\ If $k\in[\bar\rho_4,\bar\rho_1]$, then the entropy condition (E1) reads \begin{displaymath} \f1+\f2-2\ff k\ge2\ff k-2\f4, \end{displaymath} i.e. $\f4\ge\ff k$. This implies that $\bar\rho_4=\bar\rho_1$ and so we have a contradiction since, by case \textbf{(a)}, $\bar\rho_1=\bar\rho_3=\bar\rho_4<\sigma=\bar\rho_2$ and so $\f1+\f2\ne\f3+\f4$. The second statement in the case the bad datum is in an incoming arc easily follows. Assume now that the bad datum is in an outgoing arc and that the Riemann solver satisfies the entropy condition (E1). Without loss of generality, suppose that $\bar\rho_3 \le \sigma$, $\bar\rho_4 > \sigma$ and $\sigma \le \bar\rho_1 \le \bar\rho_2$. We have three possibilities. \begin{description} \item[(a)] $\bar\rho_1\le\bar\rho_2\le\bar\rho_4$. \item[(b)] $\bar\rho_1\le\bar\rho_4\le\bar\rho_2$. \item[(c)] $\bar\rho_4\le\bar\rho_1\le\bar\rho_2$. \end{description} Consider the case \textbf{(a)}. If $k \in [\bar\rho_3,\bar\rho_1]$, then the entropy condition (E1) becomes \begin{displaymath} \f1+\f2-2\ff k\ge\f4-\f3, \end{displaymath} i.e. $\f3 \ge \ff k$. This implies that $\bar\rho_3 = \sigma$.\\ If $k \in [\bar\rho_2,\bar\rho_4]$, then (\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} 2\ff k-\f1-\f2\ge\f4-\f3, \end{displaymath} equivalent to $\ff k \ge \f4$, which is true.\\ If $k\in[\bar\rho_1,\bar\rho_2]$, then (\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} \f2-\f1\ge\f4-\f3, \end{displaymath} equivalent to $f(\sigma) = \f3 \ge \f1$, which is true. Consider the case \textbf{(b)}. If $k \in [\bar\rho_4,\bar\rho_2]$, then the entropy condition (E1) reads \begin{displaymath} \f2 - \f1 \ge 2 \ff k - \f4 - \f3, \end{displaymath} which is equivalent to $\f2 \ge \ff k$. Thus we deduce that $\bar\rho_2=\bar\rho_4$ and so we are in the case \textbf{(a)}. Consider the case \textbf{(c)}. 
If $k \in [\bar\rho_1,\bar\rho_2]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge2\ff k-\f4-\f3, \end{displaymath} equivalent to $\f2\ge\ff k$. This implies that $\bar\rho_1=\bar\rho_2$.\\ If $k \in [\bar\rho_3,\bar\rho_4]$, then (\ref{eq:entropy_RS_E1}) reads \begin{displaymath} \f1+\f2-2\ff k\ge\f4-\f3, \end{displaymath} i.e. $\f3\ge\ff k$ and so $\bar\rho_3=\sigma$. Therefore $\bar\rho_1=\bar\rho_2=\bar\rho_3=\bar\rho_4=\sigma$, which is a contradiction. The second statement of item~2 of the Proposition easily follows. The proof is finished. \end{proof} \begin{prop}\label{prop:2bad} Assume (H1) and that the equilibrium $(\bar\rho_1,\bar\rho_2,\bar\rho_3,\bar\rho_4)$ for $\mathcal{RS}$ consists of two good data and two bad data. \begin{enumerate} \item Assume that $\bar\rho_2<\sigma$, i.e. the bad data are both in the incoming arcs.\\ If the Riemann solver $\mathcal{RS}$ satisfies the entropy condition (E1), then $\bar\rho_1\le\bar\rho_3\le\bar\rho_4\le\bar\rho_2$.\\ If $\bar\rho_1\le\bar\rho_3\le\bar\rho_4\le\bar\rho_2 < \sigma$, then $\entropy{\brho_1,\brho_2,\brho_3,\brho_4,k}$ for every $k\in[0,1]$. \item Assume that $\bar\rho_3>\sigma$, i.e. the bad data are in the outgoing arcs.\\ If the Riemann solver $\mathcal{RS}$ satisfies the entropy condition (E1), then $\bar\rho_3\le\bar\rho_1\le\bar\rho_2\le\bar\rho_4$.\\ If $\sigma < \bar\rho_3 \le \bar\rho_1 \le \bar\rho_2 \le \bar\rho_4$, then $\entropy{\brho_1,\brho_2,\brho_3,\brho_4,k}$ for every $k\in[0,1]$. \item Assume that $\bar\rho_1<\sigma<\bar\rho_4$, i.e. the bad data are in the arcs $I_1$ and $I_4$.\\ If the Riemann solver $\mathcal{RS}$ satisfies the entropy condition (E1), then $\bar\rho_1\le\bar\rho_3\le\sigma\le\bar\rho_2\le\bar\rho_4$.\\ If $\bar\rho_1\le\bar\rho_3\le\sigma\le\bar\rho_2\le\bar\rho_4$, then $\entropy{\brho_1,\brho_2,\brho_3,\brho_4,k}$ for every $k\in[0,1]$.
\end{enumerate} \end{prop} \begin{proof} Assume that $\bar\rho_2<\sigma$ and that the Riemann solver satisfies the entropy condition (E1). Since there are exactly two bad data, we have $\bar\rho_4\le\sigma$. The conservation of mass at $J$ implies that we have the following possibilities. \begin{description} \item[(a)] $\bar\rho_1\le\bar\rho_3\le\bar\rho_4\le\bar\rho_2$. \item[(b)] $\bar\rho_3\le\bar\rho_1\le\bar\rho_2\le\bar\rho_4$. \end{description} Consider the case \textbf{(a)}. If $k\in[\bar\rho_1,\bar\rho_3]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$, which is true.\\ If $k\in[\bar\rho_3,\bar\rho_4]$, then~(\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} \f2-\f1\ge\f4-\f3, \end{displaymath} which clearly holds.\\ If $k\in[\bar\rho_4,\bar\rho_2]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge2\ff k-\f3-\f4, \end{displaymath} equivalent to $\f2\ge\ff k$, which is true. Consider the case \textbf{(b)}. If $k\in[\bar\rho_3,\bar\rho_1]$, then the entropy condition (E1) reads \begin{displaymath} \f1+\f2-2\ff k\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\ff k$. This implies that $\bar\rho_1=\bar\rho_3$ and consequently $\bar\rho_2=\bar\rho_4$. The second statement of item~1 of the Proposition easily follows. Assume now that $\bar\rho_3>\sigma$ and the Riemann solver satisfies the entropy condition (E1). Consequently $\bar\rho_1\ge\sigma$. Since $\f1+\f2=\f3+\f4$, we have the following possibilities. \begin{description} \item[(a)] $\bar\rho_3\le\bar\rho_1\le\bar\rho_2\le\bar\rho_4$. \item[(b)] $\bar\rho_1\le\bar\rho_3\le\bar\rho_4\le\bar\rho_2$. \end{description} Consider the case \textbf{(a)}.
If $k\in[\bar\rho_3,\bar\rho_1]$, then the entropy condition (E1) reads \begin{displaymath} \f1+\f2-2\ff k\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\ff k$, which is true.\\ If $k\in[\bar\rho_1,\bar\rho_2]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f4-\f3, \end{displaymath} which clearly holds.\\ If $k\in[\bar\rho_2,\bar\rho_4]$, then~(\ref{eq:entropy_RS_E1}) reads \begin{displaymath} 2\ff k-\f1-\f2\ge\f4-\f3, \end{displaymath} equivalent to $\ff k\ge\f4$, which is true. Consider the case \textbf{(b)}. If $k\in[\bar\rho_1,\bar\rho_3]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$. This implies $\bar\rho_1=\bar\rho_3$ and so $\bar\rho_2=\bar\rho_4$. The second statement of item~2 of the Proposition easily follows. Assume now $\bar\rho_1<\sigma<\bar\rho_4$, i.e. the bad data are in the arcs $I_1$ and $I_4$, and that the Riemann solver satisfies the entropy condition (E1). We have the following possibilities. \begin{description} \item[(a)] $\bar\rho_1\le\bar\rho_3\le\sigma\le\bar\rho_2\le\bar\rho_4$. \item[(b)] $\bar\rho_3\le\bar\rho_1<\sigma<\bar\rho_4\le\bar\rho_2$. \end{description} Consider the case \textbf{(a)}. If $k\in[\bar\rho_1,\bar\rho_3]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$, which is true.\\ If $k\in[\bar\rho_3,\bar\rho_2]$, then (\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} \f2-\f1\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\f1$, which is true.\\ If $k\in[\bar\rho_2,\bar\rho_4]$, then the entropy condition (E1) becomes \begin{displaymath} 2\ff k-\f1-\f2\ge\f4-\f3, \end{displaymath} equivalent to $\ff k\ge\f4$, which is true. Consider the case \textbf{(b)}. If $k\in[\bar\rho_3,\bar\rho_1]$, then the entropy condition (E1) reads \begin{displaymath} \f1+\f2-2\ff k\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\ff k$.
This implies $\bar\rho_1=\bar\rho_3$ and so $\bar\rho_2=\bar\rho_4$. The second statement of item~3 of the Proposition easily follows. The proof is finished. \end{proof} \begin{prop}\label{prop:3bad} Assume (H1) and that the equilibrium $(\bar\rho_1,\bar\rho_2,\bar\rho_3,\bar\rho_4)$ for $\mathcal{RS}$ consists of three bad data and one good datum. \begin{enumerate} \item Assume that $\bar\rho_2\ge\sigma$, i.e. the good datum is in an incoming arc.\\ If the Riemann solver satisfies the entropy condition (E1), then $\bar\rho_1<\sigma$, $\bar\rho_3>\sigma$, $\bar\rho_2\le\bar\rho_4$ and $\f1\le\max\left\{\f2,\f3\right\}$.\\ If $\bar\rho_1<\sigma$, $\bar\rho_3>\sigma$, $\bar\rho_2\le\bar\rho_4$ and $\f1\le\max\left\{\f2,\f3\right\}$, then $\entropy{\brho_1,\brho_2,\brho_3,\brho_4,k}$ for every $k\in[0,1]$. \item Assume that $\bar\rho_3\le\sigma$, i.e. the good datum is in an outgoing arc.\\ If the Riemann solver satisfies the entropy condition (E1), then $\bar\rho_2<\sigma$, $\bar\rho_4>\sigma$, $\bar\rho_3\ge\bar\rho_1$ and $\f4\le\max\left\{\f2,\f3\right\}$.\\ If $\bar\rho_2<\sigma$, $\bar\rho_4>\sigma$, $\bar\rho_3\ge\bar\rho_1$ and $\f4\le\max\left\{\f2,\f3\right\}$, then $\entropy{\brho_1,\brho_2,\brho_3,\brho_4,k}$ for every $k\in[0,1]$. \end{enumerate} \end{prop} \begin{proof} Assume first that $\bar\rho_2\ge\sigma$ and that the Riemann solver satisfies the entropy condition (E1). We easily deduce that $\bar\rho_1<\sigma<\bar\rho_3\le\bar\rho_4$. We have the following possibilities. \begin{description} \item[(a)] $\bar\rho_1<\sigma\le\bar\rho_2\le\bar\rho_3\le\bar\rho_4$. \item[(b)] $\bar\rho_1<\sigma<\bar\rho_3\le\bar\rho_2\le\bar\rho_4$. \item[(c)] $\bar\rho_1<\sigma<\bar\rho_3\le\bar\rho_4\le\bar\rho_2$. \end{description} Consider the case \textbf{(a)}. If $k\in[\bar\rho_1,\bar\rho_2]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$.
This implies that $\f2\ge\f1$.\\ If $k\in[\bar\rho_2,\bar\rho_3]$, then~(\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} 2\ff k-\f1-\f2\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $2\ff k\ge\f3+\f4$, which is true.\\ If $k\in[\bar\rho_3,\bar\rho_4]$, then the entropy condition (E1) reads \begin{displaymath} 2\ff k-\f1-\f2\ge\f4-\f3, \end{displaymath} equivalent to $\ff k\ge\f4$, which is true. Consider the case \textbf{(b)}. If $k\in[\bar\rho_1,\bar\rho_3]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$. This implies that $\f3\ge\f1$.\\ If $k\in[\bar\rho_3,\bar\rho_2]$, then~(\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} \f2-\f1\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\f1$.\\ If $k\in[\bar\rho_2,\bar\rho_4]$, then the entropy condition (E1) reads \begin{displaymath} 2\ff k-\f1-\f2\ge\f4-\f3, \end{displaymath} equivalent to $\ff k\ge\f4$, which is true. Consider the case \textbf{(c)}. If $k\in[\bar\rho_4,\bar\rho_2]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge2\ff k-\f3-\f4, \end{displaymath} equivalent to $\f2\ge\ff k$. This implies that $\bar\rho_2=\bar\rho_4$ and so we are in the case \textbf{(b)}. The second statement in item~1 of the Proposition easily follows. Assume now that $\bar\rho_3\le\sigma$ and that the Riemann solver satisfies the entropy condition (E1). We easily deduce that $\bar\rho_1\le\bar\rho_2<\sigma<\bar\rho_4$. We have the following possibilities. \begin{description} \item[(a)] $\bar\rho_1\le\bar\rho_2\le\bar\rho_3\le\sigma<\bar\rho_4$. \item[(b)] $\bar\rho_1\le\bar\rho_3\le\bar\rho_2<\sigma<\bar\rho_4$. \item[(c)] $\bar\rho_3\le\bar\rho_1\le\bar\rho_2<\sigma<\bar\rho_4$. \end{description} Consider the case \textbf{(a)}.
If $k\in[\bar\rho_1,\bar\rho_2]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$, which is true.\\ If $k\in[\bar\rho_2,\bar\rho_3]$, then~(\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} 2\ff k-\f1-\f2\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $2\ff k\ge\f1+\f2$, which is true.\\ If $k\in[\bar\rho_3,\bar\rho_4]$, then the entropy condition (E1) reads \begin{displaymath} 2\ff k-\f1-\f2\ge\f4-\f3, \end{displaymath} equivalent to $\ff k\ge\f4$. This implies that $\f3\ge\f4$. Consider the case \textbf{(b)}. If $k\in[\bar\rho_1,\bar\rho_3]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$, which is true.\\ If $k\in[\bar\rho_3,\bar\rho_2]$, then~(\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} \f2-\f1\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\f1$, which is true.\\ If $k\in[\bar\rho_2,\bar\rho_4]$, then the entropy condition (E1) reads \begin{displaymath} 2\ff k-\f1-\f2\ge\f4-\f3, \end{displaymath} equivalent to $\ff k\ge\f4$. This implies that $\f2\ge\f4$. Consider the case \textbf{(c)}. If $k\in[\bar\rho_3,\bar\rho_1]$, then the entropy condition (E1) reads \begin{displaymath} \f1+\f2-2\ff k\ge\f4-\f3, \end{displaymath} equivalent to $\f3\ge\ff k$. This implies that $\bar\rho_1=\bar\rho_3$ and so we are in the case \textbf{(b)}. The second statement in item~2 of the Proposition easily follows. The proof is finished. \end{proof} \begin{prop}\label{prop:4bad} Assume (H1) and that the equilibrium $(\bar\rho_1,\bar\rho_2,\bar\rho_3,\bar\rho_4)$ for $\mathcal{RS}$ consists of four bad data. If the Riemann solver satisfies the entropy condition (E1), then $\bar\rho_1 \le \bar\rho_2 < \sigma < \bar\rho_3 \le \bar\rho_4$. Moreover, if $\bar\rho_1 \le \bar\rho_2 < \sigma < \bar\rho_3 \le \bar\rho_4$, then $\entropy{\brho_1,\brho_2,\brho_3,\brho_4,k}$ for every $k\in[0,1]$.
\end{prop} \begin{proof} It is sufficient to check the entropy condition (E1). If $k\in[\bar\rho_1,\bar\rho_2]$, then the entropy condition (E1) reads \begin{displaymath} \f2-\f1\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $\ff k\ge\f1$, which is true.\\ If $k\in[\bar\rho_2,\bar\rho_3]$, then~(\ref{eq:entropy_RS_E1}) becomes \begin{displaymath} 2\ff k-\f1-\f2\ge\f3+\f4-2\ff k, \end{displaymath} equivalent to $2\ff k\ge\f3+\f4$, which is true. If $k\in[\bar\rho_3,\bar\rho_4]$, then the entropy condition (E1) reads \begin{displaymath} 2\ff k-\f1-\f2\ge\f4-\f3, \end{displaymath} equivalent to $\ff k\ge\f4$, which is true. This concludes the proof. \end{proof} \begin{table}[t] \centering \begin{tabular}{|c|l|} \hline \small{Bad data} & \hspace{2.5cm}admissible configurations\\ \hline \hline $0$ & $\bar\rho_1 = \bar\rho_2 = \bar\rho_3 = \bar\rho_4 = \sigma$\\ \hline $1$ & $\bar\rho_1 \le \bar\rho_3 \le \bar\rho_4 \le \sigma = \bar\rho_2$, \hspace{.5cm} $\bar\rho_1 < \sigma$\\ \cline{2-2} & $\bar\rho_3 = \sigma \le \bar\rho_1 \le \bar\rho_2 \le \bar\rho_4$, \hspace{.5cm} $\bar\rho_4 > \sigma$\\ \hline \hline $2$ & $\bar\rho_1 \le \bar\rho_3 \le \bar\rho_4 \le \bar\rho_2 < \sigma$\\ \cline{2-2} & $\sigma < \bar\rho_3 \le \bar\rho_1 \le \bar\rho_2 \le \bar\rho_4$\\ \cline{2-2} & $\bar\rho_1 \le \bar\rho_3 \le \sigma \le \bar\rho_2 \le \bar\rho_4$, \hspace{.5cm} $\bar\rho_1 < \sigma < \bar\rho_4$\\ \hline \hline $3$ & $\bar\rho_1 < \sigma < \bar\rho_3 \le \bar\rho_4$,\,\, $\sigma \le \bar\rho_2 \le \bar\rho_4$,\,\, $\f1 \le \max\left\{ \f2, \f3 \right\}$\\ \cline{2-2} & $\bar\rho_1 \le \bar\rho_2 < \sigma < \bar\rho_4$,\,\, $\bar\rho_1 \le \bar\rho_3 \le \sigma$,\,\, $\f4 \le \max\left\{ \f2, \f3 \right\}$\\ \hline \hline $4$ & $\bar\rho_1 \le \bar\rho_2 < \sigma < \bar\rho_3 \le \bar\rho_4$\\ \hline \end{tabular} \caption{All the possible configurations for an equilibrium $(\brho_1,\brho_2,\brho_3,\brho_4)$ of a $\mathcal{RS}$ satisfying the entropy condition (E1). 
By symmetry, we assume that $\bar\rho_1 \le \bar\rho_2$ and $\bar\rho_3 \le \bar\rho_4$, i.e. (H1) holds.} \label{tab:equilibrium} \end{table} \begin{remark} \label{rmk:RS_E1} Note that there exist Riemann solvers satisfying the consistency condition and the entropy condition (E1). Here we construct a Riemann solver $\mathcal{RS}$ with such properties.\\ Consider an initial condition $(\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0})$. Denote by $(\hat \rho_1, \hat \rho_2, \hat \rho_3, \hat \rho_4)$ the image of the initial condition under $\mathcal{RS}$, i.e. \begin{displaymath} (\hat \rho_1, \hat \rho_2, \hat \rho_3, \hat \rho_4) = \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}). \end{displaymath} If $h$ is the number of bad initial data, then we define $\mathcal{RS}$ according to the following possibilities. \begin{description} \item[$h=0$.] We put $\hat \rho_1 = \hat \rho_2 = \hat \rho_3 = \hat \rho_4 = \sigma$. By Proposition~\ref{prop:0bad}, this provides an entropy admissible equilibrium. Moreover \begin{displaymath} \mathcal{RS} \left( \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}) \right) = \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}). \end{displaymath} \item[$h=1$.] Let $\bar l\in\{1,2,3,4\}$ be such that $\rho_{\bar l,0}$ is a bad datum. We have two possibilities: $\bar l \le 2$ or $\bar l \ge 3$.\\ Assume first $\bar l\le 2$. We put $\hat \rho_{\bar l} = \rho_{\bar l,0}$ and $\hat \rho_l = \sigma$ for $l \in\{1,2\}$, $l \ne \bar l$. Moreover we define $\hat \rho_3 = \hat \rho_1$ and $\hat \rho_4 = \hat \rho_2$.\\ Assume now $\bar l\ge 3$. We put $\hat \rho_{\bar l} = \rho_{\bar l,0}$ and $\hat \rho_l = \sigma$ for $l \in\{3,4\}$, $l \ne \bar l$. Moreover we define $\hat \rho_1 = \hat \rho_3$ and $\hat \rho_2 = \hat \rho_4$.\\ By Proposition~\ref{prop:1bad}, these solutions provide entropy admissible equilibria.
Moreover \begin{displaymath} \mathcal{RS} \left( \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}) \right) = \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}). \end{displaymath} \item[$h=2$.] Let $l_1, l_2 \in \{1,2,3,4\}$, $l_1\ne l_2$, be such that $\rho_{l_1,0}$ and $\rho_{l_2,0}$ are bad data. We have three different possibilities.\\ Assume first that $l_1, l_2\in\{1,2\}$. In this case we put $\hat \rho_{l_1} = \rho_{l_1,0}$, $\hat \rho_{l_2} = \rho_{l_2,0}$, $\hat \rho_3 = \hat \rho_1$ and $\hat \rho_4 = \hat \rho_2$.\\ Assume now that $l_1, l_2\in\{3,4\}$. In this case we put $\hat \rho_{l_1} = \rho_{l_1,0}$, $\hat \rho_{l_2} = \rho_{l_2,0}$, $\hat \rho_1 = \hat \rho_3$ and $\hat \rho_2 = \hat \rho_4$.\\ Finally, consider the mixed case, where one bad datum is in an incoming arc and one in an outgoing arc. For simplicity suppose that $l_1 = 1$ and $l_2 = 4$. We define $\hat \rho_{l_1} = \rho_{l_1,0}$, $\hat \rho_{l_2} = \rho_{l_2,0}$, $\hat \rho_2 = \hat \rho_{l_2}$ and $\hat \rho_3 = \hat \rho_{l_1}$. By Proposition~\ref{prop:2bad}, these solutions provide entropy admissible equilibria. Moreover \begin{displaymath} \mathcal{RS} \left( \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}) \right) = \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}). \end{displaymath} \item[$h=3$.] Let $\bar l\in\{1,2,3,4\}$ be such that $\rho_{\bar l,0}$ is a good datum.
We have two possibilities: $\bar l \le 2$ or $\bar l \ge 3$.\\ Assume first $\bar l \le 2$; say $\bar l = 2$ for simplicity.\\ If $f(\rho_{3,0}) + f(\rho_{4,0}) - f(\rho_{1,0}) \in \left[\min\left\{ f(\rho_{3,0}), f(\rho_{4,0})\right\}, f(\sigma)\right]$, then we put $\hat \rho_l = \rho_{l,0}$ for every $l \in \{1,2,3,4\}$, $l \ne \bar l$ and $\hat \rho_{\bar l}\in [\sigma, 1]$ such that $f(\hat \rho_{2}) = f(\rho_{3,0}) + f(\rho_{4,0}) - f(\rho_{1,0})$.\\ If $f(\rho_{3,0}) + f(\rho_{4,0}) - f(\rho_{1,0}) > f(\sigma)$ and $f(\rho_{3,0}) \ge f(\rho_{4,0})$, then $\hat \rho_1 = \hat \rho_3 =\rho_{1,0}$ and $\hat \rho_2 = \hat \rho_4 =\rho_{4,0}$.\\ If $f(\rho_{3,0}) + f(\rho_{4,0}) - f(\rho_{1,0}) > f(\sigma)$ and $f(\rho_{3,0}) < f(\rho_{4,0})$, then $\hat \rho_2 = \hat \rho_3 =\rho_{3,0}$ and $\hat \rho_1 = \hat \rho_4 =\rho_{1,0}$.\\ If $f(\rho_{3,0}) + f(\rho_{4,0}) - f(\rho_{1,0}) < \min\left\{ f(\rho_{3,0}), f(\rho_{4,0})\right\}$, then $\hat \rho_2 = \hat \rho_4 =\rho_{3,0}$ and $\hat \rho_1 = \hat \rho_3 =\rho_{4,0}$. 
Assume now $\bar l \ge 3$; say $\bar l = 3$ for simplicity.\\ If $f(\rho_{1,0}) + f(\rho_{2,0}) - f(\rho_{4,0}) \in \left[\min\left\{ f(\rho_{1,0}), f(\rho_{2,0})\right\}, f(\sigma)\right]$, then we put $\hat \rho_l = \rho_{l,0}$ for every $l \in \{1,2,3,4\}$, $l \ne \bar l$ and $\hat \rho_{\bar l}\in [0,\sigma]$ such that $f(\hat \rho_{3}) = f(\rho_{1,0}) + f(\rho_{2,0}) - f(\rho_{4,0})$.\\ If $f(\rho_{1,0}) + f(\rho_{2,0}) - f(\rho_{4,0}) > f(\sigma)$ and $f(\rho_{1,0}) \ge f(\rho_{2,0})$, then $\hat \rho_1 = \hat \rho_4 =\rho_{4,0}$ and $\hat \rho_2 = \hat \rho_3 =\rho_{2,0}$.\\ If $f(\rho_{1,0}) + f(\rho_{2,0}) - f(\rho_{4,0}) > f(\sigma)$ and $f(\rho_{2,0}) > f(\rho_{1,0})$, then $\hat \rho_2 = \hat \rho_4 =\rho_{4,0}$ and $\hat \rho_1 = \hat \rho_3 =\rho_{1,0}$.\\ If $f(\rho_{1,0}) + f(\rho_{2,0}) - f(\rho_{4,0}) < \min\left\{ f(\rho_{1,0}), f(\rho_{2,0})\right\}$, then $\hat \rho_2 = \hat \rho_4 =\rho_{2,0}$ and $\hat \rho_1 = \hat \rho_3 =\rho_{1,0}$. By Propositions~\ref{prop:2bad} and~\ref{prop:3bad}, these solutions provide entropy admissible equilibria. Moreover \begin{displaymath} \mathcal{RS} \left( \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}) \right) = \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}). \end{displaymath} \item[$h=4$.] We have some different cases. Assume first that $f(\rho_{1,0}) + f(\rho_{2,0}) = f(\rho_{3,0}) + f(\rho_{4,0})$. We put $\hat \rho_1 = \rho_{1,0}$, $\hat \rho_2 = \rho_{2,0}$, $\hat \rho_3 = \rho_{3,0}$ and $\hat \rho_4 = \rho_{4,0}$.\\ Assume now that $f(\rho_{1,0}) + f(\rho_{2,0}) < f(\rho_{3,0}) + f(\rho_{4,0})$. 
For simplicity suppose that $f(\rho_{1,0}) \le f(\rho_{2,0})$ and $f(\rho_{3,0}) \ge f(\rho_{4,0})$.\\ If $f(\rho_{4,0}) > f(\rho_{2,0})$, then we put $\hat \rho_1 = \hat \rho_3 = \rho_{1,0}$ and $\hat \rho_2 = \hat \rho_4 = \rho_{2,0}$.\\ If $f(\rho_{4,0}) \le f(\rho_{2,0})$, then we put $\hat \rho_1 = \rho_{1,0}$, $\hat \rho_2 = \rho_{2,0}$, $\hat \rho_4 = \rho_{4,0}$ and $\hat \rho_3 \in [0,\sigma]$ such that $f(\hat \rho_3) = f(\hat \rho_1) + f(\hat \rho_2) - f(\hat \rho_4)$. Assume finally that $f(\rho_{1,0}) + f(\rho_{2,0}) > f(\rho_{3,0}) + f(\rho_{4,0})$. For simplicity suppose that $f(\rho_{1,0}) \le f(\rho_{2,0})$ and $f(\rho_{3,0}) \ge f(\rho_{4,0})$.\\ If $f(\rho_{1,0}) > f(\rho_{3,0})$, then we put $\hat \rho_1 = \hat \rho_3 = \rho_{3,0}$ and $\hat \rho_2 = \hat \rho_4 = \rho_{4,0}$.\\ If $f(\rho_{1,0}) \le f(\rho_{3,0})$, then we put $\hat \rho_1 = \rho_{1,0}$, $\hat \rho_3 = \rho_{3,0}$, $\hat \rho_4 = \rho_{4,0}$ and $\hat \rho_2 \in [\sigma, 1]$ such that $f(\hat \rho_2) = f(\hat \rho_3) + f(\hat \rho_4) - f(\hat \rho_1)$. By Propositions~\ref{prop:2bad}, \ref{prop:3bad} and~\ref{prop:4bad}, these solutions provide entropy admissible equilibria. Moreover \begin{displaymath} \mathcal{RS} \left( \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}) \right) = \mathcal{RS} (\rho_{1,0}, \rho_{2,0}, \rho_{3,0}, \rho_{4,0}). \end{displaymath} \end{description} \end{remark} \begin{remark} Another example of Riemann solver satisfying the entropy condition (E1) for a node with two incoming and two outgoing arcs is a particular case of the Riemann solver $\mathcal{RS}_2$, defined in Section~\ref{ssec:rs2}; see Proposition~\ref{prop:rs2_E1}. The Riemann solver $\mathcal{RS}$, constructed in Remark~\ref{rmk:RS_E1}, differs from the Riemann solver $\mathcal{RS}_2$. The key difference is that a permutation of initial data in incoming (resp. outgoing) arcs influences the solution in outgoing (resp. 
incoming) arcs in the case of $\mathcal{RS}$, but not in the case of $\mathcal{RS}_2$.\\ Consider the following example. Let $f(\rho) = 4 \rho (1 - \rho)$ be the flux. Assume that $\left( \frac14, \frac34, \frac14, \frac14 \right)$ and $\left( \frac34, \frac14, \frac14, \frac14 \right)$ are two initial conditions. In both cases, we have only one bad datum and so, using the notation of Remark~\ref{rmk:RS_E1}, $h = 1$. Hence we deduce \begin{displaymath} \mathcal{RS} \left( \frac14, \frac34, \frac14, \frac14 \right) = \left( \frac14, \frac12, \frac14, \frac12 \right) \end{displaymath} and \begin{displaymath} \mathcal{RS} \left( \frac34, \frac14, \frac14, \frac14 \right) = \left( \frac12, \frac14, \frac12, \frac14 \right), \end{displaymath} while \begin{displaymath} \mathcal{RS}_2 \left( \frac14, \frac34, \frac14, \frac14 \right) = \mathcal{RS}_2 \left( \frac34, \frac14, \frac14, \frac14 \right); \end{displaymath} see Section~\ref{ssec:rs2}. \end{remark} \section{Examples} \label{se:examples} This Section deals with some examples of Riemann solvers introduced in the literature for describing car and data traffic. For each of them, we analyze the entropy conditions (E1) and (E2). First we need some notation. Consider the set \begin{equation} \label{eq:calA} \mathcal A:=\left\{ \begin{array}{ll} A=\{a_{ji}\}_{\substack{i=1,\ldots,n\\ j=n+1,\ldots,n+m}}: & \begin{array}{l} 0 < a_{ji} < 1\,\, \forall i,j,\\ \sum\limits_{j=n+1}^{n+m} a_{ji} =1\,\,\forall i \end{array} \end{array} \right\}. \end{equation} Let $\{e_1,\ldots,e_n\}$ be the canonical basis of $\mathbb R^n$. For every $i=1,\ldots,n$, we denote $H_i=\{e_i\}^\bot$. If $A\in\mathcal A$, then we write, for every $j=n+1,\ldots,n+m$, $a_j=(a_{j1},\ldots,a_{jn})\in\mathbb R^n$ and $H_j=\{a_j\}^\bot$.
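The membership constraints defining the set $\mathcal A$ in~(\ref{eq:calA}) are easy to check mechanically: every entry must lie strictly between $0$ and $1$, and the entries attached to each incoming arc must sum to $1$. The following Python sketch is purely illustrative (the helper name and the storage convention, with rows indexed by outgoing arcs and columns by incoming arcs, are ours):

```python
from fractions import Fraction as F

def in_cal_a(A, n, m):
    """Check membership in the set calA: every entry a_ji lies strictly
    between 0 and 1, and each column (fixed incoming arc i) sums to 1."""
    for i in range(n):
        col = [A[j][i] for j in range(m)]
        if not all(F(0) < a < F(1) for a in col):
            return False
        if sum(col) != 1:
            return False
    return True

# n = m = 2: traffic from incoming arc 1 splits 1/3 vs 2/3 between the
# outgoing arcs, traffic from incoming arc 2 splits evenly.
A = [[F(1, 3), F(1, 2)],
     [F(2, 3), F(1, 2)]]
print(in_cal_a(A, 2, 2))   # True: both columns sum to 1

B = [[F(1, 3), F(1, 2)],
     [F(1, 3), F(1, 2)]]
print(in_cal_a(B, 2, 2))   # False: the first column sums to 2/3
```

Exact rational arithmetic (`fractions.Fraction`) is used so that the column-sum test is not spoiled by floating-point rounding.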
Let $\mathcal{K}$ be the set of indices ${\bf k}=(k_1,\ldots,k_\ell)$, $1\leq\ell\leq n-1$, such that $0\leq k_1<k_2<\cdots<k_\ell\leq n+m$ and for every ${\bf k}\in\mathcal{K}$ define \begin{equation*} H_{\bf k}=\bigcap\limits_{h=1}^\ell H_{k_h}. \end{equation*} Writing ${\bf 1}=(1,\ldots,1)\in\mathbb R^n$ and following \cite{C-G-P} we define the set \begin{equation} \label{eq:frakn} \mathfrak N:=\left\{A\in\mathcal A:{\bf 1}\notin H_{\bf k}^\bot\, \textrm{ for every } {\bf k}\in\mathcal{K} \right\}\,. \end{equation} Notice that, if $n> m$, then $\mathfrak N=\emptyset$. The matrices of $\mathfrak N$ will give rise to a unique solution to the Riemann problem at $J$. For later use, define the set \begin{equation} \label{eq:theta} \Theta=\left\{\boldsymbol{\theta}=\left(\theta_1,\ldots,\theta_{n+m}\right) \in\mathbb R^{n+m}:\, \begin{array}{c} \theta_1>0,\cdots,\theta_{n+m}>0,\vspace{.2cm}\\ \sum_{i=1}^n\theta_i=\sum_{j=n+1}^{n+m}\theta_j=1 \end{array} \right\}\,. \end{equation} \subsection{Riemann Solver $\mathcal{RS}_1$} In this subsection, we consider the Riemann solver introduced for car traffic in~\cite{C-G-P}. The construction can be done in the following way. \begin{enumerate} \item Fix a matrix $A \in \mathfrak N$ and consider the closed, convex and nonempty set \begin{equation} \label{eq:omega} \Omega=\left\{ (\gamma_1,\cdots,\gamma_n)\in\prod_{i=1}^n\Omega_i: A\cdot (\gamma_1,\cdots,\gamma_n)^T\in\prod_{j=n+1}^{n+m}\Omega_j \right\}\,. \end{equation} \item Find the point $(\bar\gamma_1,\ldots,\bar\gamma_n)\in\Omega$ which maximizes the function \begin{equation}\label{eq:E} E(\gamma_1,\ldots,\gamma_n)=\gamma_1+\cdots+\gamma_n, \end{equation} and define $(\bar\gamma_{n+1},\ldots,\bar\gamma_{n+m})^T :=A\cdot(\bar\gamma_1,\ldots,\bar\gamma_n)^T$. Since $A\in\mathfrak N$, the point $(\bar\gamma_1,\ldots,\bar\gamma_n)$ is unique.
\item For every $i\in\{1,\ldots,n\}$, define $\bar\rho_i$ either by $\rho_{i,0}$ if $f(\rho_{i,0})=\bar\gamma_i$, or by the solution to $f(\rho)=\bar\gamma_i$ such that $\bar\rho_i\ge\sigma$. For every $j\in\{n+1,\ldots,n+m\}$, define $\bar\rho_j$ either by $\rho_{j,0}$ if $f(\rho_{j,0})=\bar\gamma_j$, or by the solution to $f(\rho)=\bar\gamma_j$ such that $\bar\rho_j\le\sigma$. Finally, define $\mathcal{RS}_1:[0,1]^{n+m}\to[0,1]^{n+m}$ by \begin{equation}\label{eq:rs1_rho} \mathcal{RS}_1(\rho_{1,0},\ldots,\rho_{n+m,0}) =(\bar\rho_1,\ldots,\bar\rho_n,\bar\rho_{n+1},\ldots,\bar\rho_{n+m})\,. \end{equation} \end{enumerate} The following result holds. \begin{lemma} The function defined in~(\ref{eq:rs1_rho}) satisfies the consistency condition, in the sense of Definition~\ref{def:consistency}. \end{lemma} For a proof, see~\cite{C-G-P,gp-book}. We show that this Riemann solver satisfies neither the entropy condition (E1) nor the entropy condition (E2). \begin{prop}\label{prop:no_E2_rs1} The Riemann solver $\mathcal{RS}_1$ does not satisfy the entropy condition (E2) in the sense of Definition~\ref{def:entropy_RS_E2} and, consequently, does not satisfy the entropy condition (E1) in the sense of Definition~\ref{def:entropy_RS_E1}. \end{prop} \begin{proof} Consider a node with $2$ incoming and $2$ outgoing arcs, the flux function $f(\rho)=4\rho(1-\rho)$, a matrix \begin{equation*} A=\left( \begin{array}{cc} \frac13 & \frac12\vspace{.2cm}\\ \frac23 & \frac12 \end{array} \right) \end{equation*} and the initial conditions $\rho_{1,0}=\frac34$, $\rho_{2,0}=\frac18$, $\rho_{3,0}=\frac{8+\sqrt{34}}{16}$ and $\rho_{4,0}=\frac1{10}$. In this case the set $\Omega$ in~(\ref{eq:omega}) is \begin{displaymath} \left\{(\gamma_1,\gamma_2)\in[0,1]\times\left[0,\frac7{16}\right]: 0\le\frac{\gamma_1}3+\frac{\gamma_2}2\le\frac{15}{32},\, 0\le\frac{2\gamma_1}3+\frac{\gamma_2}2\le1\right\}\,; \end{displaymath} see Figure~\ref{fig:omega_rs1_no_E2}.
\begin{figure}[h] \centering \begin{psfrags} \psfrag{gamma_1}{$\gamma_1$} \psfrag{gamma_2}{$\gamma_2$} \psfrag{Insieme Omega}{$\Omega$} \includegraphics[width=7cm]{Omega_rs1.eps} \end{psfrags} \caption{The set $\Omega$ of Proposition~\ref{prop:no_E2_rs1}.} \label{fig:omega_rs1_no_E2} \end{figure} Therefore we deduce that $\bar\gamma_1=1$, $\bar\gamma_2=\frac{13}{48}$, $\bar\gamma_3=\frac{15}{32}$, $\bar\gamma_4=\frac{77}{96}$, $\bar\rho_1=\sigma$, $\bar\rho_2>\sigma$, $\bar\rho_3=\rho_{3,0}$ and $\bar\rho_4<\sigma$. The entropy condition~(\ref{eq:entropy_RS_E2}) in this case becomes \begin{displaymath} f(\bar\rho_2)-f(\sigma)\ge f(\bar\rho_3)-f(\sigma)+f(\sigma)-f(\bar\rho_4), \end{displaymath} which is equivalent to $0\le f(\bar\rho_2)-f(\sigma)-f(\bar\rho_3)+f(\bar\rho_4)$. However, \begin{displaymath} f(\bar\rho_2)-f(\sigma)-f(\bar\rho_3)+f(\bar\rho_4)= \frac{13}{48}-1-\frac{15}{32}+\frac{77}{96}=-\frac{19}{48}<0\,, \end{displaymath} so the condition fails. This concludes the proof. \end{proof} The maximization of the function $E$ over $\Omega$, which defines the Riemann solver $\mathcal{RS}_1$, is, however, closely related to the maximization of the entropy $\mathcal F$. In order to explain this fact, let us introduce some notation.\\ Given $\Omega$ in \eqref{eq:omega}, define \begin{equation} \label{eq:PHI} \Phi=\!\left\{ (\rho_1,\ldots,\rho_{n+m})\in\prod_{l=1}^{n+m}\Phi_l:\!\! \begin{array}{l} \left(f(\rho_1),\ldots,f(\rho_n)\right)\in\Omega,\\ \left( \begin{array}{c} f(\rho_{n+1})\\ \vdots\\ f(\rho_{n+m}) \end{array} \right)=A\cdot \left( \begin{array}{c} f(\rho_1)\\ \vdots\\ f(\rho_n) \end{array} \right)\! \end{array} \right\} \end{equation} and the functional \begin{equation} \label{eq:cal_G} \begin{array}{ccc} \mathcal G: \Phi & \longrightarrow & \mathbb R\\ (\rho_1, \ldots, \rho_{n+m}) & \longmapsto & \mathcal F(\rho_1, \ldots, \rho_{n+m}, \sigma), \end{array} \end{equation} which is the restriction of $\mathcal F$ to $\Phi \times \{ \sigma \}$.
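The exact arithmetic in the proof of Proposition~\ref{prop:no_E2_rs1} can be verified mechanically. The following Python sketch is purely illustrative (all variable and function names are ours): it enumerates the vertices of the polygon $\Omega$, maximizes $E$ over them, and evaluates the entropy defect $f(\bar\rho_2)-f(\sigma)-f(\bar\rho_3)+f(\bar\rho_4)$.

```python
from fractions import Fraction as F
from itertools import combinations

# Data of Proposition prop:no_E2_rs1: f(rho) = 4 rho (1 - rho),
# sigma = 1/2, f(sigma) = 1.  The set Omega is cut out by
#   0 <= g1 <= 1,  0 <= g2 <= 7/16,
#   g1/3 + g2/2 <= 15/32,  2*g1/3 + g2/2 <= 1.
# Each constraint is stored as (a, b, c), meaning a*g1 + b*g2 <= c.
cons = [
    (F(1), F(0), F(1)),
    (F(0), F(1), F(7, 16)),
    (F(1, 3), F(1, 2), F(15, 32)),
    (F(2, 3), F(1, 2), F(1)),
    (F(-1), F(0), F(0)),
    (F(0), F(-1), F(0)),
]

def vertices(cons):
    """Intersect every pair of constraint lines; keep the feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel lines, no vertex
        g1 = (c1 * b2 - c2 * b1) / det
        g2 = (a1 * c2 - a2 * c1) / det
        if all(a * g1 + b * g2 <= c for a, b, c in cons):
            pts.append((g1, g2))
    return pts

# The linear function E attains its maximum over Omega at a vertex.
g1, g2 = max(vertices(cons), key=lambda p: p[0] + p[1])
g3 = g1 / 3 + g2 / 2       # flux distributed to outgoing arc 3
g4 = 2 * g1 / 3 + g2 / 2   # flux distributed to outgoing arc 4

# Entropy defect f(rho2) - f(sigma) - f(rho3) + f(rho4) from the proof.
defect = g2 - 1 - g3 + g4
print(g1, g2, g3, g4, defect)   # 1 13/48 15/32 77/96 -19/48
```

The maximizer is $(\bar\gamma_1,\bar\gamma_2)=(1,\frac{13}{48})$ and the defect equals $-\frac{19}{48}$, matching the values computed in the proof.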
Note that the set $\Phi$ consists of all the possible solutions at $J$ satisfying Definition~\ref{def:Riemann_solver} and the distribution rule determined by the matrix $A \in \mathfrak N$. It is easy to see that there exists a one-to-one correspondence between $\Omega$ and $\Phi$.\\ For every $\mathcal H\subseteq\{1,\ldots,n+m\}$ of cardinality $h$, with $0\le h\le n-1$, define \begin{equation} \label{eq:Omega_h} \Omega_{\mathcal H}=\!\left\{ (\gamma_1,\ldots,\gamma_n)\in\prod_{i=1}^n\Omega_i:\! \begin{array}{l} (\gamma_{n+1},\ldots,\gamma_{n+m})^T\! =\! A\!\cdot\!(\gamma_1,\ldots,\gamma_n)^T,\\ (\gamma_{n+1},\ldots,\gamma_{n+m})\in\prod_{j=n+1}^{n+m}\Omega_j,\\ \gamma_l=\max \Omega_l\quad\textrm{ if }\quad l\in\mathcal H,\\ \gamma_l<\max \Omega_l\quad\textrm{ if }\quad l\not\in\mathcal H,\\ \end{array} \right\} \end{equation} and \begin{equation} \label{eq:PHI_h} \Phi_{\mathcal H}=\!\left\{ (\rho_1,\ldots,\rho_{n+m})\in\prod_{l=1}^{n+m}\Phi_l:\!\! \begin{array}{l} \left(f(\rho_1),\ldots,f(\rho_n)\right)\in\Omega_{\mathcal H},\\ \left( \begin{array}{c} f(\rho_{n+1})\\ \vdots\\ f(\rho_{n+m}) \end{array} \right)=A\cdot \left( \begin{array}{c} f(\rho_1)\\ \vdots\\ f(\rho_n) \end{array} \right)\! \end{array} \right\}. \end{equation} Notice that $\Omega_{\mathcal H}$ and $\Phi_{\mathcal H}$ depend on the initial condition $(\rho_{1,0}, \ldots, \rho_{n+m,0})$ and on the matrix $A \in \mathfrak N$. There is a one-to-one correspondence between $\Omega_{\mathcal H}$ and $\Phi_{\mathcal H}$, given by the function \begin{displaymath} \begin{array}{ccc} \Phi_{\mathcal H} & \longrightarrow & \Omega_{\mathcal H}\\ (\rho_1, \ldots, \rho_{n+m}) & \longmapsto & (f(\rho_1), \ldots, f(\rho_n)). \end{array} \end{displaymath} Moreover, if $\Omega_{\mathcal H} \ne \emptyset$, then $\Omega_{\mathcal H}$ has, at most, topological dimension $n - h$.\\ The following proposition holds.
\begin{prop} \label{prop:entropy_restricted} Let $\mathcal H\subseteq\{1,\ldots,n+m\}$ be a set of cardinality $h$, with $0\le h\le n-1$ and suppose that $\Omega_{\mathcal H}\ne\emptyset$. The functional $\mathcal G$, restricted to $\Phi_{\mathcal H}$, is given by \begin{equation} \label{eq:entropy_restricted} \mathcal G(\rho_1, \ldots, \rho_{n+m}) = \sum_{l\in\{1,\ldots,n+m\}\setminus\mathcal H}\left[f(\rho_l)-f(\sigma)\right] +\sum_{l\in\mathcal H}\left[f(\sigma)-f(\rho_l)\right]. \end{equation} \end{prop} \begin{proof} Fix $(\rho_1\ldots,\rho_{n+m})\in\Phi_{\mathcal H}$ and $l\in\{1,\ldots,n+m\}$. We have some different possibilities. \begin{enumerate} \item $l\le n$ and $l\in\mathcal H$. In this case the term $\sgn(\rho_l-\sigma)\left(f(\rho_l)-f(\sigma)\right)$ becomes $f(\sigma)-f(\rho_l)$. \item $l\le n$ and $l\not\in\mathcal H$. In this case the term $\sgn(\rho_l-\sigma)\left(f(\rho_l)-f(\sigma)\right)$ becomes $f(\rho_l)-f(\sigma)$. \item $l\ge n+1$ and $l\in\mathcal H$. In this case the term $-\sgn(\rho_l-\sigma)\left(f(\rho_l)-f(\sigma)\right)$ becomes $f(\sigma)-f(\rho_l)$. \item $l\ge n+1$ and $l\not\in\mathcal H$. In this case the term $-\sgn(\rho_l-\sigma)\left(f(\rho_l)-f(\sigma)\right)$ becomes $f(\rho_l)-f(\sigma)$. \end{enumerate} Therefore the proof is finished. \end{proof} \begin{corollary} Let $\mathcal H\subseteq\{1,\ldots,n+m\}$ be a set of cardinality $h$, with $0\le h\le n-1$ and suppose that $\Omega_{\mathcal H}\ne\emptyset$. The problem of maximizing $\mathcal G$ on the set $\Phi_{\mathcal H}$ is equivalent to the problem of maximizing the function $E$, defined in~(\ref{eq:E}), on the set $\Omega_{\mathcal H}$. 
\end{corollary} \begin{proof} Notice that, by Proposition~\ref{prop:entropy_restricted}, the function $\mathcal G$ on the set $\Phi_{\mathcal H}$ coincides with \begin{displaymath} \sum_{l\in\{1,\ldots,n+m\} \setminus \mathcal H} f(\rho_l) + C, \end{displaymath} where $C$ is a constant, depending on $\mathcal H$ and on the initial conditions. Indeed, if $l\in\mathcal H$, then $\rho_l$ is completely determined by the initial condition $\rho_{l,0}$. More precisely, $\rho_l$ is equal to $\rho_{l,0}$ when $\rho_{l,0}$ is a bad datum, while $\rho_l$ is equal to $\sigma$ in the other case. Therefore, if $(\rho_1,\ldots,\rho_{n+m})\in\Phi_{\mathcal H}$, then we deduce that \begin{eqnarray*} \mathcal G(\rho_1,\ldots,\rho_{n+m}) & = & \sum_{i\in\{1,\ldots,n\}\setminus\mathcal H} f(\rho_i) + \sum_{j\in\{n+1,\ldots,n+m\}\setminus\mathcal H} f(\rho_j) + C\\ & = & \sum_{i\in\{1,\ldots,n\}\setminus\mathcal H} f(\rho_i) + \sum_{j\in\{n+1,\ldots,n+m\}} f(\rho_j) + C_1\\ & = & \sum_{i\in\{1,\ldots,n\}\setminus\mathcal H} f(\rho_i) + \sum_{i\in\{1,\ldots,n\}} f(\rho_i) + C_1\\ & = & 2\sum_{i\in\{1,\ldots,n\}\setminus\mathcal H} f(\rho_i) + C_2, \end{eqnarray*} where $C_1$ and $C_2$ are constants. Finally note that the function $E$, restricted to $\Omega_{\mathcal H}$, is given by \begin{displaymath} E(\gamma_1,\ldots,\gamma_n)=\sum_{i\in\{1,\ldots,n\}\setminus\mathcal H} \gamma_i+C_2-C_1. \end{displaymath} This completes the proof. \end{proof} \begin{remark} Note that the set $\Phi$ is, in general, disconnected, while the set $\Omega$ is convex and hence connected. The function $\mathcal G$, defined in~(\ref{eq:cal_G}), i.e. the entropy function restricted to $\Phi \times \{ \sigma \}$, is continuous, since it has no jumps in each connected component of $\Phi$. Since there is a bijection between the sets $\Omega$ and $\Phi$, we can consider the entropy function on $\Omega$.
More precisely, define the function \begin{displaymath} \begin{array}{ccc} \Upsilon: \Omega & \longrightarrow & \Phi\\ (\gamma_1, \ldots, \gamma_{n}) & \longmapsto & (\rho_1, \ldots, \rho_{n+m}), \end{array} \end{displaymath} satisfying $f(\rho_i) = \gamma_i$ for every $i \in \{1, \ldots, n\}$, and consider the map $\mathcal G \circ \Upsilon: \Omega \to \mathbb R$. This map is, in general, discontinuous, since it can have jumps at every point $(\gamma_1, \ldots, \gamma_n) \in \overline{\Omega}_{\mathcal H_1} \cap \overline{\Omega}_{\mathcal H_2}$, where $\mathcal H_1 \ne \mathcal H_2$ are different subsets of $\{1,\ldots,n+m\}$ of cardinality less than or equal to $n-1$. \end{remark} \subsection{Riemann Solver $\mathcal{RS}_2$}\label{ssec:rs2} In this subsection, we consider the Riemann solver introduced in \cite{da-m-p} for data networks; see also \cite{gp-book}. The construction can be done in the following way. \begin{enumerate} \item Fix $\boldsymbol{\theta} \in \Theta$ and define \begin{equation*} \Gamma_{inc}=\sum_{i=1}^n\sup\Omega_i,\quad \Gamma_{out}=\sum_{j=n+1}^{n+m}\sup\Omega_j, \end{equation*} then the maximal possible through-flow at the crossing is \begin{equation*} \Gamma = \min \left\{\Gamma_{inc},\Gamma_{out} \right\}\,. \end{equation*} \item Introduce the closed, convex and nonempty sets \begin{eqnarray*} I & = & \left\{ (\gamma_1, \ldots,\gamma_n) \in \prod_{i=1}^n \Omega_i \colon \sum_{i=1}^n \gamma_i = \Gamma \right\} \\ J & = & \left\{ (\gamma_{n+1}, \ldots,\gamma_{n+m}) \in \prod_{j=n+1}^{n+m} \Omega_j \colon \sum_{j=n+1}^{n+m} \gamma_j = \Gamma \right\} \,. \end{eqnarray*} \item Denote by $(\bar\gamma_1,\ldots,\bar\gamma_n)$ the orthogonal projection onto the convex set $I$ of the point $(\Gamma\theta_1,\ldots,\Gamma\theta_n)$ and by $(\bar\gamma_{n+1},\ldots,\bar\gamma_{n+m})$ the orthogonal projection onto the convex set $J$ of the point $(\Gamma\theta_{n+1},\ldots,\Gamma\theta_{n+m})$.
\item For every $i\in\{1,\ldots,n\}$, define $\bar\rho_i$ either by $\rho_{i,0}$ if $f(\rho_{i,0})=\bar\gamma_i$, or by the solution to $f(\rho)=\bar\gamma_i$ such that $\bar\rho_i\ge\sigma$. For every $j\in\{n+1,\ldots,n+m\}$, define $\bar\rho_j$ either by $\rho_{j,0}$ if $f(\rho_{j,0})=\bar\gamma_j$, or by the solution to $f(\rho)=\bar\gamma_j$ such that $\bar\rho_j\le\sigma$. Finally, define $\mathcal{RS}_2:[0,1]^{n+m}\to[0,1]^{n+m}$ by \begin{equation}\label{rs2_rho} \mathcal{RS}_2(\rho_{1,0},\ldots,\rho_{n+m,0}) =(\bar\rho_1,\ldots,\bar\rho_n,\bar\rho_{n+1},\ldots,\bar\rho_{n+m})\,. \end{equation} \end{enumerate} The following result holds. \begin{lemma} The function defined in~(\ref{rs2_rho}) satisfies the consistency condition \begin{equation}\label{eq:stab_rs2} \mathcal{RS}_2(\mathcal{RS}_2(\rho_{1,0},\ldots,\rho_{n+m,0}))= \mathcal{RS}_2(\rho_{1,0},\ldots,\rho_{n+m,0}) \end{equation} for every $(\rho_{1,0},\ldots,\rho_{n+m,0})\in[0,1]^{n+m}$. \end{lemma} For a proof, see~\cite{g-p_generic_J}. We prove now that the Riemann solver $\mathcal{RS}_2$ satisfies the entropy condition (E2). \begin{prop} Assume $n = m$ and consider a node $J$ with $n$ incoming roads and $m$ outgoing roads. The Riemann solver $\mathcal{RS}_2$ satisfies the entropy condition (E2) in the sense of Definition~\ref{def:entropy_RS_E2}. \end{prop} \begin{proof} Fix an initial condition $(\rho_{1,0}, \ldots, \rho_{n+m,0})$ and define $(\bar \rho_1, \ldots, \bar \rho_{n+m}) = \mathcal{RS}_2 (\rho_{1,0}, \ldots, \rho_{n+m,0})$. We have two different cases. \begin{description} \item[$\Gamma_{inc} \le \Gamma_{out}$.] In this situation, we deduce that $\bar \rho_i \le \sigma$ for every $i \in \{1, \ldots, n\}$. Thus the entropy reads \begin{displaymath} \mathcal F (\bar \rho_1, \ldots, \bar \rho_{n+m}, \sigma) = n f(\sigma) - \sum_{i=1}^n f(\bar \rho_i) -\!\! \sum_{j=n+1}^{n+m}\! \sgn (\bar \rho_j - \sigma) \left( f(\bar \rho_j) - f(\sigma) \right). 
\end{displaymath} For every $j \in \{n+1, \ldots, n+m\}$, the term $-\sgn (\bar \rho_j - \sigma) \left( f(\bar \rho_j) - f(\sigma) \right)$ can be minorized by $f(\bar \rho_j) - f(\sigma)$ and so \begin{eqnarray*} \mathcal F (\bar \rho_1, \ldots, \bar \rho_{n+m}, \sigma) & \ge & n f(\sigma) - \sum_{i=1}^n f(\bar \rho_i) + \sum_{j=n+1}^{n+m} \left( f(\bar \rho_j) - f(\sigma) \right)\\ & = & (n-m) f(\sigma) = 0. \end{eqnarray*} \item[$\Gamma_{inc} > \Gamma_{out}$.] In this situation, we deduce that $\bar \rho_j \ge \sigma$ for every $j \in \{n+1, \ldots, n+m\}$. Thus the entropy reads \begin{displaymath} \mathcal F (\bar \rho_1, \ldots, \bar \rho_{n+m}, \sigma) = \! \sum_{i=1}^{n} \sgn (\bar \rho_i - \sigma) \left( f(\bar \rho_i) - f(\sigma) \right) + m f(\sigma) - \!\! \sum_{j=n+1}^{n+m} f(\bar \rho_j). \end{displaymath} For every $i \in \{1, \ldots, n\}$, the term $\sgn (\bar \rho_i - \sigma) \left( f(\bar \rho_i) - f(\sigma) \right)$ can be minorized by $f(\bar \rho_i) - f(\sigma)$ and so \begin{eqnarray*} \mathcal F (\bar \rho_1, \ldots, \bar \rho_{n+m}, \sigma) & \ge & \sum_{i=1}^{n} \left( f(\bar \rho_i) - f(\sigma) \right) + m f(\sigma) - \sum_{j=n+1}^{n+m} f(\bar \rho_j)\\ & = & (m - n) f(\sigma) = 0. \end{eqnarray*} \end{description} The proof is finished. \end{proof} In general, the Riemann solver $\mathcal{RS}_2$ does not satisfy the entropy condition (E1) even in the case $n=m$, as the next Proposition shows. \begin{prop} The Riemann solver $\mathcal{RS}_2$ does not satisfy the entropy condition (E1) in the sense of Definition~\ref{def:entropy_RS_E1}. \end{prop} \begin{proof} Consider a node with $2$ incoming and $2$ outgoing arcs, the flux function $f(\rho)=4\rho(1-\rho)$, $\boldsymbol\theta=\left(\frac12,\frac12,\frac5{12},\frac7{12}\right)$ and the equilibrium configuration $\left(\frac14,\frac14,\frac12-\frac{\sqrt3}{4\sqrt2}, \frac12-\frac1{4\sqrt2}\right)$. 
In this case equation~(\ref{eq:entropy_RS_E1}) becomes \begin{gather*} 2\sgn\left(\frac14-k\right)\left(\frac34-f(k)\right) -\sgn\left(\frac12-\frac{\sqrt3}{4\sqrt2}-k\right) \left(\frac58-f(k)\right)\\- \sgn\left(\frac12-\frac1{4\sqrt2}-k\right) \left(\frac78-f(k)\right)\ge0 \end{gather*} for every $k\in[0,1]$. If $k=\frac14$, then the previous inequality becomes \begin{displaymath} \left(\frac58-\frac34\right)-\left(\frac78-\frac34\right)\ge0, \end{displaymath} which is clearly false. \end{proof} However, in some special situations, namely for nodes with $2$ incoming and $2$ outgoing arcs and $\boldsymbol\theta=\left(\frac12,\frac12,\frac12,\frac12\right)$, the Riemann solver $\mathcal{RS}_2$ does satisfy the entropy condition (E1). \begin{prop} \label{prop:rs2_E1} Fix a node $J$ with two incoming and two outgoing arcs. If $\boldsymbol\theta=\left(\frac12,\frac12,\frac12,\frac12\right)$, then the Riemann solver $\mathcal{RS}_2$ satisfies the entropy condition (E1), in the sense of Definition~\ref{def:entropy_RS_E1}. \end{prop} \begin{proof} Consider an equilibrium $(\brho_1,\brho_2,\brho_3,\brho_4)$ for the Riemann solver $\mathcal{RS}_2$ and denote by $g$ the number of good data. We have the following possibilities. \begin{description} \item[$g=4$.] In this case we deduce that $(\brho_1,\brho_2,\brho_3,\brho_4)=\left(\frac12,\frac12,\frac12,\frac12\right)$ and so the entropy condition (E1) is satisfied. \item[$g=3$.] Consider only the case $\Gamma=\Gamma_{inc}$, since the other case $\Gamma=\Gamma_{out}$ is completely symmetric. Thus the bad datum is in an incoming arc and so we may assume that $\bar\rho_1<\sigma$, $\bar\rho_2\ge\sigma$ and $\bar\rho_3\le\bar\rho_4\le\sigma$. Since $\boldsymbol\theta=\left(\frac12,\frac12,\frac12,\frac12\right)$, we have $\bar\rho_2=\sigma$ and $\bar\rho_3=\bar\rho_4<\sigma$. Moreover, the fact that $\f1+\f2=\f3+\f4$ implies that \begin{displaymath} \bar\rho_1<\bar\rho_3=\bar\rho_4<\bar\rho_2=\sigma.
\end{displaymath} By item~1 of Proposition~\ref{prop:1bad}, the entropy condition (E1) holds. \item[$g=2$.] Consider only the case $\Gamma=\Gamma_{inc}$, since the other case $\Gamma=\Gamma_{out}$ is completely symmetric. We have two possibilities: either the bad data are in the incoming arcs or one bad datum is in an incoming arc and the other bad datum is in an outgoing arc.\\ Assume first that the bad data are in the incoming arcs. Without loss of generality we may assume that $\bar\rho_1\le \bar\rho_2<\sigma$ and $\bar\rho_3\le\bar\rho_4\le\sigma$. Since $\boldsymbol\theta=\left(\frac12,\frac12,\frac12,\frac12\right)$, then $\bar\rho_3=\bar\rho_4$ and so, the fact that $\f1+\f2=\f3+\f4$ implies that \begin{displaymath} \bar\rho_1\le\bar\rho_3=\bar\rho_4\le\bar\rho_2<\sigma. \end{displaymath} By item~1 of Proposition~\ref{prop:2bad}, the entropy condition (E1) is satisfied. Assume now that one bad datum is in an incoming arc and the other bad datum is in an outgoing arc. Without loss of generality we may assume that $\bar\rho_1<\sigma<\bar\rho_4$ and $\bar\rho_3\le\sigma\le\bar\rho_2$. Since $\Gamma=\Gamma_{inc}$, then we deduce that $\bar\rho_2=\sigma$. Moreover $\boldsymbol\theta=\left(\frac12,\frac12,\frac12,\frac12\right)$ implies that $\f3\ge\f4$ and so $\f1\le\f4$, since $\f1+\f2=\f3+\f4$. Therefore \begin{displaymath} \bar\rho_1\le\bar\rho_3\le\bar\rho_2=\sigma<\bar\rho_4\quad\textrm{ and }\quad \bar\rho_1<\bar\rho_2. \end{displaymath} By item~3 of Proposition~\ref{prop:2bad}, the entropy condition (E1) is satisfied. \item[$g=1$.] Consider only the case $\Gamma=\Gamma_{inc}$, since the other case $\Gamma=\Gamma_{out}$ is completely symmetric. We have two possibilities: the good datum is in an incoming arc or in an outgoing arc. Assume first that the good datum is in an incoming arc. Without loss of generality, we may consider that $\bar\rho_1<\sigma\le\bar\rho_2$ and $\sigma<\bar\rho_3\le \bar\rho_4$. Since $\Gamma=\Gamma_{inc}$, then $\bar\rho_2=\sigma$. 
Moreover $\f1+\f2=\f3+\f4$ implies that $\f4\ge \f1$. By item~1 of Proposition~\ref{prop:3bad}, the entropy condition (E1) is satisfied. Assume now that the good datum is in an outgoing arc. Without loss of generality, suppose that $\bar\rho_1\le\bar\rho_2<\sigma$, $\bar\rho_3\le\sigma<\bar\rho_4$. Since $\boldsymbol\theta=\left(\frac12,\frac12,\frac12,\frac12\right)$, we have $\f3\ge\f4$ and so $\f4\le\f2$ and $\bar\rho_3\ge\bar\rho_1$, since $\f1+\f2=\f3+\f4$. By item~2 of Proposition~\ref{prop:3bad}, the entropy condition (E1) is satisfied. \item[$g=0$.] In this case we have that $\Gamma=\Gamma_{inc}=\Gamma_{out}$. Without loss of generality, suppose that $\bar\rho_1\le\bar\rho_2<\sigma<\bar\rho_3\le\bar\rho_4$, and we conclude by Proposition~\ref{prop:4bad}. \end{description} The proof is finished. \end{proof} \subsection{Riemann Solver $\mathcal{RS}_3$} In this subsection, we consider the Riemann solver introduced in \cite{marigo-piccoli_2008_T_junction} for crossing nodes. Consider a node $J$ with $n$ incoming and $m=n$ outgoing arcs and fix a positive coefficient $\Gamma_J$, which is the maximum capacity of the node. The construction can be done in the following way. \begin{enumerate} \item Fix $\boldsymbol{\theta} \in \Theta$. For every $i\in\{1,\ldots,n\}$, define \begin{equation*} \Gamma_i = \min\left\{\sup\Omega_i,\sup\Omega_{i+n}\right\}. \end{equation*} Then the maximal possible through-flow at $J$ is \begin{equation*} \Gamma = \sum_{i=1}^n \Gamma_i. \end{equation*} \item Introduce the closed, convex and nonempty set \begin{equation*} I=\left\{(\gamma_1,\ldots,\gamma_n) \in \prod_{i=1}^n [0,\Gamma_i] \colon\sum_{i=1}^n\gamma_i=\min\left\{\Gamma,\Gamma_J\right\}\right\}.
\end{equation*} \item Denote by $(\bar\gamma_1,\ldots,\bar\gamma_n)$ the orthogonal projection onto the convex set $I$ of the point $(\min \{\Gamma, \Gamma_J\} \theta_1,\ldots, \min \{\Gamma, \Gamma_J\} \theta_n)$ and set $ (\bar\gamma_{n+1},\ldots,\bar\gamma_{2n}) =(\bar\gamma_1,\ldots,\bar\gamma_n)$. \item For every $i\in\{1,\ldots,n\}$, define $\bar\rho_i$ either by $\rho_{i,0}$ if $f(\rho_{i,0})=\bar\gamma_i$, or by the solution to $f(\rho)=\bar\gamma_i$ such that $\bar\rho_i\ge\sigma$. For every $j\in\{n+1,\ldots,n+m\}$, define $\bar\rho_j$ either by $\rho_{j,0}$ if $f(\rho_{j,0})=\bar\gamma_j$, or by the solution to $f(\rho)=\bar\gamma_j$ such that $\bar\rho_j\le\sigma$. Finally, define $\mathcal{RS}_3:[0,1]^{n+m}\to[0,1]^{n+m}$ by \begin{equation}\label{rs3_rho} \mathcal{RS}_3(\rho_{1,0},\ldots,\rho_{n+m,0}) =(\bar\rho_1,\ldots,\bar\rho_n,\bar\rho_{n+1},\ldots,\bar\rho_{n+m})\,. \end{equation} \end{enumerate} The following result holds. \begin{lemma} The function defined in~(\ref{rs3_rho}) satisfies the consistency condition \begin{equation}\label{eq:stab_rs3} \mathcal{RS}_3(\mathcal{RS}_3(\rho_{1,0},\ldots,\rho_{n+m,0}))= \mathcal{RS}_3(\rho_{1,0},\ldots,\rho_{n+m,0}) \end{equation} for every $(\rho_{1,0},\ldots,\rho_{n+m,0})\in[0,1]^{n+m}$. \end{lemma} For a proof, see Proposition 2.4 of \cite{marigo-piccoli_2008_T_junction}. \begin{example} Consider a node $J$ with $2$ incoming arcs and $2$ outgoing ones, $\boldsymbol{\theta} = \left(\frac34, \frac14, \frac34, \frac14 \right)$ and $\Gamma_J = \frac{64}{75}$. Moreover, assume that $f(\rho) = 4 \rho (1-\rho)$.\\ We easily see that \begin{displaymath} (\bar \rho_{1}, \bar \rho_{2}, \bar \rho_{3}, \bar \rho_{4}) = \left( \frac15,\, \frac12 + \frac1{10} \sqrt{\frac{59}3},\, \frac45,\, \frac12 - \frac1{10} \sqrt{\frac{59}3} \right) \end{displaymath} is an equilibrium for $\mathcal{RS}_3$.
Thus we have \begin{eqnarray*} \mathcal F (\bar \rho_{1}, \bar \rho_{2}, \bar \rho_{3}, \bar \rho_{4}, \sigma ) & = & \left( f(\sigma) - f(\bar \rho_1) \right) + \left( f(\bar \rho_2) - f(\sigma) \right)\\ & & - \left( f(\bar \rho_3) - f(\sigma)\right) - \left( f(\sigma) - f(\bar \rho_4) \right)\\ & = & - \frac{64}{75}. \end{eqnarray*} \end{example} \begin{example} Consider a node $J$ with $2$ incoming arcs and $2$ outgoing ones, $\boldsymbol{\theta} = \left(\frac12, \frac12, \frac12, \frac12 \right)$ and $\Gamma_J = \frac{7}{6}$. Moreover, assume that $f(\rho) = 4 \rho (1-\rho)$.\\ We easily see that \begin{displaymath} (\bar \rho_{1}, \bar \rho_{2}, \bar \rho_{3}, \bar \rho_{4}) = \left( \frac12 + \frac1{2} \sqrt{\frac{1}2}, \, \frac12 + \frac1{2} \sqrt{\frac{1}3},\, \frac12 + \frac1{2} \sqrt{\frac{1}2},\, \frac12 - \frac1{2} \sqrt{\frac{1}3} \right) \end{displaymath} is an equilibrium for $\mathcal{RS}_3$. Thus we have \begin{eqnarray*} \mathcal F (\bar \rho_{1}, \bar \rho_{2}, \bar \rho_{3}, \bar \rho_{4}, \sigma ) & = & \left( f(\bar \rho_1) - f(\sigma) \right) + \left( f(\bar \rho_2) - f(\sigma) \right)\\ & & - \left( f(\bar \rho_3) - f(\sigma)\right) - \left( f(\sigma) - f(\bar \rho_4) \right)\\ & = & 2 \left( f(\bar \rho_2) - f(\sigma) \right) = - \frac{2}{3}. \end{eqnarray*} \end{example} The following result follows from the previous examples. \begin{prop} The Riemann solver $\mathcal{RS}_3$ satisfies neither the entropy condition (E1) nor the entropy condition (E2). \end{prop} {\small{ \bibliographystyle{abbrv}
2009.09892
\section{Introduction} Let $\mathcal{B}\left( \mathcal{H} \right)$ denote the ${{C}^{*}}$-algebra of all bounded linear operators on a complex Hilbert space $\mathcal{H}$ with inner product $\left\langle \cdot,\cdot \right\rangle $. For $A\in \mathcal{B}\left( \mathcal{H} \right)$, let $\omega \left( A \right)$ and $\left\| A \right\|$ denote the numerical radius and the operator norm of $A$, respectively. Recall that $\omega \left( A \right)=\underset{\left\| x \right\|=1}{\mathop{\sup }}\,\left| \left\langle Ax,x \right\rangle \right|$. It is well-known that $\omega \left( \cdot \right)$ defines a norm on $\mathcal{B}\left( \mathcal{H} \right)$, which is equivalent to the operator norm $\left\| \cdot \right\|$. In fact, for every $A\in \mathcal{B}\left( \mathcal{H} \right)$, \begin{equation}\label{038} \frac{1}{2}\left\| A \right\|\le \omega \left( A \right)\le \left\| A \right\|. \end{equation} It is also a basic fact that $\omega \left( \cdot \right)$ satisfies the power inequality \[\omega \left( {{A}^{n}} \right)\le {{\omega }^{n}}\left( A \right)\] for all $n=1,2,\ldots $. In \cite{05}, Kittaneh gave the following estimate of the numerical radius, which refines the second inequality in \eqref{038}: for every $A\in \mathcal{B}\left( \mathcal{H} \right)$, \begin{equation}\label{5} \omega \left( A \right)\le \frac{1}{2}\left\| \left| A \right|+\left| {{A}^{*}} \right| \right\|. \end{equation} The following estimate of the numerical radius has been given in \cite{06}: \begin{equation}\label{36} \frac{1}{4}\left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|\le {{\omega }^{2}}\left( A \right)\le \frac{1}{2}\left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|. \end{equation} The first inequality in \eqref{36} also refines the first inequality in \eqref{038}.
This can be seen by using the fact that for any positive operators $A,B\in \mathcal{B}\left( \mathcal{H} \right)$, \[\max \left( \left\| A \right\|,\left\| B \right\| \right)\le \left\| A+B \right\|.\] Indeed, \[\frac{1}{4}{{\left\| A \right\|}^{2}}=\frac{1}{4}\max \left( \left\| {{\left| A \right|}^{2}} \right\|,\left\| {{\left| {{A}^{*}} \right|}^{2}} \right\| \right)\le \frac{1}{4}\left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|.\] For other properties of the numerical radius and related inequalities, the reader may consult \cite{ms, 11, 10}. In this article, we give several refinements of numerical radius inequalities. Our results mainly improve the inequalities in \cite{06}. \section{Main Results} \begin{lemma}\label{1} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$. Then \[\frac{1}{2}\left\| A\pm {{A}^{*}} \right\|\le \omega \left( A \right).\] \end{lemma} \begin{proof} Since $A+{{A}^{*}}$ is normal, we have \[\begin{aligned} \left\| A+{{A}^{*}} \right\|&=\omega \left( A+{{A}^{*}} \right) \\ & \le \omega \left( A \right)+\omega \left( {{A}^{*}} \right) \\ & =2\omega \left( A \right). \end{aligned}\] Therefore, \begin{equation}\label{2} \frac{1}{2}\left\| A+{{A}^{*}} \right\|\le \omega \left( A \right). \end{equation} Now, by replacing $A$ by $iA$ in \eqref{2}, we reach the desired result. \end{proof} \begin{theorem}\label{3} Let $A\in \mathcal{B}\left( \mathcal{H} \right)$.
Then \[\frac{1}{4}\left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|\le \frac{1}{8}\left( {{\left\| A+{{A}^{*}} \right\|}^{2}}+{{\left\| A-{{A}^{*}} \right\|}^{2}} \right)\le {{\omega }^{2}}\left( A \right).\] \end{theorem} \begin{proof} For any $A,B\in \mathcal{B}\left( \mathcal{H} \right)$, we have the following parallelogram law \[{{\left| A+B \right|}^{2}}+{{\left| A-B \right|}^{2}}=2\left( {{\left| A \right|}^{2}}+{{\left| B \right|}^{2}} \right),\] or, equivalently, \[{{\left| \frac{A+B}{2} \right|}^{2}}+{{\left| \frac{A-B}{2} \right|}^{2}}=\frac{{{\left| A \right|}^{2}}+{{\left| B \right|}^{2}}}{2}.\] Therefore, by the triangle inequality for the usual operator norm and Lemma \ref{1}, we have \begin{align} \frac{1}{4}\left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|&=\frac{1}{2}\left\| \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right\| \nonumber\\ & =\frac{1}{2}\left\| {{\left| \frac{A+{{A}^{*}}}{2} \right|}^{2}}+{{\left| \frac{A-{{A}^{*}}}{2} \right|}^{2}} \right\| \nonumber\\ & \le \frac{1}{2}\left\| {{\left| \frac{A+{{A}^{*}}}{2} \right|}^{2}} \right\|+\frac{1}{2}\left\| {{\left| \frac{A-{{A}^{*}}}{2} \right|}^{2}} \right\| \nonumber\\ & =\frac{1}{2}{{\left\| \frac{A+{{A}^{*}}}{2} \right\|}^{2}}+\frac{1}{2}{{\left\| \frac{A-{{A}^{*}}}{2} \right\|}^{2}} \nonumber\\ & \le {{\omega }^{2}}\left( A \right) \nonumber. \end{align} Here we used the fact that if $T\in \mathcal{B}\left( \mathcal{H} \right)$ and $f$ is a non-negative increasing function on $\left[ 0,\infty \right)$, then $\left\| f\left( \left| T \right| \right) \right\|=f\left( \left\| T \right\| \right)$. In particular, $\left\| {{\left| T \right|}^{r}} \right\|={{\left\| T \right\|}^{r}}$ for every $r>0$. This completes the proof of the theorem. \end{proof} We now present a refinement of the first inequality in \eqref{36}. To this end, we need the following lemma, which can be found in \cite{04}.
\begin{lemma}\label{02} Let $A,B\in \mathbb{B}\left( \mathscr{H} \right)$. Then \[\left\| A+B \right\|\le \sqrt{{{\left\| A^*A+B^*B \right\|}}+2\omega ( B{^*}A)}.\] \end{lemma} By the above lemma, we can improve the first inequality in \eqref{36}. \begin{theorem}\label{4} Let $A\in \mathbb{B}\left( \mathscr{H} \right)$. Then \begin{equation}\label{020} \frac{1}{4}\left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|\le \frac{1}{2}\sqrt{2{{\omega }^{4}}\left( A \right)+\frac{1}{8}\omega \left( {{\left( {{A}^{*}}-A \right)}^{2}}{{\left( {{A}^{*}}+A \right)}^{2}} \right)}\le {{\omega }^{2}}\left( A \right). \end{equation} \end{theorem} \begin{proof} Let $A=B+iC$ be the Cartesian decomposition of $A$. Then $B$ and $C$ are self-adjoint operators. One can easily check that \begin{equation}\label{019} \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{4}=\frac{{{B}^{2}}+{{C}^{2}}}{2}, \end{equation} and \begin{equation}\label{011} {{\left| \left\langle Ax,x \right\rangle \right|}^{2}}={{\left\langle Bx,x \right\rangle }^{2}}+{{\left\langle Cx,x \right\rangle }^{2}} \end{equation} for any unit vector $x\in \mathscr{H}$. Of course, the relation \eqref{011} implies \[{{\left\langle Bx,x \right\rangle }^{2}}\left( \text{resp}\text{. }{{\left\langle Cx,x \right\rangle }^{2}} \right)\le {{\left| \left\langle Ax,x \right\rangle \right|}^{2}}.\] Now, by taking supremum over $x\in \mathscr{H}$ with $\left\| x \right\|=1$, we get \begin{equation}\label{013} {{\left\| B \right\|}^{2}}\left( \text{resp}\text{. }{{\left\| C \right\|}^{2}} \right)\le {{\omega }^{2}}\left( A \right). 
\end{equation} Whence, \[\begin{aligned} \frac{1}{4}\left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|&=\frac{1}{2}\left\| {{B}^{2}}+{{C}^{2}} \right\| \quad \text{(by \eqref{019})}\\ & \le\frac{1}{2}\sqrt{\| B ^{4}+ C^4\|+2\omega \left( {{C}^{2}}{{B}^{2}} \right)} \quad \text{(by Lemma \ref{02})}\\ & \le\frac{1}{2}\sqrt{\| B\| ^{4}+ \|C\|^4+2\omega \left( {{C}^{2}}{{B}^{2}} \right)}\\ & \le \frac{1}{2}\sqrt{2{{\omega }^{4}}\left( A \right)+2\omega \left( {{C}^{2}}{{B}^{2}} \right)} \quad \text{(by \eqref{013})}\\ & \le \frac{1}{2}\sqrt{2{{\omega }^{4}}\left( A \right)+2\left\| {{C}^{2}}{{B}^{2}} \right\|} \quad \text{(by the second inequality in \eqref{038})}\\ & \le \frac{1}{2}\sqrt{2{{\omega }^{4}}\left( A \right)+2{{\left\| B \right\|}^{2}}{{\left\| C \right\|}^{2}}} \\ &\qquad \text{(by the submultiplicativity of the usual operator norm)}\\ & \le {{\omega }^{2}}\left( A \right) \quad \text{(by \eqref{013})} \end{aligned}\] i.e., \[\frac{1}{4}\left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|\le \frac{1}{2}\sqrt{2{{\omega }^{4}}\left( A \right)+2\omega \left( {{C}^{2}}{{B}^{2}} \right)}\le {{\omega }^{2}}\left( A \right).\] Since \[\omega \left( {{C}^{2}}{{B}^{2}} \right)=\frac{1}{16}\omega \left( {{\left( {{A}^{*}}-A \right)}^{2}}{{\left( {{A}^{*}}+A \right)}^{2}} \right)\] we get the desired result \eqref{020}. \end{proof} \begin{remark} Notice that if $A$ is a self-adjoint operator, then Theorem \ref{3} implies \[\frac{1}{2}{{\left\| A \right\|}^{2}}\le {{\left\| A \right\|}^{2}}\] while from Theorem \ref{4} we infer that \[\frac{1}{2}{{\left\| A \right\|}^{2}}\le \frac{\sqrt{2}}{2}{{\left\| A \right\|}^{2}}\le {{\left\| A \right\|}^{2}}.\] Hence, in this case, Theorem \ref{4} is better than Theorem \ref{3}. \end{remark} The next lemma can be found in \cite{07}. 
\begin{lemma} If $A$ and $B$ are positive operators in $\mathbb{B}\left( \mathscr{H} \right)$, then \[\left\| {{ A }}-{{ B }} \right\|\le \max\{ {{\left\| A \right\|}},{{\left\| B \right\|}}\}-\min \{ m\left( {{A }} \right),m\left( {{ B }} \right) \},\] where $m\left( {{ A }} \right)=\inf \left\{ \left\langle A x,x \right\rangle :\text{ }x\in \mathscr{H},\left\| x \right\|=1 \right\}$. \end{lemma} \begin{theorem}\label{6} Let $A\in \mathbb{B}\left( \mathscr{H} \right)$. Then \[\omega^2 \left( A \right)\le \frac{1}{2}\left[ \left\| {{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}} \right\|-m\left( {{\left( \left| A \right|-\left| {{A}^{*}} \right| \right)}^{2}} \right )\right].\] \end{theorem} \begin{proof} We can write \[\begin{aligned} & \left\| {{\left( \frac{\left| A \right|+\left| {{A}^{*}} \right|}{2} \right)}^{2}} \right\| \\ & =\left\| {{\left( \frac{\left| A \right|-\left| {{A}^{*}} \right|}{2} \right)}^{2}}-\frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right\| \\ & \le \max \left( \left\| {{\left( \frac{\left| A \right|-\left| {{A}^{*}} \right|}{2} \right)}^{2}} \right\|,\left\| \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right\| \right)\\ &\qquad-\min \left( m\left( {{\left( \frac{\left| A \right|-\left| {{A}^{*}} \right|}{2} \right)}^{2}} \right),m\left( \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right) \right) \\ & \le \left\| \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right\|- m\left( {{\left( \frac{\left| A \right|-\left| {{A}^{*}} \right|}{2} \right)}^{2}} \right ).
\end{aligned}\] On the other hand, since \[\omega \left( A \right)\le \frac{1}{2}\left\| \left| A \right|+\left| {{A}^{*}} \right| \right\|,\] we have \[{{\omega }^{2}}\left( A \right)\le {{\left\| \left( \frac{\left| A \right|+\left| {{A}^{*}} \right|}{2} \right) \right\|}^{2}}=\left\| {{\left( \frac{\left| A \right|+\left| {{A}^{*}} \right|}{2} \right)}^{2}} \right\|.\] Consequently, \[{{\omega }^{2}}\left( A \right)\le \left\| \frac{{{\left| A \right|}^{2}}+{{\left| {{A}^{*}} \right|}^{2}}}{2} \right\|-m\left( {{\left( \frac{\left| A \right|-\left| {{A}^{*}} \right|}{2} \right)}^{2}} \right ) ,\] as desired. \end{proof} Using some ideas of \cite{1}, we prove our last result. \begin{theorem} Let $A\in \mathbb{B}\left( \mathscr{H} \right)$, let $f$ be a continuous function on the interval $\left[ 0,\infty \right)$, and let $g$ be increasing and concave on $\left[ 0,\infty \right)$, such that $g\circ f$ is increasing and convex on $\left[ 0,\infty \right)$. Then \[f\left( \omega \left( A \right) \right)\le \left\| {{g}^{-1}}\left( \frac{(g\circ f)\left( \left| A \right| \right)+(g\circ f)\left( \left| {{A}^{*}} \right| \right)}{2} \right) \right\|\le \frac{1}{2}\left\| f\left( \left| A \right| \right)+f\left( \left| {{A}^{*}} \right| \right) \right\|.\] \end{theorem} \begin{proof} As mentioned above, $g\circ f$ is increasing and convex on $\left[ 0,\infty \right)$; therefore, from the inequality \eqref{5}, \[\begin{aligned} (g\circ f)\left( \omega \left( A \right) \right)&\le (g\circ f)\left( \left\| \frac{\left| A \right|+\left| {{A}^{*}} \right|}{2} \right\| \right) \\ & =\left\| (g\circ f)\left( \frac{\left| A \right|+\left| {{A}^{*}} \right|}{2} \right) \right\| \\ & \le \left\| \frac{(g\circ f)\left( \left| A \right| \right)+(g\circ f)\left( \left| {{A}^{*}} \right| \right)}{2} \right\|.
\end{aligned}\] Therefore, \[(g\circ f)\left( \omega \left( A \right) \right)\le \left\| \frac{(g\circ f)\left( \left| A \right| \right)+(g\circ f)\left( \left| {{A}^{*}} \right| \right)}{2} \right\|.\] Now, since ${{g}^{-1}}$ is increasing and convex, we have \[\begin{aligned} f\left( \omega \left( A \right) \right)&={{g}^{-1}}\left( (g\circ f)\left( \omega \left( A \right) \right) \right) \\ & \le {{g}^{-1}}\left( \left\| \frac{(g\circ f)\left( \left| A \right| \right)+(g\circ f)\left( \left| {{A}^{*}} \right| \right)}{2} \right\| \right) \\ & =\left\| {{g}^{-1}}\left( \frac{(g\circ f)\left( \left| A \right| \right)+(g\circ f)\left( \left| {{A}^{*}} \right| \right)}{2} \right) \right\| \\ & \le \left\| \frac{f\left( \left| A \right| \right)+f\left( \left| {{A}^{*}} \right| \right)}{2} \right\| \\ & =\frac{1}{2}\left\| f\left( \left| A \right| \right)+f\left( \left| {{A}^{*}} \right| \right) \right\|. \end{aligned}\] Thus, \[f\left( \omega \left( A \right) \right)\le \left\| {{g}^{-1}}\left( \frac{(g\circ f)\left( \left| A \right| \right)+(g\circ f)\left( \left| {{A}^{*}} \right| \right)}{2} \right) \right\|\le \frac{1}{2}\left\| f\left( \left| A \right| \right)+f\left( \left| {{A}^{*}} \right| \right) \right\|.\] \end{proof} \begin{corollary} Let $A\in \mathbb{B}\left( \mathscr{H} \right)$. Then for any $r \ge 2$, \[\begin{aligned} & {{\omega }^{r}}\left( A \right) \\ & \le \frac{1}{2}\left\| {{\left| A \right|}^{r}}+{{\left| {{A}^{*}} \right|}^{r}}+{{\left| A \right|}^{\frac{r}{2}}}+{{\left| {{A}^{*}} \right|}^{\frac{r}{2}}}+I-\sqrt{2\left( {{\left| A \right|}^{r}}+{{\left| {{A}^{*}} \right|}^{r}}+{{\left| A \right|}^{\frac{r}{2}}}+{{\left| {{A}^{*}} \right|}^{\frac{r}{2}}} \right)+I} \right\| \\ & \le \frac{1}{2}\left\| {{\left| A \right|}^{r}}+{{\left| {{A}^{*}} \right|}^{r}} \right\|. \\ \end{aligned}\] \end{corollary} \begin{proof} Define \[g\left( x \right)=x+\sqrt{x}\quad\text{and}\quad f\left( x \right)={{x}^{r}}\quad \left( r\ge 2 \right)\] on $\left[ 0,\infty \right)$.
Thus, \[(g\circ f)\left( x \right)={{x}^{r}}+{{x}^{\frac{r}{2}}}.\] One can quickly check that $f$, $g$, and $g\circ f$ satisfy all the assumptions of the previous theorem. Since \[{{g}^{-1}}\left( x \right)=\frac{2x+1-\sqrt{4x+1}}{2},\] we get the desired result. \end{proof}
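As a closing illustration (ours, not part of the paper), the two-sided estimate \eqref{36} is easy to probe numerically. The sketch below uses the standard rotation formula $\omega(A)=\max_\theta \lambda_{\max}\bigl(\tfrac{1}{2}(e^{i\theta}A+e^{-i\theta}A^*)\bigr)$, approximated on an angle grid; the helper name and the random test matrix are our own choices.

```python
# Hedged numerical sanity check (not part of the paper): verify
#   (1/4) || |A|^2 + |A*|^2 ||  <=  w(A)^2  <=  (1/2) || |A|^2 + |A*|^2 ||
# for a random matrix, using the rotation formula for the numerical radius.
import numpy as np

def numerical_radius(A, n_angles=2000):
    """Approximate w(A) from below on a grid of rotation angles."""
    w = 0.0
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2.0
        w = max(w, np.linalg.eigvalsh(H)[-1])   # largest eigenvalue of Re(e^{it}A)
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

S = A.conj().T @ A + A @ A.conj().T             # |A|^2 + |A*|^2
norm_S = np.linalg.norm(S, 2)                   # operator norm = largest singular value
w2 = numerical_radius(A) ** 2

assert norm_S / 4 <= w2 + 1e-3                  # first inequality in (36)
assert w2 <= norm_S / 2 + 1e-9                  # second inequality in (36)
```

Since the grid maximum underestimates $\omega(A)$ slightly, a small tolerance is allowed on the lower bound only; the upper bound then holds exactly.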
2109.11083
\section{Introduction} Molecular dynamics (MD) describes the motion of a molecular system based on classical Newtonian mechanics. In classical mechanics, particles or coarse-grained beads are characterized by masses and charges, and their dynamics is driven by empirical force fields\cite{MM-Leach2001,CHARMMFF}. The empirical force field is usually an approximation to quantum mechanical calculations\cite{CGenFF,MMPT-Xu2019} or experimental observations.\cite{MMPT-Oxa-XuMeuwly2017,MMPT-Mackeprang-2016} Computer simulation is thus an important and efficient tool for observing the motion of particles, computing physical quantities and comparing with experimental observations (e.g., viscometric properties of fluids)\cite{Book_Understanding2002,Book_EVANS1990_NE_LIQUIDS}. Non-equilibrium molecular dynamics (NEMD), a variant of molecular dynamics, is required if the equilibrium conditions are not satisfied for the system being modeled. Similar to its equilibrium counterpart, NEMD is based on time-reversible equations of motion. However, it differs from conventional mechanics in that it provides a microscopic setting for the macroscopic second law of thermodynamics\cite{NEMD-Hoover2004}. \\ \\ In recent decades, there has been continuously increasing interest in using NEMD techniques to study the behavior of liquids under various types of flow fields\cite{NEMD-Sarman1998}. With these NEMD methods, one can accurately measure transport properties of fluids with molecule-like structures (e.g., realistic molecules, coarse-grained beads, etc.)\cite{SLLOD_Todd2007}. A typical model system of fluids for which NEMD techniques are intensively employed is a shear flow system. In fluid mechanics, shear flow refers to a type of fluid flow that is caused by external forces which drive adjacent layers of fluid to move parallel to each other at different speeds. Meanwhile, viscous fluids resist such shear motion.
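The layered picture above can be made concrete with a toy calculation (our own illustrative numbers, not from this work): the layer at height $y$ streams with $u_x(y)=\dot\gamma\,y$, and a Newtonian fluid resists with shear stress $\tau=\eta\,\dot\gamma$.

```python
# Minimal illustrative sketch (all numbers are assumptions): linear shear
# profile u_x(y) = gammadot * y and Newton's law of viscosity
# tau = eta * du_x/dy for the stress resisting the shear motion.
gammadot = 2.0e3          # shear rate in 1/s (assumed)
eta = 1.0e-3              # shear viscosity, roughly water at room T, in Pa*s

def u_x(y):
    """Streaming velocity of the fluid layer at height y."""
    return gammadot * y

tau = eta * gammadot      # shear stress resisting the layer motion
print(f"u_x(1 mm) = {u_x(1.0e-3)} m/s, shear stress = {tau} Pa")
```

Measuring $\tau$ at an imposed $\dot\gamma$ is precisely how NEMD shear simulations extract the viscosity $\eta$.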
For computer simulations, a number of methods enable the simulation of a shear flow. One of the simplest is to introduce moving walls at the boundaries of the shear plane\cite{Couette-Khare1996}. With moving walls, a shear flow can be driven passively, but the periodicity in the shear plane is lost, which is a major drawback of this class of methods. Moreover, shear or Couette flows confined by moving walls suffer from strong boundary and finite-size effects, such as layering of sheared particles close to the walls\cite{LEBC-Rastogi1996-Wall}. An interesting alternative is the SLLOD technique, which introduces the flow velocities directly into the equations of motion of the particles\cite{SLLOD_Tuckerman1997,SLLOD_Petravic1998,SLLOD_Todd2007,SLLOD-Daivis2006}. With the SLLOD equations of motion, the thermostat acts only on the \emph{peculiar} velocities of the particles, without the contribution from the shear. Other methods such as hard reflecting walls\cite{DLMESO-Seaton2013} and quaternion-based dissipative particle dynamics (QDPD)\cite{QDPD-Sims2004} have also been developed in past years. \\ \\ To simulate a steady shear flow in a fully periodic, open-boundary space, the Lees-Edwards boundary condition (LEbc) was introduced by Lees and Edwards\cite{LEBC_Lees1972} in 1972. This technique has proven advantageous in accurately capturing the non-linear behavior of particles under shear. LEbc does not require external forces to drive the flow, i.e., moving walls positioned parallel to and moving against each other. Thereby, finite-size effects are avoided, and LEbc recovers the spatial homogeneity and bulk behavior of particles even for small systems\cite{LEBC-Wagner2002-LB}, which is especially valuable when computer power is limited. Currently, very few MD packages properly implement the Lees-Edwards boundary condition.
Furthermore, for almost none of these implementations have detailed parallelization schemes been thoroughly discussed for LEbc, nor has their computing performance been measured in modern high-performance computing (HPC) environments. One of the first parallel algorithms for the Lees-Edwards boundary condition was proposed and discussed by Rastogi \emph{et al.} 25 years ago\cite{LEBC-Rastogi1996}. In their work, the data of virtual particles (which come from image boxes and lie close to the shear boundaries) were pre-stored in data arrays assigned to virtual processors (or domains, in the language of modern domain decomposition). These particles were collected and participated in force calculations for close particle pairs as if they were particles within the central box of the simulation. They showed that the overall computing overhead of parallelizing LEbc is less than $10\%$ of the total run time of the simulation. Nowadays, domain decomposition has been found efficient for massively parallel computing and is standard in most MD software packages. In this work, we discuss the parallelization of the Lees-Edwards boundary condition by improving the communication scheme in the domain decomposition, which greatly reduces the computing overhead and hence enhances the overall performance on HPC systems. Our implementation of LEbc is also advantageous in that it can be used in more general applications: it is not tied to specific potentials, MD variants or thermostats. Portability to open-source MD simulation packages is also easy to achieve if the scheme is properly adopted. \\ \\ In this paper, we present the implementation of the Lees-Edwards boundary condition in the \epp~MD simulation software\cite{EPP-Halverson2013,EPP-Guzman2019}. In Section \ref{sec:method}, we first give an overview of how to model a steady shear flow under LEbc.
Second, we introduce the fundamental scheme for implementing LEbc by optimizing the cell communication in the domain decomposition. This captures all non-redundant particle-particle pairs for the force calculation of short-ranged interactions (e.g., the \lj~interaction). We also discuss the generalization from cell to node communication under LEbc and the corresponding adaptation to multi-core parallel computing systems. In Section \ref{sec:result}, we present simulation experiments ranging from a simple Lennard-Jones fluid\cite{LJ-Ruiz-Franco2018} to million-particle Kremer-Grest polymer melts\cite{KG-Grest1986,KG-Kremer1990}. We compare results with those from the previous literature and benchmark the parallel performance on a supercomputing system. In Section \ref{sec:summary}, we draw our conclusions and give an outlook on future work. \\ \\ The source code of \epp~with LEbc is available at \url{https://github.com/xzhh/espressopp} under the GPL-3.0 license. \section{Methods and Development}\label{sec:method} \subsection{An Introduction to the Lees-Edwards Boundary Condition}\label{sec:lebc} \begin{figure}[H] \begin{center} \includegraphics[trim=0px 0px 0px 0px,clip,width=0.7\linewidth]{pic/lebc}\\ \caption{Schematic of the Lees-Edwards boundary condition. At a shear rate $ \dot{\gamma} $, the image boxes move away from the central box at speeds $ v_s=\dot{\gamma}L_z/2 $ and $ -v_s $, respectively. At time $ t $, the displacements relative to the original positions of the image boxes at $ t=0 $ (see the dot-dashed frames) are $ \delta=\dfrac{1}{2}\dot{\gamma} L_z t $ and $ -\delta $.
When particle \emph{i} moves out of the top of the central box and re-enters at the bottom, a shift of $ -2\delta $ is applied in the shear direction, and vice versa.} \label{fig:lebc} \end{center} \end{figure} Figure \ref{fig:lebc} depicts a typical schematic for implementing the Lees-Edwards boundary condition in a cuboid simulation system. In addition to the Brownian motion driven by conservative forces (such as the Lennard-Jones forces), the shear contribution to the motion of the particles is applied systematically in the form of a velocity with a linear profile, \begin{equation} \begin{aligned} v_s(z)=\dot{\gamma}\cdot(z-\dfrac{L_z}{2}). \end{aligned} \label{eq:shear-rate} \end{equation} The shear speed $v_s$ is applied along the $x$-direction (the shear direction) and is determined by the shear rate $ \dot{\gamma} $ and the distance of a particle from the $xy$-plane (the shear plane) at $ z=L_z/2 $, where $L_z$ is the height of the simulation box. The $z$-direction is the gradient direction, and the third direction, $y$, is unrelated to the shear flow. This assignment of directions is kept fixed throughout this paper. \\ \\ Under LEbc, periodicity is active in all dimensions, but a boundary crossing of a particle is treated differently when it occurs through the $xy$-planes of the boundaries (namely the top and bottom of the central box). Here, we assume particle $i$ is about to move out of the central box with velocity \begin{equation} \begin{aligned} \mathbf{p}_i(t)/m_i=\left[\left(v_x(t)+v_s(z(t))\right),v_y(t),v_z(t)\right], \end{aligned} \label{eq:vi} \end{equation} where $ \left[v_x(t),v_y(t),v_z(t)\right] $ is the kinematic contribution, the so-called \emph{peculiar} velocity\cite{DPD_SoddemannKremer2003,LEBC_Moshfegh2015,LJ-Ruiz-Franco2018}. The position of $ i $ is written as \begin{equation} \begin{aligned} \mathbf{q}_i(t)=\big[x(t),y(t),z(t)\big].
\end{aligned} \label{eq:qi} \end{equation} If particle $i$ leaves through the top of the central box, the shear is taken into account as $ i $ re-enters the central box from the bottom at its new position (see $ i' $ in Figure \ref{fig:lebc}), \begin{equation} \mathbf{q}_i^\text{new}=\begin{bmatrix} \mathbf{mod}\big(x(t)+\dot{\gamma} L_z t,L_x\big) \\ y(t) \\ z(t)-L_z \end{bmatrix}. \label{eq:pos_new} \end{equation} Here, \textbf{mod} takes the modulo of the new $ x $ position with respect to $ L_x $, the length of the box along the $ x $-direction. Changing the signs in Eq. \ref{eq:pos_new} likewise describes a move from the bottom to the top of the central box after a boundary crossing. \subsection{Shear Flow Simulation with LEbc}\label{sec:thermostat} For a non-equilibrium simulation with a steady shear flow, the dynamics under the Langevin (LGV) thermostat can be written as\cite{LJ-Ruiz-Franco2018,POLYM-Shang2017} \begin{equation} \begin{aligned} m_i\ddot{\mathbf{q}}_i(t)=\sum_{j\neq i}\mathbf{F}_{i,j}-\xi\left(\mathbf{p}_i(t)-m_iv_s\right)+\mathbf{F}^r_i. \end{aligned} \label{eq:langevin} \end{equation} \\ $ \mathbf{F}_{i,j} $ represents the conservative forces between particle $ i $ and the other particles (here, only two-body interactions are considered). $ \left(\mathbf{p}_i(t)-m_iv_s\right) $ is the \emph{peculiar} momentum and $ \xi $ the friction coefficient, so the second term stands for the dissipative force. $ \mathbf{F}^r_i $ refers to the random force. Unlike in an equilibrium MD (EQMD) simulation, the Langevin thermostat does not act on the absolute velocity of a particle but excludes the shear contribution. Thus only the \emph{peculiar} part in Eq. \ref{eq:vi} participates in the velocity integration in the \epp~software, combined with the force updates from the Langevin thermostat. The shear contribution enters the simulation only through the propagation of the coordinates.
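To make the bookkeeping of Eqs. \ref{eq:vi}-\ref{eq:pos_new} concrete, the following minimal Python sketch applies the LEbc re-mapping to a particle that crosses the top or bottom boundary. The function names are ours and purely illustrative; this is not the \epp~API.

```python
def shear_velocity(z, Lz, gamma_dot):
    """Linear shear velocity profile v_s(z) of Eq. (eq:shear-rate)."""
    return gamma_dot * (z - Lz / 2.0)


def wrap_lebc(q, t, Lx, Lz, gamma_dot):
    """Re-map a position q = [x, y, z] after a crossing of the top or
    bottom xy-plane, following Eq. (eq:pos_new); flipping the signs
    handles a bottom-to-top move."""
    x, y, z = q
    shift = gamma_dot * Lz * t   # accumulated image-box offset, 2*delta
    if z >= Lz:                  # left through the top, re-enter at the bottom
        x = (x + shift) % Lx
        z -= Lz
    elif z < 0.0:                # left through the bottom, re-enter at the top
        x = (x - shift) % Lx
        z += Lz
    return [x, y, z]
```

At $z=L_z$ the sketch gives the shear speed $+\dot{\gamma}L_z/2$ and at $z=0$ it gives $-\dot{\gamma}L_z/2$, matching the two moving image boxes in Figure \ref{fig:lebc}.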
By this, the kinematic information along the $x$-direction is preserved and easy to access. In order to validate the current LEbc development, we also aim to reproduce existing numerical experiments.\cite{LJ-Ruiz-Franco2018,POLYM-Shang2017} Hence, in this work the Langevin thermostat can also be altered to act on the absolute velocities of the particles for comparison. \\ \\ Despite being well known and used in many simulation applications, the Langevin thermostat has an important drawback: the total momentum of the system is not conserved, because the drag force exerted by the thermostat is not pairwise. The dissipative particle dynamics (DPD) method\cite{DPD-HoogerbruggeKoelman1992,DPD-Espanol1995,DPD-Groot1997,DPD_SoddemannKremer2003} is, by contrast, a momentum-conserving thermostat. The dissipative forces in the DPD thermostat act on the relative velocities between two particles (or beads) and are therefore pairwise (as are the random forces). The DPD thermostat has a form similar to Eq. \ref{eq:langevin}, but the dissipative and random forces are given by \begin{equation} \begin{aligned} \mathbf{F}^D_{i,j}=-\xi\left[\omega(r_{ij})\right]^2\left(\hat{r}_{ij}\cdot \vec{v}_{ij}\right)\cdot\hat{r}_{ij} \end{aligned} \label{eq:dissipative} \end{equation} and \begin{equation} \begin{aligned} \mathbf{F}^R_{i,j}=\sqrt{24k_\text{B}T\xi}\cdot\theta_{ij}\omega(r_{ij})\cdot\hat{r}_{ij}, \end{aligned} \label{eq:random} \end{equation} where $ r_{ij}$ is the distance between particles \emph{i} and \emph{j}, $ \hat{r}_{ij}=\dfrac {\mathbf{q}_i-\mathbf{q}_j}{\left|\mathbf{q}_i-\mathbf{q}_j\right|}$ and $ \vec{v}_{ij}=v_i-v_j $.
$ \omega(r_{ij}) $ is the conditional weight function, expressed as \begin{equation} \omega(r_{ij}) = \left\{ \begin{array}{lr} 1-\dfrac{r_{ij}}{r_c}, & \text{if }r_{ij}<r_c, \\ 0, & \text{if } r_{ij}\geq r_c \end{array} \color{white}\right\}, \label{eq:omega} \end{equation} and $\xi$, $ k_\text{B} $, $ T $ and $ \theta_{ij} $ are, respectively, the friction coefficient, the Boltzmann constant, the temperature and a random number uniformly distributed in $ (-0.5,0.5) $. \subsection{Parallelization of LEbc in ESPResSo++}\label{sec:parallel} \begin{figure}[H] \begin{center} \includegraphics[trim=0px 0px 0px 0px,clip,width=0.95\linewidth]{pic/dd-a} \end{center} a) \end{figure} \begin{figure}[H] \begin{center} \includegraphics[trim=50px 100px 50px 0px,clip,width=0.95\linewidth]{pic/dd-b}\\ b) \caption{The pattern of cell communication within the domain decomposition for a) a single node grid (serial computing) and b) 3$\times$3 node grids (parallel computing). The cells in the ghost layers (gray) shift at speeds of $ \pm v_s(L_z/2)=\pm\dot{\gamma}L_z/2 $ when the shear flow starts. $ \tau $ is the time for a ghost cell to move by one complete cell grid, given by $ \tau=2l_\text{cell}/(\dot{\gamma}L_z) $, where $ l_\text{cell} $ is the size of each cell in the $x$-direction. The ghost cells shift iteratively (panel a-iv), and thus the connections between neighbor cells for data communication change dynamically. Panel b shows the generalization of the dynamic cell communication in the presence of multiple node grids in parallel computing. In this pattern, node grids are assigned to their respective MPI ranks.
When the ghost cells in the top ghost layer have shifted to the position shown in panel b, the current data communication (operated by the MPI communicator) occurs between Nodes \textbf{bA} and \textbf{tH} via Cells \gcell{1} and \gcell{2} (light green), and between Nodes \textbf{bA} and \textbf{tI} via Cells \gcell{3}$-$\gcell{5} (light yellow). } \label{fig:decomp} \end{center} \end{figure} Consider an equilibrium MD simulation in a cuboid system with a domain decomposition which subdivides the simulation box into N$_x\times$N$_y\times$N$_z$ cell grids. If pairwise short-range interactions are present, only particle pairs within a distance cutoff $r_c$ are included in the force calculations. However, the boundary condition must be taken into account when these distances are measured: if the two particles of a pair are close to two opposite boundaries, an image transformation is required to compute their distance. Thus, within the domain decomposition, ghost cells are introduced (gray in Fig. \ref{fig:decomp}a-i) and directly mapped (upward or downward) from real cells within the central domain (the box with blue frame lines). For each cell mapping, the particle information (masses, positions, properties etc.) is copied from the real cell to the ghost cell. Once the force calculations are complete, the forces updated in the ghost cells are sent back to the real cells for the next step of the velocity integration. In \epp, ghost cells (or layers) are built in the order of the $ x $-, $ y $- and $ z $-directions; for simplicity, the $ y $-direction is omitted in Figure \ref{fig:decomp}. At $ t=0 $, the ghost cells in the ghost layers are indexed as \gcell{1}, \gcell{2}, $ \cdots $ and so on, corresponding to the real cells of the central domain.
For ghost cells at the corners, there is no direct mapping from real cells; instead, particle information is copied from ghost cells created in the previous iteration of the cell communication (e.g., \textbf{1}\sups{\prime}$\rightarrow$\rcell{1} or \textbf{5}\sups{\prime}$\rightarrow$\lcell{5}). If the simulation runs without shear, the initial network between ghost and real cells for data transmission remains unchanged for the entire simulation. According to the 26-neighbor rule for rebuilding neighbor lists\cite{DD-PLIMPTON1995}, Cell \textbf{7}, for example, collects particle information from its ghost neighbors, namely Cells \textbf{1}\sups{\prime}, \textbf{2}\sups{\prime} and \textbf{3}\sups{\prime}. Cell \textbf{7} is also responsible for receiving particles leaving Cells \textbf{1}, \textbf{2} and \textbf{3}, via the corresponding ghost cells acting as virtual transitions. For a shear simulation, however, all ghost cells start moving to the left (cells at the bottom) or to the right (cells at the top, see Fig. \ref{fig:decomp}a-ii). Once a ghost cell has moved by one cell size ($l_\text{cell}$), e.g. Cell \textbf{1}\sups{\prime} at time $ t=\tau=2l_\text{cell}/(\dot{\gamma}L_z) $, it sits exactly above Cell \textbf{7}. Thereby, the new neighbor cells (\lcell{5}, \textbf{1}\sups{\prime} \& \textbf{2}\sups{\prime}) are assigned to the real cell \textbf{7} and the particle pairs in its neighbor list are updated (Fig. \ref{fig:decomp}a-iii). Meanwhile, the ghost cells at the corners are updated correspondingly (i.e., \rcell{1} and \lcell{X} are replaced by \lcell{4} and \rcell{7}). These new cell links last for the next interval $ \tau $, until the ghost cells shift to the next grid positions.
More precisely, Cell \textbf{7} already starts receiving Cells \lcell{5} (replacing Cell \gcell{3}), \textbf{1}\sups{\prime} and \textbf{2}\sups{\prime} as neighbors at $ t=\tau/2 $, when Cell \textbf{1}\sups{\prime} has moved by half a grid and becomes the closest neighbor to Cell \textbf{7} among all ghost cells. These new neighbor cells remain valid until $t=3\tau/2$. The update of neighbor cells is an iterative process, see Fig. \ref{fig:decomp}a-iv. With the rightward shift of the top ghost layer, the ghost cell \gcell{1} finally leaves through the rightmost boundary and reappears at its original position of $t=0$, as in Fig. \ref{fig:decomp}a-i. This corresponds to the image box having completed a move by the full box length $L_x$. \\ \\ Figure \ref{fig:decomp}b generalizes the pattern of cell communication in the domain decomposition when many computing processors are present and all data communication is handled by MPI (Message Passing Interface) in a shear flow simulation. As in serial computing, the data communication relies on ghost cells in the shear planes. The difference in parallel computing, however, is that the ghost cells are not external to, or attached to, the entire central simulation box but bound to each node domain. The data communication consists mainly of two parts: 1) sending particle information from real cells to ghost cells (\textbf{r2g}) and 2) receiving the results of the force calculations from the ghost cells back into the real cells (\textbf{g2r}). For an equilibrium simulation, the \rg~communication follows, for the example of the target node \textbf{bA} and the $z$-direction, the sequence of nodes $\textbf{mD}\rightarrow\textbf{bA}\rightarrow \textbf{tG}$ (see the first column of nodes in Fig. \ref{fig:decomp}b; the arrow represents the direction in which particle information is sent). The particle information from Node \textbf{bA} is transmitted to Node \textbf{tG} via the ghost cells above the boundary.
The \gr~communication, on the other hand, proceeds in the reverse direction, nodes $\textbf{tG}\rightarrow\textbf{bA}\rightarrow\textbf{mD}$, and all node connections remain unchanged until the end of an EQMD simulation. Analogous to the cell communication in serial computing, in shear flow simulations the connection between node grids is dynamic for every node at the top (labeled \textbf{tX}, $ \textbf{X}=\{\textbf{G},\textbf{H},\textbf{I}\cdots\} $, see Fig. \ref{fig:decomp}b) and at the bottom (\textbf{bY}, $ \textbf{Y}=\{\textbf{A},\textbf{B},\textbf{C}\cdots\} $). Following the example pattern in Fig. \ref{fig:decomp}b, the dual nodes \textbf{tH}~and \textbf{tI} receive particle information from, and return forces to, Node \textbf{bA}, via Cells \{\gcell{1}, \gcell{2}\} and Cells \{\gcell{3}, \gcell{4}, \gcell{5}\}, respectively. In the reverse $z$-direction, the communication $\textbf{tH}\rightarrow\textbf{bA}$ and $\textbf{tI}\rightarrow\textbf{bA}$ takes effect at the same time, likewise via 2 and 3 ghost cells, respectively. Compared to the linear communication route $\textbf{mD}\leftrightarrow\textbf{bA}\leftrightarrow\textbf{tG}$ in EQMD, the communication in NEMD requires the MPI communicator to handle extra communication between MPI ranks, serving dual routes between node grids (one from the top shear plane, the other from the bottom), such as $\textbf{bA}\Leftrightarrow \textbf{tH/tI}$. For the communication with the internal node grids (\textbf{mZ}, $ \textbf{Z}=\{\textbf{D},\textbf{E},\textbf{F}\cdots\} $), the linear route stays unchanged (e.g., $\textbf{mD}\leftrightarrow\textbf{bA}$) during the shear flow simulation.
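The dynamic re-linking described above can be condensed into a few lines of bookkeeping. The sketch below is purely illustrative (the function names and the column indexing are ours, not taken from the \epp~sources): it computes the integer shift of the top ghost layer from the elapsed time and, from that, the ghost-cell columns a boundary cell must receive; nearest-grid rounding reproduces the neighbor switches at $t=\tau/2, 3\tau/2, \dots$

```python
def ghost_layer_offset(t, gamma_dot, Lz, l_cell, n_cells):
    """Integer shift (in cell grids) of the top ghost layer at time t.
    The layer moves at v_s(Lz/2) = gamma_dot*Lz/2, so nearest-grid
    rounding switches the assignment at t = tau/2, 3*tau/2, ...,
    with tau = 2*l_cell/(gamma_dot*Lz)."""
    delta = 0.5 * gamma_dot * Lz * t      # image-box displacement
    return round(delta / l_cell) % n_cells


def top_ghost_neighbors(i, shift, n_cells):
    """Ghost-cell columns that boundary real cell i receives, i.e. the
    26-neighbor rule restricted to one row: the ghost column currently
    above i and its two lateral neighbors (periodic in x)."""
    centre = (i - shift) % n_cells
    return [(centre - 1) % n_cells, centre, (centre + 1) % n_cells]
```

With $\dot{\gamma}=0.1$, $L_z=10$ and $l_\text{cell}=1$ (so $\tau=2$), the offset stays 0 for $t<\tau/2$ and jumps to 1 just after, shifting each boundary cell's three ghost neighbors by one column, as in Fig. \ref{fig:decomp}a-ii/iii.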
\subsection{Simulation Details and Analysis} All MD simulations were carried out using the \epp~MD software package\cite{EPP-Halverson2013,EPP-Guzman2019}; Lennard-Jones fluids and Kremer-Grest (KG) polymer melts\cite{KG-Grest1986,KG-Kremer1990,POLYM-Shang2017} were chosen as model systems for validating the implementation of the current work. Before the shear flow simulations started, all systems were (heated and) equilibrated in equilibrium simulations. The \lj~fluid contains a total of $ N=2000 $ particles in a cubic box. The density of the system is set to $ \rho=0.844 $, and the \lj~potential is given by \begin{equation} \begin{aligned} V(r) = 4 \epsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right], \end{aligned} \label{eq:lj} \end{equation} where $ r $ is the distance between paired particles. The other physical parameters are set to the unitless values $ \epsilon=\sigma=m=k_\text{B}=1$. The short-range interactions are cut off at $r_c=2.5\sigma$. Simulations were run using both the Langevin thermostat and the dissipative particle dynamics method. For the Langevin thermostat, friction coefficients of $\xi=1$ and $100$ were chosen to represent low and high fluid viscosities; for the DPD thermostat, friction constants of 5 and 25 were chosen. Temperatures were set to $ T=0.5, 1.0 $ and $ 1.5 $ to study different liquid behaviors. \\ \\ The KG polymer melts were modeled as linear chains of beads\cite{KG-Grest1986,KG-Kremer1990}. The total number of polymer beads is fixed to $ N=4000 $ and the density of the system is $ \rho=0.84 $. The number of monomers per polymer chain (i.e., the chain length) was set to $m=20, 50$ and 100, respectively.
All polymer melts use the shifted \lj~potential \begin{equation} \begin{aligned} V_\text{LJ}(r) = 4 \epsilon \left[ \Big( \frac{\sigma}{r} \Big)^{12} -\Big( \frac{\sigma}{r_c} \Big)^{12} - \Big( \frac{\sigma}{r} \Big)^{6} +\Big( \frac{\sigma}{r_c} \Big)^{6} \right], \end{aligned} \label{eq:lj-shift} \end{equation} with the short cutoff $r_c=2^{1/6}\sigma$\cite{LJ-WCA}. Within a polymer chain, the bonded interactions between two adjacent beads are modeled using the FENE (finitely extensible nonlinear elastic\cite{KG-Kremer1990}) potential \begin{equation} V_\text{FENE}(r) = \left\{ \begin{array}{lr} -\dfrac{1}{2}kr_\text{max}^2\ln\left[1-\left(\dfrac{r}{r_\text{max}}\right)^2\right], & \text{if }r<r_\text{max}, \\ +\infty, & \text{if } r\geq r_\text{max} \end{array} \color{white}\right\}, \label{eq:fene} \end{equation} \\ where $k=30$ is the force constant and $r_\text{max}=1.5\sigma$ the maximal bond length. The angular term takes the form of a cosine potential, \begin{equation} \begin{aligned} V_\text{cos}(\phi) = k_a\left[1-\cos\left(\phi-\phi_0\right)\right] \end{aligned} \label{eq:angle} \end{equation} with the force constant $ k_a=1.5 $ and the equilibrium angle $ \phi_0=180^\circ $. \\ \\ Combining Eqs. \ref{eq:lj-shift}-\ref{eq:angle}, the total potential of a KG melt system is written as \begin{equation} \begin{aligned} E_\text{tot} = \sum V_\text{LJ}(r)+\sum V_\text{FENE}(r)+\sum V_\text{cos}(\phi) \end{aligned} \label{eq:kg-tot} \end{equation} for all particle pairs and triplets under the corresponding selection conditions. \\ \\ In addition, the shear viscosity was computed for both model systems. The general non-Newtonian shear viscosity $ \eta $ is obtained, at finite shear rates, by calculating \begin{equation} \begin{aligned} \eta=\frac{\left<\varsigma_{xy}\right>}{\dot{\gamma}}, \end{aligned} \label{eq:viscosity} \end{equation} where $ \varsigma_{xy} $ denotes the off-diagonal component of the shear stress tensor \bfit{$\varsigma$}.
\bfit{$\varsigma$} is given by the Irving-Kirkwood equation\cite{Irving1950} \begin{equation} \begin{aligned} \varsigma=-\dfrac{1}{V}\left[\sum_{i} m\left(v_i-v_{s,i}\right)\otimes\left(v_i-v_{s,i}\right)+\sum_{i}\sum_{j (j>i)}r_{ij}\otimes F_{ij}(r_{ij})\right], \end{aligned} \label{eq:tensor} \end{equation} where $ V $ is the volume of the simulation box, $ v_{s,i} $ is the instantaneous shear velocity of particle $ i $ (obtained from Eq. \ref{eq:shear-rate}), $ F_{ij} $ is the conservative force between particles $ i $ and $ j $, and $ \otimes $ denotes the dyadic product. \section{Results and Discussion}\label{sec:result} \subsection{Lennard-Jones Fluids} \begin{figure}[H] \begin{center} \includegraphics[trim=0px 0px 0px 0px,clip,width=0.95\linewidth]{pic/profile-1}\\ i) $T=0.5$ \end{center} \end{figure} \begin{figure}[H] \begin{center} \includegraphics[trim=0px 0px 0px 50px,clip,width=0.95\linewidth]{pic/profile-2}\\ ii) $T=1.5$ \caption{Profiles of physical observables (\textbf{a} - temperature; \textbf{b} - velocity; \textbf{c} - density) for MD simulations at shear rates $\dot{\gamma}=0.02, 0.1$ and 0.5 and temperatures i) $T=0.5$ and ii) $T=1.5$. All data are presented as a function of the $z$-coordinate (normalized to $0\sim1$) and obtained by averaging 50 independent MD trajectories (the same averaging also applies in the rest of this section, unless specified otherwise). For each MD trajectory, only the last 20\% of the MD steps are used. $\xi_{pec}$ and $\xi_{\text{abs}}$ denote the friction coefficients in the Langevin thermostat: with $\xi_{pec}$, the drag forces of the Langevin thermostat are applied only to the \textit{peculiar} velocities of the particles, without the shear contribution, whereas with $\xi_\text{abs}$ the absolute velocities are thermostatted. In panel \textbf{b}, the error bars are magnified by a factor of 2.5 for visibility.
Additionally, the insets in panel \textbf{b} show the momentum trajectories (projected onto the $x-z$ plane) of the centers of mass from example MD runs using the Langevin ($\xi_{pec}$, red) and DPD (green) thermostats with the corresponding input parameters. } \label{fig:profile} \end{center} \end{figure} Figure \ref{fig:profile} presents the profiles of the configurational temperature $ T_y $ (along the $y$-direction), the one-dimensional velocity $ v_x $ and the density $ \rho $ as a function of the $z$-coordinate. The aim is to discuss the liquid behavior in different layers along the gradient direction of the shear. The MD simulations were run at selected temperatures and with different modes of thermalization ($ \xi_{pec} $, $\xi_{\text{abs}}$ and $\xi_\text{DPD}$). At the low shear rate ($ \dot{\gamma}=0.02 $), all simulations are well thermalized to their target temperatures. For the velocity $ v_x $, however, different behaviors are found. For simulations run with the Langevin thermostat ($ \xi_{pec} $) and the DPD thermostat, the linear velocity profiles are reproduced as expected. For the $\xi_{\text{abs}}$ thermalization, however, a non-linear, layered development is seen at all selected temperatures (Fig. \ref{fig:profile}i/ii-b1). Especially in the simulations with $\xi_{\text{abs}}=100$ and $ T=0.5 $, the particles in more than $ 80\% $ of the layers are strongly stuck despite the presence of the shear flow. \\ \\ Similar observations are made for the density profile (Fig. \ref{fig:profile}i/ii-c1). A uniform density distribution is seen for both the $ \xi_{pec} $ and $\xi_\text{DPD}$ thermalizations, which shows that these simulations reproduce the homogeneity of the physical system. With $\xi_{\text{abs}}=100$, the results fail to show such homogeneity and a layering behavior is found close to the boundaries.
This implies that a well-maintained temperature does not guarantee a linear velocity profile. Such layering effects become even stronger at higher shear rates ($ \dot{\gamma}=0.1 $ and 0.5). These unrealistic inhomogeneities in both the velocity and density profiles for $\xi_{\text{abs}}=1$ indicate that a conventional Langevin thermostat is not sufficient to simulate a homogeneous system with a steady shear flow and to reproduce physical observables correctly. Hence, simulations with the $\xi_{\text{abs}}$ thermalization are not discussed further in this paper. \\ \\ As a refinement of the conventional Langevin thermostat, the $ \xi_{pec} $ thermalization applies the Langevin dynamics to the peculiar velocities. With $ \xi_{pec} $, both the linear velocity profile and the flat density profile are recovered for any combination of friction coefficients, shear rates and temperatures, in agreement with the results from the DPD simulations, which are usually considered as a reference. At the higher shear rate ($ \dot{\gamma}=0.5 $), however, the refined Langevin thermostat does not thermalize the system well, especially if a weaker viscous friction and/or a lower temperature are present. For simulations with $ \xi_{pec}=1 $ and $T=0.5$, the temperature profile deviates by more than $+40\%$ from the target temperature (Fig. \ref{fig:profile}i-a3). Only a higher temperature or an extremely strong friction reduces or even eliminates this temperature deviation. Interestingly, the DPD thermostat also fails to fully thermalize the system at the highest shear rate, deviating by $+10\%$ from the target temperature. This suggests that the shear speed must be limited if the extra kinetic energy introduced into the simulation by fast shear is to be fully dissipated. Finally, the momentum conservation of the $ \xi_{pec}$ and $\xi_\text{DPD}$ thermalizations was compared.
Since all particle masses are equal to 1, the velocities of the center of mass (CoM) were computed instead. The insets in Fig. \ref{fig:profile}i/ii-b show the velocity trajectories of the CoM; for each simulation input, one representative trajectory is presented. The DPD thermostat is well known for its momentum-conserving property. At the low shear rate ($\dot{\gamma}=0.02$), momentum conservation is indeed reproduced in all dimensions. At the high shear rate ($\dot{\gamma}=0.5$), however, the influence of the shear flow becomes visible: momentum conservation is no longer rigorous, especially along the shear direction. Nevertheless, the CoM momentum in the $y$- and $z$-directions is conserved, which makes the velocity trajectories line-segment-like, see the insets of Fig. \ref{fig:profile}i/ii-b3. For the Langevin thermostat, the velocity trajectories fluctuate around the origin during the MD simulations. The majority of the fluctuations lie in the ranges $ v_x, v_z\in(-0.05,0.05) $ at $ T=0.5 $ and $ \in(-0.1,0.1) $ at $ T=1.5 $, much smaller than the average one-dimensional particle velocity $ v_\text{kin}=\sqrt{2T/m} $ given by the kinetic energy at the respective temperature ($v_\text{kin}=1$ for $ T=0.5 $ and $v_\text{kin}=1.732$ for $ T=1.5 $). In addition, a high shear speed is not found to significantly influence the CoM momentum in the simulations using the Langevin thermostat. This can be attributed to the absence of the shear contribution and to the uniform density along the gradient direction in the simulations with the $ \xi_{pec}$ thermalization. \begin{figure}[H] \begin{center} \includegraphics[trim=0px 0px 0px 0px,clip,width=0.9\linewidth]{pic/msd-dpd}\\ \caption{One-dimensional mean squared displacement (MSD) of the \lj~fluids in the $ z $-direction under the shear flows.
Simulations were run at $\dot{\gamma}=0.02$ (red), 0.1 (blue) and 0.5 (green) with the DPD thermostat (left: $\xi_\text{DPD}=5$; right: $\xi_\text{DPD}=25$) at $ T=1.0 $. The data are compared to the reference MSD from an equilibrium simulation at the same temperature (black solid lines). The inset shows the shear viscosity as a function of the shear rate, ranging from $\dot{\gamma}=0.01$ to $\dot{\gamma}=1$; data were computed for both $\xi_\text{DPD}=5$ (brown) and $\xi_\text{DPD}=25$ (orange). } \label{fig:msd} \end{center} \end{figure} \noindent To investigate the diffusivity of the \lj~fluids under LEbc, we first compared the mean squared displacement of the \lj~particles along a non-shear direction (the $z$-dimension) at selected shear rates. Figure \ref{fig:msd} shows the results from shear flow simulations with the DPD thermostat at different friction coefficients. The self-diffusion constant (SDC) can be obtained by calculating $ D=\dfrac{1}{2}\left<z(t)^2\right>/t $, namely half of the slope. The overall SDCs of the sheared \lj~fluid are about twofold higher in the simulations with $\xi_\text{DPD}=5$, for every selected shear rate, than in those with $\xi_\text{DPD}=25$. The reference particle displacement is also computed from the EQMD simulations (black solid lines). Once a steady shear starts, the diffusivity rises with increasing shear rate, even along the non-shear directions. We also investigated the relationship between the shear viscosity and the shear rate: for both $\xi_\text{DPD}=5$ and $\xi_\text{DPD}=25$, the measured viscosity decreases monotonically with increasing $\dot{\gamma}$, see the inset of Figure \ref{fig:msd}.
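The analysis behind Figure \ref{fig:msd} reduces to two short estimators. The sketch below (NumPy; the input arrays are placeholders, not \epp~output) fits the long-time slope of the one-dimensional MSD to obtain the self-diffusion constant and averages the off-diagonal stress to obtain the viscosity of Eq. \ref{eq:viscosity}.

```python
import numpy as np


def diffusion_constant_1d(times, msd):
    """Self-diffusion constant from the 1D MSD, D = (1/2) d<z^2>/dt,
    estimated via a least-squares fit of the slope."""
    slope, _intercept = np.polyfit(times, msd, 1)
    return 0.5 * slope


def shear_viscosity(stress_xy, gamma_dot):
    """Non-Newtonian viscosity eta = <sigma_xy>/gamma_dot, as in
    Eq. (eq:viscosity), from sampled stress-tensor components."""
    return np.mean(stress_xy) / gamma_dot
```

In practice the fit should be restricted to the diffusive (long-time) regime of the MSD, just as only the last part of each trajectory is used in the figure.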
\subsection{Kremer-Grest Polymer Melts} \begin{figure}[H] \begin{center} \includegraphics[trim=0px 0px 0px 0px,clip,width=0.8\linewidth]{pic/visco}\\ \caption{Shear viscosity for polymer melts ($N_\text{mono}=20$ and 100 per chain) in shear flow simulations using the Langevin thermostat with the Lees-Edwards boundary condition. Data are reported for shear rates in the range $\dot{\gamma}=0.01\sim1.0$. } \label{fig:visco} \end{center} \end{figure} The most common response of polymer melts to a shear flow is shear thinning, where the fluid viscosity decreases with increasing shear rate. Figure \ref{fig:visco} shows the shear viscosity for simulations using both the Langevin and DPD thermostats. $ N_\text{mono} $ denotes the number of monomers per chain in the polymer melt systems, and the shear rate $\dot{\gamma}$ ranges from 0.01 to 1. Overall, all results show a similar shear-thinning behavior. At low shear rates ($\dot{\gamma}<0.02$), the polymer melts behave as Newtonian fluids with a constant shear viscosity, which converges to the zero-shear viscosity plateau as $\dot{\gamma}\rightarrow0$. This low-shear-rate regime is found for both chain lengths. Beyond a critical shear rate ($0.02<\dot{\gamma}<0.5$ for $ N_\text{mono}=20 $), the shear viscosity drops strongly, and a power-law model can usually be applied for numerical fitting. For $\dot{\gamma}>0.5$, the regime of the highest shear rates, the decrease of the shear viscosity slows down, meaning that the viscosity becomes less sensitive to higher shear stress once the polymer chains are fully disentangled and aligned. In this regime a second viscosity plateau (also called the infinite-shear viscosity plateau) can be observed, or a weaker shear-thinning behavior develops that can be fitted by a Sisko model\cite{Sisko}.
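As an illustration of the power-law fit in the critical region, the sketch below recovers the flow index $n$ and consistency $K$ of the Ostwald-de Waele model $\eta(\dot{\gamma})=K\dot{\gamma}^{\,n-1}$ from synthetic viscosity data; the values of $K$ and $n$ are made up for illustration and are not fitted to our simulation results:

```python
import numpy as np

# Power-law (Ostwald-de Waele) model for the shear-thinning region:
# eta(gdot) = K * gdot**(n - 1), which is linear in log-log coordinates.
K_true, n_true = 0.5, 0.4                # illustrative parameters (assumed)
gdot = np.logspace(np.log10(0.02), np.log10(0.5), 15)  # critical region
eta = K_true * gdot ** (n_true - 1.0)    # synthetic viscosity data

# A linear fit of log(eta) vs log(gdot) gives slope n-1 and intercept log K.
slope, intercept = np.polyfit(np.log(gdot), np.log(eta), 1)
n_fit, K_fit = slope + 1.0, np.exp(intercept)
print(f"n = {n_fit:.2f}, K = {K_fit:.2f}")  # n < 1 indicates shear thinning
```

A Sisko fit proceeds analogously but adds the infinite-shear plateau $\eta_\infty$ as a third parameter, which requires a nonlinear least-squares fit instead of the log-log linearization.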
For the longer polymer chains, the second viscosity plateau is not yet reached within the range shown in Figure \ref{fig:visco}, but the slowing trend is still visible for $0.5<\dot{\gamma}<1$. \subsection{Performance Benchmark} \begin{figure}[H] \begin{center} \includegraphics[trim=0px 0px 0px 0px,clip,width=0.9\linewidth]{pic/performance}\\ \caption{Parallel runtime as a function of the number of MPI ranks for the polymer melt simulations. The total number of polymer beads is $N=2,560,000$ and every result is averaged over 10 MD runs. } \label{fig:benchmark} \end{center} \end{figure} In Sec. \ref{sec:parallel}, we discussed the adaptation of the \lebc~for parallel computing and the MPI communication with dual routes for exchanging particle and force information between the node grids. This specialized real-to-ghost and ghost-to-real communication, however, only occurs between the node grids on the top panel of the shear plane of the simulation box and those on the bottom panel (let us call them \emph{special} nodes). The other node grids, located in the interior of the central box, communicate with the \emph{special} node grids and with each other following the standard domain-decomposition scheme, just as in equilibrium simulations. \epp~offers two modes for carrying out the communication scheme of a target node grid. For internal-internal and internal-\emph{special} node pairs, the first mode is applied, which uses the standard pattern of cell communication; for \emph{special}-\emph{special} node pairs, the second mode is switched on and the LEbc-adapted communication is used. This, however, limits the minimum number of node grids in the $z$-direction ($ N_z$). If $ N_z=2 $, the neighboring nodes (at the top and at the bottom) not only communicate across the $xy$-boundary planes but also communicate with each other internally within the central box.
That would require activating both modes simultaneously, which is not supported by the current data layouts. Therefore, the LEbc-based domain decomposition requires $ N_z\ge3 $. In the current version of \epp, the assignment of the numbers of node grids (for given MPI ranks) from the domain decomposition is fixed to $ N_x\ge N_y\ge N_z $. Thus, running a simulation with LEbc requires at least $3\times3\times3=27$ MPI ranks for parallel computing. This restriction is of little practical consequence, since we focus on many-particle systems ($N>100,000$) in future works. \\ \begin{table} \caption{Comparison of the total runtime (s/1,000 steps) between the shear simulation with LEbc and the non-shear simulation (without LEbc).} \begin{center} \begin{tabular}{ c c c c c } \hline Number & \multicolumn{2}{c}{Runtime} & Runtime & Domain decomposition \\ of processors & LEbc & w/o LEbc & difference (\%) & $ \left(N_x,N_y,N_z\right) $ \\ \hline 64 & 45.5 & 42.8 & +6.3 & $\left(4,4,4\right)$ \\ 128 & 24.6 & 21.5 & +14.4 & $\left(8,4,4\right)$ \\ 256 & 12.9 & 10.4 & +24.0 & $\left(8,8,4\right)$ \\ 512 & 6.76 & 4.7 & +43.8 & $\left(8,8,8\right)$ \\ 1024 & 3.8 & 2.33 & +63.1 & $\left(16,8,8\right)$ \\ \hline \end{tabular} \end{center} \label{tab:runtime} \end{table} \\ All benchmarks were performed on the MOGON II supercomputer cluster of the data center of Johannes Gutenberg University Mainz. Up to 32 nodes were used to perform the shear flow simulation of a large polymer melt system ($ N=2,560,000 $ beads). Each node contains two 16-core Intel Skylake processors (Xeon Gold 6130, 2.10GHz), and the nodes are networked with 100 Gbps Omni-Path. The Intel compiler (v2018.03) was used to compile the \epp~software package, and the Langevin thermostat was used for the benchmark simulations. As shown in Figure \ref{fig:benchmark}, the benchmark for parallel simulation starts from $ N_\text{core}=64 $ ($ 4\times4\times4 $ node grids).
With $ N_\text{core}=64 $, the average total runtime is $t=45.5$ seconds per 1,000 MD steps for the shear simulation with LEbc, compared to 42.8 seconds for the same number of steps of the EQMD run (see the hollow square). The performance differs by only $\sim6\%$. Looking further into the contributions to the total runtime, the force calculation (bonded and non-bonded pairs) accounts for the largest share among all computing procedures. Since only bonded and short-range interactions are present in the polymer melt simulations, the computing time of the force calculations shows a perfect linear scaling up to 1024 MPI ranks. As the number of MPI ranks increases, the force calculation no longer dominates the time consumption from $ N_\text{core}\ge512 $ on. Instead, the communication contribution plays a more significant role, as does the resort part. Besides the overhead from increased data communication when multiple compute nodes are allocated, the assignment of the domain decomposition is an additional important factor influencing the performance of a shear flow simulation. For $ N_\text{core}=64 $, the node grid assignment is $ \left(N_x,N_y,N_z\right)=\left(4,4,4\right) $, while it becomes $\left(16,8,8\right)$ with $ N_\text{core}=1024 $ MPI ranks. For the same system size, more divisions of the node grids along the shear direction lead to more frequent updates of both node-to-node and cell-to-cell (within one node grid) connections for data communication within the domain-decomposition framework. When such an update is requested, a resort process is called, which creates extra overhead in the resort part (and partly in the communication contribution). As mentioned previously, using LEbc requires more complex node-to-node communication, which also contributes to the increased computing time during the simulations.
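For concreteness, the overhead percentages in Table \ref{tab:runtime} and the overall scaling figures follow directly from the tabulated runtimes; a minimal sketch:

```python
# Runtimes (s per 1,000 MD steps) taken from Table 1.
ranks = [64, 128, 256, 512, 1024]
t_lebc = [45.5, 24.6, 12.9, 6.76, 3.8]   # shear simulation with LEbc
t_ref = [42.8, 21.5, 10.4, 4.7, 2.33]    # non-shear simulation

# LEbc overhead relative to the non-shear runs (the "+%" column).
for r, a, b in zip(ranks, t_lebc, t_ref):
    print(f"{r:5d} ranks: overhead {100.0 * (a - b) / b:+.1f}%")

# Speedup and parallel efficiency of the LEbc runs from 64 to 1024 ranks.
speedup = t_lebc[0] / t_lebc[-1]          # ~12x
efficiency = speedup / (ranks[-1] / ranks[0])  # ideal speedup would be 16x
print(f"speedup x{speedup:.1f}, efficiency {100.0 * efficiency:.0f}%")
```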
\section{Conclusions and outlooks}\label{sec:summary} In this paper, we have presented the implementation of the \lebc~in the \epp~MD software package and the detailed design of its parallelization. We hope the present work will also be helpful and inspiring for future codes and for implementations in other open-source MD software. \\ \\ We employed our LEbc implementation in non-equilibrium MD simulations using both the Langevin and DPD thermostats. We first investigated the \lj~fluids and found that the conventional Langevin thermostat fails to reproduce the linear velocity profiles. Moreover, strong layering effects in the density profiles were found in the regions close to the boundaries perpendicular to the shear gradient. Instead, a modified Langevin thermostat, which acts on the \emph{peculiar} velocities, was used, and the relevant physical observables were recovered. The results also agree with the simulations using the DPD thermostat at low and moderate shear rates. \\ \\ We also simulated Kremer-Grest polymer melts and determined the shear-thinning region between $\dot{\gamma}=0.02$ and $\dot{\gamma}=0.5$ for short-chain polymer melts. To measure the parallel performance of the current implementation, we used up to 32 supercomputer nodes to simulate a large polymer melt system with more than $2.5$ million particles. The results show that the overall speedup from $N_\text{core}=64$ to $N_\text{core}=1024$ is around a factor of 12 (a scaling efficiency of ca. $75\%$). The major overhead of massively parallel runs can be attributed to the more frequent resorts during the MD simulations, which are inevitable in the presence of a shear flow. \\ \\ As future work, we first aim at further improving the parallel performance.
Possible directions include 1) the optimization of the resort management, 2) the integration of the current LEbc code with the new load-balancing optimization from the recent \epp~development\cite{EPP-Vance2021} and 3) further optimization of the data communication and of the data layouts which are used intensively in the domain decomposition. Second, we plan to extend the current LEbc implementation to more general application cases. For example, we are interested in taking more commonly used thermostats (e.g., the Bussi-Donadio-Parrinello thermostat\cite{BDP-Bussi2007}) and integrating them with the LEbc implementation. We also notice an increasing need for shear flow simulations of systems that require long-range interactions. As a first step, the Ewald summation \cite{Ewald-Wheeler1997} is now available in \epp~for simulations with LEbc. Further applications and code optimizations will be presented in future publications. \clearpage \section*{Acknowledgments} This work is supported by the TRR 146 project under funding from the German Research Foundation (DFG), Johannes Gutenberg University Mainz and the Max Planck Institute for Polymer Research (MPIP). We especially thank the Data Center of Johannes Gutenberg University Mainz for providing supercomputer resources.
\section{Proof of Lemma \ref{l:procedure}} \begin{proof} Assume that $\phi$ is satisfiable. By Lemma \ref{l:shortpaths} and Lemma \ref{l:narrowtrees} there exists a small model $\str{T}\models \phi$. The procedure accepts $\phi$ by making all its guesses in accordance with $\str{T}$, i.e.~in the first step it sets $\bar{\alpha}$ to be the full type of the root of $\str{T}$, and then in each step it sets $\bar{\alpha}_i$ to be the full type of the $i$-th child of the previously considered element. In the opposite direction, from an accepting (tree-)run $t$ of the procedure we can naturally construct a tree structure $\str{T}_t$, with the $1$-types of elements as guessed during the execution. Our procedure actually guesses not only $1$-types but full types of elements. The function $locally$-$consistent$ guarantees that the full types of elements in $\str{T}_t$ are indeed as guessed. Since the procedure checks that each of those full types is $\phi$-consistent, by Proposition \ref{p:consistent_types} we have $\str{T}_t \models \phi$. \hfill $\Box$ \end{proof} \section{Proof of Lemma \ref{l:pqr}} \begin{proof} Suppose $\str{T} \models \phi[u]$; then there is $v \in \str{T}$ such that $\str{T} \models \psi[u,v]$. If $u {\not\sim} v$ then $\str{T} \models \psi_{\upharpoonright x {\not\sim} y}[u,v]$. By definition $u \in R'$ and thus there is $q \in Q$ such that $q {\downarrow_{\scriptscriptstyle *}} u$. The cases $v {\downarrow_{\scriptscriptstyle +}} u$ and $u {\downarrow_{\scriptscriptstyle +}} v$ are similar. If $u = v$ then $\str{T} \models \psi[u,u]$ and thus $\str{T} \models \psi_{\upharpoonright x = y}[u,u]$. In the opposite direction, suppose there is $q \in Q$ such that $q {\downarrow_{\scriptscriptstyle *}} u$. Notice that if $q {\downarrow_{\scriptscriptstyle *}} u$ then for every node $v$ we have $q {\not\sim} v \Rightarrow u {\not\sim} v$.
So if $q \in R'$ then there is a node $v$ such that $\str{T} \models \psi_{\upharpoonright x {\not\sim} y}[u,v]$. Otherwise $q \in Q'$ and there exists a node $v$ such that $\str{T} \models \psi_{\upharpoonright y {\downarrow_{\scriptscriptstyle +}} x}[u,v]$. In both cases there is a node $v$ such that $\str{T} \models \psi[u,v]$ and thus $\str{T} \models \phi[u]$. The case where there is $p \in P$ such that $u {\downarrow_{\scriptscriptstyle *}} p$ is similar. If $\str{T} \models \psi_{\upharpoonright x = y}[u,u]$ then $\str{T} \models \psi[u,u]$ and thus $\str{T} \models \phi[u]$. \hfill $\Box$ \end{proof} \section{Proof of Lemma \ref{l:intervals}} \begin{lemma-non} {\bf \ref{l:intervals}} Let $\phi \in \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{}$ be a formula over the alphabet $\tau_0$ in ENNF with one free variable, let $\str{T}$ be a tree over the alphabet $\tau_0$, and let $a \in \tau_0$. There is a set $S \subseteq T$ which is a union of tree slices in $\str{T}$ such that for every $u \in \str{T}^a$: $\str{T} \models \phi[u]$ iff $u \in S$; and every path in $\str{T}$ intersects at most $|\phi|^2$ tree slices from $S$. \end{lemma-non} \begin{proof} Induction on the structure of $\phi$. We consider only the case $\phi = \exists y \psi(x,y)$. Otherwise the proof is similar to that of the corresponding Lemma 2.1.10 from \cite{Weis11}. Let \[\psi(x,y) = \beta(x {\downarrow_{\scriptscriptstyle +}} y, x {=} y, y {\downarrow_{\scriptscriptstyle +}} x, x {\not\sim} y, \xi_1(x), \dots, \xi_s(x), \zeta_1(y), \dots, \zeta_t(y)).\] Applying the inductive hypothesis to the formulas $\xi_\sigma$, $\sigma \in [1, s]$, let $S_\sigma$ be the set described in the statement of this lemma, and let $I_{(\sigma, 1)} , \dots , I_{(\sigma,k_\sigma )}$ be tree slices such that every path in $\str{T}$ intersects at most $|\xi_\sigma|^2$ of them and $S_\sigma = \bigcup_{l=1}^{k_\sigma}I_{(\sigma,l)}$.
We define the set $H = \bigcup_{\sigma=1}^{s}S_\sigma \cup \{r\} \cup L$, where $r$ is the root of $\str{T}$ and $L$ is the set of leaves in $\str{T}$. In each tree slice $I$ bounded by points from $H$, the truth values of the formulas $\xi_1, \dots, \xi_s$ remain constant over all points from $I^a$. Let $\xi_1^\star, \dots, \xi_s^\star$ be these respective truth values. Thus, on all nodes from $I^a$, $\phi(x)$ is equivalent to $\exists y \beta(x {\downarrow_{\scriptscriptstyle +}} y, x{=}y, y {\downarrow_{\scriptscriptstyle +}} x, x {\not\sim} y, \xi_1^\star, \dots, \xi_s^\star, \zeta_1(y), \dots, \zeta_t(y))$. This formula satisfies the requirements of Lemma \ref{l:pqr}, so the truth of $\phi(x)$ over $I^a$ is determined by the relative position of $x$ with respect to $P$ and $Q$, and by the truth of the formulas $\zeta_1(x), \dots, \zeta_t(x)$ for the nodes in between $P$ and $Q$. We can now construct the set $S$ of all nodes from $\str{T}^a$ where $\phi(x)$ is true as the union of tree slices bounded by: points from $H$; points that result from applying this lemma to the formulas $\zeta_1(x), \dots, \zeta_t(x)$; or points from $P$ and $Q$ added in every tree slice $I$. Fix a path in $\str{T}$ and count the number of tree slices from $S$ this path intersects. An intersection of a path in $\str{T}$ with a tree slice is an interval. By the remark made after Lemma \ref{l:pqr} we know that at most one point from $P$ and $Q$ is added on every path in $I$, thus there is at most one point $p \in P$ and $q \in Q$ on every interval. This means we can use the calculations in Lemma 2.1.10 from \cite{Weis11} to obtain at most $|\phi|^2$ intervals on every path in $\str{T}$. \hfill $\Box$ \end{proof} \section{Remaining part of the proof of Lemma \ref{l:shortpathssing}} We argue that $\str{T}' \models \phi$.
To see this, we show by induction that for every subformula $\eta$ of $\phi$ with at most one free variable and all $u \in T'$, $\str{T} \models \eta[u]$ iff $\str{T}' \models \eta[u]$. If $\eta$ is an atomic formula or a boolean combination of other formulas then the claim is obvious. Suppose $\eta = \phi_\kappa$ for some $\kappa \in [1,k]$. Suppose first that $u \in T'$ and $\str{T} \models \eta[u]$. Then there is a $v \in \str{T}$ such that $\str{T} \models \psi_\kappa[u,v]$. Let $I \in \str{I}^a_\kappa$ be such that $u \in I$. We find $\hat{v} \in T'$ such that $\str{T}' \models \psi_\kappa[u,\hat{v}]$ as follows: if $u {\downarrow_{\scriptscriptstyle +}} v$ then there is a $\hat{v} \in P_I$ such that $v {\downarrow_{\scriptscriptstyle *}} \hat{v}$, if $v {\downarrow_{\scriptscriptstyle +}} u$ then there is a $\hat{v} \in Q_I$ such that $\hat{v} {\downarrow_{\scriptscriptstyle *}} v$, if $u {\not\sim} v$ then there is a $\hat{v} \in R_I$ such that $v {\downarrow_{\scriptscriptstyle *}} \hat{v}$, and if $u = v$ then we set $\hat{v} = u$. Clearly $\str{T}' \models \psi_\kappa[u, \hat{v}]$ and thus $\str{T}' \models \eta[u]$. Suppose now that $u \in T'$ and $\str{T}' \models \eta[u]$. Then it is easy to see that $\str{T} \models \eta[u]$. So, because $\str{T} \models \phi$, we have $\str{T}' \models \phi$. We now show that paths in $\str{T}'$ have a bounded length. Fix $I \in \str{I}^a_\kappa$ and the formula $\psi_\kappa^I(x,y)$ as in the fragment of this proof given in the main part of the paper. For every $b \in \tau_0$ and every $i \in [1,t]$ let $S^{'b}_i$ be the set obtained by applying Lemma \ref{l:intervals} to the formula $\zeta_i(y)$, so that $S_i^{'b}$ intersects at most $|\zeta_i|^2$ tree slices on every path. Thus there is a set $\str{K}^b_\kappa$ of tree slices $I'$ such that every path in $\str{T}$ intersects at most $2 \cdot \sum_{i \in [1,t]}|\zeta_i|^2$ of them and $\bigcup \str{K}^b_\kappa = \str{T}$.
Fix a path in $\str{T}$ and let $\str{J}^b_\kappa$ be the set of intervals that are the intersections of this path with the tree slices from $\str{K}^b_\kappa$. We claim that there is at most one element from $P_I^b$ on every $J \in \str{J}^b_\kappa$. Suppose we have $u \in I$ and $v_1, v_2 \in J \cap P_I^b$. We show that $\str{T} \models \psi_{\kappa \upharpoonright x {\downarrow_{\scriptscriptstyle +}} y}^I[ u,v_1]$ iff $\str{T} \models \psi_{\kappa \upharpoonright x {\downarrow_{\scriptscriptstyle +}} y}^I[u,v_2]$. Indeed, recall that $\psi_{\kappa \upharpoonright x {\downarrow_{\scriptscriptstyle +}} y}^I(x,y) = \beta(\top, \bot, \bot, \bot,\xi_1^I, \dots, \xi_s^I, \zeta_1(y), \dots, \zeta_t(y))$. Thus the boolean value of $\beta$ depends only on the boolean values of the $\zeta_i(y)$, and we assumed that these are the same for $v_1$ and $v_2$. Since $P_I$ is a set of maximal nodes, $v_1 = v_2$. Analogous calculations apply to the sets $Q_I$ and $R_I$. Altogether the length of every path in $\str{T}'$ is at most $3 \cdot 2 \cdot \sum_{\kappa \in [1,k], a \in \tau_0, i \in [1,t]}|\zeta_i|^2 \leq 6 \cdot |\tau| \cdot |\phi|^3$. \hfill $\Box$ \section{Complexity of \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over singular trees} In this section we expand our arguments for the \textsc{PSpace}{} upper bound for \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over singular trees.
A \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula $\phi$ is in \emph{normal form} if $\phi=\bigwedge_{i \in I}\forall xy (\eta_i(x,y) \Rightarrow \psi_i(x,y)) \wedge \bigwedge_{i \in J} \forall x (\lambda_i(x) \Rightarrow \exists y (\eta_i(x,y) \wedge \psi_i(x,y)))$, for some disjoint index sets $I$ and $J$, where $\eta_i$ is a guard of the form $x{\downarrow_{\scriptscriptstyle +}} y$, $y {\downarrow_{\scriptscriptstyle +}} x$ or $x{=}y$, $\lambda_i(x)$ is an atomic formula $a(x)$ for some unary symbol $a$, and $\psi_i(x,y)$ is a boolean combination of unary atomic formulas. We can prove a slightly weaker counterpart of Lemma \ref{l:normalform} for \mbox{$\mbox{\rm GF}^2$}. Namely, we show that satisfiability of a \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula can be reduced nondeterministically to satisfiability of a normal form \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula. \begin{lemma} \label{l:normalformgf} There exists a nondeterministic procedure {\tt GF$^2[{\downarrow_{\scriptscriptstyle +}}]$-normalisation} such that for a \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula $\phi$ over a signature $\tau$ and a tree frame $\cT$ consisting of at least two nodes the following holds. The formula $\phi$ is satisfiable over $\cT$ (singularly satisfiable over $\cT$) if and only if there exists a polynomial execution of {\tt GF$^2[{\downarrow_{\scriptscriptstyle +}}]$-normalisation} on $\phi$ producing a normal form \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula $\phi'$, over a signature $\tau'$ consisting of $\tau$ and some additional unary symbols, that is satisfiable over $\cT$ (satisfiable over $\cT$ in a model which restricted to $\tau$ is singular).
\end{lemma} \begin{proof} By the work from \cite{ST04} it follows that for a given \mbox{$\mbox{\rm GF}^2$}{} formula $\phi$ over a signature $\tau$ there exists a polynomially computable formula $\phi'=\bigwedge_{i \in I} ((\forall x \; r_i(x))\Leftrightarrow \exists x (\lambda_i(x) \wedge \psi_i(x)) \wedge ((\forall x \; r_i(x)) \vee (\forall x \; \neg r_i(x)))) \wedge \bigwedge_{i \in J} \exists x (\lambda_i(x) \wedge \psi_i(x)) \wedge \phi''$, for some disjoint index sets $I$ and $J$, over a signature consisting of $\tau$ and some additional unary predicates, where $\lambda_i(x)$ is an atomic formula $a(x)$ for some unary symbol $a$, $\psi_i(x)$ is a boolean combination of atoms, $\phi''$ is in normal form, and none of the $r_i$ is used as a guard, such that $\phi$ and $\phi'$ are satisfiable over the same tree frames. Now for each $i \in I$ we guess whether $\forall x \; r_i(x)$ is satisfied or not and replace the occurrences of $r_i(x)$ and $r_i(y)$ in $\phi'$ by $\top$ or $\bot$ appropriately. We thus get a conjunction of a normal form formula, some formulas of the form $\exists x (\lambda_i(x) \wedge \psi_i(x))$, and some formulas of the form $\neg \exists x (\lambda_i(x) \wedge \psi_i(x))$. A formula of the last type can be rewritten as $\forall xy (x=y \Rightarrow \neg \lambda_i(x) \vee \neg \psi_i(x))$. To deal with purely existential statements we introduce a fresh unary predicate $root$ and make it true precisely at the root of a tree by adding the conjunct $\forall xy (x{=}y \Rightarrow (root(x) \Leftrightarrow \neg \exists y (y {\downarrow_{\scriptscriptstyle +}} x)))$. A formula $\exists x (\lambda_i(x) \wedge \psi_i(x))$ can now be rewritten as the normal form conjunct $\forall x (root(x) \Rightarrow \exists y (x {\downarrow_{\scriptscriptstyle +}} y \wedge (\lambda_i(y) \wedge \psi_i(y)) \vee (\lambda_i(x) \wedge \psi_i(x))))$. This transformation works properly over trees containing at least two nodes.
The described nondeterministic procedure is thus the required {\tt GF$^2[{\downarrow_{\scriptscriptstyle +}}]$-normalisation} procedure. \hfill $\Box$ \end{proof} Let us see that in an arbitrary (not necessarily singular) model $\str{T}$ of a normal form \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula $\phi$ we can find a submodel in which the degree of nodes is bounded polynomially in $|\phi|$ and in the length of the paths of $\str{T}$. As we are able to shorten paths in singular models to length polynomial in $|\phi|$, this will lead to a polynomial bound on the degree of nodes in singular models of \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formulas (which, as we have seen, contrasts with the case of \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{}). \begin{lemma} \label{l:narrowertrees} Let $\phi$ be a normal form \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula and let $\str{T} \models \phi$. Then there exists a submodel $\str{T}' \models \phi$ of $\str{T}$ in which the number of successors of each node is bounded by $max \cdot |\phi|$, where $max$ is the length of the longest path in $\str{T}$. \end{lemma} \begin{proof} Let $v$ be the root of $\str{T}$. For every conjunct $\phi_i$ of $\phi$ of the form $\forall x (\lambda_i(x) \Rightarrow \exists y (\eta_i(x,y) \wedge \psi_i(x,y)))$ with $\eta_i(x,y)=x {\downarrow_{\scriptscriptstyle +}} y$, pick a witness $w$ for $v$ and $\phi_i$, mark $w$ and mark all the elements $u$ such that $\str{T} \models u {\downarrow_{\scriptscriptstyle +}} w$, i.e.,~the elements on the path from the root to $w$. Remove all subtrees rooted at successors of $v$ containing no marked elements. Repeat this process for all the elements $v$ of $\str{T}$, say, in a depth-first manner.
Note that the structure obtained after each step is a model of $\phi$, since we explicitly take care of providing lower witnesses, and the upper witnesses are retained automatically, as every element which is not removed from the model is kept together with the whole path from the root in the original model $\str{T}$. Let $\str{T}'$ be the structure obtained after the final step of the above procedure. Observe that the number of marked descendants of an element located at level $l$ is bounded by $(l+1) \cdot |\phi|$, thus the degree of each node of $\str{T}'$ is bounded by $max \cdot |\phi|$ as required. \hfill $\Box$ \end{proof} We recall the statement of Theorem \ref{t:gfto} from the main part of the paper and prove the part related to the upper bound. The lower bound is proved in the next section. \begin{theorem-non} {\bf \ref{t:gfto}.} The satisfiability problem for \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over finite singular trees is \textsc{PSpace}-complete. \end{theorem-non} \begin{proof} We show here that the problem belongs to \textsc{PSpace}{} by designing an alternating polynomial time procedure. We first run the non-deterministic procedure {\tt GF$^2$[${\downarrow_{\scriptscriptstyle +}}$]-normalisation} (see Lemma \ref{l:normalformgf}) and obtain a formula $\phi'$ over a signature $\tau'$. It remains to test the satisfiability of $\phi'$. The procedure builds a path in a model together with the immediate successors of its nodes. Information about a node $u$ consists of its $1$-type and a polynomially bounded set of atomic $1$-types, the \emph{promised types of descendants} of $u$. The procedure starts by guessing information about the root and then moves down the tree in the following way: when inspecting a node $u$ it guesses information about all its children (polynomially many) and then proceeds universally to one of them.
During the execution the following natural conditions are checked: \begin{enumerate} \item[(i)] Every guessed atomic type contains precisely one predicate from $\tau$. \item[(ii)] The set of promised types of descendants of the current node $u$ is sufficient to provide the necessary witnesses for $u$ for conjuncts of $\phi'$ of the form $\forall x (\lambda_i(x) \Rightarrow \exists y (x {\downarrow_{\scriptscriptstyle +}} y \wedge \psi_i(x,y)))$. \item[(iii)] The current node has the required witnesses for the conjuncts of the form $\forall x (\lambda_i(x) \Rightarrow \exists y (y {\downarrow_{\scriptscriptstyle +}} x \wedge \psi_i(x,y)))$ among its ascendants. \item[(iv)] The universal part $\forall \forall$ of $\phi'$ is not violated by a pair consisting of the current node $u$ and any of its ascendants. \item[(v)] Every promised type of a descendant of the inspected node $u$ is either realised or promised by one of its children. \end{enumerate} The procedure accepts when it reaches (without violating the above conditions), in at most polynomially many steps, a node with no promised descendants. The described alternating procedure works in time bounded polynomially in $|\phi|$, so, as \textsc{APTime}=\textsc{PSpace}{} \cite{CKS81}, it can also be implemented to work in deterministic polynomial space. We claim that it accepts $\phi$ iff $\phi$ has a finite singular tree model. Assume that $\phi$ is accepted. This means that $\phi'$ has a tree model which restricted to $\tau$ is singular. By Lemma \ref{l:normalformgf} it follows that $\phi$ has a singular model. In the opposite direction, let $\str{T} \models \phi$ be singular, and let $\cT$ be the frame of $\str{T}$. By Lemma \ref{l:shortpathssing} we can assume that the depth of $\str{T}$ is bounded by $6 \cdot |\tau| \cdot |\phi|^3$. By Lemma \ref{l:normalformgf}, {\tt GF$^2$[${\downarrow_{\scriptscriptstyle +}}$]-normalisation} can produce $\phi'$ which is satisfiable over $\cT$, say in a model $\str{T}'$.
By Lemma \ref{l:narrowertrees}, $\phi'$ is also satisfied in a submodel $\str{T}''$ of $\str{T}'$ in which the degree of every node is bounded by $6 \cdot |\tau| \cdot |\phi|^3 \cdot |\phi'|$. Thus our alternating procedure can make all its guesses in accordance with $\str{T}''$ and accept. \hfill $\Box$ \end{proof} \input{witek.tex} \end{appendix} \section{Introduction} Classical results from the 1930s by Church and Turing show that the satisfiability problem for first-order logic is undecidable. Moreover, undecidability can be proved even for the fragment with only three variables, $\mbox{$\mbox{\rm FO}^3$}$ \cite{KMW62}. This fact attracted the attention of researchers to the two-variable fragment, \mbox{$\mbox{\rm FO}^2$}{}, which turns out to be decidable \cite{Mor75} and \textsc{NExpTime}-complete \cite{GKV97}. In particular, \mbox{$\mbox{\rm FO}^2$}{} gained a lot of interest from computer scientists because of its close connections to formalisms such as modal, temporal, and description logics, and to XML, widely used in various areas of computer science, including hardware and software verification, knowledge representation, databases, and artificial intelligence. The expressive power of \mbox{$\mbox{\rm FO}^2$}{} is limited and is not sufficient to axiomatise some natural simple classes of structures, such as trees or words. It is also not possible to say, e.g., that a binary relation is transitive, an equivalence relation, or a linear order. Thus, \mbox{$\mbox{\rm FO}^2$}{} over various classes of structures, in which certain relational symbols have to be interpreted in a special way, e.g., as equivalences, has been extensively studied (see, e.g., \cite{GO98,GradelOR99,Otto01,Kie2011,KO12,KMPT12} for some results in this area). \mbox{$\mbox{\rm FO}^2$}{} over words is investigated in \cite{EVW02}.
The authors work there with signatures consisting of some unary predicates and two built-in binary predicates: $succ$ for the successor relation and $<$ for its transitive closure. The resulting logic, \mbox{$\mbox{\rm FO}^2$}$[succ, <]$, is shown to have a \textsc{NExpTime}-complete satisfiability problem, both over $\omega$-words and over finite words. Actually, the lower bound can be shown for \emph{monadic} \mbox{$\mbox{\rm FO}^2$}{}, i.e., without using the binary relations $succ$ and $<$. The elementary complexity of \mbox{$\mbox{\rm FO}^2$}{} over words sharply contrasts with the non-elementary complexity of \mbox{$\mbox{\rm FO}^3$}{} over words, which follows from \cite{Sto74}. In this paper we consider \mbox{$\mbox{\rm FO}^2$}{} over unranked trees (ordered or unordered), assuming that, besides unary symbols, signatures may include the child relation ${\downarrow}$, the right sibling relation ${\rightarrow}$, and their respective transitive closures ${\downarrow_{\scriptscriptstyle +}}$ and ${\rightarrow^{\scriptscriptstyle +}}$. Decidability of the satisfiability problem for \mbox{$\mbox{\rm FO}^2$}{} over various classes of infinite trees is implied by the celebrated result of Rabin \cite{Rabin69} that the monadic second-order theory of the binary tree is decidable. Over finite trees decidability follows from \cite{Idziak88}. However, regarding complexity, the above mentioned results give only non-elementary upper bounds. A better upper complexity bound for the richest of the logics we consider, \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}, can be obtained by exploiting its correspondence to XPath. In \cite{Marx05} it is argued that \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} is expressively equivalent to a variant of Core XPath, which is shown in \cite{Marx04} to be \textsc{ExpTime}-complete. As the translation to XPath involves an exponential blowup in the size of formulas, this way we only get a 2\textsc{-ExpTime}{} upper bound.
Our first contribution is establishing the precise complexity of the satisfiability problem for \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} over finite trees by showing that it is \textsc{ExpSpace}-complete. Worth mentioning here is the work \cite{BMSS09}, where a two-variable logic over unranked, ordered trees with an additional equivalence relation on nodes, denoted $\sim$, is proposed. The purpose of $\sim$ is to model XML \emph{data values}. It is argued that this extension of \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} is very hard and its decidability is left as an open problem. On the positive side, decidability of \mbox{$\mbox{\rm FO}^2$}$[{\downarrow}, {\rightarrow}, \sim]$ is shown. In the context of XML reasoning it is natural to consider also the additional semantic restriction that at every node of a tree precisely one unary predicate holds. We call trees meeting this assumption \emph{singular trees}. In \cite{Weis11} an analogous restriction for finite words is considered.\footnote{In that paper a slightly different terminology is used: the term \emph{word} denotes a structure meeting the singularity assumption, and the term \emph{power words} is reserved for structures that allow for multiple unary predicates holding at a single position.} It turns out that \mbox{$\mbox{\rm FO}^2$}$[succ, <]$ over finite singular words remains \textsc{NExpTime}-complete, but \mbox{$\mbox{\rm FO}^2$}$[<]$ becomes \NPTime-complete. In this paper we observe a similar effect in the case of unordered trees: over singular trees, \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}, {\downarrow}]$}{} remains \textsc{ExpSpace}-hard, while the complexity of \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} decreases. This time the complexity drop is slightly less spectacular, as the problem is \textsc{NExpTime}-complete. 
We observe, however, that for \textsc{NExpTime}-hardness the ability to speak about pairs of elements $x,y$ in free position, i.e., such that $y$ is neither an ancestor nor a descendant of $x$, is needed. This is not typical of logics used in computer science, as their atomic constructions usually allow one to refer only to pairs of elements that lie on the same path. To capture the latter kind of scenario we consider the restriction of \mbox{$\mbox{\rm FO}^2$}{} to the two-variable guarded fragment, \mbox{$\mbox{\rm GF}^2$}{}, in which all quantifiers have to be relativised by binary predicates. We observe that the satisfiability problem for \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over finite singular trees is \textsc{PSpace}-complete. To complete the picture we show that augmenting \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} with any of the remaining navigational predicates leads to \textsc{ExpSpace}-hardness over singular trees. Thus, we establish the complexity over finite trees and over finite singular trees of all logics \mbox{$\mbox{\rm GF}^2$}[$\tau_{bin}$] and \mbox{$\mbox{\rm FO}^2$}[$\tau_{bin}$], for $\{ {\downarrow_{\scriptscriptstyle +}} \} \subseteq \tau_{bin} \subseteq \{{\downarrow}, {\downarrow_{\scriptscriptstyle +}}, {\rightarrow}, {\rightarrow^{\scriptscriptstyle +}} \}$. \section{Preliminaries} {\bf Trees and logics.} We work with signatures of the form $\tau=\tau_0 \cup \tau_{bin}$, where $\tau_0$ is a set of unary symbols and $\tau_{bin} \subseteq \{ {\downarrow}, {\downarrow_{\scriptscriptstyle +}}, {\rightarrow}, {\rightarrow^{\scriptscriptstyle +}} \}$. Over such signatures we consider two fragments of first-order logic: \mbox{$\mbox{\rm FO}^2$}{}, i.e., the restriction of first-order logic in which only the variables $x$ and $y$ are available, and \mbox{$\mbox{\rm GF}^2$}{}, being the intersection of \mbox{$\mbox{\rm FO}^2$}{} and the \emph{guarded fragment}, \mbox{$\mbox{\rm GF}$}{} \cite{ABN98}. 
\mbox{$\mbox{\rm GF}$}{} is defined as the least set of formulas such that: (i) every atomic formula belongs to \mbox{$\mbox{\rm GF}$}{}; (ii) \mbox{$\mbox{\rm GF}$}{} is closed under logical connectives $\neg, \vee, \wedge, \Rightarrow$; and (iii) quantifiers are appropriately relativised by atoms, i.e., if $\phi ({\mathbf x}, {\mathbf y})$ is a formula of \mbox{$\mbox{\rm GF}$}{} and $\alpha ({\mathbf x}, {\mathbf y})$ is an atomic formula containing all the free variables of $\phi$, then the formulas ${\boldsymbol \forall} {\mathbf y}(\alpha ({\mathbf x}, {\mathbf y}) \Rightarrow \phi ({\mathbf x}, {\mathbf y}))$ and ${\boldsymbol \exists} {\mathbf y}(\alpha ({\mathbf x}, {\mathbf y}) \wedge \phi ({\mathbf x}, {\mathbf y}))$ belong to \mbox{$\mbox{\rm GF}$}{}. Atom $\alpha ({\mathbf x}, {\mathbf y})$ is called a {\em guard}. Equalities $x{=}x$ or $x{=}y$ are also allowed as guards. For a given formula $\phi$ we denote by $\tau_0(\phi)$ the set of unary symbols that appear in $\phi$. We write \mbox{$\mbox{\rm FO}^2$}$[\tau_{bin}]$ or \mbox{$\mbox{\rm GF}^2$}$[\tau_{bin}]$ to denote that the only binary symbols that are allowed in signatures are those from $\tau_{bin}$. We are interested in finite unranked tree structures, in which the interpretation of symbols from $\tau_{bin}$ is fixed: if available in the signature, ${\downarrow}$ is interpreted as the child relation, ${\rightarrow}$ as the right sibling relation, and ${\downarrow_{\scriptscriptstyle +}}$ and ${\rightarrow^{\scriptscriptstyle +}}$ as their respective transitive closures. If at least one of ${\rightarrow}$, ${\rightarrow^{\scriptscriptstyle +}}$ is interpreted in a tree then we say that this tree is \emph{ordered}; in the opposite case we say that the tree is \emph{unordered}. We use $x {\not\sim} y$ to abbreviate the formula stating that $x$ and $y$ are in \emph{free position}, i.e., that they are related by none of the binary predicates available in the signature. 
E.g., if we consider ordered trees over $\tau_{bin}= \{ {\downarrow}, {\downarrow_{\scriptscriptstyle +}}, {\rightarrow}, {\rightarrow^{\scriptscriptstyle +}}\}$ then $x {\not\sim} y$ can be defined as $x {\not=} y \wedge \neg (x {\downarrow_{\scriptscriptstyle +}} y) \wedge \neg (y {\downarrow_{\scriptscriptstyle +}} x) \wedge \neg (x {\rightarrow^{\scriptscriptstyle +}} y) \wedge \neg (y {\rightarrow^{\scriptscriptstyle +}} x)$; for unordered trees over $\tau_{bin}=\{{\downarrow_{\scriptscriptstyle +}} \}$ it is just $x {\not=} y \wedge \neg (x {\downarrow_{\scriptscriptstyle +}} y) \wedge \neg (y {\downarrow_{\scriptscriptstyle +}} x)$. Let us call the formulas specifying the relative position of a pair of elements in a tree with respect to binary predicates \emph{order formulas}. There are ten possible order formulas: $x {\downarrow} y$, $y {\downarrow} x$, $x {\downarrow_{\scriptscriptstyle +}} y \wedge \neg (x {\downarrow} y)$, $y {\downarrow_{\scriptscriptstyle +}} x \wedge \neg (y {\downarrow} x)$, $x {\rightarrow} y$, $y {\rightarrow} x$, $x {\rightarrow^{\scriptscriptstyle +}} y \wedge \neg (x {\rightarrow} y)$, $y {\rightarrow^{\scriptscriptstyle +}} x \wedge \neg (y {\rightarrow} x)$, $x {\not\sim} y$, $x{=}y$. They are denoted, respectively, as: $\theta_{\downarrow}$, $\theta_{\uparrow}$, $\theta_{\downarrow \downarrow_+}$, $\theta_{\uparrow \uparrow^+}$, $\theta_{\rightarrow}$, $\theta_{\leftarrow}$, $\theta_{\rightrightarrows^+}$, $\theta_{\leftleftarrows^+}$, $\theta_{\not\sim}$, $\theta_{=}$. Let $\Theta$ be the set of these ten formulas. A structure over a signature $\tau=\tau_0 \cup \tau_{bin}$ is \emph{singular} if at every element of this structure precisely one unary predicate from $\tau_0$ holds. We say that a formula $\phi$ is \emph{singularly satisfiable} (over a class of structures $\mathcal{C}$) if there exists a singular model of $\phi$ (from $\mathcal{C}$). 
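The ten-way case distinction behind the order formulas can be made concrete with a small sketch. The following code is our own illustration, not part of the formal development: the tree representation (a dict mapping each node to the ordered list of its children) and all identifiers are hypothetical choices.

```python
# Illustrative sketch (ours, not from the paper): deciding which of the ten
# order formulas from Theta holds for a pair of nodes of a finite ordered
# tree. The tree is a dict mapping each node to the ordered list of its
# children.

def build_relations(children, root):
    """Compute parent and strict-ancestor maps from a children map."""
    parent, ancestors = {}, {root: set()}
    stack = [root]
    while stack:
        u = stack.pop()
        for c in children.get(u, []):
            parent[c] = u
            ancestors[c] = ancestors[u] | {u}
            stack.append(c)
    return parent, ancestors

def order_formula(u, v, children, parent, ancestors):
    """Name of the unique order formula holding for the pair (u, v)."""
    if u == v:
        return "theta_="
    if u in ancestors[v]:                       # u is a strict ancestor of v
        return "theta_child" if parent[v] == u else "theta_below+"
    if v in ancestors[u]:                       # v is a strict ancestor of u
        return "theta_parent" if parent[u] == v else "theta_above+"
    if parent.get(u) is not None and parent.get(u) == parent.get(v):
        sibs = children[parent[u]]
        i, j = sibs.index(u), sibs.index(v)
        if j == i + 1:
            return "theta_right"                # x -> y
        if i == j + 1:
            return "theta_left"                 # y -> x
        return "theta_right+" if j > i else "theta_left+"
    return "theta_free"                         # x and y in free position
```

Exactly one label is returned for every pair of nodes, mirroring the fact that the ten order formulas are mutually exclusive and jointly exhaustive.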
We use symbol $\str{T}$ (possibly with sub- or superscripts) to denote tree structures. For a given tree $\str{T}$ we denote by $T$ its universe. A \emph{tree frame} is a tree over a signature containing no unary predicates. We say that a formula $\phi$ is \emph{(singularly) satisfiable over a tree frame} $\cT$ if $\str{T} \models \phi$ for some (singular) $\str{T}$ such that $\cT$ is the restriction of $\str{T}$ to binary symbols. \medskip\noindent {\bf Normal form.} We say that an \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} formula $\phi$ is in \emph{normal form} if $\phi=\forall xy \chi(x,y) \wedge \bigwedge_{i \in I} \forall x (\lambda_i(x) \Rightarrow \exists y (\eta_i(x,y) \wedge \psi_i(x,y)))$, for some index set $I$, where $\chi(x,y)$ is quantifier-free, $\lambda_i(x)$ is an atomic formula $a(x)$ for some unary symbol $a$, $\psi_i(x,y)$ is a boolean combination of unary atomic formulas, and $\eta_i(x,y)$ is an order formula. Note that in $\chi$ the equality symbol may be used; e.g., we can enforce that a model contains at most one node satisfying $a$: $\forall xy (a(x) \wedge a(y) \Rightarrow x{=}y)$. The following lemma can be proved in a standard fashion (cf.~e.g., \cite{KMPT12}). \begin{lemma} \label{l:normalform} Let $\phi$ be an \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} formula over a signature $\tau$ and let $\cT$ be a tree frame. There exists a polynomially computable \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} normal form formula $\phi'$ over a signature $\tau'$ consisting of $\tau$ and some additional unary symbols, such that $\phi$ is satisfiable over $\cT$ (singularly satisfiable over $\cT$) iff $\phi'$ is satisfiable over $\cT$ (satisfiable over $\cT$ in a model that restricted to $\tau$ is singular). \end{lemma} Consider a conjunct $\phi_i=\forall x (\lambda_i(x) \Rightarrow \exists y (\eta_i(x,y) \wedge \psi_i(x,y)))$ of a normal form \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} formula $\phi$. 
Let $\str{T} \models \phi$, and let $v \in T$ be an element such that $\str{T} \models \lambda_i[v]$. Then an element $w \in T$ such that $\str{T} \models \eta_i[v,w] \wedge \psi_i[v,w]$ is called a \emph{witness} for $v$ and $\phi_i$. Sometimes, $w$ is called an \emph{upper} witness if $\eta_i(x,y) \models y {\downarrow_{\scriptscriptstyle +}} x$, a \emph{lower} witness if $\eta_i(x,y) \models x {\downarrow_{\scriptscriptstyle +}} y$, and a \emph{free} witness if $\eta_i(x,y) \models x {\not\sim} y$. \medskip\noindent {\bf Types.} An (atomic) {\em $1$-type}, over a signature $\tau=\tau_0 \cup \tau_{bin}$, is a subset of $\tau_0$. We often identify a $1$-type $\alpha$ with the formula $\bigwedge_{a \in \alpha} a(x) \wedge \bigwedge_{a \not\in \alpha} \neg a(x)$. For a given $\tau$-tree $\str{T}$, and $v \in T$, we denote by ${\rm tp}^\str{T}(v)$ the $1$-type \emph{realized} by $v$, i.e., the unique $1$-type $\alpha$ such that $\str{T} \models \alpha[v]$. A {\em full type} is a function $\bar{\alpha}:\Theta \rightarrow \cP(\tau_0)$, such that $\bar{\alpha}(\theta_{\uparrow})$, $\bar{\alpha}(\theta_{\rightarrow})$, $\bar{\alpha}(\theta_{\leftarrow})$ are singletons or empty, $\bar{\alpha}(\theta_{=})$ is a singleton, and if $\bar{\alpha}(\theta_{\uparrow})$ (respectively $\bar{\alpha}(\theta_{\downarrow})$, $\bar{\alpha}(\theta_{\leftarrow})$, $\bar{\alpha}(\theta_{\rightarrow})$) is empty then $\bar{\alpha}(\theta_{\uparrow \uparrow^+})$ (respectively $\bar{\alpha}(\theta_{\downarrow \downarrow_+})$, $\bar{\alpha}(\theta_{\leftleftarrows^+})$, $\bar{\alpha}(\theta_{\rightrightarrows^+})$) is also empty. We employ the following convention: for a given full type $\bar{\alpha}$ we denote by $\alpha$ the unique member of $\bar{\alpha}(\theta_{=})$. 
For a given $\tau$-tree $\str{T}$, and $v \in T$, we denote by ${\rm ftp}^\str{T}(v)$ the full type \emph{realized} by $v$, i.e., the unique full type $\bar{\alpha}$, such that $\alpha$ is the $1$-type of $v$, and for all $\theta \in \Theta$ we have that $\bar{\alpha}(\theta)= \{ {\rm tp}^{\str{T}}(w): \str{T} \models \theta[v,w] \}$. A {\em reduced full type} is a tuple $(\alpha, A, B, F)$, where $\alpha$ is a $1$-type and $A, B, F$ are sets of $1$-types. Reduced full types are used to keep the information recorded in full types in a compressed (and slightly lossy) form. Let ${\rm ftp}^\str{T}(v)=\bar{\alpha}$. By ${\rm rftp}^\str{T}(v)$ we denote the reduced full type \emph{realized} by $v$, i.e., the reduced full type $(\alpha, A, B, F)$, such that $A=\bar{\alpha}(\theta_{\uparrow}) \cup \bar{\alpha}(\theta_{\uparrow \uparrow^+})$, $B=\bar{\alpha}(\theta_{\downarrow}) \cup \bar{\alpha}(\theta_{\downarrow \downarrow_+})$ and $F=\bar{\alpha}(\theta_{\rightarrow}) \cup \bar{\alpha}(\theta_{\leftarrow}) \cup \bar{\alpha}(\theta_{\rightrightarrows^+}) \cup \bar{\alpha}(\theta_{\leftleftarrows^+}) \cup \bar{\alpha}(\theta_{\not\sim})$. Note that $\alpha$ denotes the $1$-type of $v$, and, informally speaking, $A$ is the set of $1$-types of elements realized \emph{above} $v$, $B$ is the set of $1$-types of elements realized \emph{below} $v$, and $F$ is the set of $1$-types of the siblings of $v$ and the elements realized in \emph{free} position to $v$. Note that the number of $1$-types is bounded exponentially, and the numbers of full types and reduced full types are bounded doubly exponentially in the size of the signature. For a given normal form \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} formula $\phi$ and a full type $\bar{\alpha}$, we say that $\bar{\alpha}$ is $\phi$-\emph{consistent} if an element realizing $\bar{\alpha}$ cannot be a member of a pair violating the universal conjunct $\forall xy \chi(x,y)$ of $\phi$, and has all witnesses required by $\phi$. 
Formally, $\bar{\alpha}$ is $\phi$-consistent if for every $\theta \in \Theta$, and every $\alpha' \in \bar{\alpha}(\theta)$ we have $\alpha(x) \wedge \alpha'(y) \wedge \theta(x,y) \models \chi(x,y) \wedge \chi(y,x)$, and for every conjunct $\forall x (\lambda_i(x) \Rightarrow \exists y (\eta_i(x,y) \wedge \psi_i(x,y)))$ of $\phi$, such that $\alpha(x) \models \lambda_i(x)$, there exists a $1$-type $\alpha' \in \bar{\alpha}(\eta_i)$ such that $\alpha(x), \alpha'(y) \models \psi_i(x,y)$. A proof of the following proposition is straightforward. \begin{proposition} \label{p:consistent_types} Let $\str{T}$ be a tree and let $\phi$ be a normal form \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{}-formula. Then $\str{T} \models \phi$ iff every full type realized in $\str{T}$ is $\phi$-consistent. \end{proposition} We say that a full type $\bar{\alpha}$ is \emph{combined} of two full types $\bar{\alpha}_1$ and $\bar{\alpha}_2$ if $\alpha=\alpha_1=\alpha_2$ and for each $\theta \in \Theta$ we have $\bar{\alpha}(\theta)=\bar{\alpha}_1(\theta)$ or $\bar{\alpha}(\theta)=\bar{\alpha}_2(\theta)$. The following fact is also immediate. \begin{proposition} \label{p:combined_types} Let $\phi$ be a normal form \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{}-formula, and let $\bar{\alpha}$ be a full type combined of two $\phi$-consistent full types $\bar{\alpha}_1, \bar{\alpha}_2$. Then $\bar{\alpha}$ is $\phi$-consistent. \end{proposition} \section{Finite ordered trees} This section is devoted to a proof of the following theorem. \begin{theorem} \label{t:finitetrees} The satisfiability problem for \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} over finite trees is \textsc{ExpSpace}-complete. \end{theorem} The crucial fact is that every satisfiable formula has a model of exponentially bounded depth and degree. We prove this in two steps, and then present a procedure, working in alternating exponential time, that searches for such small models. 
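Before turning to the proof, the type machinery can be pictured concretely. The sketch below is our own rendering (the representation of trees by strict-ancestor sets and of $1$-types by strings is an assumption, not the paper's formalism): it computes the reduced full type $(\alpha, A, B, F)$ of every node directly from the definition, with $F$ merging the $1$-types of siblings and of nodes in free position.

```python
# Sketch (ours): reduced full types computed from the definition.
# `ancestors[v]` is the set of strict ancestors of v; `tp` maps nodes
# to their 1-types.

def reduced_full_types(nodes, ancestors, tp):
    rftp = {}
    for v in nodes:
        A = {tp[u] for u in ancestors[v]}                 # realized above v
        B = {tp[u] for u in nodes if v in ancestors[u]}   # realized below v
        F = {tp[u] for u in nodes                         # siblings + free
             if u != v and u not in ancestors[v] and v not in ancestors[u]}
        rftp[v] = (tp[v], frozenset(A), frozenset(B), frozenset(F))
    return rftp
```

Along any root-to-leaf path the sets $A$ and $F$ can only grow and $B$ can only shrink; this is exactly the monotonicity exploited in the proof of Lemma \ref{l:shortpaths}.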
\medskip\noindent {\bf Short paths.} First, let us see how the paths of a model can be shortened. \begin{lemma} \label{l:surgery} Let $\phi$ be a normal form \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} formula, $\str{T}$ its model, and $v,w \in T$ two nodes of $\str{T}$, such that $\str{T} \models v {\downarrow_{\scriptscriptstyle +}} w$ and ${\rm rftp}^\str{T}(v)={\rm rftp}^\str{T}(w)$. Then the tree $\str{T}'$, obtained from $\str{T}$ by replacing the subtree rooted at $v$ by the subtree rooted at $w$, is a model of $\phi$. \end{lemma} \begin{proof} It can be verified that for every $u \in T'$, if $u {\not=} w$ then ${\rm ftp}^{\str{T}'}(u) = {\rm ftp}^{\str{T}}(u)$, and that ${\rm ftp}^{\str{T}'}(w)$ is combined of ${\rm ftp}^{\str{T}}(v)$ and ${\rm ftp}^{\str{T}}(w)$. Thus, by Proposition \ref{p:combined_types}, all full types realized in $\str{T}'$ are $\phi$-consistent, and hence $\str{T}' \models \phi$ by Proposition \ref{p:consistent_types}. \hfill $\Box$ \end{proof} Using the above lemma we can successively shorten ${\downarrow}$-paths in a model of a normal form formula $\phi$, obtaining after a finite number of steps a model of $\phi$ in which on every path only distinct reduced full types are realized. Even though there are potentially doubly exponentially many reduced full types, it can be shown that such a model has exponentially bounded ${\downarrow_{\scriptscriptstyle +}}$-paths. \begin{lemma} \label{l:shortpaths} Let $\phi$ be a normal form \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} formula satisfied in a finite tree. Then there exists a tree model of $\phi$ whose every ${\downarrow}$-path has length bounded by $3 \cdot (2^{2\cdot |\tau_0(\phi)|})$, that is, exponentially in $|\phi|$. \end{lemma} \begin{proof} Let $\str{T} \models \phi$ be a tree in which on every ${\downarrow}$-path only distinct reduced full types are realized, and let $v_1, v_2, \ldots, v_k$ be a ${\downarrow}$-path in $\str{T}$. 
Observe that the sets $A, B, F$ in the reduced full types of the nodes $v_i$ behave monotonically along the path. More precisely, if $(\alpha_i, A_i, B_i, F_i)$ is the reduced full type realized by $v_i$, for $1 \le i \le k$, then for $i<j$ we have $A_i \subseteq A_j$, $B_i \supseteq B_j$ and $F_i \subseteq F_j$. Thus along the path each of the sets $A, B, F$ is modified at most $2^{|\tau_0(\phi)|}$ times (since this is the number of possible $1$-types). The number of reduced full types with fixed $A, B, F$ is equal to the number of $1$-types, so the length of each path is bounded as required. \hfill $\Box$ \end{proof} \noindent {\bf Small degree.} Now we observe that to provide all witnesses for $\forall \exists$ conjuncts of $\phi$ we only need nodes with at most exponential degree. \begin{lemma} \label{l:narrowtrees} Let $\phi$ be a normal form \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} formula and let $\str{T} \models \phi$. Then there exists a model $\str{T}' \models \phi$ in which the number of successors of each node is bounded by $4 \cdot 2^{2\cdot|\tau_0(\phi)|}$. Moreover $\str{T}'$ can be obtained by removing from $\str{T}$ some number of elements (together with the subtrees rooted at them). \end{lemma} \begin{proof} We show first how to decrease the degree of a single node of $\str{T}$. Let $v$ be a node of $\str{T}$ of full type $\bar{\alpha}_v$, and let $U$ be the set of the children of $v$. For every element $u \in U$ let $\bar{\alpha}_u$ be its full type. We are going to mark some important elements of $U$ and then remove all subtrees rooted at unmarked ones, producing a model $\str{T}''' \models \phi$. First, for every $1$-type $\alpha$, if $\alpha$ is realized in $U$ precisely once then mark this realisation; if $\alpha$ is realized more than once then mark the minimal and the maximal (with respect to ${\rightarrow^{\scriptscriptstyle +}}$) realisations of $\alpha$. 
Further, for every $1$-type $\alpha$, let $U_\alpha=\{u \in U \mid \alpha \in \bar{\alpha}_u(\theta_{\downarrow}) \cup \bar{\alpha}_u(\theta_{\downarrow \downarrow_+})\}$. For each $\alpha$ mark $\min(2, |U_\alpha|)$ elements of $U_\alpha$. Note that so far we have marked at most $4 \cdot 2^{|\tau_0(\phi)|}$ elements of $U$. Assume that these (listed according to ${\rightarrow^{\scriptscriptstyle +}}$) are: $u_1, \ldots, u_k$. We call them \emph{primarily marked} elements, and denote their set by $U_P$. Consider the tree $\str{T}''$ obtained from $\str{T}$ by removing the subtrees rooted at elements of $U \setminus U_P$. It can be verified that elements from $T'' \setminus U_P$ retain in $\str{T}''$ their full types from $\str{T}$. Unfortunately, the ${\rightarrow}$-connections among the elements of $U_P$ in $\str{T}''$ may be inconsistent with $\phi$. To fix this problem we mark some additional elements of $U$ (at most exponentially many) between $u_i$ and $u_{i+1}$, for all $i$.\footnote{Actually, this fragment of the construction combined with some earlier parts, reproduces the small model theorem for \mbox{$\mbox{\rm FO}^2$}{} over words.} For every $i$, consider the ${\rightarrow}$-chain $C$ of elements of $\str{T}$ between $u_i$ and $u_{i+1}$. If $C$ is empty then $u_{i+1}$ is the ${\rightarrow}$-successor of $u_i$ and there is nothing to do. Otherwise, let $\alpha$ be the $1$-type of the ${\rightarrow}$-successor $w$ of $u_i$ in $\str{T}$. Find the maximal (with respect to ${\rightarrow^{\scriptscriptstyle +}}$) element $w'$ of type $\alpha$ in $C$, and mark it. The elements between $u_i$ and $w'$ will never be marked, so $w'$ will become the ${\rightarrow}$-successor of $u_i$ in the final model $\str{T}'''$. Thus, $u_i$ will retain in its full type its $\bar{\alpha}_{u_i}(\theta_{\rightarrow})$ (singleton) set, and, due to our strategy of primarily marking maximal realisations of $1$-types, also its $\bar{\alpha}_{u_i}(\theta_{\rightrightarrows^+})$ set. 
This is not necessarily true for $w'$ and its (singleton) $\bar{\alpha}_{w'}(\theta_{\leftarrow})$ set and its $\bar{\alpha}_{w'}(\theta_{\leftleftarrows^+})$ set. However, these sets will be equal, respectively, to the $\bar{\alpha}_w(\theta_{\leftarrow})$ and $\bar{\alpha}_w(\theta_{\leftleftarrows^+})$ sets of $w$, which means that the full type of $w'$ in $\str{T}'''$ will be combined of two full types (of $w$ and $w'$) from $\str{T}$. We proceed recursively with the ${\rightarrow}$-chain of elements between $w'$ and $u_{i+1}$. Note that the number of elements between $u_i$ and $u_{i+1}$ which are marked during this process is bounded by the number of $1$-types. Thus we mark in total at most $4 \cdot 2^{|\tau_0(\phi)|} \cdot 2^{|\tau_0(\phi)|}$ elements of $U$, as required in the statement of this lemma. Let us denote by $U_M$ the set of the marked elements. We construct $\str{T}'''$ by removing from $\str{T}$ all subtrees rooted at elements of $U \setminus U_M$. It can be verified that all elements from $T''' \setminus U_M$ retain their full types from $\str{T}$, and that the full types of elements from $U_M$ in $\str{T}'''$ are either retained from $\str{T}$ or are combined of pairs of full types in $\str{T}$ of elements from $U$. By Proposition \ref{p:consistent_types} we have that $\str{T}''' \models \phi$. The desired model $\str{T}'$ can be obtained by applying the described procedure in a depth-first manner. \hfill $\Box$ \end{proof} \medskip\noindent {\bf Alternating procedure and complexity.} We are ready to design a procedure checking if a given \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} formula $\phi$ has a finite tree model. By Lemma \ref{l:normalform} we may assume that $\phi$ is in normal form. By Lemma \ref{l:shortpaths} and Lemma \ref{l:narrowtrees} we may restrict our attention to models in which the length of each path and the degree of each node are bounded exponentially in $|\phi|$. We present an alternating procedure working in exponential time. 
This shows that the problem is in \textsc{ExpSpace}{} since, by \cite{CKS81}, \textsc{ExpSpace}=\textsc{AExpTime}{}. The procedure first guesses the full type of the root and then guesses the full types of its children, checking if the information recorded in the full types is locally consistent, and if each full type is $\phi$-consistent. Further, it works in a loop, universally choosing one of the types of the children and proceeding similarly. \medskip\noindent {\bf Procedure} {\tt {FO$^2$[${\downarrow}, {\downarrow_{\scriptscriptstyle +}}, {\rightarrow}, {\rightarrow^{\scriptscriptstyle +}}$]-sat-test}}\\ {\bf input:} an \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} normal form formula $\phi$ \begin{itemize}\itemsep0pt \item let $maxdepth:=3 \cdot 2^{2\cdot|\tau_0(\phi)|}$; let $maxdegree:=4 \cdot 2^{2\cdot|\tau_0(\phi)|}$; \item let $level:=0$; \item {\bf guess} a full type $\bar{\alpha}$ such that $\bar{\alpha}(\theta_{\uparrow})=\bar{\alpha}(\theta_{\uparrow \uparrow^+})=\bar{\alpha}(\theta_{\rightarrow})=\bar{\alpha}(\theta_{\rightrightarrows^+})= \bar{\alpha}(\theta_{\leftarrow})=\bar{\alpha}(\theta_{\leftleftarrows^+})=\bar{\alpha}(\theta_{\not\sim})=\emptyset$; \item {\bf while} $level < maxdepth$ {\bf do} \item \hspace*{20pt} if $\bar{\alpha}$ is not $\phi$-consistent then {\bf reject} \item \hspace*{20pt} if $\bar{\alpha}(\theta_{\downarrow}) \cup \bar{\alpha}(\theta_{\downarrow \downarrow_+}) = \emptyset$ then {\bf accept} \item \hspace*{20pt} {\bf guess} an integer $1 \le k \le maxdegree$; \item \hspace*{20pt} for $1 \le i \le k$ {\bf guess} a full type $\bar{\alpha}_i$; \item \hspace*{20pt} if not $locally$-$consistent(\bar{\alpha}, \bar{\alpha}_1, \ldots, \bar{\alpha}_k)$ then {\bf reject}; \item \hspace*{20pt} $level:= level + 1$; \item \hspace*{20pt} {\bf universally choose} $1 \le i \le k$; let $\bar{\alpha}=\bar{\alpha}_i$; \item {\bf endwhile} \item {\bf reject} \end{itemize} The function $locally$-$consistent$ checks whether, from a 
local point of view, a tree may have a node of full type $\bar{\alpha}$ whose children, listed from left to right, have full types $\bar{\alpha}_1, \ldots, \bar{\alpha}_k$. Namely, it returns {\bf true} if and only if all of the following conditions hold: \medskip \noindent{\em Horizontal conditions:}\\ (h1) $\bar{\alpha}_i(\theta_{\leftarrow})=\{\alpha_{i-1}\}$ for $i>1$; $\bar{\alpha}_1(\theta_{\leftarrow})=\emptyset$;\\ (h2) $\bar{\alpha}_i(\theta_{\rightarrow})=\{\alpha_{i+1}\}$ for $i<k$; $\bar{\alpha}_k(\theta_{\rightarrow})=\emptyset$;\\ (h3) $\bar{\alpha}_i(\theta_{\leftleftarrows^+})=\bar{\alpha}_{i-1}(\theta_{\leftarrow}) \cup \bar{\alpha}_{i-1}(\theta_{\leftleftarrows^+})$ for $i>1$; $\bar{\alpha}_1(\theta_{\leftleftarrows^+})=\emptyset$;\\ (h4) $\bar{\alpha}_i(\theta_{\rightrightarrows^+})=\bar{\alpha}_{i+1}(\theta_{\rightarrow}) \cup \bar{\alpha}_{i+1}(\theta_{\rightrightarrows^+})$ for $i<k$; $\bar{\alpha}_k(\theta_{\rightrightarrows^+})=\emptyset$;\\ \noindent{\em Vertical conditions:}\\ (v1) $\bar{\alpha}(\theta_{\downarrow})=\{\alpha_1, \ldots, \alpha_k \}$;\\ (v2) $\bar{\alpha}_i(\theta_{\uparrow})=\{\alpha \}$ for $1 \le i \le k$;\\ (v3) $\bar{\alpha}(\theta_{\downarrow \downarrow_+})=\bigcup_{1 \le i \le k} (\bar{\alpha}_i(\theta_{\downarrow}) \cup \bar{\alpha}_i(\theta_{\downarrow \downarrow_+}))$;\\ (v4) $\bar{\alpha}_i(\theta_{\uparrow \uparrow^+})=\bar{\alpha}(\theta_{\uparrow}) \cup \bar{\alpha}(\theta_{\uparrow \uparrow^+})$ for $1 \le i \le k$;\\ \noindent{\em Free conditions:}\\ (f1) $\bar{\alpha}_i(\theta_{\not\sim})=\bigcup_{j {\not=} i} (\bar{\alpha}_j(\theta_{\downarrow}) \cup \bar{\alpha}_j(\theta_{\downarrow \downarrow_+})) \cup \bar{\alpha}(\theta_{\leftleftarrows^+}) \cup \bar{\alpha}(\theta_{\leftarrow}) \cup \bar{\alpha}(\theta_{\rightarrow}) \cup \bar{\alpha}(\theta_{\rightrightarrows^+}) \cup \bar{\alpha}(\theta_{\not\sim}) $ for $1 \le i \le k$. 
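The conditions (h1)--(f1) can be transcribed into code almost verbatim. The following sketch uses our own data representation, not the paper's: a full type is a dictionary mapping order-formula names ("=", "up", "up+", "down", "down+", "<-", "->", "<<+", ">>+", "free") to sets of $1$-types, with the "=" entry a singleton holding the node's own $1$-type; all helper names are hypothetical.

```python
# Illustrative transcription (ours) of the local-consistency test for a node
# of full type t whose children, listed left to right, have full types ts.

def alpha(t):
    (a,) = t["="]          # theta_= is always a singleton
    return a

def locally_consistent(t, ts):
    k = len(ts)
    ok = True
    for i, ti in enumerate(ts):
        left = ts[i - 1] if i > 0 else None
        right = ts[i + 1] if i < k - 1 else None
        ok = ok and ti["<-"] == ({alpha(left)} if left is not None else set())    # (h1)
        ok = ok and ti["->"] == ({alpha(right)} if right is not None else set())  # (h2)
        ok = ok and ti["<<+"] == ((left["<-"] | left["<<+"])
                                  if left is not None else set())                 # (h3)
        ok = ok and ti[">>+"] == ((right["->"] | right[">>+"])
                                  if right is not None else set())                # (h4)
        ok = ok and ti["up"] == {alpha(t)}                                        # (v2)
        ok = ok and ti["up+"] == (t["up"] | t["up+"])                             # (v4)
        others = set().union(set(), *(ts[j]["down"] | ts[j]["down+"]
                                      for j in range(k) if j != i))
        ok = ok and ti["free"] == (others | t["<<+"] | t["<-"] | t["->"]
                                   | t[">>+"] | t["free"])                        # (f1)
    ok = ok and t["down"] == {alpha(ti) for ti in ts}                             # (v1)
    ok = ok and t["down+"] == set().union(set(), *(ti["down"] | ti["down+"]
                                                   for ti in ts))                 # (v3)
    return ok
```

The sketch checks the same equalities as (h1)--(f1); it is a direct, unoptimised rendering intended only to make the conditions concrete.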
\begin{lemma} \label{l:procedure} Procedure {\tt {FO$^2$[${\downarrow}, {\downarrow_{\scriptscriptstyle +}}, {\rightarrow}, {\rightarrow^{\scriptscriptstyle +}}$]-sat-test}} accepts its input $\phi$ if and only if $\phi$ is satisfied in a finite tree. \end{lemma} A matching \textsc{ExpSpace}{} lower bound follows from \cite{Kie02}, where it was shown that a restricted variant of the two-variable guarded fragment with some unary predicates and a single binary predicate interpreted as a transitive relation is \textsc{ExpSpace}-hard. It is not hard to see that the proof presented there goes through (actually, it is even more natural) if we restrict the class of admissible structures to (finite) trees. Thus we get the following corollary. \begin{corollary} Over finite trees the satisfiability problem for each logic between \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} and \mbox{$\FOt[\succv, \lessv, \succh, \lessh]$}{} is \textsc{ExpSpace}-complete. \end{corollary} \section{Singular finite trees} We start this section by establishing the complexity of \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{}. \begin{theorem} \label{t:fotsing} The satisfiability problem for \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over finite singular trees is \textsc{NExpTime}-complete. \end{theorem} To show the upper bound we observe that every singularly satisfiable formula has a singular model in which the length of every path is polynomially bounded. This fact is a generalisation of Theorem 2.1.1 from \cite{Weis11}, which states that every \mbox{$\mbox{\rm FO}^2$}$[<]$ formula $\phi$, singularly satisfiable over finite words, has a finite singular model with polynomially many elements. Actually, our work is strongly influenced by the construction from \cite{Weis11}, and, generally, can be seen as its adaptation to the case of trees. 
We describe here all the required constructions, but omit some proofs, as many of them are obtained by obvious adjustments of the corresponding proofs for the case of words. Thus, in order to fully understand all the details, we advise the reader to familiarise themselves with Chapter 2 of \cite{Weis11}. Before going further we discuss the main differences from the case of words. The main idea from \cite{Weis11} is to show that for a given singular word $\str{W} \models \phi$, a letter $a \in \tau_0$ and a given subformula $\xi(x)$ of $\phi$ there exists a division of $\str{W}$ into polynomially many segments in which, at elements satisfying $a$, the value of $\xi(x)$ is constant. In our case the role of those segments is played by \emph{slices}, i.e., connected subgraphs of trees. We show that each path intersects polynomially many slices. In \cite{Weis11} \emph{left} and \emph{right} witnesses are considered. In our case they correspond to \emph{upper} and \emph{lower} witnesses (which, however, in contrast to the case of words, are not necessarily linearly ordered), but we must also deal with \emph{free} witnesses. Finally, the small model is constructed by picking at most three witnesses for each slice. As the total number of considered slices in a tree may be exponential we have to be careful at this point to avoid choosing too many witnesses from a single path. Now we turn to technical details. Recall that in the current scenario we have four order formulas $x {\downarrow_{\scriptscriptstyle +}} y$, $x {=} y$, $y {\downarrow_{\scriptscriptstyle +}} x$ and $x {\not\sim} y$. We also use a shorthand: $x {\downarrow_{\scriptscriptstyle *}} y = x {\downarrow_{\scriptscriptstyle +}} y \vee x {=} y$. The normal form from Lemma \ref{l:normalform} is not very useful since it introduces fresh unary predicates that would destroy the singularity of models. Thus, we only slightly adjust formulas by converting them to existential negation normal form (ENNF). 
A formula $\phi \in \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{}$ is in ENNF if it does not contain any universal quantifier, and negations only appear in front of unary predicates or existential quantifiers. Negations in front of order formulas are not allowed. Obviously, any formula $\phi \in \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{}$ is equivalent to a formula in ENNF of size at most $2|\phi|$. We may view our formulas as positive boolean combinations of order formulas and formulas with at most one free variable. \begin{proposition} \label{p:betaform} Let $\phi \in \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{}$ be a formula in ENNF. Then there exists a number $s \in \mathbb{N}$, a positive boolean formula $\beta$ in variables $Z_{{\downarrow_{\scriptscriptstyle +}}}, Z_=, Z_{{\uparrow^{\scriptscriptstyle +}}}, Z_{{\not\sim}},$ $X_1, \dots, X_s$, and formulas $\phi_1 ,\dots, \phi_s \in \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{}$ in ENNF, each with at most one free variable, such that $\phi = \beta(x {\downarrow_{\scriptscriptstyle +}} y, x {=} y, y {\downarrow_{\scriptscriptstyle +}} x, x {\not\sim} y, \phi_1 , \dots, \phi_s).$ Moreover $\phi \equiv (x {\downarrow_{\scriptscriptstyle +}} y \wedge \phi_{\upharpoonright x {\downarrow_{\scriptscriptstyle +}} y}) \vee (x {=} y \wedge \phi_{\upharpoonright x=y}) \vee (y {\downarrow_{\scriptscriptstyle +}} x \wedge \phi_{\upharpoonright y {\downarrow_{\scriptscriptstyle +}} x}) \vee (x {\not\sim} y \wedge \phi_{\upharpoonright x {\not\sim} y})$ where $\phi_{\upharpoonright x {\downarrow_{\scriptscriptstyle +}} y} = \beta(\top, \bot, \bot, \bot, \phi_1 , \dots, \phi_s)$, and $\phi_{\upharpoonright \theta}$ is analogously defined for the remaining $\theta$-s. 
\end{proposition} For a finite tree $\str{T}$ and a set of nodes $P \subseteq T$ we define $max(P)$ as the set of the maximal nodes from $P$ and $min(P)$ as the set of the minimal nodes from $P$, with respect to the order relation ${\downarrow_{\scriptscriptstyle +}}$. For example $max(T)$ is the set of the leaves and $min(T)$ is the singleton consisting of the root of $\str{T}$. \begin{lemma}\label{l:pqr} Let $\zeta_1(y), \dots, \zeta_t(y)$ be \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formulas with $y$ as the only free variable and in ENNF, and let $\str{T}$ be a finite singular tree. Let $\beta$ be a positive boolean formula in the variables $Z_{{\downarrow_{\scriptscriptstyle +}}}, Z_=, Z_{{\uparrow^{\scriptscriptstyle +}}}, Z_{{\not\sim}}, Y_1 ,\dots , Y_t$, let $\psi(x, y) = \beta(x {\downarrow_{\scriptscriptstyle +}} y, x {=} y,$ $y {\downarrow_{\scriptscriptstyle +}} x, x {\not\sim} y,$ $\zeta_1(y), \dots , \zeta_t(y))$, and let $\phi(x) = \exists y \psi(x, y)$. Let $P' := \{u \in T \rvert \str{T} \models \psi_{\upharpoonright x {\downarrow_{\scriptscriptstyle +}} y} [u,v],$ for some $v$ s.t. $u {\downarrow_{\scriptscriptstyle +}} v \}$, $Q' := \{u \in T \rvert \str{T} \models \psi_{\upharpoonright y {\downarrow_{\scriptscriptstyle +}} x}[u,v],$ for some $v$ s.t. $v {\downarrow_{\scriptscriptstyle +}} u \}$, $R' := \{u \in T \rvert \str{T} \models \psi_{\upharpoonright x {\not\sim} y}[u,v],$ for some $v$ s.t. $u {\not\sim} v\}.$ Set $P = max(P')$ and $Q = min(Q' \cup R')$. Then for all $u \in \str{T}$, $\str{T} \models \phi[u]$ iff there exists $p \in P$ s.t. $ u {\downarrow_{\scriptscriptstyle *}} p$ or there exists $q \in Q$ s.t. $ q {\downarrow_{\scriptscriptstyle *}} u$ or $\str{T} \models \psi_{\upharpoonright x {=} y}[u,u]$. \end{lemma} \noindent {\it Remark}. Notice that on every path in $\str{T}$ there is at most one point from $P$ and at most one point from $Q$.
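For intuition, the operators $max(P)$ and $min(P)$ used throughout this section can also be read operationally. The following Python fragment is our own illustration, not part of the paper's development; it assumes a hypothetical representation of trees as child-to-parent dictionaries.

```python
def ancestors(parent, u):
    """Strict ancestors of node u in a tree given as a child -> parent map."""
    result = set()
    while parent[u] is not None:
        u = parent[u]
        result.add(u)
    return result

def max_nodes(parent, P):
    """max(P): nodes of P with no strict descendant in P."""
    return {u for u in P if all(u not in ancestors(parent, v) for v in P if v != u)}

def min_nodes(parent, P):
    """min(P): nodes of P with no strict ancestor in P."""
    return {u for u in P if not (ancestors(parent, u) & P)}

# Root 0 with children 1 and 2; node 3 is a child of 1.
parent = {0: None, 1: 0, 2: 0, 3: 1}
assert max_nodes(parent, set(parent)) == {2, 3}   # max(T) = the leaves
assert min_nodes(parent, set(parent)) == {0}      # min(T) = the root
```

The two assertions check the example from the text: for the whole node set, $max(T)$ is the set of leaves and $min(T)$ is the singleton root.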
\medskip Let $a \in \tau_0$ be a letter, $\str{T}$ a finite singular tree, and $S$ a set of nodes of $\str{T}$. Then by $S^a$ we denote the set of nodes in $S$ where the letter $a$ occurs. We also say that $S$ is a tree \emph{slice} iff it induces a connected (with respect to the symmetric closure of the child relation ${\downarrow}$) subgraph of $\str{T}$. \begin{lemma} \label{l:intervals} Let $\phi \in \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{}$ be a formula in ENNF with one free variable, let $\str{T}$ be a finite singular tree, and let $a \in \tau_0$. There is a set $S \subseteq T$ which is a union of tree slices in $\str{T}$ such that: for every $u \in T^a$ we have $\str{T} \models \phi[u]$ iff $u \in S$; and every path in $\str{T}$ intersects at most $|\phi|^2$ tree slices from $S$. \end{lemma} The proof is inductive: if $\phi=\exists y \psi(x,y)$, for $\psi(x,y)= \beta(x {\downarrow_{\scriptscriptstyle +}} y, x {=} y, y {\downarrow_{\scriptscriptstyle +}} x,\linebreak x {\not\sim} y, \xi_1(x), \dots, \xi_s(x), \zeta_1(y), \dots, \zeta_t(y))$, then we consider the slices obtained inductively for the formulas $\xi_\sigma(x)$. The slices for different $\sigma$-s may overlap. Their endpoints determine a more refined division into slices, such that in each slice, on nodes carrying $a$, the values of all $\xi_\sigma(x)$ are constant. In each such slice we apply Lemma \ref{l:pqr} to introduce new divisions. Now arguments and calculations similar to those in the proof of the corresponding Lemma 2.1.10 from \cite{Weis11} lead to the desired claim. \begin{lemma} \label{l:shortpathssing} Let $\phi$ be an \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula over a signature $\tau$. If $\phi$ is satisfied in a singular tree, then $\phi$ is also satisfied in a singular tree in which the length of every path is bounded by $6 \cdot |\tau| \cdot |\phi|^3$. \end{lemma} \begin{proof} We assume that $\phi$ is in ENNF.
Let $\str{T} \models \phi$ be singular, and let $\phi_1, \dots, \phi_k$ be the subformulas of $\phi$ of the form $\exists x \psi$ for some variable $x$ and some formula $\psi$. For every $\kappa \in [1,\dots, k]$ we use Proposition \ref{p:betaform} to find a positive boolean formula $\beta$ such that $\psi_\kappa(x,y) = \beta(x {\downarrow_{\scriptscriptstyle +}} y, x{=}y, y {\downarrow_{\scriptscriptstyle +}} x, x {\not\sim} y, \xi_1(x), \dots, \xi_s(x), \zeta_1(y),$ $\dots,$ $\zeta_t(y))$. For every $a \in \tau_0$ and every $\sigma \in [1, \dots, s]$ let $S_\sigma^a$ be a set as in Lemma \ref{l:intervals} applied to the formula $\xi_\sigma(x)$ and $a$, where every path intersects at most $|\xi_\sigma|^2$ tree slices from $S_\sigma^a$. Thus there is a set $\str{I}^a_\kappa$ of tree slices $I$ such that: every path in $\str{T}$ intersects at most $2 \cdot \sum_{\sigma \in [1,s]}|\xi_\sigma|^2$ of them; $\bigcup_{I \in \str{I}^a_\kappa}I = \str{T}$; and there are $\xi_1^I, \dots, \xi_s^I \in \{\top, \bot\}$ such that $\str{T} \models \xi_\sigma[u]$ iff $\xi_\sigma^I = \top$ for every $u \in I$ satisfying $a[u]$. For each $I \in \str{I}^a_\kappa$ we consider the formula $\phi^{I}_{\kappa} = \exists y \psi_\kappa^I(x,y)$, where $\psi_\kappa^I(x,y) = \beta(x {\downarrow_{\scriptscriptstyle +}} y, x{=}y, y {\downarrow_{\scriptscriptstyle +}} x, x {\not\sim} y,\xi_1^I, \dots, \xi_s^I, \zeta_1(y), \dots, \zeta_t(y)).$ Let $P'' := \{v \in \str{T} \ \rvert \ \str{T} \models \psi^I_{\kappa \upharpoonright x {\downarrow_{\scriptscriptstyle +}} y}[u,v]$ for all $u {\downarrow_{\scriptscriptstyle +}} v\}$, $Q'' := \{v \in \str{T} \ \rvert \ \str{T} \models \psi^I_{\kappa \upharpoonright y {\downarrow_{\scriptscriptstyle +}} x}[u,v]$ for all $v {\downarrow_{\scriptscriptstyle +}} u\}$, $R'' := \{v \in \str{T} \ \rvert \ \str{T} \models \psi^I_{\kappa \upharpoonright x {\not\sim} y}[u,v] \text{ for all $u {\not\sim} v$}\}$, and let $P_I = max(P''), Q_I = min(Q''), R_I = max(R'')$.
Let $T_\kappa^a = \bigcup_{I \in \str{I}^a_\kappa}(P_I \cup Q_I \cup R_I)$, $T' = \bigcup_{\kappa \in [1,k], a \in \tau_0}T_\kappa^a$, and let $\str{T}'$ be the restriction of $\str{T}$ to $T'$. Lemma \ref{l:pqr} can be used to prove that $\str{T}' \models \phi$. Also it can be shown that the paths of $\str{T}'$ are bounded as required. \hfill $\Box$ \end{proof} \begin{corollary} \label{c:smalltree} Let $\phi$ be a singularly satisfiable \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula. Then it is satisfied in a singular tree whose number of nodes is exponential in $|\phi|$. \end{corollary} \begin{proof} Let $\str{T} \models \phi$ be a singular tree over the signature $\tau$, and let $\cT$ be the frame of $\str{T}$. By Lemma \ref{l:shortpathssing} we may assume that all its paths are bounded polynomially. Let $\phi'$ be the normal form formula over the signature $\tau'$ from the statement of Lemma \ref{l:normalform}. By that lemma $\phi'$ is satisfiable in a model $\str{T}'$ based on the frame $\cT$. By Lemma \ref{l:narrowtrees} we can remove some subtrees from $\str{T}'$ to obtain a model $\str{T}'' \models \phi'$ with exponentially bounded degree. Again by Lemma \ref{l:normalform}, the restriction of $\str{T}''$ to the original signature $\tau$ is a singular model. As its paths are bounded polynomially and the degree of nodes is bounded exponentially, the total number of nodes is bounded exponentially in $|\phi|$ as required. \hfill $\Box$ \end{proof} Corollary \ref{c:smalltree} justifies the upper bound from Theorem \ref{t:fotsing}, since for a given $\phi$ we can nondeterministically guess a model of exponential size and then verify it. The exponential bound on the degree of nodes in singular models of \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formulas is essentially optimal. Indeed, let us see that there exists a formula of size polynomial in $n$ in every model of which the root has at least $2^n$ children.
We use unary predicates $\mathit{root}, \mathit{elem}, b_0, \ldots, b_{n-1}$, and say that all elements in $\mathit{elem}$ are children of the root: $\forall x (\mathit{root}(x) \Leftrightarrow \neg \exists y \; y {\downarrow_{\scriptscriptstyle +}} x) \wedge \forall x (\mathit{elem}(x) \Leftrightarrow (\neg \exists y (y {\downarrow_{\scriptscriptstyle +}} x \wedge \neg \mathit{root}(y))))$. We view each $v$ in $\mathit{elem}$ as encoding a number $0 \le N(v) < 2^n$ such that the $i$-th bit in its binary representation is $1$ iff the formula $\delta_i(x)=\exists y (x {\downarrow_{\scriptscriptstyle +}} y \wedge b_i(y))$ is satisfied at $v$. In a standard way we can now write a formula $\it{first}(x)$ which says that $N(x)=0$, a formula $\it{last}(x)$ stating that $N(x)=2^n-1$, and a formula $\it{succ}(x,y)$ saying that $N(y)=N(x)+1$. Now the formula $\exists x \; \it{first}(x) \wedge \forall x (\neg \it{last}(x) \Rightarrow \exists y \; \it{succ}(x,y))$ is as required. This idea can easily be employed to obtain the \textsc{NExpTime} lower bound in Theorem \ref{t:fotsing}. It turns out that the ability to speak about pairs of nodes in free position is crucial for \textsc{NExpTime}-hardness. Indeed, if we allow only guarded formulas, we get \textsc{PSpace}{} complexity. The upper bound in the following theorem can be proved by bounding polynomially not only the length of the paths but also the degree of the nodes in models of \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formulas. \begin{theorem} \label{t:gfto} The satisfiability problem for \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over finite singular trees is \textsc{PSpace}-complete. \end{theorem} Finally we show that augmenting \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} with any of the remaining binary navigational predicates leads to an \textsc{ExpSpace} lower bound over singular trees.
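As a sanity check of the counting argument above, the following Python sketch (our own illustration, not part of the paper) abstracts each $\mathit{elem}$ child to the number $N(v)$ it encodes via its $b_i$-labelled children, and closes the set $\{0\}$ under the $\mathit{succ}$ obligation; this forces $2^n$ pairwise distinct children.

```python
def forced_values(n):
    """Close {0} under the succ-obligation: while a value is not 'last'
    (i.e. not 2**n - 1), some child must encode its successor."""
    values = {0}              # first(x): some elem node encodes 0
    current = 0
    while current != 2 ** n - 1:
        current += 1          # succ(x, y): N(y) = N(x) + 1
        values.add(current)
    return values

assert len(forced_values(4)) == 2 ** 4   # 16 pairwise distinct children
```

Since distinct values of $N$ require distinct children of the root, any model of the formula indeed has at least $2^n$ children of the root.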
\begin{theorem} \label{t:lowerbounds} The satisfiability problem over singular trees for each of the logics \mbox{$\mbox{\rm GF}^2$}$[{\downarrow_{\scriptscriptstyle +}},{\downarrow}]$, \mbox{$\mbox{\rm GF}^2$}$[{\downarrow_{\scriptscriptstyle +}}, {\rightarrow}]$, \mbox{$\mbox{\rm GF}^2$}$[{\downarrow_{\scriptscriptstyle +}}, {\rightarrow^{\scriptscriptstyle +}}]$ is \textsc{ExpSpace}-hard. \end{theorem} \section{Future work} One possible direction for further research is to investigate the case in which infinite trees are admitted as models. It seems that the complexity results we have obtained for finite trees can be transferred to this case without major difficulties. It would also be interesting to examine the cases in which $\tau_{bin}$ contains ${\downarrow}$ but does not contain ${\downarrow_{\scriptscriptstyle +}}$. A related result is obtained in \cite{CW13}, which shows \textsc{NExpTime}-completeness of \mbox{$\mbox{\rm FO}^2$}{} with counting quantifiers and an arbitrary number of binary symbols, two fixed ones of which have to be interpreted as child relations in two trees. The trees considered in \cite{CW13} are, however, ranked and unordered. \medskip\noindent {\bf Acknowledgement.} Similar results were obtained independently in \cite{BBLW13}. The two works were merged into a single paper \cite{BBCKLMW13}. \bibliographystyle{plain} \section{Lower bounds for logics over singular trees} \begin{theorem}\label{th:singFOto} The satisfiability problem for \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over singular finite trees is \textsc{NExpTime}-hard. \end{theorem} \begin{proof} We give a~reduction from the satisfiability problem of unary \mbox{$\mbox{\rm FO}^2$}, which is known to be \textsc{NExpTime}-complete~(see e.g., \cite{EVW02}).
For a~given $\mbox{$\mbox{\rm FO}^2$}$ formula $\phi$ over a~unary signature $\tau$ we construct an equisatisfiable \mbox{$\mbox{\rm FO}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} formula $\transl{\phi}$ over the signature $\tau\cup\{{\downarrow_{\scriptscriptstyle +}},\mathit{elem}\}$ where $\mathit{elem}$ is a~fresh unary predicate. Without loss of generality we may assume that $\phi$ is built from variables $x,y$, unary predicate symbols, boolean connectives $\wedge,\neg$ and existential quantification. Now we inductively define the translation $\transl{\phi}$. \begin{eqnarray*} \transl{p(x)}&=& \exists y\; x{{\downarrow_{\scriptscriptstyle +}}}y \wedge p(y)\\ \transl{\neg \phi} &= & \neg\transl{\phi}\\ \transl{\phi_1\wedge\phi_2}& = & \transl{\phi_1}\wedge\transl{\phi_2}\\ \transl{\exists x\; \psi} &= & \exists x\; \mathit{elem}(x)\wedge \transl{\psi} \end{eqnarray*} Note that $\transl{\phi}$ is a formula of length linear in $|\phi|$. It remains to be shown that $\phi$ and $\transl{\phi}$ are equisatisfiable. For one direction, assume that $\str{A}$ is a~model of $\phi$. Construct a~tree $\str{T}$ such that all elements of the universe of~$\str{A}$ are immediate successors of the root of $\str{T}$ and are labeled $\mathit{elem}$; each such element $e$ has as many immediate successors as there are predicates in $\tau$ that are true of $e$, and each such successor is a~leaf labeled with a~distinct predicate true of~$e$ in~$\str{A}$, see Figure~\ref{fig:repr}. It can be easily proved by induction on the structure of $\phi$ that $\str{T}$ is a~(singular) model of $\transl{\phi}$.
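The two-level encoding and the semantics of the atomic translation $\transl{p(x)}$ can be sketched in a few lines of Python (our own illustration; the dictionary representation of structures and the function names are ours, not the paper's):

```python
def encode(structure):
    """structure: dict mapping an element to the set of unary predicates
    true of it.  The result abstracts the children of the root: each elem
    node is mapped to the labels of its predicate leaves."""
    return {e: set(preds) for e, preds in structure.items()}

def tr_atom_holds(tree, e, p):
    """tr(p(x)) = exists y (x below+ y and p(y)), evaluated at elem node e:
    some descendant leaf of e is labelled p."""
    return p in tree[e]

# The structure of Figure fig:repr: element 1 satisfies p and q, element 2 only p.
A = {1: {'p', 'q'}, 2: {'p'}}
T = encode(A)
assert tr_atom_holds(T, 1, 'q') and not tr_atom_holds(T, 2, 'q')
```

This mirrors the base case of the structural induction: $p(e)$ holds in $\str{A}$ iff $e$ has a leaf child labelled $p$ in $\str{T}$.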
\begin{figure}[htb] \begin{center} \[ \begin{array}{c@{\hskip.5cm}c@{\hskip.5cm}c@{\hskip.5cm}c@{\hskip.5cm}c} &&&\rnode{n00}{} \\[2ex] &&\rnode{n10}{\mathit{elem}}&&\rnode{n11}{\mathit{elem}} \\[3ex] &\rnode{n1f}{p} & &\rnode{n2a}{q}& \rnode{p}{p}\\[2ex] \end{array} \psset{nodesep=2pt} \ncline{->}{n00}{n10} \ncline{->}{n00}{n11} \ncline{->}{n10}{n1f} \ncline{->}{n10}{n2a} \ncline{->}{n11}{p} \] \caption{Representation of a~structure over the signature $\{p,q\}$. There are two elements in the universe; the first belongs to the relations $p$ and $q$, the second to $p$.} \label{fig:repr} \end{center} \end{figure} For the other direction, assume that $\str{T}$ is a~model of $\transl{\phi}$. Construct a~structure $\str{A}$ such that the universe of $\str{A}$ is the set of nodes labeled $\mathit{elem}$ in~$\str{T}$ and for all elements $e$ and all predicates $p$, $p(e)$ is true in $\str{A}$ if and only if there is a~node $e'$ labeled $p$ that is below $e$ in~$\str{T}$. Again it is easy to prove by structural induction that $\phi$ is true in~$\str{A}$. \hfill $\Box$ \end{proof} \begin{theorem} The satisfiability problem for \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}, {\downarrow}]$}{} over singular finite trees is \textsc{ExpSpace}-hard. \end{theorem} \begin{proof} We give a~reduction from \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over arbitrary trees. The idea of the encoding is the same as in Theorem~\ref{th:singFOto}: a node $e$ in a~tree is modeled by a~singular node labeled $\mathit{elem}$ with immediate successors encoding predicates true in~$e$. The binary predicate ${\downarrow_{\scriptscriptstyle +}}$ is used to preserve the structure of the tree, while the additional ${\downarrow}$ predicate gives access to nodes modeling unary predicates.
In the following reduction, for a~given $\mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}$ formula $\phi$ over a~signature $\tau=\tau_0\cup \{{\downarrow_{\scriptscriptstyle +}}\}$ we construct a \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}, {\downarrow}]$}{} formula over the signature $\tau\cup\{{\downarrow},\mathit{elem}\}$ that is satisfiable over singular trees if and only if $\phi$ is satisfiable over trees. Let us start with a formula ensuring that the underlying structure is an encoding of a~tree. The formula $\mathit{tree}$ is defined as the conjunction of \[ \bigwedge_{p\in\tau_0\cup\{\mathit{elem}\}}\forall x \; p(x) \Rightarrow \forall y\; y{\downarrow_{\scriptscriptstyle +}} x\Rightarrow \mathit{elem}(y) \] with \[ \forall x\; \mathit{elem}(x) \Rightarrow \forall y\; x{\downarrow_{\scriptscriptstyle +}} y\Rightarrow\bigvee_{p\in\tau_0\cup\{\mathit{elem}\}} p(y). \] It ensures that (unless the tree is trivial, i.e., no node is labeled at all) each node is labeled with some predicate symbol, all internal nodes are labeled $\mathit{elem}$ and only leaves may be labeled with predicates from $\tau_0$. Without loss of generality we may assume that the formula $\phi$ is built from unary atoms, boolean connectives $\wedge,\neg$ and guarded existential quantification. The translation $\transl{\phi}$ of a~formula $\phi$ is defined inductively as follows. 
\begin{eqnarray*} \transl{p(x)}&=& \exists y\; x{\downarrow} y \wedge p(y)\\ \transl{\neg \phi} &= & \neg\transl{\phi}\\ \transl{\phi_1\wedge\phi_2}& = & \transl{\phi_1}\wedge\transl{\phi_2}\\ \transl{\exists x\; p(x) \wedge\psi(x)} &= & \exists x\; \mathit{elem}(x)\wedge \transl{p(x)} \wedge\transl{\psi(x)}\\ \transl{\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y \wedge\psi(x,y)} &= & \exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y \wedge \mathit{elem}(y) \wedge\transl{\psi(x,y)}\\ \transl{\exists y\; y{{\downarrow_{\scriptscriptstyle +}} }x \wedge\psi(x,y)} &= & \exists y\; y{{\downarrow_{\scriptscriptstyle +}} }x \wedge \mathit{elem}(y) \wedge\transl{\psi(x,y)} \end{eqnarray*} Note that $\transl{\phi}$ is a~guarded formula of length linear in $|\phi|$. Again a~simple inductive argument shows that $\phi$ is satisfiable if and only if $\mathit{tree}\wedge\transl{\phi}$ has a~singular tree model. \hfill $\Box$ \end{proof} \begin{theorem} The satisfiability problems for \mbox{$\mbox{\rm GF}^2$}$[{\downarrow_{\scriptscriptstyle +}}, {\rightarrow}]$ and \mbox{$\mbox{\rm GF}^2$}$[{\downarrow_{\scriptscriptstyle +}}, {\rightarrow^{\scriptscriptstyle +}}]$ over singular trees are \textsc{ExpSpace}-hard. \end{theorem} \newcommand{\mathit{zero}}{\mathit{zero}} \newcommand{\mathit{one}}{\mathit{one}} \newcommand{\mathit{none}}{\mathit{none}} \newcommand{\mathit{number}}{\mathit{number}} \newcommand{\mathit{cell}}{\mathit{cell}} \newcommand{\mathit{conf}}{\mathit{conf}} \begin{proof} We follow the construction from \cite{Kie02} and give a generic reduction from \textsc{AExpTime}{}. Consider an alternating Turing machine $M$ working in exponential time. Without loss of generality we may assume that $M$ works in time $2^n$ and that every non-final configuration of $M$ has exactly two successor configurations. Let $w$ be an input word of size $n$.
Following \cite{Kie02} we construct a~formula whose models encode accepting configuration trees of machine $M$ on input $w$. \newcommand{\horizEnc}{ \begin{array}{c@{\hskip.5cm}c@{\hskip.5cm}c} \circlenode{p2}{}&\rnode{p3}{\ldots}&\circlenode{p4}{} \end{array} \ncline{->}{p2}{p3} \ncline{->}{p3}{p4} } \begin{figure}[htb] \begin{center} \[ \begin{array}{c@{\hskip.2cm}c@{\hskip.2cm}c@{\hskip.2cm}c@{\hskip.2cm}c} &&&\rnode{root}{} \\[2ex] &&2^n\left\{ \begin{array}{c} \circlenode{conf0}{} \\[4ex] \rnode{conf00}{\vdots} \\[2ex] \circlenode{confn}{} \end{array}\right.\mbox{~~~~} \\[10ex] & 2^n\left\{ \begin{array}{c} \rnode{n1f}{n_1} \\[2ex] \circlenode{conf1}{}\\[4ex] \rnode{conf1n}{\vdots}\\[2ex] \circlenode{conf1nn}{} \end{array} \right. && \begin{array}{c} \rnode{n2a}{n_2} \\[2ex] \circlenode{conf2}{}\\[4ex] \rnode{conf2n}{\vdots}\\[2ex] \circlenode{conf2nn}{} \end{array} \end{array} \psset{nodesep=2pt} \ncline[linestyle=dotted]{root}{conf0} \ncline{->}{conf0}{conf00} \ncline{->}{conf00}{confn} \ncline{->}{confn}{n1f} \ncline{->}{confn}{n2a} \ncline{->}{n1f}{conf1} \ncline{->}{n2a}{conf2} \ncline{->}{conf1}{conf1n} \ncline{->}{conf1n}{conf1nn} \ncline{->}{conf2}{conf2n} \ncline{->}{conf2n}{conf2nn} \mbox{~~~~~~~} \begin{array}{c@{\hskip.02cm}c@{\hskip.2cm}c@{\hskip.2cm}c@{\hskip.2cm}c} &&\rnode{sroot}{} \\[2ex] &&2^n\left\{ \begin{array}{c@{\hskip.5cm}c} \circlenode{sconf0}{} &\rnode{h0}{{\underbrace{\horizEnc}}}\\ & {2n+2}\\ \rnode{sconf00}{\vdots} \\[2ex] \circlenode{sconfn}{} &\rnode{hn}{\horizEnc}\rnode{hn2}{} \end{array}\right.\mbox{~~~~} \\[10ex] & 2^n\left\{ \begin{array}{cc} \rnode{sn1f}{n_1}&\lefteqn{\ldots} \\[2ex] \circlenode{sconf1}{}&\lefteqn{\ldots}\\[4ex] \rnode{sconf1n}{\vdots}\\[2ex] \circlenode{sconf1nn}{}&\lefteqn{\ldots} \end{array} \right. 
&& \begin{array}{cc} \rnode{sn2a}{n_2}&\ldots \\[2ex] \circlenode{sconf2}{}&\ldots\\[4ex] \rnode{sconf2n}{\vdots}\\[2ex] \circlenode{sconf2nn}{}&\ldots \end{array} \end{array} \psset{nodesep=2pt} \ncline[linestyle=dotted]{sroot}{sconf0} \ncline{->}{sconf0}{sconf00} \ncline{->}{sconf00}{sconfn} \ncline{->}{sconfn}{sn1f} \ncline{->}{sn1f}{sconf1} \ncline{->}{sn2a}{sconf2} \ncline{->}{sconf1}{sconf1n} \ncline{->}{sconf1n}{sconf1nn} \ncline{->}{sconf2}{sconf2n} \ncline{->}{sconf2n}{sconf2nn} \ncline{->}{sconf0}{h0} \ncline{->}{sconfn}{hn} \ncline{->}{hn2}{sn2a} \] \caption{Left: frame of a~configuration tree in \cite{Kie02}; nodes $n_1$ and $n_2$ are siblings. Right: frame of a~configuration tree in our encoding; nodes $n_1$ and $n_2$ are not siblings.} \label{fig:frames} \end{center} \end{figure} In~\cite{Kie02} each configuration is represented by $2^n$ elements of a~tree, each of which represents a~single cell of the tape of $M$ (see left part of Figure~\ref{fig:frames}). Each such node is then labeled with unary predicate symbols from the set $\{C_1,\ldots,C_n,P_1,\ldots,P_n\}$ to encode the number of a~configuration (i.e., the depth of the configuration in the computation tree) and its position (i.e., the number of a~cell) in the configuration: $C_i(x)$ is true if the $i$-th bit of the configuration number is 1 and $P_i(x)$ is true if the $i$-th bit of the position number is 1. Additional predicate symbols are used to encode the tape symbol and the state of the machine (if it is necessary, i.e., if the head of the machine is scanning the cell under consideration). Here, to encode the numbers, we use $2n$ additional elements that are siblings of the node representing a~cell, see right part of Figure \ref{fig:frames}. Each of these elements stores information about a~single bit using one of two unary predicates $\mathit{zero}$ or $\mathit{one}$.
Then the atomic formulas $C_i(x)$ and $P_i(x)$ are simulated by formulas \[ \exists y\; x{\rightarrow^{\scriptscriptstyle +}} y \wedge \mathit{Path}_i(y)\wedge \mathit{one}(y) \mbox{~~and~respectively~~}\exists y\; x{\rightarrow^{\scriptscriptstyle +}} y \wedge \mathit{Path}_{n+i}(y)\wedge \mathit{one}(y) \] where the subformula $\mathit{Path}_i(y)$ is defined recursively as follows. For the logic \mbox{$\mbox{\rm GF}^2$}$[{\downarrow_{\scriptscriptstyle +}}, {\rightarrow}]$ we define \begin{eqnarray*} \mathit{Path}_0(y)&=& \neg \exists x\; x{{\rightarrow}} y\\ \mathit{Path}_{i+1}(y)&=& \exists x\; x{{\rightarrow}} y \wedge \mathit{Path}_i(x) \end{eqnarray*} and for the logic \mbox{$\mbox{\rm GF}^2$}$[{\downarrow_{\scriptscriptstyle +}}, {\rightarrow^{\scriptscriptstyle +}}]$ we define \begin{eqnarray*} \mathit{Path}_{\geq 0}(y)&=& \neg \exists x\; x{{\rightarrow^{\scriptscriptstyle +}}} y\\ \mathit{Path}_{\geq i+1}(y)&=& \exists x\; x{{\rightarrow^{\scriptscriptstyle +}}} y \wedge \mathit{Path}_{\geq i}(x)\\ \mathit{Path}_{i}(y)&=& \mathit{Path}_{\geq i}(y)\wedge \neg \mathit{Path}_{\geq i+1}(y). \end{eqnarray*} Note that in both cases the formula $\mathit{Path}_i$ is guarded and has polynomial length. The negated atomic formulas $\neg C_i(x)$ and $\neg P_i(x)$ are simulated using predicate $\mathit{zero}$ instead of $\mathit{one}$. Now, having the ability to count, we may encode tape symbols and states of the machine by simply using more siblings, and we may follow the lines of the construction in \cite{Kie02} to encode the computation of $M$. The only remaining subtle point is that in \cite{Kie02} the two successor configurations are siblings in a~computation tree while here they must not be siblings in order not to mess up the information about numbers --- this may be simply done by rooting the two configurations at different nodes as shown in Figure~\ref{fig:frames}.
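The intended semantics of $\mathit{Path}_i$ in the ${\rightarrow}$ case can be sketched operationally; the following Python fragment is our own illustration (the left-sibling map is a hypothetical representation of the ${\rightarrow}$ relation, not the paper's notation):

```python
def path_i(prev, y, i):
    """Path_0(y): y has no left sibling; Path_{i+1}(y): the left sibling
    of y satisfies Path_i.  Hence Path_i holds exactly at sibling position i."""
    if i == 0:
        return prev[y] is None
    return prev[y] is not None and path_i(prev, prev[y], i - 1)

# Three siblings a -> b -> c.
prev = {'a': None, 'b': 'a', 'c': 'b'}
assert [i for i in range(3) if path_i(prev, 'c', i)] == [2]
```

So each of the $2n$ number-encoding siblings is addressed by exactly one $\mathit{Path}_i$, which is what the simulation of $C_i(x)$ and $P_i(x)$ relies on.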
\hfill $\Box$ \end{proof} \medskip \begin{theorem} The satisfiability problem for \mbox{$\mbox{\rm GF}^2[{\downarrow_{\scriptscriptstyle +}}]$}{} over singular trees is \textsc{PSpace}-hard. \end{theorem} \begin{proof} We propose a~reduction from the satisfiability of quantified boolean formulas, QBF. Let $\psi$ be an instance of the QBF problem. Without loss of generality we may assume that $\psi$ is of the form \[ \exists v_k\ldots\exists v_2\forall v_1 \psi' \] where the number of all quantifiers ($k$) is even, all even-numbered variables are existentially quantified, all odd-numbered variables are universally quantified and $\psi'$ is a~propositional formula over the variables $v_1,\ldots,v_k$. We now translate the formula $\psi$ to a~formula over the signature \[\tau=\set{\mathit{root}, \mathit{leaf},\mathit{true}, \mathit{false}, {\downarrow_{\scriptscriptstyle +}} } \] such that $\psi$ is true if and only if its translation is satisfiable over singular trees. First, for $i\in \set{0,\dots, k}$ we define auxiliary formulas $\mathit{depth}_i$ and $\mathit{height}_i$. Let $\mathit{depth}_0(x)=\mathit{root}(x)$ and for $i\geq 1$ let $\mathit{depth}_i(x)=\exists y \;y{{\downarrow_{\scriptscriptstyle +}} }x\wedge \mathit{depth}_{i-1}(y)$. Intuitively, the formula $\mathit{depth}_i(x)$ expresses that the node $x$ occurs at distance at least $i$ from the root. Let $\mathit{height}_0(x)=\mathit{leaf}(x)$, $\mathit{height}_1(x)=\mathit{depth}_{k}(x)$ and let $\mathit{height}_{i}(x)=\mathit{depth}_{k+1-i}(x)\wedge \neg \mathit{depth}_{k+2-i}(x)$ for $i>1$. For $i>0$ the formula $\mathit{height}_i(x)$ expresses that $x$ is a~node at depth exactly $k+1-i$; in the construction below, for $i\geq 0$, the formula $\mathit{height}_i(x)$ will mean that the subtree rooted at $x$ has height $i$. Note that $\mathit{height}_i(x)$ is a~guarded formula of length linear in $i$.
In the following construction a~model of the translation of $\psi$ is a~tree that describes a~set of valuations justifying that $\psi$ is true. It is a tree of depth $k+1$ where every path describes a~valuation of the variables $v_1,\ldots, v_k$. Every node at height $i$, for $i>0$, is labeled either $\mathit{true}$ or $\mathit{false}$, which corresponds to a~value of the variable $v_i$ under a~given valuation. Every node at even height $i$, for $i\geq 2$, has two successors, corresponding to the two values of the universally quantified variable $v_{i-1}$; every node at odd height $i$, for $i\geq 3$, has one successor, corresponding to a~chosen value of the existentially quantified variable $v_{i-1}$. If $k>0$ then let $\mathit{tree}_k$ be the conjunction of \begin{eqnarray} \exists x\;\mathit{root}(x),\\ \forall x\; \mathit{root}(x)&\Rightarrow &(\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge\mathit{height}_k(y)\wedge (\mathit{true}(y)\vee \mathit{false}(y))), \\ \forall x\; \mathit{true}(x)&\Rightarrow\big( &\mathit{height}_i(x)\Rightarrow \nonumber\\ &&\big((\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge\mathit{height}_{i-1}(y)\wedge \mathit{true}(y))\\&\wedge&\;\,(\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge\mathit{height}_{i-1}(y)\wedge \mathit{false}(y))\big)\; \big)\nonumber\\ &&\mbox{for all even numbers $2\leq i\leq k$,}\nonumber\\ \forall x\; \mathit{false}(x)&\Rightarrow \big(&\mathit{height}_i(x)\Rightarrow \nonumber\\ &&\big((\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge\mathit{height}_{i-1}(y)\wedge \mathit{true}(y))\\ &\wedge& \;\,(\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge\mathit{height}_{i-1}(y)\wedge \mathit{false}(y))\big)\;\big)\nonumber\\ &&\mbox{for all even numbers $2\leq i\leq k$,}\nonumber\\ \forall x\; \mathit{true}(x)&\Rightarrow \big( &\mathit{height}_i(x)\Rightarrow \nonumber\\ &&\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge\mathit{height}_{i-1}(y) \wedge \big(\mathit{true}(y)\vee \mathit{false}(y)\big)\big) \\
&&\mbox{for all odd numbers $3\leq i<k$,}\nonumber \\ \forall x\; \mathit{false}(x)&\Rightarrow \big( &\mathit{height}_i(x)\Rightarrow \nonumber\\ &&\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge\mathit{height}_{i-1}(y) \wedge \big(\mathit{true}(y)\vee \mathit{false}(y)\big)\big) \\ &&\mbox{for all odd numbers $3\leq i<k$,}\nonumber\\ \forall x\; \mathit{true}(x)&\Rightarrow \big( &\mathit{height}_1(x)\Rightarrow \nonumber\\ &&(\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge \mathit{leaf}(y))\big),\\ \forall x\; \mathit{false}(x)&\Rightarrow \big( &\mathit{height}_1(x)\Rightarrow \nonumber\\ &&(\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge \mathit{leaf}(y))\big). \end{eqnarray} In the case of $k=0$ the formula $\mathit{tree}_0$ boils down to $\exists x\;\mathit{root}(x)\wedge\forall x\; \mathit{root}(x)\Rightarrow (\exists y\; x{{\downarrow_{\scriptscriptstyle +}} }y\wedge \mathit{leaf}(y))$. Note that $\mathit{tree}_k$ is a~guarded formula of length polynomial in~$k$. Now we inductively define the translation $\transl{\psi'}$ of the quantifier-free formula~$\psi'$. \begin{eqnarray*} \transl{\mathit{true}}&=& \mathit{true}\\ \transl{\mathit{false}}&=& \mathit{false}\\ \transl{v_i}&=& \exists y\; y{{\downarrow_{\scriptscriptstyle +}} }x \wedge \mathit{height}_i(y)\wedge \mathit{true}(y)\\ \transl{\neg \phi} &= & \neg\transl{\phi}\\ \transl{\phi_1\wedge\phi_2}& = & \transl{\phi_1}\wedge\transl{\phi_2}\\ \transl{\phi_1\vee\phi_2} &= & \transl{\phi_1}\vee\transl{\phi_2} \end{eqnarray*} Note that $\transl{\psi'}$ is a~guarded formula of length polynomial in $|\psi'|+k$. It is not difficult to prove by induction on $k$ (and by nested structural induction on propositional formulas with free variables $v_1,\ldots,v_k$) that $\psi$ is true if and only if $\mathit{tree}_k\wedge \forall x \; \mathit{leaf}(x)\Rightarrow \transl{\psi'}$ has a~singular tree model.
Each node labeled $\mathit{leaf}$ in such a~model uniquely determines a~path to a~node labeled $\mathit{root}$ and such a~path corresponds to a~valuation of the variables $v_1,\ldots,v_k$ that makes the formula $\psi'$ true. \hfill $\Box$ \end{proof}
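The correspondence between tree models and sets of valuations can be mimicked by a direct recursive evaluator. The following Python sketch is our own illustration (the interface is hypothetical): it branches over both truth values at a universal level and settles for one successful value at an existential level, just as the tree has two or one successors.

```python
def eval_qbf(prefix, matrix, valuation=()):
    """prefix: outermost-first list of 'E'/'A'; matrix: function from the
    tuple of quantified values (outermost first) to bool."""
    if not prefix:
        return matrix(valuation)
    branches = [eval_qbf(prefix[1:], matrix, valuation + (b,))
                for b in (False, True)]
    return any(branches) if prefix[0] == 'E' else all(branches)

# exists v2 forall v1 (v2 or not v1): true, witnessed by v2 = True.
assert eval_qbf(['E', 'A'], lambda v: v[0] or not v[1])
# forall v1 (v1): false.
assert not eval_qbf(['A'], lambda v: v[0])
```

In the reduction, the recursion tree of this evaluator (restricted to the chosen existential branches) is exactly the singular tree model forced by $\mathit{tree}_k$.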
1304.7086
\section{Introduction} In \cite{HL1999}, Connor Lazarov and the second author gave general conditions which guarantee that the Bismut superconnection formalism extends in full generality to families of non-compact manifolds. This allowed us to prove a families index theorem for generalized Dirac operators defined along such families. Special cases give the Atiyah-Singer families index theorem, the Atiyah $L^2$ index theorem \cite{Atiyah:1976}, and the Connes foliation cohomology index theorem \cite{Connes:1986}. In addition, we obtained a new theorem for fiber bundles which is a combination of the first two above, namely an $L^2$ families index theorem. Note that Connes and Skandalis have proven a families index theorem for families of elliptic operators defined along the leaves of a foliation. See \cite{Connes-Skandalis:1984} and \cite{Connes:1987}. One of the conditions we required was that the Novikov-Shubin invariants, \cite{NS86a, NS86b, HL1994}, of the Dirac operators be greater than three times the codimension of the foliation. In \cite{BH2004, BH2008}, the first two authors extended these results to generalized Dirac operators $D$ along the leaves of a Riemannian foliation. We assumed that the projection onto the kernel of $D$ is transversely smooth, and that the spectral projections of $D^{2}$ for the intervals $(0,\epsilon)$ are transversely smooth, for $\epsilon$ sufficiently small. We defined the Connes-Chern character of $D$ in the Haefliger cohomology of the foliation. We then showed that the pairing of this Connes-Chern character with a given Haefliger $2k$-current is the same as that of the Haefliger Chern character of the index bundle of $D$, whenever the Novikov-Shubin invariants of $D$ are greater than $k$. In particular, if the Novikov-Shubin invariants are greater than half the codimension of $F$, the two pairings always agree. We conjectured that this theorem remains true provided only that the Novikov-Shubin invariants are positive.
In this paper, we show that this is false, and that the result in \cite{BH2008} is the best possible under the given hypotheses. It is an interesting question what additional conditions need to be imposed for the conjecture to hold. \section{A bit of background} We assume that the reader is familiar with the papers \cite{BH2004, BH2008}, in particular with the concepts of those papers, including: Haefliger cohomology; generalized Dirac operators; the various Connes-Chern characters used there; the Novikov-Shubin invariants; and transverse smoothness of leafwise operators. Suppose that $\widehat{D}$ is a generalized Dirac operator defined along the leaves of a Riemannian foliation $F$ of codimension $n$ on a smooth manifold $M$. Denote by $D$ the induced leafwise operator for the foliation $F_s$ (whose leaves are the inverse images of points of $M$ under the source map from the groupoid to $M$) on the holonomy (or homotopy) groupoid of $F$. Denote by $P_0$ the leafwise projection onto the kernel of $D$, and by $P_{\epsilon}$ the leafwise spectral projection of $D^{2}$ for the interval $(0,\epsilon)$. \begin{theorem}[Theorem 4.2 of \cite{BH2008}] \label{main} Assume that $P_0$ and, for $\epsilon$ sufficiently small, $P_{\epsilon}$ are transversely smooth. For a fixed integer $k$ with $0\leq k\leq n/2$, assume that the Novikov-Shubin invariants of $D$ are greater than $k$. Then the $k^{th}$ component of the Chern character of the K-theory index of $D$ equals the $k^{th}$ component of the Chern character of the index bundle of $D$, that is, in the Haefliger cohomology $\operatorname{H}_c^{2k}(M/F)$ of $F$, $$ \operatorname{ch}^k_{a}(\operatorname{Ind}_{a}(D)) = \operatorname{ch}^k_{a}([P_{0}]). $$ \end{theorem} \begin{corollary}[Theorem 4.1 of \cite{BH2008}]\label{limit} If the Novikov-Shubin invariants of $D$ are greater than $n/2$, then $$ \operatorname{ch}_{a}(\operatorname{Ind}_{a}(D)) = \operatorname{ch}_{a}([P_{0}]).
$$ \end{corollary} \section{The examples} We now show that Corollary \ref{limit} is the best possible. Consider the product of two $n$-tori $\mathbb T^n \times \mathbb T^n = \mathbb R^n \times \mathbb R^n/ \mathbb Z^n \times \mathbb Z^n$, and a foliation $F$ on it which is the product of constant irrational slope foliations on the individual $\mathbb T \times \mathbb T$. We assume that the slopes are not rationally related. We give $\mathbb T^n \times \mathbb T^n$ the metric it inherits from the usual metric on $\mathbb R^n \times \mathbb R^n$. The leaves of $F$ are then just copies of $\mathbb R^n$ with the usual metric. $F$ is a Riemannian foliation, and its holonomy and homotopy groupoids are just the product $\mathbb T^n \times \mathbb T^n \times \mathbb R^n$, with the foliation $F_s$ given by the $\mathbb R^n$ factors. The Haefliger cohomology of $F$ is highly non-trivial, but still computable, and $\mathbb T^n \times \mathbb T^n$ has an interesting leafwise flat bundle over it with which to pair the leafwise Dirac operator. Denote by $\widehat{E}_n \to \mathbb T^n \times \mathbb T^n$ the Hermitian bundle which is given as follows. Let $(\xi,\widehat{\xi}) \in \mathbb Z^n \times \mathbb Z^n = \pi_1(\mathbb T^n \times \mathbb T^n)$, and $(x, w, z) \in \mathbb R^n \times \mathbb R^n \times \mathbb C$, and define $$ (\xi,\widehat{\xi}) \cdot (x, w, z) = ( x + \xi, w + \widehat{\xi}, e^{2\pi i \langle \xi, w \rangle} z), $$ where $$ \langle \xi, w \rangle= \xi_1 w_1 + \cdots + \xi_n w_n. $$ Set $$ \widehat{E}_n = (\mathbb R^n \times \mathbb R^n \times \mathbb C) / \mathbb Z^n \times \mathbb Z^n. $$ Now $\widehat{E}_n = E_1 \otimes \cdots \otimes E_n$, where $E_j$ is the pull back of $\widehat{E}_1$ by the projection $\pi_j:\mathbb T^n \times \mathbb T^n \to \mathbb T \times \mathbb T$ onto the $j$-th coordinates. 
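As a quick consistency check (this verification is ours, and uses only the definition above), the formula does define an action of $\mathbb Z^n \times \mathbb Z^n$: $$ (\xi',\widehat{\xi}') \cdot \big( (\xi,\widehat{\xi}) \cdot (x, w, z) \big) \,\, = \,\, ( x + \xi + \xi', w + \widehat{\xi} + \widehat{\xi}', e^{2\pi i \langle \xi', w + \widehat{\xi} \rangle} \, e^{2\pi i \langle \xi, w \rangle} z), $$ and since $\langle \xi', \widehat{\xi} \rangle \in \mathbb Z$, the phase equals $e^{2\pi i \langle \xi + \xi', w \rangle}$, so the right hand side is $(\xi + \xi', \widehat{\xi} + \widehat{\xi}') \cdot (x, w, z)$. Hence the quotient $\widehat{E}_n$ is a well defined Hermitian line bundle.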
In the proof of Theorem 11.3 of \cite{BH2011}, we showed that $\operatorname{ch}(E_j) = 1 + \beta_j$, where $\beta_j$ is the pull back under $\pi_j$ of the natural generator of $H^2(\mathbb T^2;\mathbb Z)$. (Note that the definition of $\widehat{E}_n $ in \cite{BH2011} has a typo, and it should be defined as above.) Then the Chern character $$ \operatorname{ch}(\widehat{E}_n) \,\, = \,\, \prod_{j=1}^n \operatorname{ch}(E_j) \,\, = \,\, \prod_{j=1}^n (1 + \beta_j), $$ in particular, it is as non-trivial as possible. Next, note that $\widehat{E}_n$ restricted to the leaves of $F$ is a flat bundle. This is because the leaves are products of copies of $\mathbb R$, where the $j$-th copy is a leaf of the foliation of the $j$-th $\mathbb T^1 \times \mathbb T^1$, and because the curvature of a pull back is the pull back of the curvature. Indeed, the connection we will use on $\widehat{E}_n$ is the tensor product of the pull backs by the $\pi_j$ of a connection on $\widehat{E}_1$ over $\mathbb T^1 \times \mathbb T^1$. So the curvature of our connection restricted to a leaf is a sum of pull backs of curvature two-forms from one dimensional manifolds, and so is identically zero. On the open dense subset of $\mathbb T^n \times \mathbb T^n$ given by $(0,1)^n \times \mathbb T^n$, set $$ \nabla(f) \,\, = \,\, df - (2\pi i \sum_j x_j dw_j)f, $$ where $f$ is a section of $\widehat{E}_n$ defined on $(0,1)^n \times \mathbb T^n$. We leave it to the reader to check that in fact this defines a connection $\nabla$ on $\widehat{E}_n$. $\nabla$ is the tensor product of the pull backs of this connection on $\widehat{E}_1$ by the $\pi_j$, as promised. The leaves of $F$ are given as follows. Choose $(a,b) \in \mathbb R^n \times \mathbb R^n$, so that $a_j/b_j \in \mathbb R - \mathbb Q$ and the $a_j/b_j$ are not rationally related. For each $c \in \mathbb R^n$, set $$ L_c \,\, = \,\, \{(a_1 t_1, ... , a_n t_n, b_1 t_1 + c_1, ... , b_n t_n + c_n) \,\, | \, t \in \mathbb R^n \}. 
$$ The leaves of $F$ are the images of the $L_c$ under the projection $\mathbb R^n \times \mathbb R^n \to \mathbb T^n \times \mathbb T^n$, which are denoted $\widetilde{L}_c$. On $(0,1)^n \times \mathbb T^n$, we may use $x \in (0,1)^n$ as the coordinates on $\widetilde{L}_c$, and when we do, the restriction of $\nabla$ to $\widetilde{L}_c$ is $$ \nabla^c \,\, = \,\, d - 2\pi i \sum_j \frac{b_j}{a_j} x_j dx_j, $$ which is independent of the transverse coordinate $c$. Note that since the metric on the leaves of $F$ is the usual metric on $\mathbb R^n$, the $*$ operator used in the definition of the signature operator is just the usual $*$ operator and is also independent of the transverse coordinate $c$. So all the elements used in the construction of the twisted leafwise signature operator $D_n$ (which is a generalized Dirac operator) are independent of the transverse coordinate $c$. Thus, for each $j$, the commutator $[\partial/\partial w_j, D_n] = 0$. Finally note that the pull back of $\nabla$ to the groupoid $\mathbb T^n \times \mathbb T^n \times \mathbb R^n$ has uniformly bounded coefficients, since we use the usual groupoid coordinates on $\mathbb T^n \times \mathbb T^n \times \mathbb R^n$, which come from $\mathbb T^n \times \mathbb T^n$. The operator $D_n$ is a leafwise operator on the holonomy (homotopy) groupoid for the foliation $F_s$, whose leaves are diffeomorphic to the $\widetilde{L}_c$, under the target map $r$, with the same structure. Any differential form on $\widetilde{L}_c$ twisted by the bundle $\widehat{E}_n$ can be written as $\omega(t) \otimes s(t)$, where $||s(t)|| = 1$, and $\nabla^c s(t) = 0$. An easy computation shows that $D_n$ applied to this is just $D\omega(t) \otimes s(t)$, where $D$ is the usual signature operator on $\mathbb R^n$. 
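As a consistency check on the formula for $\nabla^c$ (a one line computation included for the reader's convenience): on $\widetilde{L}_c$ we have $x_j = a_j t_j$ and $w_j = b_j t_j + c_j$, so $w_j = (b_j/a_j) x_j + c_j$ and hence $dw_j = (b_j/a_j) \, dx_j$. Substituting into $\nabla = d - 2\pi i \sum_j x_j \, dw_j$ gives $$ \nabla^c \,\, = \,\, d - 2\pi i \sum_j \frac{b_j}{a_j} x_j \, dx_j, $$ with no dependence on the transverse coordinate $c$, as claimed.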
As there are no non-zero $L^2$ harmonic forms on $\mathbb R^n$, it follows immediately that $P_0 = 0$ and so is transversely smooth, and that the Chern character of the index bundle $\operatorname{ch}_a([P_0])$ of $D_n$ is zero. Finally, we show that the spectral projections $P_{(0,\epsilon)}$ of $D_n$ for the interval $(0,\epsilon)$ are transversely smooth. To this end, we choose the normal bundle $\nu$ of $F$ to be the sub-bundle of $T(\mathbb T^n \times \mathbb T^n)$ which is just the tangent bundle to the second $\mathbb T^n$, that is, the bundle spanned by the $\partial/\partial w_j$. Now the groupoid is diffeomorphic to $\mathbb T^n \times \mathbb T^n \times \mathbb R^n$, and the structure on the leaf $\wL_{(x,w)} = (x,w) \times \mathbb R^n$ is given by mapping $(x,w,t)$ to $(at, w + bt - bx/a)$ in the leaf $L_c$ containing $(x,w)$. The transverse derivatives of $P_{(0,\epsilon)}$ are obtained by taking vectors $X \in T(\mathbb T^n \times \mathbb T^n)_{(x,w)}$, translating them along the leaf $\wL_{(x,w)}$ using the product structure, and then computing the covariant derivatives $\widetilde{\nabla}_X (P_{(0,\epsilon)})$, where $\widetilde{\nabla}$ is the twisted (using $\widehat{E}_n$) Levi-Civita connection on $\mathbb T^n \times \mathbb T^n$. See \cite{BH2008}, Section 3. The map from $\wL_{(x,w)}$ to $\wL_{(x + as ,w + bs)}$, which is parallel translation along the geodesics determined by the vector $\sum_j a_j \partial /\partial x_j + b_j \partial /\partial w_j$ in $TF_{(x,w)}$ translated along the leaf $\wL_{(x,w)}$, is actually the identity in our coordinates, so $\widetilde{\nabla}_X (P_{(0,\epsilon)}) = 0$ for such $X$. To see that the $P_{(0,\epsilon)}$ are transversely smooth, note that $[\partial/\partial w_j,D_n] = 0$ implies that $[\partial/\partial w_j, P_{(0,\epsilon)}] = 0$, since $\partial/\partial w_j$ then commutes with $D_n^2$, and so, by the spectral theorem, with any of its spectral projections. 
Using the fact that $\widetilde{\nabla}$ has uniformly bounded coefficients, we have that $[\widetilde{\nabla}_{\partial/\partial w_j}, P_{(0,\epsilon)}]$ is the commutator of the bounded leafwise smoothing operator $P_{(0,\epsilon)}$ with an order zero differential operator with uniformly bounded coefficients, so $[\widetilde{\nabla}_{\partial/\partial w_j}, P_{(0,\epsilon)}]$ is a bounded leafwise smoothing operator. The higher derivatives may be handled the same way, so we have that the spectral projections $P_{(0,\epsilon)}$ are transversely smooth. Thus we have that this example satisfies all the conditions of Corollary \ref{limit}, except one. Namely, the Novikov-Shubin invariants of the signature operator on $\mathbb R^n$ are exactly $n/2$, i.e.\ the Novikov-Shubin invariants of $D_n$ are not greater than half the codimension of the foliation. Recall, \cite{HL1999}, Corollary 4, that the Haefliger Connes-Chern character of any leafwise Dirac operator with coefficients in a leafwise flat bundle $E$ is given (up to a constant) by the integral over the fiber of the foliation of the characteristic class $\widehat{A}(TF)\operatorname{ch}(E)$, where $\widehat{A}(TF)$ is the $\widehat{A}$ genus of the tangent bundle of the foliation $F$. As $TF$ is a trivial bundle, $\widehat{A}(TF) = 1$, and the Haefliger Connes-Chern character of $D_n$ is given by $$ \operatorname{ch}_a(D_n) \,\, = \,\, \int_F \operatorname{ch}(\widehat{E}_n). $$ By Hector et al.\ \cite{KacimiHector, KacimiHectorSergiescu}, the Haefliger cohomology of $F$ is the same as the basic cohomology, that is, the cohomology of the transverse forms which are invariant under the holonomy. It is not hard to see that this is isomorphic to $H^*(\mathbb T^n;\mathbb R)$, and we can easily identify the class $\displaystyle \int_F \operatorname{ch}(\widehat{E}_n)$ under this isomorphism. 
In particular it is just $\prod_{j=1}^n \alpha_j,$ where $\alpha_j$ is the pull back of the natural generator of $H^1(\mathbb T^1;\mathbb Z)$ under the projection $\mathbb T^n \to \mathbb T^1$ onto the $j$-th coordinate. Thus $\operatorname{ch}_a(D_n)$ is non-zero, so $\operatorname{ch}_{a}(\operatorname{Ind}_{a}(D)) \neq \operatorname{ch}_{a}([P_{0}])$ for this example. Note further that $\operatorname{ch}_a(D_n)$ is non-zero only in the top dimension, so for $k < n/2$, $\operatorname{ch}_a^k(D_n) = \operatorname{ch}_a^k([P_0])$, as they must be by Theorem \ref{main}.
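For completeness, here is a sketch (ours, with normalizations only up to non-zero constants) of why $\displaystyle \int_F \operatorname{ch}(\widehat{E}_n)$ is represented by $\prod_{j=1}^n \alpha_j$. Each $\beta_j$ is represented by a multiple of $dx_j \wedge dw_j$, so $$ \int_F \operatorname{ch}(\widehat{E}_n) \,\, = \,\, \int_F \prod_{j=1}^n (1 + \beta_j) \,\, = \,\, \int_F \beta_1 \wedge \cdots \wedge \beta_n, $$ since a product of fewer than $n$ of the $\beta_j$ contains no leafwise directions from the remaining factors, and so integrates to zero over the $n$ dimensional leaves. Contracting the top term against the leafwise directions $a_j \partial/\partial x_j + b_j \partial/\partial w_j$ gives, up to a non-zero constant, the invariant transverse form $dw_1 \wedge \cdots \wedge dw_n$, which represents $\prod_{j=1}^n \alpha_j$.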
\section{Introduction} \label{sec:intro} Galaxies can be found in environments of very different nature, ranging from very low (isolated galaxies) to high density (massive cluster cores), with filaments and groups lying in between these extreme conditions. Both the properties of galaxies and the relative fractions of different galaxy types depend on the environment \citep[e.g.,][]{Oemler1974, Dressler1980, Goto2003, Blanton2005}. In low density environments, galaxies tend to be blue, star-forming and late-type, while dense environments are dominated by red, early-type galaxies. The dynamics of the system, e.g. the relative speed with which galaxies move, or the characteristics of the intergalactic medium, e.g. the presence of hot gas, may influence the evolution of galaxies and thus modify their properties. Within this scenario, different physical processes have been proposed to affect the evolutionary history of galaxies, in most cases leading to a suppression of star formation. Among those that turn galaxies passive, some are expected to affect both the stellar and the gas components, and they can substantially change the structure of galaxies and even cause significant loss of mass. Examples include galaxy-galaxy interactions, mergers and galaxy harassment \citep[e.g.,][]{toomre72, Moore1998}. Some others are instead expected to leave the stellar component mostly unaltered and are due to the presence of gas in the intergalactic medium that can substantially affect the gas in galaxies through mechanisms such as ram pressure \citep[e.g.,][]{Gunn1972, abadi99} or starvation \citep[e.g.,][]{Larson1980, Balogh2000, Kawata2008}. There are also processes that feed galaxies with gas, replenishing their fuel for star formation and therefore causing star formation enhancement. Gas accretion from external regions has been observed in nearby galaxies \citep{Sancisi2008} and also predicted by cosmological models \citep{Semelin2005}. 
Note though that some of the aforementioned processes that eventually lead to star formation quenching can also induce short bursts of star formation: ram pressure stripping via the compression of the available gas \citep{BekkiCouch2003, Gavazzi2003, Vulcani2018_L, Tomicic2018}; gas rich mergers through accumulation of cold gas \citep[e.g.,][]{Fujita2003, Brinchmann2004, Ostriker2011}. The relative influence of these processes depends on several physical parameters that vary from one environment to another. For example, galaxy mergers are rare in clusters because of the large velocity dispersions of the systems and are instead favoured in less dense environments \citep[e.g.][]{mihos04}; in contrast, ram pressure stripping is expected to be more effective in the cluster cores because of the large velocities and higher densities of the intracluster medium, while its role in less dense environments has not yet been characterized in detail. While much of the literature on the impact of the hosting environments on galaxy star formation has been confined to studies of galaxy clusters, a systematic characterization and census of all the different mechanisms in the less dense environments is still lacking. A step towards achieving an understanding of the role of the environment on galaxy evolution is the study of the spatially resolved properties of galaxies: each of the aforementioned processes is indeed expected to leave a different imprint on the gas and star distributions. Data delivering maps of galaxy properties are hence extremely useful, and the advent of integral field spectroscopy (IFS) has considerably increased our understanding of the processes that govern galaxy transformations. Few studies have so far specifically focused on the analysis of spatially resolved properties of galaxies in the field, and none have compared the signatures of the different mechanisms in homogeneous samples. 
\cite{Privon2017} investigated the properties of an interacting dwarf pair found in isolation in the local universe with Very Large Telescope/Multi Unit Spectroscopic Explorer (VLT/MUSE) optical IFU observations, finding that starbursts in low-mass galaxy mergers may be triggered by large-scale ISM compression, and thus may be more distributed than in high mass systems. \cite{Fossati2019} used VLT/MUSE observations to map the extended ionized gas in between the members of a group infalling into a cluster at z = 0.021. Their analysis highlights the coexistence of different mechanisms: the group is shaped by pre-processing produced by gravitational interactions in the local group environment combined with ram pressure stripping by the global cluster halo. \cite{DuartePuertas2019} observed Stephan's Quintet, the prototypical compact group of galaxies in the local Universe, with the imaging Fourier transform spectrometer SITELLE, at the Canada-France-Hawaii Telescope, to perform a deep search for intergalactic star formation, shedding light on the complicated history of this system. \cite{Schaefer2019TheGroups} used a statistical approach to explore the radial distribution of star formation in galaxies in the Sydney Australian Astronomical Observatory Multi-object Integral Field Spectrograph \citep[SAMI,][]{Croom2012} Galaxy Survey as a function of their local group environment, finding that the dynamical mass of the parent halo of a galaxy is a good predictor of environmental quenching. In this context, the Gas Stripping Phenomena in galaxies (GASP\footnote{\url{http://web.oapd.inaf.it/gasp/index.html}}) survey is helping to shed light on gas removal processes as a function of environment and to understand in what environmental conditions such processes are efficient. The survey is based on VLT/MUSE observations and therefore delivers maps of many galaxy properties, allowing us to characterize both the ionized gas and the stellar components. 
GASP explores a wide range of environments, from galaxy clusters to groups and poor groups, filaments and galaxies in isolation. Its targets are located in dark matter halos with masses spanning four orders of magnitude ($10^{11}-10^{15} M_\odot$). Most of the GASP papers have focused on cluster galaxies \citep{Poggianti2017, Poggianti2019b, Bellhouse2017, Bellhouse2019, Fritz2017, Gullieuszik2017, Moretti2017, George2018, George2019} characterizing mainly ram pressure stripping events and connecting them to cluster properties \citep{Jaffe2018, Gullieuszik2020}. A few papers have also investigated peculiar galaxies in the field \citep{Vulcani2017c, Vulcani2018_b} and characterized galaxies in filaments \citep{Vulcani2019_fil} and galaxies belonging to the same group \citep{Vulcani2018_g}, highlighting the variety of mechanisms that take place in these environments. The goal of this paper is to present a comprehensive characterization of all the field galaxies with peculiar properties in GASP, trying to determine the main mechanisms altering their properties and providing a panorama of the different processes taking place in low-density environments. As the selection of the GASP targets was biased towards galaxies with signs of possible stripping (i.e. unilateral debris, see Sec.\ref{sec:data}), the sample is not suitable for deriving general statistics on the most probable processes occurring in the field. We will, however, assess the `success' of an optical selection in identifying gas removal processes for field galaxies and probe the power of IFU data in pinning down the acting mechanism. We will also study whether galaxies that are affected by environmental processes follow the fundamental scaling relations of undisturbed galaxies. { This study is based on a data set of excellent quality, thanks to the large spatial extent of the Field-Of-View (FoV) and the broad wavelength coverage. 
The latter in particular allows us to study the properties of both the ionised gas and the stellar components out to large galactocentric distances in a homogeneous way. These are the main advantages with respect to existing surveys obtained with other optical IFU instruments, such as CALIFA \citep{Sanchez2012}, SAMI \citep{Bryant2015}, and MaNGA \citep{Bundy2015},\footnote{{ We refer to Tab. 3 in \cite{Bundy2015} for a detailed comparison of the different IFU Surveys.}} or optical Fabry–Perot interferometry such as GHASP \citep{Garrido2002}. Unfortunately, these differences between GASP and other IFU surveys hamper a straightforward and homogeneous comparison with other samples, preventing us from increasing the number of galaxies in our study. } The paper is structured as follows: Sec. \ref{sec:analysis} presents the data analysis and Sec. \ref{sec:data} the data sample; Sec. \ref{sec:results1} describes in detail { the typical signatures of the different mechanisms on galaxies, and Sec. \ref{sec:results1_1}} presents some clear examples of the different mechanisms identified in the sample. Sec. \ref{sec:results2} summarizes the main results and investigates some scaling relations. Finally, Sec. \ref{sec:summary} gives a summary of the main results. In Appendix \ref{sec:all} a panorama of all galaxies in the sample is given. Throughout the paper, we adopt a \cite{Chabrier2003} initial mass function (IMF) in the mass range 0.1-100 M$_{\odot}$. The cosmological constants assumed are $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$ and H$_0=70$ km s$^{-1}$ Mpc$^{-1}$. \section{Data analysis} \label{sec:analysis} The survey strategy, observations, data reduction and analysis procedure are presented in detail in \citet{Poggianti2017}. Observations were carried out between April 2015 and April 2017 using the MUSE spectrograph located at the VLT in Paranal. Each galaxy was observed under clear conditions (seeing $<0\farcs$9). 
Data were reduced with the most recent available version of the MUSE pipeline\footnote{\url{http://www.eso.org/sci/software/pipelines/muse}} and datacubes were average-filtered in the spatial direction with a 5$\times$5 pixel kernel, corresponding to our worst seeing conditions of 1$^{\prime\prime}$ = 0.7-1.3 kpc at the redshifts of the targets. Reduced datacubes were corrected for extinction due to our Galaxy, using the extinction value estimated at the galaxy position \citep{Schlafly2011} and assuming the extinction law from \cite{Cardelli1989}. To obtain an emission-only datacube, we subtracted the stellar-only component of each spectrum derived with our spectrophotometric code {\sc sinopsis}\xspace \citep{Fritz2017}. In addition, {\sc sinopsis}\xspace provided us with spatially resolved estimates of the following stellar population properties: stellar masses; average star formation rate and total mass formed in four age bins (= star formation histories, SFH): young (ongoing star formation) = $t < 2 \times 10^7$ yr, recent = $2 \times 10^7 < t < 5.7 \times 10^8$ yr, intermediate-age = $5.7 \times 10^8 < t < 5.7 \times 10^9$ yr, and old = $t > 5.7 \times 10^9$ yr; luminosity-weighted stellar ages; reconstructed absolute magnitudes in a wide range of filters. Emission line fluxes and errors, along with the underlying continuum, gaseous velocities (with respect to a given redshift), and velocity dispersions were derived using the IDL software {\sc kubeviz}\xspace \citep{Fossati2016}. We consider as reliable only spaxels with S/N(H$\alpha$\xspace)$>$4. H$\alpha$\xspace luminosities\footnote{{ Throughout the paper, we assume that H$\alpha$\xspace emission originates in HII regions and not in diffuse ionized gas. 
A detailed analysis of the diffuse ionized gas can be found in \citet{Tomicic2021}.}} corrected both for stellar absorption and for dust extinction were used to compute SFRs, adopting the \cite{Kennicutt1998a} relation: $\rm SFR (M_{\odot} \, yr^{-1}) = 4.6 \times 10^{-42} L_{\rm H\alpha} (erg \, s^{-1})$. The extinction was estimated from the Balmer decrement assuming an intrinsic value $\rm H\alpha/H\beta = 2.86$ and the \cite{Cardelli1989} extinction law. Stellar kinematics were extracted from the spectra using the Penalized Pixel-Fitting (pPXF) code \citep{Cappellari2004}, which fits the observed spectra with the stellar population templates by \cite{Vazdekis2010}. We performed the fit of spatially binned spectra based on signal-to-noise ratio (S/N = 10 for most galaxies), as described in \cite{Cappellari2003}, with the weighted Voronoi tessellation modification proposed by \cite{Diehl2006}. Maps were then smoothed using the two-dimensional local regression technique (LOESS) as implemented in the Python code developed by M. Cappellari.\footnote{\url{http://www-astro.physics.ox.ac.uk/~mxc/software}} We employed the standard diagnostic diagrams to separate the regions powered by star formation from regions powered by Active Galactic Nuclei (AGN) or Low-Ionization Nuclear Emission Region (LINER) emission \citep{Baldwin1981}. Only spaxels with a S/N$>$3 in all emission lines involved (including H$\alpha$\xspace) were considered. To compute total SFRs, we adopted the [O\textsc{III}]5007/$\rm H\beta$ vs. [N\textsc{II}]6583/$\rm H\alpha$ diagram and the division lines by \citet{Kauffmann2003} to select the spaxels whose ionized flux is powered by star formation. To characterize the physical processes acting on galaxies, we also inspected the [OIII]5007/$\rm H\beta$ vs [OI]6300/$\rm H\alpha$ diagnostic diagram. Among the various line-ratio diagrams, the one based on [O\textsc{I}] is the most sensitive to physical processes different from Star Formation (e.g. 
thermal conduction from the surrounding hot ICM, turbulence and shocks) and can therefore be considered a conservative lower limit of the real star formation budget \citep{Poggianti2019}. The metallicity of the ionized gas was computed for each star-forming spaxel using the pyqz Python module \citep{Dopita2013} v0.8.2; we obtained the $12 + \log(O/H)$ values by interpolating from a finite set of diagnostic line ratio grids computed with the MAPPINGS code \citep[see][for details]{Franchetto2020}. Only spaxels with a S/N$>$3 in all emission lines involved are considered. Structural parameters (effective radius $R_e$, inclination $i$, ellipticity $\varepsilon$, position angle $PA$) were computed on I-band images by measuring the radius of an ellipse including half of the total light of the galaxy \citep[see][for details]{Franchetto2020}. To identify the spaxels belonging to the galaxy main body we sliced the 2D image of the near-H$\alpha$\xspace continuum obtained with {\sc kubeviz}\xspace. We selected the isophote enclosing essentially all of the galaxy body, down to $\sim$1$\sigma$ above the background level \citep[see][for details]{Poggianti2017}. Integrated values are measured as the sum of all the spaxels within this contour. Only for metallicity, the integrated value we provide is the mean value computed at R$_e$ \citep{Franchetto2020}. \subsection{Ancillary catalogs - Galaxy environments} As defining the environment is quite a critical task, especially for the sparsest galaxies, in what follows we will report the group classification from three different works: our own PM2GC \citep{Calvi2011TheGalaxies}, the \cite{Tempel2014_g} and \cite{Saulder2016} group catalogs. The latter two are both based on Sloan Digital Sky Survey data. 
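The Balmer decrement dust correction and the H$\alpha$-to-SFR conversion described in this section can be sketched in a few lines (illustrative only, not the GASP pipeline; the function names are ours, and the extinction curve values $k({\rm H}\alpha) \simeq 2.53$ and $k({\rm H}\beta) \simeq 3.61$ are representative values for the \cite{Cardelli1989} law):

```python
import math

# Sketch of the Halpha-based SFR estimate described in the text.
# Assumed constants:
#   k(Halpha) ~ 2.53, k(Hbeta) ~ 3.61   (Cardelli et al. 1989 curve)
#   intrinsic Halpha/Hbeta = 2.86       (case B recombination)
#   SFR = 4.6e-42 L(Halpha)             (Kennicutt 1998 calibration)
K_HA, K_HB = 2.53, 3.61

def color_excess(f_ha, f_hb):
    """E(B-V) from the observed Balmer decrement f_ha/f_hb."""
    return 2.5 / (K_HB - K_HA) * math.log10((f_ha / f_hb) / 2.86)

def sfr_from_halpha(f_ha, f_hb, lum_dist_cm):
    """Dust-corrected SFR in Msun/yr from observed fluxes in erg/s/cm^2."""
    ebv = max(color_excess(f_ha, f_hb), 0.0)   # clip unphysical negative values
    f_corr = f_ha * 10 ** (0.4 * K_HA * ebv)   # de-reddened Halpha flux
    l_ha = 4.0 * math.pi * lum_dist_cm ** 2 * f_corr
    return 4.6e-42 * l_ha
```

For a spaxel at the intrinsic ratio ${\rm H}\alpha/{\rm H}\beta = 2.86$ the correction vanishes, and the estimate reduces to the \cite{Kennicutt1998a} relation applied to the observed luminosity.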
Specifically, \cite{Tempel2014_g} is based on SDSS DR10 \citep{york00, Ahn2014}, while \cite{Saulder2016} used the SDSS DR12 \citep[][]{Alam2015}, combined with the Two Micron All Sky Survey and the 2MASS Redshift Survey \citep[2MASS and 2MRS,][]{Skrutskie2006, Huchra2012}. However, (1) a few galaxies do not fall into the SDSS/2MASS footprints, and (2) besides the group environment, we also need to detect companions that might exert tidal forces. We therefore combined MCG \citep{Driver2005}, SDSS, WINGS/OmegaWINGS \citep{Moretti2014WINGSClusters, Moretti2017} and Hyperleda\footnote{\url{http://leda.univ-lyon1.fr}} redshifts to get a catalog as complete as possible. We also downloaded data from the Hyperleda catalog for galaxies with no redshift available, to further detect possible companions. We note that as group finding methods are automatic and meant to study groups from a statistical point of view, sometimes the values obtained to characterize the physical parameters of individual groups are not very reliable and should be taken with caution. \section{Galaxy Sample} \label{sec:data} The GASP program observed 114 galaxies, extracted from three surveys that, together, cover the whole range of environmental conditions at low redshift: WINGS \citep{Fasano2006}, OMEGAWINGS \citep{Gullieuszik2015}, and PM2GC \citep{Calvi2011TheGalaxies}. 94 of these galaxies were selected from the \cite{Poggianti2016JELLYFISHREDSHIFT} catalog of gas stripping candidates as their B-band images showed debris trails, tails, or surrounding debris located on one side of the galaxy. \begin{longrotatetable} \begin{deluxetable*}{llllllllllll} \tablecaption{Main properties of the galaxies in the sample. 
\label{tab:sample}} \tabletypesize{\scriptsize} \tablehead{ \colhead{ID} & \colhead{z} & \colhead{RA} & \colhead{DEC} & \colhead{$M_\ast$} & \colhead{SFR} & \colhead{$R_e$} & \colhead{$\varepsilon$} & \colhead{incl} & \colhead{12+$\log(O/H)@R_e$} & \colhead{mechanism} & \colhead{flag$_{c}$\tablenotemark{\footnotesize{a}}}\\ \colhead{} & \colhead{} & \colhead{[J2000]} & \colhead{[J2000]} & \colhead{[$\rm 10^9 M_\odot$]} & \colhead{[$\rm M_\odot/yr$]} & \colhead{[arcsec]} & \colhead{[deg]} & \colhead{[deg]} & \colhead{} & \colhead{} & \colhead{} } \startdata JO134\tablenotemark{\footnotesize{b}} & 0.0166 & 12:54:38.33 & -30:09:26.5 & 1.1$\pm$0.2 & 0.188$\pm$0.001 & -- & -- & -- & 8.1$\pm$0.1& merger+RPS &1 \\ JO190\tablenotemark{\footnotesize{b}} & 0.0132 & 22:26:53.62 & -30:53:11.1 & 2.2$\pm$0.6 & 0.370$\pm$0.002 & -- & -- & -- & 8.3$\pm$0.1 & merger & 1\\ JO20\tablenotemark{\footnotesize{b}} & 0.1471 & 01:08:55.06 & +02:14:20.8 & 71$\pm$12 & 12.1$\pm$0.1& -- & -- & -- &8.9$\pm$0.1& merger & 1\\ P11695 & 0.0464 & 10:46:14.89 & +00:03:00.8 & 12$\pm$2 & 3.27$\pm$0.04 & 4.9$\pm$0.4 & 0.09$\pm$0.02 & 25$\pm$2 &8.71$\pm$0.04& gas accretion (GASP VII) & 1\\ P12823 & 0.0504 & 10:52:24.04 & -00:06:09.9 & 22$\pm$6 & 1.33$\pm$0.01 & 3.3$\pm$0.2 & 0.56$\pm$0.04 & 65$\pm$3 &9.12$\pm$0.02& interaction & 1\\ P14672 & 0.0498 & 11:01:55.10 & +00:11:41.1 & 8$\pm$1 & 0.154$\pm$0.05 & 2.9$\pm$0.2 & 0.11$\pm$0.04 & 28$\pm$6 &8.98$\pm$0.05& CWE & 0\\ P16762 & 0.0487 & 11:10:19.63 & -00:08:34.4 & 35$\pm$5& 0.011$\pm$0.002 & 3.4$\pm$0.2 & 0.493$\pm$0.009 & 60.4$\pm$0.6 &--& starvation & 1\\ P17048 & 0.049 & 11:12:38.27 & +00:08:01.2 & 4$\pm$1 & 0.544$\pm$0.006 & 3.7$\pm$0.2 & 0.20$\pm$0.03 & 37$\pm$3 &8.44$\pm$0.05& interaction & 1\\ P18060 & 0.043 & 11:14:59.28 & -00:00:43.2 & 1.2$\pm$0.3 & 0.049$\pm$0.001 & 3.0$\pm$0.3 & 0.36$\pm$0.02 & 51$\pm$1 &8.4$\pm$0.1& CWS & 1\\ P19482 & 0.0406 & 11:22:31.25 & -00:01:01.6 & 21$\pm$4 & 1.25$\pm$0.02 & 4.7$\pm$0.4 & 0.419$\pm$0.005 & 55$\pm$4 
&8.97$\pm$0.05& CWE (GASP XVI) & 1 \\ P20159 & 0.0489 & 11:24:22.26 & -00:16:34.3 & 3.7$\pm$0.9 & 0.233$\pm$0.003 & 5.2$\pm$0.4 & 0.76$\pm$0.01 & 78.5$\pm$0.8 &8.2$\pm$0.1& RPS &0\\ P3984 & 0.0464 & 10:14:05.86 & -00:07:37.6 & 2.5$\pm$0.6 & 0.441$\pm$0.007 & 3.6$\pm$0.5 & 0.54$\pm$0.05 & 64$\pm$4 &8.36$\pm$0.09& merger & 1\\ P40457 & 0.0678 & 13:01:33.06 & -00:04:51.1 & 5$\pm$1 & 0.187$\pm$0.005 & 4.0$\pm$0.6 & 0.61$\pm$0.04 & 68$\pm$3 &8.4$\pm$0.1& gas accretion & 0\\ P443 & 0.0464 & 09:59:31.53 & -00:15:22.8 & 26$\pm$2 & 0.015$\pm$0.002 & 2.9$\pm$0.3 & 0.30$\pm$0.05 & 46$\pm$4 &--& starvation & 1\\ P4946 & 0.0621 & 10:18:30.81 & +00:05:05.0 & 29$\pm$6 & 0.288$\pm$0.07 & 2.4$\pm$0.3 & 0.63$\pm$0.03 & 69$\pm$2 &?\tablenotemark{\footnotesize{c}}& gas accretion (GASP XII) & 0\\ P5055 & 0.061 & 10:18:08.54 & -00:05:03.1 & 51$\pm$9 & 0.870$\pm$0.007 & 6.0$\pm$0.7 & 0.822$\pm$0.005 & 83.0$\pm$0.4 &8.93$\pm$0.03& RPS (GASP XII) & 1\\ P5169 & 0.0634 & 10:18:13.79 & +00:03:56.6 & 2.6$\pm$0.4& 0.0018$\pm$0.0003 & 2.3$\pm$0.2 & 0.52$\pm$0.04 & 63$\pm$3 &--& starvation (GASP XII) & 1\\ P5215 & 0.0629 & 10:16:58.24 & -00:14:52.9 & 33$\pm$6 & 0.99$\pm$0.01 & 5.2$\pm$0.3 & 0.266$\pm$0.04 & 43$\pm$4 &9.05$\pm$0.03& RPS/CWE (GASP XII) & 0\\ P59597 & 0.0495 & 14:17:41.13 & -00:08:39.4 & 5$\pm$1 & 0.283$\pm$0.005 & 5.7$\pm$0.4 & 0.651$\pm$0.009 & 70.9$\pm$0.6 &8.42$\pm$0.06& RPS & 1 \\ P63661 & 0.055 & 14:32:21.78 & +00:10:41.4 & 21$\pm$5& 1.31$\pm$0.02 & 6.0$\pm$0.6 & 0.26$\pm$0.08 & 43$\pm$7 &8.74$\pm$0.05& CWE (GASP XVI) & 1\\ P63692 & 0.0561 & 14:31:59.98 & +00:05:03.3 & 0.8$\pm$0.2 & 0.063$\pm$0.001 & 2.7$\pm$0.2 & 0.62$\pm$0.02 & 69$\pm$2 &8.0$\pm$0.1& CWS & 1\\ P63947 & 0.0562 & 14:31:01.82 & -00:10:56.9 & 2.2$\pm$0.5 & 0.111$\pm$0.002 & 2.9$\pm$0.2 & 0.3$\pm$0.1 & 50$\pm$9 &8.38$\pm$0.09& merger & 1\\ P8721 & 0.0648 & 10:34:08.65 & +00:00:03.2 & 52$\pm$8 & 2.76$\pm$0.02 & 6.0$\pm$0.2 & 0.68$\pm$0.03 & 73$\pm$2 &8.95$\pm$0.05& CWE (GASP XVI) & 1\\ P877 & 0.0427 & 10:00:49.96 
& -00:08:57.0 & 24$\pm$4 & 1.57$\pm$0.01 & 4.4$\pm$0.3 & 0.17$\pm$0.04 & 34$\pm$4 &9.05$\pm$0.04& merger & 0 \\ P95080 & 0.0403 & 13:12:08.75 & -00:14:20.3 & 13$\pm$3 & 1.17$\pm$0.02 & 7.5$\pm$0.9 & 0.27$\pm$0.07 & 43$\pm$6 &8.63$\pm$0.07& CWE (GASP XVI) &1\\ P96244 & 0.0531 & 14:18:35.47 & +00:09:27.8 & 60$\pm$20 & 5.32$\pm$0.03 & 5.9$\pm$0.8 & 0.44$\pm$0.02 & 57$\pm$1 &9.03$\pm$0.04& RPS & 1\\ P96949 & 0.0511 & 11:54:09.95 & +00:08:18.1 & 25.11$\pm$5 & 1.8$\pm$0.02 & 5.5$\pm$0.5 & 0.51$\pm$0.06 & 62$\pm$4 &8.7$\pm$0.2& merger (GASP VIII)& 1\\ \enddata \tablenotetext{a}{flag$_c$=1 for certain classification, flag$_c$=0 for uncertain classification.} \tablenotetext{b}{structural parameters not measured due to the irregularities of the galaxy.} \tablenotetext{c}{P4946 has no reliable estimate of the ionized gas metallicity because the presence of the central AGN does not leave a significant number of star-forming spaxels with which to estimate the metallicity at the effective radius.} \end{deluxetable*} \end{longrotatetable} Galaxies whose optical morphological disturbance was clearly induced by mergers or tidal interactions were deliberately excluded from the target sample. In this paper we study the field galaxies included in the stripping sample and the few passive ones. In practice, we excluded from the entire GASP sample (114 galaxies) cluster stripped galaxies \citep{Vulcani2018_L, Vulcani2020, Vulcani2020b}, field galaxies not showing any disturbance in their H$\alpha$\xspace images \citep[control sample galaxies,][]{Vulcani2018_L,Vulcani2019b}, and a galaxy belonging to a background cluster. The sample analysed in this work therefore includes 27 galaxies. We note that an in-depth analysis of 10/27 galaxies has already been published in previous papers \citep{Vulcani2017c, Vulcani2018_b, Vulcani2018_g, Vulcani2019_fil}, hence in what follows we will just refer to those papers without recapping their main findings. 
Those galaxies, however, will be included in Sec.\ref{sec:results2}, where we will characterize the field population as a whole. Table \ref{tab:sample} presents the list of all galaxies, along with some of their main properties. Most of the galaxy names start with ``P'', as they were drawn from the field sample of the PM2GC. There are, however, a few galaxies whose names start with ``J''. These galaxies were drawn from the imaging of the cluster sample (WINGS/OMEGAWINGS), but had no redshift prior to MUSE observations. It was therefore assumed that they were cluster members, but, as will be discussed later on, they are not. As will be discussed in the following, according to both the $\rm [OIII]5007/H\beta$ vs. [N II]6583/$\rm H\alpha$ and $\rm [OIII]5007/H\beta$ vs. [OI]6300/$\rm H\alpha$ diagrams, only two galaxies host an AGN in their center: JO20 and P4946. \section{The body of evidence of the different physical processes} \label{sec:results1} As mentioned in the introduction, galaxies in the different environments undergo different physical processes, which are expected to leave a different imprint on the galaxy spatially resolved properties. \begin{figure*} \centering \includegraphics[scale=0.5, angle=90]{mechanisms_v6.png} \caption{Flow chart describing the main criteria adopted to pinpoint the major mechanism acting on galaxies. Further evidence for each physical process is given in the following figures. \label{fig:mech} } \end{figure*} In this section we will present the criteria adopted for the classification into sub-classes, describing their main expected signatures on the gas and stellar property maps. To assign a physical mechanism to each galaxy in the sample, we adopted the scheme presented in Fig. \ref{fig:mech}. This diagram is as general as possible and can also be used to classify galaxies outside our sample. However, it includes only the main lines of evidence.
Our classification is also supported by the analysis of the other galaxy properties, as discussed below. { We note that our analysis is based on optical observations, and that no information on the distribution and kinematics of neutral gas, which likely dominates the gas mass budget of our star-forming galaxies, is available for this sample.} \subsection{The classification criteria} The first step is to separate galaxies with H$\alpha$\xspace in emission throughout the disk - i.e. star-forming galaxies - from those with no H$\alpha$\xspace in emission - i.e. passive systems. \subsubsection{Galaxy-Galaxy interactions} Focusing on galaxies with H$\alpha$\xspace in emission, we first select the objects with a close companion (distance between the galaxies $r<50\arcsec$) to identify possible cases of galaxy-galaxy interactions. Indeed, even though interactions and mergers were avoided in the target selection based on B-band imaging \citep{Poggianti2016JELLYFISHREDSHIFT, Poggianti2017}, MUSE observations revealed their presence in the sample. Disk galaxies with a companion can experience gravitational forces that differ from one side of the galaxy to the other. As a result, their material undergoes deforming effects that re-arrange the individual components of the galaxy \citep[e.g.,][]{toomre72}, producing tails \citep[e.g.,][]{Mihos1993} and, sometimes, warps \citep{Semczuk2020}. Effects of interactions can therefore be seen in the ionized gas and stellar kinematics maps obtained with MUSE. Tidal interactions (cyan box in Fig.\ref{fig:mech}, discussed in Sec. \ref{sec:interactions} and \ref{sec:interactions_a}) are expected to happen when the acceleration $a_{tid}$ produced by the companion on the galaxy of interest becomes comparable to the acceleration from the potential of the galaxy itself, $a_{gal}$.
Following \cite{Vollmer2005b}, $$ \frac{a_{tid}}{a_{gal}} = \frac{M_{neighbour}}{M_{gal}} \left(\frac{r}{R}-1\right)^{-2} $$ where R is the distance from the centre of the galaxy, r is the distance between the galaxies, and $\frac{M_{neighbour}}{M_{gal}}$ the stellar mass ratio. This formulation would require the de-projected distance between the two galaxies, which is unfortunately unknown; we therefore use the projected distance. Typically, if $\frac{a_{tid}}{a_{gal}}>0.15$, meaning that the tidal acceleration is at least 15\% of the galaxy's own acceleration, tidal interaction can be invoked as one of the main mechanisms affecting galaxy properties \citep{Merluzzi2016}. \subsubsection{Merging systems} { Next, we focus on merging systems.} During an interaction, in some cases galaxies get very close and the exchange of orbital angular momentum (through dynamical friction) becomes very high, so that the mean distance between the progenitors rapidly decreases and they finally merge (blue box in Fig.\ref{fig:mech}, discussed in Sec. \ref{sec:Mergers} and \ref{sec:Mergers_a}), forming a single galaxy \citep{Barnes1992a}. As the gas is redistributed, the regular metallicity gradients typically found in galaxies \citep[e.g.][]{Pilyugin2014} can be destroyed \citep{Kobayashi2004}. Star formation can also be triggered during mergers: gas can flow towards the center, inducing a nuclear starburst \citep{Springel2000, Barnes2002, Naab2006}. At large radii the gravitational torques instead push the material to the outer regions. This outflow enhances the formation of the tails already formed by the tidal field itself \citep{Bournaud2010}. Therefore, the signatures of mergers on the spatially resolved properties are chaotic gas and stellar kinematics and, in some cases, tidal features \citep[e.g.][]{Mihos1993, Struck1999}; flat or very irregular metallicity gradients; and signs of the merger remnants (e.g. double nuclei).
Note that simulations by \cite{Hung2016} have shown that the duration over which merger signatures are detectable is typically 0.2--0.4 Gyr, except in the case of equal-mass mergers, for which the duration is approximately twice as long. Interactions and mergers can also produce gas inflows that can feed a central black hole and induce AGN activity \citep{Sanders1988}. If no interactions/mergers occur, galaxies are expected to show rather regular patterns in their stellar kinematics. If they also present regular ionized gas kinematics, we can consider them as undisturbed galaxies (grey box in Fig.\ref{fig:mech}) and, in our case, they constitute the GASP control sample \citep{Vulcani2018_L,Vulcani2019b}. \subsubsection{Ram pressure stripping} { In the previous sections we focused on processes that alter the galaxy stellar kinematics; we now discuss hydrodynamical processes, which typically leave the stellar component unaltered, but produce } disturbed gas kinematics. Distortions can be limited to the galaxy disk, or can be visible as ionized gas tails. The latter are typically produced by ram pressure stripping \citep{Gunn1972}, one of the most efficient mechanisms to remove the gas from galaxies, provided the galaxy is embedded in a rather dense medium (clusters and groups). Indeed, the strength of the ram pressure depends on the intracluster gas density and the speed of the galaxy relative to the medium. This pressure strips gas from the regions of the galaxy where the gravitational binding is weaker than the force exerted on the gas by the intracluster-medium wind. Ram pressure stripping (purple boxes in Fig.\ref{fig:mech}, discussed in Sec. \ref{sec:RPS} and \ref{sec:RPS_a}) can produce tails of gas that is ionized through stellar photoionization due to ongoing star formation within the stripped material.
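The balance between ram pressure and gravitational binding just described is commonly written as the \cite{Gunn1972} criterion, $\rho_{ICM}\, v^2 > 2\pi G \Sigma_{star}\Sigma_{gas}$, although the formula is not given explicitly in the text. A minimal sketch, with order-of-magnitude numbers of our own choosing (not values from this paper):

```python
import math

G = 4.301e-3  # gravitational constant in pc Msun^-1 (km/s)^2

def is_stripped(rho_icm, v_gal, sigma_star, sigma_gas):
    """Gunn & Gott (1972) condition: gas is removed where the ram
    pressure exceeds the gravitational restoring force per unit area,
    rho_ICM * v^2 > 2 * pi * G * Sigma_star * Sigma_gas.
    rho_icm    : density of the ambient medium [Msun / pc^3]
    v_gal      : speed of the galaxy relative to the medium [km/s]
    sigma_star : local stellar surface density [Msun / pc^2]
    sigma_gas  : local gas surface density [Msun / pc^2]"""
    p_ram = rho_icm * v_gal ** 2
    p_restore = 2.0 * math.pi * G * sigma_star * sigma_gas
    return p_ram > p_restore

# Illustrative numbers (assumed): a galaxy crossing a cluster at
# 1000 km/s through an ICM of density ~2.5e-6 Msun/pc^3.
print(is_stripped(2.5e-6, 1000.0, 10.0, 5.0))    # low-density outer disk -> True
print(is_stripped(2.5e-6, 1000.0, 500.0, 50.0))  # dense inner disk -> False
```

With these toy surface densities the low-density outskirts are stripped while the dense inner disk retains its gas, which is the outside-in behaviour characteristic of ram pressure stripping.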
{ In contrast, the stellar component is undisturbed.} The gas retains the velocity of the stars at the location of the disc from which it was stripped and continues to rotate coherently with the galaxy out to several kiloparsecs from the main galaxy body. The stripping proceeds from the outside in, with the outermost regions of the disc stripped first \citep[e.g.][]{Poggianti2017}. As a consequence, the gas properties in the tail are similar to the gas properties in the external regions of the disk (e.g. lower metallicity than in the galaxy core, A. Franchetto et al. in prep). The stellar populations in the tails are also very young \citep[a few Myr, e.g.][]{Gullieuszik2017}, as stars are born out of the stripped gas. Galaxies undergoing ram pressure stripping can also show a central burst in star formation \citep{Vulcani2018_L}, as expected when the gas is compressed by this mechanism \citep{Roediger2014}. Ram pressure stripping can eventually remove the gas, producing first truncated disks \citep[e.g.,][]{Fritz2017} and then quenching star formation and transforming galaxies into passive systems \citep{Vulcani2020}. \subsubsection{Cosmic web stripping} Even though many parameters influence the efficiency of ram pressure stripping \citep[galaxy mass, galaxy orbit within the cluster, cluster properties;][]{Gullieuszik2020}, the density of the medium in which the galaxy is embedded plays a critical role. Galaxies in isolation are not expected to experience ram pressure stripping. These galaxies could instead undergo cosmic web stripping (green box in Fig.\ref{fig:mech}, discussed in Sec. \ref{sec:cws} and \ref{sec:cws_a}). This mechanism was introduced by \cite{Benitez2013} to characterize the effect of the cosmic web on isolated dwarf galaxies. The process is very similar to ram pressure stripping, but exerted by a much less dense environment.
{ Similarly to ram pressure stripping, it is expected to give rise to an ionized gas protuberance that resembles a tail, extending in a well-defined direction. The comparison between the stellar and gas kinematics is therefore key to distinguishing between cosmic web stripping and e.g. irregular galaxies: in irregular galaxies the stellar kinematics is expected to closely follow the motion of the gas, both in terms of extent and rotation \citep[e.g.,][]{Johnson2012}.} \begin{table*} \centering \small \begin{tabular}{ccccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{ID}} & presence of & $a_{tid}/a_{gal}$ & asymmetric & asymmetric & inhomogeneous & \multirow{2}{*}{$\rm flag_c$} \\ & companion & $>0.15$ & rgb and H$\alpha$\xspace & $\Delta v_{gas}$ and $\Delta v_{star}$ & $\sigma v_{star}$ & \\ \hline P17048 & \cmark & \cmark & \cmark & \cmark & \cmark & 1\\ P12823 & \cmark & {?} & \cmark & \xmark & \cmark & 1\\ \end{tabular} \caption{{ Summary of the main features investigated to characterize interacting systems. \cmark\ means that the criterion is met, \xmark\ means that the criterion is not met, { ?} means that it is not possible to verify the criterion (see text for details). $\rm flag_c$ indicates whether the classification is certain (1) or uncertain (0). } \label{tab:interacting}} \end{table*} Cosmic web stripping is also expected to eventually remove the gas, producing first truncated disks and then quenching star formation and transforming galaxies into passive systems \citep{Benitez2013}. However, these phases have not yet been observed. \subsubsection{Cosmic web enhancement} Distorted gas kinematics can also be produced by processes that, rather than removing gas from galaxies, either compress the existing gas or feed the galaxies with new gas, aiding star formation. Cosmic Web Enhancement (orange box in Fig.\ref{fig:mech}, discussed in Sec. \ref{sec:CWE}) is one of these mechanisms.
This process was first proposed by \cite{Vulcani2018_g} to explain the appearance of galaxies found in filaments. Filaments can indeed assist gas cooling and increase the extent of the star formation in the densest regions of the circumgalactic gas. \cite{Liao2018} showed that filaments are an environment that particularly favors this gas cooling followed by condensation and star formation enhancement. As the clouds move through the filament, the metal-rich gas of the galaxies mixes with the metal-poor gas constituting the circumgalactic gas. As a consequence, the latter gets enriched and cools more easily. Galaxies undergoing Cosmic Web Enhancement show detached H$\alpha$\xspace clouds out to large distances from the galaxy center (beyond 4$\times R_e$). The gas kinematics, metallicity map, and the ratios of emission-line fluxes confirm that they do belong to the galaxy gas disc; the analysis of their spectra shows that only a very weak stellar continuum is associated with them. Similarly, the star formation history and luminosity weighted age maps point to a recent formation of such clouds. \subsubsection{Gas accretion} The last mechanism that can alter regular gas kinematics is gas accretion (red boxes in Fig.\ref{fig:mech}, discussed in Sec. \ref{sec:accr}) \citep{Sancisi2008, Semelin2005}. An injection of gas can produce a very lopsided morphology. This asymmetry is expected to develop only at late times {and to affect only the gaseous component, leaving the stellar one mostly unaltered}. Galaxies in isolation are most likely fed by very low-metallicity gas inflowing from the cosmic web, and present very steep and asymmetric metallicity gradients, while galaxies in denser environments most likely underwent substantial gas accretion through a merger with a metal-rich, less massive object whose presence is no longer visible.
Signs of the accretion are again seen in the metallicity distribution, which is different from that observed in undisturbed galaxies \citep{Pilyugin2014}. Gas accretion with angular momentum opposite to that of the host galaxy can also produce a counter-rotating stellar disk \citep[e.g.,][]{Thakar1997, Puerari2001, Algorry2014, Bassett2017}. \subsubsection{Starvation} Focusing on galaxies with no H$\alpha$\xspace in emission, the analysis of the stellar kinematics can show signatures of past mergers, while the SFH maps can reveal how the quenching proceeded: inside-out, outside-in or homogeneously. A homogeneous suppression of the star formation can be produced by starvation. According to this scenario, star formation in galaxies is quenched because the inflow of gas from the IGM is halted, and as a consequence star formation in these galaxies can continue only for a limited amount of time by using the gas available in the galaxy. However, understanding what causes the cut-off of the gas reservoir is quite hard. \section{Results of galaxy classification} \label{sec:results1_1} In this section we will present an overview of the different mechanisms we identified acting on the galaxies in our sample. We will also characterize the galaxy hosting environments. For the sake of clarity, we will discuss in detail only one galaxy per physical process (Sec.\ref{sec:results1_1}), { showing and discussing only the main galaxy properties used to pinpoint the acting mechanism. In Appendix \ref{sec:all_fig}, we will instead show the maps of all the properties at our disposal, for completeness. Appendix \ref{sec:all} will show all the other galaxies in the sample, along with their classification.} We note that in some cases what we propose is just the most probable process, and we can not always exclude that other, unidentified mechanisms are responsible for the observed features.
Therefore, in our classification, we will distinguish between certain and uncertain cases ($flag_c$ in Tab. \ref{tab:sample}). { This flag is based on the analysis presented in Tables \ref{tab:interacting}, \ref{tab:merger}, \ref{tab:rps}, \ref{tab:cws}, \ref{tab:cwe}, \ref{tab:accr}, \ref{tab:pass}. If at least two criteria are not met, we assign $flag_c$=0, meaning that the classification is tentative. In the other cases, $flag_c$=1 and the classification is secure.} Overall, 21 galaxies have a secure classification and 6 a tentative one. \subsection{Galaxy-Galaxy Interactions} \label{sec:interactions} \begin{figure*} \centering \includegraphics[scale=0.6]{P17048_forinteracting_v4.png} \caption{P17048: example of interacting candidates in the sample. From left to right: Color composite image obtained by combining the reconstructed g, r, and i filters from the MUSE data cube (rgb), H$\alpha$\xspace flux map, H$\alpha$\xspace and stellar kinematics maps, stellar velocity dispersion map. The kpc scale is also shown in each panel. North is up, and east is left. Cyan or black contours represent the distribution of the oldest stellar population (from {\sc sinopsis}\xspace). { The magenta contour on the H$\alpha$\xspace flux map represents the galaxy rotated by 180$^\circ$ and is used to quantify the lopsidedness (see text for details). } \label{fig:interacting} } \end{figure*} Two galaxies fall in this category, and in both cases the classification is certain. { Table \ref{tab:interacting} lists them and summarizes the main features investigated to characterize interacting systems. These will be discussed in what follows. Figure \ref{fig:interacting} shows the relevant maps for P17048 and its companion P17044 \citep{Calvi2011TheGalaxies}, used as an example to outline the features that characterize interactions.
These are:} the color composite image obtained by combining the reconstructed g, r, and i filters from the MUSE data cube (from now on called the rgb image for brevity), the H$\alpha$\xspace flux map, and the H$\alpha$\xspace and stellar kinematics maps { (from now on $\Delta v_{gas}$ and $\Delta v_{star}$ for brevity)}\footnote{{ We note that spatially resolved spectroscopy is affected by beam smearing, i.e. the flux profile is spatially blurred in the central regions due to the atmospheric seeing. As a consequence, results in the central 1 arcsec, corresponding to the PSF, are influenced by this effect.}}. { All the remaining maps for P17048 (not needed to determine the main acting mechanism) are provided in Fig.\ref{fig:P17048_bis}}. The same maps for P12823 and its companion are presented in Sec.\ref{sec:interactions_a}. In these and all the following figures, contours represent the distribution of the oldest stellar population ($t> 5.7 \times 10^9$ yr, from {\sc sinopsis}\xspace). These contours will help us to identify the ``original body'' of the galaxy.
\begin{table*} \centering \small \begin{tabular}{cccccccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{ID}}& no clear & tidal & evidence for & { asymmetric} & { asymmetric} & inhomogeneous & asymmetric & patchy &\multirow{2}{*}{$\rm flag_c$} \\ & companion & tail & merger remnant\tablenotemark{\footnotesize{a}} & $\Delta v_s$ & $\Delta v_g$ & $\sigma v_{star}$ & metallicity & young regions & \\ \hline JO20 & \cmark & \cmark & \cmark & \xmark & \cmark &\cmark & \cmark & \cmark &1\\ JO134 & \cmark & \cmark & \cmark & \cmark & \cmark &\cmark & \cmark & \cmark &1 \\ JO190 & \cmark & \cmark & \cmark & \cmark & \xmark &\cmark & \cmark & \cmark &1 \\ P3984 & \cmark & \cmark & \xmark & \cmark& \xmark &\cmark& \cmark & \cmark &1\\ P877 & \cmark & \xmark & \xmark & \cmark &\cmark & \cmark & \xmark & \xmark &0\\ P63947 & \cmark & \cmark & \xmark & \cmark & \cmark & \cmark & \cmark & \cmark &1\\ P96949 & \cmark & \cmark & \cmark & \cmark & \cmark &\cmark & \cmark &\cmark &1 \\ \end{tabular} \tablenotetext{a}{Note that the evidence for a merger remnant is not an absolutely necessary criterion to establish whether the galaxy underwent a merging event, and is not considered to define $\rm flag_c$.} \caption{{ Summary of the main features investigated to characterize merging systems. The meaning of the symbols is as in Tab. \ref{tab:interacting}. } \label{tab:merger}} \end{table*} As shown in Fig.\ref{fig:interacting}, P17048 is characterized by a lopsided morphology: the rgb image highlights that it extends much more towards North-West than towards South-East. The H$\alpha$\xspace map is similarly asymmetric and shows a tattered distribution towards North. { To quantify the degree to which the light is lopsided, we measure the overlap between the original image and the image of the galaxy rotated by 180$^\circ$ around the galaxy center. Specifically, we take the ratio of the overlapping area of the two images to the area of the original image.
Symmetric galaxies have a level of overlap ($A_o$) of $>85\%$. The rotated map of P17048 is overlaid as a pink contour on the H$\alpha$\xspace flux map in Fig.\ref{fig:interacting}. P17048 has an overlap of 73\%, suggesting that about a quarter of the light is lopsided.} Such lopsidedness is also seen in the motion of the gas: the gas kinematics is distorted. The velocity field spans the range -50$<v [km/s]<$ 50. P17044 also shows rotation, and its gas kinematics is distorted. The galaxy presents a warp in position angle in the direction of the companion. The stellar kinematics is similarly disturbed and spans the same $\Delta(v)$ range. The southern part has very low rotational velocity. The velocity dispersion map of the stellar component is rather inhomogeneous, suggesting that the motion of the stars has been altered. The velocity dispersion is higher towards North, reaching values of $\sigma v_{star}$= 35 km/s. {To quantify the asymmetries in the velocity fields (both for the stellar and the gas components, separately), we consider only the stellar disk and therefore the region defined by the stellar velocity map.} We adapt the definition of asymmetry first introduced by \cite{Conselice1997}. As done for the H$\alpha$\xspace flux image, we rotate the image of the galaxy by 180$^\circ$ around the galaxy center and compare the rotated image with the original one. We then measure the asymmetry $A$ as follows: $$ A = \sqrt{\frac{\sum \left(|V_o|-|V_{180}|\right)^2}{\sum V_o^2}} $$ where $V_o$ is the stellar or ionized gas velocity in the original image and $V_{180}$ the corresponding velocity in the image rotated by 180$^\circ$. The sum is performed over all pixels within the matching region of the original and rotated images. The lowest possible value for the asymmetry parameter is 0, corresponding to a completely symmetric velocity field, while the highest is 1 and corresponds to a galaxy that is completely asymmetric.
We set a threshold of 0.3 above which we define a velocity field as asymmetric. For P17048 the asymmetry of the stellar kinematics ($A_v$) is 0.58, while for the gas kinematics ($A_g$) it is 0.65. \begin{figure*} \centering \includegraphics[scale=0.6]{JO20_formerger_v4_bis.png} \caption{JO20: example of a merging system. From left to right: rgb image, H$\alpha$\xspace flux map, H$\alpha$\xspace and stellar kinematics maps, gas and stellar velocity dispersion maps, BPT map obtained using the OI line and the division by \cite{Kauffmann2003}, metallicity map, mass density map and luminosity weighted age map. { The stellar velocity map is shown twice with a different range of velocities, to better highlight the rotation of the eastern object.} The kpc scale is also shown in each panel. North is up, and east is left. Cyan or black contours represent the distribution of the oldest stellar population. \label{fig:merger} } \end{figure*} { To understand whether P17048 and P17044 are indeed interacting, we also inspect their surrounding environment.} P17048 is part of a small four-member group according to \cite{Tempel2014_g}, while both \cite{Calvi2011TheGalaxies} and \cite{Saulder2016} classify it as a binary system. P17044 is at 22\arcsec\ from P17048, towards South, and it is visible in the rgb image shown in the first panel of Figure \ref{fig:interacting}. P17044 is at z= 0.04938, and its mass is $\log (M_\ast/M_\sun) \sim 8.8$. Following the \cite{Vollmer2005b} prescription, $\frac{a_{tid}}{a_{gal}}>0.15$ is reached at R$\sim$10\arcsec (=$\sim$ 10 kpc), implying that at larger galactocentric distances tidal interaction effects indeed play a role. P17048 and P17044 are therefore likely interacting, and, given that the disturbance is more pronounced on the side opposite to the companion, they could already have had a closer approach, and they might be orbiting around each other.
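The \cite{Vollmer2005b} criterion applied above lends itself to a quick numerical check. A minimal sketch in Python, where the mass ratio of $\sim$0.22 and the 22 kpc separation are illustrative choices of ours, not values quoted in the text:

```python
def tidal_ratio(mass_ratio, r, R):
    """a_tid / a_gal following Vollmer et al. (2005):
    mass_ratio = M_neighbour / M_gal (stellar mass ratio),
    r = projected distance between the two galaxies,
    R = distance from the centre of the galaxy of interest,
    with r and R in the same units (e.g. kpc)."""
    return mass_ratio * (r / R - 1.0) ** -2

# Illustrative configuration (assumed): a companion with ~22% of the
# galaxy's stellar mass at a projected separation of 22 kpc.
for R in (5.0, 9.0, 10.0, 15.0):
    ratio = tidal_ratio(0.22, 22.0, R)
    verdict = "tidal interaction plausible" if ratio > 0.15 else "negligible"
    print(f"R = {R:4.1f} kpc: a_tid/a_gal = {ratio:.3f} -> {verdict}")
```

With these assumed numbers the threshold $a_{tid}/a_{gal} > 0.15$ is crossed around $R \approx 10$ kpc, qualitatively reproducing the behaviour described above for the P17048/P17044 pair: the ratio grows with galactocentric distance, so the outskirts feel the companion first.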
\subsection{Merging systems}\label{sec:Mergers} Seven galaxies fall in this category: JO20, JO134, JO190, P3984, P877, P63947 and P96949. In all the cases but P877 the classification can be considered certain. { Table \ref{tab:merger} summarizes the main features used to establish that they are undergoing mergers. As can be seen, some galaxies do not meet all the criteria, but as explained in the text, these do not challenge the proposed classification.} We note that for these galaxies no companions were detected either from MUSE images or from the available redshifts in the literature, but we can not exclude the presence of very faint objects. The details for the major merger P96949 can be found in \cite{Vulcani2017c} { and are summarized in Tab. \ref{tab:merger}}, those for JO134, which is an example of multiple physical processes at play, can be found in Sec.\ref{sec:multiple}, while the description of the other galaxies is deferred to Sec.\ref{sec:Mergers_a}. In this section we will focus on JO20, which is a very clear case of a major 1:1 merger { and use this galaxy to outline the typical features of galaxies undergoing mergers.} JO20 is found in the field of view of the cluster Abell 147, for which no spectroscopic coverage is available. Indeed, JO20 had no redshift before MUSE observations and it is the most distant object in our sample (z=0.1471). It is quite isolated: the closest galaxy at a similar measured redshift is at 18\arcmin. Nonetheless, we can not exclude the presence of some unidentified small structure. \begin{figure*} \centering \includegraphics[scale=0.6]{P96244_forrps_v4.png} \caption{P96244: example of a ram pressure stripped candidate. From left to right: rgb image, H$\alpha$\xspace flux map, H$\alpha$\xspace and stellar kinematics maps, metallicity map, BPT map obtained using the OI line and the division by \cite{Kauffmann2003}, and star formation history maps in 4 age bins. The kpc scale is also shown in each panel.
North is up, and east is left. Cyan or black contours represent the distribution of the oldest stellar population.\label{fig:rps} } \end{figure*} \begin{table*} \centering \small \begin{tabular}{lccccccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{ID}} & member of & presence of & { asymmetric} & { symmetric} & evidence & stretched & young & central & \multirow{2}{*}{$\rm flag_c$} \\ & a group & gas tail & $\Delta v_{gas}$ & $\Delta v_{star}$ & for shocks & metallicity & tail & burst & \\ \hline P96244 & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark &1\\ JO134 & { ? } & \cmark & \cmark & \xmark & \cmark & \xmark & \xmark & \cmark & 1\tablenotemark{\footnotesize{a}} \\ P20159 & { ? } & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & 0 \\ P59597 & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & 1 \\ P5055 & \cmark & \xmark\tablenotemark{\footnotesize{b}} & \cmark & \cmark &\cmark & \cmark &\xmark\tablenotemark{\footnotesize{b}} & \cmark & 1\\ P5125 & \cmark & { ?} & \cmark & \cmark &\xmark &\xmark & \cmark &\cmark & 0 \\ \end{tabular} \tablenotetext{a}{This galaxy also underwent a merger event, so flag$_c$=1 has not been determined using the usual criterion.} \tablenotetext{b}{This galaxy is a truncated disk, which is another clear signature for RPS at a very advanced stage.} \caption{{ Summary of the main features investigated to characterize ram pressure stripped galaxies. The meaning of the symbols is as in Tab. \ref{tab:merger}.} \label{tab:rps}} \end{table*} Figure \ref{fig:merger} shows the rgb image, the gas and stellar kinematics and their velocity dispersions, the BPT, the ionized gas metallicity,\footnote{In this case metallicities have been derived neglecting the information from the BPT classification.} the mass density and luminosity weighted age maps for JO20. { Other galaxy properties can be found in Fig.
\ref{fig:JO20_bis}.} The rgb image highlights the presence of a clear tail extending towards North. Two bright nuclei are visible and these are most likely the remnants of the two merging galaxies. These have similar brightness and size, suggesting the two objects had similar mass. In the H$\alpha$\xspace flux map the tail is also visible. Part of it is detached from the main galaxy body, but ionized gas is still visible between the galaxy and the tail. The velocity field of the ionized gas is disturbed, spanning the range -300$<v [km/s]<$300 { ($A_g$=0.55)}. The locus of zero velocity is bent and there are hints of a fast-rotating inner disk. The velocity field of the stellar component reveals even more clearly the existence of the two objects: the Eastern one has a typical velocity with respect to the center of -180 km/s and rotates with a speed of about 60 km/s, while the Western one has the same velocity as the gas, but almost no rotation, suggesting it could be face-on. The presence of the two components in the velocity field is evidence of the presence of merger remnants. In this case, $A_v$=0.21.\footnote{{We note that in this case $A_v$ might be biased by the very high relative velocity of the two objects ($\sim600$ km/s). This number enters the denominator of the formula used to obtain $A_v$.}} The stellar velocity dispersion map is also very chaotic and inhomogeneous: { its median value is 135 km/s, but it reaches very high values (up to 250 $\rm km \, s^{-1}$\xspace)}. For comparison, control sample galaxies have a median velocity dispersion of 37$\pm$7 km/s, highlighting the chaotic motions of the stars in the aftermath of the collision. The BPT map shows the presence of an AGN in the core, along with a bi-conical structure. Very little star formation in the disk is ionizing the gas; the tail is instead most likely ionized by young stars formed as a consequence of the merger.
The metallicity map is very asymmetric and highlights again the presence of two systems: the eastern component - roughly corresponding to the peak of negative gas velocity - is metal poor ($12 + \log(O/H)$ = 8.5), while the western one - corresponding to the peak of positive gas velocity - is much more metal rich ($12 + \log(O/H)$ = 9.1). Most of the mass of the galaxy is confined in the core, where the two merger remnants are visible as a double peak in the mass distribution. Finally, the luminosity weighted age map shows many young regions (LWA$<10^{7.5}$ yr), suggesting that the merger has produced a burst in star formation. An arc characterized by young ages (LWA$\sim10^{7}$ yr) is also clearly visible towards the East, where the metallicity is the lowest. \subsection{Galaxies undergoing ram pressure stripping} \label{sec:RPS} \begin{figure} \centering \includegraphics[scale=0.3]{P96244_P59597_paper.png} \caption{Spatial distribution of galaxies around P96244. The group definition is taken from \cite{Tempel2014_g}. P96244 is represented with a thick red symbol. Colored crosses represent galaxies in groups (each color identifies a different group) whose redshift is within $\pm0.01$ of the galaxy redshift. Black dots represent galaxies whose redshift is within $\pm0.015$ of the galaxy redshift, small grey dots represent galaxies with no redshift available (from Hyperleda). Note that P96244 and P59597 (Sec.\ref{sec:RPS_a}) are relatively close in space and are in the same plot, therefore P59597 is also indicated here. \label{fig:rps_env} } \end{figure} \begin{figure*} \centering \includegraphics[scale=0.3]{P96244_tail_extent.png} \includegraphics[scale=0.37]{cfr_RHa.png} \caption{{ Left: Example of the procedure adopted to compute the asymmetric extent of H$\alpha$\xspace in galaxies undergoing stripping, with the intent of identifying the tail. The grey region identifies the extent of H$\alpha$\xspace in P96244.
Colored spaxels are the ones used to compute the maximum H$\alpha$\xspace extent ($R_{H\alpha_{max}}$), identified by the red dashed line, and the minimum H$\alpha$\xspace extent ($R_{H\alpha_{min}}$), identified by the red solid ellipse (see text for details). Right: distribution of $R=R_{H\alpha_{max}}/R_{H\alpha_{min}}$ for control sample galaxies (black histogram), ram pressure stripping galaxies (purple histogram) and cosmic web stripping galaxies (green histogram). The vertical dashed line represents the median control sample value, and the grey region indicates the 1$\sigma$ uncertainty. Stripping galaxies have systematically larger $R$ than control sample galaxies, highlighting the presence of an ionized gas tail. } \label{fig:tail} } \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.6]{P18060_forrps_v4.png} \caption{P18060: example of a cosmic web stripping candidate. From left to right: rgb image, H$\alpha$\xspace flux map, H$\alpha$\xspace and stellar kinematics maps, metallicity map, BPT map obtained using the OI line and the division by \cite{Kauffmann2003}, and star formation history maps in 4 age bins. The kpc scale is also shown in each panel. North is up, and east is left. Cyan or black contours represent the distribution of the oldest stellar population.\label{fig:cws} } \end{figure*} We identify { six} galaxies in our sample whose characteristics are consistent with ram pressure stripping. { All of these galaxies but JO134 have $A_v<0.3$, suggesting symmetric stellar kinematics.} The best candidate is P96244 { and will be used to outline the main features of galaxies falling in this class. Its main} properties are shown in Fig. \ref{fig:rps}: the rgb image, the BPT, H$\alpha$\xspace flux, H$\alpha$\xspace gas, and stellar kinematics, metallicity and the SFH maps.
{ Additional maps are shown in Fig.\ref{fig:P96244_bis}.} As the hosting environment is a key feature for understanding whether ram pressure stripping can be invoked as the main mechanism, Fig.\ref{fig:rps_env} shows the surrounding region of P96244. The same plots for the other candidates (P20159 and P59597) are shown in Sec. \ref{sec:RPS_a}. P5055 and P5215 are the other two cases of possible ram pressure stripping in groups and have been characterized in \cite{Vulcani2018_g}. { The properties of all galaxies falling in this class are summarized in Tab.\ref{tab:rps}.} { As already mentioned, JO134 is an example of multiple physical processes (merger+ram pressure stripping) at play, and will be discussed in Sec.\ref{sec:multiple}.} The rgb image shows that P96244 is a spiral galaxy with a clear tail extending towards the South. The signatures of unwinding arms (i.e. dislodged material that appears to retain the original structure of the spiral arms) are also observed, a feature that can be induced by ram pressure stripping \citep{Bellhouse2020}. The tail is even more visible in the H$\alpha$\xspace flux map, where small detached clouds are also detected. { The presence of the tail for this and for all the other ram pressure and cosmic web stripping candidates has been confirmed by comparing the extent of H$\alpha$\xspace along two opposite directions with the extent of H$\alpha$\xspace in undisturbed galaxies (see Sec.\ref{sec:results2} for details on the control sample). Specifically, for each galaxy showing a possible tail, we selected a wedge centered on the galaxy center and with an aperture of 40 deg enclosing the tail. We then measured the maximum extent of H$\alpha$\xspace ($R_{H\alpha_{max}}$), i.e. the semi-major axis of an ellipse with the same PA, inclination and ellipticity as the galaxy, enclosing 95\% of the data points.
Next we took the symmetric wedge with respect to the galaxy center and measured again the extent of H$\alpha$\xspace ($R_{H\alpha_{min}}$). We then took the ratio of the two quantities ($R$) to quantify how much more extended H$\alpha$\xspace is on one side (the tail) than on the other. An example of the procedure is shown in the left panel of Fig.\ref{fig:tail} for P96244. We did the same analysis for the control sample galaxies. As these galaxies have no tail by definition, we selected two symmetric wedges along the semi-major axis.\footnote{Choosing a random position did not alter the control sample results.} The right panel of Fig.\ref{fig:tail} shows the distribution of $R$ for the stripping candidates and for the control sample. The mean $R$ for the control sample is $\bar{R} =1.18\pm0.12$. No control sample galaxy has $R>\bar{R}+2\sigma$. In contrast, stripping candidates typically exceed this value, allowing us to conclude that they are characterized by a tail. The only exceptions are P5055, which is a truncated disk, and P5215, whose tail is only hinted at, as extensively discussed by \cite{Vulcani2018_g}.} The H$\alpha$\xspace flux map of P96244 also indicates that the objects seen towards the East in the rgb image are actually background objects. { The stellar velocity field is overall regular. The bending of the locus of zero velocity is due to the presence of a bar (O. Sanchez et al. in prep.). The same bending is observed also in the ionised gas velocity field. We also observe that the gas in the tail maintains a coherent rotation with the galaxy.} { The gas velocity field, though, shows asymmetries, as $A_g=0.68$.} The metallicity gradient in the stellar disk is overall regular, but the gradient in the tail is stretched and the metallicity there is the lowest (A. Franchetto et al. in prep.). The line ratios in the tail are indicative of processes other than star formation as producers of the ionized gas \citep{Poggianti2019b}.
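The wedge-based extent measurement just described can be sketched numerically. The following is a minimal illustration, assuming arrays of H$\alpha$-detected spaxel coordinates (in kpc, centred on the galaxy); the function names, the handling of the 40 deg wedge and the use of a 95\% quantile to define the enclosing ellipse are our own reading of the procedure, not the actual GASP code.

```python
import numpy as np

def elliptical_radius(x, y, pa_deg, ellipticity):
    """Semi-major-axis length of the galaxy-shaped ellipse passing through (x, y)."""
    pa = np.radians(pa_deg)
    # rotate coordinates into the galaxy frame (x' along the major axis)
    xp = x * np.cos(pa) + y * np.sin(pa)
    yp = -x * np.sin(pa) + y * np.cos(pa)
    q = 1.0 - ellipticity  # axial ratio b/a
    return np.hypot(xp, yp / q)

def wedge_extent(x, y, pa_deg, ellipticity, wedge_pa_deg,
                 half_aperture=20.0, frac=0.95):
    """Elliptical radius enclosing `frac` of the spaxels inside a wedge
    of full opening 2 * half_aperture, centred on wedge_pa_deg."""
    theta = np.degrees(np.arctan2(y, x))
    dtheta = (theta - wedge_pa_deg + 180.0) % 360.0 - 180.0
    sel = np.abs(dtheta) <= half_aperture
    return np.quantile(elliptical_radius(x[sel], y[sel], pa_deg, ellipticity), frac)

def tail_ratio(x, y, pa_deg, ellipticity, tail_pa_deg):
    """R = R_Halpha_max / R_Halpha_min from two opposite 40-deg wedges."""
    r_max = wedge_extent(x, y, pa_deg, ellipticity, tail_pa_deg)
    r_min = wedge_extent(x, y, pa_deg, ellipticity, tail_pa_deg + 180.0)
    return r_max / r_min
```

For an undisturbed disk the two opposite wedges give similar extents and $R\approx1$, while a one-sided tail drives $R$ well above the control-sample threshold $\bar{R}+2\sigma$.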
Finally, the SFH maps point to a recent formation of the tail and show a strong burst of star formation in the galaxy core for $t<2\times 10^7$ yr. The tail is also aligned along the direction that connects the galaxy and the center of its hosting group: P96244 is indeed part of a 26 member group (Fig.\ref{fig:rps_env}, \citealt{Tempel2014_g, Saulder2016}) at z=0.053, with $\sigma=201.9$ km/s, $R_{vir}=0.31$ Mpc and a halo mass of 10$^{13.65}$$\rm M_\odot$\xspace. P96244 is at 2.6$R_{vir}$ and 1.2$\sigma$ from the group center and might be at first infall. However, Fig. \ref{fig:rps_env} shows that the approach adopted by \citet{Tempel2014_g} might have broken a single structure into many smaller systems, so the galaxy could actually be part of a bigger group whose center is closer to the galaxy. \begin{table} \centering \small \begin{tabular}{lcccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{ID}} & \multirow{2}{*}{D/T} & $r_d$ & r$_{extent}$ & \multirow{2}{*}{r/R$_{200}$} & \multirow{2}{*}{$v/\sigma$} & $\Pi_{gal}$ \\ & & (kpc) & (kpc) & & & (N m$^{-2}$) \\ \hline P96244 & $\sim 1$ & 3.9 & 12.8 & 2.6 & 1.2 & 1.0$\times 10^{-13}$ \\ P20159 & $\sim 1$ & 1.6 & 5.3 & 2.2 & -3.7 & 8.4$\times 10^{-14}$ \\ P59597 & $\sim 1$ & 1.8 & 6.8 & 4.3 & -6.5 & 4.0$\times 10^{-14}$ \\ P5055\tablenotemark{\footnotesize{a}} & $\sim 1$ & 2.3 & 12.6 & 2.4 & -0.5 & 5.4$\times 10^{-15}$\\ P5215\tablenotemark{\footnotesize{a}} & $\sim 1$ & 4.2 & 15.0 & 0.3 & -2.1 & 1.4$\times 10^{-14}$ \\ JO134 & $\sim 1$ & 1.1 & 2.6 & ? & ?
&2.4$\times 10^{-13}$\\ P18060 & $\sim 1$ & 1.1 & 3.5 & 5.6 & -2.7 & 8.3$\times 10^{-14}$ \\ P63692 & $\sim 1$ & 1.0 & 3.6 & 6 & 0 & 3.2$\times 10^{-14}$ \\ \end{tabular} \tablenotetext{a}{Values taken from \citet{Vulcani2018_g}} \caption{{ Structural parameters and other physical properties of the galaxies undergoing stripping: disk to total light ratio (D/T), disk scale-length ($r_d$), extent of the H$\alpha$\xspace emission along the major axis of the disk (r$_{extent}$), projected phase-space coordinates, and $\Pi_{gal}$ at r$_{extent}$.} \label{tab:rps_math}} \end{table} { To better understand whether ram pressure stripping can be invoked to explain the features observed in P96244 and in the other stripping galaxies, we follow the approach presented in \cite{Vulcani2018_g}. As we do not have at our disposal X-ray observations of the environment surrounding the galaxies, we can not directly infer the intensity of ram pressure from the density of the intra-group medium ($\rho_{IGM}$). We instead must compare the expected ram pressure with the self-gravity or anchoring pressure across the galaxy ($\Pi_{gal}(r_{gal})$, where $r_{gal}$ is the radial distance from the galaxy centre), which reflects the galaxy's ability to retain gas. Gas stripping will occur when $P_{ram} > \Pi_{gal}(r_{gal})$. We compute $\Pi_{gal}$ using a pure disk model as described in \cite{Jaffe2018}, with the parameters listed in Tab. \ref{tab:rps_math}. We assume that all galaxies are disk-dominated (D/T$\sim1$), measure the extent of the H$\alpha$\xspace emission disk along the semi-major axis ($r_{extent}$) of the galaxies, and compute the stellar disk scale length $r_d$ from the stellar mass of the galaxies, as in \cite{Wu2018}. We assume a gas fraction of 0.25 (corresponding to late-type galaxies of $\sim 10^{10}\,M_\odot$, \citealt{Popping2014}) and a disk scale-length for the gas 1.7 times that of the stars \citep{Cayatte1994}.
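For this double-exponential disk model the anchoring pressure takes the closed form $\Pi_{gal}(r) = 2\pi G \Sigma_\ast(r)\,\Sigma_{gas}(r)$. A minimal numerical sketch under the stated assumptions (pure disk, gas fraction 0.25, gas scale-length 1.7 times the stellar one); the stellar mass in the usage line is illustrative, and this is not the code of \cite{Jaffe2018}:

```python
import numpy as np

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30  # solar mass, kg
KPC = 3.086e19   # kiloparsec, m

def sigma_exp(r_kpc, m_msun, rd_kpc):
    """Surface density (kg m^-2) of an exponential disk of mass m at radius r."""
    return (m_msun * MSUN) / (2.0 * np.pi * (rd_kpc * KPC) ** 2) * np.exp(-r_kpc / rd_kpc)

def anchoring_pressure(r_kpc, mstar_msun, rd_kpc,
                       gas_fraction=0.25, gas_rd_factor=1.7):
    """Pi_gal(r) = 2 pi G Sigma_star(r) Sigma_gas(r), in N m^-2."""
    sig_star = sigma_exp(r_kpc, mstar_msun, rd_kpc)
    sig_gas = sigma_exp(r_kpc, gas_fraction * mstar_msun, gas_rd_factor * rd_kpc)
    return 2.0 * np.pi * G * sig_star * sig_gas

# illustrative: a 10^10 Msun disk with r_d = 3.9 kpc, evaluated at r = 12.8 kpc
pi_gal = anchoring_pressure(12.8, 1e10, 3.9)
```

The anchoring pressure drops steeply with radius, so the outermost H$\alpha$-detected radius sets the weakest anchoring pressure and hence the upper limit on the $P_{ram}$ experienced.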
We compute $\Pi_{gal}$ at $r_{extent}$ assuming this is the maximum radius at which ram pressure has been able to strip gas. $\Pi_{gal}(r_{gal} = r_{extent})$ can be considered an upper limit to the $P_{ram}$ experienced by the galaxies. We report in Tab. \ref{tab:rps_math} the maximum ram pressure experienced by the analyzed galaxies. These numbers must be compared to the predicted $P_{ram}$ from hydrodynamical simulations of groups and filaments: according to \cite{Bahe2013}, for groups with $\rm M_{host} = 1$--$3 \times 10^{13}\,M_\odot$ the estimated $P_{ram}$ ranges from $\sim 3\times 10^{-14}$ to $\sim 10^{-15}$ N m$^{-2}$ in the region 1-3$\times R_{200}$. Filaments around low-mass structures have a $P_{ram}$ ranging from $\sim 2\times 10^{-13}$ to $\sim 6\times 10^{-15}$ N m$^{-2}$ in the region 1-5$\times R_{200}$ from the centre of the group. In voids, $P_{ram}$ can range from $\sim 10^{-14}$ to $\sim 10^{-16}$ N m$^{-2}$ in the region 1-5$\times R_{200}$. This analysis confirms that the galaxies analyzed in this Section can indeed be undergoing a ram pressure sufficient to produce the observed features. P96244 is the galaxy most likely feeling the largest $P_{ram}$ of the sample. Similarly, the cosmic web stripping galaxies may indeed be experiencing some stripping, but it is not clear whether this is exerted by the closest groups, which are too distant in both space and velocity to exert a significant pressure, or by the voids. } Finally, we can exclude that P96244 is feeling tidal interaction: its closest galaxy is P59391 at 145\arcsec\, (=150 kpc), whose stellar mass is $10^{9.9}$$\rm M_\odot$\xspace. According to \cite{Vollmer2005}, P96244 would feel the effect of a galaxy-galaxy interaction only 74\arcsec\ away from the galaxy center. Note, though, that P59391 is present in the \cite{Poggianti2016JELLYFISHREDSHIFT} catalog of stripping candidates, included in the mildest stripping category (JClass=1).
This galaxy could therefore be another case of ram pressure stripping exerted by the group. \subsection{Galaxies undergoing cosmic web stripping} \label{sec:cws} \begin{table*} \centering \small \begin{tabular}{cccccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{ID}} & group only in & presence of & { asymmetric} & { symmetric} & stretched & young & \multirow{2}{*}{$\rm flag_c$} \\ & the surrounding & gas tail & $\Delta v_{gas}$ & $\Delta v_{star}$ & metallicity & tail & \\ \hline P18060 & \cmark & \cmark & \xmark & \cmark & \cmark & \cmark & 1\\ P63692 & \cmark & \cmark & \cmark & { ?} & { ?} & \cmark & 0\\ \end{tabular} \caption{{ Summary of the main features investigated to characterize cosmic web stripping candidates. The meaning of the symbols is as in Tab. \ref{tab:merger}.} \label{tab:cws}} \end{table*} Within the sample, we detected two cosmic web stripping candidates: P18060 and P63692. They both have low stellar masses ($\log (M_\ast/M_\odot)<9.1$). Note that the galaxies analyzed by \cite{Benitez2013} are passive, while our galaxies are still star forming, so we hypothesize we are witnessing the early phases of this phenomenon. { Table \ref{tab:cws} summarizes the main features investigated to assess cosmic web stripping. These are shown in Fig. \ref{fig:cws} for P18060: the rgb image, the BPT map, the H$\alpha$\xspace flux, H$\alpha$\xspace gas and stellar kinematics maps, the metallicity map, and the SFH maps. Additional maps can be found in Fig. \ref{fig:P18060_bis}}. Figure \ref{fig:cws_env} shows the environment where P18060 is found. The description of P63692 is deferred to Sec.\ref{sec:cws_a}. \begin{figure} \centering \includegraphics[scale=0.3]{P18060_paper.png} \caption{Spatial distribution of galaxies around P18060. Colors and symbols have the same meaning as in Fig. \ref{fig:rps_env}. \label{fig:cws_env} } \end{figure} The rgb image shows that P18060 has a rather regular morphology. The clouds seen in the rgb image are background objects.
However, the H$\alpha$\xspace map highlights the presence of a tail towards the South. { The presence of the tail is supported by the analysis of the H$\alpha$\xspace extent (Fig.\ref{fig:tail}).} The { extent of the} gas distribution is asymmetric, { but the rotation in the disk is symmetric ($A_g=0.22$).} The gas rotates slower than the stars (-60$<v_{gas} [km/s]<$60, -100$<v_{star} [km/s]<$100). The galaxy has an inverted metallicity gradient, with a metallicity of $12+\log[O/H]\sim 8.8$ in the tail, $\sim 8.4$ in the core, and $\sim 8.0$ on the side opposite to the tail. This inverted gradient is often found in low mass galaxies \citep[e.g.,][]{Wang2019}. This piece of evidence could suggest either that the gas was displaced during the stripping, or that rather than stripping we are witnessing accretion of high metallicity gas from the North. However, there are no other indications for gas accretion. The BPT map shows that the gas is mostly powered by star formation. The SFH maps point to a rather recent formation of the tail. While all its properties are overall consistent with ram pressure stripping, P18060 does not belong to any structure (Fig.\ref{fig:cws_env}). A binary system is at 677 kpc. According to \cite{Tempel2014_g, Saulder2016}, the closest group is at 1268 kpc from P18060, at a redshift of 0.04062. Given the estimated radius of the group (0.2259 Mpc), the galaxy is at 5.6$R_{vir}$. The group velocity dispersion is $\sigma=267.1$ km/s, meaning that the velocity difference between the galaxy and the group is $\sim 720$ km/s, corresponding to $\sim 2.7\sigma$. { Tab.\ref{tab:rps_math} reports the expected $\Pi_{gal}$ for the galaxy, which is consistent with stripping exerted by filaments/voids. } Even though the group could hardly be responsible for the galaxy properties, it is worth mentioning that the tail of P18060 is roughly aligned along the direction of the group.
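The projected phase-space quantities quoted here follow from simple conversions; a minimal sketch with illustrative inputs (the galaxy redshift below is an assumed value chosen to reproduce the quoted velocity offset, not the measured one):

```python
C_KMS = 299792.458  # speed of light, km/s

def phase_space_coords(z_gal, z_group, sep_mpc, r200_mpc, sigma_kms):
    """Line-of-sight velocity offset (km/s) and normalised projected
    phase-space coordinates (r / R200, |dv| / sigma) of a galaxy
    relative to a group."""
    dv = C_KMS * (z_gal - z_group) / (1.0 + z_group)
    return dv, sep_mpc / r200_mpc, abs(dv) / sigma_kms

# illustrative values loosely echoing the P18060 case (assumed galaxy redshift)
dv, r_norm, v_norm = phase_space_coords(0.04312, 0.04062, 1.268, 0.2259, 267.1)
```

With these inputs the offset is $\sim$720 km/s at $\sim$5.6 $R_{200}$, matching the order of the numbers quoted above.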
The closest galaxy to P18060 is P18079 at z = 0.03951, found at 137.89\arcsec. This galaxy has no mass estimate available in the literature, but it is too far to exert a significant tidal influence on P18060, so we conclude that the most likely mechanism is cosmic web stripping. \begin{table*} \centering \small \begin{tabular}{ccccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{ID}} & member of & H$\alpha$\xspace beyond & symmetric & properties of H$\alpha$\xspace clouds & H$\alpha$\xspace clouds & \multirow{2}{*}{$\rm flag_c$} \\ & a filament & 4R$_e$ & $\Delta v_{star}$ & similar to main body & young & \\ \hline P14672 & { ?} & \cmark & \cmark & \cmark & \cmark & 0\\ P95080 & \cmark & \cmark & \cmark & \cmark & \cmark & 1\\ P63661 & \cmark & \cmark & \cmark & \cmark & \cmark & 1\\ P8721 & \cmark & \cmark & \cmark & \cmark & \cmark & 1\\ P19482 & \cmark & \cmark & \cmark & \cmark & \cmark & 1\\ \end{tabular} \caption{{ Summary of the main features investigated to characterize cosmic web enhancement galaxies. The meaning of the symbols is as in Tab. \ref{tab:merger}. } \label{tab:cwe}} \end{table*} { We note that at first sight, cosmic web stripping galaxies could also simply be irregular galaxies: the two populations cover a similar mass and luminosity range \citep{Hunter1986, Hunter2004}. Nonetheless, the main difference between the two populations lies in the comparison between their stellar and gas kinematics. In irregular galaxies the stellar kinematics is expected to closely follow the motion of the gas \citep[e.g.,][]{Johnson2012}, while in cosmic web stripping galaxies the gas extends much further than the stellar component, especially in a preferential direction that gives rise to a tail. This is suggestive of stripping.} \subsection{Galaxies undergoing cosmic web enhancement} \label{sec:CWE} \begin{figure*} \centering \includegraphics[scale=0.6]{P14672_forcwe_v4.png} \caption{P14672: example of a cosmic web enhancement candidate.
From left to right: rgb image, H$\alpha$\xspace flux map, H$\alpha$\xspace and stellar kinematics maps, stellar velocity dispersion map, metallicity map, BPT map obtained using the OI line and the division by \cite{Kauffmann2003}, and star formation history maps in 4 age bins. The kpc scale is also shown in each panel. North is up, and east is left. Cyan or black contours represent the distribution of the oldest stellar population. { The magenta contour on the H$\alpha$\xspace flux map represents the galaxy rotated by 180$^\circ$ and is used to quantify the asymmetry (see text for details). } \label{fig:cwe} } \end{figure*} \begin{figure} \centering \includegraphics[scale=0.3]{P14672_paper.png} \caption{Spatial distribution of galaxies around P14672. Colors and symbols have the same meaning as in Fig. \ref{fig:rps_env}. \label{fig:P14672_env} } \end{figure} In \cite{Vulcani2019_fil} we identified four galaxies feeling cosmic web enhancement (P95080, P19482, P8721 and P63661), and in \cite{Vulcani2018_g} we proposed that P5215 might also be feeling the same mechanism, instead of ram pressure stripping. Within the GASP sample, there is another candidate that might be undergoing cosmic web enhancement, as it shows properties similar to those of the galaxies discussed by \cite{Vulcani2019_fil} { and summarized in Tab.\ref{tab:cwe}}: P14672, whose { main} properties are shown in Fig. \ref{fig:cwe}.\footnote{{ Additional maps are shown in Fig.\ref{fig:P14672_bis}.}} Nonetheless, this classification is highly uncertain, as the redshift coverage is very poor in the part of the sky surrounding the galaxy and we can not properly characterize its environment in order to detect the possible presence of a filament (Fig. \ref{fig:P14672_env}). The closest group at a similar redshift (z=0.051) to the galaxy is at 800 kpc. The velocity difference between the galaxy and the group center is 370 km/s.
{ In this case, the environmental information is key; therefore, even though all the other criteria adopted to define the mechanism are met, we assign flag$_c$=0.} The rgb image shows no peculiar features, but the galaxy is clearly lopsided { ($A_o=65\%$).} We note that the elongated source towards the West, at the edge of the galaxy, is a background star-forming galaxy. The H$\alpha$\xspace flux map shows a large number of knots and many detached clouds at the same velocity as the galaxy, indicating that they belong to it. These clouds have no preferential direction. In \cite{Vulcani2019_fil} we selected galaxies with a maximum extent of H$\alpha$\xspace in units of $r_e$ (R(H$\alpha$\xspace)$_{max}$) $>4$. For P14672, R(H$\alpha$\xspace)$_{max}$=3.97, so it narrowly missed the selection threshold. As for the galaxies in \cite{Vulcani2019_fil}, both the gas and stellar kinematics are quite regular, { the stellar velocity field is symmetric ($A_v$=0.17),} and the stellar velocity dispersion is relatively low. The emission line flux ratios confirm that the clouds do belong to the galaxy gas disk, and the metallicity map shows a rather regular gradient, with the metallicity in the clouds consistent with that of the galaxy outskirts. The SFH maps point to a recent formation of the clouds. No other mechanism can explain the morphology of the galaxy; alternatively, it could simply be an irregular galaxy. However, in the literature we have found no object with similar properties. Narrow-band imaging surveys like the H$\alpha$\xspace Galaxy Survey \citep[H$\alpha$\xspace GS,][]{Shane2001}, the H$\alpha$\xspace galaxy survey \citep{James2004}, the H-alpha Galaxy Groups Imaging Survey (H$\alpha$ggis, PI.
Erwin), An H$\alpha$\xspace Imaging Survey of Galaxies in the Local 11 Mpc Volume \citep[11HUGS,][]{Kennicutt2008}, Dynamo \citep{Green2014}, or Fabry-Perot observations like the Gassendi H$\alpha$\xspace survey of SPirals \citep[GHASP,][]{Epinat2008} have no field galaxies with such extended and luminous ($\rm \log (H\alpha\ [erg/s/cm^2/arcsec^2]) > -17.5$) H$\alpha$\xspace regions located well beyond $R_{25}$. \subsection{Galaxies experiencing gas accretion} \label{sec:accr} P11695 is the best example in our sample of a galaxy fed by low-metallicity gas inflow from the cosmic web, as it is found in isolation and has one of the steepest and most asymmetric metallicity gradients observed so far \citep[e.g.][]{Pilyugin2014}. All its properties are described at length in \cite{Vulcani2018_b}. P40457 and P4946 are instead good candidates for galaxies that underwent substantial gas accretion from a metal-rich, less massive object that is no longer visible, even though in both cases the classification is uncertain. P4946 has already been discussed in \cite{Vulcani2018_g} and is also characterized by a gas disk counter-rotating with respect to the stellar disk, suggesting that the accretion happened with a retrograde motion. \begin{table*} \centering \small \begin{tabular}{cccccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{ID}} & \multirow{2}{*}{isolated} & asymmetric & regular & asymmetric & asymmetric & counter-rotating & \multirow{2}{*}{$\rm flag_c$} \\ & & rgb and H$\alpha$\xspace &$\Delta v_{star}$ & metallicity & growth &disk\tablenotemark{\footnotesize{a}} & \\ \hline P40457 &{ ?} &\cmark & { ?} &\cmark &\cmark & \xmark & 0 \\ P11695 &\cmark &\cmark &\cmark &\cmark &\cmark & \xmark & 1\\ P4946 &\xmark &\xmark &\cmark & { ?} & \cmark & \cmark &0\\ \end{tabular} \tablenotetext{a}{This is not a critical feature, but clear evidence for gas accretion.} \caption{{ Summary of the main features investigated to characterize galaxies undergoing gas accretion.
The meaning of the symbols is as in Tab. \ref{tab:merger}. } \label{tab:accr}} \end{table*} \begin{figure*} \centering \includegraphics[scale=0.6]{P40457_foraccretion_v4.png} \caption{P40457: example of gas accretion from a minor object. From left to right: rgb image, H$\alpha$\xspace flux map, H$\alpha$\xspace and stellar kinematics maps, gas velocity dispersion map, metallicity map, and star formation history maps in 4 age bins. The kpc scale is also shown in each panel. North is up, and east is left. Cyan or black contours represent the distribution of the oldest stellar population. { The magenta contour on the H$\alpha$\xspace flux map represents the galaxy rotated by 180$^\circ$ and is used to quantify the asymmetry (see text for details). } \label{fig:accretion_min} } \end{figure*} { Table \ref{tab:accr} summarizes some features that can point to accretion, even though it is important to keep in mind that galaxy properties can change depending on the properties of the accreted material. } Figure \ref{fig:accretion_min} shows the rgb image, the H$\alpha$\xspace flux, H$\alpha$\xspace gas and stellar kinematics maps, the metallicity map and the SFH maps for P40457, { while additional maps are shown in Fig.\ref{fig:P40457_bis}}. The rgb image shows that the stellar disk, when compared to the contours representing the original body, is lopsided and extends towards the West. Note that the object with a similar color detected 20\arcsec\ East of the galaxy is a foreground object (z=0.046). The lopsidedness is also very accentuated in H$\alpha$\xspace { ($A_o=59\%$)}. The gas disk is remarkably extended compared to the stellar one, especially when considering the original body. The stellar kinematics is quite regular ($-180<v [km/s]<180$, { $A_v=0.17$}), even though it must be taken with caution given the small number of bins imposed by the low S/N. { The gas kinematics is instead asymmetric: $A_g=0.44$}.
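The lopsidedness and asymmetry parameters quoted throughout ($A_o$, $A_v$, $A_g$) compare a map with itself rotated by 180$^\circ$ about the galaxy center, as indicated by the magenta contours in the figures. A minimal sketch of one common flux-asymmetry normalisation (the exact GASP definition may differ):

```python
import numpy as np

def rotational_asymmetry(img):
    """Flux asymmetry between a map and its 180-degree rotation about the
    image centre: A = sum|I - I_180| / (2 sum I), one common normalisation."""
    rot = img[::-1, ::-1]  # 180-degree rotation for an odd-sized, centred map
    return np.abs(img - rot).sum() / (2.0 * img.sum())
```

A perfectly symmetric disk gives $A=0$, while a one-sided excess raises $A$ towards values like those measured here.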
The gas velocity dispersion is also overall quite low ($<15$ km/s) and only slightly higher in the Northern side of the galaxy. The metallicity map is very asymmetric, with different gradients in the different parts of the galaxy (A. Franchetto et al. in prep.). The northern part is much richer than the southern one. The SFH maps unveil that the galaxy grew substantially in size in the last two age bins, during which it developed the lopsidedness. Regarding the environment, the galaxy is in a portion of the sky with quite low spectroscopic coverage, so the environment is difficult to identify. According to the available data, the closest galaxy at the same redshift is 250\arcsec\ away and the closest group is about 1 Mpc away \citep{Tempel2014_g}. All these properties are consistent with gas accretion, most likely of some metal-rich gas, proceeding from the North West. \subsection{Galaxies experiencing starvation} \label{sec:passive} \begin{table} \centering \small \begin{tabular}{ccccc} \hline \multicolumn{1}{c}{\multirow{2}{*}{ID}} & no H$\alpha$\xspace & regular & homogeneous& \multirow{2}{*}{$\rm flag_c$} \\ & emission & $\Delta v_{star}$ & SF suppression & \\ \hline P16762 & \cmark & \cmark & \cmark &1 \\ P443 & \cmark & \cmark & \cmark &1 \\ P5169 & \cmark & \cmark & \cmark &1 \\ \end{tabular} \caption{{ Summary of the main features investigated to characterize galaxies that underwent a starvation event. The meaning of the symbols is as in Tab. \ref{tab:merger}. } \label{tab:pass}} \end{table} \begin{figure*} \centering \includegraphics[scale=0.6]{P16762_forpassive_v4.png} \includegraphics[scale=0.33]{P16762_spectrum_i1.png} \caption{P16762: example of a starvation candidate. The rgb image, the stellar velocity map, the comparison between the different age maps and the integrated spectrum are shown. Cyan or black contours represent the distribution of the oldest stellar population (from {\sc sinopsis}\xspace). In the comparison between the stellar maps of different ages:
The old star formation bin ($t > 5.7\times 10^9$ yr; red lines) is shown in all the panels, for reference. Left: recent star formation bin ($2 \times 10^7$ yr $< t < 5.7 \times 10^8$ yr; blue lines). Middle: intermediate star formation bin ($5.7 \times 10^8$ yr $< t < 5.7 \times 10^9$ yr; green lines). Right: old star formation bin. Contours are logarithmically spaced between SFR = 0.0001 and 0.1 $M_\sun$/yr.\label{fig:passive} } \end{figure*} The three passive galaxies in the sample were inserted among the targets to study the final stages of galaxy evolution in the field. P5169 is a group galaxy that has already been studied in \cite{Vulcani2018_g}. The other two galaxies in this category are P443 and P16762. { The main properties investigated are summarized in Tab.\ref{tab:pass}. These are shown in Fig. \ref{fig:passive} for P16762: the rgb image, the stellar kinematics map, the comparison between the different age maps, and the integrated spectrum. All the other properties are shown in Fig.\ref{fig:P16762_bis}.} To ease the comparison between the SFR at the different epochs, we only plot the contours logarithmically spaced between SFR = 0.0001 and 0.1 $M_\sun$/yr/kpc$^2$ and use as reference the oldest age bin. Note that the youngest age bin is not shown because no ongoing star formation is detected. P443 is instead discussed in Sec.\ref{sec:passive_a}. \begin{figure*} \centering \includegraphics[scale=0.6]{JO134_formultiple_v4.png} \caption{JO134: example of a combination of merger and ram pressure stripping. From left to right, top to bottom: rgb image, H$\alpha$\xspace flux map, H$\alpha$\xspace and stellar kinematics maps, metallicity map, luminosity weighted age map and star formation history maps in 4 age bins. The kpc scale is also shown in each panel. North is up, and east is left. Cyan or black contours represent the distribution of the oldest stellar population.
\label{fig:mul} } \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.5]{JO134_paper.png} \caption{Environment around JO134. As the galaxy is outside the SDSS footprint, no group definition is available. Grey dots represent galaxies in the redshift range $0.05<z<0.019$, blue filled circles are the galaxies in the redshift ranges indicated in the titles. JO134 is represented as a red circle. The histogram represents the redshift distribution of galaxies. The vertical line marks the redshift of JO134. Shaded areas are the redshift intervals indicated in the spatial distributions. \label{fig:JO134_env} } \end{figure*} P16762 is a typical example of a passive disk \citep{bundy10, Bamford2009}: it is classified as an S0 galaxy \citep{Calvi2018MorphologyUniverse} and the rgb image reveals the presence of a disc, but there is no star formation throughout the disk, as indicated by the absence of emission lines in the integrated spectrum. The stellar velocity field is very regular (-240$< v[km/s]<$240), { and symmetric ($A_v=0.09$),} excluding the possibility that interactions or a merger affected the galaxy, at least recently. P16762 also has a very high central velocity dispersion ($\sigma_{star}=170$ km/s), which indicates the presence of a prominent bulge and excludes the existence of a bar \citep{Das2008, Aguerri2009}. The comparison of the extent of the SFR at different epochs suggests that star formation was suppressed uniformly throughout the disk: the extension of the maps is very similar at all epochs. No outside-in quenching, { often used as a proxy for ram pressure stripping \citep[e.g.][]{Vulcani2020},} is observed. This result points to starvation as the main mechanism. The galaxy is part of a binary system: the companion is a massive galaxy (10$^{11.1}$$\rm M_\odot$\xspace) located at 70\arcsec. According to the \cite{Vollmer2005} formulation, tidal interaction could have an effect on the galaxy for $r>13\arcsec$, but no signs are visible.
\subsection{JO134: a case of multiple processes} \label{sec:multiple} To conclude this overview, we focus on a galaxy that might be simultaneously affected by two mechanisms. This is actually the only clear example of the coexistence of two mechanisms in the field sample, suggesting that typically the presence of a dominant mechanism washes out the effect of the secondary one \citep[but see][for an example in GASP clusters]{Fritz2017}. Figure \ref{fig:mul} shows the rgb image, the H$\alpha$\xspace flux, the H$\alpha$\xspace and stellar kinematics, the metallicity, the luminosity weighted age maps and the star formation history maps in 4 age bins for JO134, while Fig.\ref{fig:JO134_env} shows its environment. { Tables \ref{tab:merger} and \ref{tab:rps} summarize the most important features used to pinpoint the mechanisms. Fig.\ref{fig:JO134_bis} summarizes all the maps we have at our disposal for this galaxy.} The rgb image unveils a low luminosity central part and a very clear bright region towards the North East, composed of many blue knots. Overall the galaxy morphology is very irregular, especially on the North-West side of the galaxy. The H$\alpha$\xspace map shows very bright regions in the Northern part, and a clear low luminosity tail towards the South is visible. Some detached clouds are also observed. The ionized gas shows an overall { asymmetric ($A_g = 0.71$)} and slow rotation, in the range -30$<v_{gas} [km/s]<$30. The rotation axis goes from South to North and clear distortions are seen on the Northern side of the galaxy, where the locus of zero velocity meanders. The gas kinematics in the tail has a coherent rotation with the rest of the galaxy, though it is stretched. The stellar kinematics is much more chaotic and no regular rotation is detected. { The measured asymmetry is $A_v = 0.45$.} A clear trail of constant velocity (v$\sim 20$ km/s) crosses the galaxy from East to West.
The stellar velocity dispersion is highest along this trail (plot not shown), reaching $\sigma_{star}>100$ km/s. The same region also has a very low metallicity. The LWA map shows very young ages in all of the northern part of the galaxy (LWA$\sim10^6$ yr), distributed along an arc, and also at the end of the tail. Similarly, the SFH maps show that the galaxy was born as an overall regular object, but from $t<6\times10^9$ yr an asymmetry started to develop. In the youngest age bin, both the Northern side and the tail developed. JO134 had no redshift measurement prior to the MUSE observations and is also outside the SDSS footprint, so no group definition is available. Extracting all the available redshifts in the surrounding area, there are two main structures, one at the redshift of the galaxy and one at slightly lower redshift (Fig. \ref{fig:JO134_env}). Both structures have a velocity dispersion of $\sim 250$ km/s and are still in formation, with no clear center, so it is not possible to measure the distance of JO134 from the groups' centers. The most probable scenario explaining the characteristics of JO134 is that the galaxy is falling towards a group, with the resulting ram pressure stripping producing the bow shock in the Northern part and the tail developing on the opposite side, while it is also undergoing a minor merger event. The bright blue clump seen towards the North-West in the rgb image could indeed be a bullet { (i.e. a compact galaxy moving at very high speed)} hitting the galaxy with a different velocity and affecting the orbits of the stars in the Northern part. The chaotic stellar velocity field and the high velocity dispersion in that region are consistent with this possibility. An alternative to the bullet scenario is an old gas-rich merger from which the disk regrew before being stripped.
\section{General trends} \label{sec:results2} \begin{figure} \centering \includegraphics[scale=0.47]{groups_general_classes.png} \caption{Incidence of the different categories in the sample: Accretion (either gas accretion or accretion of a small object), Cosmic web enhancement (CWE), Cosmic web stripping (CWS), Interaction, Merger, Ram pressure stripping (RPS), Starvation. \label{fig:class} } \end{figure} In the previous section we combined the information obtained from the spatially resolved properties of the galaxies and the characterization of the environment in which they are embedded to infer the most probable mechanism acting on each of them. Figure \ref{fig:class} presents a summary of the different categories and shows the distribution of galaxies within each class.\footnote{{ We note that, since for JO134 we identified two coexisting main mechanisms, this galaxy will be counted twice in the following distributions. As for P5215, where we cannot firmly distinguish between ram pressure stripping and cosmic web enhancement \citep{Vulcani2018_g}, we assume ram pressure stripping as the dominant process.} } Overall, the most populated category is that of mergers (either minor or major) with 7 galaxies, followed by ram pressure stripping (6) and cosmic web enhancement (5). We remind the reader that GASP started with the aim of characterizing gas removal processes, paying particular attention to ram pressure stripping. It is therefore interesting to note that even though merging events and interactions were purposely excluded, they still represent more than 35\% of the non-cluster star-forming sample. This result indicates that a visual selection based on optical images can easily confuse processes affecting both the stellar and gas components with processes that leave the stellar component unaltered. The presence of interacting galaxies also suggests that recognizing companions on the basis of optical images is not trivial, even for expert inspectors.
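As a sanity check on the quoted fraction, the per-class counts reported in this section (7 mergers, 2 interactions, 24 star-forming objects; JO134 counted once here) can be combined as follows:

```python
# Counts taken from the text: 7 mergers and 2 interactions out of
# 24 star-forming non-cluster galaxies.
mergers, interactions, star_forming = 7, 2, 24

fraction = (mergers + interactions) / star_forming
print(f"{fraction:.1%}")  # -> 37.5%, i.e. "more than 35%" as stated
```

This is also consistent with the "almost 2/5" quoted in the conclusions, since $9/24 = 0.375 \approx 2/5$.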
\begin{figure} \centering \includegraphics[scale=0.44]{groups_general_mass_distr.png} \caption{Stacked mass distribution for galaxies in the sample, as indicated by the label. The black line shows the mass distribution of the undisturbed sample from \citet{Vulcani2019b}, for reference. \label{fig:mass_distr} } \end{figure} Taking into account the original selection performed by \cite{Poggianti2016JELLYFISHREDSHIFT}, three galaxies were classified as undisturbed (P12823, P14672 and P877), but our analysis shows that they are not, while the other candidates were labelled with different degrees of stripping. There is however no clear correlation between the classification obtained from the optical selection (``JClass'', with numbers ranging from 0 to 5, with 5 representing the galaxies with the clearest evidence for stripping) and the proposed mechanism resulting from the spatially resolved analysis (e.g. it is not the case that all galaxies with JClass = 4 or 5 turn out to be ram pressure stripping galaxies, nor that all galaxies with JClass=1 are cosmic web enhancement cases). The three galaxies included as passive systems (P443, P16762, P6169) based on fiber spectroscopy turned out to be passive across their entire disk, and no galaxy selected as star forming from fiber spectroscopy turned out to be passive instead. \begin{figure} \centering \includegraphics[scale=0.47]{groups_general_color_mass.png} \caption{Reconstructed absolute B-V -$\rm M_\ast$\xspace relation of galaxies in the sample, compared to that of the undisturbed galaxies. Colors and symbols refer to galaxies of the different classes, as indicated in the label. The empty purple triangle represents JO134, which is simultaneously undergoing both a merging event and ram pressure stripping. \label{fig:color_mass} } \end{figure} We can now investigate some general trends of the sample, to inspect whether any category stands out in any of the main scaling relations, obtained using integrated values.
The aim is to understand whether, in the absence of spatially resolved data, we can use one of these relations to select galaxies undergoing a specific process. In what follows, we will use for comparison the GASP undisturbed sample, already exploited e.g. in \cite{Vulcani2019b}. { Though small, this sample allows us to avoid the systematics that would be introduced by using external samples, where different approaches to measure stellar masses, star formation rates, metallicities and sizes have been adopted.} This sample includes both field and cluster galaxies, for a total of 30 objects. \begin{figure*} \centering \includegraphics[scale=0.47]{groups_general_sfr_mass.png} \includegraphics[scale=0.47]{groups_general_deltaSFR.png} \caption{Left: SFR-$\rm M_\ast$\xspace relation of galaxies in the sample, compared to the undisturbed relation. Colors and symbols refer to galaxies of the different classes, as indicated in the label. The empty purple triangle represents JO134, which is simultaneously undergoing both a merging event and ram pressure stripping. The dashed line represents the fit to the control sample from \cite{Vulcani2018_g}. Right: Distributions of the differences between the galaxy SFRs and their expected values according to the fit to the control sample, given their mass. Colored histograms represent galaxies of the different classes, the black empty histogram represents the control sample. The dashed vertical line is centered at 0. \label{fig:sfr_mass} } \end{figure*} Figure \ref{fig:mass_distr} shows the mass distribution of the galaxies in the different categories. Cosmic web stripping galaxies populate the lowest mass bin. In contrast, cosmic web enhancement galaxies tend to populate the high mass end of the distribution.
This might be due to the fact that, given the existence of mass segregation inside filaments \citep{Malavasi2017}, more massive galaxies are found in the inner parts of the filaments where densities are higher, and therefore more easily develop signs of cosmic web enhancement. Galaxies undergoing all the other processes seem not to have a clear dependence on stellar mass. The lack of trends could also be due to the low number statistics, even though \cite{Gullieuszik2020} showed that for ram pressure stripping stellar mass is not the driving parameter. For comparison, the undisturbed sample is also plotted. Though the control sample lacks very low mass galaxies ($<10^{9.2}$$\rm M_\odot$\xspace), both median and mean values are very similar ($\sim 10^{10}$$\rm M_\odot$\xspace) and the Kolmogorov-Smirnov test is not able to detect statistically significant differences between the samples. \subsection{Scaling relations} \subsubsection{(B-V)-$\rm M_\ast$\xspace relation} Figure \ref{fig:color_mass} presents the (B-V)-$\rm M_\ast$\xspace relation. As expected, the passive galaxies are the reddest objects in the sample, followed by the ram pressure stripped galaxy P5055, which is a truncated disk and will most likely soon become completely passive \citep{Vulcani2018_g}. Compared to the control sample, none of the classes stands out in a particular part of the diagram, although there are hints that the other ram pressure stripping galaxies and the mergers might be slightly bluer, suggesting a larger contribution of young stars. \subsubsection{SFR-$\rm M_\ast$\xspace relation} We next inspect the SFR-$\rm M_\ast$\xspace relation, which can probe whether a sub-population is currently forming stars at a higher rate than the others.
The same relation for cluster ram pressure stripping galaxies was presented in \cite{Vulcani2018_L}, where we showed that galaxies undergoing ram pressure stripping occupy the upper envelope of the control sample SFR-$\rm M_\ast$\xspace relation, showing a systematic enhancement of the SFR at any given mass. The star formation enhancement occurs in the disk (0.2 dex), and additional SF takes place in the tails. \citeauthor{Vulcani2018_L}'s results suggest that strong ram pressure stripping events can moderately enhance the star formation in the disk prior to gas removal. Figure \ref{fig:sfr_mass} shows the SFR-$\rm M_\ast$\xspace relation and the distribution of the difference between the SFR of each galaxy and the value derived from the control sample fit given the galaxy mass \citep[from][]{Vulcani2018_L}. While the sample size is too small to perform definitive statistical tests, some qualitative trends emerge. First of all, we note that overall the regions of the plane spanned by the sample analysed in this paper and by the control sample are very similar, even though control sample galaxies have a narrower distribution in the difference between the measured and estimated SFRs. \begin{figure*} \centering \includegraphics[scale=0.47]{groups_general_mass_met.png} \includegraphics[scale=0.47]{groups_general_deltamet.png} \caption{Left: $\rm M_\ast$\xspace-ionized gas metallicity relation of galaxies in the sample, compared to the undisturbed relation. Colors and symbols refer to galaxies of the different classes, as indicated in the label. The empty purple triangle represents JO134, which is simultaneously undergoing both a merging event and ram pressure stripping. The dashed line represents the fit to the control sample from \cite{Franchetto2020}. Right: Distributions of the differences between the galaxy ionized gas metallicity and their expected value according to the fit to the control sample, given their mass.
Colored histograms represent galaxies of the different classes, the black empty histogram represents the control sample. The dashed vertical line is centered at 0.\label{fig:mass_met} } \end{figure*} Focusing on the different sub-populations, all but one (P63947) of the mergers lie well above the control sample fit. Their mean SFR excess is $\sim 0.2$ dex { (corresponding to a 2$\sigma$ deviation)}. The two interactions also lie just above the relation. This result is consistent with the notion that mergers and interactions can indeed enhance star formation \citep[e.g.,][]{Barnes2004, Kim2009, Saitoh2009, Schweizer2009, Davies2015, lin08, Perez2011, Athanassoula2016}. Only two ram pressure stripping candidates (P96244 and JO134) lie above the control sample relation, while all the others actually tend to occupy the lower envelope, suggesting they could be transitioning toward the passive sequence. We remind the reader that P96244 (see Sec. \ref{sec:RPS}) is our best candidate for ram pressure stripping in groups and is most likely caught at the peak of the stripping. Its excess is $\sim 0.4$ dex { (= 4$\sigma$ deviation)} and the enhancement was also visible in the SFH maps shown in Fig. \ref{fig:rps}. P5055, the ram pressure stripping candidate with the largest negative excess (0.3 dex, { i.e. 3$\sigma$ deviation}), is indeed a truncated disk. Cosmic web enhancement galaxies also lie along the upper envelope of the relation, suggesting that the star formation induced by this mechanism alters the global star-forming properties of the galaxies. The only exception is P14672, which is the only galaxy in this group with an uncertain classification. Among the accretion candidates, P11695, which accreted gas with low metallicity, is well above the control sample SFR-$\rm M_\ast$\xspace relation, and its excess is 0.7 dex. In contrast, the two galaxies undergoing a metal rich inflow show a suppression of their SFR.
Both cosmic web stripping candidates are below the control sample fit. Overall, all the trends described above are very mild, suggesting that none of the considered physical mechanisms dramatically affects the properties of star-forming galaxies. \subsubsection{Ionized gas metallicity-$\rm M_\ast$\xspace relation} We next inspect the ionized gas metallicity-$\rm M_\ast$\xspace relation, to test whether the different physical mechanisms can impact the metallicity of galaxies measured at R$_e$. The same relation for cluster galaxies undergoing ram pressure stripping was presented in \cite{Franchetto2020}, where it was shown that the chemical properties of these galaxies are similar to those of the control sample galaxies. Figure \ref{fig:mass_met} shows the ionized gas metallicity-$\rm M_\ast$\xspace relation and the distribution of the difference between the metallicity of each galaxy and the value derived from the control sample fit given the galaxy mass \citep[from][]{Franchetto2020}.\footnote{{ We note that for P4946 we cannot obtain a reliable estimate of the ionized gas metallicity. The presence of the central AGN does not allow for a sufficient number of star-forming spaxels to estimate the metallicity at the effective radius, therefore this galaxy is not shown in these plots.}} As in the case of the SFR-$\rm M_\ast$\xspace relation, some trends are worth mentioning, even though the small number of galaxies in the sample prevents us from confirming the results on statistical grounds. \begin{figure*} \centering \includegraphics[scale=0.47]{groups_general_mass_size.png} \includegraphics[scale=0.47]{groups_general_deltasize.png} \caption{Left: R$_e$-$\rm M_\ast$\xspace relation of galaxies in the sample, compared to the undisturbed relation. Colors and symbols refer to galaxies of the different classes, as indicated in the label. The dashed line represents the fit to the control sample from \cite{Franchetto2020}.
Right: Distributions of the differences between the galaxy sizes and their expected values according to the fit to the control sample, given their mass. Colored histograms represent galaxies of the different classes, the black empty histogram represents the control sample. The dashed vertical line is centered at 0. \label{fig:mass_size} } \end{figure*} All ram pressure stripped galaxies but JO134\footnote{{ We recall that JO134 is also affected by a merger. Moreover, since it has no estimate of R$_e$, the expected R$_e$ was computed from the size-mass relation and therefore its metallicity at R$_e$ has an additional uncertainty.}} have a lower metallicity than expected given their mass. Excluding JO134, their mean $\log(O/H)-\log(O/H)_{fit}$ is -0.13 dex ({ $\sim 1\sigma$}). This suggests that for these galaxies the stripping is at a relatively advanced stage, as ram pressure has been able to alter the metallicity of the gas at R$_e$. Most mergers have a positive metallicity excess, while two of them (P96949 and JO20) have a negative one. The metallicity of mergers is clearly influenced by the metallicity of the merging galaxies, thus a large spread is indeed expected. Among the interacting systems, one is slightly metal richer than expected given the fit of the control sample, the other is metal poorer. Cosmic web stripping galaxies are both metal richer than expected, with a mean excess of 0.2 dex ({ 1.6$\sigma$}). Cosmic web enhancement galaxies typically have a negative metallicity excess. The only outlier is again P14672, which has instead a metallicity excess of almost 0.3 dex ({ 2.5$\sigma$}). This is another indication that this galaxy could actually be undergoing some other process. The mean negative excess of the other four cosmic web enhancement galaxies is -0.15 dex ({ 1.25$\sigma$}). Finally, the two candidates that might have experienced gas accretion and for which we have a metallicity measurement show a negative excess.
While this was expected in the case of P11695, whose properties are explained in terms of an inflow of low metallicity gas \citep{Vulcani2018_b} that is indeed affecting the properties of the gas at R$_e$, this is surprising for P40457 (Sec.\ref{sec:accr}), for which we proposed an infall of enriched gas. It could be that such an infall has not yet affected the gas properties at R$_e$. \subsubsection{Size-$\rm M_\ast$\xspace relation} Figure \ref{fig:mass_size} shows the size-$\rm M_\ast$\xspace relation and the distribution of the difference between the size of each galaxy and the value derived from the control sample fit given the galaxy mass \citep[from][]{Franchetto2020}.\footnote{{ We stress again that for JO134, JO190 and JO20 structural parameters could not be determined, given the irregularities of these galaxies, therefore they are excluded from this analysis.}} Similarly to the stripped galaxies in clusters, the group galaxies undergoing ram pressure stripping also have larger sizes (by $\sim 0.1$ dex = 1$\sigma$) than control sample galaxies of similar mass. Galaxies that suffered from starvation have instead systematically smaller sizes (by 0.2 dex = 2$\sigma$), in agreement with many literature results finding that passive/early-type systems are much smaller than their star-forming counterparts \citep[e.g.,][]{shen03}. No other clear trend is visible. To conclude, none of the scaling relations studied above could give us irrefutable evidence for one acting mechanism rather than another, even when used in combination. This means that integrated values for galaxies with optical signs of unilateral debris cannot firmly distinguish among the different processes.
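The offsets used throughout this section (the difference, in dex, between a measured quantity and the value expected from the control-sample fit at the galaxy's stellar mass, also expressed in units of the control-sample scatter) can be sketched as follows. The fit coefficients below are hypothetical placeholders, not the published fits:

```python
# Sketch of the offset computation used for the scaling relations:
# delta = log(measured) - log(expected from the control-sample fit),
# also expressed in units of the control-sample 1-sigma scatter.
# SLOPE, INTERCEPT and SCATTER are illustrative values only.
SLOPE, INTERCEPT, SCATTER = 0.7, 0.2, 0.1   # scatter = 1 sigma in dex

def excess(log_mass, log_quantity):
    """Return (offset in dex, offset in sigma) relative to the relation
    log(quantity) = SLOPE * (log M* - 10) + INTERCEPT."""
    expected = SLOPE * (log_mass - 10.0) + INTERCEPT
    delta = log_quantity - expected
    return delta, delta / SCATTER

# e.g. a 10^10 Msun galaxy measured 0.4 dex above the relation
delta, nsigma = excess(10.0, 0.6)
print(round(delta, 3), round(nsigma, 1))  # -> 0.4 4.0
```

With these conventions, a 0.4 dex SFR excess corresponds to the 4$\sigma$ deviation quoted for P96244 above.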
\section{Discussion and conclusions} \label{sec:summary} In this paper we have analyzed the spatially resolved properties of the GASP galaxies targeted for showing unilateral debris and gas tails in optical imaging \citep{Poggianti2016JELLYFISHREDSHIFT} and located in the field (groups, pairs, filaments, isolated), for a total of 24 objects. In addition, we have also studied three passive galaxies in the same environments, which were included in the GASP sample with the aim of characterizing the final stages of galaxy evolution. Considering also the characteristics of their host environment, we have identified the most probable mechanism occurring for each galaxy. The intent was to test whether a visual inspection of optical images is suitable to select and classify the processes that remove exclusively the gas from galaxies, and to probe the role of spatially resolved data in distinguishing among the different mechanisms. To classify galaxies, we have strictly followed a scheme (Fig.\ref{fig:mech}) that, being very general, could be used to classify all field galaxies in the local universe, provided spatially resolved observations covering the whole extent of the disk, including the galaxy outskirts, and some knowledge of the galaxy environment are available. We note that in previous papers \citep{Vulcani2017c, Vulcani2018_b, Vulcani2018_g, Vulcani2019_fil} we had already characterized 10/27 galaxies, but without putting them in a general context. To perform the classification, we mainly inspected the rgb images, the H$\alpha$\xspace flux maps, the BPT maps, the stellar and gas kinematics, the metallicity maps and age maps (either the luminosity weighted ages or the SFHs in four age bins), and the mass density maps.
Having classified all galaxies (21/27 with a secure classification), we have inspected the position of these galaxies on the scaling relations (SFR-$\rm M_\ast$\xspace, R$_e$-$\rm M_\ast$\xspace, ionized gas metallicity-$\rm M_\ast$\xspace) obtained by studying undisturbed GASP galaxies, to identify possible deviations and test whether these relations could be used to identify the acting physical mechanism. Thanks to the exquisite quality of the MUSE data, which allows us to study the gas and stellar properties on the kpc scale well beyond the stellar disk extent, we have identified the following mechanisms: \begin{itemize} \item {\it Galaxy interactions:} The two interacting galaxies show an enhancement of the SFR, but are located at opposite ends of all the other scaling relations. The differences are most likely due to the different properties of the interactions: in one case the interacting galaxies are quite similar in mass and size and could be at their first approach; in the other case the companion is much smaller and could have been orbiting around the main galaxy for longer. \item {\it Mergers:} Seven galaxies enter this category. Almost all of them have an excess of SFR given their mass, suggesting that new stars were born after the impact. They cover a rather wide range of sizes and metallicities, most likely due to the diversity in the ages and orbits of the mergers and in the properties of the progenitors. \item {\it Ram pressure stripping:} Six galaxies are undergoing ram pressure stripping, one in combination with a merger event. They are clearly at different stages of stripping: one of them has already lost most of its gaseous disk \citep{Vulcani2018_g}, is very red and has a low SFR, while another one is instead at the peak of its stripping and shows an enhancement of the star formation (especially in the core).
Except for the galaxy also undergoing a merger, all these galaxies lie below the metallicity-mass relation, differently from what is found for ram pressure stripped galaxies in clusters by \cite{Franchetto2020}. Similarly to cluster galaxies \citep{Franchetto2020}, the ram pressure stripping galaxies in groups have slightly larger sizes than undisturbed galaxies. \item {\it Cosmic web stripping:} The two galaxies undergoing this process are the lowest mass galaxies in the sample, therefore they are in the most favourable position to show even mild environmental effects. These galaxies are quite blue and have a suppressed SFR with respect to the control sample SFR-mass relation, but a much higher metallicity. \item {\it Cosmic web enhancement:} Five galaxies belong to this class, even though for one of them the classification is quite uncertain. This galaxy also has a very different position on the scaling relations with respect to the other four objects. In general, cosmic web enhancement galaxies show an excess of SFR and lie below the metallicity-mass relation. The outlier is the reddest of this class, and has a suppressed SFR and an enhanced metallicity. \item {\it Gas accretion:} Three galaxies enter this class. One of them is most likely undergoing an inflow of low metallicity gas, as largely discussed in \cite{Vulcani2018_b}, while the others are most likely acquiring enriched gas. The former galaxy shows an enhancement of the SFR with respect to the SFR-mass relation, suggesting that this new gas prompted the formation of new stars. In contrast, the other two galaxies are well below the control sample SFR-mass relation. \item {\it Starvation:} All three passive galaxies can be explained in terms of starvation having suppressed the SFR homogeneously throughout the disk. As expected, these galaxies are redder and have smaller sizes compared to the star-forming population.
They are clear examples of passive disks \citep{bundy10, Bamford2009, Rizzo2018}. \end{itemize} To summarize, mechanisms that affect only the gas component, leaving the stellar properties unaltered (ram pressure stripping, cosmic web enhancement, cosmic web stripping, accretion), constitute 65\% of the mechanisms affecting the star-forming sample. We stress again that this sample was selected using deep optical imaging to characterize hydrodynamical processes, especially ram pressure stripping, and great care was taken to exclude mergers and interactions \citep{Poggianti2016JELLYFISHREDSHIFT}. Our results therefore imply that, at least in the non-cluster sample, in almost 2/5 of the cases a visual inspection of B band images is not able to identify processes affecting both the stellar and gas components, nor to detect companions, highlighting how difficult it is to identify these processes from optical images. Also, three galaxies were observed with MUSE as part of the control sample, but the spatially resolved observations unveiled signs of disturbance. On the other hand, we also remind the reader that passive and star-forming galaxies were selected on the basis of fiber spectroscopy \citep{Calvi2011TheGalaxies, Moretti2014WINGSClusters} and all galaxies maintained their classification after the spatially resolved observations. This result implies that galaxies with highly star-forming cores but passive outskirts, or galaxies with passive cores that are still star forming in the outskirts, are overall quite rare \citep{Tuttle2020}. The analysis of the scaling relations does not give definitive results, although some general trends are observed. In general, the different physical processes considered in this work produce similar signatures on global galaxy properties, and integrated values cannot firmly distinguish among the different processes.
To conclude, spatially resolved data are very powerful to robustly identify the different mechanisms, when used in combination with an accurate definition of the environment. The latter is very important, and in some of the uncertain cases a better characterization of the surrounding environment would help. Secondly, some uncertain cases could also be solved using observations at other wavelengths. This paper indeed focused only on the ionised gas, while the signatures of environmental processes are visible also in other gas phases (molecular gas, atomic gas). For example, the presence of atomic gas tails, streams or bridges between galaxies could help to better study the cases of ram pressure stripping, gas accretion and interactions. As the cold gas discs are typically more extended than the ionised gas discs, it is possible that in some cases stripping of the cold gas occurs while the ionised gas is still undisturbed. Similarly, theoretical predictions on the spatially resolved properties of galaxies in the different environments, aimed at identifying the different mechanisms, would be very important to support our findings. \acknowledgments We thank the anonymous referee whose comments helped us to improve the manuscript. We thank Stephanie Tonnesen for useful discussions and for providing comments on the manuscript. Based on observations collected at the European Organization for Astronomical Research in the Southern Hemisphere under ESO programme 196.B-0578. This project has received funding from the European Research Council (ERC) under the Horizon 2020 research and innovation programme (grant agreement N. 833824). We acknowledge financial contribution from the contract ASI-INAF n.2017-14-H.0, from the grant PRIN MIUR 2017 n.20173ML3WW\_001 (PI Cimatti) and from the INAF main-stream funding programme (PI Vulcani). Y.~J. acknowledges support from CONICYT PAI (Concurso Nacional de Inserci\'on en la Academia 2017) No. 79170132 and FONDECYT Iniciaci\'on 2018 No. 11180558.
J.F. acknowledges financial support from the UNAM-DGAPA-PAPIIT IN111620 grant, M\'exico.
\section{Introduction}\label{sec:int} Let $\Omega$ be an open bounded subset of $\RR^N$, $1\le N\le 3$, with smooth boundary $\Gamma$. We are interested in the following evolution problem \begin{eqnarray} \partial_t v & - & \Delta \mu + (j+\sigma)''(v)\ \mu - \overline{(j+\sigma)''(v)\ \mu} = 0 \,, \quad (t,x)\in (0,\infty)\times\Omega\,, \label{a1} \\ \mu & = & - \Delta v + (j+\sigma)'(v)\,, \quad (t,x)\in (0,\infty)\times\Omega\,, \label{a2} \\ \nabla v\cdot \nu & = & \nabla \mu \cdot \nu = 0\,, \quad (t,x)\in (0,\infty)\times\Gamma\,, \label{a3} \\ v(0) & = & v_0\,, \quad x\in \Omega\,, \label{a4} \end{eqnarray} where the nonlinearity $j+\sigma$ is a smooth double well-potential (for instance, $(j+\sigma)(r)=(r^2-1)^2/4$), $\nu$ is the outward unit normal vector field to $\Gamma$, and $\overline{f}$ denotes the spatial mean value of an integrable function $f$, namely, $$ \overline{f} := \frac{1}{|\Omega|}\ \int_\Omega f(x)\ dx \;\;\mbox{ for }\;\; f\in L^1(\Omega)\,. $$ As one can easily realize from \eqref{a1} and \eqref{a3} by integrating over $\Omega$, the mean value of $v$ is conserved during the evolution, that is, $\overline{v}(t) = \overline{v_0}$. The initial-boundary value problem \eqref{a1}-\eqref{a4} is a phase-field approximation of the Willmore flow (cf., in particular, \cite{DLW04, DLRW05}) which belongs to a class of geometric evolutions of hypersurfaces involving nonlinear functions of the principal curvatures of the hypersurface. 
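For the reader's convenience, the conservation of the mean value can be spelled out: integrating \eqref{a1} over $\Omega$, the divergence theorem together with the boundary condition \eqref{a3} eliminates the term involving $\Delta\mu$, while the remaining two terms cancel by the very definition of the mean value, so that
$$
\frac{d}{dt} \int_\Omega v\ dx = \int_\Omega \Delta \mu\ dx - \int_\Omega \left[ (j+\sigma)''(v)\ \mu - \overline{(j+\sigma)''(v)\ \mu} \right]\ dx = \int_\Gamma \nabla \mu \cdot \nu\ ds - 0 = 0\,,
$$
and thus $\overline{v}(t) = \overline{v_0}$ for all $t\ge 0$.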
Recall that the Willmore flow \emph{with volume constraint} for a family of (smooth) hypersurfaces $(\Sigma(t))_{t\ge 0}$ reads \begin{equation}\label{spip} \mathcal{V} = - \Delta_\Sigma H - \frac{H}{2}\ (H^2-4K) + \lambda\,, \end{equation} where $\mathcal{V}$, $H$, $K$, and $\Delta_\Sigma$ denote the normal velocity of $\Sigma$, the sum of its principal curvatures (scalar mean curvature), the product of its principal curvatures (Gau\ss{} curvature), and the Laplace-Beltrami operator on $\Sigma$, respectively, while $\lambda$ is the Lagrange multiplier accounting for the volume conservation $$ \int_\Sigma \mathcal{V}\ ds = 0\,. $$ In addition, the Willmore flow is the $L^2$-gradient flow of the Willmore energy \begin{equation}\label{spirou} \mathcal{E}_W(\Sigma) := \int_\Sigma H^2\ ds\,. \end{equation} Related geometric evolution flows involve more complicated energies such as the Helfrich energy and additional constraints, for instance on the area, and are found in the modelling of biological cell membranes. We refer, e.g., to \cite{BGN08,BMxx,DG91,DLW04,DLRW05,RS06} and the references therein for a more detailed description of these flows and their applications. To our knowledge, the energetic phase-field approximation \eqref{a1}-\eqref{a4} has been introduced in \cite{DLW04} in order to describe the deformation of a vesicle membrane under the elastic bending energy, with prescribed bulk volume and surface area, a related model without constraints being considered in \cite{LM00}. Here, we restrict our analysis to the case of only the volume constraint, leaving the more complex case of two constraints as in \cite{DLW04} to a subsequent investigation. 
A nice feature of \eqref{a1}-\eqref{a4} already reported in \cite{DLW04} is that it inherits the gradient flow structure of the Willmore flow and it is actually a gradient flow in $L^2(\Omega)$ for the functional \begin{equation}\label{fantasio} E(v) := \frac{1}{2} \int_\Omega \left[ - \Delta v(x) + (j+\sigma)'(v(x)) \right]^2\ dx\,, \end{equation} a property which is a cornerstone of the forthcoming analysis. The connection between the minimizers of the Willmore energy \eqref{spirou} and those of a suitably rescaled version of the energy \eqref{fantasio} of the stationary phase-field model has been investigated in \cite{DG91,Mo05,RS06}, and we refer to \cite{DLW04,DLRW05,Wa08} for the analysis of the relationship between the phase-field approach \eqref{a1}-\eqref{a4} and the Willmore flow, with or without volume and surface constraints. However, the well-posedness of the phase-field approximation does not seem to have been considered so far, and the aim of this note is to show the well-posedness of \eqref{a1}-\eqref{a4} under suitable assumptions on the data: more precisely, we assume that there is $C_0>0$ such that \begin{eqnarray} & & j\in\mathcal{C}^3(\RR) \;\mbox{ is a convex function with }\; j(0)=j'(0)=0\,, \label{a6} \\ & & \sigma\in\mathcal{C}^3(\RR) \;\mbox{ with }\; \sigma''\in L^\infty(\RR)\,, \label{a7} \\ & & j+\sigma\ge 0 \;\mbox{ and }\; r\ (j+\sigma)'(r) \ge -C_0\,, \quad r\in\RR\,. \label{a8} \end{eqnarray} Next, owing to the already mentioned expected time invariance of the spatial mean-value of solutions to \eqref{a1}-\eqref{a4}, for $\alpha\in\RR$ we define the functional space \begin{equation} W := \left\{ w\in H^2(\Omega)\ :\ \nabla w\cdot \nu = 0 \;\;\mbox{ on }\;\; \Gamma \right\}\quad \mbox{ and its subset } \quad W_\alpha := \left\{ w\in W\ :\ \overline{w} = \alpha \right\}\,. \label{a5} \end{equation} The paper is devoted to the proof of the following existence and uniqueness result. 
\begin{theorem}\label{th:a1} Given $\alpha\in\RR$ and $v_0\in W_\alpha$, there is a unique solution $v$ to \eqref{a1}-\eqref{a4} satisfying $$ v\in \mathcal{C}([0,T];L^2(\Omega))\cap L^\infty(0,T;W_\alpha) \;\;\mbox{ and }\;\; \mu := -\Delta v + (j+\sigma)'(v) \in L^2(0,T;W) $$ for all $T>0$. In addition, \begin{eqnarray} & & t \longmapsto E\left( v(t) \right) := \frac{1}{2} \left\| \mu(t) \right\|_2^2 \;\;\mbox{ is a non-increasing function }\,, \label{a9} \\ & & \int_0^\infty \left\| -\Delta\mu(t) + (j+\sigma)''(v(t))\ \mu(t) - \overline{(j+\sigma)''(v)\ \mu}(t) \right\|_2^2\ dt \le 2 E(v_0)\,. \label{a10} \end{eqnarray} \end{theorem} Owing to the above-mentioned gradient flow structure, a classical approach to existence is to use an implicit time scheme and solve a minimization problem at each step, see, e.g., \cite{AGS08} or \cite[Chap.~8]{Vi03}. The existence of a minimizer to the corresponding stationary problem is discussed in Section~\ref{sec:ex}, and Subsection~\ref{sec:tef} also collects some properties of the auxiliary variable $\mu$. The time discretization is next implemented in Subsection~\ref{sec:td} and convergence of the time discrete scheme is proved in Subsection~\ref{sec:cv} with the help of monotonicity and compactness properties. Finally, uniqueness is shown in Section~\ref{sec:un} by a standard contraction argument. \section{Existence}\label{sec:ex} \subsection{The energy functional}\label{sec:tef} Following \cite{DLW04}, we define the functional $E$ on $W$ by \begin{equation} E(w) := \frac{1}{2} \int_\Omega \left[ - \Delta w(x) + (j+\sigma)'(w(x)) \right]^2\ dx\,. \label{b0} \end{equation} Observe that $E$ is well defined for any $w\in W$ thanks to the continuous embedding of $H^2(\Omega)$ in $L^\infty(\Omega)$, \eqref{a6}, and \eqref{a7}.
Indeed, for $w\in W$, we have $w\in L^\infty(\Omega)$ and $$ \left| (j+\sigma)'(w) \right| \le \int_0^w j''(r)\ dr + |\sigma'(0)| + \|\sigma''\|_\infty |w| \le |\sigma'(0)| + \left( \sup_{[-\|w\|_\infty,\|w\|_\infty]}{\left\{ j'' \right\}} + \|\sigma''\|_\infty \right)\ |w|\,. $$ Consequently, $(j+\sigma)'(w)\in L^2(\Omega)$ and $E$ is well defined. We gather some properties of $E$ in the next lemma. \begin{lemma}\label{le:b0} Given $\alpha\in\RR$, there is $C_1(\alpha)>0$ depending only on $\Omega$, $\sigma$, $C_0$ in \eqref{a8}, and $\alpha$ such that \begin{equation}\label{b1} \|w\|_{H^2} + \|j'(w)\|_2 \le C_1(\alpha)\ \left( 1 + \sqrt{E(w)} \right)\, \quad \mbox{for all} \quad w\in W_\alpha\,. \end{equation} \end{lemma} \begin{proof} Consider $w\in W_\alpha$ and put $\mu:= -\Delta w + (j+\sigma)'(w)$. Then $\mu\in L^2(\Omega)$ with $\|\mu\|_2^2=2E(w)$, and we infer from \eqref{a8} that $$ \int_\Omega w\ \mu\ dx = \|\nabla w\|_2^2 + \int_\Omega w\ (j+\sigma)'(w)\ dx \ge \|\nabla w\|_2^2 -C_0\ |\Omega|\,. $$ Combining the above inequality with the Poincar\'e-Wirtinger inequality \begin{equation}\label{b2} \| w - \overline{w} \|_2 \le C_2\ \|\nabla w\|_2\,, \end{equation} we obtain \begin{eqnarray*} \|\nabla w\|_2^2 & \le & C_0 |\Omega| + \int_\Omega w\ \mu\ dx \le C_0 |\Omega| + \| w\|_2 \|\mu\|_2 \\ & \le & C_0 |\Omega| + \sqrt{2 E(w)}\ \left( \alpha |\Omega|^{1/2} + \|w-\alpha\|_2 \right) \le C_0 |\Omega| + \sqrt{2 E(w)}\ \left( \alpha |\Omega|^{1/2} + C_2\ \|\nabla w\|_2 \right) \\ & \le & C_0 |\Omega| + \alpha |\Omega|^{1/2}\ \sqrt{2 E(w)} + \frac{1}{2}\ \|\nabla w\|_2^2 + C_2^2\ E(w)\,, \end{eqnarray*} hence $\|\nabla w\|_2^2 \le C(\alpha)\ (1+E(w))$. Using again \eqref{b2}, we conclude that \begin{equation}\label{b3} \| w \|_{H^1} \le C(\alpha)\ \left( 1 + \sqrt{E(w)} \right)\,. 
\end{equation} Now, $w\in W$ solves $-\Delta w + j'(w) = \mu - \sigma'(w)$ and, owing to the monotonicity of $j'$, a classical monotonicity argument shows that $$ \|\Delta w\|_2 + \|j'(w)\|_2 \le \| \mu - \sigma'(w)\|_2\,. $$ It then follows from \eqref{a7} that $$ \|\Delta w\|_2 + \|j'(w)\|_2 \le \| \mu \|_2 + |\sigma'(0)| |\Omega|^{1/2} + \|\sigma''\|_\infty\ \|w\|_2\,, $$ which, together with \eqref{b3} and $\| \mu \|_2 = \sqrt{2E(w)}$, gives \eqref{b1}. \end{proof} \medskip Next, given $\tau>0$ and $f\in L^2(\Omega)$, we define the functional $F_{\tau,f}$ on $W$ by \begin{equation}\label{b4} F_{\tau,f}(w) := \frac{1}{2}\ \| w-f\|_2^2 + \tau\ E(w)\,, \quad w\in W\,. \end{equation} \begin{lemma}\label{le:b1} Given $\alpha\in\RR$, the functional $F_{\tau,f}$ has (at least) a minimizer in $W_\alpha$. \end{lemma} \begin{proof} We set $F:=F_{\tau,f}$ to simplify notations. Since $E$ is nonnegative, $F$ is obviously nonnegative and there is a minimizing sequence $(w_n)_{n\ge 1}$ in $W_\alpha$ such that \begin{equation}\label{b5} m_\alpha := \inf_{w\in W_\alpha}{\{ F(w) \}} \le F(w_n) \le m_\alpha + \frac{1}{n}\,, \quad n\ge 1\,. \end{equation} Since $F(w_n)\ge \tau\ E(w_n)$, we readily infer from \eqref{b5} that $(E(w_n))_{n\ge 1}$ is bounded, a property which in turn implies that $(w_n)_{n\ge 1}$ is bounded in $H^2(\Omega)$ by Lemma~\ref{le:b0}. Owing to the compactness of the embedding of $H^2(\Omega)$ in $\mathcal{C}(\bar{\Omega})$, we deduce that there are $w\in H^2(\Omega)$ and a subsequence of $(w_n)_{n\ge 1}$ (not relabeled) such that \begin{equation}\label{b6} w_n \longrightarrow w \;\mbox{ in }\; \mathcal{C}(\bar{\Omega}) \;\mbox{ and }\; w_n \rightharpoonup w \;\mbox{ in }\; H^2(\Omega)\,. \end{equation} Clearly, the first convergence implies that $\left( (j+\sigma)'(w_n)\right)_{n\ge 1}$ converges towards $(j+\sigma)'(w)$ in $L^2(\Omega)$ and therefore $$ F(w) \le \liminf_{n\to\infty} F(w_n) \le m_\alpha\,. 
$$ As $w$ obviously belongs to $W_\alpha$ by \eqref{b6}, we also have $F(w)\ge m_\alpha$ and $w$ is a minimizer of $F$ in $W_\alpha$. \end{proof} \medskip We next derive an energy inequality and the Euler-Lagrange equation satisfied by minimizers of $F_{\tau,f}$ in $W_\alpha$ when $\overline{f}=\alpha$. \begin{lemma}\label{le:b2} Consider $\alpha\in\RR$ and a minimizer $w$ of $F_{\tau,f}$ in $W_\alpha$. Assume further that $\overline{f}=\alpha$. Then $\mu:= -\Delta w + (j+\sigma)'(w)$ belongs to $W$, \begin{equation}\label{b7} \int_\Omega \left[ \frac{w-f}{\tau} - \Delta\mu + (j+\sigma)''(w)\ \mu - \overline{(j+\sigma)''(w)\ \mu} \right] \psi\ dx = 0 \quad \mbox{for all} \quad \psi\in W\,, \end{equation} and \begin{equation}\label{b8} \left\| -\Delta \mu + (j+\sigma)''(w)\ \mu - \overline{(j+\sigma)''(w)\ \mu} \right\|_2 \le \frac{\|w - f\|_2}{\tau}\,. \end{equation} \end{lemma} \begin{proof} We set $$ \mu := -\Delta w + (j+\sigma)'(w)\,. $$ Consider $\varepsilon\in (0,1)$ and $\varphi\in W_0$. As $w+\varepsilon\varphi$ belongs to $W_\alpha$, we have $F_{\tau,f}(w)\le F_{\tau,f}(w+\varepsilon\varphi)$ from which we deduce by classical arguments (after passing to the limit as $\varepsilon\to 0$) that $$ \frac{1}{\tau}\ \int_\Omega (w-f)\ \varphi\ dx + \int_\Omega \mu\ \left( -\Delta\varphi + (j+\sigma)''(w)\ \varphi \right)\ dx \ge 0\,. $$ Since the above inequality is valid for $\varphi$ and $-\varphi$, we actually have the identity \begin{equation}\label{b9} \frac{1}{\tau}\ \int_\Omega (w-f)\ \varphi\ dx + \int_\Omega \mu\ \left( -\Delta\varphi + (j+\sigma)''(w)\ \varphi \right)\ dx = 0 \end{equation} for all $\varphi\in W_0$. 
Now, if $\psi\in W$, the function $\psi-\overline{\psi}$ belongs to $W_0$ and it follows from \eqref{b9} that \begin{equation}\label{b10} \frac{1}{\tau}\ \int_\Omega (w-f)\ \psi\ dx + \int_\Omega \mu\ \left( -\Delta\psi + (j+\sigma)''(w)\ \psi \right)\ dx = \overline{(j+\sigma)''(w)\ \mu}\ \int_\Omega \psi\ dx\,, \end{equation} since $w$ and $f$ have the same mean value $\alpha$. Since $\mu \in L^2(\Omega)$ solves the variational equality \eqref{b10} for all test functions $\psi\in W$, we deduce that $\mu\in W$ and satisfies \eqref{b7}. Next, for $\eta\in (0,1)$, let $\varphi_\eta$ be the unique solution in $W_0$ to $$ \varphi_\eta - \eta\ \Delta\varphi_\eta = -\Delta \mu + (j+\sigma)''(w)\ \mu - \overline{(j+\sigma)''(w)\ \mu} \;\;\mbox{ in }\;\; \Omega\,, $$ the right-hand side of the previous equation being in $L^2(\Omega)$ since $\mu\in W$ and $w\in H^2(\Omega)$ is bounded. Also, the right-hand side of the previous equation has a zero mean-value so that $\varphi_\eta\in W_0$. Taking $\psi=\varphi_\eta$ in \eqref{b7}, we realize that $$ \int_\Omega \left[ \frac{w-f}{\tau} + \varphi_\eta - \eta\ \Delta\varphi_\eta \right] \varphi_\eta\ dx = 0\,, $$ from which we deduce that $$ \|\varphi_\eta\|_2^2 \le \|\varphi_\eta\|_2^2 + \eta\ \|\nabla\varphi_\eta\|_2^2 = - \int_\Omega \frac{w-f}{\tau}\ \varphi_\eta\ dx \le \frac{\|w-f\|_2}{\tau}\ \|\varphi_\eta\|_2 \,, $$ whence $$ \|\varphi_\eta\|_2 \le \frac{\|w-f\|_2}{\tau}\,. $$ Since $(\varphi_\eta)_\eta$ converges toward $(-\Delta \mu + (j+\sigma)''(w)\ \mu - \overline{(j+\sigma)''(w)\ \mu} )$ in $L^2(\Omega)$ as $\eta\to 0$, \eqref{b8} follows from the above inequality. \end{proof} \subsection{Time discretization}\label{sec:td} Let $\alpha\in\RR$ and take an initial condition $v_0\in W_\alpha$. 
We consider a positive time step $\tau\in (0,1)$ and define a sequence $(v_n^\tau)_{n\ge 1}$ inductively as follows: \begin{eqnarray} & & v_0^\tau := v_0\,, \label{c1} \\ & & v_{n+1}^\tau \;\mbox{ is a minimizer of }\; F_{\tau,v_n^\tau} \;\mbox{ in }\; W_\alpha\,, \quad n\ge 0\,, \label{c2} \end{eqnarray} the functional $F_{\tau,v_n^\tau}$ being defined in \eqref{b4}. Setting \begin{equation}\label{c3b} \mu_n^\tau := - \Delta v_n^\tau + (j+\sigma)'(v_n^\tau) \;\;\mbox{ and }\;\; M_n^\tau:= \overline{(j+\sigma)''(v_n^\tau)\ \mu_n^\tau}\,, \end{equation} we define three piecewise constant time-dependent functions $v^\tau$, $\mu^\tau$, and $M^\tau$ by \begin{equation}\label{c3} \left( v^\tau(t) , \mu^\tau(t) , M^\tau(t) \right) := \left( v_n^\tau , \mu_n^\tau , M_n^\tau \right) \;\;\mbox{ for }\;\; t\in [n\tau,(n+1)\tau) \;\;\mbox{ and }\;\; n\ge 0\,. \end{equation} \begin{lemma}\label{le:c1} For $\tau\in (0,1)$, $t_1\ge 0$, and $t_2>t_1$, we have \begin{eqnarray} & & E\left( v^\tau(t_2) \right) \le E\left( v^\tau(t_1) \right) \le E(v_0)\,, \label{c4} \\ & & \|v^\tau(t_2) -v^\tau(t_1)\|_2^2 \le 2E(v_0)\ (\tau+t_2-t_1)\,, \label{c5} \\ & & \int_\tau^\infty \left\| -\Delta\mu^\tau(t) + (j+\sigma)''(v^\tau(t))\ \mu^\tau(t) - M^\tau(t) \right\|_2^2\ dt \le 2 E(v_0)\,. \label{c6} \end{eqnarray} \end{lemma} \begin{proof} Consider $n\ge 0$. Since $v_n^\tau\in W_\alpha$, we infer from \eqref{c2} that $F_{\tau,v_n^\tau}(v_{n+1}^\tau)\le F_{\tau,v_n^\tau}(v_n^\tau)$, that is, \begin{equation} \frac{1}{2\tau}\ \left\| v_{n+1}^\tau - v_n^\tau \right\|_2^2 + E\left( v_{n+1}^\tau \right) \le E\left( v_n^\tau \right)\,. \label{c7} \end{equation} Let $t_2> t_1\ge 0$ and put $n_i:=[t_i/\tau]$ (the integer part of $t_i/\tau$), $i=1,2$. On the one hand, $n_2\ge n_1$ and it readily follows from \eqref{c7} by induction that $$ E\left( v^\tau(t_2) \right) = E\left( v_{n_2}^\tau \right) \ \le\ E\left( v_{n_1}^\tau \right) = E\left( v^\tau(t_1) \right)\,, $$ whence \eqref{c4}. 
In particular, we have \begin{equation} \frac{1}{2} \, \sup_{t\ge 0} \| \mu^\tau(t) \|_2^2 = \sup_{t\ge 0} E\left( v^\tau(t) \right) = \sup_{n\ge 0} E\left( v_n^\tau \right) \le E\left( v_0^\tau \right) = E(v_0)\,.\label{c8} \end{equation} On the other hand, summing \eqref{c7} over $n\in\NN$ gives \begin{equation} \frac{1}{2\tau}\ \sum_{n=0}^\infty \left\| v_{n+1}^\tau - v_n^\tau \right\|_2^2 \le E\left( v_0^\tau \right) = E(v_0)\,,\label{c9} \end{equation} from which we deduce that \begin{eqnarray*} \left\|v^\tau(t_2) - v^\tau(t_1) \right\|_2 & = & \left\|v_{n_2}^\tau - v_{n_1}^\tau \right\|_2 \le \sum_{n=n_1}^{n_2-1} \left\|v_{n+1}^\tau - v_n^\tau \right\|_2 \\ & \le & \left( n_2 - n_1 \right)^{1/2}\ \left( \sum_{n=n_1}^{n_2-1} \left\|v_{n+1}^\tau - v_n^\tau \right\|_2^2 \right)^{1/2} \\ & \le & \left( 1 + \frac{t_2-t_1}{\tau} \right)^{1/2}\ \left( 2\tau E(v_0) \right)^{1/2} \\ & \le & \sqrt{2E(v_0)}\ \left( \tau + (t_2-t_1) \right)^{1/2}\,, \end{eqnarray*} and thus \eqref{c5}. Finally, for $n\ge 0$, we have $\overline{v_{n+1}^\tau} = \overline{v_n^\tau}=\alpha$ by \eqref{c2} and we infer from \eqref{b8} that $$ \left\| -\Delta \mu_{n+1}^\tau + (j+\sigma)''(v_{n+1}^\tau)\ \mu_{n+1}^\tau - M_{n+1}^\tau \right\|_2 \le \frac{\|v_{n+1}^\tau - v_n^\tau\|_2}{\tau}\,. $$ Combining \eqref{c9} and the previous inequality gives \begin{eqnarray*} & & \int_\tau^\infty \left\| -\Delta \mu^\tau(t) + (j+\sigma)''(v^\tau(t))\ \mu^\tau(t) - M^\tau(t) \right\|_2^2\ dt \\ & \le & \sum_{n=0}^\infty \int_{(n+1)\tau}^{(n+2)\tau} \left\| -\Delta \mu_{n+1}^\tau + (j+\sigma)''(v_{n+1}^\tau)\ \mu_{n+1}^\tau - M_{n+1}^\tau \right\|_2^2\ dt \\ & \le & \sum_{n=0}^\infty \frac{\|v_{n+1}^\tau - v_n^\tau\|_2^2}{\tau} \le 2 E(v_0) \,, \end{eqnarray*} and the proof is complete. \end{proof} Useful bounds on $(v^\tau)_\tau$ and $(\mu^\tau)_\tau$ follow from Lemma~\ref{le:c1}.
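Before deriving these bounds, it may be worth illustrating the scheme \eqref{c1}-\eqref{c2} numerically. The sketch below is not taken from the paper and relies on several simplifying assumptions: a one-dimensional periodic grid instead of the Neumann problem on $\Omega$, the classical double-well splitting $j(r)=r^4/4$, $\sigma(r)=-r^2/2+1/4$ (for which \eqref{a6}-\eqref{a8} hold with $C_0=1/4$), and an approximate minimization of $F_{\tau,v_n^\tau}$ by projected gradient descent over the mean-preserving affine space.

```python
import numpy as np

# Hypothetical 1-D discretization: N cells on the periodic interval [0, 1)
# (the paper works with Neumann conditions; periodicity keeps the discrete
# Laplacian symmetric and the sketch short).
N, h, tau = 32, 1.0 / 32, 1e-3

def lap(v):
    """Symmetric discrete Laplacian on the periodic grid."""
    return (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / h**2

# Double-well splitting: j(r) = r^4/4 is convex with j(0) = j'(0) = 0, and
# sigma(r) = -r^2/2 + 1/4 has bounded second derivative, as in (a6)-(a8).
Wp  = lambda v: v**3 - v            # (j + sigma)'
Wpp = lambda v: 3.0 * v**2 - 1.0    # (j + sigma)''

def E(v):
    """Discrete analogue of the energy (b0)."""
    mu = -lap(v) + Wp(v)
    return 0.5 * h * np.sum(mu**2)

def F(w, f):
    """Discrete analogue of the functional F_{tau,f} in (b4)."""
    return 0.5 * h * np.sum((w - f)**2) + tau * E(w)

def step(f, iters=500, dt=4e-5):
    """One minimizing movement step (c2): descend F(., f) while keeping
    the spatial mean fixed (the constraint defining W_alpha)."""
    w = f.copy()
    for _ in range(iters):
        mu = -lap(w) + Wp(w)
        g = (w - f) + tau * (-lap(mu) + Wpp(w) * mu)  # gradient of F(., f)/h
        g -= g.mean()             # project onto zero-mean directions
        w -= dt * g
    return w
```

By construction, one step preserves the mean value and does not increase $F_{\tau,f}$, hence it does not increase $E$ either, which is the discrete counterpart of the energy inequality \eqref{c7}.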
\begin{corollary}\label{co:c2} For all $T>0$, there is $C_3(T)>0$ depending only on $\alpha$, $v_0$, $j$, $\sigma$, and $T$ such that, for $\tau\in (0,1)\cap (0,T)$, \begin{eqnarray} \sup_{t\in [0,T]} \left\| v^\tau(t) \right\|_{H^2} & \le & C_3(T)\,, \label{c10} \\ \int_\tau^T \left( \left\| \mu^\tau(t) \right\|_{H^1}^4 + \left\| \mu^\tau(t) \right\|_{H^2}^2 \right)\ dt & \le & C_3(T)\,. \label{c11} \end{eqnarray} \end{corollary} \begin{proof} The boundedness \eqref{c10} of $(v^\tau)_\tau$ is a straightforward consequence of \eqref{b1} and \eqref{c8}. Next, owing to the continuous embedding of $H^2(\Omega)$ in $L^\infty(\Omega)$ and \eqref{c10}, the family $((j+\sigma)''(v^\tau))_\tau$ is bounded in $L^\infty((0,T)\times\Omega)$ which, together with \eqref{c8}, implies that \begin{equation} ((j+\sigma)''(v^\tau) \mu^\tau)_\tau \;\;\mbox{ is bounded in }\;\; L^\infty(0,T;L^2(\Omega))\,.\label{c12} \end{equation} Setting $f^\tau := -\Delta \mu^\tau + (j+\sigma)''(v^\tau)\ \mu^\tau - M^\tau$, it follows from \eqref{c6} and \eqref{c12} that \begin{eqnarray*} \left( \int_\tau^T \|\Delta \mu^\tau(t)\|_2^2\ dt \right)^{1/2} & = & \left( \int_\tau^T \left\| (j+\sigma)''(v^\tau(t))\ \mu^\tau(t) - M^\tau(t) - f^\tau(t) \right\|_2^2\ dt \right)^{1/2} \\ & \le & 2\ \left( \int_\tau^T \left\| (j+\sigma)''(v^\tau(t))\ \mu^\tau(t) \right\|_2^2\ dt \right)^{1/2} + \left( \int_\tau^T \left\| f^\tau(t) \right\|_2^2\ dt \right)^{1/2} \\ & \le & C(T)\,, \end{eqnarray*} which gives the boundedness of $(\mu^\tau)_\tau$ in $L^2(\tau,T;H^2(\Omega))$ with the help of \eqref{c8}. Finally, $\mu^\tau$ belongs to $W$ and solves $$ -\Delta\mu^\tau + j''(v^\tau)\ \mu^\tau = f^\tau - \sigma''(v^\tau)\ \mu^\tau +M^\tau \;\;\mbox{ in }\;\; \Omega\,.
$$ Taking the scalar product in $L^2(\Omega)$ of the previous equation with $\mu^\tau$ and using the nonnegativity of $j''$ due to the convexity \eqref{a6} of $j$ and the boundedness \eqref{a7} of $\sigma''$, we obtain \begin{eqnarray*} \|\nabla\mu^\tau\|_2^2 & \le & \|\nabla\mu^\tau\|_2^2 + \int_\Omega j''(v^\tau)\ (\mu^\tau)^2\ dx \\ & \le & \|f^\tau\|_2\ \|\mu^\tau\|_2 + \|\sigma''\|_\infty\ \|\mu^\tau\|_2^2 + |M^\tau|\ \|\mu^\tau\|_2 \,. \end{eqnarray*} We next deduce from \eqref{c8} and \eqref{c12} that $$ \|\nabla\mu^\tau\|_2^2 \le C(T)\ \left( 1 + \|f^\tau\|_2 \right)\,, $$ and the boundedness of the right-hand side of the above inequality in $L^2(\tau,T)$ follows at once from \eqref{c6}. \end{proof} \subsection{Convergence}\label{sec:cv} Owing to \eqref{c5}, \eqref{c10}, and the compactness of the embedding of $H^2(\Omega)$ in $\mathcal{C}(\bar{\Omega})$, a refined version of the Ascoli-Arzel\`a theorem (in the spirit of \cite[Prop.~3.3.1]{AGS08}) ensures that $(v^\tau)_\tau$ is relatively compact in $\mathcal{C}([0,T]\times\bar{\Omega})$ for all $T>0$. Consequently, there are three functions $v$, $\mu$, and $M$ and a subsequence $\left( v^{\tau_k} \right)_{k\ge 1}$ of $(v^\tau)_\tau$ such that, for all $T>0$, $$ v\in \mathcal{C}([0,T]\times\bar{\Omega})\cap L^\infty(0,T;H^2(\Omega))\,, \quad \mu\in L^\infty(0,T;L^2(\Omega))\,, \quad M\in L^\infty(0,T)\,, $$ and \begin{eqnarray} v^{\tau_k} & \longrightarrow & v \;\;\mbox{ in }\;\; \mathcal{C}([0,T]\times\bar{\Omega})\,, \label{c14} \\ v^{\tau_k} & \stackrel{*}{\rightharpoonup} & v \;\;\mbox{ in }\;\; L^\infty(0,T;H^2(\Omega))\,, \label{c15} \\ \mu^{\tau_k} & \stackrel{*}{\rightharpoonup} & \mu \;\;\mbox{ in }\;\; L^\infty(0,T;L^2(\Omega))\,, \label{c16} \\ M^{\tau_k} & \stackrel{*}{\rightharpoonup} & M \;\;\mbox{ in }\;\; L^\infty(0,T)\,. 
\label{c17} \end{eqnarray} Thanks to the smoothness of $j$ and $\sigma$ and the convergences \eqref{c14}--\eqref{c17}, it is straightforward to pass to the limit in \eqref{c3b} and conclude that \begin{equation} \mu = - \Delta v + (j+\sigma)'(v) \;\;\mbox{ and }\;\; M = \overline{(j+\sigma)''(v)\ \mu}\,. \label{c18} \end{equation} In addition, \eqref{c11}, \eqref{c16}, and a lower semicontinuity argument guarantee that \begin{equation} \mu\in L^4(0,T;H^1(\Omega)) \cap L^2(0,T;H^2(\Omega)) \quad \mbox{for all} \quad T>0\,. \label{c19} \end{equation} It remains to derive the equation solved by $v$. Let $\psi\in W$, $t>0$, $n=[t/\tau]$, and $m\in\{0,\ldots,n-1\}$. Using the definition of $v_{m+1}^\tau$ and Lemma~\ref{le:b2}, we are led to $$ \int_\Omega \left[ \frac{v_{m+1}^\tau-v_m^\tau}{\tau} - \Delta\mu_{m+1}^\tau + (j+\sigma)''(v_{m+1}^\tau)\ \mu_{m+1}^\tau - M_{m+1}^\tau \right] \psi\ dx = 0\,, $$ which also reads $$ \int_\Omega \left( v_{m+1}^\tau-v_m^\tau \right)\ \psi\ dx = \int_{(m+1)\tau}^{(m+2)\tau} \int_\Omega \left[ \Delta\mu^\tau(s) - (j+\sigma)''(v^\tau(s))\ \mu^\tau(s) + M^\tau(s) \right] \psi\ dxds\,. $$ Summing the above identities over $m\in \{0,\ldots,n-1\}$, we obtain $$ \int_\Omega \left( v_n^\tau-v_0^\tau \right)\ \psi\ dx = \int_\tau^{(n+1)\tau} \int_\Omega \left[ \Delta\mu^\tau(s) - (j+\sigma)''(v^\tau(s))\ \mu^\tau(s) + M^\tau(s) \right] \psi\ dxds\,, $$ that is, $$ \int_\Omega \left( v^\tau(t)-v_0 \right)\ \psi\ dx = \int_\tau^{(n+1)\tau} \int_\Omega \left[ \Delta\mu^\tau(s) - (j+\sigma)''(v^\tau(s))\ \mu^\tau(s) + M^\tau(s) \right] \psi\ dxds\,.
$$ Noticing that $t\le (n+1)\tau \le t+\tau$, we may take $\tau=\tau_k$ in the above identity and pass to the limit as $k\to\infty$ with the help of \eqref{c14}--\eqref{c17} to obtain \begin{equation} \int_\Omega \left( v(t)-v_0 \right)\ \psi\ dx = \int_0^t \int_\Omega \left[ \Delta\mu(s) - (j+\sigma)''(v(s))\ \mu(s) + M(s) \right] \psi\ dxds\,.\label{c20} \end{equation} Collecting \eqref{c18}-\eqref{c20} completes the proof of the existence part of Theorem~\ref{th:a1}. The properties \eqref{a9} and \eqref{a10} next follow from \eqref{c4}, \eqref{c6}, and the convergences \eqref{c14}-\eqref{c17}. \section{Uniqueness}\label{sec:un} Let $v_1$ and $v_2$ be two solutions to \eqref{a1}-\eqref{a4} with $\mu_i:=-\Delta v_i + (j+\sigma)'(v_i)$ and $M_i := \overline{(j+\sigma)''(v_i) \mu_i}$, $i=1,2$. Fix $T>0$. Since $H^2(\Omega)$ is continuously embedded in $L^\infty(\Omega)$, the regularity properties of $v_1$, $v_2$, $\mu_1$, and $\mu_2$ listed in Theorem~\ref{th:a1} ensure that there is $K>0$ depending on $T$ such that \begin{equation} \sup_{t\in [0,T]} \left( \|v_1(t)\|_\infty + \|v_2(t)\|_\infty + \|\mu_1(t)\|_2 + \|\mu_2(t)\|_2 \right) + \int_0^T \left( \|\mu_1(s)\|_\infty^2 + \|\mu_2(s)\|_\infty^2 \right)\ ds \le K\,.
\label{d1} \end{equation} It then follows from \eqref{d1} and the smoothness of $j$ and $\sigma$ that \begin{eqnarray} & & \hskip-1cm \left| (j+\sigma)''(v_1)\ \mu_1 - (j+\sigma)''(v_2)\ \mu_2 \right| \label{d2}\\ & \le & \left| (j+\sigma)''(v_1) - (j+\sigma)''(v_2) \right|\ \left| \mu_1 \right| + \left| (j+\sigma)''(v_2) \right|\ \left| \mu_1 - \mu_2 \right| \nonumber \\ & \le & \left\| (j+\sigma)''' \right\|_{L^\infty(-K,K)}\ \left| v_1 - v_2 \right|\ \left| \mu_1 \right| + \left\| (j+\sigma)'' \right\|_{L^\infty(-K,K)}\ \left| \mu_1 - \mu_2 \right|\,, \nonumber \\ & \le & C\ \left( \left| \mu_1 \right|\ \left| v_1 - v_2 \right| + \left| \mu_1 - \mu_2 \right| \right)\,,\nonumber \end{eqnarray} from which we deduce that \begin{eqnarray} |M_1-M_2| & \le & \frac{1}{|\Omega|}\ \int_\Omega \left| (j+\sigma)''(v_1)\ \mu_1 - (j+\sigma)''(v_2)\ \mu_2 \right|\ dx \label{d3} \\ & \le & C\ \int_\Omega \left( \left| \mu_1 \right|\ \left| v_1 - v_2 \right| + \left| \mu_1 - \mu_2 \right| \right)\ dx \nonumber \\ & \le & C\ \left( \|\mu_1\|_2\ \|v_1-v_2\|_2 + \|\mu_1-\mu_2\|_2 \right)\,. \nonumber \end{eqnarray} Since $v_1-v_2$ solves $$ \partial_t (v_1-v_2) - \Delta (\mu_1-\mu_2) = M_1-M_2 - (j+\sigma)''(v_1)\ \mu_1 + (j+\sigma)''(v_2)\ \mu_2 $$ and $v_1-v_2$ and $\mu_1-\mu_2$ both belong to $W$, we have \begin{eqnarray*} \frac{1}{2}\ \frac{d}{dt} \|v_1-v_2\|_2^2 & = & \int_\Omega (\mu_1-\mu_2)\ \Delta (v_1-v_2)\ dx + \int_\Omega (M_1-M_2)\ (v_1-v_2)\ dx \\ & - & \int_\Omega \left[ (j+\sigma)''(v_1)\ \mu_1 - (j+\sigma)''(v_2)\ \mu_2 \right]\ (v_1-v_2)\ dx \,. 
\end{eqnarray*} We deduce from \eqref{a2}, \eqref{d1}, \eqref{d2}, and \eqref{d3} that \begin{eqnarray*} \frac{1}{2}\ \frac{d}{dt} \|v_1-v_2\|_2^2 & = & \int_\Omega (\mu_1-\mu_2)\ \left[ (j+\sigma)'(v_1)-(j+\sigma)'(v_2) - (\mu_1-\mu_2) \right]\ dx \\ & + & \int_\Omega (M_1-M_2)\ (v_1-v_2)\ dx \\ & - & \int_\Omega \left[ (j+\sigma)''(v_1)\ \mu_1 - (j+\sigma)''(v_2)\ \mu_2 \right]\ (v_1-v_2)\ dx \\ & \le & \|(j+\sigma)''\|_{L^\infty(-K,K)}\ \|\mu_1-\mu_2\|_2\ \|v_1-v_2\|_2 - \|\mu_1-\mu_2\|_2^2 \\ & + & C\ \left( \|\mu_1\|_2\ \|v_1-v_2\|_2 + \|\mu_1-\mu_2\|_2 \right)\ \|v_1-v_2\|_2 \\ & + & C\ \int_\Omega \left( \left| \mu_1 \right|\ \left| v_1 - v_2 \right| + \left| \mu_1 - \mu_2 \right| \right)\ |v_1-v_2|\ dx \\ & \le & C\ \|\mu_1-\mu_2\|_2\ \|v_1-v_2\|_2 - \|\mu_1-\mu_2\|_2^2 \\ & + & C\ \left( 1+ \|\mu_1\|_\infty \right)\ \|v_1-v_2\|_2^2 \\ & \le & C\ \left( 1+ \|\mu_1\|_\infty \right)\ \|v_1-v_2\|_2^2\,. \end{eqnarray*} Therefore, recalling \eqref{d1}, $$ \|(v_1-v_2)(t)\|_2^2 \le \|(v_1-v_2)(0)\|_2^2\ \exp{\left( C\ \int_0^t \left( 1+\|\mu_1(s)\|_\infty \right)\ ds \right)} \le C\ \|(v_1-v_2)(0)\|_2^2 $$ for $t\in [0,T]$, and the uniqueness assertion follows. \section*{Acknowledgments} This work was initiated during a visit of the first author at the Institut de Math\'ematiques de Toulouse, Universit\'e Paul Sabatier, whose financial support and kind hospitality are gratefully acknowledged.
\section{Introduction} The study of strongly interacting matter under extreme conditions of temperature and/or density is one of the most fascinating areas of contemporary subatomic physics. This program aims to explore the many facets of the {\it bulk} behavior of Quantum ChromoDynamics (QCD): The theory of the strong interaction. It seeks to map out the different phases allowed by QCD and the nature of the possible phase transitions connecting them; in short, to elucidate the QCD phase diagram. In this context, the existence of an exotic phase of QCD, a quark-gluon plasma, has been a prediction of lattice QCD whose details have been continuously refined over the years \cite{Bazavov:2009zn}. On the experimental front, several observables have been put forward as signatures of the quark-gluon plasma. These include electromagnetic radiation \cite{Gale:2009gc}, the quenching of energetic QCD jets \cite{Gyulassy:2003mc}, and the dissolution (with increasing collision centrality and energy) of heavy quark bound states according to the seminal suggestion in Ref. \cite{Matsui:1986dk}. The Relativistic Heavy Ion Collider (RHIC), now at the end of its first decade of operation, has uncovered an intriguing set of phenomena suggestive of new physics. One of these is the observation of strong hydrodynamic flow effects, highly suggestive of a ``strongly coupled'' quark-gluon plasma \cite{Gyulassy:2004zy}. The fate of quarkonium is being analyzed at RHIC, as it was at the SPS before. A surprising fact to emerge from these studies is that the suppression of the $c \bar{c}$ ground state - the $J/\psi$ - at RHIC is entirely comparable to that at the SPS, in spite of the much larger energy densities reached at RHIC. This triggered many analyses with scenarios where the enhanced dissociation at RHIC was roughly compensated by extra formation owing, for example, to quark-antiquark coalescence near hadronization \cite{coal}.
Related investigations are concerned with the fate of the quarkonium spectral density above $T_c$, the deconfinement temperature \cite{Mocsy:2009ca}. It is fair to say that the study of quarkonium embedded in a finite-temperature strongly interacting medium is a flourishing industry: The modifications of its spectral profile can be related to in-medium effects. A related topic of investigation on the lattice consists of calculating the quark-antiquark potential as a function of temperature \cite{lat_T}. These calculations show a Coulomb potential at zero temperature, with an added linear part that slowly disappears as $T$ is raised, leading eventually to the unbinding of quarkonia bound states. Our goal in this work is to approach this softening of the potential from a different point of view. In parallel with the studies described in the previous paragraph, the physics of hot and dense strongly interacting matter, and thus that of the quark-gluon plasma, has recently benefited from the use of a new set of techniques germane to string theory. The gauge-string duality can indeed provide a sophisticated toolbox with which to treat strongly-coupled, strongly interacting systems \cite{Mal-1,Witt-1,son-1}. Our purpose here is to bring the more traditional investigations in QCD closer to those pursued in string theory. In gauge-string duality, a finite-temperature medium is dual to a black hole. Even though in a large number of applications the associated field theory is conformal, we use a framework which is ``QCD-like''. More specifically, we construct the gravity dual of a thermal field theory which becomes almost conformal in the UV but has a logarithmic running of the coupling in the IR, with matter in the fundamental representation.
Without being explicitly QCD, this string theory will provide some of the features associated with large $N$ Quantum Chromodynamics, and its study may shed more light on the behavior of strongly coupled, strongly interacting matter at finite temperature. Our paper is organized as follows: In the next section we define the geometry in which our solution will exist. The description of the full geometry is subtle, so we will divide the geometry into three regions. The far IR geometry will be described in sec. 2.1, and the far UV geometry will be described in sec. 2.3. These two geometries are connected by an interpolating geometry that we will describe in detail in sec. 2.2. Once we have the full geometry, we compute the heavy quark potential from the Nambu-Goto action first for zero, and then for finite temperature in sec. 3. In this section we will also provide a generic argument for confinement both for zero and non-zero temperatures. Although most of our analysis in this paper will be done analytically, we will do some detailed numerical analysis to study regimes that are difficult to access analytically. We will show that the numerical analysis is consistent with the expected behavior of the heavy quarkonium states in this theory. Finally, we summarize and conclude. \section{Construction of the Geometry} Following the development in \cite{KS}, it was shown in \cite{FEP} that there exists a geometry whose dual thermal field theory is almost conformal in the UV and has a logarithmic running of the coupling in the IR. The gauge theory studied in \cite{FEP} had a dual weakly coupled gravity description at zero temperature in terms of a warped deformed conifold with seven branes and fluxes. The gauge theory in turn is strongly coupled with a {\it smooth} RG flow but no well-defined colors at any given scale.
When the gauge theory is weakly coupled, the description can be presented in terms of cascades of Seiberg dualities that slow down quite a bit when one approaches the far IR because of the presence of fundamental flavors. There is no supergravity dual description available for this case, and the cascade is only captured by the full string theory on the relevant geometry. Once a non-zero temperature is switched on, the strongly coupled gauge theory description is given by a dual supergravity solution on a {\it resolved} warped deformed conifold with seven branes and fluxes \cite{FEP}. The resolution factor is directly related to the temperature because, in the presence of a black hole, a consistent solution of the system can only be achieved by introducing a non-zero resolution factor for the two-cycle. In a Klebanov-Tseytlin type geometry, this resolution factor would in fact remove the naked singularity. Another key ingredient of the solution presented in \cite{FEP} is the far UV picture. In all previously known attempts at this problem, the dual supergravity solution was always afflicted by the presence of Landau poles. Such problems arose because of the behavior of the axio-dilaton, which typically blows up due to its logarithmic behavior. What we pointed out in \cite{FEP} is that the logarithmic behavior, which is so ubiquitous in these constructions, appears because we are studying the theory near any one of the seven branes. In the full F-theory picture the large $r$ behavior is perfectly finite, and in fact also has a good description in terms of the metric. The behavior of the warp factor for large $r$ is given by: \bg\label{larger} h ~= ~ \sum_{\alpha} {L^4_{(\alpha)}\over r^4_{(\alpha)}} \nd with $r_{(\alpha)} = r^{1+{\epsilon_{(\alpha)}\over 4}}$, where $\epsilon_{(\alpha)}$ is a small positive number that is a function of $g_sN_f, g_sM$ and $g_sN$ (see eq. (3.36) of \cite{FEP}.
The sign difference from \cite{FEP} is just a matter of convention). The log $r$ appearing in the warp factor doesn't create much of a problem in the UV: the theory is perfectly holographically renormalisable, and any fluctuations of the background are under control. The fact that we can have well-defined and renormalisable interactions in this background in the presence of fundamental flavors was shown, we believe for the first time, in \cite{FEP} (see \cite{AB} for a renormalisability argument without fundamental flavors). For the present purpose, we want to ask a slightly different question here, namely: can we construct a dual supergravity background that allows a logarithmic RG flow in the IR but has a vanishing beta function in the far UV? From our discussion of the UV caps in \cite{FEP} it is clear what we should be looking for: we need a gravitational background that resembles the OKS geometry for small $r$, but has a UV cap given by an asymptotic AdS geometry. To extend this configuration to high temperature, we need the OKS-BH geometry\footnote{For more details on the construction of the OKS-BH (Ouyang-Klebanov-Strassler-Black-Hole) geometry, see \cite{FEP} sections 3.1 and 3.3.} at small $r$, and an asymptotic AdS-Schwarzschild geometry at large $r$. Such a geometry looks complicated, so we may want to ask whether we can switch off the three-form fluxes and still have a dual description with running couplings. If this were possible then the analysis could be made much simpler. It turns out however that such a simplification cannot occur in our set-up. To elucidate the last point, let us give a brief discussion.
The RG runnings of the two gauge groups in this theory are determined by the following dual maps in terms of the bulk axio-dilaton $\tau$ and NS potential $B$: \bg\label{maps} &&{4\pi^2\over g_1^2} + {4\pi^2\over g_2^2} = \pi~ {\rm Im}~\tau\nonumber\\ && {4\pi^2\over g_1^2} - {4\pi^2\over g_2^2} = {{\rm Im}~\tau\over 2\pi\alpha'} \int_{S^2} B - \pi~{\rm Im}~\tau ~~({\rm mod}~2\pi) \nd Once we switch off $B$, the two couplings would be the same and would induce a Shifman-Vainshtein $\beta$-function of the form: \bg\label{svbeta} {\partial\over \partial ~{\rm log}~\Lambda} {8\pi^2\over g^2_{\rm YM}} ~ = ~ 3N - 2N(1-\gamma_{A, B}) -N_f(1-\gamma_q) \nd where $\Lambda$ is the energy scale that is related to the radial coordinate $r$ on the gravity side, and $\gamma_{A,B}$ and $\gamma_q$ are the anomalous dimensions of the bi-fundamental and fundamental fields respectively. With such a picture of the flow, we might think that the F-theory completion might be to simply add a sufficient number of seven branes parallel to the spacetime directions and wrapping the two internal two-spheres (so that they are points in the ($r, \psi$) plane). This simple picture would unfortunately be inconsistent with the underlying cascading dynamics, as could be seen from a T-dual framework \cite{dasmukhi, ouyang}, and therefore would be incapable of showing certain important behavior expected from this model. To understand the problem, observe that in the T-dual picture {\it \`a la} \cite{dasmukhi}, the D3-D7-conifold geometry is mapped to a configuration of D4-D6 and intersecting NS5 branes. The NS5 branes, which are T-dual to the conifold, are along the $x^{012345}$ and $x^{012389}$ directions and have $N$ D4 branes (T-dual of $N$ D3 branes) between them. Every time we cross the NS5 branes we expect extra D4 branes to appear because of the D6 branes.
This is however only possible if the D6 branes are along $x^{0123457}$, which in turn would imply that on the brane side the D7 branes have to be along the radial direction. Additionally, motion of the NS5 brane would imply a $B_{NS}$ field on the brane side that is not a constant but has at least a log $r$ dependence along the radial direction. This means that $H_{NS}$ is non-zero, and we need to switch on $H_{RR}$ to satisfy the equations of motion, bringing us back to the model originally advocated in \cite{FEP}! The discussion above should convince the readers that there is little room to simplify the original proposal of \cite{FEP}. The original model proposed in \cite{FEP} is structurally complicated, but is possibly the {\it simplest} in realising some of the properties of IR large $N$ QCD. A model simpler than this would be devoid of any interesting physics. Once this is settled, we want to see how to construct the kind of geometry that we mentioned above. Our requirement is to impose confinement in the far IR and a vanishing beta function in the far UV. Since the original model studied in \cite{FEP} doesn't quite have the right large $r$ behavior because the warp factor therein goes as \eqref{larger}, we need to add an appropriate UV cap. However, before we actually go about constructing the background, let us clarify how the addition of UV caps in general can change IR geometries. One thing should of course be clear: the far IR geometries {\it cannot} change by the addition of UV caps. This is because the UV caps correspond to adding non-trivial irrelevant operators in the dual gauge theory\footnote{Assuming of course that the {\it relevant} operators were responsible for creating the cascading dynamics from a given UV completion in the first place!}. These operators keep far IR physics completely unchanged, but physics at not-so-small energies may change a bit. So the question is: how are these changes registered in our analysis?
Additionally we may also want to ask how the entropy of our gauge theories is affected by the addition of UV caps. Both the above questions may be answered if we could figure out how the UV caps affect the energy momentum tensors of our gauge theories. The generic form of the energy-momentum tensor that we derived in our earlier paper \cite{FEP} can be reproduced as: \bg\label{wakeup} && T^{mm}_{{\rm medium} + {\rm quark}} = \int \frac{d^4q}{(2\pi)^4}\sum_{\alpha, \beta} \Bigg\{({H}_{\vert\alpha\vert}^{mn}+ {H}_{\vert\alpha\vert}^{nm})s_{nn}^{(4)[\beta]} -4({K}_{\vert\alpha\vert}^{mn}+ {K}_{\vert\alpha\vert}^{nm})s_{nn}^{(4)[\beta]}\nonumber\\ && ~~~~~~~~~~~ +({K}_{\vert\alpha\vert}^{mn}+ {K}_{\vert\alpha\vert}^{nm})s_{nn}^{(5)[\beta]} +\sum_{j=0}^{\infty}~\hat{b}^{(\alpha)}_{n(j)} \widetilde{J}^n \delta_{nm} e^{-j{\cal N}_{\rm uv}} + {\cal O}({\cal T} e^{-{\cal N}_{uv}})\Bigg\} \nd where ${H}_{\vert\alpha\vert}^{mn}$ and ${K}_{\vert\alpha\vert}^{mn}$ depend on the full background geometry via eq. (3.124) in \cite{FEP}, with $s_{nn}^{(p)[\beta]}$ being the Fourier coefficients. The other terms, namely $\hat{b}^{(\alpha)}_{n(j)}$ and ${\cal N}_{\rm uv}$, together specify the boundary theory for a specific UV completion \cite{FEP}. Now it is easy to see how the UV caps would change our results. Once we add a UV cap, the local region $r_c - \alpha_1 \le r \le r_c + \alpha_2$ near the junction\footnote{Clearly $\alpha_1 << r_c$ because the far IR geometry should remain completely unaltered.} at $r = r_c$ changes, with ($\alpha_1, \alpha_2$) being some appropriate neighborhood around $r_c$. This means that $C_1^{mn}, A_1^{mn}$ and $B_1^{mn}$ etc. in eq (3.124) of \cite{FEP} would change.
These changes can be registered as \bg\label{changela} &&{H}_{\vert\alpha\vert}^{mn} ~\to ~{\widetilde H}_{\vert\alpha\vert}^{mn} ~\equiv ~ {H}_{\vert\alpha\vert}^{mn} ~+~ (\delta C_{1(\alpha)}^{mn} - \delta A'^{mn}_{1(\alpha)})e^{-4\left[1 - \epsilon_{(\alpha)}\right]{\cal N}_{\rm eff}} ~+~ {\cal O}(e^{-j{\cal N}_{\rm uv}}) \nonumber\\ &&{K}_{\vert\alpha\vert}^{mn} ~ \to ~ {\widetilde K}_{\vert\alpha\vert}^{mn} ~\equiv~ {K}_{\vert\alpha\vert}^{mn} ~+~ (\delta B_{1(\alpha)}^{mn} - \delta A^{mn}_{1(\alpha)}) e^{-4\left[1 - \epsilon_{(\alpha)}\right]{\cal N}_{\rm eff}} ~+~ {\cal O}(e^{-j{\cal N}_{\rm uv}}) \nd where the last terms in both the above equations appear from additional UV degrees of freedom, $C_{1(\alpha)}^{mn}$ etc are the relevant $\alpha$-th components of $C_{1}^{mn}$ etc, and ${\cal N}_{\rm eff}$ is the effective number of degrees of freedom at the cutoff. Once we know these changes, it is not too difficult to figure out the changes in the entropies due to the addition of UV caps. All we need is the RHS of eq (3.220) in \cite{FEP}, using the results from \eqref{changela} and taking care of the boundary temperatures $T_b$ from the changes in the warp factors\footnote{There may be interesting cases where the changes in the energy-momentum tensors are compensated by the changes in the boundary temperatures. In such cases the entropies may remain unchanged. Here we will not consider such cases.}.
Using \eqref{changela} the result can be written as: \bg\label{entropych} {\delta s\over s} ~&= &~ \left({1\over {\cal T}} + {1\over 2h({\cal T})}{dh({\cal T})\over d{\cal T}}\right) \delta{\cal T} \\ && ~ + {\int d^4 q \sum_{\alpha, \beta}\left[\delta H^{(mn)}_{\vert\alpha\vert} {\tilde s}_{nn}^{(4)[\beta]} -\delta K^{(mn)}_{\vert\alpha\vert}\left(4 {\tilde s}_{nn}^{(4)[\beta]} - {\tilde s}_{nn}^{(5)[\beta]}\right)\right] \over \int d^4 q' \sum_{\alpha, \beta}\left[H^{(mn)}_{\vert\alpha\vert} {\tilde s}_{nn}^{(4)[\beta]} - K^{(mn)}_{\vert\alpha\vert}\left(4 {\tilde s}_{nn}^{(4)[\beta]} - {\tilde s}_{nn}^{(5)[\beta]}\right) + {\cal O}(e^{-{\cal N}_{uv}})\right]}\nonumber \nd However physics that is only sensitive to the far IR dynamics of our theory will not be affected by the addition of UV caps. On the other hand in all cases, far IR or not, none of our results could depend on the cut-off $r_c$. The results are only sensitive to the changes in IR geometries (via \eqref{changela}) and the UV degrees of freedom (via $e^{-j{\cal N}_{\rm uv}}$). From the above discussions we see how IR geometries could be affected by the addition of UV caps. This then tells us that we cannot simply add an AdS geometry at $r = r_c$. The vanishing beta function at UV could be realised by an asymptotic AdS geometry, that is, a geometry whose warp factor behaves as $r^{-4}$ only asymptotically. In other words we require: \bg\label{hde} && h ~= ~\frac{L^4}{r^4}\left[1+\sum_{i=1}^\infty\frac{a_i(\psi,\theta_j,\phi_j)}{r^i}\right]~~~ {\rm for ~~ large} ~r\nonumber\\ && h ~= ~\frac{L^4}{r^4}\left[\sum_{j,k=0}\frac{b_{jk}(\psi,\theta_i,\phi_i){\rm log}^kr}{r^j}\right]~~~{\rm for ~~ small} ~r \nd where ($\theta_i, \phi_i, \psi$) are the coordinates of the internal space. Observe also that we are now identifying the small $r$ behavior of the warp factor with the relation \eqref{larger} given above. The precise connection will be spelled out in detail below. Let us now make this a bit more precise.
We require a gauge theory with confining IR dynamics and almost free UV dynamics at zero temperature, and then we want to study this theory at a temperature higher than the deconfining temperature, as mentioned before. Our dual gravity background that could in principle reproduce the gauge theory dynamics cannot be the pure OKS (or OKS-BH) background of \cite{FEP}. We need an appropriate UV cap. Again, as we mentioned earlier, the UV cap should be asymptotically AdS. The warp factor should have the form \eqref{hde} at UV and IR, so we need an interpolating geometry between them to have a well defined background. The logarithmic warp factor at far IR tells us that the geometry is influenced by one or a set of coincident D7 branes. These seven branes wrap the $T^{1,1}$ as in {\it branch 2} of \cite{FEP} while extending in the radial $r$ direction and filling up the four Minkowski directions (see eq (3.9) of \cite{FEP}). In particular the embedding equation for a D7 brane is given by \cite{ouyang, FEP} \bg \label{embedding} z\equiv ~r^{3/2} e^{i(\psi-\phi_1-\phi_2)}{\rm sin}~\frac{\theta_1}{2}~{\rm sin}~\frac{\theta_2}{2}~=\mu \nd where $\mu$ is a parameter. For the supersymmetric case $\mu$ would be related to the deformation parameter of the conifold. Since we don't require supersymmetry we can take $\mu$ to be arbitrary\footnote{The issue of supersymmetry is a little subtle here. The susy can of course be broken by choosing a different $\mu$, but can also be broken by choosing the right $\mu$ but separating the wrapped D5 branes along the ($\theta_2, \phi_2$) directions. One may say that if we allow bound states of D5 and D7 branes we might restore zero temperature susy. Alternatively we can consider the seven-branes to be oriented as in \cite{gtpapers}, which is related to our far IR configuration. In \cite{gtpapers} heavy fundamental quarks could still restore susy. In this paper we will not consider the seven-brane configurations of \cite{gtpapers}.}.
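Taking the modulus of the embedding equation makes the radial reach of a given seven brane explicit. Since ${\rm sin}~\frac{\theta_1}{2}~{\rm sin}~\frac{\theta_2}{2} \le 1$, we have \bg r^{3/2}~{\rm sin}~\frac{\theta_1}{2}~{\rm sin}~\frac{\theta_2}{2} ~=~ \vert\mu\vert ~~~\Longrightarrow~~~ r ~\ge~ \vert\mu\vert^{2/3} \nd so a seven brane with embedding parameter $\mu$ dips down to a minimal radial distance $\vert\mu\vert^{2/3}$, with the bound saturated at $\theta_1 = \theta_2 = \pi$.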
Different values of $\mu$ will tell us how far the D7 branes are from the origin $r = 0$. The $N_f$ D7 branes may have $N_f$ different locations given by $N_f$ different values of $\mu$, or the D7 branes may be coincident with just a single value of $\mu$. The positions of the seven branes are therefore parametrised by the coordinate $z$. Since the seven branes are in {\it branch 2} of \cite{FEP}, their positions can be precisely parametrised by the internal coordinates ($\theta_2, \phi_2$). Thus the seven branes stretch along $r$ and can be placed at any point on the ($\theta_2, \phi_2$) plane\footnote{In the actual case the embedding is a union of branch 1 and branch 2. Therefore the seven-branes will trace a complicated surface in the ($r, \theta_i, \phi_i, \psi$) plane. For simplicity we will assume the embedding to be given by branch 2 of \cite{FEP}. Later on, when we study fluxes, the non-trivial nature of the seven-brane embeddings will become important.}. Because of this distribution, as we shall see shortly, the axion-dilaton field runs with the coordinate $z$ and the running is determined by F-theory. Our UV cap, in the full F-theory picture, should allow a distribution of seven branes that could eventually reproduce the warp factors \eqref{hde}. This is however not the {\it only} requirement: we also want to study the potential of heavy quarkonium type bound states in our theory (this means that we need to study the bound states of very heavy quark-antiquark pairs), which in turn implies that we require a set of seven branes as far away from the origin as possible (or, at high temperature, as far away as possible from the black hole horizon). There are a few possible ways to distribute the seven branes that might be able to reproduce the required picture. The simplest way would be to distribute the seven branes as in Figure 1 below.
\begin{figure}[htb]\label{sevenbraneconf1} \begin{center} \includegraphics[height=6cm]{sevenbraneconf1.eps} \caption{Simplest way to distribute localised seven branes in our model. The seven branes wrap the internal sphere parametrised by ($\theta_1, \phi_1$) and are stretched along the spacetime directions. Their extensions along the radial directions are parametrised by $\mu$ as in the embedding equation above. The coordinate $r_{\rm min}$ denotes the distance of the nearest seven brane from the black hole horizon. A string stretched between this seven brane and the black hole horizon is the lightest {\it fundamental} quark in our model. The heaviest quark, on the other hand, will be from the seven brane that is farthest from the horizon. A string whose two ends lie on such a seven brane will form a quark antiquark bound state. The temporal evolution of such a string will determine the Wilson loop in our picture.} \end{center} \end{figure} This picture, although simple and desirable, does not quite suffice for us because we need a configuration of seven branes that could interpolate between the IR and UV configurations. One of the simplest ways to have an interpolating geometry using the configurations studied in \cite{FEP} is to make the seven branes delocalised along the ($r, \theta_2, \phi_2$) directions and call the resulting quantity ${\widetilde N}_f(r, \theta_2, \phi_2)$. This means that \bg\label{nfcon} N_f(r) ~\equiv~ \int d\theta_2 d\phi_2 ~{\widetilde N}_f(r, \theta_2, \phi_2)~ {\rm sin}~\theta_2 \nd An immediate way to realise such a configuration is given in Figure 2 below. Such a configuration has been advocated in some recent works (see for example \cite{cotrone, cotrone2}, where the delocalised seven branes are embedded via the Kuperstein embeddings \cite{kuperstein}).
\begin{figure}[htb]\label{sevenbraneconf2} \begin{center} \includegraphics[height=6cm]{sevenbraneconf2.eps} \caption{Complete delocalisation of the seven branes along the ($r, \theta_2, \phi_2$) directions.} \end{center} \end{figure} Such a configuration of seven branes, although useful for many purposes, unfortunately still does not quite suffice for us because heavy quarks in such a scenario would tend to go spontaneously to configurations of lighter quarks. Furthermore we want to impose the F-theory constraint, for scales $r > {\hat r}$: \bg\label{nfconstraint} N_f(r)\Big\vert_{r > {\hat r}}~= ~ 24 \nd which would be a little difficult to impose in the fully delocalised scenario\footnote{In fact F-theory {\it can} allow the number of seven-branes to be arbitrarily large. For this case we need to carefully study the singularity structure of the underlying manifold. Here, for most of the paper, we will restrict ourselves to $24$ seven-branes. This means that $g_s$ could be as small as 0.042.}. Therefore the configuration that we would be mostly interested in is given in Figure 3. In this picture, which should be viewed as a cross between the earlier two figures, every individual set of seven branes is delocalised a little bit. \begin{figure}[htb]\label{sevenbraneconf3} \begin{center} \includegraphics[height=6cm]{sevenbraneconf3.eps} \caption{{Another configuration of seven branes where the delocalisation is milder compared to the earlier picture. The local minima of every set of seven branes help us to study various configurations of quark antiquark pairs.}} \end{center} \end{figure} The F-theory constraint on the number of flavors, i.e \eqref{nfconstraint}, can be easily imposed without making $N_f(r, \theta_2, \phi_2)$ arbitrarily small. The final picture that we want to emphasise, which would capture the underlying dynamics, is given as Figure 4 below. The figure is a slight variant of the previous figure.
We have divided our geometry into three regions of interest: Regions 1, 2 and 3. \begin{figure}[htb]\label{sevenbraneconf4} \begin{center} \includegraphics[height=6cm]{sevenbraneconf4.eps} \caption{{This figure, which is a slight variant of the previous figure, shows the various regions of interest. As should be clear, most of the seven branes lie in Region 3, except for a small number of coincident seven branes that dip till $r_{\rm min}$, i.e Region 1. The interpolating region is Region 2. The detailed backgrounds for each of these regions are given in the text. Note however that, although we have emphasised Region 1 more, we will only consider the case where Region 3 $>>$ Region 1 + Region 2.}} \end{center} \end{figure} Region 1 is basically the one discussed in great detail in \cite{FEP}. In this region there is one (or a coincident set of) seven brane(s). The logarithmic dependences of the warp factor and fluxes come from these coincident (or single) seven branes. In fact the logarithmic runnings of the gauge theory coupling constants also stem from these seven branes. Since we require the UV to be free (or, more appropriately, strongly coupled and conformal), these IR logarithmic runnings would not be desirable at UV. Therefore the UV cap in the full F-theory framework is depicted as Region 3 in the above figure. In this region we expect all the seven branes to be distributed so that the axio-dilaton has the right behavior. We also expect vanishing $H_{NS}$ and $H_{RR}$ fields (just like the AdS cases). It is clear that one cannot jump from Region 1 to Region 3 abruptly. There should be an interpolating geometry where the fluxes and the metric have the necessary property of connecting the two solutions. This is Region 2 in our figure above. For all practical purposes we expect Region 3 to dominate; in other words, Region 3 should be greater than both Regions 1 and 2 combined.
In such a scenario the analysis of the Wilson loop for heavy quark-antiquark bound states would be easy: we wouldn't have to worry too much about the intermediate regions. Another big advantage of our UV cap is related to the issues raised in \cite{cschu}. Since the $H_{NS}$ and the axio-dilaton fields have well defined behaviors at large $r$, there would be {\it no} UV divergences of the Wilson loops in our picture! Therefore our configuration can boast not only of holographic renormalisability, but also of the absence of Landau poles and the associated UV divergences of the Wilson loops. In the following, let us therefore discuss the backgrounds for all three regions in detail. \subsection{Region 1: Fluxes, Metric and the Coupling Constants Flow} The background for Region 1 is discussed in detail in \cite{FEP}, so we will be brief. All the logarithmic behaviors of the fluxes and metric come from the single set of seven branes. The metric has the following typical form: \bg\label{bhmet1} ds^2 = {1\over \sqrt{h}} \Big[-g_1(r)dt^2+dx^2+dy^2+dz^2\Big] +\sqrt{h}\Big[g_2(r)^{-1}dr^2+ d{\cal M}_5^2\Big] \nd where $g_i(r)$ are the black-hole factors and $d{\cal M}_5^2$ is the metric of the warped resolved-deformed conifold (see the form in eq (3.5) of \cite{FEP}). The internal space retains its resolved-deformed conifold form up to ${\cal O}(g_sN_f)$. Beyond this order the internal space loses its simple form and becomes a complicated non-K\"ahler manifold.
The warp factor to this order, in terms of $N^{\rm eff}_f, M_{\rm eff}$ (see eq (3.10) of \cite{FEP} for details), is: \bg \label{hvalue} h =\frac{L^4}{r^4}\Bigg[1+\frac{3g_sM_{\rm eff}^2}{2\pi N}{\rm log}r\left\{1+\frac{3g_sN^{\rm eff}_f}{2\pi}\left({\rm log}r+\frac{1}{2}\right)+\frac{g_sN^{\rm eff}_f}{4\pi}{\rm log}\left({\rm sin}\frac{\theta_1}{2} {\rm sin}\frac{\theta_2}{2}\right)\right\}\Bigg]\nonumber\\ \nd As discussed in \cite{FEP}, the background has {\it all} the type IIB fluxes switched on, namely the three-forms, the five-form and the axio-dilaton. Both $N^{\rm eff}_f$ and $M_{\rm eff}$ are different from $N_f$ and $M$. We will give a detailed reason for this when we discuss the full geometry in the next two subsections. The three-form fluxes are: \begin{eqnarray}\label{koremi} {\widetilde F}_3 & = & 2M_{\rm eff} {\bf A_1} \left(1 + {3g_sN^{\rm eff}_f\over 2\pi}~{\rm log}~r\right) ~e_\psi \wedge \frac{1}{2}\left({\rm sin}~\theta_1~ d\theta_1 \wedge d\phi_1-{\bf B_1}~{\rm sin}~\theta_2~ d\theta_2 \wedge d\phi_2\right)\nonumber\\ && -{3g_s M_{\rm eff}N^{\rm eff}_f\over 4\pi} {\bf A_2}~{dr\over r}\wedge e_\psi \wedge \left({\rm cot}~{\theta_2 \over 2} ~{\rm sin}~\theta_2 ~d\phi_2 - {\bf B_2}~ {\rm cot}~{\theta_1 \over 2}~{\rm sin}~\theta_1 ~d\phi_1\right)\nonumber \\ && -{3g_s M_{\rm eff}N^{\rm eff}_f\over 8\pi}{\bf A_3} ~{\rm sin}~\theta_1 ~{\rm sin}~\theta_2 \left({\rm cot}~{\theta_2 \over 2} ~d\theta_1 + {\bf B_3}~ {\rm cot}~{\theta_1 \over 2}~d\theta_2\right)\wedge d\phi_1 \wedge d\phi_2 \\ H_3 &=& {6g_s {\bf A_4} M_{\rm eff}}\Bigg(1+\frac{9g_s N^{\rm eff}_f}{4\pi}~{\rm log}~r+\frac{g_s N^{\rm eff}_f}{2\pi} ~{\rm log}~{\rm sin}\frac{\theta_1}{2}~ {\rm sin}\frac{\theta_2}{2}\Bigg)\frac{dr}{r}\nonumber \\ && \wedge \frac{1}{2}\Bigg({\rm sin}~\theta_1~ d\theta_1 \wedge d\phi_1 - {\bf B_4}~{\rm sin}~\theta_2~ d\theta_2 \wedge d\phi_2\Bigg) + \frac{3g^2_s M_{\rm eff} N^{\rm eff}_f}{8\pi} {\bf A_5} \Bigg(\frac{dr}{r}\wedge e_\psi -\frac{1}{2}de_\psi
\Bigg)\nonumber \\ && \hspace*{1.5cm} \wedge \Bigg({\rm cot}~\frac{\theta_2}{2}~d\theta_2 -{\bf B_5}~{\rm cot}~\frac{\theta_1}{2} ~d\theta_1\Bigg)\nonumber \end{eqnarray} where $\widetilde F_3 \equiv F_3 - C_0 H_3$, $C_0$ being the ten dimensional axion, and the so-called asymmetry factors ${\bf A_i}, {\bf B_i}$ are given in eq. (3.83) of \cite{FEP} (see also \cite{sullyf}). The axio-dilaton and the five-form fluxes are: \bg\label{dilato} &&C_0 ~ = ~ {N^{\rm eff}_f \over 4\pi} (\psi - \phi_1 - \phi_2)\nonumber\\ && e^{-\Phi}~ =~ {1\over g_s} -\frac{N^{\rm eff}_f}{8\pi} ~{\rm log} \left(r^6 + 9a^2 r^4\right) - \frac{N^{\rm eff}_f}{2\pi} {\rm log} \left({\rm sin}~{\theta_1\over 2} ~ {\rm sin}~{\theta_2\over 2}\right)\nonumber\\ && F_5 ~ = ~ {1\over g_s} \left[d^4 x \wedge d h^{-1} + \ast(d^4 x \wedge dh^{-1})\right] \nd with $a$ being the resolution parameter of the internal space, which depends on the horizon radius $r_h$ as $a = a(r_h) + {\cal O}(g_s^2 M_{\rm eff}N^{\rm eff}_f)$. Once we consider the slice: \bg\label{sol}\theta_1~ = ~ \theta_2 ~ = ~ \pi,~~~~~~\phi_i~ = ~ 0,~~~~~~~\psi~ = ~0 \nd the background along the slice simplifies quite a bit. To ${\cal O}(g_sN_f)$ the background is: \bg\label{slicebg} && h =\frac{L^4}{r^4}\Bigg[1+\frac{3g_sM_{\rm eff}^2}{2\pi N_{\rm eff}}{\rm log}r\left\{1+\frac{3g_sN^{\rm eff}_f}{2\pi}\left({\rm log}r+\frac{1}{2}\right)\right\}\Bigg]\nonumber\\ && H_3 ~ = ~ \widetilde F_3 ~ = ~ C_0 ~ = ~ 0 \nonumber\\ && e^{-\Phi}~ =~ {1\over g_s} -\frac{N^{\rm eff}_f}{8\pi} ~{\rm log} \left(r^6 + 9a^2 r^4\right) \nd along with $F_5$ given by \eqref{dilato}. The simplicity of the background is the reason why our analysis of the mass and the drag of the quark in \cite{FEP} was straightforward enough to reveal the underlying physics, yet was not afflicted by problems like the UV divergences of \cite{cschu}\footnote{On the slice \eqref{sol} the pull-backs of the $B$-fields are zero.
This means that Wilson loops or other equivalent constructions could be carried out without any interference from the logarithmic $B$-fields.}. Note that the logarithmic RG flows of the two couplings come from the logarithmic $B_{NS}$ field, leading to confinement at the far IR (at zero temperature). In the following, to avoid clutter, ($N, N_f, M$) will denote their effective values. \subsection{Region 2: Interpolating Region and the Detailed Background} To attach a UV cap that allows a vanishing beta function we need at least a configuration of vanishing NS three-form. This cannot be {\it abruptly} attached to Region 1: we need an interpolating region. This region, which we will call Region 2, should be such that the three-forms vanish at its outermost boundary while still solving the equations of motion. The innermost boundary of Region 2 $-$ which also forms the outermost boundary of Region 1 $-$ will be determined by the scale associated with the mass of the lightest quark, $m_0$, in our system. In terms of Figure 4, this is given by the region in the local neighborhood of $r_{\rm min} \equiv m_0 T_0^{-1} + r_h$, where $T_0$ and $r_h$ are the string tension and the horizon radius respectively. We have already discussed some aspects of this in our previous paper \cite{FEP} when we discussed the issue of UV caps. It is now time to spell this out in more detail. The structure of the warp factor should be clear from \cite{FEP}. We expect the form to look like \eqref{larger} discussed earlier. For our purpose, it would make more sense to rewrite this in such a way that the radial $r$ dependence shows up explicitly.
For this we need to first define two functions $f(r)$ and $M(r)$ as (see Figure 5): \bg\label{mdefo} f(r) ~ \equiv ~ {e^{\alpha(r-r_0)}\over 1 + e^{\alpha(r - r_0)}}, ~~~~~~~ M(r) ~\equiv~ M [1-f(r)], ~~~~~ \alpha >> 1 \nd where the scale $r_0$ will be explained below and $M$ is, as before, related to the effective number of five-branes (or the RR three-form charge). Note that for $r << r_0$, $f(r) \approx e^{\alpha(r-r_0)}$, whereas for $r > r_0$, $f(r) \approx 1$. Thus for $r$ smaller than the scale $r_0$, $f(r)$ is a very small quantity; whereas for $r$ bigger than the scale $r_0$, $f(r)$ is approximately unity. In terms of $M(r)$ this means that for $r < r_0$, $M(r) \approx M$, whereas for $r > r_0$, $M(r) \to 0$. This will be useful below. Using these functions, we see that the simplest way in which the logarithmic behavior along the radial direction may go over to the inverse $r$ behavior is when the warp factor has the following form: \bg\label{warpy} h ~ = ~ { c_0 + c_1 f(r) + c_2f^2(r)\over r^4} \sum_{\alpha} ~{L_{\alpha} \over r^{\epsilon_{(\alpha)}}} \nd where $c_i$ are constant numbers, and the denominator can be mapped to $r_{(\alpha)}$ defined in \eqref{larger}, with $\epsilon_{(\alpha)}$ being functions of $g_sN_f, M, N$ and the resolution parameter $a$. The $L_{\alpha}$'s are functions of the angular coordinates ($\theta_i, \phi_i, \psi$). For other details see \cite{FEP}. The warp factor $h$ has the required logarithmic behavior as long as the exponents of $r$ are small and fractional, and indeed switches to the inverse $r$ behavior as soon as the exponents become integers. In \cite{FEP} we gave some examples where the exponents are small and fractional numbers, and alluded to the case where they become integers\footnote{See the section on holographic renormalisability in \cite{FEP}.}. Since $N_f$ is a delocalised function, this behavior could be naturally realised now and would eventually give way to the required inverse $r$ behavior of the warp factor in Region 3.
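As a purely illustrative numerical check of this switching behavior, the sketch below evaluates $f(r)$, $M(r)$ and the flux counting of \eqref{fvary} in its simplest truncation (single $\alpha$ with the $r^{-\epsilon_{(\alpha)}}$ sum set to unity); the parameter values are arbitrary choices for illustration, not taken from the actual background:

```python
import math

def f(r, r0=5.0, alpha=20.0):
    # f(r) = e^{alpha(r - r0)} / (1 + e^{alpha(r - r0)}),
    # written in a numerically stable logistic form.
    x = alpha * (r - r0)
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    return math.exp(x) / (1.0 + math.exp(x))

def M_of_r(r, M=100.0, r0=5.0, alpha=20.0):
    # M(r) = M [1 - f(r)]: approximately M below r0, approximately 0 above r0.
    return M * (1.0 - f(r, r0, alpha))

def M_tot(r, M=100.0, gs=0.01, Nf=2.0, r0=5.0, alpha=20.0):
    # Flux units M_tot(r), dropping the slowly varying sum over
    # r^{-eps_alpha} (set to 1 here purely for illustration).
    return M_of_r(r, M, r0, alpha) * (1.0 + 3/(2*math.pi)
                                      - 3/(2*math.pi * r**(gs*Nf)))

# One cascade step r -> r e^{-2 pi / 3 g_s M}: for small g_s N_f the
# flux should drop by approximately N_f units, and it should shut off
# entirely for r > r0 where f(r) -> 1.
step = math.exp(-2*math.pi/(3*0.01*100.0))
drop = M_tot(1.0) - M_tot(1.0 * step)
```

For $\alpha \gg 1$ the transition at $r = r_0$ is sharp: $f$ jumps from essentially $0$ to essentially $1$ within a window of width $\sim 1/\alpha$, and the computed `drop` comes out close to $N_f$, matching the expected RG cascade.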
It is at least clear that such a behavior of the warp factor does solve the background supergravity equations of motion near $r = r_{\rm min}$ (see \cite{FEP} for a concrete example, and we will give more details on this below); however, what we want to know is whether such a behavior of the warp factor is generically a solution to the EOM, or whether we need to add sources to the theory. It will turn out that we need to add sources at the outermost boundary of Region 2. The question now is to figure out consistently the specific point in the radial direction beyond which Region 3 would start. This way we will know exactly {\it where} to add the sources and the AdS cap. The demarcation point can be found easily by looking at the behavior of $H_{NS}$ and $H_{RR}$. For this we need to use the functions \eqref{mdefo} to write the RR three-form. Our ansatz for ${\widetilde F}_3$ then is: \begin{eqnarray} &&{\widetilde F}_3 = \left({a}_o - {3 \over 2\pi r^{g_sN_f}} \right) \sum_\alpha{2M(r)c_\alpha\over r^{\epsilon_{(\alpha)}}} \left({\rm sin}~\theta_1~ d\theta_1 \wedge d\phi_1- \sum_\alpha{f_\alpha \over r^{\epsilon_{(\alpha)}}}~{\rm sin}~\theta_2~ d\theta_2 \wedge d\phi_2\right)\nonumber\\ &&~~ \wedge~ {e_\psi\over 2}-\sum_\alpha{3g_s M(r)N_f d_\alpha\over 4\pi r^{\epsilon_{(\alpha)}}} ~{dr}\wedge e_\psi \wedge \left({\rm cot}~{\theta_2 \over 2}~{\rm sin}~\theta_2 ~d\phi_2 - \sum_\alpha{g_\alpha \over r^{\epsilon_{(\alpha)}}}~ {\rm cot}~{\theta_1 \over 2}~{\rm sin}~\theta_1 ~d\phi_1\right)\nonumber \\ && -\sum_\alpha{3g_s M(r) N_f e_\alpha\over 8\pi r^{\epsilon_{(\alpha)}}} ~{\rm sin}~\theta_1 ~{\rm sin}~\theta_2 \left({\rm cot}~{\theta_2 \over 2}~d\theta_1 + \sum_\alpha{h_\alpha \over r^{\epsilon_{(\alpha)}}}~ {\rm cot}~{\theta_1 \over 2}~d\theta_2\right)\wedge d\phi_1 \wedge d\phi_2\label{brend} \end{eqnarray} where $a_o = 1 + {3\over 2\pi}$ and ($c_\alpha, ..., h_\alpha$) are constants.
One may also notice three things: first, how the internal forms get deformed near the innermost boundary of the region; second, how the function $f(r)$ appears in all the components; and finally, how $N_f$ is, as before, not a constant but a delocalised function\footnote{We will soon see that $N_f$ in fact is the effective number of seven-branes.}. The function $f(r)$ becomes unity for $r > r_0$ and therefore ${\widetilde F}_3 \to 0$ for $r > r_0$. For $r < r_0$, the corrections coming from $f(r)$ are exponentially small. Integrating ${\widetilde F}_3$ over the topologically non-trivial three-cycle: \bg\label{3cycle} {1\over 2}{e_\psi} \wedge \left({\rm sin}~\theta_1~ d\theta_1 \wedge d\phi_1- \sum_\alpha{f_\alpha \over r^{\epsilon_{(\alpha)}}}~{\rm sin}~\theta_2~ d\theta_2 \wedge d\phi_2\right) \nd we find that the number of units of RR flux varies in the following way with respect to the radial coordinate $r$: \bg\label{fvary} M_{\rm tot}(r) = M(r) \left(1 + {3\over 2\pi} - {3 \over 2\pi r^{g_sN_f}} \right) \sum_\alpha{c_\alpha\over r^{\epsilon_{(\alpha)}}} \nd which is perfectly consistent with the RG flow: for $r < r_0$, under $r \to r e^{-{2\pi\over 3 g_s M}}$, $M_{\rm tot}$ decreases precisely by $N_f$, i.e as $M \to M - N_f$, since the correction factor $e^{\alpha(r-r_0)}$ coming from $f(r)$ is negligible. For $r > r_0$, $M_{\rm tot}$ shuts off completely. This also means that below $r_0$ the total number of colors $N$ decreases by $M_{\rm tot}$, exactly as one would have expected for the RG flow with $N_f$ flavors. \begin{figure}[htb]\label{ffunction} \begin{center} \includegraphics[height=8cm,width=6cm,angle=-90]{ffunction.eps} \caption{A plot of the $f(r)$ function for $r_0 = 5$ in appropriate units, and various choices of $\alpha$. Observe that for large $\alpha$ the function quickly approaches 1 for $r > r_0$.} \end{center} \end{figure} Using similar deformed internal forms, one can also write down the ansatz for the NS three-form.
This is given as: \begin{eqnarray} &&H_3 = \sum_\alpha {6g_s M(r) k_\alpha \over r^{\epsilon_{(\alpha)}}}\Bigg[1+\frac{1}{2\pi} - \frac{\left({\rm cosec}~\frac{\theta_1}{2}~{\rm cosec}~\frac{\theta_2}{2}\right)^{g_sN_f}}{2\pi r^{{9g_sN_f\over 2}}} \Bigg]~ dr \nonumber\\ &&\wedge \frac{1}{2}\Bigg({\rm sin}~\theta_1~ d\theta_1 \wedge d\phi_1 -\sum_\alpha{p_\alpha \over r^{\epsilon_{(\alpha)}}} ~{\rm sin}~\theta_2~ d\theta_2 \wedge d\phi_2\Bigg) +\sum_\alpha \frac{3g^2_s M(r) N_f l_\alpha}{8\pi r^{\epsilon_{(\alpha)}}} \Bigg(\frac{dr}{r}\wedge e_\psi -\frac{1}{2}de_\psi \Bigg)\nonumber\\ && \wedge \Bigg({\rm cot}~\frac{\theta_2}{2}~d\theta_2 -\sum_\alpha{q_\alpha \over r^{\epsilon_{(\alpha)}}}~{\rm cot}~\frac{\theta_1}{2} ~d\theta_1\Bigg) + g_s {dM(r) \over dr} \left(b_1(r)\cot\frac{\theta_1}{2}\,d\theta_1+b_2(r)\cot\frac{\theta_2}{2}\,d\theta_2\right)\nonumber\\ &&\wedge e_\psi \wedge dr +{3g_s\over 4\pi} {dM(r) \over dr}\left[\left(1+g_sN_f -{1\over r^{2g_sN_f}} + {9a^2g_sN_f\over r^2}\right) \log\left(\sin\frac{\theta_1}{2}\sin\frac{\theta_2}{2}\right) + b_3(r)\right]\nonumber\\ && \sin\theta_1\,d\theta_1\wedge d\phi_1\wedge dr -{g_s\over 12\pi}{dM(r) \over dr} \Bigg(2 -{36a^2g_sN_f\over r^2} + 9g_sN_f -{1\over r^{16g_sN_f}} - {1\over r^{2g_sN_f}} + {9a^2g_sN_f\over r^2}\Bigg)\nonumber\\ && ~~~~~~~~~~~~~~~~ \sin\theta_2\,d\theta_2\wedge d\phi_2\wedge dr - {g_sb_4(r)\over 12\pi}{dM(r) \over dr}~ \sin\theta_2\,d\theta_2\wedge d\phi_2\wedge dr \label{brend2} \end{eqnarray} with ($k_\alpha, ..., q_\alpha$) being constants and $b_n = \sum_m {a_{nm}\over r^{m + \widetilde{\epsilon}_m}}$, where $a_{nm} \equiv a_{nm}(a^2, g_sN_f)$ and $\widetilde{\epsilon}_m \equiv \widetilde{\epsilon}_m(g_sN_f)$. The way we constructed the three-forms implies that $H_3$ is closed. In fact the ${\cal O}(\partial f)$ terms that we added to \eqref{brend2} ensure that. However $F_3$ is not closed. We can use the non-closure of $F_3$ to analyse the {\it sources} that we need to add for consistency.
These sources should in general be ($p, q$) five-branes, with ($p, q$) negative, so that they could influence both the three-forms; and since the ISD property of the three-forms is satisfied near $r = r_{\rm min}$, the sources should be close to the other boundary. The simplest choice would probably be anti five-branes, because adding anti D5-branes would change ${\widetilde F}_3$, and to preserve the ISD condition $H_3$ would have to change accordingly. Furthermore, as we mentioned before, as $r \to r_0$, both $H_3 = {\widetilde F}_3 \to 0$. Therefore $r = r_0$ is where Region 2 ends and Region 3 begins, and we can put the sources there. They could be oriented along the spacetime directions, located around the local neighborhood of $r = r_0$, and wrap the internal two-sphere ($\theta_1, \phi_1$) so that they are parallel to the seven-branes. However, putting in anti D5-branes near $r = r_0$ would imply non-trivial forces between the five-branes and seven-branes, as well as among the five-branes themselves. Therefore if we keep, in general, the ($p, q$) five-branes close to, say, one of the seven-branes, then they could get {\it dissolved} in the seven-brane as electric and magnetic gauge fluxes $\ast F^{(1)}$ and $F^{(1)}$ respectively. Thus the seven-brane soaks in the five-brane charges, which in turn would mean that ${\widetilde F}_3$ in \eqref{brend} and $H_3$ in \eqref{brend2} will satisfy the following EOMs: \bg\label{eombrend} d{\widetilde F}_3~&=&~ F^{(1)} \wedge \Delta_2(z) - d\left({\bf Re}~\tau\right) \wedge H_3\nonumber\\ d\ast H_3 ~&=&~ \ast F^{(1)} \wedge \Delta_2(z) - d(C_4 \wedge F_3) \nd where the tension of the seven-brane is absorbed in $\Delta_2(z)$, which is the term that measures the delocalisation of the seven branes (for localised seven branes this would be copies of the two-dimensional delta functions), and $\tau$ is the axio-dilaton that we will determine below. In addition to this, $d\ast F_3$ will satisfy its usual EOM.
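The statement that the seven-brane {\it soaks in} the five-brane charges can be quantified, at least schematically: the seven-brane worldvolume action contains a Chern-Simons coupling of the form \bg \mu_7 \int_{\rm D7} C_6 \wedge {F^{(1)}\over 2\pi} \nd so ${1\over 2\pi}\int_{S^2} F^{(1)} = n$ units of worldvolume flux through the wrapped two-cycle endow the seven-brane with $n$ units of (anti) five-brane charge. This standard dissolution mechanism is what sources the $F^{(1)} \wedge \Delta_2(z)$ terms on the RHS of \eqref{eombrend}.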
For all the analysis in this paper we will also assume: \bg\label{regide} \vert r_0 - r_{\rm min}\vert ~\le~ a_1, ~~~~ \vert r_{\rm min} - r_h \vert ~\le ~a_2, ~~~~ {\rm Region}~3 ~>>~ a_1 + a_2 \nd to be our approximation. This way, as we said before, Region 3 will dominate our calculations. However the above set of equations \eqref{eombrend} is still not the full story. Due to the anti GSO projections between the anti-D5 and D7-branes, there should be a tachyon between them. It turns out that the tachyon can be removed (or made massless) by switching on additional electric and magnetic fluxes on the D7 along, say, the ($r, \psi$) directions! This would at least kill the instability due to the tachyon, although susy may not be restored. For details on the precise mechanism, the readers may refer to \cite{susyrest}. But switching on gauge fluxes on the D7 would generate extra D5 charges, and switching on gauge fluxes on the anti-D5s would generate extra D3 charges. This is one reason why we write ($N, N_f, M$) as effective charges. This way a stable system of anti-D5s and D7 could be constructed. To complete the rest of the story we need the axio-dilaton $\tau$ and the five-form. The five-form is easy to determine from the warp factor $h$ \eqref{warpy} using \eqref{dilato}. The total five-form charge should have contributions from the gauge fluxes also, which in turn would affect the warp factor. For regions close to $r_{\rm min}$ it is clear that $\tau$ goes as $z^{-g_sN_f}$, where $z$ is the embedding \eqref{embedding}. More generically, and for the whole of Region 2, looking at the warp factor and the three-form fluxes we expect the axio-dilaton to go as\footnote{One may use this value of the axio-dilaton and the three-form NS fluxes \eqref{brend2} to determine the beta function from the relations \eqref{maps}. To lowest order in $g_sN_f$ we will reproduce the SV beta function \eqref{svbeta} as expected.
Notice that for $r > r_0$ the beta function {\it does not} vanish and both the gauge groups flow at the same rate. This will be crucial for our discussion in the following subsection.}: \bg\label{axdilato} \tau ~=~ [b_0 + b_1 f(r)]\sum_\alpha {C_\alpha\over r^{\epsilon_{(\alpha)}}} \nd where $b_i$ are constants and $C_\alpha$ are complex functions of the internal coordinates. These $C_\alpha$ and the constants $b_i$ are determined from the dilaton equation of motion \cite{drs, gkp}: \bg\label{dieq} {\widetilde\nabla}^2~\tau = {{\widetilde\nabla}\tau\cdot {\widetilde\nabla}\tau\over i{\rm Im}~\tau} - {4\kappa_{10}^2 ({\rm Im}~\tau)^2\over \sqrt{-g}} {\delta S_{\rm D7}\over \delta\bar\tau} + (p,q)~ {\rm sources} \nd where the tilde denotes the unwarped internal metric $g_{mn}$, and $S_{\rm D7}$ is the action for the {\it delocalised} seven-branes. The $f(r)$ term in the axio-dilaton comes from the ($p, q$) sources that are absorbed as gauge fluxes on the seven-branes\footnote{The $r^{-\epsilon_{(\alpha)}}$ behavior stems from additional anti seven-branes that we need to add to the existing system to allow for the required UV behavior from the F-theory completion. The full picture will become clearer in the next sub-section when we analyse the system in Region 3.}. Because of this behavior of the axio-dilaton we don't expect the unwarped metric to remain Ricci-flat to the lowest order in $g_sN_f$.
The Ricci tensor becomes: \bg\label{rten} {\widetilde{\cal R}}_{mn} = \kappa^2_{10} {\partial_{(m}\partial_{n)}\tau\over 4({\rm Im}~\tau)^2} + \kappa_{10}^2 \left({\widetilde T}^{\rm D7}_{mn} - {1\over 8}{\widetilde g}_{mn}{\widetilde T}^{\rm D7}\right) + \kappa_{10}^2 \left({\widetilde T}^{(p,q)5-{\rm brane}}_{mn} - {1\over 4}{\widetilde g}_{mn}{\widetilde T}^{(p,q)5-{\rm brane}}\right)\nonumber\\ \nd where we see that ${\widetilde{\cal R}}_{rr}$ picks up terms proportional to $\epsilon^2_{(\alpha)}$ and to derivatives of $f(r), N_f(r)$, implying that to zeroth order in $g_sN_f$ the interpolating region may not remain Ricci-flat. However since the coefficients are small, the deviation from Ricci-flatness is correspondingly small. In this paper we will not give the explicit forms for $C_\alpha, L_\alpha$ etc., but it should be clear from our above discussions that the EOMs are easily satisfied. One last thing to check is the equation for the warp factor. This is given by the five-form equation of motion: \bg\label{5form} d\ast d h^{-1} = H_3 \wedge {\widetilde F}_3 + \kappa_{10}^2 ~{\rm tr} \left(F^{(1)}\wedge F^{(1)} - {\cal R}\wedge {\cal R}\right) \Delta_2(z) + \kappa_{10}^2 ~{\rm tr}~F^{(2)} {\widetilde \Delta}_4({\cal S})\nonumber\\ \nd where $F^{(1)}$ is the seven-brane gauge field that we discussed earlier, $F^{(2)}$ is the ($p, q$) five-brane gauge field required for the proper interpretation of the colors on the gauge theory side\footnote{In fact one should view the gauge fluxes on the seven-branes and the five-branes as the total gauge fluxes that are needed to stabilise the system.
We will see in the next subsection that the full stabilisation would require additional fluxes, but the structure would remain the same.}, ${\cal R}$ is the pull-back of the Riemann two-form, and ${\widetilde \Delta}_4({\cal S})$ is the term that measures the delocalisation of the dissolved ($p, q$) five-branes over the space ${\cal S}$ embedded in the seven-brane (again, for localised five-branes there would be copies of four-dimensional delta functions). The $H_3 \wedge {\widetilde F}_3$ term in \eqref{5form} is proportional to ${M^2(r)\over r^{2\epsilon_{(\alpha)}}}$. This is precisely the form of the warp factor ansatz \eqref{warpy}, with the $f^2(r)$ term there accounting for the $M^2(r)$ term above. This way, with the warp factor \eqref{warpy} and the three-forms \eqref{brend} and \eqref{brend2}, we can satisfy \eqref{5form} by switching on small gauge fluxes on the seven-branes and five-branes. Therefore combining \eqref{warpy}, \eqref{brend}, \eqref{brend2}, \eqref{axdilato} and the five-form, we can pretty much determine the supergravity background for the interpolating region $r_{\rm min} < r \le r_0$. At the outermost boundary of Region 2 we therefore only have the metric and the axio-dilaton. Both the three-forms decay away exponentially fast, giving us a way to attach an AdS cap there. \subsection{Region 3: Seven Branes, F-Theory and UV Completions} The interpolating region, Region 2, that we derived above can alternatively be interpreted as the {\it deformation} of the neighboring geometry once we attach an AdS cap to the OKS-BH geometry. The OKS-BH geometry is the range $r_h \le r \le r_{\rm min}$ and the AdS cap is the range $r > r_0$. The geometry in the range $r_{\rm min} \le r \le r_0$ is the deformation. Such deformations should be expected for all the other UV caps advocated in \cite{FEP}. In this section we will complete the rest of the picture by elucidating the background for $r > r_0$ in the AdS cap.
But before that, let us give a brief gauge theory interpretation of the background\footnote{The discussion in the following paragraph is motivated by a correspondence that we had with Peter Ouyang. We thank him for his comments.}. For the UV region $r > r_0$ we expect the dual gauge theory to be $SU(N + M) \times SU(N + M)$ with fundamental flavors coming from the seven-branes. This is because the addition of ($p, q$) branes at the junction, or more appropriately anti five-branes at the junction with gauge fluxes on their world-volumes, tells us that the number of three-brane degrees of freedom is $N + M$, with the $M$ factor coming from five-brane anti-five-brane pairs. Furthermore, the $SU(N + M) \times SU(N + M)$ gauge theory tells us that the gravity dual is approximately AdS, but has RG flows because of the fundamental flavors (this RG flow is the remnant of the flow that we saw in the previous subsection; we will determine this in more detail below). At the scale $r = r_0$ we expect one of the gauge groups to be Higgsed, so that we are left with $SU(N + M) \times SU(N)$. Now both the gauge fields flow at different rates and give rise to the cascade that is slowed down by the $N_f$ flavors. In the end, at the far IR, we expect confinement at zero temperature. The few tests that we did above, namely, (a) the flow of $N$ and $M$ colors, (b) the RG flows, (c) the decay of the three-forms, and (d) the behavior of the dual gravity background, all point to the gauge theory interpretation that we gave above. What we haven't been able to demonstrate is the precise Higgsing that takes us to the cascading picture. From the gravity side it is clear how this could be interpreted; from the gauge theory side it would be interesting to demonstrate this. Coming back to the analysis of Region 3, we see that in the region $r > r_0$ we do not expect three-forms, but we do expect a non-zero axio-dilaton. This non-zero axio-dilaton comes from the rest of the seven-branes.
As mentioned in \cite{FEP}, the complete set of seven-branes should be determined from the F-theory picture \cite{vafaF} to capture the full non-perturbative corrections. This is now subtle because the seven-branes are embedded non-trivially here (see \eqref{embedding}). A two-dimensional base, parametrised by a complex coordinate $z$, on which we can have a torus fibration: \bg\label{torus} y^2 = x^3 + x F(z) + G(z) \nd can be identified with the $z$ coordinate of \eqref{embedding}. This way the vanishing of the discriminant $\Delta$ of \eqref{torus}, i.e. $\Delta \equiv 4F^3 + 27G^2 = 0$, will specify the positions of the seven-branes exactly as in \eqref{embedding}. Here we have taken $F(z)$ as a degree eight polynomial in $z$ and $G(z)$ as a degree 12 polynomial in $z$. The delocalisation ${\widetilde N}_f(r, \theta_2, \phi_2)$ should be thought of roughly as the distribution of bunches of seven-branes along the ($\theta_2, \phi_2$) directions with varying {\it sizes} along the radial $r$ direction, such that \eqref{nfconstraint} is maintained with the deviation $\delta \equiv {\hat r} - r_0$ a finite but not very large number. As is well known, the embedding of seven-branes in F-theory also tells us that we can have $SL(2, {\bf Z})$ jumps of the axio-dilaton. We can define the axio-dilaton $\tau \equiv C_0 + i e^{-\phi}$ as the modular parameter of a torus ${\bf T}^2$ fibered over the base parametrised by the coordinate $z$.
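As a quick cross-check of the fibration data (not part of the derivation), one can verify with a computer algebra system that the discriminant of the Weierstrass cubic in \eqref{torus} is indeed proportional to $\Delta = 4F^3 + 27G^2$, so that its zero locus is exactly the seven-brane locus. A minimal sympy sketch, with symbol names chosen by us:

```python
import sympy as sp

# Weierstrass form of the torus fibration: y^2 = x^3 + x*F(z) + G(z).
# The fiber degenerates (a seven-brane sits there) exactly where the
# discriminant of the cubic in x vanishes.
x, F, G = sp.symbols('x F G')

disc = sp.discriminant(x**3 + F*x + G, x)
Delta = 4*F**3 + 27*G**2

# sympy's normalisation gives disc = -(4F^3 + 27G^2), so the zero
# locus is the same: Delta = 0  <=>  disc = 0.
proportional = sp.simplify(disc + Delta) == 0
```

The overall sign is pure convention; only the vanishing locus matters for locating the branes.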
The holomorphic map\footnote{Holomorphic in $\tau$, the modular parameter.} from the fundamental domain of the torus to the complex plane is given by the famous $j$-function: \bg \label{axdil1} j(\tau) ~\equiv ~ \frac{\left[\Theta_1^8(\tau)+\Theta_2^8(\tau)+\Theta_3^8(\tau)\right]^3}{\eta^{24}(\tau)} ~= ~ \frac{4(24{F}(z))^3}{27{G}^2(z)+4{F}^3(z)} \nd where $\Theta_i, i = 1, 2, 3$ are the well-known Jacobi Theta-functions and $\eta$ is the Dedekind $\eta$-function: \bg\label{dedekind} \eta(\tau) ~=~ q^{1\over 24}\prod_n (1 - q^n),~~~~~~~~~ q ~= ~ e^{2\pi i \tau} \nd For our purpose, we can write the discriminant $\Delta(z)$ and the polynomial $F(z)$ generically as: \bg\label{delF} \Delta(z) ~=~ 4F^3 + 27G^2 ~=~ a \prod_{j =1}^{24} (z - {\widetilde z}_j), ~~~~~~ F(z) ~=~ b \prod_{i = 1}^8 (z - z_i) \nd so that when we have weak type IIB coupling, i.e. $\tau = C_0 + i\infty$, we have $j(\tau) \approx e^{-2\pi i \tau}$, and using \eqref{axdil1} the modular parameter can be mapped to the embedding coordinate $z$ as: \bg\label{modmap} \tau ~ &=& ~ {i\over g_s} ~+~ {i\over 2\pi} ~{\rm log}~(55296~ b^3a^{-1}) - {i\over 2\pi} \sum_{n = 1}^\infty \left[{1\over nz^n} \left(\sum_{i=1}^8 3 z_i^n - \sum_{j=1}^{24} {\widetilde z}_j^n\right)\right] \nonumber\\ &=&~ \sum_{n = 0}^\infty {{\cal C}_n + i{\cal D}_n\over {\widetilde r}^n} \nd where ${\cal C}_n \equiv {\cal C}_n(\theta_i, \phi_i, \psi)$ and ${\cal D}_n \equiv {\cal D}_n(\theta_i, \phi_i, \psi)$ are real functions and ${\widetilde r} = r^{3/2}$. To avoid cluttering of formulae, we will use $r$ instead of ${\widetilde r}$ henceforth unless mentioned otherwise. So the coordinate $r$ will parametrise Region 3, and $\tau = \sum {{\cal C}_n + i{\cal D}_n\over r^n}$. The above computation was done assuming that $z > (z_i, {\widetilde z}_j)$, which at this stage can be guaranteed if we take $\theta_{1,2}$ small. This gives rise to a special set of configurations of seven-branes where they are distributed along the other angular directions.
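The weak-coupling inversion $j(\tau)\approx e^{-2\pi i\tau}$ behind \eqref{modmap} can be checked numerically: for $\vert z\vert$ much larger than all the roots, the logarithm of $4(24F)^3/\Delta$ reproduces the power series in $1/z$ term by term. Note that with the normalisation \eqref{delF} the constant inside the log is $4\cdot 24^3\, b^3 a^{-1} = 55296\, b^3 a^{-1}$; the additive $i/g_s$ piece is an integration constant and is dropped in the sketch below, and all root positions are illustrative numbers, not model data:

```python
import cmath, math

# Illustrative root positions for F(z) (eight z_i) and Delta(z)
# (twenty-four zt_j), all well inside |z| = 1; a, b are the overall
# constants of eq. (delF).  None of these numbers come from the model.
z_i  = [0.5*cmath.exp(2j*math.pi*k/8)  for k in range(8)]
zt_j = [0.7*cmath.exp(2j*math.pi*k/24) for k in range(24)]
a, b = 2.0, 3.0
z = 10.0 + 3.0j                      # a point far outside all roots

# Weak coupling: tau ~ (i/2pi) log j with j = 4(24F)^3/Delta,
# dropping the additive i/g_s integration constant.
const = 4*24**3*b**3/a               # = 55296 b^3/a
tau_direct = (1j/(2*math.pi))*(cmath.log(const)
             + 3*sum(cmath.log(1 - w/z) for w in z_i)
             - sum(cmath.log(1 - w/z) for w in zt_j))

# Truncated series form of eq. (modmap)
tau_series = (1j/(2*math.pi))*cmath.log(const) \
           - (1j/(2*math.pi))*sum(
                 (3*sum(w**n for w in z_i) - sum(w**n for w in zt_j))/(n*z**n)
                 for n in range(1, 41))

err = abs(tau_direct - tau_series)   # agreement to machine precision
```

The agreement simply reflects $\log(1-w) = -\sum_n w^n/n$, which is the step taken in deriving \eqref{modmap}.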
However one might get a little worried if there exists some ${\widetilde z}_j \equiv {\widetilde z}_o$, related to the {\it farthest} seven-brane(s), for which the above approximation fails to hold. This can potentially happen when we try to compute the mass of the heaviest quark in our theory. The question is whether we can still use the $\tau$ derived in \eqref{modmap}, or whether we need to modify the whole picture. Before we go into answering this question, the choice of $z$ bigger than ($z_i, {\widetilde z}_j$) already needs more convincing elaboration, because taking $\theta_{1,2}$ small is a rather naive argument. The situation at hand is more subtle than that and, as we will argue below, the picture that we have right now is incomplete. To get the full picture, observe first that $z$ being given by our embedding equation \eqref{embedding} means that if we want to be in Region 3, we need to specify the condition $r > r_0$ in the definition of $z$. This way a given $z$ will {\it always} imply points in Region 3 for varying choices of the angular coordinates ($\theta_i, \phi_i, \psi$). However a similar argument cannot be given for arbitrary choices of ($z_i, {\widetilde z}_j$). A particular choice of ($z_i, {\widetilde z}_j$) may imply very large $r$ with small angular choices, or small $r$ with large angular choices. Thus analysing the system only in terms of the $r$ coordinate is tricky. In terms of the full complex coordinates, $z > (z_i, {\widetilde z}_j)$ means that we are always looking at points away from the surfaces given by $z = z_i$ and $z = {\widetilde z}_j$. What happens when we touch the $z = z_i$ surfaces? For these cases $F(z_i) \to 0$ and therefore we are no longer in the weak coupling regime: $F(z_i) = 0$ implies $j(\tau) \to 0$, which in turn means $\tau = {\rm exp}~(i\pi/3)$ on these surfaces. These are the constant coupling regimes of \cite{DM}, where the string couplings on these surfaces are {\it not} weak.
On the other hand, near any one of the seven-branes $z = {\widetilde z}_j$ we are in the weak coupling regime, and so \eqref{modmap} will imply \bg\label{taulog} \tau(z) = {1\over 2\pi i} ~{\rm log}~(z - {\widetilde z}_j) ~\to~i\infty \nd which of course is expected, but is nevertheless problematic for us. This is because we need logarithmic behavior of the axio-dilaton in Region 2, but not in Region 3. For a good UV behavior, we need the axio-dilaton to behave like \eqref{modmap} everywhere in Region 3. In addition to that, there is also the issue of the heaviest quarks creating additional log divergences that we mentioned earlier. These seven-branes are located at $z = {\widetilde z}_j \equiv {\widetilde z}_o$, and therefore if we can make the axio-dilaton independent of the coordinates ${\widetilde z}_o$ then at least we won't get any divergences from these seven-branes. It turns out that there are configurations (or rearrangements) of seven-brane(s) that allow us to do exactly that. To see one such configuration, let us define $F(z), G(z)$ and $\Delta(z)$ in \eqref{delF} in the following way: \bg\label{delFnow} &&F(z) ~ = ~ (z - {\widetilde z}_o)\prod_{i = 1}^7 (z - z_i), ~~~~~~~~~ G(z) ~=~ (z- {\widetilde z}_o)^2 \prod_{i =1}^{10} (z - {\hat z}_i)\nonumber\\ &&\Delta(z) ~ = ~ (z - {\widetilde z}_o)^3 \prod_{j = 1}^{21} (z - {\widetilde z}_j) \nd which means that we are stacking a bunch of {\it three} seven-branes at the point $z = {\widetilde z}_o$, and \bg\label{deldefn} \prod_{j = 1}^{21} (z - {\widetilde z}_j) ~ \equiv ~ 4\prod_{i = 1}^7 (z - z_i)^3 ~ + ~ 27 (z - {\widetilde z}_o) \prod_{i = 1}^{10} (z - {\hat z}_i)^2 \nd implying that the axio-dilaton $\tau$ becomes independent of ${\widetilde z}_o$ and behaves exactly as in \eqref{modmap}, with ($i, j$) in \eqref{modmap} varying up to (7, 21) respectively. The situation is now getting better: we have managed to control a subset of the log divergences.
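The factorisation in \eqref{delFnow} and \eqref{deldefn} is an algebraic identity: $4F^3$ carries $(z-{\widetilde z}_o)^3$ while $27G^2$ carries $(z-{\widetilde z}_o)^4$, so the discriminant has an overall cube. This can be confirmed with sympy for random rational roots (the root values below are arbitrary test data):

```python
import sympy as sp
import random

random.seed(0)
z = sp.symbols('z')

# Arbitrary rational test roots (not model data): z_o, seven z_i, ten w_i
zo  = sp.Rational(3, 2)
z_i = [sp.Rational(random.randint(-9, 9), 4) for _ in range(7)]
w_i = [sp.Rational(random.randint(-9, 9), 4) for _ in range(10)]

F = (z - zo)*sp.prod([z - s for s in z_i])
G = (z - zo)**2*sp.prod([z - s for s in w_i])

# 4F^3 carries (z-zo)^3 and 27G^2 carries (z-zo)^4, so Delta has an
# overall (z-zo)^3: three seven-branes stacked at z = zo, with tau
# insensitive to zo once the cube is stripped off.
Delta    = sp.expand(4*F**3 + 27*G**2)
stripped = sp.expand(4*sp.prod([(z - s)**3 for s in z_i])
                     + 27*(z - zo)*sp.prod([(z - s)**2 for s in w_i]))
match = sp.expand(Delta - (z - zo)**3*stripped) == 0
```

The residual degree-21 polynomial is exactly the product over the remaining ${\widetilde z}_j$ in \eqref{deldefn}.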
To get rid of the other set of log divergences that appear on the remaining twenty-one surfaces, one possible way would be to modify the embedding \eqref{embedding}. Recall that our configuration is non-supersymmetric, and therefore we are not required to use the embedding \eqref{embedding}. In fact a change in the embedding equation will also explain the axio-dilaton choice \eqref{axdilato} of Region 2. To change the embedding equation \eqref{embedding} we will use a trick similar to the one we used to kill off the three-form fluxes, namely, attaching anti-branes. These anti seven-branes\footnote{They involve both local and non-local anti seven-branes.} are embedded via the following equation: \bg \label{ABembedding} r^{3/2} e^{i(\psi-\phi_1-\phi_2)}{\rm sin}~\frac{\theta_1}{2}~{\rm sin}~\frac{\theta_2}{2}~=~r_0 e^{i\Theta} \nd where $\Theta$ is some angular parameter, which could vary for different anti seven-branes. The above embedding implies that their overlaps with the corresponding seven-branes are only partial\footnote{For example if we have a seven-brane at $z = {\widetilde z}_1$ such that the lowest point of the seven-brane is $r = \vert {\widetilde z}_1\vert^{2/3} < r_o$, then the corresponding anti-brane has only partial overlap with it.}. And since we require $$ \vert{\widetilde z}_j\vert^{2/3} ~ < ~ r_o$$ it will effectively appear that we can only have seven-branes in Regions 1 and 2, and {\it bound} states of seven-branes and anti seven-branes in Region 3.\footnote{Of course this effective description is only in terms of the axio-dilaton charges. In terms of the embedding equation for the seven-branes \eqref{embedding} this would imply that we can define $z$ with $r > r_o$ and ${\widetilde z}_j$ with $r < r_o$.} This way the axio-dilaton in Region 3 will indeed behave as \eqref{modmap} for all $z$ (except for the above mentioned seven points). There are two loose ends that we need to tie up to complete this side of the story.
The first one is the issue of Gauss' law or, more appropriately, charge conservation. The original configuration of 24 seven-branes had zero global charge, but now with the addition of anti seven-branes charge conservation seems to be problematic. There are a few ways to resolve this issue. First, we can assume that the branes wrap topologically trivial cycles, much like the ones of \cite{ouyang}. Then charge conservation is automatic. The second alternative is to isolate six seven-branes using some appropriate $F$ and $G$ functions, so that they are charge neutral. This is of course one part of the constant coupling scenario of \cite{senF}. Now if we make the ($\theta_2, \phi_2$) directions non-compact, then we can put in a configuration of 18 seven-brane and anti seven-brane pairs, using the embeddings \eqref{embedding} and \eqref{ABembedding} respectively. The system would look effectively like what we discussed above. Since the whole system is now charge neutral, compactification shouldn't be an issue here. The second loose end is the issue of tachyons between the seven-brane and anti seven-brane pairs. Again, as for the anti-D5 brane and D7-brane case \cite{susyrest}, switching on appropriate electric and magnetic fluxes will make the tachyon massless! Therefore the system will be stable and will behave exactly as we wanted, namely, the axio-dilaton will not have the log divergences over any slices in Region 3. This behavior of the axio-dilaton justifies the $r^{-\epsilon_{(\alpha)}}$ in \eqref{axdilato} in Region 2. So the full picture would be a set of seven-branes with electric and magnetic fluxes embedded via \eqref{embedding}, and another set of anti seven-branes embedded via \eqref{ABembedding} lying completely in Region 3. Thus in Region 3 both the three-forms vanish, and therefore $g_1 = g_2 = g_{\rm YM}$, with $g_1, g_2$ being the couplings of the two $SU(N+M)$ factors.
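With the two couplings equal, their common running is controlled by ${\rm Im}~\tau$. Assuming the normalisation $8\pi/g^2_{\rm YM} = \sum_n {\cal D}_n\Lambda^{-n}$ (our reading of the map \eqref{maps}; the precise normalisation is an assumption here), a one-line differentiation with respect to ${\rm log}~\Lambda$ reproduces the $\beta$-function \eqref{betym}. A sympy check with the series truncated at $n = 3$:

```python
import sympy as sp

L = sp.symbols('Lambda', positive=True)
D0, D1, D2, D3 = sp.symbols('D0:4', positive=True)

# Assumed normalisation (our reading of the coupling map):
# 8*pi/g_YM^2 = sum_n D_n/Lambda^n, i.e. Im(tau) evaluated at the
# RG scale, truncated here at n = 3 for the check.
P = D0 + D1/L + D2/L**2 + D3/L**3
g = sp.sqrt(8*sp.pi/P)

# beta(g) = dg/dlog(Lambda) = Lambda * dg/dLambda
beta = sp.simplify(L*sp.diff(g, L))

# Claimed closed form: (g^3/16 pi) * sum_n n*D_n/Lambda^n
claimed = g**3/(16*sp.pi)*(D1/L + 2*D2/L**2 + 3*D3/L**3)
match = beta.equals(claimed)
```

Only the $n \ge 1$ modes contribute, so a constant ${\rm Im}~\tau$ gives a vanishing $\beta$-function, consistent with conformality in the far UV.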
From \eqref{maps} we can compute the $\beta$-function for $g_{\rm YM}$ as: \bg\label{betym} \beta(g_{\rm YM}) ~\equiv~ {\partial g_{\rm YM}\over \partial {\rm log}~\Lambda} ~ = ~ {g^3_{\rm YM}\over 16 \pi}~ \sum_{n = 1}^\infty ~ {n{\cal D}_n \over \Lambda^n} \nd where $\Lambda$ is the usual RG scale related to the radial coordinate in the supergravity approximation. For $\Lambda \to \infty$, $\beta(g_{\rm YM}) \to 0$ implying a conformal theory in the far UV. We can fix the 't Hooft coupling to be strong to allow for the supergravity approximation to hold consistently at least for all points away from the $z = z_i, i= 1,..., 7$ surfaces. Existence of axio-dilaton $\tau$ of the form \eqref{modmap} and the seven-brane sources will tell us, from \eqref{rten}, that the unwarped metric may not remain Ricci flat. For example it is easy to see that \bg\label{rrr} \widetilde{\cal R}_{rr} = {{\cal A}_{\cal D}\over r^2{\cal D}_0^2} \sum_{n,m =1}^{\infty} nm {({\cal C}_n + i{\cal D}_n)({\cal C}_m - i{\cal D}_m)\over r^{n+m}} + {\cal O}\left({1\over r^n}\right) \nd where the last term should come from the seven-brane sources and, because of these sources, we don't expect $\widetilde{\cal R}_{rr}$ to vanish to lowest order in $g_sN_f$.\footnote{Although, as discussed before, the deviation from Ricci flatness will be very small.} The term ${\cal A}_{\cal D}$ is given by the following infinite series: \bg\label{adddef} {\cal A}_{\cal D} ~ = ~ 1-\sum_{k,l=1}^\infty {{\cal D}_k {\cal D}_l {\cal D}_0^{-2}\over r^{k+l}} + \sum_{k,l,p,q=1}^\infty {{\cal D}_k{\cal D}_l{\cal D}_p{\cal D}_q {\cal D}_0^{-2}\over r^{k+l+p+q}} + ... \nd Similarly one can show that \bg\label{rab} \widetilde{\cal R}_{ab} = {{\cal A}_{\cal D}\over {\cal D}_0^2} \sum_{n,m=0}^\infty {(\partial_a{\cal C}_n + i\partial_a{\cal D}_n) (\partial_b{\cal C}_m - i\partial_b{\cal D}_m)\over r^{n+m}} + {\cal O}\left({1\over r^n}\right) \nd for ($a, b$) $\ne r$. 
For $\widetilde{\cal R}_{rb}$ a similar inverse $r$ dependence can be worked out. In the far UV we expect the unwarped curvatures to equal the AdS curvatures. The warp factor $h$, on the other hand, can be determined from the following variant of \eqref{5form}: \bg\label{wfac} d\ast d h^{-1} = \kappa_{10}^2 ~{\rm tr} \left(F^{(1)}\wedge F^{(1)} - {\cal R}\wedge {\cal R}\right) \Delta_2(z) + ... \nd because we expect no non-zero three-forms in Region 3. The dotted terms are the non-abelian corrections from the seven-branes. As $r$ is increased, i.e. $r >> r_0$, we expect $F^{(1)}$ to fall off (recall that they appear from the anti (1,1) five-branes located in the neighborhood of $r = r_0$) and therefore it can be absorbed in ${\cal R}$. Once we embed the seven-brane gauge connection in some part of the spin-connection, we expect \bg\label{boxh} \square ~h^{-1} ~ = ~ {\cal O}\left({1\over r^n}\right) \nd from the non-abelian corrections via pull-backs. Solving this will reproduce the generic form for $h$: \bg\label{heaft} h ~= ~ \frac{L^4}{r^4}\left[1+\sum_{i=1}^\infty\frac{a_i(\psi,\theta_i, \phi_i)}{r^i}\right] \nd with $L^4$ a constant and the $a_i$'s suppressed by powers of $g_sN_f$. More details on this are given in {\bf Appendix A} and {\bf B}. At the far UV we recover the AdS picture, implying a strongly coupled conformal behavior in the dual gauge theory. From the above discussion we can conclude that the warp factor and the axio-dilaton will have the inverse $r$ behavior. We will use this background to do the Wilson loop computation in the next section. \section{Heavy Quark Potential from Gravity} Before we go into the actual computation of the Wilson loop, let us point out some generic standard arguments that map the Wilson loop computation to the string action, and then to the quark anti-quark potential. Consider the Wilson loop of a rectangular path ${\cal C} $ with spacelike width $d$ and timelike length $T$.
The timelike paths can be thought of as the world lines of a pair of quarks $Q\bar{Q}$ separated by a spatial distance $d$. Studying the expectation value of the Wilson loop in the limit $T\rightarrow \infty$, one can show that it behaves as \bg \label{WL-1} \langle W({\cal C})\rangle ~\sim~ {\rm exp}(-T E_{Q\bar{Q}}) \nd where $E_{Q\bar{Q}}$ is the energy of the $Q\bar{Q}$ pair, which we can identify with their potential energy $V_{Q\bar{Q}}(d)$ as the quarks are static. At this point we can use the principle of holography \cite{Mal-1} \cite{Witt-1} \cite{Mal-2} and identify the expectation value of the Wilson loop with the exponential of the {\it renormalised} Nambu-Goto action, \bg \label{Holo-1} \langle W({\cal C})\rangle ~\sim ~ {\rm exp}(-S^{\rm ren}_{{\rm NG}}) \nd with the understanding that ${\cal C}$ is now the boundary of the string world sheet. Note that we are computing the Wilson loop of a gauge theory living on flat four-dimensional space-time $x^{0, 1, 2, 3}$, whereas the string worldsheet is embedded in a curved five-dimensional manifold with coordinates $x^{0, 1, 2, 3}$ and $r$. We will identify the five-dimensional manifold with Region 3 that we discussed above. To be consistent with the recipe in \cite{Witt-1}, we need to make sure that the induced four-dimensional metric at the boundary of the string world sheet ${\cal C}$ is flat. For an AdS space this is guaranteed, as long as the world sheet ends on the boundary of the AdS space where the induced four-dimensional metric can indeed be written as $\eta_{\mu\nu}$. Using the geometry constructed in the previous section for Region 3, we see that the metric is asymptotically AdS and therefore induces a flat Minkowski metric at the boundary via: \bg\label{induce} \lim_{u \rightarrow 0}~ u^2 g_{\mu\nu}~ = ~ \eta_{\mu\nu} \nd where $u = r^{-1}$ and $g_{\mu\nu}$ is the full metric (including the warp factor) in Region 3. Thus we can make the identification (\ref{Holo-1}).
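That the limit \eqref{induce} holds for our background can be seen directly: with $h$ of the form \eqref{heaft}, the four-dimensional part of the metric is $h^{-1/2}\eta_{\mu\nu}$, and $u^2 h^{-1/2} \to L^{-2}$ as $u\to 0$, i.e. a flat boundary metric up to the constant rescaling by $L^2$ (which can be absorbed in the boundary coordinates). A one-line sympy verification of this statement, with the warp series truncated at second order:

```python
import sympy as sp

u, L, a1, a2 = sp.symbols('u L a1 a2', positive=True)

# Warp factor of eq. (heaft) in the u = 1/r variable, truncated:
# h = L^4 u^4 (1 + a1 u + a2 u^2).  The 4d metric is h^(-1/2) * eta.
h = L**4*u**4*(1 + a1*u + a2*u**2)
coeff = 1/sp.sqrt(h)

# u^2 g_{mu nu} -> eta_{mu nu}/L^2 as u -> 0: flat up to the constant
# rescaling by L^2, which is the content of eq. (induce).
boundary = sp.limit(u**2*coeff, u, 0)
```

The subleading $a_i$ corrections drop out of the limit, so the identification of the world-sheet boundary with flat four-dimensional space-time is insensitive to them.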
Once this subtlety is resolved, comparing (\ref{WL-1}) and (\ref{Holo-1}) we can read off the potential \bg \label{Vqq} V_{Q\bar{Q}}~ = ~ \lim_{T \to \infty} \frac{S^{\rm ren}_{{\rm NG}}}{T} \nd Thus knowing the renormalised string world sheet action, we can compute $V_{Q\bar{Q}}$ for a strongly coupled gauge theory. The above discussion was all for gauge theory at zero temperature. What happens when we allow non-zero temperatures? Does the above identification \eqref{Vqq} between the quark anti-quark potential and the renormalised Nambu-Goto action go through again? The answer is yes, but the derivation is a little more subtle than what we presented for the zero temperature case. At high temperatures and density we expect the medium effects to {\it screen} the interaction between the heavy quark and anti-quark pairs. The resulting effective potential between the quark anti-quark pairs separated by a distance $d$ at temperature ${\cal T}$ can then be expressed succinctly in terms of the free energy $F(d, {\cal T})$, which generically takes the following form: \bg\label{freeenergy} F(d, {\cal T}) = \sigma d ~f_s(d, {\cal T}) - {\alpha\over d} f_c(d, {\cal T}) \nd where $\sigma$ is the string tension, $\alpha$ is the gauge coupling and $f_c$ and $f_s$ are the screening functions\footnote{We expect the screening functions $f_s, f_c$ to equal identity when the temperature goes to zero. This gives the zero temperature Cornell potential.} (see for example \cite{karsch} and references therein). 
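To make the structure of \eqref{freeenergy} concrete, here is a toy parametrisation with a simple exponential screening ansatz $f_s = f_c = e^{-\mu({\cal T})d}$ with $\mu(0)=0$; the functional form and all numbers are purely illustrative placeholders on our part, not the screening functions of \cite{karsch}:

```python
import math

def free_energy(d, T, sigma=0.18, alpha=0.4, mu0=1.0):
    """Toy version of F(d,T) = sigma*d*f_s - (alpha/d)*f_c with the
    illustrative screening ansatz f_s = f_c = exp(-mu0*T*d); at T = 0
    both screening functions reduce to one (Cornell limit).  All
    parameter values here are arbitrary placeholders."""
    screen = math.exp(-mu0*T*d)
    return sigma*d*screen - (alpha/d)*screen

# T -> 0: unscreened Cornell potential sigma*d - alpha/d
cornell  = free_energy(2.0, 0.0)
screened = free_energy(2.0, 0.5)   # medium weakens the interaction
```

At zero temperature the confining linear term and the Coulombic term are recovered intact, while at finite temperature both are exponentially damped at large separation, which is the screening behavior described above.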
For the quark and the anti-quark pair kept at $+{d\over 2}$ and $-{d\over 2}$ we expect the Wilson lines $W\left(\pm {d\over 2}\right)$ to be related to the free energy via: \bg\label{wlfe} {\rm exp}\left[-{F(d, {\cal T})\over {\cal T}}\right] ~ = ~ {\langle W^\dagger\left(+{d\over 2}\right) W\left(- {d\over 2}\right)\rangle \over \langle W^\dagger\left(+{d\over 2}\right)\rangle \langle W\left(-{d\over 2}\right)\rangle} \nd In terms of the Wilson loop, the free energy \eqref{freeenergy} is now related to the renormalised Nambu-Goto action for the string on a background with a black hole\footnote{There is a large literature on the subject, where the quark anti-quark potential has been computed using various different approaches like pNRQCD \cite{brambilla}, hard wall AdS/CFT \cite{polstrass, boschi} and other techniques \cite{reyyee, cotrone2}. It is reassuring to note that the results we get using our newly constructed background match well with the results presented in the above references. This tells us that despite the large $N$ nature there is an underlying universal behavior of the confining potential.}. One may also note that the theory we get is a four-dimensional theory {\it compactified} on a circle in the Euclideanised version, and not a three-dimensional theory. \subsection{Computing the Nambu-Goto Action: Zero Temperature} Our first attempt to compute the NG action will be for the zero temperature case. This means that we set the black-hole factors $g_i$ in \eqref{bhmet1} to identity. The string configuration that we will take to do the required computation is given below in Figure 6. Note that we have configured our geometry such that the string is exclusively in Region 3. We will provide a stronger motivation for this soon.
For the time being observe that the configuration in Figure 6 has one distinct advantage over all other configurations studied in the literature, namely, that because of the absence of three-forms in Region 3 we will not have the UV divergence of the Wilson loop attributed to the logarithmically varying $B$ field \cite{cschu}. In fact even if the string enters Regions 2 and 1 we will not encounter any problems because there are no UV three-forms in our model. \begin{figure}[htb]\label{wilsonloop} \begin{center} \includegraphics[height=6cm]{wilsonloop.eps} \caption{{The string configuration that we will use to evaluate the Wilson loop in the dual gauge theory. The line $A$ determines the actual boundary, with the line $B$ denoting the extent of the seven brane. We will assume that line $B$ is very close to the line $A$. The line $C$ at $r = r_o$ denotes the boundary between Region 3 and Region 2. Region 2 is the interpolating region that ends at $r = r_{\rm min}$. At the far IR the geometry is cut-off at $r = a$ from the blown-up $S^3$. 
As discussed in the text, the string has a maximum dip that will eventually lead to the confining potential between the heavy quark and the antiquark.}} \end{center} \end{figure} \noindent Since the system is not dynamical, the world line for the static $Q\bar{Q}$ can be chosen to be \bg \label{qline} x^1~=~\pm \frac{d}{2},~~~~~ x^2 ~= ~ x^3 ~= ~0 \nd and using $u \equiv 1/r$ we can rewrite the metric in Region 3 as\footnote{We will be using the Einstein summation convention henceforth unless mentioned otherwise.}: \bg \label{reg3met} ds^2&=& g_{\mu\nu} dX^\mu dX^\nu ~=~ {\cal A}_n(\psi,\theta_i,\phi_i)u^{n-2}\left[-g(u)dt^2+d\overrightarrow{x}^2\right]\nonumber\\ &+&\frac{ {\cal B}_l(\psi,\theta_i,\phi_i)u^{l}}{ {\cal A}_m(\psi,\theta_i,\phi_i)u^{m+2}g(u)}du^2+\frac{1} {{\cal A}_n(\psi,\theta_i,\phi_i)u^{n}}~ds^2_{{\cal M}_5} \nd where ${\cal A}_n$ are the coefficients that can be extracted from the $a_i$ in \eqref{heaft}, the black hole factor $g(u) = 1$ for the zero-temperature case, and $ds^2_{{\cal M}_5}$ is the metric of the internal space that includes the corrections given in \eqref{rab}. This can be made precise as \bg\label{cala} {1\over \sqrt{h}}~ = ~ {1\over L^2 u^2 \sqrt{a_i u^i}} \equiv {\cal A}_n u^{n-2} ~=~ {1\over L^2 u^2}\left[a_0 -{a_1u\over 2} + \left({3a_1^2\over 8a_0} - {a_2\over 2}\right)u^2 + ...\right] \nd giving us ${\cal A}_0 = {a_0\over L^2}, {\cal A}_1 = -{a_1\over 2L^2}, {\cal A}_2 = {1\over L^2}\left({3a_1^2\over 8a_0} - {a_2\over 2}\right)$ and so on. Note that since the $a_i$, $i \ge 1$ are of ${\cal O}(g_sN_f)$ and $L^2 \propto \sqrt{g_sN}$, all the ${\cal A}_i$ are very small. The $r^{-n}$ corrections along the radial direction given in \eqref{rrr} are accommodated above via the ${\cal B}_l u^l$ series. Now suppose $X^\mu:(\sigma,\tau)\rightarrow (x^{0123}, u,\psi,\phi_i,\theta_i)$ is a mapping from the string world sheet to space-time.
Choosing a parametrization $\tau= x^0 \equiv t,\sigma= x^1 \equiv x$, with the boundary of the world sheet embedding being the path ${\cal C}$, we see that we can have \bg \label{ws-1} &&X^0~= ~t,~~~ X^1~= ~x,~~~ X^2~ = ~ X^3 ~ = ~ 0, ~~~ X^7~=~u(x), ~~~X^6 ~=~\psi ~=~ 0 \nonumber\\ &&(X^4, X^5) ~ = ~ (\theta_1, \phi_1) ~ = ~ (\pi/2, 0), ~~~ (X^8, X^9) ~ = ~ (\theta_2, \phi_2) ~ = ~ (\pi/2, 0) \nd which is almost like the slice \eqref{sol} that we chose in \cite{FEP}. The advantage of such a choice is that we get rid of the awkward angular variables that appear in our background, so that we will have only an $r$ (or $u$) dependent background like \eqref{slicebg} discussed before. We will also impose the boundary condition \bg \label{bc-1} u(\pm d/2)~ = ~ u_\gamma ~\approx ~ 0 \nd where $u_\gamma$ denotes the position of the seven-brane {\it closest} to the boundary. The Nambu-Goto action for the string connecting this seven-brane is: \bg \label{NG-1} && S_{\rm string}= {T_0\over 2\pi}\int d\sigma d\tau \Big[\sqrt{-{\rm det}\left[(g_{\mu\nu} + \partial_\mu\phi\partial_\nu\phi)\partial_a X^\mu \partial_b X^\nu\right]} + {1\over 2} \epsilon^{ab} B_{ab} + J(\phi)\nonumber\\ && ~~~~~~~~+ \epsilon^{ab} \partial_a X^m \partial_b X^n ~\bar\Theta~ \Gamma_m \Gamma^{abc....} \Gamma_n ~\Theta ~F_{abc....} + {\cal O}(\Theta^4)\Big] \nd where $a, b=1,2$, $\partial_1\equiv \frac{\partial}{\partial \tau}$, $\partial_2\equiv \frac{\partial}{\partial \sigma}$. The other fields appearing in the action are the pull-backs of the NS $B$ field $B_{ab}$, the dilaton coupling $J(\phi)$ and the RR field strengths $F_{abc..}$. It is clear that once we switch off the fermions, i.e. $\Theta = \bar\Theta = 0$, the RR fields decouple. The $B_{\rm NS}$ field does couple to the fundamental string but, as we discussed before, in Region 3 we don't expect to see any three-form field strengths.
This is because the amount of $B_{\rm NS}$ that could leak out from Region 2 to Region 3 is: \bg\label{leaking} B_{\rm NS} ~= ~ M {\cal S} [1-f(r)] ~ = ~ M{\cal S} ~e^{-\alpha(r - r_0)}, ~~~~ r > r_0 \nd where ${\cal S}$ is the two-form: \bg\label{sdefin} &&{\cal S} = g_s \left(b_1(r)\cot\frac{\theta_1}{2}\,d\theta_1+b_2(r)\cot\frac{\theta_2}{2}\,d\theta_2\right)\wedge e_\psi - {g_sb_4(r)\over 12\pi}\sin\theta_2\,d\theta_2\wedge d\phi_2\\ && +{3g_s\over 4\pi}\left[\left(1+g_sN_f -{1\over r^{2g_sN_f}} + {9a^2g_sN_f\over r^2}\right) \log\left(\sin\frac{\theta_1}{2}\sin\frac{\theta_2}{2}\right) + b_3(r)\right] \sin\theta_1\,d\theta_1\wedge d\phi_1 \nonumber\\ &&-{g_s\over 12\pi}\Bigg(2 -{36a^2g_sN_f\over r^2} + 9g_sN_f -{1\over r^{16g_sN_f}} - {1\over r^{2g_sN_f}} + {9a^2g_sN_f\over r^2}\Bigg)\sin\theta_2\,d\theta_2\wedge d\phi_2\nonumber \nd and the $b_n$ have been defined before. We see that $B_{\rm NS}$ not only has an inverse $r$ fall-off, but also a strong exponential decay because $\alpha \gg 1$. This is the main reason why there are no NS or RR three-forms in Region 3, making our computation of the Wilson loop considerably easier than in the pure Klebanov-Strassler case. On the other hand the dilaton {\it will} couple {\it additionally} via the $J(\phi)$ term. Although this coupling of $\phi$ is not to the $X^\mu$, we can still control this coupling by arranging the other seven-branes such that: \bg\label{sevbrarr} {\rm {\bf Re}}\left(\sum_{i=1}^{n_1} {3z^n_i\over z^n} - \sum_{j=1}^{n_2} {{\widetilde z}^n_j \over z^n}\right) ~ < ~ \epsilon ~~~~~ {\rm for}~~ 0\le n \le m_o \nd with $\epsilon$ very small and $m_o$ a sufficiently large number.
Under this condition the dilaton will be essentially constant and the axio-dilaton $\tau$ would behave as: \bg\label{axdbel} \tau ~ = ~ \tau_0 + \sum_{n = 1}^\infty {{\cal C}_n \over r^n} + i\sum_{n > m_o}^\infty {{\cal D}_n \over r^n} \nd so that its contribution to the NG action can be ignored, although the ${\cal B}_l u^l$ contribution still dominates, because the seven-branes continue to affect the geometry from their energy-momentum tensors and the axion charges. In this limit both the string and Einstein frame metrics are identical and the background dilaton is \bg\label{bgdil} \phi = {\rm log}~g_s - g_s{\cal D}_{n+m_o} u^{n+m_o} + {\cal O}(g_s^2) \nd which, in the limit $g_s \to 0$, will be dominated by the constant term (note that $m_o$ is fixed). Because of this form, the NG string will see a slightly different background metric as evident from \eqref{NG-1}. Thus once the dust settles, using the metric \eqref{reg3met} with the embedding $X^\mu$ given by \eqref{ws-1}, one can easily show that at zero temperature the NG action is given by: \bg \label{NG-2} S_{\rm{NG}}=\frac{T}{2\pi}\int_{-{d\over 2}}^{+{d\over 2}} \frac{dx}{u^2}\sqrt{\Big({\cal A}_n u^n\Big)^2 + \Big[{\cal B}_m u^m + 2g_s^2 {\widetilde{\cal D}}_{n+m_o} {\widetilde{\cal D}}_{l+m_o} {\cal A}_k u^{n+l+k+2m_o} + {\cal O}(g_s^4)\Big] \left(\frac{\partial u}{\partial x}\right)^2 }\nonumber\\ \nd where we have used $\int dt=T/T_0 \equiv T$ (with $T_0 \equiv 1$ henceforth), ${\widetilde{\cal D}}_{n+m_o} = (n+m_o){\cal D}_{n+m_o}$; and ${\cal A}_n, {\cal B}_n$ and ${\cal D}_{n+m_o}$ are now defined for the choices of the angular coordinates given in \eqref{ws-1}.
The above action can be condensed if we redefine: \bg\label{redefine} {\cal B}_m u^m + 2g_s^2 {\widetilde{\cal D}}_{n+m_o} {\widetilde{\cal D}}_{l+m_o} {\cal A}_k u^{n+l+k+2m_o} + {\cal O}(g_s^4) ~ \equiv ~ {\cal G}_l u^l \nd which means that the constraint equation, i.e. $\partial_1 T^1_1 = 0$ with $T^1_1$ the stress-tensor, for $u(x)$ derived from the action (\ref{NG-2}) using \eqref{redefine} can be written as \bg \label{EL-1} \frac{d}{dx}\left(\frac{\left({\cal A}_n\ u^n\right)^2}{u^2\sqrt{\left({\cal A}_m\ u^m\right)^2 + {\cal G}_m u^m\left(\frac{\partial u}{\partial x}\right)^2 }}\right) ~= ~0 \nd implying that: \bg\label{feqn} \frac{\left({\cal A}_n u^n\right)^2}{u^2\sqrt{\left({\cal A}_m u^m\right)^2 + {\cal G}_m u^m ~u'(x)^2}} ~ = ~ C_o \nd where $C_o$ is a constant, and $u'(x)\equiv\frac{\partial u}{\partial x}$. This constant $C_o$ can be determined in the following way: as we have the endpoints of the string at $x=\pm d/2$, by symmetry the string will be U-shaped, and if $u_{\rm max}$ is the maximum value of $u$, we can define $u(0)= u_{\rm max}$ and $u'(x=0)=0$. Plugging this into \eqref{feqn} we get: \bg \label{C} C_o ~= ~ \frac{{\cal A}_n u_{\rm max}^n }{u_{\rm max}^2} \nd Once we have $C_o$, we can use \eqref{EL-1} to get the following simple differential equation: \bg \label{EL-2} {du\over dx} ~= ~ \pm \frac{1}{C_o\sqrt{{\cal G}_m u^m}}\left[\frac{\left({\cal A}_n u^n\right)^4}{u^4} -C_o^2\left({\cal A}_m u^m\right)^2\right]^{1/2} \nd which in turn can be used to write $x(u)$ as: \bg \label{EL-3} x(u) ~= ~ C_o \int_{u_{\rm max}}^u dw \frac{w^2\sqrt{{\cal G}_m w^m}}{\left({\cal A}_n w^n\right)^2} \left[1-\frac{C_o^2 w^4}{\left({\cal A}_m w^m\right)^2}\right]^{-1/2} \nd where we have used $x(u_{\rm max})=0$.
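The passage from \eqref{feqn} to \eqref{EL-2} is a one-line algebraic step that can be cross-checked symbolically: substituting $u'(x)$ from \eqref{EL-2} back into the left-hand side of \eqref{feqn} must return $C_o$ identically. A minimal sympy sketch, where the series ${\cal A}_n u^n$ and ${\cal G}_m u^m$ are truncated at low orders and evaluated at an arbitrary rational point (both choices are ours, purely for the check):

```python
import sympy as sp

u, Co, a2, a3, g2 = sp.symbols('u C_o a_2 a_3 g_2', positive=True)

A = 1 + a2*u**2 + a3*u**3   # truncated A_n u^n series with A_0 = 1, A_1 = 0
G = 1 + g2*u**2             # truncated G_m u^m series

# u'(x) from eq. (EL-2)
up = sp.sqrt(A**4/u**4 - Co**2*A**2)/(Co*sp.sqrt(G))

# left-hand side of the first integral, eq. (feqn)
lhs = A**2/(u**2*sp.sqrt(A**2 + G*up**2))

# evaluate at an exact rational point inside the allowed range: the result is C_o
point = {u: sp.Rational(7, 10), Co: sp.Rational(9, 10),
         a2: sp.Rational(3, 10), a3: sp.Rational(1, 5), g2: sp.Rational(1, 10)}
assert sp.simplify(lhs.subs(point) - Co.subs(point)) == 0
```

The $\sqrt{{\cal G}_m u^m}$ factor drops out entirely, which is why the first integral fixes $C_o$ in terms of ${\cal A}_n$ alone, as in \eqref{C}.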
Now using the boundary condition given in \eqref{bc-1}, i.e. $x(u=u_\gamma)=d/2$, and defining $w = u_{\rm max} v, \epsilon_o = {u_\gamma\over u_{\rm max}}$ we have \bg \label{D-1} d~ = ~2u_{\rm max} \int_{\epsilon_o}^{1} dv ~v^2\frac{\sqrt{{\cal G}_m u_{\rm max}^m v^m} \left({\cal A}_n u_{\rm max}^n\right)}{\left({\cal A}_m u_{\rm max}^m v^m\right)^2} \left[1-v^4\left(\frac{{\cal A}_n u_{\rm max}^n}{{\cal A}_m u_{\rm max}^m v^m}\right)^2\right]^{-1/2} \nd At this stage we can assume all ${\cal A}_n > 0$. This is because for ${\cal A}_n > 0$ we can clearly have degrees of freedom in the gauge theory growing towards the UV, which is an expected property of models with RG flows. Of course this is done to simplify the subsequent analyses. Keeping ${\cal A}_n$ arbitrary will also allow us to derive the linear confinement behavior, but this case will require a more careful analysis. We will leave this for future work. Note also that similar behavior is seen for the Klebanov-Strassler model, and we have already discussed how degrees of freedom run in Regions 2 and 3. Another obvious condition is that $d$, which is the distance between the quarks, cannot be imaginary. From (\ref{D-1}) we can see that the integral becomes complex for \bg \label{real-1} {\cal F}(v)~\equiv~ v^4\left(\frac{{\cal A}_n u_{\rm max}^n}{{\cal A}_m u_{\rm max}^m v^m}\right)^2~ > ~ 1 \nd whereas for ${\cal F}(v)= 1$ the integral becomes singular. Then for $d$ to be always real we must have \bg\label{real-2} {\cal F}(v)~ \leq ~ 1 \nd We can now use, without loss of generality, ${\cal A}_0 = 1$ and ${\cal A}_1 = 0$ in units of $L^2$. Such a choice is of course consistent with the supergravity solution for our background (as evident from \eqref{cala}). Therefore analyzing the condition (\ref{real-2}), one easily finds that we must have \bg\label{real-3} {1\over 2}(m+1){\cal A}_{m + 3}\ u_{\rm max}^{m + 3} ~ \leq ~ 1 \nd for $d$ to be real.
</gr-replace>
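The origin of \eqref{real-3} can be made explicit for a truncated series: since ${\cal F}(1)=1$ identically, reality of $d$ requires ${\cal F}(v)$ to approach $1$ from below, i.e. ${\cal F}'(1)\ge 0$, which for ${\cal A}_0=1, {\cal A}_1=0$ is precisely \eqref{real-3}. A sympy sketch with the series truncated at ${\cal A}_4$ (a truncation we impose only for this check; note that ${\cal A}_2$ drops out, consistent with \eqref{real-3} starting at ${\cal A}_3$):

```python
import sympy as sp

v, u = sp.symbols('v u', positive=True)
A2, A3, A4 = sp.symbols('A_2 A_3 A_4', positive=True)

S = 1 + A2*(u*v)**2 + A3*(u*v)**3 + A4*(u*v)**4   # A_n u^n v^n with A_0 = 1, A_1 = 0
F = v**4*(S.subs(v, 1)/S)**2                      # cal F(v) of eq. (real-1)

assert sp.simplify(F.subs(v, 1) - 1) == 0         # F(1) = 1 for any A_n

# F'(1) is proportional to 1 - (1/2)(m+1) A_{m+3} u^{m+3}; A_2 cancels out
Fp1 = sp.simplify(sp.diff(F, v).subs(v, 1))
target = 4*(1 - (A3*u**3 + 2*A4*u**4)/2)/S.subs(v, 1)
assert sp.simplify(Fp1 - target) == 0
```

Thus ${\cal F}'(1)\ge 0$ holds exactly when ${1\over 2}({\cal A}_3 u_{\rm max}^3 + 2{\cal A}_4 u_{\rm max}^4) \le 1$, the $m$-summed form of \eqref{real-3} for this truncation.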
This condition puts an upper bound on $u_{\rm max}$ and we can use this to constrain the fundamental string to lie completely in Region 3 as depicted in Figure 5 earlier. Observe that for AdS spaces, ${\cal A}_n=0$ for $n>0$ and hence there is no upper bound for $u_{\rm max}$. This is also the main reason why we see confinement using our background but not from the AdS backgrounds. Furthermore one might mistakenly think that a generic Klebanov-Strassler background should show confinement because the space is physically cut-off due to the presence of a blown-up $S^3$. Although such a scenario implies a $u_{\rm max}$ for the fundamental string, this does not naturally lead to confinement because, due to the presence of logarithmically varying $B_{\rm NS}$ fields, there are UV divergences in the Wilson loop. These divergences {\it cannot} be removed by simple regularization schemes \cite{cschu}. Coming back to \eqref{NG-2} we see that it can be further simplified. Using (\ref{feqn}), (\ref{C}) and (\ref{EL-2}) in (\ref{NG-2}), we can write it as an integral over $u$: \bg \label{NG-3} S_{\rm NG}&=&~\frac{T}{\pi} \int_{u_\gamma}^{u_{\rm max}} {du\over u^2} \sqrt{{\cal G}_l u^l} \left[1-\frac{C_o^2 u^4}{\left({\cal A}_m u^m\right)^2}\right]^{-1/2}\nonumber\\ &=&~\frac{T}{\pi}\frac{1}{u_{\rm max}}\int_{\epsilon_o}^1 {dv\over v^2} \sqrt{{\cal G}_m u_{\rm max}^m v^m} \left[1-v^4\left(\frac{{\cal A}_n u_{\rm max}^n}{{\cal A}_m u^m_{\rm max} v^m}\right)^2\right]^{-1/2} \nd where in the second equality we have taken $v= u/u_{\rm max}$. This simplified action \eqref{NG-3} is, however, not the full story: it is divergent in the limit $\epsilon_o \to 0$. We isolate the divergent part of the above integral (\ref{NG-3}) by first computing it as a function of $\epsilon_o$.
The result is \bg S_{\rm NG}&\equiv& S_{\rm NG}^{\rm I} + S_{\rm NG}^{\rm II} ~ = ~ \frac{T}{\pi}\frac{1}{u_{\rm max}}\int_{\epsilon_o}^1 {dv\over v^2} \sqrt{{\cal G}_m u_{\rm max}^m v^m}\nonumber\\ &+&\frac{T}{\pi}\frac{1}{u_{\rm max}}\int_{\epsilon_o}^1 {dv\over v^2} \sqrt{{\cal G}_m u_{\rm max}^m v^m} \left\{\left[1-v^4\left(\frac{{\cal A}_n u_{\rm max}^n}{{\cal A}_m u^m_{\rm max} v^m}\right)^2\right]^{-1/2}-1\right\} \nd Now by expanding $\sqrt{{\cal G}_m u_{\rm max}^m v^m}={\widetilde{\cal G}}_lv^l$ we can compute the first integral to be \bg \label{NG-3A} S_{\rm NG}^{\rm I}&=&\frac{T}{\pi}\frac{1}{u_{\rm max}}\left(-{\widetilde{\cal G}}_0+\frac{{\widetilde{\cal G}}_0} {\epsilon_o}+\sum_{l=2}\frac{{\widetilde{\cal G}}_l}{l-1}+{\cal O}(\epsilon_o)+..\right) \nd where ${\widetilde{\cal G}}_0 = {\cal G}_0, ~{\widetilde{\cal G}}_1 = {1\over 2} {\cal G}_1 u_{\rm max}$ and so on. The second integral becomes \bg \label{NG-3B} S_{\rm NG}^{\rm II}&=&\frac{T}{\pi}\frac{1}{u_{\rm max}}\int_0^1 {dv\over v^2} \sqrt{{\cal G}_m u_{\rm max}^m v^m} \left\{\left[1-v^4\left(\frac{{\cal A}_n u_{\rm max}^n}{{\cal A}_m u^m_{\rm max} v^m}\right)^2\right]^{-1/2}-1\right\}+{\cal O}(\epsilon_o^3)\nonumber\\ \nd where the $\epsilon_o$ dependence here appears to ${\cal O}(\epsilon^3_o)$; and we have set ${\cal G}_1=0$ without loss of generality. 
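The $\epsilon_o$ structure quoted in \eqref{NG-3A} just comes from integrating the ${\widetilde{\cal G}}_l v^{l-2}$ series term by term; a sympy sketch for a truncation at $l=3$ (with ${\widetilde{\cal G}}_1 = 0$, as in the text):

```python
import sympy as sp

v, eps = sp.symbols('v epsilon', positive=True)
G0, G2, G3 = sp.symbols('G_0 G_2 G_3', positive=True)   # tilde-G coefficients

series = G0 + G2*v**2 + G3*v**3
I1 = sp.integrate(series/v**2, (v, eps, 1))

# eq. (NG-3A): -G_0 + G_0/eps + sum_{l>=2} G_l/(l-1) + O(eps)
expected = -G0 + G0/eps + G2/(2 - 1) + G3/(3 - 1)
assert sp.expand(I1 - expected + G2*eps + G3*eps**2/2) == 0
```

The only divergence is the ${\widetilde{\cal G}}_0/\epsilon_o$ pole, which is precisely the piece removed by the subtraction below.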
Now combining the results in (\ref{NG-3A}) and (\ref{NG-3B}), we can obtain the renormalized action by subtracting the divergent ${\cal O}(1/\epsilon_o)$ term in the limit $\epsilon_o \to 0$, with the following result \bg \label{NG-4} S_{\rm NG}^{\rm ren}&=&\frac{T}{\pi}\frac{1}{u_{\rm max}} \Bigg\{-{\widetilde{\cal G}}_0+ \sum_{l=2}\frac{{\widetilde{\cal G}}_l}{l-1} - \int_0^1 {dv\over v^2} \sqrt{{\cal G}_m u_{\rm max}^m v^m} + {\cal O}(g_s^2) \nonumber\\ &+& \int_0^1 {dv\over v^2} \sqrt{{\cal G}_m u_{\rm max}^m v^m} \left[1-v^4\left(\frac{{\cal A}_n u_{\rm max}^n}{{\cal A}_m u^m_{\rm max} v^m}\right)^2\right]^{-1/2} + {\cal O}(\epsilon_o)\Bigg\} \nd where the third term in \eqref{NG-4}, including the ${\cal O}(g_s^2)$ correction, is related to the action for a straight string in this background in the limit $g_s \to 0$. Our subtraction scheme is more involved because the straight string sees a complicated metric due to the background dilaton and the non-Ricci-flat unwarped metric. This effect is {\it independent} of any choice of the warp factor. We expect this action to be finite in the limit $\epsilon_o \to 0$. Once we have the action, we should use this to compute the $Q\bar{Q}$ potential through (\ref{Vqq}). Looking at (\ref{D-1}) we observe that the relation between $d$ and $u_{\rm max}$ is parametric and can be quite involved depending on the coefficients ${\cal A}_n$. If we have ${\cal A}_n=0, {\cal G}_n=0$ for $n>0$, we recover the well-known AdS result, namely: $d\sim u_{\rm max}$ and $V_{Q\bar{Q}}\sim {1\over d}$. But in general (\ref{D-1}) and (\ref{NG-4}) should be solved together to obtain the potential. As they stand, \eqref{D-1} and \eqref{NG-4} are both rather involved, so to find some correlation between them we need to go to the limiting behavior of $u_{\rm max}$. Therefore in the following, we will study the behavior of $d$ and $S_{\rm NG}^{\rm ren}$ for the cases where $u_{\rm max}$ is large and small.
\subsubsection{Quark-Antiquark potential for small $u_{\rm max}$} Let us first consider the case where $u_{\rm max}$ is small. In this limit we can impose $u_\gamma = \epsilon u_{\rm max}$, so that $\epsilon_o = \epsilon$ in all the above integrals and consequently their lower limits will be independent of $u_{\rm max}$. We can also approximate \bg\label{lbaz} {\cal A}_n u^n_{\rm max} ~=~ {\cal A}_0 ~+~ {\cal A}_2 u^2_{\rm max} ~ \equiv~ 1 ~+~ \eta \nd where ${\cal A}_0 = 1$ and ${\cal A}_2 u^2_{\rm max}=\eta$. Using this we can write both (\ref{D-1}) and (\ref{NG-4}) as Taylor series in $\eta$ around $\eta=0$. The result is \bg \label{D-3} && d ~ = ~ \sqrt{\eta}\left[a_0 ~ + ~ a_1\eta + {\cal O}(\eta^2)\right]\nonumber\\ && S_{NG}^{\rm ren} ~ = ~ {T\over \pi} \left[{b_0 + b_1\eta + {\cal O}(\eta^2)\over \sqrt{\eta}}\right] \nd with $a_0, a_1, b_0, b_1$ defined in the following way: \bg\label{abdefn} &&a_0 ~=~ {2\over \sqrt{{\cal A}_2}} \int_0^1 dv {v^2\over \sqrt{1-v^4}} ~=~ {1.1981\over \sqrt{{\cal A}_2}}\nonumber\\ && a_1 ~=~ {2\over \sqrt{{\cal A}_2}} \int_0^1 dv {v^2\over \sqrt{1-v^4}}\left[{1-v^6\over 1-v^4} + \left({{\cal G}_2 - 4{\cal A}_2\over 2{\cal A}_2}\right)v^2\right]\nonumber\\ &&b_0 ~ = ~ \sqrt{{\cal A}_2}\left[-1 + \int_0^1 dv \left({1-\sqrt{1-v^4} \over v^2\sqrt{1-v^4}}\right)\right] ~ = ~ -0.599 \sqrt{{\cal A}_2}\nonumber\\ && b_1 ~ = ~ {1\over 2\sqrt{{\cal A}_2}}\Bigg\{{\cal G}_2 + \int_0^1 dv \left[{2{{\cal A}_2} v^4 + {\cal G}_2 v^2(1+v^2)(1- \sqrt{1-v^4})\over v^2(1+v^2)\sqrt{1-v^4}}\right]\Bigg\} \nd where we have taken ${\cal G}_0 = 1$ and ${\cal G}_1 = 0$ without loss of generality. Note that all the above integrals are independent of $\eta$ (or $u_{\rm max}$) because all ${\cal O}(\epsilon)$ corrections are independent of $u_{\rm max}$. Note also that $b_0 = -\vert b_0\vert$. In this limit, increasing $\eta$ clearly increases $d$, the distance between the quarks.
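The numerical constants in \eqref{abdefn} can be reproduced by direct quadrature. In fact both brackets reduce to the same Beta-function value, $\int_0^1 dv\, v^2/\sqrt{1-v^4} = \sqrt{\pi}\,\Gamma(3/4)/\Gamma(1/4) \approx 0.5991$, so that the short-distance Coulomb coefficient $a_0\vert b_0\vert/\pi$ is the familiar AdS value $4\pi^2/\Gamma(1/4)^4 \approx 0.228$. A quick scipy check (the factors of ${\cal A}_2$ scale out and are set to $1$ here):

```python
import math
from scipy.integrate import quad

# a0*sqrt(A2)/2 = int_0^1 v^2/sqrt(1-v^4) dv
I_a, _ = quad(lambda v: v**2/math.sqrt(1.0 - v**4), 0.0, 1.0)

# |b0|/sqrt(A2) = 1 - int_0^1 (1 - sqrt(1-v^4))/(v^2 sqrt(1-v^4)) dv
I_b, _ = quad(lambda v: (1.0 - math.sqrt(1.0 - v**4))/(v**2*math.sqrt(1.0 - v**4)),
              0.0, 1.0)

a0 = 2.0*I_a                      # ~ 1.1981, as quoted in eq. (abdefn)
b0 = -1.0 + I_b                   # ~ -0.5991
coulomb = a0*abs(b0)/math.pi      # ~ 0.228 = 4 pi^2 / Gamma(1/4)^4
```

The agreement of the last quantity with $4\pi^2/\Gamma(1/4)^4$ is exact, which is one way of seeing that the Coulomb coefficient is warp-factor independent.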
For small $\eta$, $d = a_0\sqrt{\eta}$, and therefore the Nambu-Goto action will become: \bg\label{NGac} S_{\rm NG}^{\rm ren} ~ = ~ T\left[-\left({a_0 \vert b_0\vert\over \pi}\right) {1\over d} ~+~ \left({b_1\over \pi a_0}\right)d + {\cal O}(d^3)\right] \nd where all the constants have been defined in \eqref{abdefn}. Using \eqref{Vqq} we can determine the short-distance potential to be (recall $T_0 = 1$): \bg\label{sdpot} V_{Q\bar{Q}} ~&& = ~ -\left({a_0\vert b_0\vert\over \pi}\right) {1\over d} ~+~ \left({b_1\over \pi a_0}\right)d + {\cal O}(d^3)\nonumber\\ &&= ~ -{0.228\over d} ~+~\left(0.174{\cal G}_2 + 0.095{\cal A}_2\right) d ~ + ~ {\cal O}(d^3) \nd which is dominated by the inverse $d$ behavior, i.e. the expected Coulombic behavior. Note that the coefficient of the Coulomb term is independent of the warp factor and therefore should be universal. This result, in appropriate units, is of the same order of magnitude as the real Coulombic term for the charmonium spectra \cite{karsch, charmonium, brambilla, boschi}. This prediction, along with the overall minus sign, should be regarded as a success of our model (see also \cite{zakahrov} where somewhat similar results have been derived in a string theory inspired model). The second term on the other hand is model dependent, and vanishes in the pure AdS background. Note also that the above computations are valid for an infinitely massive quark-antiquark pair. For lighter quarks we expect the results to differ, and it would be interesting to compare these results with the ones where the quarks are much lighter. \subsubsection{Quark-Antiquark potential for large $u_{\rm max}$} To analyse the quark-antiquark potential for large $u_{\rm max}$ we first define a quantity called $z_{\rm max} \equiv {\widetilde u}^{-1}_{\rm max}$ which will be our small tunable parameter.
We note that the {\it smallest} $z_{\rm max}$ will come from the following equality: \bg\label{real-8} {1\over 2}\sum_{m}(m+1){{\cal A}_{m + 3}\over z_{\rm max}^{m + 3}} ~ = ~ 1 \nd which is the upper bound on the inequality \eqref{real-3}. Furthermore, since we demanded ${\cal A}_n \ge 0$, the above condition will imply that the coefficients ${\cal A}_n$ have to quickly become very small, because in the limit $z_{\rm max} < 1$ \bg\label{condp} \lim_{m\to \infty} ~{m+1\over z_{\rm max}^{m + 3}} ~\to ~\infty \nd which is perfectly consistent with \eqref{cala} because higher ${\cal A}_n$ are proportional to higher powers of $g_sN_f$ and therefore strongly suppressed in the limit $g_sN_f \to 0$. This will also mean that we can retain only a few of the ${\cal A}_n$'s to study the potential for small $z_{\rm max}$. In fact we will soon give an estimate of the largest $n$ that we should keep in our analysis. To determine the distance between the quarks we can again use \eqref{EL-3}. However we have to be careful about a few subtleties that appear due to our choice of the scale $z_{\rm max}$. First of all note that we will use $w = z_{\rm max} v$ in \eqref{EL-3}. This will immediately imply that the lower bound of the integral is no longer the $\epsilon$ that we had in the previous subsection, but is \bg\label{lbe} {u_\gamma\over z_{\rm max}} ~ = ~ {\epsilon u_{\rm max} \over z_{\rm max}} ~ = ~ {\widetilde\epsilon} ~ \to ~0 \nd where $u_{\rm max}$ is the lowest value from the inequality \eqref{real-3} that we chose in the previous subsection (to avoid clutter we use the same notation). Note that ${\widetilde\epsilon}$ is independent of $z_{\rm max}$.
Using this we can now write the distance $d$ between the quark and the antiquark as: \bg\label{dbqaq} d ~ = ~ 2V_0 z^5_{\rm max} \int_{\widetilde\epsilon}^{1/z^{2}_{\rm max}} dv ~v^2 ~{\sqrt{{\cal G}_m v^m z^m_{\rm max}} \over ({\cal A}_nv^nz^n_{\rm max})^2} \Bigg[1 - z^8_{\rm max}{V_0^2\over ({\cal A}_kv^kz^k_{\rm max})^2}\Bigg]^{-1/2} \nd where we have defined $V_0 = {\cal A}_n z^{-n}_{\rm max}$ and a sum over repeated indices is implied as before. Comparing $V_0$ with \eqref{real-8} we see that $V_0$ can be made small if ${\cal A}_2 \ll 1$ (which is consistent with \eqref{cala} and \eqref{lbaz}), as all ${\cal A}_n$ for $n \ge 3$ are very small. Additionally, from \eqref{dbqaq}, the term $z^8_{\rm max} V_0^2$ will imply that it will be sufficient to restrict $V_0$ to the following series: \bg\label{v0zseries} V_0 ~ = ~ 1 ~+~ {{\cal A}_2 \over z^2_{\rm max}}~+~ {{\cal A}_3 \over z^3_{\rm max}}~+~ {{\cal A}_4 \over z^4_{\rm max}} \nd because ${\cal A}_5$ onwards are very small, consistent both with the reality of $d$ in \eqref{dbqaq} and with their $g_sN_f$ dependences from \eqref{cala}.
This means $d$ in \eqref{dbqaq} can be further simplified to: \bg\label{dbeq} d &&= ~ 2V_0 z^5_{\rm max} \int_{\widetilde\epsilon}^{1/z^{2}_{\rm max}} dv ~v^2 ~~ {1-\left({\cal A}_2 -{1\over 2}{\cal G}_2\right) z_{\rm max}^2 v^2 \over \sqrt{1 - z_{\rm max}^8V_0^2 + 2 z_{\rm max}^{10} V_0^2 {\cal A}_2 v^2}}\\ && \approx ~ 2V_0\Bigg\{\left(1+ {1\over 2}z_{\rm max}^8 V_0^2\right){1\over 3 z_{\rm max}} + \left[{1\over 2} \left({\cal G}_2 - 4{\cal A}_2\right)z_{\rm max}^2 + {1\over 4}\left({\cal G}_2 - 8{\cal A}_2\right)V_0^2 z_{\rm max}^{10}\right]{1\over 5 z_{\rm max}^5}\Bigg\}\nonumber \nd Since we have taken both ${\cal A}_2$ and ${\cal B}_2$ to be very small, plugging in the value of $V_0$ from \eqref{v0zseries} we see that $d$ is dominated mostly by inverse $z_{\rm max}$ terms, i.e. \bg\label{ddom} d ~ = ~ {2\over 3z_{\rm max}}\left[1+{{\cal A}_4^2\over 2} + {\cal O}({\cal A}_n^3)\right] + {2\over 3z_{\rm max}^2} {\cal O}({\cal A}_n^2) + {\cal O}\left({1\over z_{\rm max}^3}\right) \nd The renormalised Nambu-Goto action on the other hand takes the following form: \bg\label{rnnmo} S_{\rm NG}^{\rm ren} && =~ {T\over \pi z_{\rm max}}\int_0^{1/z_{\rm max}^2}~{dv\over v^2} \sqrt{{\cal G}_l z^l_{\rm max} v^l}\Bigg[ \left(1 - z^8_{\rm max}{V_0^2\over ({\cal A}_kv^kz^k_{\rm max})^2}\right)^{-1/2} -1\Bigg]\nonumber\\ && ~~~ + {T\over \pi z_{\rm max}}\left[-z_{\rm max}^2 + {{\cal G}_2\over 2} + {{\cal G}_3\over 4}{1\over z_{\rm max}} + {8{\cal G}_4 - {\cal G}_2^2\over 48} {1\over z_{\rm max}^2} + ... \right]\\ && \approx ~ -{T\over 2\pi} (2 + {\cal A}_4^2) z_{\rm max} - {T{\cal A}_4^2\over 2\pi}\left[2{\cal A}_2 - {{\cal G}_2}\left({{\cal A}_3^2\over {\cal A}_4^2} + 2{{\cal A}_2\over {\cal A}_4}\right) - {{\cal G}_2\over {\cal A}_4^2} \right] ~{1\over z_{\rm max}} ~+ ~ {\cal O}\left({1\over z_{\rm max}^2}\right)\nonumber \nd where it is clear that the string action is dominated by the inverse $z_{\rm max}$ terms.
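The leading term of \eqref{ddom} can be seen in the simplest truncation: switching off all higher coefficients, ${\cal A}_n = {\cal G}_n = 0$ for $n \ge 1$ so that $V_0 = 1$ (purely an illustrative assumption), \eqref{dbqaq} collapses to $d = {2\over 3 z_{\rm max}}(1-z_{\rm max}^8)^{-1/2}$, whose expansion reproduces the first bracket of \eqref{dbeq}:

```python
from scipy.integrate import quad

def d_of_z(z):
    # eq. (dbqaq) with A_n = G_n = 0 for n >= 1, V_0 = 1 (illustrative truncation)
    val, _ = quad(lambda v: v**2/(1.0 - z**8)**0.5, 0.0, 1.0/z**2)
    return 2.0*z**5*val

z = 0.3
exact = 2.0/(3.0*z*(1.0 - z**8)**0.5)
approx = (2.0/(3.0*z))*(1.0 + 0.5*z**8)    # leading terms of eq. (dbeq)
```

The higher ${\cal A}_n, {\cal G}_n$ then shift this at the orders displayed in \eqref{ddom}.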
Now substituting \eqref{ddom} and \eqref{rnnmo} in \eqref{Vqq}, we get \bg\label{linpot} V_{Q\bar Q} ~ = ~ {3{\cal A}^2_4\over 4\pi}\left[2{\cal A}_2 - {{\cal G}_2}\left({{\cal A}_3^2\over {\cal A}_4^2} + 2{{\cal A}_2\over {\cal A}_4}\right) - {{\cal G}_2\over {\cal A}_4^2} + ... \right]~d + {\cal O}\left({1\over d}\right) \nd which is the required linear potential between the quark and the antiquark. The above potential can also be rewritten as: \bg\label{potred} V_{Q\bar Q} ~ = ~ \left({{\cal H}_n \alpha^n_{\rm max}\over \pi \alpha^2_{\rm max}}\right) d + {\cal O}\left({1\over d}\right) \nd where $\alpha_{\rm max} \equiv {1\over {\cal A}_4}$ and ${\cal H}_0 = {3{\cal A}_2\over 2}, {\cal H}_1 = -{3{\cal A}_2 {\cal G}_2\over 2}, {\cal H}_2 = -{3{\cal G}_2\over 4}(1+{\cal A}_3^2)$ etc. It will soon become clearer why we want to express the potential \eqref{potred} in this way. However there is one nagging issue that might be bothering the reader, namely, how do we know that the potential \eqref{potred} or equivalently \eqref{linpot} only has a linear term? To answer this question convincingly, we go to the next subsection, where we provide a generic derivation of the linear term. \subsubsection{Generic argument for confinement} In the above subsection we argued for the linear potential taking all ${\cal A}_n$ for $n \ge 1$ to be small. This is consistent with the {\it supergravity} limit of our background, because in this limit we expect $g_s \to 0$ and $g_sN_f \to 0$ with $g_sN \to \infty$. For these choices of ${\cal A}_n$, \eqref{real-8} will imply $z_{\rm max} < 1$ because we expect ${\widetilde u}_{\rm max}$ to take the highest value in Region 3. In such a situation, conditions like \eqref{condp} will be fully under control, and an analysis of the Wilson loop above reproduces the required linear potential at large $d$. However a little thought will tell us that the above derivation cannot be the complete story.
What if $u_{\rm max}$, in appropriate units, is of order ($1-\epsilon$) where $\epsilon \to 0$? In that case its inverse $z_{\rm max}$ is of order 1, so both $u_{\rm max}$ and $z_{\rm max}$ can no longer be good expansion parameters. We may also consider simultaneously the case where $g_s$ is no longer small so that ${\cal A}_n$ for $n \ge 1$ are not small either. Such choices will take us away from the supergravity limit that we have been following. In this limit, we want to ask whether we can still show linear confinement of quarks. Or more generically, we want to study confinement for a choice of $u_{\rm max}$ that saturates the inequality \eqref{real-3} but does not presuppose any limiting behavior of $u_{\rm max}, {\cal A}_n$ or ${\cal G}_n$. In the following therefore we will analyze the integrals (\ref{D-1}) and (\ref{NG-4}) in the limit where $u_{\rm max}$ is close to its upper bound set by (\ref{real-3}) (see also \cite{zakahrov}). In particular if ${\bf u}_{\rm max}$ is the upper bound of $u_{\rm max}$, then it is found by solving \bg \label{real-4} {1\over 2}(m+1){\cal A}_{m + 3} {\bf u}_{\rm max}^{m + 3} ~ = ~ 1 \nd We observe that both the integrals (\ref{D-1}) and (\ref{NG-4}) are dominated by the $v\sim 1$ behaviour of the integrands. Near $v=1$ and $u_{\rm max}\rightarrow {\bf u}_{\rm max}$ the distance $d$ between the quark and the antiquark can be written as: \bg \label{D-4} d&& =~~2\frac{\sqrt{{\cal G}_m {\bf u}_{\rm max}^m}{\bf u}_{\rm max}}{{\cal A}_n {\bf u}_{\rm max}^n} \int_0^{1} \frac{dv}{\sqrt{{\bf A} (1-v)+{\bf B}(1-v)^2}}\nonumber\\ && = -2 \frac{\sqrt{{\cal G}_m {\bf u}_{\rm max}^m}{\bf u}_{\rm max}}{{\cal A}_n {\bf u}_{\rm max}^n} \left[{{\rm log}{\bf A}-{\rm log}\left(2\sqrt{{\bf B}({\bf A}+{\bf B})}+2{\bf B}+{\bf A}\right)\over \sqrt{\bf B}}\right] \nd where we note that the lower limit has been taken to 0. This will not change any of our conclusions, as we will soon see.
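The closed form quoted in \eqref{D-4} is the elementary integral $\int_0^1 dw/\sqrt{{\bf A}w+{\bf B}w^2}$ (with $w=1-v$), and its ${\rm log}\,{\bf A}$ piece is what will diverge as ${\bf A}\to 0$. A quick numerical confirmation for illustrative values of ${\bf A}$ and ${\bf B}$ (our choices, purely for the check):

```python
import math
from scipy.integrate import quad

def closed_form(A, B):
    # the bracketed expression of eq. (D-4), up to the overall prefactor
    return -(math.log(A) - math.log(2.0*math.sqrt(B*(A + B)) + 2.0*B + A))/math.sqrt(B)

results = []
for A in (1.0, 0.1, 1e-4):      # B fixed; A -> 0 mimics u_max -> bold-u_max
    B = 2.0
    num, _ = quad(lambda w: 1.0/math.sqrt(A*w + B*w**2), 0.0, 1.0, limit=200)
    results.append((num, closed_form(A, B)))
```

The quadrature matches the closed form, and the value grows like $-{\rm log}\,{\bf A}/\sqrt{\bf B}$ as ${\bf A}$ shrinks, which is the divergence exploited below.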
On the other hand, the renormalised Nambu-Goto action for the string now becomes: \bg \label{NG-6} S_{\rm NG}^{\rm ren}&&= ~ {T\over \pi}\frac{\sqrt{{\cal G}_m {\bf u}_{\rm max}^m}}{{\bf u}_{\rm max}} \left[\int_0^{1} \frac{dv}{\sqrt{{\bf A} (1-v)+{\bf B}(1-v)^2}} -1\right] - {T\over \pi {\bf u}_{\rm max}} + {\cal O}({\bf u}^2_{\rm max})\nonumber\\ && = ~ -{T\over \pi}\frac{\sqrt{{\cal G}_m {\bf u}_{\rm max}^m}}{{\bf u}_{\rm max}} \left[{{\rm log}{\bf A}-{\rm log}\left(2\sqrt{{\bf B}({\bf A}+{\bf B})}+2{\bf B}+{\bf A}\right)\over \sqrt{\bf B}} -1\right]\nonumber\\ && ~~~~~~ - {T\over \pi {\bf u}_{\rm max}} + {\cal O}({\bf u}^2_{\rm max}) \nd where ${\bf A}$ and ${\bf B}$ are defined as: \bg \label{AB} {\bf A}&&= ~ 4 - 2\frac{n{\cal A}_n {u}_{\rm max}^n}{{\cal A}_m {u}_{\rm max}^m}\\ {\bf B}&& = ~ 8\frac{n{\cal A}_n {u}_{\rm max}^n}{{\cal A}_m {u}_{\rm max}^m} -3\left(\frac{n{\cal A}_n {u}_{\rm max}^n}{{\cal A}_m {u}_{\rm max}^m}\right)^2 + \frac{(n^2-n){\cal A}_n {u}_{\rm max}^n}{{\cal A}_m{u}_{\rm max}^m} - 6\nonumber \nd Observe that in the integrals (\ref{D-4}) and (\ref{NG-6}) we have to take the limit $u_{\rm max}\rightarrow {\bf u}_{\rm max}$, so ${\bf A}, {\bf B}$ should be evaluated in the same limit. Interestingly, comparing \eqref{AB} to \eqref{real-4} we see that \bg \label{AB1} \lim_{u_{\rm max}\rightarrow {\bf u}_{\rm max}}~{\bf A}\rightarrow 0 \nd i.e. ${\bf A}$ vanishes when computed exactly at ${\bf u}_{\rm max}$.
The other quantity ${\bf B}$ remains finite at that point and in fact behaves as: \bg\label{bbeh} {\bf B} = {n^2{\cal A}_n {\bf u}^n_{\rm max} \over {\cal A}_m {\bf u}^m_{\rm max}} - 4 ~ > ~0 \nd The above computation means that the distance $d$ between the quark and the antiquark, and the Nambu-Goto action, will have the following dominant behavior: \bg\label{dNG} d && = ~ \lim_{\epsilon\to 0}~ \frac{2\sqrt{{\cal G}_m {\bf u}_{\rm max}^m}{\bf u}_{\rm max}}{{\cal A}_n {\bf u}_{\rm max}^n} ~ {{\rm log}~\epsilon \over \sqrt{\bf B}}\nonumber\\ S_{\rm NG}^{\rm ren}&& = ~ \lim_{\epsilon\to 0}~ {T\over \pi}\frac{\sqrt{{\cal G}_m {\bf u}_{\rm max}^m}}{{\bf u}_{\rm max}}~ {{\rm log}~\epsilon \over \sqrt{\bf B}} \nd which means both of them have identical logarithmic divergences. Thus the {\it finite} quantity is the {\it ratio} between the two terms in \eqref{dNG}. This gives us: \bg\label{ratio} {S_{\rm NG}^{\rm ren}\over d} ~ = ~ {T\over \pi}{{\cal A}_n {\bf u}_{\rm max}^n \over {\bf u}_{\rm max}^2} ~ = ~ T \times {\rm constant} \nd Now using the identity \eqref{Vqq} and the above relation \eqref{ratio} we get our final result: \bg\label{lipo} V_{Q\bar Q} ~ = ~ \left({{\cal A}_n {\bf u}_{\rm max}^n \over \pi {\bf u}_{\rm max}^2}\right) ~d \nd which is the required linear potential between the quark and the antiquark. Observe that the above potential has exactly the same form as \eqref{potred}, justifying the fact that ${\cal O}(d^2)$ terms are not generated for this case. Before we end this section one comment is in order. The result for linear confinement only depends on the existence of ${\bf u}_{\rm max}$, which comes from the constraint equation \eqref{real-4}. We have constructed the background such that ${\bf u}_{\rm max}$ lies in Region 3, although a more generic case is essentially doable, albeit technically challenging, without necessarily revealing new physics.
For example, the case when ${\bf u}^{-1}_{\rm max}$ is equal to the size of the blown-up $S^3$ in the IR will require us to consider a Wilson loop that goes all the way to Region 1. The analysis remains essentially identical to what we did before, except that in Regions 2 and 1 we have to additionally consider $B_{\rm NS}$ fields of the form $u^{\epsilon_{(\alpha)}}$ and ${\rm log}~u$ respectively. Of course both the metric and the dilaton will also have non-trivial $u$-dependences in these regions. One good thing however is that the Wilson loop computation has no UV or IR divergences whatsoever, even though the analysis is now technically more challenging. We expect to get linear behavior similar to \eqref{lipo} here too. We will however leave a more detailed exposition of this for future work. \subsection{Computing the Nambu-Goto Action: Non-Zero Temperature} After studying the zero-temperature behavior it is now time to discuss the case when we switch on a non-zero temperature, i.e. make $g(u) < 1$, or equivalently make the inverse horizon radius $u_h$ finite in \eqref{reg3met}, where \bg\label{g} g(u)=1-{u^4\over u_h^4} \nd Choosing the same quark world line (\ref{qline}) and the string embedding (\ref{ws-1}) with the same boundary condition (\ref{bc-1}), but now in Euclidean space with a compact time direction, the string action at finite temperature can be written as \bg \label{NGfinT} S_{\rm{NG}}=\frac{T}{2\pi}\int_{-{d\over 2}}^{+{d\over 2}} \frac{dx}{u^2}\sqrt{g(u)\Big({\cal A}_n u^n\Big)^2 + \Bigg[{\cal G}_m u^m - {2g_s^2 {\widetilde{\cal D}}_{n+m_o} {\widetilde{\cal D}}_{l+m_o} {\cal A}_k u^{4+n+l+k+2m_o} \over u_h^4}\Bigg] \left(\frac{\partial u}{\partial x}\right)^2 }\nonumber\\ \nd where ${\cal G}_m u^m$ is defined in \eqref{redefine} and the correction to ${\cal G}_m u^m$ is suppressed by $g_s^2$ as well as $u^4/u_h^4$, because the background dilaton and the non-zero temperature induce a slightly different world-sheet metric than what one would have
naively taken. To avoid clutter, one can further redefine these corrections as: \bg\label{redefagain} {\cal G}_m u^m - {2g_s^2 {\widetilde{\cal D}}_{n+m_o} {\widetilde{\cal D}}_{l+m_o} {\cal A}_k u^{4+n+l+k+2m_o} \over u_h^4} ~\equiv~ {\widetilde{\cal D}}_l u^l \nd Minimizing this action gives the equation of motion for $u(x)$, and using the exact same procedure as at zero temperature, the corresponding expression for the distance between the quarks can be written as: \bg \label{DT-1} d&=&2u_{\rm max} \int_{0}^{1} dv \Bigg\{v^2 \sqrt{{\widetilde{\cal D}}_m u_{\rm max}^m v^m} \frac{\sqrt{1-\frac{u_{\rm max}^4}{u_h^4}}{\cal A}_n u_{\rm max}^n} {\left(1-\frac{v^4u_{\rm max}^4}{u_h^4}\right)\left({\cal A}_m u_{\rm max}^m v^m\right)^2}\nonumber\\ && \left[1-v^4\frac{\left(1-\frac{u_{\rm max}^4}{u_h^4}\right)}{\left(1-\frac{v^4u_{\rm max}^4}{u_h^4}\right)} \left(\frac{{\cal A}_n u_{\rm max}^n}{{\cal A}_m u_{\rm max}^m v^m}\right)^2\right]^{-1/2}\Bigg\} \nd Once we have $d$, the renormalized Nambu-Goto action can also be written following a similar procedure. The result is \bg \label{NGT-3} S_{\rm NG}^{\rm ren}&=&\frac{T}{\pi}\frac{1}{u_{\rm max}} \Bigg\{-{\widehat{\cal D}}_0+ \sum_{l=2}\frac{{\widehat{\cal D}}_l}{l-1} - \int_0^1 {dv\over v^2} \sqrt{{\widetilde{\cal D}}_m u_{\rm max}^m v^m} + {\cal O}(g_s^2)\\ &+& \int_0^1 {dv\over v^2} \sqrt{{\widetilde{\cal D}}_m u_{\rm max}^m v^m} \left[1-v^4\frac{\left(1-\frac{u_{\rm max}^4}{u_h^4}\right)}{\left(1-\frac{v^4u_{\rm max}^4}{u_h^4}\right)} \left(\frac{{\cal A}_n u_{\rm max}^n}{{\cal A}_m u^m_{\rm max} v^m}\right)^2\right]^{-1/2} + {\cal O}(\epsilon_o)\Bigg\}\nonumber \nd which is similar in form to \eqref{NG-4}, and reproduces it in the limit $u_h \to \infty$. Also as in \eqref{NG-4}, we have defined $\sqrt{{\widetilde{\cal D}}_m u_{\rm max}^m v^m} \equiv {\widehat{\cal D}}_l v^l$.
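As a sanity check on \eqref{DT-1}, one can verify numerically that it reduces smoothly to the zero-temperature distance \eqref{D-1} as $u_h \to \infty$. The sketch below uses illustrative truncations of the series (our assumption, not the actual supergravity coefficients):

```python
from scipy.integrate import quad

A2, G2, umax = 0.1, 0.05, 0.5    # illustrative truncations and turning point

def A(u): return 1.0 + A2*u*u    # stands for the A_n u^n series, truncated
def G(u): return 1.0 + G2*u*u    # stands for the tilde-D_m u^m series, truncated

def d_quark(uh):
    """Eq. (DT-1); uh = inf recovers the zero-temperature eq. (D-1)."""
    g = lambda u: 1.0 - (u/uh)**4
    def integrand(v):
        u = umax*v
        F = 1.0 - v**4*(g(umax)/g(u))*(A(umax)/A(u))**2
        return v*v*G(u)**0.5*g(umax)**0.5*A(umax)/(g(u)*A(u)**2*F**0.5)
    val, _ = quad(integrand, 0.0, 1.0)
    return 2.0*umax*val

d_zero = d_quark(float('inf'))
d_cold = d_quark(100.0)
d_hot = d_quark(2.0)
```

For large $u_h$ the finite-temperature distance coincides with the zero-temperature one to the expected ${\cal O}(u_{\rm max}^4/u_h^4)$ accuracy.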
Now, just like in the zero temperature case, requiring that $d$ be real sets an upper bound on $u_{\rm max}$, which we again denote by ${\bf u}_{\rm max}$; it is found by solving the following equation: \bg\label{real-5} {1\over 2} (m+ 1){\cal A}_{m+3} {\bf u}_{\rm max}^{m+3} + {1\over j!}\prod_{k=0}^{j-1}\left(k-\frac{1}{2}\right)\left(\frac{{\bf u}_{\rm max}^4}{u_h^4}\right)^j \left[{\cal A}_{l}{\bf u}_{\rm max}^{l}\left({l\over 2}+ 2j - 1\right)\right] ~= ~1\nonumber\\ \nd Once we fix $u_h$ and the coefficients of the warp factor ${\cal A}_n$, ${\bf u}_{\rm max}$ will be known. We will assume that ${\bf u}_{\rm max}$ lies in Region 3. The rest of the analysis is similar to the zero temperature case, although the final conclusions will be quite different. To proceed further, let us define certain new variables in the following way: \bg\label{newvar} {\widetilde{\cal A}}_l ~& = & ~ \sum_m {{\cal A}_m\over u_h^{l-m}} {1\over \left({l-m\over 4}\right)!} \prod_{k=0}^{{l-m\over 4}-1} \left(k - {1\over 2}\right), ~~~~~ l-m \ge 4 \nonumber\\ &= & ~ 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ l-m < 4\nonumber\\ & = & ~ {\cal A}_l ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ l - m = 0 \nd As before, we observe that for $u_{\rm max}\rightarrow {\bf u}_{\rm max}$, both the integrals (\ref{DT-1}),(\ref{NGT-3}) are dominated by the behaviour of the integrand near $v\sim 1$, where we can write \bg\label{dT} d&& =~~2\frac{\sqrt{{\widetilde{\cal D}}_m {\bf u}_{\rm max}^m}{\bf u}_{\rm max}} {\sqrt{1-{{\bf u}_{\rm max}^4\over {u}^4_{h}}}{\cal A}_n {\bf u}_{\rm max}^n} \int_0^{1} \frac{dv}{\sqrt{{\widetilde{\bf A}} (1-v)+ {\widetilde{\bf B}}(1-v)^2}}\nonumber\\ && = -2 \frac{\sqrt{{\widetilde{\cal D}}_m {\bf u}_{\rm max}^m}{\bf u}_{\rm max}} {\sqrt{1-{{\bf u}_{\rm max}^4\over {u}^4_{h}}}{\cal A}_n {\bf u}_{\rm max}^n} \left[{{\rm log}{\widetilde{\bf A}}-{\rm log}\left(2\sqrt{{\widetilde{\bf B}}({\widetilde{\bf A}}+{\widetilde{\bf B}})} +2{\widetilde{\bf B}}+{\widetilde{\bf
A}}\right)\over \sqrt{\widetilde{\bf B}}}\right] \nd where taking the lower limit of the integral to 0 again does not change any of our conclusions. On the other hand, the renormalised Nambu-Goto action for the string now becomes: \bg \label{NGT} S_{\rm NG}^{\rm ren}&&= ~ {T\over \pi}\frac{\sqrt{{\widetilde{\cal D}}_m {\bf u}_{\rm max}^m}}{{\bf u}_{\rm max}} \left[\int_0^{1} \frac{dv}{\sqrt{{\widetilde{\bf A}} (1-v)+ {\widetilde{\bf B}}(1-v)^2}} -1\right] - {T\over \pi {\bf u}_{\rm max}} + {\cal O}({\bf u}^2_{\rm max})\nonumber\\ && = ~ -{T\over \pi}\frac{\sqrt{{\widetilde{\cal D}}_m {\bf u}_{\rm max}^m}}{{\bf u}_{\rm max}} \left[{{\rm log}{\widetilde{\bf A}}-{\rm log}\left(2\sqrt{{\widetilde{\bf B}}({\widetilde{\bf A}}+{\widetilde{\bf B}})} +2{\widetilde{\bf B}}+{\widetilde{\bf A}}\right)\over \sqrt{\widetilde{\bf B}}} -1\right]\nonumber\\ && ~~~~~~ - {T\over \pi {\bf u}_{\rm max}} + {\cal O}({\bf u}^2_{\rm max}) \nd where ${\widetilde{\bf A}}$ and ${\widetilde{\bf B}}$ are defined exactly as in \eqref{AB} but with ${\cal A}_n$ replaced by ${\widetilde{\cal A}}_n$ given in \eqref{newvar} above. It is also clear that: \bg \label{ABnm} \lim_{u_{\rm max}\rightarrow {\bf u}_{\rm max}}~{\widetilde{\bf A}}\rightarrow 0 \nd and so both \eqref{dT} and \eqref{NGT} have identical logarithmic divergences. 
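The logarithmic closed form quoted above follows from the elementary integral $\int_0^1 dv/\sqrt{{\widetilde{\bf A}}(1-v)+{\widetilde{\bf B}}(1-v)^2}$ and can be checked numerically. The following standalone Python sketch (the values of ${\widetilde{\bf A}}$ and ${\widetilde{\bf B}}$ are arbitrary placeholders) substitutes $1-v=t^2$ to remove the integrable endpoint singularity and compares a midpoint rule against the closed form:

```python
import math

def closed_form(A, B):
    # -(log A - log(2*sqrt(B*(A+B)) + 2*B + A)) / sqrt(B), the form used in the text
    return (math.log(2.0 * math.sqrt(B * (A + B)) + 2.0 * B + A) - math.log(A)) / math.sqrt(B)

def numeric(A, B, n=100000):
    # int_0^1 dv / sqrt(A*(1-v) + B*(1-v)^2); substituting 1 - v = t^2 gives the
    # smooth integrand 2 / sqrt(A + B*t^2) on [0, 1], handled by the midpoint rule
    h = 1.0 / n
    return 2.0 * h * sum(1.0 / math.sqrt(A + B * ((i + 0.5) * h) ** 2) for i in range(n))

for A, B in [(0.5, 2.0), (1e-3, 1.0)]:
    print(closed_form(A, B), numeric(A, B))  # the two values agree
```

As ${\widetilde{\bf A}}\rightarrow 0$ the closed form diverges like $-{\rm log}\,{\widetilde{\bf A}}/\sqrt{{\widetilde{\bf B}}}$, which is precisely the logarithmic divergence shared by $d$ and $S_{\rm NG}^{\rm ren}$.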
This implies that the finite quantity is the ratio of \eqref{NGT} to \eqref{dT}: \bg\label{rationow} {S_{\rm NG}^{\rm ren}\over d} ~ = ~ {T\over \pi} \left({1-{{\bf u}_{\rm max}^4\over {u}^4_{h}}}\right)^{1\over 2}{{\cal A}_n {\bf u}_{\rm max}^n \over {\bf u}_{\rm max}^2} \nd Now using the identity \eqref{wlfe} and the above relation \eqref{rationow} we get our final result: \bg\label{liponow} V_{Q\bar Q} ~ = ~ {\sqrt{1-{{\bf u}_{\rm max}^4\over {u}^4_{h}}}}\left({{\cal A}_n {\bf u}_{\rm max}^n \over \pi {\bf u}_{\rm max}^2}\right) ~d \nd \subsubsection{Analysis of the melting temperature} To determine the behavior of the potential $V_{Q\bar Q}$ as the temperature is raised or lowered (i.e.\ $u_h$ is decreased or increased, respectively), we have to carefully analyse the behavior of ${\bf u}_{\rm max}$ as a function of $u_h$. Varying \eqref{real-5} at fixed ${\cal A}_n$ we observe that \bg\label{diffva} {\delta{\bf u}_{\rm max} \over \delta u_h} ~=~ {{1\over j!} \prod \left(k - {1\over 2}\right) {{\bf u}_{\rm max}^{4j}\over u_h^{4j+1}} {\cal A}_l u^l \left({l\over 2} + 2j -1\right)\over {1\over 2} m(m+1) {\cal A}_m u^m + {1\over j!} \prod \left(k - {1\over 2}\right) {{\bf u}_{\rm max}^{4j-1}\over u_h^{4j}} {\cal A}_l u^l (4j+l)\left({l\over 2} + 2j -1\right)} \nd where the repeated indices are all summed over and the product runs from $k = 0$ to $k = j-1$. Observe that the numerator of \eqref{diffva} is always negative, and for large $u_h$ the denominator will be positive (because we are taking all ${\cal A}_n > 0$). This means $${\delta{\bf u}_{\rm max} \over \delta u_h}~ < ~ 0$$ and therefore as $u_h$ is decreased (i.e.\ the temperature is increased), ${\bf u}_{\rm max}$ increases, making the ratio ${u_{\rm max}^4\over u_h^4}$ increase. This in turn implies that the slope of the potential $V_{Q\bar Q}$ decreases. Therefore there will be a temperature where the slope is minimal and the system shows the property of {\it melting}. 
To start off, let us consider \eqref{real-5} for a simple case where we keep $u_{\rm max}$ only to quartic order\footnote{This is a subtle issue because we are {\it truncating} the series \eqref{real-5}, and especially $u_h$, to only this order to have analytic control on our calculations. A more generic analysis can be done numerically, which we present in the next section.}. This means: \bg\label{quartico} {1\over 2} {\cal A}_3 u_{\rm max}^3 ~+~ \left({\cal A}_4 - {1\over 2u_h^4}\right)u_{\rm max}^4 ~ = ~ 1 \nd This is a quartic equation and one can easily solve it for $u_{\rm max}$. To make the analysis a little simpler, let us also assume ${\cal A}_3 = 0$. Such a choice will immediately give us the following potential between the quark and the antiquark: \bg\label{simpol} V_{Q\bar Q} ~ = ~ \left[\left(1+ {{\cal A}_2\over \pi}\right) + {1\over \pi} \sqrt{{\cal A}_4 - {1\over 2u_h^4}}\right]d \nd which tells us that $u_h$ has to be bounded by $$u_h ~ > ~ {1\over (2{\cal A}_4)^{1/4}} ~ \approx ~ {0.84\over {\cal A}^{1/4}_4}$$ for \eqref{simpol} to make sense, and the slope of the potential would decrease as $u_h$ approaches this value. On the other hand $u_{\rm max}$ increases as $u_h$ is lowered, and for \bg\label{uhval} u_h ~ = ~ \left({3\over 2{\cal A}_4}\right)^{1/4} ~ \approx ~ {1.1067\over {\cal A}^{1/4}_4} \nd we expect the potential to have a minimum slope, where the onset of {\it melting} should appear. A temperature greater than this is physically not possible because the string would break. Also note that if we kept ${\cal A}_0 \ne 1$, we would have $({\cal A}_0/{\cal A}_4)^{1/4}$ in \eqref{uhval}. The above conclusion is certainly interesting, but may be a little naive because there is no strong reason to truncate the constraint \eqref{real-5} at ${\cal A}_4$: \eqref{newvar} tells us that higher powers of $u_{\rm max}$ will have all the lower ${\cal A}_n$'s as coefficients. This would make the subsequent analysis complicated, and we have to resort to numerical methods. 
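The two numerical coefficients above follow directly from \eqref{quartico} with ${\cal A}_3=0$: reality requires ${\cal A}_4 > 1/2u_h^4$, while ${\bf u}_{\rm max} = u_h$ gives $u_h^4 = 3/2{\cal A}_4$. A standalone Python sketch checking both statements, together with $\delta{\bf u}_{\rm max}/\delta u_h < 0$, for the illustrative value ${\cal A}_4 = 0.24$ used in the numerical section below:

```python
A4 = 0.24  # illustrative value, matching the numerical choice made later in the text

def u_max(u_h):
    # solve (A4 - 1/(2 u_h^4)) u_max^4 = 1, the A_3 = 0 quartic truncation
    denom = A4 - 1.0 / (2.0 * u_h ** 4)
    if denom <= 0.0:
        return None  # no real solution: u_h lies below the bound (2 A4)^(-1/4)
    return denom ** -0.25

bound = (2.0 * A4) ** -0.25   # ~ 0.84 / A4^(1/4): reality bound on u_h
melting = (1.5 / A4) ** 0.25  # ~ 1.1067 / A4^(1/4): where u_max = u_h
print(bound, melting)
print(u_max(melting))         # equals melting: u_max reaches the horizon value
print(u_max(1.2 * melting) < u_max(1.1 * melting) < u_max(melting))  # True
```

Lowering $u_h$ towards the bound makes $u_{\rm max}$ grow, in line with $\delta{\bf u}_{\rm max}/\delta u_h < 0$ derived from \eqref{diffva}.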
On the other hand the above analysis does shed some light on the situation where the ratio ${u_{\rm max}^4\over u_h^4} > 1$. It is clear from our above calculation what happens when $u_h$ becomes too small: since by lowering $u_h$ there is an increase in ${\bf u}_{\rm max}$, we are always bounded by the constraint: \bg\label{conh} {\bf u}_{\rm max} ~\le~u_h \nd where the equality would lead to \eqref{uhval}. Therefore for \eqref{simpol} and \eqref{conh} to make sense, the string connecting the quark and the antiquark should {\it break} when ${\bf u}_{\rm max}$ starts exceeding $u_h$ as mentioned above. This is the point where total melting happens, and the linear potential goes from the minimum slope $\sigma$, where \bg\label{minslope} \sigma ~\equiv ~ {V_{Q\bar Q}\over d} ~ = ~ 1 + 0.32~{\cal A}_2 \nd to zero as soon as the temperature is increased, or alternatively $u_h$ is decreased, beyond \eqref{uhval}. In fact beyond a certain temperature, constraint equations like \eqref{real-5} are no longer valid, and we are only left with the Coulombic term that eventually dies off at large distances. \subsection{Numerical analysis} Most of the calculations that we did in the previous sections have been analytic. Under some approximations we could see certain important properties of quark-antiquark potentials at zero and non-zero temperatures. However, as mentioned in the footnote earlier, truncating the series \eqref{real-5} as \eqref{quartico} may not capture the full story, although the above toy example does give us a way to compute the melting temperature where the slope of the potential hits a minimum \eqref{minslope}. Our main conclusion of the previous section is that there exists a set of warp factors ${\cal A}_n$ for which the equality ${\bf u}_{\rm max} = u_h$ is valid. 
That means that out of a large set of possible backgrounds (classified by the choices of warp factors satisfying EOMs) this equality selects a particular subset of backgrounds that allow deconfinement and quarkonium melting at the temperatures \eqref{uhval} for a subset of ${\cal A}_4$ in this approximation. What happens if we choose an arbitrary set of warp factors that do not lie in this subset? In this section we will perform a numerical analysis to study the behavior of the quark-antiquark potentials for this case at all temperatures. For a particular choice of coefficients of the warp factor we can numerically compute the interquark distance $d$ in (\ref{D-1}), (\ref{DT-1}) and the NG action $S_{\rm NG}^{\rm ren}$ in (\ref{NG-3}), (\ref{NGT-3}) for various values of $u_{\rm max}$ and, using this, plot $V_{Q\bar{Q}}$ as a function of $d$. The analytic behavior of $d$ and $V_{Q\bar{Q}}$ discussed in the previous sections for $u_{\rm max}$ very large and very small will turn out to be consistent with our numerical analysis, although the property of melting will not be visible now. For simplicity we will choose the following values for the coefficients of the warp factor: \bg\label{wfnumber} {\cal A}_0~=~1,~~~~{\cal G}_0~=~1,~~~~{\cal A}_2~=~{\cal A}_4~=~{\cal G}_2~=~{\cal G}_4~=~0.24 \nd with $g_s\sim 0.02$ and $N_f=24$. This is indeed a reasonable choice, and hopefully satisfies EOMs despite being outside the required subset, because, as we saw from F-theory, all corrections to the warp factor due to the running of the $\tau$ field come as ${\cal O}(g_s^2N_f^2)$. Figure 7 shows how the inter-quark distance $d$ varies with $u_{\rm max}$. \begin{figure}[htb]\label{VQQ3} \begin{center} \includegraphics[height=9cm,angle=-90]{VQQ3.eps} \caption{Inter-quark distance $d$ as a function of $u_{\rm max}$ evaluated for our choice of warp factor given earlier. The red curve is the zero temperature limit. 
Here $T \equiv 1/u_h \equiv r_h$ henceforth.} \end{center} \end{figure} For $T=0$, from the figure we see that there exists an upper bound ${\bf u}_{\rm max}$ near which $d\rightarrow \infty$. A similar analysis also shows that as $u_{\rm max}\rightarrow {\bf u}_{\rm max}$, $V_{Q\bar{Q}}\rightarrow \infty$. By increasing $u_{\rm max}$ towards ${\bf u}_{\rm max}$, we can get {\it all} the values of $d$ and $V_{Q\bar{Q}}$, and using this we can plot $V_{Q\bar{Q}}$ as a function of $d$ as shown in Figure 8. Note that for large $d$, the potential grows exactly linearly with distance, indicating linear confinement. This is consistent with our earlier analytical calculations. Additionally, at small distances the potential behaves like a Coulomb potential, as also predicted by our analysis. \begin{figure}[htb]\label{VQQ2} \begin{center} \includegraphics[height=9cm,angle=-90]{VQQ2.eps} \caption{Quarkonium potential as a function of inter-quark distance $d$ at various temperatures. Note the linear and the Coulombic behaviors at large and small distances respectively.} \end{center} \end{figure} On the other hand, for $T\neq 0$, again for the choice of our warp factors \eqref{wfnumber}, from Figure 7 we observe that for every $T\neq 0$ curve there exists a maximum value of $d$, say $d_{\rm max}$, and therefore for every value of $d < d_{\rm max}$ there are two distinct values of $u_{\rm max}$. Such a behavior has also been observed in \cite{reyyee} for the AdS case and in \cite{cotrone, cotrone2} for the pure Klebanov-Strassler case. This means that for a particular choice of $d$ and boundary condition $u(\pm d/2)=0$, there are two $U$-shaped strings with two values of $u_{\rm max}$, namely $u_{\rm max,1}$ and $u_{\rm max,2}$ with $u_{\rm max,2}> u_{\rm max,1}$. As $u_{\rm max,2} > u_{\rm max,1}$, the $U$-shaped string with $u_{\rm max,2}$ has higher energy than the one with $u_{\rm max,1}$. 
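The two-branch structure can be mimicked by any profile $d(u_{\rm max})$ that rises to a single maximum $d_{\rm max}$ and then falls; for a given $d < d_{\rm max}$, bisection on either side of the peak recovers the two $U$-shaped solutions. A standalone Python sketch (the profile below is an arbitrary stand-in chosen only for its shape, {\it not} the actual integral (\ref{DT-1})):

```python
import math

def d_profile(u):
    # toy stand-in for d(u_max) at T != 0: single peak at u = 1, d_max = 1/e
    return u * math.exp(-u)

def bisect(f, a, b, tol=1e-12):
    # standard bisection, assuming f(a) and f(b) have opposite signs
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

d0 = 0.2  # some separation below d_max = 1/e
u1 = bisect(lambda u: d_profile(u) - d0, 1e-9, 1.0)  # branch I  (lower energy)
u2 = bisect(lambda u: d_profile(u) - d0, 1.0, 20.0)  # branch II (higher energy)
print(u1, u2)  # u1 < 1 < u2, both satisfying d(u) = d0
```

Branch I ($u_{\rm max,1}$, before the peak) and branch II ($u_{\rm max,2}$, after the peak) correspond to the lower- and higher-energy strings respectively.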
We have denoted the $U$-shaped string with $u_{\rm max,1}$ by branch I and the one with $u_{\rm max,2}$ by branch II in Figure 8. It is clear from the plot that branch I has lower energy than branch II. Thus at small $d$, the potential for branch I behaves as a Coulomb potential, and by comparing this to the zero temperature Coulomb behavior, we see that the $T\neq 0$ Coulomb potential is suppressed. Now note that as we lower the temperature, the value of $d_{\rm max}$ increases. Therefore in the limit $T\rightarrow 0$, $d_{\rm max}\rightarrow \infty$, which is perfectly consistent with our zero temperature curve in Figure 7. This implies that as $T\rightarrow 0$, the curves in Figure 7 converge to the zero temperature curve, which in turn blows up at ${\bf u}_{\rm max}$. Such a behavior is not inconsistent with the high temperature case because we can view the $T=0$ curve as going straight up and never coming down, resulting in a single solution for the $U$-shaped string for every $d$. The high temperature curves go up to some $d_{\rm max}$ and then come down. For large $d < d_{\rm max}$, we have a linear potential for branch I $-$ which is suppressed compared to the zero temperature curve as shown in Figure 8. As the temperature is increased, the suppression becomes larger and the value of $d_{\rm max}$ smaller. In Figure 8, with our choice of the warp factor coefficients, the slopes of the linear potentials and the resulting suppressions are not very significant. For a better view of the suppressions, we present a blown-up version of Figure 8 in Figure 9. \begin{figure}[htb]\label{VQQ1} \begin{center} \includegraphics[height=9cm,angle=-90]{VQQ1.eps} \caption{{Suppressions of the linear potential at non-zero temperatures, magnified by choosing slightly different values of the coefficients of the warp factor from those given earlier. 
In this figure one can clearly see how the quark-antiquark potentials melt at high temperatures.}} \end{center} \end{figure} For $d>d_{\rm max}$, there is no real solution to the differential equation for $u(x)$ with boundary condition $u(\pm d/2)=0$ and thus there is no $U$-shaped string between the quarks. Thus for $d > d_{\rm max}$, the string breaks and we have deconfined quarks. Also, as $d_{\rm max}$ decreases with increasing temperature, at high temperatures the quarks get screened at shorter distances $-$ which is consistent with heavy quarkonium suppression in thermal QCD. Our numerical analysis and the plots should be instructive for a generic choice of the coefficients of the warp factor, where we again may not see the melting temperature. To study the generic case, first consider the zero temperature limit. Note that the existence of a real positive ${\bf u}_{\rm max}$ is guaranteed if all $\widetilde{{\cal A}}_{n}$ in \eqref{newvar} are positive. However this may not be true if the original warp factor coefficients ${\cal A}_n$ are a {\it finite} set\footnote{As should be obvious from \eqref{newvar}, a finite set of ${\cal A}_n$ still implies an infinite set of $\widetilde{{\cal A}}_{n}$.}. This is because for a finite set of ${\cal A}_n$'s, there will be some $\widetilde{\cal A}_n$'s which are negative, and the equation \bg \label{mastereqn} \frac{m+1}{2}\widetilde{\cal A}_{m+3}{\bf u}_{\rm max}^{m+3}~= ~ 1 \nd may not have any real positive solutions. This would imply the existence of $d_{\rm max}$ and consequently two branches of solutions. For large $d < d_{\rm max}$ we will have linear potentials, with greater suppressions and lower values of $d_{\rm max}$ at higher temperatures. On the other hand, at zero temperature if all ${\cal A}_n > 0$, there is always a positive real ${\bf u}_{\rm max}$ and we will have a linear potential at large distances. 
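For a finite truncation, the existence of a real positive solution of \eqref{mastereqn} (with the sum over $m$ restored) can be checked directly. A standalone Python sketch with arbitrary illustrative coefficient sets, one all-positive and one of mixed sign:

```python
def lhs(A_tilde, u):
    # sum over m of (m+1)/2 * A_tilde_{m+3} * u^(m+3); dict is keyed by the power m+3
    return sum(0.5 * (p - 2) * a * u ** p for p, a in A_tilde.items())

def positive_root(A_tilde, lo=1e-8, hi=1e4, n=2000):
    # scan log-spaced u for a crossing of lhs(u) = 1, then refine by bisection
    us = [lo * (hi / lo) ** (i / (n - 1)) for i in range(n)]
    for a, b in zip(us, us[1:]):
        if (lhs(A_tilde, a) - 1.0) * (lhs(A_tilde, b) - 1.0) <= 0.0:
            for _ in range(100):
                m = 0.5 * (a + b)
                if (lhs(A_tilde, a) - 1.0) * (lhs(A_tilde, m) - 1.0) <= 0.0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
    return None  # no crossing: no real positive u_max in the scanned range

print(positive_root({3: 1.0, 7: 0.5}))   # all A_tilde > 0: a root exists
print(positive_root({3: 0.1, 4: -5.0}))  # mixed signs: None, no root here
```

In the all-positive case the left-hand side grows monotonically from zero, so a root always exists; a sign change among the ${\widetilde{\cal A}}_n$ can cap the left-hand side below one, removing the root and hence ${\bf u}_{\rm max}$.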
If some ${\cal A}_n$'s are negative, we could have no real positive solution ${\bf u}_{\rm max}$, which means $0\leq u_{\rm max} < \infty$ as there is no black hole horizon. In this case the behavior of $d$ will be dominated by $d\sim u_{\rm max}$ and that of $V_{Q\bar{Q}}$ will be dominated by $V_{Q\bar{Q}}\sim 1/u_{\rm max}$, which means $V_{Q\bar{Q}}\sim 1/d$ $-$ and we will have the Coulomb potential for all $d$. Our above numerical analysis certainly illustrates the decrease in the slope of the linear potential as the temperature is increased but {\it does not} show us the melting temperature. What would happen if we restricted our choice of warp factors to the required subset of ${\cal A}_n$? In this case all the high temperature curves will grow linearly and will not come back, and at a certain temperature the slope of the curve will drop to zero. This will be the melting temperature. In this paper we will not pursue this further, and more details will be presented elsewhere. \section{Conclusions and Discussions} In this paper we have tried to achieve two goals. The first is to find the dual to a large $N$ gauge theory that resembles large $N$ QCD, i.e.\ in the far IR the theory confines and in the far UV it shows conformal behavior. We then extend this to high temperatures. The second is to compute the heavy quarkonium potential in this theory both at zero and non-zero temperatures. We have shown that, under some rather generic conditions, linear confinement for heavy quarkonium states can be demonstrated at zero temperature. At high temperature, the expected deconfinement and quarkonium melting follow from our analysis. There are however still a few loose ends that need to be tightened to complete the full story. The first one is the issue of supersymmetry. Although we have shown that all the unnecessary tachyons can be removed from our picture, this still doesn't imply low energy supersymmetry in our model (at zero temperature). 
Having {\it no} supersymmetry should be viewed as desirable because we don't expect low energy susy in the real world! However, susy breaking in our model may trigger corrections in the potential that need to be worked out. We expect these corrections to change the coefficient of the linear term {\it without} generating an ${\cal O}(d^2)$ term. These corrections should be higher order in $g_sN_f$, and so will not change any of our conclusions presented here. This is because the linear potential arises from the limit where the Nambu-Goto action and the distance $d$ exhibit identical logarithmic divergences. This behavior should remain the same whether or not we have low energy susy in our model. Thus the linear confinement argument is particularly robust for our case. On the other hand the Coulombic behavior is model independent, so the coefficient of the Coulombic term should remain unchanged whether we take susy or non-susy models (see for example the model of \cite{zakahrov}). One may choose other embeddings of seven-branes, like \cite{kuperstein} or the model studied in \cite{gtpapers}, to study the quarkonium potentials at zero and non-zero temperatures. But such choices of embeddings will not change our main conclusions. The second one is the issue of Higgsing that breaks the gauge group from $SU(N+M) \times SU(N+M)$ to $SU(N+M) \times SU(N)$ on the gauge theory side. As mentioned in sec. 2.3, the story on the dual gravity side is somewhat clearer. What one needs is to carefully analyse the gauge theory operators that allow the above-mentioned Higgsing. We leave this for future work. The third one is to find the precise set of warp factors ${\cal A}_n$ that satisfy EOMs and allow us to get the melting temperature for the heavy quarkonium states. In the previous section we gave a numerical analysis with an arbitrary truncated set of warp factors that shows the decrease in the slope of the linear potential with increasing temperatures. 
Our numerical analysis certainly shows the possibility of melting, but doesn't tell us the melting temperature. On the other hand our analytic way of getting the melting temperature is not very generic. So it would be interesting to find the full set of warp factors to complete the story\footnote{Note that in the series of papers \cite{karsch, boschi, brambilla} screening similar to what we have in \eqref{liponow} is also observed using completely different techniques than ours. Our prediction then would be that the square-root suppression that we see in \eqref{liponow} at high temperatures is {\it universal}. Of course it would be interesting to figure out the Coulomb screening at high temperature also.}. Finally, we haven't actually computed the exact gauge fluxes on the seven and five-branes that would cancel the tachyons in this model. Following the works of \cite{susyrest} this may not look like a difficult task, at least for the flat background. What makes it non-trivial here is that all the branes are embedded in a {\it curved} background. Quantisation of strings in a curved background is highly non-trivial, so it will be rather challenging to work this out in full detail. Nevertheless, if we restrict everything to Region 3 and away from the brane-antibrane systems, these subtleties will not affect our results in any significant way. Happily, this is the regime where most of the calculations in this paper have been performed. \vskip.2cm \noindent {\bf Note}: As this draft was being written, we became aware of the work of Gaillard {\it et al.} \cite{martelli} which has some overlap with this paper. See also the earlier work \cite{marmal}. \vskip1cm \noindent{\bf Acknowledgements} \vskip.2cm It is our pleasure to thank Peter Ouyang for many helpful discussions, and comments on the preliminary version of our draft. We would also like to thank Dongsu Bak, Niky Kamran and Omid Saremi for helpful comments. M. 
M would like to thank the organisers of {\it Strings 2010} for comments on the poster demonstration of our work. He would also like to thank Chris Herzog, Dario Martelli, Jorge Noronha and Ashoke Sen for helpful comments. This work is supported in part by the Natural Sciences and Engineering Research Council of Canada, and in part by McGill University.
hep-th/9312137
\section*{Contents} \contentsline {section}{\numberline {1}Introduction}{\pageref{AS}} \contentsline {section}{\numberline {2}An Equation for the Complete Effective Action}{\pageref{BS}} \contentsline {subsection}{\numberline {2.1}Do We Need an Equation? --- Methodological Considerations}{\pageref{BS1}} \contentsline {subsection}{\numberline {2.2}Proposing an Equation for the Complete Effective Action}{\pageref{BS2}} \contentsline {subsection}{\numberline {2.3}Exploring the Equation}{\pageref{BS3}} \contentsline {section}{\numberline {3}Gauge Field Theories}{\pageref{CS}} \contentsline {subsection}{\numberline {3.1}Ward-Takahashi Identities}{\pageref{CS1}} \contentsline {subsection}{\numberline {3.2}Schwinger-Dyson Equations}{\pageref{CS2}} \contentsline {section}{\numberline {4}QED --- An Approximative Approach to the Equation for the Complete Effective Action}{\pageref{DS}} \contentsline {subsection}{\numberline {4.1}The Approximative Approach in General}{\pageref{DS1}} \contentsline {subsection}{\numberline {4.2}Designing an Approximation Strategy}{\pageref{DS2}} \contentsline {subsubsection}{\numberline {4.2.1}Consequences of Gauge Invariance for the Kernel of the Fer\discretionary {-}{}{}mion Action $\Gamma _I^F$}{\pageref{DS21}} \contentsline {subsubsection}{\numberline {4.2.2}Requirements on the Kernel of the Gauge Field Action $\Gamma _I^G$}{\pageref{DS22}} \contentsline {subsubsection}{\numberline {4.2.3}The Approximation Strategy in Ideal, and in Practice}{\pageref{DS23}} \contentsline {subsection}{\numberline {4.3}Bringing the Approximation Strategy to Work: Explicit Calculation}{\pageref{DS3}} \contentsline {subsubsection}{\numberline {4.3.1}Performing the Functional Integration}{\pageref{DS31}} \contentsline {subsubsection}{\numberline {4.3.2}The Integral Equation for the Kernel of the Fermion Action}{\pageref{DS32}} \contentsline {paragraph}{\numberline {4.3.2.1}Solving the Integral Equation in the Asymptotic UV Region}{\pageref{DS321}} \contentsline 
{paragraph}{\numberline {4.3.2.2}Solving the Integral Equation in the Asymptotic IR Region}{\pageref{DS322}} \contentsline {subsubsection}{\numberline {4.3.3}The Fixed Point Condition for the Kernel of the Gauge Field Action and the Approximative Calculation of the QED Coup\-ling Constant $\alpha $}{\pageref{DS33}} \contentsline {section}{\numberline {5}The Vacuum Energy, and Related Problems}{\pageref{ES}} \contentsline {section}{\numberline {6}Discussion and Conclusions}{\pageref{FS}} \contentsline {section}{Appendix A}{\pageref{GS}} \vspace{-0.3cm} \contentsline {section}{Appendix B}{\pageref{HS}} \vspace{-0.3cm} \contentsline {section}{References}{\pageref{IS}} \vspace{-0.3cm} \contentsline {section}{Figures}{\pageref{JS}} \newpage \section{\label{AS}Introduction} \setcounter{equation}{0} Physical reality can be approached by means of quantum field theory from different perspectives. This in particular depends on the kind of information one is interested in extracting in order to solve a problem under consideration, but it is also influenced by the individual view toward the fundamental difficulties met in present day standard quantum field theory (and its generalized concepts like string theory). To a large extent, these different approaches reflect technical difficulties in fully (in particular, nonperturbatively) understanding quantum field theoretical models rather than real differences in concept on a fundamental level. However, a few pioneers of quantum field theory like {\sc Dirac} \cite{dira1},\cite{dira2} and {\sc Feynman} \cite{feyn1},\cite{feyn2}, pointing in particular to the UV divergency problem, always maintained the view that the right theory has not yet been found. This attitude has apparently not received majority support over time, but in this respect there does not seem to exist any majority opinion at all \footnote{For a description of the attitude in one large part of the community see ref.\ \cite{shir}, e.g.}. 
From this state of affairs we feel free to draw justification for a reconsideration of certain conceptual foundations of quantum field theory, which constitutes the purpose of the present paper.\\ Notwithstanding the above mentioned problems, there seems to exist wide agreement that the scattering matrix can be considered as the fundamental object for describing a particular quantum field theoretic model. This amounts to saying that full knowledge of the complete scattering matrix is considered equivalent to the solution of a quantum field theory and that all interesting information, at least in principle, can be extracted from it. Construction of the scattering matrix can be attempted by different methods. For instance, the so-called S-matrix theory, as studied in the 1950s in reaction to the emergence of the divergency problem in Lagrangian quantum field theory, was designed to find the (finite) scattering matrix from rather general fundamental principles like causality, unitarity and Lorentz invariance using dispersion techniques, without making reference to any Lagrangian underlying the theory (see \cite{brow},\cite{eden}, e.g.). However, although quite general and interesting results have been obtained, the principles applied turned out not to be restrictive enough to completely fix the scattering matrix for realistic theories. Nowadays, after the successful re-emergence of (renormalizable) Lagrangian quantum field theory at the end of the 1960s, the description of the scattering matrix is supplied in a standard way in terms of the effective action of the theory considered \cite{b}. 
In this sense, we may view the effective action as the genuine fundamental object of interest and will concentrate on its study in this article.\\ Historically, beyond the S-matrix theory already mentioned, attempts to cure UV divergencies by nonlocal field theories have played a particular role since the emergence of the divergency problem in the 1930s (for a review including references see \cite{efim1},\cite{efim2}; also \cite{efim3}). Although it was recognized early on that nonlocal field theories may be accompanied by new, perhaps even more unpleasant difficulties, for instance with unitarity and (macro-)causality, theoretical thinking in this direction never ceased to exist. Most prominently, present day string theory, although much more ambitious, can be viewed as a particular way of giving preference to a special kind of nonlocality \cite{elie}. In recent years, a few papers have again dealt with nonlocal quantum gauge field theories \cite{part}--\cite{corn} (to mention only this subject), where in part the nonlocalities introduced are understood as regulators. Although having a different aim than fighting UV divergencies, the average action concept proposed recently should also be mentioned here \cite{wett1},\cite{wett2}. The drawback of all these nonlocal approaches, however, consists in the arbitrariness of the nonlocality introduced. So far, no unique recipe starting from first principles has been proposed.\\ However, the dominant paradigm in the field remains local renormalizable Lagrangian quantum field theory (throughout the paper we will denote it by the term standard quantum field theory). But nonlocality is a well-known phenomenon there too, because it is a feature of the effective action, which can be derived for any quantum field theory (either local or nonlocal) and which also serves (in most cases) as the generating functional of the one-particle-irreducible (1PI) Green functions. 
In general, the effective action is attributed different meanings by different authors. Some regard the effective action as a low energy representation of a quantum field theory obtained by integrating out certain (massive) degrees of freedom, while others consider the effective action as a full fledged description of the model under investigation from which arbitrary S-matrix elements (related to any observation one might be able to perform) can be derived. We will stick here to the latter view. To us, very pragmatically, the effective action is the object which contains all the information ever to be measured under certain defined circumstances, and there is no other (independent) object linking theory to physical reality. The shape of the effective action may of course depend on some of these circumstances (external conditions, e.g.). A similar point of view has recently been described with respect to the gravitational effective action by {\sc Vilkovisky} \cite{vilk1}. The effective action concept we have in mind aims at quantum field theoretic models, especially those which are realistic, like QED, and assumes that certain sectors of physical reality can be described in a consistent way independently of each other. It is therefore quite different from the TOE ('theory of everything') concept often related to superstring theory.\\ In short, the program of the present article can be described by saying that we intend to find a concept which allows one to determine the structure of the (highly complex) observable 'effective action' without making reference to any other quantity not accessible to observation. In particular, the approach to quantum field theory will be based on a critical attitude towards the distinction, artificial from an experimental point of view, between the so-called classical action and the effective action. 
In this way we will be led to propose an equation for determining the (finite) effective action, which can be understood as a certain fixed point condition. It will be an equation for functionals of fields (actions) and is therefore designed to remove (to a certain extent --- the field content has to be prescribed as usual) the arbitrariness in the choice of the Lagrangian standing at the beginning of any field theory. This, however, can only be expected to happen for interacting theories, which is where our approach differs from the established paradigm. For free field theories, where this is not the case, nothing new is accomplished in this respect. As a technical tool we rely on the functional integral formulation of Lagrangian quantum field theory, which seems to be the appropriate and most convenient language for the description of our concept. While nonlocality will be an inherent feature of our approach in most cases, it is by no means the conceptual starting point of the present investigation. Of course, the program as just sketched is an abstract one. However, once we have proposed the general concept it will simply serve us as a guideline for finding an appropriate approximative approach to performing explicit calculations (in this article: in QED as the prototype gauge field theory).\\ In the past decade the effective action concept has received interest from the point of view of its invariant geometrical formulation. This is an important step in ensuring the physical relevance of the effective action because its physical consequences should not depend on the particular choice of coordinates for the field variables. Initial work in this direction traces back to {\sc Vilkovisky} \cite{vilk2},\cite{vilk3} and {\sc DeWitt} \cite{dewi}; for a recent discussion of the geometrical effective action see \cite{camb}, for a review including further references \cite{buch}. 
For the purpose of the present article (to reduce the complexity of the considerations) we simply bypass the subject and maintain that those field coordinates are always applied in terms of which the formalism takes its naive (non-geometrical) shape. Furthermore, for gauge field theories, a main concern of the unique (geometrical) effective action concept, we find that generalized Landau gauge is the only sensible gauge. Inasmuch as for gauge field theories the geometrical effective action has been found to agree with the naive one (calculated by means of the standard background field method) exactly for generalized Landau gauge, we feel free to ignore the subject there as well \cite{frad}--\cite{nach}.\\ The outline of the article is as follows. In chapter 2 we explain the general concept at some length. This is done in three steps. In section 2.1, based on a methodological analysis, we establish the quest for an equation for the complete effective action. While section 2.2 serves to suggest a particular answer to this question by imposing a certain fixed point condition in terms of a functional integral equation, section 2.3 discusses some features of this equation, among others the relation between standard quantum field theory and the present approach. Chapter 3 then applies the concept to gauge field theories. Specifically, there we formulate the functional integral equation for the complete effective action of QED; then, in sections 3.1 and 3.2, Ward-Takahashi identities and Schwinger-Dyson equations are discussed, respectively. Chapter 4 contains the main body of the explicit calculation performed. The model under investigation is QED in 4D Minkowski (Euclidean) space.
While section 4.1 spells out, in general terms, what kind of approximative approach to the functional integral equation for the complete effective action of QED is applied, section 4.2 and its subsections serve to establish a more concrete approximation strategy suited for explicit calculation, concentrating on the quadratic kernels of the action. Subsections 4.2.1 and 4.2.2 discuss certain general requirements on the quadratic kernels of the fermion and gauge field actions, respectively, while subsection 4.2.3 explains the approximation strategy finally chosen. Section 4.3, also split into several subsections, then presents the explicit calculation in some detail. Subsection 4.3.1 contains technical details of the functional integration performed. While for the quadratic kernel of the gauge field action we rely on a certain Ansatz, subsection 4.3.2 establishes an integral equation for the quadratic kernel of the fermion action. This integral equation is then solved approximately in subsections 4.3.2.1 and 4.3.2.2 in the asymptotic UV and IR regions, respectively. This analysis yields certain nonperturbative information about the quadratic kernel of the fermion action. In the final subsection 4.3.3 of chapter 4, as a particular application of the present method, the approximate calculation of the QED coupling constant $\alpha$ is studied explicitly. It is understood as one of the characteristics of a fixed point given as a solution of the functional integral equation proposed. Certain technical details of the calculation described in chapter 4 are deferred to two Appendices at the end of the article. Chapter 5 briefly discusses the vacuum energy problem for QED at the 1-loop level. A final consideration is then devoted to the relevance of the proposed approach to the induced gravity concept.
The article closes in chapter 6 with a discussion of some aspects of the results obtained.\\ \newpage \section{\label{BS}An Equation for the Complete Effective \hfill\break Action} \subsection{\label{BS1}Do We Need an Equation? \hfill\break \hspace*{0.4cm} --- Methodological Considerations} \setcounter{equation}{0} As an introductory step, let us begin by displaying the key elements of the standard formulation of the effective action. We consider Lagrangian quantum field theory in flat (Minkowski) space-time, and in this chapter we use scalar field theory to pursue the discussion. It is understood that the generalization to more complicated theories (in particular, gauge field theories) can be performed by standard means.\\ The construction starts with the generating functional of Green functions \parindent0.cm \begin{equation} \label{BA1} Z[J] =\ C \int D\phi\ \ {\rm e}^{\displaystyle\ i\Gamma_0 [\phi]\ +\ i \int dx J(x) \phi (x)} \hspace{1.5cm},\hspace{0.5cm}\\ \end{equation} where $\Gamma_0 [\phi]$ is the so-called classical action of the theory and $C$ some fixed normalization constant. Then, the generating functional of the connected Green functions is \begin{equation} \label{BA2} W[J]\ =\ -i \ln Z[J] \hspace{1.5cm}.\hspace{0.5cm} \\ \end{equation} The effective action $\Gamma [\bar\phi]$, which is also the generating functional of the one-particle-irreducible (1PI) Green functions, is obtained as the first Legendre transform of $W[J]$. \begin{equation} \label{BA3} \Gamma [\bar\phi]\ =\ W[J] - \int dx J(x) \bar\phi (x) \\ \end{equation} Here, \begin{equation} \bar\phi (x)\ =\ {\delta W[J]\over \delta J(x)} \\ \end{equation} is understood, which in turn leads to \begin{equation} \label{BA4} {\delta \Gamma [\bar\phi]\over \delta \bar\phi (x)}\ =\ -\ J(x)\\ \end{equation} in analogy to the classical field equation for $\Gamma_0 [\phi]$.
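As an aside not contained in the original development, the chain $Z \to W \to \Gamma$ can be made concrete in a zero-dimensional toy model, where the functional integral reduces to an ordinary integral. The following sketch uses Euclidean conventions for numerical convenience, i.e., $Z(J) = \int d\phi\, {\rm e}^{-S(\phi) + J\phi}$, $W = \ln Z$, and $\Gamma(\bar\phi) = \sup_J [J\bar\phi - W(J)]$; the signs therefore differ from the Minkowski formulas above. All function names are illustrative only.

```python
import math

def Z(J, S, lo=-10.0, hi=10.0, n=4001):
    """Zero-dimensional analogue of the generating functional:
    an ordinary integral, evaluated by the trapezoidal rule."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        phi = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * math.exp(-S(phi) + J * phi)
    return total * h

def W(J, S):
    """Generating functional of connected 'Green functions': W = ln Z."""
    return math.log(Z(J, S))

def Gamma(phibar, S):
    """Effective action as the Legendre transform of W: solve W'(J) = phibar
    by bisection (W is convex), then Gamma = J*phibar - W(J)."""
    def Wprime(J, eps=1e-4):
        return (W(J + eps, S) - W(J - eps, S)) / (2 * eps)
    lo, hi = -5.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if Wprime(mid) < phibar:
            lo = mid
        else:
            hi = mid
    J = 0.5 * (lo + hi)
    return J * phibar - W(J, S)

# Free (Gaussian) action: Gamma agrees with the classical action up to a constant.
S_free = lambda phi: 0.5 * phi**2
gamma_free = Gamma(1.0, S_free) - Gamma(0.0, S_free)
print(abs(gamma_free - 0.5) < 1e-3)   # reproduces S(1) = 1/2

# Quartic action: Gamma picks up fluctuation corrections beyond S.
S_int = lambda phi: 0.5 * phi**2 + phi**4 / 24.0
gamma_int = Gamma(1.0, S_int) - Gamma(0.0, S_int)
print(gamma_int > S_int(1.0) - S_int(0.0))
```

The free case illustrates the statement made below that the effective action of a free theory coincides with its classical action, while for the quartic toy action the Legendre transform already contains fluctuation contributions.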
Equivalently, using the above relations, the following formula for the effective action can be considered as the defining one \begin{equation} \label{BA5} {\rm e}^{\displaystyle\ i \Gamma [\bar\phi]}\ \ =\ C \int D\phi\ \ {\rm e}^{\displaystyle\ i\Gamma_0 [\phi + \bar\phi]\ +\ i \int dx J(x) \phi (x)} \hspace{1.5cm} ,\hspace{0.5cm}\\ \end{equation} where the r.h.s.\ of the above equation has to be calculated at a current $J(x)$ which is a functional of $\bar\phi $ and given by eq.\ (\ref{BA4}). Therefore, as the r.h.s.\ is a functional of both $J$ and $\bar\phi $, eqs.\ (\ref{BA4}), (\ref{BA5}) have to be understood as functional integro-differential equations for determining the (off-shell) effective action and give an implicit definition only. But, as we will argue below, eq.\ (\ref{BA5}) is not an equation in the narrow sense of the word; it should rather be called a formula.\\ \parindent1.5em The latter point is barely discussed in the literature and shall now be considered from a methodological point of view. Observe that eq.\ (\ref{BA1}) defines a map $g_1: \Gamma_0 [\phi ]\longrightarrow Z[J]$ from the class of functionals called classical actions to the class of functionals $Z$. Furthermore, we have mappings $g_2: Z[J]\longrightarrow W[J]$ (eq.\ (\ref{BA2}), single-valued up to the fixing of the sheet of the Riemann surface, which is uninteresting for the present purpose) and $g_3: W[J]\longrightarrow \Gamma [\bar\phi]$ (eq.\ (\ref{BA3})). These three maps together define a map $g_3\circ g_2\circ g_1 = f: \Gamma_0 [\phi ] \longrightarrow \Gamma [\bar\phi] $ (eq.\ (\ref{BA5})) from the set of so-called classical actions to the set of effective actions. In total, this map is unique up to the renormalization problem, which can always be treated in the present context by applying an appropriate regularization procedure for properly handling the divergences.
Inasmuch as this map $f$ is constructed explicitly, eq.\ (\ref{BA5}) is not a genuine equation with possibly a variety of solutions but rather expresses the image $\Gamma $ of $\Gamma_0$ with respect to the map $f$ --- it is a formula.\\ The above consideration justifies the following view. Once the functional integral measure is constructed (and typically this is done for a whole class of classical actions and then fixed forever), the classical action $\Gamma_0$ uniquely determines the corresponding effective action $\Gamma $. In other words, the effective action does not contain more information than is (implicitly) contained in the classical action (supplemented by the functional integral measure). This point is usually not stressed in studying concrete models due to the calculational complexity involved. Although it is therefore of next to no practical (i.e., calculational) relevance, it involves important methodological implications. The most important one consists in the fact that the effective action does not appear as an object in its own right but as a derived quantity only. Mere reformulations of the calculational tools used to determine the effective action, like Schwinger-Dyson equations, e.g., do not change this character.\\ Before proceeding further let us mention that formulas (\ref{BA1})--(\ref{BA5}) reflect two features of modern quantum field theory. On the one hand, they stand for the convincing success of quantum field theory as witnessed in the last few decades, of a theory providing us with operational instruments producing numbers which agree with measurement to a degree not seen elsewhere in physics (or in any other science). On the other hand, they also stand for the fundamental conceptual difficulties inherent to local quantum field theory. The most important of them manifests itself in terms of the well-known ultraviolet divergences.
Although there exist two or three (different) mainstream opinions with respect to this issue (and some other, related ones), numerous dissenting ones can also be found. From this observation one may conclude that research apparently has not yet led to any generally accepted concept explaining and removing the problems in a finally convincing manner, as judged from physics as an inherently consistent edifice combining theory and experiment. This amounts to saying that searching in different directions seems justified, and even a certain doubt in the foundations of quantum field theory should not be rejected at once. With this in mind, in what follows we will apply the point of view that perhaps even certain foundations of quantum field theory are not yet fully understood, and we will see whether we can shed a different light on them. In this context, as outlined in the Introduction, we will focus on the effective action, which we consider as the appropriate object to be studied.\\ Let us ask which principles effective actions should be governed by in general. While we have no problem in giving principles they should obey, like Lorentz invariance or CPT invariance, e.g., the answer to the question of what determines them in detail reduces, in view of the considerations given further above, to saying that they are uniquely given as the image of the corresponding classical action by means of the map $f$ containing information about the functional integral measure. This way, the question is traced back to the uncertainty in classical field theory as to what Lagrangian to choose. Although one does not necessarily need to worry about this point, here we will. Basically, we prescribe an effective action in terms of some low energy information rather than finding it from independent (quantum) principles not exhausted by fixing the classical action.
And, if we are honest, at best we may say that our prescription is approximately right.\\ One may now confront the methodological insight obtained so far with the deductive idea often applied in theoretical physics that the special case (here the classical action) should be derived from the more general one (here the effective action) and not the other way around. In this sense, the complete effective action is the genuine fundamental object to be studied. If, pending further investigation, one is willing to allow that the complete effective action might be an object in its own right \footnote{Of course, any effective action has a certain classical limit, but coincidence of its classical limit with that of another effective action does not necessarily entail identity of both effective actions.}, then one has to find a method of determining the complete effective action which differs from the established one \footnote{ Certainly, such a different method, which does not start with the classical action, may in the end lead to the conclusion that classical actions and effective actions are related to each other one-to-one, but then this is a result of the method and not the starting point.}. There are not so many methods available, and to use an equation for determining the complete effective action seems a natural approach within theoretical physics. Therefore, the above view leads to the task of finding such an equation for the complete effective action.\\ To the best of the author's knowledge such a question has not been raised so far in the existing literature. Independent of any further answer to it, it should be emphasized that in view of the fundamental role of the effective action in quantum field theory it deserves one.
Even the rejection of the question (e.g., by closely sticking to the established formalism) has important methodological consequences, as we have demonstrated above.\\ The search for an equation for the complete effective action needs to be guided by a couple of principles. First, solutions of such an equation should be able to reproduce standard quantum field theoretic results with the required accuracy in order to stay in line with experiment. Obviously, this leaves not much room for an answer differing from the known one. Second, the formalism connected with such an equation should differ sufficiently from standard quantum field theory in order to be able to remove known problems, at least in part. And third, any sensible search for an equation for the complete effective action should take into account that the eventual result needs to be sufficiently general in order to be applicable to various situations, and has to be restrictive enough at the same time in order to allow concrete information to be derived from it.\\ While the call for an equation for the complete effective action still might be shared by a number of researchers and probably represents the least disputable part of the present investigation, reaching agreement with respect to an eventual answer will very likely be much more difficult. In the following section we are going to propose an answer which shall then be investigated in some further detail.\\ \subsection{\label{BS2}Proposing an Equation for the Complete Effective \hfill\break Action} Basically, there are two different routes to the particular answer we prefer to the question put forward in the preceding section, namely the proposal of a specific equation for the complete effective action. One way is to discuss certain principles to be built in and then to write down an equation which embodies them.
The other way, which we choose now, is to heuristically motivate an equation which will then be analyzed with respect to its conceptual content.\\ Let us consider the map $f: \Gamma_0 [\phi ] \longrightarrow \Gamma [\bar\phi] $ defined in section \ref{BS1}, mapping so-called classical actions to effective actions. Although it is not necessarily well defined for the domain of classical actions (which are local functionals in general), we will not change the map $f$ itself; instead, we will now extend the domain of this map. For this purpose it suffices to mention that the set of so-called classical actions can be considered as a subset of the class of effective actions. From now on we understand the map $f$ as a mapping of the set of effective actions into itself.\\ On the basis of formulas given in the preceding section we will now explicitly define the map $f$ for the extended domain. Again, we define the generating functional of Green functions by \parindent0.cm \begin{equation} \label{BB1} Z_n[J_n] =\ C\ {\rm e}^{\displaystyle\ -i\Gamma_{n-1} [0]}\ \int D\phi\ \ {\rm e}^{\displaystyle\ i\Gamma_{n-1} [\phi]\ +\ i \int dx J_n(x) \phi (x)} \hspace{1.5cm},\hspace{0.5cm}\\ \end{equation} where as in eq.\ (\ref{BA1}) $C = C(\mu)$ is some fixed dimensional normalization constant depending on an arbitrary mass parameter $\mu$ and compensating the dimension of the functional integral measure $D\phi$. Changes in $\mu$ correspond to changes in the normalization of the vacuum energy connected with $\Gamma [0]$. In extending the domain of the map $f$ we have introduced an additional normalization factor $\exp(\ -i\Gamma_{n-1} [0])$ (this is not a major point, but one worth appreciating from a conceptual point of view). Classical actions are typically normalized to obey $\Gamma_0 [0] = 0$.
Then, eq.\ (\ref{BA5}) tells us that $\Gamma [0]$ originates entirely from vacuum fluctuations governed by the classical action $\Gamma_0$ (up to some normalization of the vacuum energy fixed for a whole class of actions). By including the additional normalization factor, this principle is generalized to the map $f$ acting in the extended domain, and the calculation of the vacuum energy can proceed as usual \footnote{Having in mind standard quantum field theory, of course, here we refer to vacuum energy modifications under external conditions.}.\\ \parindent1.5em The generating functional of the connected Green functions is \begin{equation} \label{BB2} W_n[J_n]\ =\ -i \ln Z_n[J_n] \hspace{1.5cm}.\hspace{0.5cm} \\ \end{equation} \parindent0.em The generating functional of the 1PI Green functions (the image of $\Gamma_{n-1}$) is given by \begin{equation} \label{BB3} \Gamma_n [\bar\phi_n]\ =\ W_n[J_n] - \int dx J_n(x) \bar\phi_n (x) \hspace{1.5cm},\hspace{0.5cm}\\ \end{equation} where \begin{equation} \bar\phi_n (x)\ =\ {\delta W_n[J_n]\over \delta J_n(x)} \\ \end{equation} and consequently \begin{equation} \label{BB4} {\delta \Gamma_n [\bar\phi_n]\over \delta \bar\phi_n (x)}\ =\ -\ J_n(x) \hspace{1.5cm} .\hspace{0.5cm}\\ \end{equation} The generalization of eq.\ (\ref{BA5}) reads \begin{equation} \label{BB5} {\rm e}^{\displaystyle\ i \Gamma_n [\bar\phi_n]}\ \ =\ C\ {\rm e}^{\displaystyle\ -i\Gamma_{n-1} [0]}\ \int D\phi\ \ {\rm e}^{\displaystyle\ i\Gamma_{n-1} [\phi + \bar\phi_n]\ +\ i \int dx J_n(x) \phi (x)} \hspace{0.3cm} ,\hspace{0.3cm} \end{equation} where the r.h.s.\ of the above equation is again to be calculated at a current $J_n(x)$ which is a functional of $\bar\phi_n $ given by eq.\ (\ref{BB4}), and eqs.\ (\ref{BB5}), (\ref{BB4}) act as functional integro-differential equations for determining $\Gamma_n$ (the same accompanying comment as in section \ref{BS1} applies).
The map $g_3\circ g_2\circ g_1 = f: \Gamma_{n-1} \longrightarrow \Gamma_n$ is explicitly given by eqs.\ (\ref{BB1}) ($g_1: \Gamma_{n-1}\longrightarrow Z_n$), (\ref{BB2}) ($g_2: Z_n\longrightarrow W_n$), and (\ref{BB3}) ($g_3: W_n\longrightarrow \Gamma_n$).\\ \parindent1.5em Consider now iterations of the map $f$ leading to some discrete series of effective actions $\ldots \stackrel{f}{\longrightarrow} \Gamma_{n-1} \stackrel{f}{\longrightarrow}\Gamma_{n} \stackrel{f}{\longrightarrow}\Gamma_{n+1} \stackrel{f}{\longrightarrow}\ldots\ $. This can possibly still be combined with a certain truncation procedure, e.g., one acting on the obtained effective action after each application of the map $f$. It is worth noting that the successive calculation of higher loop contributions to the effective action in standard quantum field theory is such an iteration and truncation procedure. However, for the present purpose we do not consider any truncation procedure. Obviously, the most interesting question one may ask with respect to the iterations of the map $f$ is whether it has any fixed point. It should be expected that the fixed point condition for the map $f$ is not trivially fulfilled for any arbitrary action and should distinguish certain (complete) effective actions.
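The idea of iterating the map can be illustrated in a zero-dimensional toy model (an illustration added here, not part of the original text). In Euclidean conventions a one-step analogue of the map reads $\Gamma_n(\bar\phi) = \Gamma_{n-1}(0) - \ln \int d\phi\ \exp\left(-\Gamma_{n-1}(\phi+\bar\phi) + J\phi\right)$ with the current $J = \Gamma_{n-1}'(\bar\phi)$ held fixed during the integration; the signs and the explicit treatment of $J$ are choices made for this sketch only. The code checks numerically that a Gaussian action is mapped onto itself up to a $\bar\phi$-independent constant, whereas a quartic action is shifted by fluctuation terms:

```python
import math

def f_step(gamma, gamma_prime, phibar, lo=-10.0, hi=10.0, n=4001):
    """One application of the (zero-dimensional, Euclidean) map:
    Gamma_n(phibar) = Gamma_{n-1}(0)
                      - ln  int dphi exp(-Gamma_{n-1}(phi + phibar) + J*phi),
    with the current J = Gamma_{n-1}'(phibar) held fixed."""
    J = gamma_prime(phibar)
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        phi = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * math.exp(-gamma(phi + phibar) + J * phi)
    return gamma(0.0) - math.log(total * h)

# A Gaussian action is a fixed point up to a phibar-independent constant:
m = 2.0
gauss   = lambda p: 0.5 * m * p * p
gauss_p = lambda p: m * p
shift = f_step(gauss, gauss_p, 1.0) - f_step(gauss, gauss_p, 0.0)
print(abs(shift - gauss(1.0)) < 1e-3)

# An interacting (quartic) action is not left invariant by one application:
quart   = lambda p: 0.5 * p * p + p**4 / 24.0
quart_p = lambda p: p + p**3 / 6.0
shift_q = f_step(quart, quart_p, 1.0) - f_step(quart, quart_p, 0.0)
print(abs(shift_q - quart(1.0)) > 1e-2)
```

In this toy setting the image of the quartic action is roughly its one-loop improvement, which illustrates why the fixed point condition is nontrivial precisely for interacting theories.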
Now, we propose that the fixed point condition for the map $f$ defined above yields the equation for the complete effective action we are looking for.\\ The equation for the complete effective action, which is equivalent to the fixed point condition for the map $f$, reads \begin{equation} \label{BB6} {\rm e}^{\displaystyle\ i \Gamma [\bar\phi]}\ \ =\ C\ {\rm e}^{\displaystyle\ -i\Gamma [0]}\ \int D\phi\ \ {\rm e}^{\displaystyle\ i\Gamma [\phi + \bar\phi]\ +\ i \int dx J(x) \phi (x)} \hspace{1.5cm} ,\hspace{0.5cm}\\ \end{equation} \parindent0.cm where \begin{equation} \label{BB7} J(x) \ =\ -\ {\delta \Gamma [\bar\phi]\over \delta \bar\phi (x)} \hspace{1.5cm} .\hspace{0.5cm}\\ \end{equation} Eqs.\ (\ref{BB6}) and (\ref{BB7}) together define a genuine functional integro-differential equation for determining the complete (off-shell) effective action $\Gamma$ of a quantum field theory. Of course, this equation needs to be supplemented by additional information to specify the particular conditions under which it should be solved. Accumulated experience in quantum field theory tells us that in general solutions of eq.\ (\ref{BB6}) -- if there exist any at all -- should be expected to be nonlocal and nonpolynomial functionals $\Gamma$ of the field $\phi$. Optimistically, one might think that the above equation for the complete effective action is sufficiently restrictive in the case of interacting theories to enable us not only to find the structure of the effective action but hopefully also to determine dimensionless parameters it contains (e.g., coupling constants and mass ratios). As concerns its applicability, the eventual range of theories remains to be explored.
But it seems that at least any theory which cannot be understood as being induced by some more fundamental one should be subject to the concept.\\ \parindent1.5em \subsection{\label{BS3}Exploring the Equation} Before we analyze eq.\ (\ref{BB6}) from the conceptual side, let us ask whether it has any solution at all. The answer is that any free field theory solves eq.\ (\ref{BB6}) (in saying so, of course, we neglect the vacuum energy problem). For free field theories the formulation proposed in section \ref{BS2} completely agrees with the standard formulation of quantum field theory displayed in section \ref{BS1}. However, the former obviously differs from the latter for interacting theories. It remains to be seen whether there exists any interacting field theory which solves eq.\ (\ref{BB6}).\\ Now, we will study eq.\ (\ref{BB6}) with respect to its methodological content. Observe that the proposed equation for the complete effective action is exclusively expressed in terms of an (at least in principle) observable quantity, namely the complete effective action, which of course should be finite. This specifies the concept of renormalizable quantum field theory by relying on observable objects only (bare and dressed quantities agree here). In this context, one may wonder whether the conceptual distinction between classical action and effective action is really a productive one. Although any theoretician may extract the classical limit from any solution of eq.\ (\ref{BB6}), one may justifiably ask what this tells an experimental physicist. In reality, vacuum fluctuations cannot be switched off (at best, they can be modified) and the experimentally relevant quantity is the effective action.
Rather, the experimental physicist is interested in the leading (low energy, long distance, low intensity) terms of the derivative expansion of the effective action, but these do not necessarily coincide with what is called the classical action, although they will contain it in most cases. In view of our equation for the complete effective action it also makes limited sense to ask which effective action term is induced and which is not, because eq.\ (\ref{BB6}) is a self-consistency condition.\\ Continuing the above consideration, it should be mentioned that already in standard quantum field theory there is no difference in principle between a certain mode of vacuum fluctuations and macroscopic (external) fields. This is reflected by the insight that the effective action has a dual nature: on the one hand it is considered as the action governing the behaviour of macroscopic (external) fields, and at the same time it is the generating functional of 1PI Green functions, thereby playing a central role in describing vacuum fluctuations. In addition, any particular mode of vacuum fluctuations acts in the background of all of them and merely experiences their total effective impact as described by the complete effective action. Therefore, the path integral construction should not rely on the classical action governing the weight of each path (mode), as is done in standard quantum field theory; rather, the weight of each path (mode) should be determined by the complete effective action expressing the vacuum properties in total. Of course, this involves a certain self-referentiality which finds its adequate formulation in terms of a genuine equation. Concluding, we may say that eq.\ (\ref{BB6}) is the theoretical expression of the dual nature of the complete effective action, being effective action and generating functional at the same time.
In other words, vacuum fluctuations are governed by one and the same action as macroscopic phenomena.\\ Having obtained certain insight into the principles embodied in the proposed equation for the complete effective action, let us now turn to possible methods of its solution. To expect any final answer right now would clearly not be realistic; instead, only a few aspects which come to mind immediately shall be discussed. Although there is no quick answer at hand, one may ask whether the map $f$ has something like a contraction property in a certain neighborhood of a solution of eq.\ (\ref{BB6}). If this is the case one could attempt its solution by iteration. With this concept in mind we will see how the relation of standard quantum field theory to the present formulation can be described. The standard formulation of quantum field theory can be viewed as the first iteration of the map $f$ starting from a certain low energy (local) approximation (the so-called classical action) to the complete effective action. This can be considered as a natural starting point which is expected to be close to a fixed point of the map $f$ for 'experimental' reasons. However, it is clear that in view of eq.\ (\ref{BB6}) even the 'complete' (assuming we had summed up the usual perturbation theory) effective action of standard quantum field theory given by eq.\ (\ref{BA5}) is not the complete one in the sense of eq.\ (\ref{BB6}) but remains just an approximation. The approximation method represented by standard local quantum field theory works reasonably well in lower spacetime dimensions, with considerable effort in 4 dimensions, but it becomes ill defined for most theories in higher dimensions. So, one may consider the properties of a theory with respect to renormalization as information about the possible quality of an approximate solution of eq.\ (\ref{BB6}) obtained from some local Ansatz by iteration of the map $f$.
Quantization of a classical theory can thus be understood as a method for approximately solving eq.\ (\ref{BB6}). However, simple extrapolation of the classical Lagrangian to arbitrarily high energies leads to the well-known UV divergences. \\ For practical (i.e., calculational) purposes the map $f$ is not a very convenient one. Instead, one may use a somewhat simpler map $\tilde f$ which differs from $f$ but, as one may easily see from eq.\ (\ref{BB6}), has the same set of fixed points as $f$. This simpler map $\tilde f : \Gamma_{n-1}\longrightarrow \Gamma_n $ can be given by the following formula. \parindent0.em \begin{equation} \label{BC3} {\rm e}^{\displaystyle\ i \Gamma_n [\bar\phi]}\ \ =\ C\ {\rm e}^{\displaystyle\ -i\Gamma_{n-1} [0]}\ \int D\phi\ \ {\rm e}^{\displaystyle\ i\Gamma_{n-1} [\phi + \bar\phi]\ +\ i \int dx J_{n-1}(x) \phi (x)} \end{equation} The advantage of this formula is that it provides us with a compact and explicit representation of the $\tilde f$-image of $\Gamma_{n-1}$. However, in general an image of this map $\tilde f$ will not have the property of being a generating functional of 1PI Green functions.\\ \parindent1.5em Concluding this section, let us express our view that the proposed equation for the complete effective action embodies a couple of features which seem reasonable and interesting from a physical point of view, and also offers a guideline for a re-evaluation of the established technical approach to quantum field theory and possibly its appropriate modification. From now on we will simply take eq.\ (\ref{BB6}) for granted and consider it as the starting point for further analysis.\\ \newpage \section{\label{CS}Gauge Field Theories} \setcounter{equation}{0} In the present (and in the following) chapter we are going to study the equation for the complete effective action derived in chapter \ref{BS} in the case of gauge field theories.
Although we have gauge field theories in general in mind, here we restrict ourselves to QED and only comment on the case of non-Abelian gauge theories. In doing so it is understood that the Faddeev-Popov procedure used in standard quantum field theory for defining the functional integral measure can be applied in a slightly generalized way also in the present context, in particular taking into account that in general solutions of eq.\ (\ref{BB6}) are nonlocal and the gauge condition to be chosen will be, for convenience, nonlocal likewise.\\ We start by defining the generalized map $f$ for QED. The generating functional $Z$ of the Green functions is \parindent0.cm \begin{eqnarray} &&\hspace{-1.cm}Z_n[J_n,\bar\eta_n,\eta_n]\ =\ C\ {\rm e}^{\displaystyle\ -i\Gamma_{n-1} [0,0,0]} \ \int D\left[a_{\mu}\right]\ D\psi D\bar\psi\ {\rm e}^{\displaystyle\ i\Gamma_{n-1} [a,\psi,\bar\psi]} \ \cdot\hfill\nonumber\\ \vspace{0.2cm}\nonumber\\ \label{C1} &&\cdot\ {\rm e}^{\displaystyle\ i\Gamma_{gf}[a] + \ i \int d^4x \left[ J_{n\mu}(x) a^{\mu} (x) + \bar\eta_n (x) \psi (x) + \bar\psi (x) \eta_n (x)\right]} \hspace{0.3cm},\hspace{0.3cm} \end{eqnarray} where \begin{eqnarray} \label{C2a} \Gamma_{gf}[a] &=& -\ {1\over 2 \lambda}\ \int d^4y\ \left(F[a;y]\right)^2 \ \ \ ,\\ \vspace{0.3cm}\nonumber\\ \label{C2b} &&\hspace{1.cm}F[a;y]\ =\ \int d^4x\ n_\mu(y-x)\ a^\mu(x)\ \ \ . \end{eqnarray} As usual, $\Gamma_{gf}$ is a gauge breaking term containing a linear, homogeneous functional $F$ of $a_{\mu}$ (for the moment $n_\mu$ is an arbitrary but appropriately chosen vector-valued distribution), and the brackets in $D\left[a_{\mu}\right]$ (eq.\ (\ref{C1})) indicate that the Faddeev-Popov determinant has to be taken into account \footnote{It is an almost trivial factor for Minkowski space QED, but already at finite temperature it becomes important.
In addition, always having in mind possible generalization to non-Abelian gauge theories it serves as reminder for this complication then to be considered.}. $\Gamma_{n-1}$ is out of the class of gauge invariant effective actions.\\ \parindent1.5cm Then, the $W$-functional is given by \begin{equation} \label{C3} W_n[J_n,\bar\eta_n,\eta_n]\ =\ -i \ln Z_n[J_n,\bar\eta_n,\eta_n] \hspace{1.5cm},\hspace{0.3cm} \end{equation} \parindent0.cm and the image of $\Gamma_{n-1}$ is \begin{eqnarray} &&\hspace{-0.5cm}\Gamma_n [A_n,\Psi_n,\bar\Psi_n]\ =\ \hfill\nonumber\\ \vspace{0.2cm}\nonumber\\ \label{C4} &&=\ W_n[J_n,\bar\eta_n,\eta_n] - \int d^4x \left[ J_{n\mu}(x) A_n^{\mu} (x) + \bar\eta_n (x) \Psi_n (x) + \bar\Psi_n (x) \eta_n (x)\right] \hspace{0.3cm}.\hspace{0.3cm} \end{eqnarray} Again, we have the relations \begin{eqnarray} \label{C5} A_{n\mu} (x)\ &=\ {\displaystyle{\delta W_n[J_n,\bar\eta_n,\eta_n] \over \delta\ J_n^{\mu}(x)}}\ \ \ ,\hspace{0.5cm} {\displaystyle{\delta \Gamma_n [A_n,\Psi_n,\bar\Psi_n] \over \delta\ A_n^{\mu} (x)}}\ &=\ -\ J_{n\mu}(x) \hspace{0.5cm},\hspace{0.5cm}\\ \vspace{0.2cm}\nonumber\\ \label{C7} \Psi_n (x)\ &=\ {\displaystyle{\delta W_n[J_n,\bar\eta_n,\eta_n] \over\delta\ \bar\eta_n (x)}}\ \ \ ,\hspace{0.5cm} {\displaystyle{\delta \Gamma_n [A_n,\Psi_n,\bar\Psi_n] \over \delta\ \Psi_n (x)}}\ &=\ \bar\eta_n (x) \hspace{0.5cm},\hspace{0.5cm}\\ \vspace{0.2cm}\nonumber\\ \label{C9} \bar\Psi_n (x)\ &=\ -\ {\displaystyle{\delta W_n[J_n,\bar\eta_n,\eta_n] \over\delta\ \eta_n (x)}}\ \ \ ,\hspace{0.5cm} {\displaystyle{\delta \Gamma_n [A_n,\Psi_n,\bar\Psi_n] \over \delta\ \bar\Psi_n (x)}}\ &=\ -\ \eta_n (x) \hspace{0.5cm}.\hspace{0.5cm} \end{eqnarray} \parindent1.5em Performing now shifts in the integration variables we find \begin{eqnarray} &&\hspace{-0.7cm}{\rm e}^{\displaystyle\ i \Gamma_n [A_n,\Psi_n,\bar\Psi_n]}\ \ =\hfill\nonumber\\ \vspace{0.3cm}\nonumber\\ &&=\ C\ {\rm e}^{\displaystyle\ -i\Gamma_{n-1} [0,0,0]}\ \int D\left[a_{\mu}\right]\ D\psi 
D\bar\psi\ \ {\rm e}^{\displaystyle\ i\Gamma_{n-1} [a + A_n,\psi + \Psi_n,\bar\psi + \bar\Psi_n]}\ \ \cdot \nonumber\\ \vspace{0.3cm}\nonumber\\ \label{C11} &&\hspace{0.8cm}\cdot\ {\rm e}^{\displaystyle\ i\Gamma_{gf}[a + A_n] +\ i \int d^4x \left[ J_{n\mu}(x) a^{\mu} (x) + \bar\eta_n (x) \psi (x) + \bar\psi (x) \eta_n (x)\right]} \end{eqnarray} \parindent0.em describing the map $f$ from the gauge invariant effective action $\Gamma_{n-1}$ to its image $\Gamma_n$. From the discussion leading to the background field method in gauge field theories we know that $\Gamma_n$ is in general not gauge invariant because, as one easily recognizes from eq.\ (\ref{C11}), the shift in the gauge field integration interferes with the gauge fixing term for the quantum fluctuations \footnote{Further features met in non-Abelian gauge field theories may be disregarded here.}. This is remedied in standard quantum field theory by starting in eq.\ (\ref{C1}) with a modified gauge fixing term $\Gamma_{gf}[a - A]$ and fixing the field $A_\mu$ to obey $A_\mu = A_{n\mu}$ (cf.\ \cite{c} and references therein). But in our approach the application of this procedure would entail that the map $f$ (in particular, the gauge condition for the quantum fluctuations) would have to be modified in each iteration step, depending on the actual shape (gauge) of $A_{n\mu}$, i.e., of $F[A_n - A;y]$. While in standard quantum field theory $A_{n\mu}$ can be understood as some fixed background field (essentially, this makes the background field method acceptable), our situation is worse because $A_{n\mu}$ also contains pieces of arbitrary vacuum fluctuations to be integrated over later on.
There is only one safe way to ensure that the gauge for $A_{n\mu}$ and that for the vacuum fluctuations $a_\mu$ do not interfere in a gauge dependent way (i.e., that the shift in the argument of the gauge field integration does not interfere with the gauge fixing term), namely one has to choose for $A_{n\mu}$ the gauge \begin{eqnarray} \label{C11a} F[A_n;y]\ =\ 0\ \ \ . \end{eqnarray} If $A_{n\mu}$ is a sum of independent pieces, condition (\ref{C11a}) applies to each component because $F$ is linear and homogeneous. Now, as already mentioned, in general $A_{n\mu}$ contains pieces of vacuum fluctuations to be integrated over in further iterations; consequently, we have to impose condition (\ref{C11a}) on these vacuum fluctuations as well. This argument of course applies to each iteration step of the map $f$, and therefore the only consistent gauge is the generalized Landau gauge $\lambda = 0$. So, a 'sharp' gauge has to be imposed on all gauge fields, on external fields as well as on vacuum fluctuations, i.e., the whole system of functional relations is bound to one definite gauge. Of course, the gauge functional $F$ can be chosen as convenience may require, and the full gauge invariant effective action $\Gamma_n$ consequently is obtained by letting $F$ vary. The conclusion that only the generalized Landau gauge leads to sensible and invariant results agrees well with investigations dealing with the concept of the unique (geometrical) effective action \cite{frad}--\cite{nach}.\\ \parindent1.5em From eq.\ (\ref{C11}) we now read off the equation for the complete (gauge invariant) effective action of QED.
\begin{eqnarray} &&\hspace{-1.5cm}{\rm e}^{\displaystyle\ i \Gamma [A,\Psi,\bar\Psi]}\ \ =\hfill\nonumber\\ \vspace{0.3cm}\nonumber\\ &&=\ C\ {\rm e}^{\displaystyle\ -i\Gamma [0,0,0]}\ \int D\left[a_{\mu}\right]\ D\psi D\bar\psi\ \ {\rm e}^{\displaystyle\ i\Gamma [a + A,\psi + \Psi,\bar\psi + \bar\Psi]}\ \ \cdot \nonumber\\ \vspace{0.3cm}\nonumber\\ \label{C12} &&\hspace{1.0cm}\cdot\ {\rm e}^{\displaystyle\ i\Gamma_{gf}[a] +\ i \int d^4x \left[ J_{\mu}(x) a^{\mu} (x) + \bar\eta (x) \psi (x) + \bar\psi (x) \eta (x)\right]}\\ \vspace{0.5cm}\nonumber\\ &&\hspace{7.cm}F[A;y]\ =\ 0\ ;\ \ \lambda\ \longrightarrow\ 0\nonumber \end{eqnarray} \parindent0.em In any explicit calculation we will always leave the gauge parameter $\lambda$ unfixed because this allows one to keep better track of the terms involved; in the final results one may then simply set $\lambda = 0$ to find the correct answer.\\ \parindent1.5em Having defined the notation above, we are now prepared to study Ward-Takahashi identities and Schwinger-Dyson equations within the present formulation of QED.\\ \subsection{\label{CS1}Ward-Takahashi Identities} In standard QED the derivation of Ward-Takahashi identities relies merely on the fact that the classical action is gauge invariant. Therefore, generalization of this consideration to the present formulation is straightforward and the reasoning proceeds without any major formal difference from the standard approach. Here, for convenience we will closely follow ref.\ \cite{d}, sect.\ 7.4., as an appropriate textbook treatment.\\ Consider in eq.\ (\ref{C1}) an infinitesimal gauge transformation \parindent0.cm \begin{equation} \label{CA1} a_{\mu}\ \longrightarrow\ a_{\mu} + \partial_{\mu}\Lambda\ \ , \hspace{0.5cm}\psi\ \longrightarrow\ \psi - ie\Lambda\ \psi \ , \hspace{0.5cm}\bar\psi\ \longrightarrow\ \bar\psi + ie\Lambda\ \bar\psi\ \ .
\end{equation} Then, we obtain to first order in $\Lambda (x)$ (recall that $F$ was chosen as a linear functional) \begin{eqnarray} &&\hspace{-1.5cm}\left\{\ {1\over\lambda} \ _x\partial_{\mu} \int d^4y\ {\delta F[A_n;y]\over\delta\ A_{n\mu}(x)}\ F\left[-i{\delta\over\delta J_n};y\right] -\ \partial_{\mu} J_n^{\mu}(x)\ - \right. \hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ \label{CA2} &&\left. -\ e \left( \bar\eta_n(x)\ {\delta\over\delta\bar\eta_n(x)} - \eta_n(x)\ {\delta\over\delta\eta_n(x)}\right)\right\}\ Z_n[J_n,\bar\eta_n,\eta_n]\ =\ 0 \hspace{0.5cm}.\hspace{0.5cm} \end{eqnarray} By means of eqs.\ (\ref{C3})-(\ref{C9}) the above equation yields \begin{eqnarray} &&\hspace{-1.5cm}{1\over\lambda} \ _x\partial_{\mu} \int d^4y\ {\delta F[A_n;y]\over\delta\ A_{n\mu}(x)}\ F[A_n;y] \ +\ \partial_{\mu} {\displaystyle{\delta \Gamma_n [A_n,\Psi_n,\bar\Psi_n]\over \delta\ A_{n\mu} (x)}}\ + \hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ \label{CA3} &&+\ ie\ \Psi_n\ {\displaystyle{\delta \Gamma_n [A_n,\Psi_n,\bar\Psi_n]\over \delta\ \Psi_n (x)}}\ -\ ie\ \bar\Psi_n\ {\displaystyle{\delta \Gamma_n [A_n,\Psi_n,\bar\Psi_n]\over \delta\ \bar\Psi_n (x)}}\ =\ 0 \hspace{0.5cm}.\hspace{0.5cm} \end{eqnarray} From this equation different Ward-Takahashi identities can be derived. As a standard example let us consider the following.
Taking functional derivatives of eq.\ (\ref{CA3}) with respect to $\bar\Psi_n (z^\prime)$, $\Psi_n (z)$ and then setting $\bar\Psi_n = \Psi_n = A_{n\mu} = 0$ one finds \begin{eqnarray} &&\hspace{-1.3cm} \ _x\partial_{\mu}\ {\displaystyle{\delta^3 \Gamma_n [A_n,\Psi_n,\bar\Psi_n]\over \delta\bar\Psi_n (z^\prime) \ \delta\Psi_n (z)\ \delta A_{n\mu} (x)}}\ \Bigg\vert_{\bar\Psi_n = \Psi_n = A_{n\mu} = 0}\ = \hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ \label{CA4} &&\hspace{-1.2cm}=\ ie\ \left\{\delta^{(4)} (x-z^\prime)\ {\displaystyle{\delta^2\Gamma_n [0,\Psi_n,\bar\Psi_n]\over \delta\bar\Psi_n (z^\prime)\ \delta\Psi_n (z)}}\ -\ \delta^{(4)} (x-z)\ {\displaystyle{\delta^2 \Gamma_n [0,\Psi_n,\bar\Psi_n]\over \delta\bar\Psi_n (z^\prime)\ \delta\Psi_n (z)}} \right\}_{\bar\Psi_n = \Psi_n = 0}\hspace{-0.8cm}.\ \end{eqnarray} With \begin{eqnarray} &&\hspace{-1.5cm}\int d^4x\ d^4z\ d^4z^\prime\ {\rm e}^{i(p^\prime z^\prime - pz - qx)}\ {\displaystyle{\delta^3 \Gamma_n [A_n,\Psi_n,\bar\Psi_n]\over \delta\bar\Psi_n (z^\prime) \ \delta\Psi_n (z)\ \delta A_n^{\mu} (x)}}\ \Bigg\vert_{\bar\Psi_n = \Psi_n = A_{n\mu} = 0}\ =\hfill\nonumber\\ \vspace{1.1cm}\nonumber\\ \label{CA5} &&\hspace{5.cm}=\ e\ (2\pi )^4\ \delta^{(4)} (p^\prime - p - q)\ \tilde\Gamma_{n\mu}(p,q,p^\prime) \end{eqnarray} and \begin{equation} \label{CA6} \int d^4z\ d^4z^\prime\ {\rm e}^{i(p^\prime z^\prime - pz)}\ {\displaystyle{\delta^2 \Gamma_n [0,\Psi_n,\bar\Psi_n]\over\delta\Psi_n (z) \ \delta\bar\Psi_n (z^\prime)}}\ \Bigg\vert_{\bar\Psi_n = \Psi_n = 0}\ =\ (2\pi )^4\ \delta^{(4)} (p^\prime - p)\ \tilde S_n^{-1}(p) \end{equation} eq.\ (\ref{CA4}) yields the well-known Ward-Takahashi identity \begin{equation} \label{CA7} q^{\mu}\ \tilde\Gamma_{n\mu}(p,q,p + q)\ =\ \tilde S_n^{-1}(p + q)\ -\ \tilde S_n^{-1}(p) \hspace{0.5cm}.\hspace{0.5cm} \end{equation} We have seen that each image $\Gamma_n$ of the map $f$ respects the Ward-Takahashi identity (\ref{CA7}) which is a consequence of the gauge invariance of its 
counter image $\Gamma_{n-1}$ (Beyond this property the counter image $\Gamma_{n-1}$ does not show up explicitly.). This in particular is also true for any solution of eq.\ (\ref{C12}).\\ \parindent1.5em Now, one may convince oneself that also in non-Abelian gauge field theories the derivation of generalized Ward identities (i.e., Slavnov-Taylor identities; cf.\ \cite{b}, sect.\ IV.7) remains unchanged and that they also hold at each step of any iteration of the map $f$. Violation of these (generalized) Ward identities (i.e., if anomalies occur) means that the equation for the complete effective action of such a theory will not have any solution. To see this, note that the existence of an anomaly would entail that the image $\Gamma_n = f(\Gamma_{n-1})$ of an action has a different behaviour from its counter image $\Gamma_{n-1}$, thus blocking any attempt to solve the equation. In this sense, the well-known model builders' requirement of anomaly cancellation (cf.\ \cite{d}, sect.\ 9.10., e.g.) can be understood as a solvability condition for the functional integral equation for the complete effective action of a theory under consideration \footnote{Of course, as in standard quantum field theory this concerns only dynamical fields.}.\\ \subsection{\label{CS2}Schwinger-Dyson Equations} Let us start the study of Schwinger-Dyson equations with a comment concerning the nature of these equations in standard quantum field theory. They represent a chain of hierarchical equations connecting (1PI) Green functions of a theory and can be seen as a formulation standing in a certain equivalence to the functional integral representation given (for a scalar theory) by eqs.\ (\ref{BA1})-(\ref{BA3}).
However, as we have argued in section \ref{BS1}, the effective action $\Gamma$ is merely the image of the classical action $\Gamma_0$ with respect to the map $f$, and therefore Schwinger-Dyson equations are formulas to be viewed as a device for tackling the calculational complexity met in explicitly determining the effective action $\Gamma$, rather than genuine equations (Whether beyond this they also admit other solutions should not be further considered here.). From this it is clear that Schwinger-Dyson equations can be understood as a kind of representation of the map $f$, and they can also be formulated for the map $f$ acting in the extended domain of effective actions in general. Only if we impose the fixed point condition for the map $f$ do the Schwinger-Dyson equations turn out to be genuine equations corresponding to the equation for the complete effective action (\ref{BB6}).\\ In the present section we study QED Schwinger-Dyson equations for the map $f$ acting in the extended domain of (gauge invariant) effective actions. Only at the end will we specialize the result to the fixed point condition for the map $f$. For convenience, in deriving Schwinger-Dyson equations here we follow the textbook treatment given in ref.\ \cite{e}, sect.\ 10.1, as far as possible.\\ \parindent0.cm First we exploit the gauge field integration. From eq.\ (\ref{C1}) we find \begin{eqnarray} &&\hspace{-2.cm}\left\{ J_{n\mu}(x)\ +\ {\displaystyle{\delta \Gamma_{gf}\over\delta A_n^{\mu}(x)}} \left[ -i{\delta\over\delta J_n} \right]\ \right.+\ \hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ \label{CB1} &&+\ \left.
{\displaystyle{\delta \Gamma_{n-1}\over\delta A_n^{\mu}(x)}} \left[ -i{\delta\over\delta J_n},-i{\delta\over\delta \bar\eta_n}, i{\delta\over\delta \eta_n} \right]\ \right\} \ Z_n[J_n,\bar\eta_n,\eta_n]\ =\ 0 \hspace{0.3cm}.\hspace{0.3cm} \end{eqnarray} Splitting $\Gamma_{n-1}$ into a free (quadratic) and an interaction part (denoted by $\Gamma^{(int)}_{n-1}$) we obtain \begin{eqnarray} &&\hspace{-1.5cm} - {\displaystyle{\delta\Gamma_n [A_n,\Psi_n,\bar\Psi_n] \over\delta\ A_n^{\mu} (x)}}\ -\ {1\over\lambda} \int d^4x^\prime\ {\delta F[A_n;x^\prime]\over\delta\ A^{n\mu}(x)}\ F\left[A_n;x^\prime\right]\ + \hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ &&\hspace{-1.5cm} + \int d^4x^\prime\ D_{n-1 \mu\nu}^{-1} (x-x^\prime )\ A_n^{\nu} (x^\prime) + \hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ \label{CB2} &&\hspace{-1.5cm} +\ {\rm e}^{\displaystyle -i W_n[J_n,\bar\eta_n,\eta_n]}\ \ {\displaystyle{\delta\Gamma_{n-1}^{(int)}\over\delta A_n^{\mu}(x)}} \left[ -i{\delta\over\delta J_n},-i{\delta\over\delta\bar\eta_n}, i{\delta\over\delta\eta_n} \right]\ {\rm e}^{\displaystyle\ i W_n[J_n,\bar\eta_n,\eta_n]}\ \ =\ 0 \end{eqnarray} with \begin{equation} \label{CB3} {\displaystyle{\delta^2 \Gamma_{m} [A,0,0]\over \delta A^{\mu}(x)\ \delta A^{\nu}(x^\prime)}}\ \Bigg\vert_{A = 0}\ =\ D_{m\ \mu\nu}^{-1} (x-x^\prime ) \hspace{0.3cm}.\hspace{0.3cm} \end{equation} Taking a functional derivative with respect to $A_{n\nu}$ and setting $\bar\eta_n = \eta_n = J_n = 0$ (and equivalently $\bar\Psi_n = \Psi_n = A_{n\mu} =0$) eq.\ (\ref{CB2}) yields \begin{eqnarray} &&\hspace{-1.2cm} - D_{n\ \mu\nu}^{-1} (x-x^\prime )\ +\ D_{n-1\ \mu\nu}^{-1} (x-x^\prime )\ -\ {\rm e}^{\displaystyle -i W_n[0,0,0]}\ \int d^4z\ D_{n-1\ \nu\lambda}^{-1} (x^\prime -z)\ \cdot\hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ \label{CB4} &&\hspace{-1.2cm}\cdot\ {\displaystyle{\delta \Gamma_{n-1}^{(int)}\over\delta A_n^{\mu}(x)}} \left[ -i{\delta\over\delta J_n},-i{\delta\over\delta\bar\eta_n}, i{\delta\over\delta\eta_n} 
\right]\ {\displaystyle{\delta\over\delta J_{n\lambda}(z)}}\ {\rm e}^{\displaystyle\ i W_n[J_n,\bar\eta_n,\eta_n]}\ \Bigg\vert_{\bar\eta_n = \eta_n = J_n = 0}\ =\ 0\ . \end{eqnarray} Finally, the fixed point condition for the map $f$ leads to the following Schwinger-Dyson equation. \begin{equation} \label{CB5} {\displaystyle{\delta\ \Gamma^{(int)}\over\delta A^{\mu}(x)}} \left[ -i{\delta\over\delta J},-i{\delta\over\delta\bar\eta}, i{\delta\over\delta\eta} \right]\ {\displaystyle{\delta\over\delta J^{\nu}(z)}}\ {\rm e}^{\displaystyle\ i W[J,\bar\eta,\eta]}\ \Bigg\vert_{\bar\eta = \eta = J = 0}\ =\ 0 \end{equation} Let us now exploit the fermionic integration. Likewise we obtain from eq.\ (\ref{C1}) \begin{equation} \label{CB6} \left\{ \eta_n (x)\ +\ {\displaystyle{\delta \Gamma_{n-1}\over\delta\bar\Psi_n(x)}} \left[ -i{\delta\over\delta J_n},-i{\delta\over\delta \bar\eta_n}, i{\delta\over\delta \eta_n} \right]\ \right\} \ Z_n[J_n,\bar\eta_n,\eta_n]\ =\ 0 \hspace{0.3cm}.\hspace{0.3cm} \end{equation} Again, splitting $\Gamma_{n-1}$ into a free (quadratic) and an interaction part we find \begin{eqnarray} &&\hspace{-1.5cm} - {\displaystyle{\delta \Gamma_n [A_n,\Psi_n,\bar\Psi_n] \over \delta\ \bar\Psi_n (x)}} \ +\ \int d^4x^\prime\ S_{n-1}^{-1} (x-x^\prime )\ \Psi_n (x^\prime) \ + \hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ \label{CB7} &&\hspace{-1.5cm}+\ {\rm e}^{\displaystyle -i W_n[J_n,\bar\eta_n,\eta_n]}\ {\displaystyle{\delta \Gamma_{n-1}^{(int)}\over\delta\bar\Psi_n (x)}} \left[ -i{\delta\over\delta J_n},-i{\delta\over\delta\bar\eta_n}, i{\delta\over\delta\eta_n} \right]\ {\rm e}^{\displaystyle\ i W_n[J_n,\bar\eta_n,\eta_n]}\ \ =\ 0 \end{eqnarray} with \begin{equation} \label{CB8} {\displaystyle{\delta^2 \Gamma_{m} [0,\Psi,\bar\Psi]\over \delta\Psi (x^\prime)\ \delta\bar\Psi (x)}}\ \Bigg\vert_{\bar\Psi = \Psi = 0}\ =\ S_{m}^{-1} (x-x^\prime ) \hspace{0.3cm}.\hspace{0.3cm} \end{equation} Taking a functional derivative with respect to $\Psi_n$ and setting 
$\bar\eta_n = \eta_n = J_n = 0$ (and equivalently $\bar\Psi_n = \Psi_n = A_{n\mu} = 0$) eq.\ (\ref{CB7}) yields \begin{eqnarray} &&\hspace{-1.5cm} -\ S_n^{-1} (x-x^\prime )\ +\ S_{n-1}^{-1} (x-x^\prime ) \ -\ {\rm e}^{\displaystyle -i W_n[0,0,0]}\ \int d^4z\ S_{n-1}^{-1} (z-x^\prime )\ \cdot\hfill\nonumber\\ \vspace{0.5cm}\nonumber\\ \label{CB9} &&\hspace{-1.5cm}\cdot\ {\displaystyle{\delta\Gamma_{n-1}^{(int)}\over\delta\bar\Psi_n(x)}} \left[ -i{\delta\over\delta J_n},-i{\delta\over\delta\bar\eta_n}, i{\delta\over\delta\eta_n} \right]\ {\displaystyle{\delta\over\delta\eta_n (z)}}\ \ {\rm e}^{\displaystyle\ i W_n[J_n,\bar\eta_n,\eta_n]}\ \ \Bigg\vert_{\bar\eta_n = \eta_n = J_n = 0}\hspace{-0.4cm} =\ 0 \ .\ \end{eqnarray} The fixed point condition for the map $f$ then leads to the following Schwinger-Dyson equation. \begin{equation} \label{CB10} {\displaystyle{\delta\Gamma^{(int)}\over\delta\bar\Psi (x)}} \left[ -i{\delta\over\delta J},-i{\delta\over\delta\bar\eta}, i{\delta\over\delta\eta} \right]\ {\displaystyle{\delta\over\delta\eta (z)}}\ \ {\rm e}^{\displaystyle\ i W[J,\bar\eta,\eta]}\ \ \Bigg\vert_{\bar\eta = \eta = J = 0}\ \ =\ 0 \end{equation} \parindent1.5em The Schwinger-Dyson equations (\ref{CB4}), (\ref{CB5}), (\ref{CB9}), (\ref{CB10}) cannot be studied further unless the interaction part of the effective action has been specified, at least in a certain approximation. This in particular also concerns the final transition to relations between 1PI Green functions, which hinges on this information.
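The structural content of the fixed point equations (\ref{CB5}), (\ref{CB10}) --- the functional integral of a total derivative vanishes, so that $\langle \delta\Gamma/\delta\varphi - J \rangle = 0$ --- can be made tangible in a zero-dimensional toy model, where the path integral reduces to an ordinary integral. The following sketch is purely our own illustration; the Euclidean toy action $S(\varphi)$, coupling $g$ and source $J$ are invented for the demonstration and are not part of the formalism above.

```python
# Zero-dimensional (Euclidean) toy check of the Schwinger-Dyson identity:
# integrating a total derivative over the whole real line gives <S'(phi) - J> = 0.
# The action S, coupling g and source J are illustrative choices only.
from scipy.integrate import quad
import numpy as np

g, J = 0.5, 0.3

def S(phi):          # toy action: mass term plus quartic self-interaction
    return 0.5 * phi**2 + g * phi**4 / 24.0

def Sprime(phi):     # S'(phi)
    return phi + g * phi**3 / 6.0

def weight(phi):     # Boltzmann weight with source term
    return np.exp(-S(phi) + J * phi)

lhs, _ = quad(lambda p: (Sprime(p) - J) * weight(p), -np.inf, np.inf)
Z, _ = quad(weight, -np.inf, np.inf)
print(abs(lhs / Z))  # vanishes up to quadrature error
```

The identity holds exactly because the boundary terms at $\varphi \to \pm\infty$ vanish; the numerical residue is pure quadrature error.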
Therefore, presently it remains open how useful this kind of representation of the map $f$ will be in future investigations.\\ \newpage \section{\label{DS}QED --- An Approximative Approach to the Equation for the Complete Effective Action} \setcounter{equation}{0} Besides the structural investigation of the equation for the complete effective action, it appears of most interest whether the proposed approach enables us to extract concrete information for specific models that is not at all, or not easily, obtainable by established standard methods. We will focus here on QED, a realistic physical theory which at the same time is of major theoretical interest as a simple prototype of a gauge field theory. The aim of this chapter is to demonstrate that the present approach indeed admits explicitly finding certain information about the complete effective action of QED that, in addition, can be seen to be of a nonperturbative nature. Of course, the concrete study of the equation for the complete effective action of QED (eq.\ (\ref{C12})) cannot be expected to be rigorous for the time being. It will be necessary to apply an approximation which, however, in certain respects should circumvent some of the problems appearing in standard quantum field theory. In particular, as far as possible we will take care that no inappropriate approximation giving rise to UV divergences is introduced. Although most approximations we will exploit in this chapter can be expected to be reasonable for small values of the QED coupling constant $\alpha$, the explicit calculation we will undertake has to be understood in the first place as a model game to test in principle the calculational accessibility of the concept proposed. As a particular application of the new concept we will explicitly study how to determine the coupling constant $\alpha$ (i.e., the theoretical value of the fine structure constant), understood as one of the characteristics of a fixed point of the map $f$.
This is done using certain simple approximations (capable of future improvement) which in the end, however, turn out to be still too simple to succeed numerically.\\ The general approximative approach relied on in the present chapter will be as follows.\\ \subsection{\label{DS1}The Approximative Approach in General} We will study one iteration of the map $f$ starting from a certain Ansatz $\Gamma_I$ which is mapped by means of $f$ to its image $\Gamma_{II}$. The gauge invariant Ansatz for $\Gamma_I$ is chosen as a natural generalization of the so-called classical action $\Gamma_0$ (to obtain the latter, replace $d_I, a_I, b_I$ by delta functions) which is the starting point for standard QED perturbation theory. \parindent0.em \begin{eqnarray} \label{D1} \Gamma_I [A,\Psi,\bar\Psi]\ &=&\ \Gamma_I^{G} [A]\ +\ \Gamma_I^{F} [A,\Psi,\bar\Psi]\\ \vspace{0.3cm}\nonumber\\ \label{D2} \Gamma_I^{G} [A]\ &=&\ {1\over 2}\ \int d^4x\ d^4x^\prime\ A^\mu(x)\ \cdot\nonumber\\ &&\hspace{1.5cm}\cdot\ \left[ g_{\mu\nu}\ _x\Box - \ _x\partial_\mu \ _x\partial_\nu\right] d_I\left(x-x^\prime\right)\ A^\nu(x^\prime)\\ \vspace{0.3cm}\nonumber\\ \label{D3} \Gamma_I^{F} [A,\Psi,\bar\Psi]\ &=&\ \int d^4x\ d^4x^\prime\ \ \bar\Psi (x) \ \ {\rm e}^{\displaystyle\ ie \int^{x^\prime}_x dy_\mu\ A^\mu(y)}\ \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}\cdot\ \left[ a_I\left(x-x^\prime\right) \ \left( i\not\hspace{-0.07cm}\partial_{x^\prime} - e \not\hspace{-0.13cm}A(x^\prime) \right)\ -\ m\ b_I\left(x-x^\prime\right) \right]\Psi (x^\prime) \end{eqnarray} $m$ is the electron mass, $d_I$, $a_I$, $b_I$ are functions (distributions) arbitrary for the moment, and the gauge functional $F$ appearing in eq.\ (\ref{C11}) is to be chosen later in a way appropriate and convenient for explicit calculation \footnote{For future purposes we are introducing the notation \begin{eqnarray*} \Gamma^F_I [A,\Psi,\bar\Psi]&=&\int d^4x\ d^4x^\prime\ \ \bar\Psi (x)\ S_I^{-1}[A](x,x^\prime)\ \Psi (x^\prime)\\
\vspace{0.3cm}\\ S_I^{-1}[0](x,x^\prime)\ =\ S_I^{-1}(x-x^\prime)\ &=&\ i\ a_I\left(x-x^\prime\right)\not\hspace{-0.07cm}\partial_{x^\prime}\ -\ m\ b_I\left(x-x^\prime\right)\hspace{0.5cm}. \end{eqnarray*} In general, we will alternatively write $l(x)$ or $l(r)$, $r = -m^2 x^2 $, for one and the same function, which however will not lead to any confusion in the respective context (all functions $l(x)$ we are studying depend on $x$ via $x^2$ only; $l$ stands here for $d$, $a$, $b$). Fourier transforms are defined for $l(x)$ by \begin{eqnarray*} l(x)\ =\ \int {d^4p\over (2\pi)^4}\ \ {\rm e}^{\ ipx}\ \ \tilde l(p) \end{eqnarray*} and equivalently we use the notation $\tilde l(p)$ and $\tilde l(s)$, $s = - {p^2\over m^2}$, for one and the same function respectively.}. Furthermore, the line integration in the phase factor in eq.\ (\ref{D3}) is understood to be performed along a straight line connecting the starting and end points. Eq.\ (\ref{D3}) is written in such a shape as to keep contact with standard QED ($\tilde a\ =\ \tilde b\ \equiv\ 1$) as close as possible.\\ \parindent1.5em Finally, the equation for the complete effective action (\ref{C12}) will be taken into account in such a way that we require at the end $d_I = d_{II}$, $a_I = a_{II}$, $b_I = b_{II}$, at least in some approximation. All new structures of $\Gamma_{II}$ not appearing in the Ansatz $\Gamma_I$ will be viewed as induced ones within this approximation and remain beyond the scope of present interest.\\ It should be mentioned that an Ansatz similar to eq.\ (\ref{D3}) (with $a_I = b_I$) has unsuccessfully been explored earlier within the framework of nonlocal QED by {\sc Chr\'etien} and {\sc Peierls} \cite{f} (see also \cite{peie}). For a discussion and an explanation of the failure of this attempt turn to \cite{scharn2}.
With reference to \cite{f}, the action (\ref{D3}) has also recently been studied in a context (effective Lagrangians in nuclear theory) different from ours \cite{ohta}--\cite{tern}.\\ \subsection{\label{DS2}Designing an Approximation Strategy} After having spelt out above what general kind of approximative approach we are going to rely on, we now need to translate it into operational terms which are fundamental to the explicit calculation we are aiming at. So far, $d_I$, $a_I$, $b_I$ are understood as completely arbitrary, and clearly it is difficult to perform an explicit calculation based on such a general Ansatz. Therefore, below we will first discuss whether the most general Ansatz for $d_I$, $a_I$, $b_I$ can sensibly be restricted to a certain subclass in which the final solution can be searched for. Of particular interest is whether these distributions can adequately be modelled by means of local operators. Let us start with the consideration of $a_I$, $b_I$ characterizing the fermion action $\Gamma_I^F$.\\ \subsubsection{\label{DS21}Consequences of Gauge Invariance for the Kernel of the Fer\-mion Action $\Gamma_I^F$} One of the crucial solvability conditions of eq.\ (\ref{C12}) is that the map $f$ should not violate gauge invariance. This in particular entails that the map $f$ must not induce any mass term for the gauge field $A_\mu$. Even a finite non-vanishing coefficient of such a mass term is not allowed, not to speak of infinite ones, which are pushed aside in standard QED by applying a gauge invariant regularization.
Inasmuch as here we are aiming at finite solutions of the equation for the complete effective action (i.e., some approximation to it) even in a gauge non-invariant regularization scheme (like cut-off regularization), mass terms should not survive after lifting the regularization.\\ In the following we study restrictions arising from gauge invariance on the possible behaviour of the so far arbitrary kernel $S^{-1}_I$ of the fermion action $\Gamma_I^F$. In order to look for a mass term of the gauge field $A_\mu$ we restrict ourselves to the class of constant gauge potentials $A_\mu (x)\ =\ e^{-1}k_\mu\ \equiv\ const.$, the consideration of which is sufficient for this purpose. For this simple background $\Gamma_{II}$ is given by the determinant of $S^{-1}_I$ in the presence of the constant background $k_\mu$, which can be viewed in momentum space representation as a constant external momentum. Because we cannot assume from the very beginning that the result in eq.\ (\ref{DF2}) below will be finite (this is related to the vacuum energy problem, which we will not consider in this chapter) we are barred from simply using a shift $p\longrightarrow p-k$ (which would make the dependence on $k$ vanish at once; this would only be applicable in a gauge invariant regularization). The effective action reads \parindent0.em \begin{eqnarray} \label{DF1} \Gamma_{II} [e^{-1} k,0,0]\ &=&\ const.\ - \ i\ln\ {\rm Det}_\Lambda\ \left( S^{-1}_I[e^{-1} k]\right)\\ \label{DF2} &=&\ const.\ -\ 2i\ V_4 \int\limits_\Lambda {d^4p\over (2\pi)^4}\ \ h\left(-(p+k)^2\over m^2 \right) \end{eqnarray} where \begin{equation} \label{DF3} h(s)\ =\ \ln\left[ \ s\ \tilde a_I^2 (s)\ +\ \tilde b_I^2 (s)\ \right] \hspace{0.5cm},\hspace{0.5cm} s\ \ =\ \ - {p^2\over m^2}\ \ =\ \ {p_E^2\over m^2}\ \ \ . \end{equation} The subscript $\Lambda$ in eqs.\ (\ref{DF1}), (\ref{DF2}) indicates that we apply a cut-off regularization with a (radial) momentum space UV cut-off at $\Lambda$.
The subscript $E$ in eq.\ (\ref{DF3}) refers to the (Wick rotated) Euclidean momentum variable.\\ \parindent1.5em Now, let us further transform the integral appearing in eq.\ (\ref{DF2}). First we perform a Wick rotation, and then we expand the integrand in powers of $k$ (up to $O(k^4)$; the notation is $h^\prime= d/ds\ h$). \parindent0.em \begin{eqnarray} \label{DF4} \hspace{-0.5cm}\int\limits_\Lambda d^4p_E\ h\left( {(p_E+k_E)^2\over m^2}\right)\ &=& \ \int\limits_\Lambda d^4p\ \left\{ \ h(s) \ +\ 2\ {pk\over m^2}\ h^\prime (s) \ +\ {k^2\over m^2}\ h^\prime (s)\ +\right.\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.25cm}+\ 2\ {(pk)^2\over m^4}\ h^{\prime\prime} (s)\ +\ 2\ {k^2\ pk\over m^4}\ h^{\prime\prime} (s)\ + \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.25cm} +\ {4\over 3}\ {(pk)^3\over m^6}\ h^{\prime\prime\prime} (s)\ +\ {1\over 2}\ {(k^2)^2\over m^4}\ h^{\prime\prime} (s)\ +\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.25cm} \left. +\ 2\ {k^2(pk)^2\over m^6}\ h^{\prime\prime\prime} (s)\ +\ {2\over 3}\ {(pk)^4\over m^8}\ h^{\prime\prime\prime\prime} (s)\ +\ \ldots\ \right\} \end{eqnarray} For convenience, we have omitted the index $E$ on the r.h.s. Deleting in the integrand terms antisymmetric with respect to $p \longrightarrow -p$ and applying the following equivalences (valid under the 4D integral) \begin{eqnarray} \label{DF5} (pk)^2\ &{\displaystyle\hat{=}}&\ {1\over 4}\ k^2\ p^2\\ \label{DF6} (pk)^4\ &{\displaystyle\hat{=}}&\ {1\over 8}\ (k^2)^2\ (p^2)^2 \end{eqnarray} we find after some manipulations \begin{eqnarray} \label{DF7} \Gamma_{II} [e^{-1} k,0,0]\ &=&\ const.\ +\ {V_4\over 8\pi^2}\ m^4\ \left\{\ \int_0^{\Lambda^2\over m^2} ds\ s\ h(s)\ - \right.\nonumber \\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.2cm}-\ {1\over 2}\ {k^2\over m^2}\ \left[ \ s^2\ h^\prime (s)\ \right]_0^{\Lambda^2\over m^2}\ +\ \nonumber \\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.2cm}\left.
+\ {1\over 12}\ {(k^2)^2\over m^4}\ \left[ \ 3\ s^2\ h^{\prime\prime} (s)\ +\ s^3\ h^{\prime\prime\prime} (s)\ \right]_0^{\Lambda^2\over m^2}\ +\ \ldots\ \right\} \end{eqnarray} where $k_\mu$ denotes the constant (Minkowski space) gauge potential.\\ \parindent1.5em From the term proportional to $k^2$ in eq.\ (\ref{DF7}) we see that the requirement of gauge invariance (i.e., vanishing of any mass term) yields that $h(s)$ should behave for $s \longrightarrow \infty $ like \parindent0.em \begin{equation} \label{DF8} h(s)\ \ \stackrel{s \longrightarrow\infty }{\sim } \ \ const.\ +\ O\left(s^\kappa\right) \hspace{0.5cm},\hspace{0.5cm} \kappa\ < \ -1 \ \ . \end{equation} The above condition obviously is also sufficient to make all higher (in powers of $k$) gauge non-invariant structures vanish. By translating the information contained in (\ref{DF8}) one finds the following conditions to obey it \footnote{\label{foot1}We disregard here the somewhat weaker condition \begin{eqnarray*} \tilde a(s)\ \ \stackrel{s \longrightarrow \infty}{\sim} \ \ s^{-1/2}\ +\ O\left(s^\kappa\right) \hspace{0.5cm},\hspace{0.5cm}\kappa\ <\ -{3\over 2}\ \ \ \ , \end{eqnarray*} and all other variants requiring some fine tuning between $\tilde a$ and $\tilde b$.}. \begin{eqnarray} \label{DF9} &&\tilde a_I(s)\ \ \stackrel{s \longrightarrow \infty}{\sim} \ \ O\left(s^\kappa\right) \hspace{0.5cm},\hspace{0.5cm}\kappa\ <\ -1\ \ \ \ ,\\ \vspace{0.4cm}\nonumber\\ \label{DF10} &&\tilde b_I(s)\ \ \stackrel{s \longrightarrow \infty}{\sim} \ \ const.\ +\ O\left(s^\kappa\right) \hspace{0.5cm},\hspace{0.5cm}const. \not= 0\ \ ,\ \ \kappa\ <\ -1 \ \ \ .\ \end{eqnarray} From these relations one recognizes that $\tilde a_I$ and $\tilde b_I$ should behave differently for $s \longrightarrow \infty$, i.e., they cannot be identical.
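The angular-average equivalences (\ref{DF5}), (\ref{DF6}) used in passing from (\ref{DF4}) to (\ref{DF7}) follow from $O(4)$ symmetry of the Euclidean integral: on the unit 3-sphere $\langle x_1^2\rangle = 1/4$ and $\langle x_1^4\rangle = 1/8$. As a quick sanity check (our own illustration, not part of the derivation) they can be verified by Monte Carlo sampling of isotropic 4D directions.

```python
# Numerical check of the 4D angular-average equivalences of eqs. (DF5), (DF6):
# under an O(4)-symmetric integral, (p.k)^2 -> k^2 p^2 / 4 and
# (p.k)^4 -> (k^2)^2 (p^2)^2 / 8.  The sample size is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=(200_000, 4))                 # isotropic directions via Gaussian sampling
p /= np.linalg.norm(p, axis=1, keepdims=True)     # unit vectors, |p|^2 = 1
k = np.array([0.0, 0.0, 0.0, 1.0])                # fixed unit vector, k^2 = 1

pk = p @ k
print(np.mean(pk**2))   # close to 1/4
print(np.mean(pk**4))   # close to 1/8
```

The exact values $3/(n(n+2)) = 1/8$ for the fourth moment at $n=4$ confirm the coefficient in (\ref{DF6}).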
This requirement is in line with results for the fermion self-energy calculated in lowest order of standard QED perturbation theory, where $\tilde a$ and $\tilde b$ already differ (see, e.g., \cite{e}).\\ \parindent1.5em Finally, let us come back to the purpose of this subsection. Although with (\ref{DF9}), (\ref{DF10}) we have found certain expectations for the UV behaviour of $a_I$ and $b_I$, this result does not seem to improve our situation. Even worse, it indicates that $a_I$ and $b_I$ cannot adequately be approximated by any local operator Ansatz because it would exhibit an unacceptable UV behaviour. So, from this analysis we conclude that for the moment $a_I$ and $b_I$ should indeed be kept arbitrary, and the hope for simplifying our Ansatz is exclusively placed on the kernel of the gauge field action $\Gamma_I^G$, which we will discuss now.\\ \subsubsection{\label{DS22}Requirements on the Kernel of the Gauge Field Action $\Gamma_I^G$} The requirements on the kernel of the gauge field action to be given below will not be made obvious immediately in this subsection but will be commented on at the appropriate place in the course of the further calculation. Here we simply mention them in order to explain the approximation strategy, and the reader is asked to find their justification only later on.\\ The first requirement (cf.\ subsection 4.3.1, eq.\ (\ref{DI8})) is that we expect the (time integrated) self-energy ($D^{\mu\nu}_I$ is the photon propagator derived from the action $\Gamma^G_I + \Gamma_{gf}$.)
\begin{eqnarray} \label{DG1} {1\over2}\ \int d^4 y\ d^4 y^\prime\ \ \ \bar{J}_\mu (x,x^\prime;y)\ D^{\mu\nu}_I(y - y^\prime)\ \bar{J}_\nu (x,x^\prime;y^\prime)\ \end{eqnarray} \parindent0.em of a charged point particle represented by the current \begin{eqnarray} \label{DG2} \bar{J}_\mu (x,x^\prime;y)\ &=\ e\ {\displaystyle \int_0^1} d\tau\ \dot{z}_\mu\ \delta^{(4)}(z(\tau) - y)&\ \ \ ,\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.5cm} z_\mu (\tau)\ = (x^\prime -x)_\mu\ \tau + x_\mu \ \ \ \ ,\nonumber \end{eqnarray} and propagating over a finite time interval to be finite. This is needed in order to properly define the map $f$. The above requirement yields the condition \begin{eqnarray} \label{DG3} &&\tilde d_I(s)\ \ \stackrel{s \longrightarrow \infty}{\sim} \ \ O\left(s^\kappa\right) \hspace{0.5cm},\hspace{0.5cm}\kappa\ >\ {1\over 2}\ \ \ . \end{eqnarray}\\ \parindent1.5em Taking into account condition (\ref{DG3}) is sufficient for most parts of the explicit calculation we are attempting. However, it turns out that in finally imposing our approximation to the fixed point condition for the map $f$ and then searching for a solution to it we need to require more in order to find one \footnote{More precisely, this concerns the integral equation for the kernel of the fermion action to be studied further below (see subsection 4.3.2.1).}. Specifically, a solution correct in the asymptotic UV region can only be found if the photon propagator $D^{\mu\nu}_I(x)$ is finite in the coincidence limit $x \rightarrow 0$. This entails for the kernel of the gauge field action the stronger requirement \begin{eqnarray} \label{DG4} &&\tilde d_I(s)\ \ \stackrel{s \longrightarrow \infty}{\sim} \ \ O\left(s^\kappa\right) \hspace{0.5cm},\hspace{0.5cm}\kappa\ >\ 1\ \ \ . \end{eqnarray} We see that $d_I$ characterizing the kernel of the gauge field action should behave qualitatively quite differently from $a_I$ and $b_I$ defining the kernel of the fermion action.
Conditions (\ref{DG3}), (\ref{DG4}) give rise to the justified hope that $d_I$ can indeed be modelled by a local operator. Since respecting condition (\ref{DG3}) suffices for most of the further explicit calculation (in particular, for the analysis of the asymptotic IR region), we choose the Ansatz \begin{eqnarray} \label{DG5} d_I(x)\ &=&\ \left[\ 1 + \beta\ {\Box\over m^2}\ \right]\ \delta^{(4)}(x) \ \ \ , \end{eqnarray} \parindent0.em where $\beta$ is an arbitrary real (positive) constant parameterizing the Ansatz \footnote{We have immediately normalized the first term to 1, hereby freezing the arbitrariness against (finite) gauge field renormalizations which the formalism admits.}. The analysis of the asymptotic IR region is largely independent of the further terms that would have to be introduced in (\ref{DG5}) to satisfy (\ref{DG4}); for calculational simplicity they are therefore omitted from the present Ansatz. Of course, the Ansatz introduces an additional (spurious) pole at $p^2 = \beta^{-1} m^2$ in the momentum space photon propagator representation. However, this fact need not worry us: we simply regard eq.\ (\ref{DG5}) as a model representation of an unknown and possibly complicated kernel of the gauge field action, which cannot be expected to be free of unpleasant features in every respect. Also, eq.\ (\ref{DG5}) can be understood as a low energy (i.e., IR) approximation which, however, can safely be extended to arbitrarily high energies without severely misrepresenting the required true UV behaviour. For a discussion of some features and drawbacks of the particular model Ansatz (\ref{DG5}) see \cite{pais},\cite{barc} and references therein.
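As a side check on the spurious pole just mentioned (a numerical illustration of ours, with hypothetical sample values for $m^2$ and $\beta$): in momentum space the Ansatz (\ref{DG5}) corresponds to $\tilde d_I = 1 - \beta p^2/m^2$, so the scalar part of the propagator splits, Pauli-Villars-like, into the original massless pole and the extra pole at $p^2 = \beta^{-1} m^2$, i.e.\ $1/[p^2(1-\beta p^2/m^2)] = 1/p^2 - 1/(p^2 - m^2/\beta)$.

```python
import random

def with_ansatz_kernel(p2, m2, beta):
    # scalar part of the propagator with the model kernel 1 - beta*p^2/m^2
    return 1.0 / (p2 * (1.0 - beta * p2 / m2))

def two_pole_split(p2, m2, beta):
    # massless pole minus an extra (spurious) pole at p^2 = m^2/beta
    return 1.0 / p2 - 1.0 / (p2 - m2 / beta)

random.seed(1)
m2, beta = 1.0, 0.25          # hypothetical sample values
for _ in range(1000):
    p2 = random.uniform(-50.0, 50.0)
    # stay away from both poles (p^2 = 0 and p^2 = m^2/beta = 4)
    if abs(p2) < 1e-3 or abs(p2 - m2 / beta) < 1e-3:
        continue
    lhs = with_ansatz_kernel(p2, m2, beta)
    rhs = two_pole_split(p2, m2, beta)
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(lhs))
print("partial-fraction split verified; extra pole sits at p^2 =", m2 / beta)
```

The decomposition makes explicit that the Ansatz reproduces the massless propagator at low $p^2$ while modifying the UV, at the price of the extra pole discussed above.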
The analysis of the asymptotic UV region will not demand any further explicit knowledge of the photon propagator beyond condition (\ref{DG4}), so that the Ansatz (\ref{DG5}) can be used for most of the further calculation (which focuses on the IR analysis) and need not be supplemented by any specific UV Ansatz.\\ \subsubsection{\label{DS23}\label{SH3}The Approximation Strategy in Principle and in Practice} After the above considerations we are ready to define the approximation strategy to be followed in the explicit calculation. To reduce the calculational complexity we will make use of the map $\tilde f$ (i.e., source terms are given by $\Gamma_I$ and not by $\Gamma_{II}$) instead of the map $f$. In practice, $\tilde f$ will be slightly modified still further, as we will explain in section \ref{SH1} below.\\ \parindent1.5em The local operator Ansatz (\ref{DG5}) for the kernel of the gauge field action admits the following procedure for applying the map $\tilde f$. First, starting from $\Gamma_I$ with eq.\ (\ref{DG5}) inserted, we will perform the functional integration over the gauge potentials. This can be done exactly, independently of the Ansatz (\ref{DG5}). Then we perform the integration over the fermion fields and subsequently impose the fixed point approximation $a_I = a_{II}$, $b_I = b_{II}$. The resulting coupled integral equations then have to be solved, which in practice can be attempted only in a certain approximation. Specifically, we will explicitly solve them in the asymptotic UV region and in the asymptotic IR region respectively.
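The fixed point condition at the heart of this strategy can be illustrated schematically (a purely hypothetical toy of ours, with no connection to the actual integral equations): represent the kernel data by two numbers $(a, b)$, let a simple contracting map play the role of $\tilde f$, and compare iterating the map with imposing the fixed point condition directly.

```python
def toy_map(a, b):
    """Hypothetical stand-in for the map f~: (a_I, b_I) -> (a_II, b_II).

    Chosen as a linear contraction so that a unique fixed point exists.
    """
    return 0.5 + 0.3 * b, 0.2 + 0.4 * a

# Imposing the fixed point condition directly: solve a = 0.5 + 0.3*b,
# b = 0.2 + 0.4*a in closed form.
a_fix = (0.5 + 0.3 * 0.2) / (1.0 - 0.3 * 0.4)
b_fix = 0.2 + 0.4 * a_fix

# Iterating the map from an arbitrary start reaches the same point.
a, b = 10.0, -7.0
for _ in range(200):
    a, b = toy_map(a, b)

assert abs(a - a_fix) < 1e-12 and abs(b - b_fix) < 1e-12
print(f"fixed point: a = {a_fix:.6f}, b = {b_fix:.6f}")
```

In the actual calculation the fixed point condition is imposed directly on the integral equations rather than reached by iteration, but the toy shows the two views agreeing for a contracting map.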
Solutions $a$, $b$ of these integral equations are still parameterized by $\alpha$ ($\alpha = e^2/4\pi$) \footnote{Let us assume that there is a unique solution $a$, $b$ only; this is supported (in practice) by the explicit calculation to be discussed further below.}, while we find that the parameter $\beta$ (of the kernel of the gauge field action) has to be considered as a function of $\alpha$ in order to find any consistent solution at all. The third condition, $d_I = d_{II}$, however, still remains to be imposed.\\ Finally, the fermionic integration has induced a contribution $\Delta\Gamma^G_I$ to the gauge field action as follows \footnote{Gauge non-invariant structures do not occur because, as will be shown, the solutions $a$ and $b$ exhibit a UV behaviour that prevents such structures from arising even in a gauge non-invariant regularization (when the cut-off is removed).}. \begin{eqnarray} \label{DP1} \hspace{-0.5cm}\Delta\Gamma^G_I [A]\ &=&\nonumber\\ \vspace{0.3cm}\nonumber\\ \hspace{-0.7cm} = \ {\alpha\over 4\pi} &\displaystyle{\int} d^4x &A^\mu(x)\ \left[ g_{\mu\nu} \Box\ -\ \partial_\mu \partial_\nu\right]\ \left[\ C_{1a}\ +\ C_{2a}\ {\Box\over m^2}\ +\ \ldots\ \right] \ A^\nu(x) \end{eqnarray} \parindent0.em Here $C_{1a}$, $C_{2a}$ are functionals of the distributions $a$ and $b$. Therefore, they can also be viewed as certain functions of $\alpha$ and of the parameter $\beta(\alpha)$. For the moment let us vary the parameter $\beta$ independently of $\alpha$, although we believe that the necessity to consider the parameter $\beta$ as a function of $\alpha$ in the course of solving the integral equation for the quadratic kernel of the fermion action is not bound to the particular method we will apply.
The condition $d_I = d_{II}$ then reads \begin{eqnarray} \label{DP2} C_{1a}(\alpha,\beta)\ &=&\ 0\ \ \ ,\\ \vspace{-0.1cm}\nonumber\\ \label{DP3} C_{2a}(\alpha,\beta)\ &=&\ 0 \end{eqnarray} and both these equations define an implicit function $\alpha(\beta)$ (or $\beta(\alpha)$), i.e., certain curves in the $\alpha$-$\beta$-plane. The crossing points of these curves correspond to the set of allowed values ($\alpha ,\beta$). So far, the functional $C_{1a}$ has been explicitly calculated (see Appendix A), with considerable effort, in 1-loop approximation only (i.e., taking into account the quadratic kernel of the fermion action in the presence of an arbitrary gauge potential). To determine $C_{2a}$ in 1-loop approximation along the same lines is a conceptually trivial but extremely laborious task left for the future. But if, as mentioned, the parameter $\beta$ has to be viewed as a function of $\alpha$ before imposing $d_I = d_{II}$, then eqs.\ (\ref{DP2}), (\ref{DP3}) cannot be satisfied simultaneously anyway (to expect them to be degenerate does not seem very realistic). Requiring that the fixed point condition be fulfilled at least in the asymptotic IR (long distance, long wavelength) region, we choose eq.\ (\ref{DP2}) as the condition to be respected. So, in principle the equation \begin{eqnarray} \label{DP4} C_{1a}(\alpha,\beta(\alpha))\ =\ 0 \end{eqnarray} allows us to determine the QED coupling constant $\alpha$ within the present approximative approach.
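The logic of determining $\alpha$ from eq.\ (\ref{DP4}) can be mimicked numerically with deliberately invented toy functionals (nothing below is derived from the theory; the functions standing in for $C_{1a}$, $C_{2a}$ are hypothetical): let the analogue of $C_{2a} = 0$ define $\beta(\alpha)$, insert it into the analogue of $C_{1a}$, and locate the root in $\alpha$ by bisection.

```python
def C2a_toy(alpha, beta):
    # hypothetical stand-in; C2a = 0 defines beta(alpha) = 2*alpha - 0.25
    return beta - (2.0 * alpha - 0.25)

def beta_of_alpha(alpha):
    return 2.0 * alpha - 0.25

def C1a_toy(alpha, beta):
    # hypothetical stand-in for the 1-loop functional
    return beta - alpha ** 2

def h(alpha):
    # analogue of eq. (DP4): C1a(alpha, beta(alpha)) = 0
    return C1a_toy(alpha, beta_of_alpha(alpha))

# bisection on [0, 1]; h(0) = -0.25 < 0, h(1) = 0.75 > 0
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h(lo) * h(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

alpha_star = 0.5 * (lo + hi)
assert abs(h(alpha_star)) < 1e-12
assert abs(C2a_toy(alpha_star, beta_of_alpha(alpha_star))) < 1e-12
print(f"toy coupling from the crossing condition: alpha = {alpha_star:.6f}")
```

The actual functionals are of course far more complicated, but the structure of the determination of $\alpha$ is the same: a one-dimensional root search once $\beta(\alpha)$ has been eliminated.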
It is clear that the above method can easily be accommodated to the inclusion of additional terms in the Ansatz (\ref{DG5}).\\ \parindent1.5em So, at the end of this section we are equipped with a plan for the explicit calculations, and we will now proceed along the lines just discussed.\\ \subsection{\label{DS3}Bringing the Approximation Strategy to Work: \hfill\break Explicit Calculation} \subsubsection{\label{DS31}\label{SH1}Performing the Functional Integration} \parindent0.em According to our approximation strategy first we have to calculate the functional integral (cf.\ eqs.\ (\ref{BC3}), (\ref{C11})) \begin{eqnarray} &&\hspace{-0.7cm}{\rm e}^{\displaystyle\ i \Gamma_{II} [A,\Psi,\bar\Psi]}\ \ =\hfill\nonumber\\ \vspace{0.3cm}\nonumber\\ &&=\ C\ \int D\left[a_{\mu}\right]\ D\psi D\bar\psi\ \ {\rm e}^{\displaystyle\ i\Gamma_{I} [a + A,\psi + \Psi,\bar\psi + \bar\Psi]}\ \ \cdot \nonumber\\ \label{DI1} &&\hspace{1.2cm}\cdot\ {\rm e}^{\displaystyle\ i\Gamma_{gf}[a] +\ i \int d^4x \left[ J_{I\mu}(x) a^{\mu} (x) + \bar\eta_I (x) \psi (x) + \bar\psi (x) \eta_I (x)\right]} \end{eqnarray} with \begin{eqnarray} \label{DI2} {\displaystyle{\delta \Gamma_I [A,\Psi,\bar\Psi] \over \delta\ A^{\mu} (x)}}\ &=&\ -\ J_{I\mu}(x) \hspace{0.5cm},\hspace{0.5cm}\\ \vspace{0.2cm}\nonumber\\ \label{DI3} {\displaystyle{\delta \Gamma_I [A,\Psi,\bar\Psi] \over \delta\ \Psi (x)}}\ &=&\ \bar\eta_I (x) \hspace{0.5cm},\hspace{0.5cm}\\ \vspace{0.2cm}\nonumber\\ \label{DI4} {\displaystyle{\delta \Gamma_I [A,\Psi,\bar\Psi] \over \delta\ \bar\Psi (x)}}\ &=&\ -\ \eta_I (x) \end{eqnarray} inserted. In calculating $J_{I\mu}$ we may neglect the term stemming from $\Gamma^F_I$ because in $\Gamma_{II}$ it gives rise to fermion interactions only \footnote{Incidentally, it should be noted that reasoning leading to this fact also makes use of Furry's theorem (i.e., an appropriate generalization of it) which applies to our situation. It excludes a closed fermion loop tadpole contribution.}. 
Furthermore, by using a partial integration we rewrite eq.\ (\ref{D3}) in the following manner (for the definition of $\bar{J}$ see eq.\ (\ref{DG2})) \begin{eqnarray} \label{DI5} \Gamma_I^{F} [A,\Psi,\bar\Psi]\ &=&\ \int d^4x\ d^4x^\prime\ \ \bar\Psi (x) \ \ {\rm e}^{\displaystyle\ i \int d^4y\ \bar{J}_\mu (x,x^\prime;y) \ A^\mu(y)}\ \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{1.cm}\cdot\ \left[ i\not\hspace{-0.07cm}\partial_x\ a_I\left(x-x^\prime\right) \ -\ m\ b_I\left(x-x^\prime\right) \right]\Psi (x^\prime)\ \ \ . \end{eqnarray} This will allow us to represent the result of the gauge field integration, to be treated first, in a very convenient way. To perform the gauge field integration we temporarily expand in eq.\ (\ref{DI1}) the term $\exp\{i \Gamma^F_I\}$ in a power series, \begin{eqnarray} \label{DI6} {\rm e}^{\displaystyle\ i \Gamma^F_I}\ =\ 1\ +\ i\ \Gamma^F_I\ -\ {1\over 2}\ \left( \Gamma^F_I \right)^2\ +\ \ldots\hspace{1.cm} , \end{eqnarray} which is a very natural procedure in view of the Grassmann integration. In this way it turns out that the result of the gauge field integration can be given as an infinite sum of Gaussian integrals. Each term of this sum corresponds to a certain power $n$ of $\Gamma^F_I$ and contains the expression \begin{eqnarray} \label{DI7} \int D\left[a_{\mu}\right]&& {\rm e}^{\displaystyle\ \ i\Gamma^G_{I} [a]\ +\ i\Gamma_{gf}[a]\ +\ i\sum_{k=1}^n\ \int d^4y\ \bar{J}_\mu (x_k,x_k^\prime;y)\ a^\mu (y)} \end{eqnarray} where the arguments $\{x_k,x_k^\prime\}$ refer to the integration variables in the $k$-th copy of $\Gamma^F_I$. After the Gaussian integration has been performed, eq.\ (\ref{DI7}) reads \begin{eqnarray} \label{DI8} C\ \ {\rm e}^{\displaystyle\ -{i\over 2}\sum_{k=1}^n \sum_{l=1}^n\ \int d^4y\ d^4y^\prime\ \ \bar{J}_\mu (x_k,x_k^\prime;y)\ D_I^{\mu\nu}(y - y^\prime)\ \bar{J}_\nu (x_l,x_l^\prime;y^\prime)}\ \ \ .
\end{eqnarray} Terms with $k=l$ are self-energy contributions, while the off-diagonal terms of the double sum in the exponent generate fermion interactions. We see that the requirement (\ref{DG3}) arises naturally in the course of the functional integration. Let us define the following function from the self-energy term: \begin{eqnarray} \label{DI9} g(x - x^\prime)\ =\ {\rm e}^{\displaystyle\ -{i\over 2}\int d^4y\ d^4y^\prime\ \bar{J}_\mu (x,x^\prime;y)\ D_I^{\mu\nu}(y - y^\prime)\ \bar{J}_\nu (x,x^\prime;y^\prime)} \end{eqnarray} $g$ can be calculated explicitly, and for the Ansatz (\ref{DG5}) this is done in Appendix B. Using $g$ we introduce the new functions $a_{Ig}$, $b_{Ig}$ by defining a map ${\bf g}: a \longrightarrow a_g,\ b \longrightarrow b_g$ specified by the prescriptions \begin{eqnarray} \label{DI10a} a^\prime_{Ig}(x)\ &=&\ g(x)\ a^\prime_I(x)\ \ \ ,\\ \vspace{-0.1cm}\nonumber\\ \label{DI10b} b_{Ig}(x)\ &=&\ g(x)\ b_I(x)\ \ \ . \end{eqnarray} Here, the notation $a^\prime (x)\ =\ d/dr\ a(r)$, $r = - m^2 x^2$ is used. The uncertainty in $a_{Ig}$ due to the free integration constant is removed by noting that $g(0) = 1$ (this follows from condition (\ref{DG3})) and consequently requiring the same behaviour for $a_{Ig}(x)$ and $a_I(x)$ at $x \rightarrow 0$.\\ \parindent1.5em Now we may reverse the procedure indicated in eq.\ (\ref{DI6}) and re-exponentiate the terms of the infinite sum under the remaining fermionic integration, which, however, cannot be done in closed form.
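The step from eq.\ (\ref{DI7}) to eq.\ (\ref{DI8}) is the usual completion of the square in a Gaussian integral. Its one-dimensional Euclidean analogue can be checked numerically (our illustration, with sample numbers; $M$ plays the role of the quadratic kernel, i.e., the inverse propagator, and $J$ that of the current): $\int da\ {\rm e}^{-\frac{1}{2} M a^2 + J a} = {\rm e}^{J^2/2M} \int da\ {\rm e}^{-\frac{1}{2} M a^2}$.

```python
import math

def integrate(f, lo=-30.0, hi=30.0, n=200_000):
    """Midpoint-rule quadrature; the integrands below decay fast enough
    that the finite interval is effectively (-inf, inf)."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

M, J = 2.0, 1.5   # sample kernel and source values

with_source = integrate(lambda a: math.exp(-0.5 * M * a * a + J * a))
free        = integrate(lambda a: math.exp(-0.5 * M * a * a))

# completing the square: the ratio must equal exp(J^2 / (2 M)), the
# one-dimensional analogue of the exponent -(i/2) J D J with D = M^{-1}
ratio = with_source / free
assert abs(ratio - math.exp(J * J / (2.0 * M))) < 1e-5
print(f"ratio = {ratio:.8f}, exp(J^2/2M) = {math.exp(J * J / (2.0 * M)):.8f}")
```

In eq.\ (\ref{DI8}) the same shift argument, applied to the gauge field with the currents $\bar J_\mu$ as sources, produces the exponent built from $\bar J D_I \bar J$.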
Proceeding this way we obtain \begin{eqnarray} \label{DI11} &&\hspace{-0.7cm}{\rm e}^{\displaystyle\ i \Gamma_{II} [A,\Psi,\bar\Psi]}\ \ =\hfill\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}=\ C\ \ {\rm e}^{\displaystyle\ i\Gamma^G_{I} [A]}\ \ \int D\psi D\bar\psi\ \ {\rm e}^{\displaystyle\ i \int d^4x \left[ \bar\eta_I (x) \psi (x) + \bar\psi (x) \eta_I (x)\right]}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.9cm}\ \ \ \cdot\ \exp\left\{ i\int d^4x\ d^4x^\prime\ \ \left( \bar\psi (x)\ +\ \bar\Psi (x)\right) \ \ {\rm e}^{\displaystyle\ ie \int^{x^\prime}_x dy_\mu\ A^\mu(y)}\ \ \right. \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \cdot\ \left[ a_{Ig}\left(x-x^\prime\right) \ \left( i\not\hspace{-0.07cm}\partial_{x^\prime} - e \not\hspace{-0.13cm}A(x^\prime) \right)\ -\ m\ b_{Ig}\left(x-x^\prime\right) \right] \left( \psi (x^\prime)\ +\ \Psi (x^\prime)\right)\ -\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.6cm}-\ {1\over 2}\int d^4x\ d^4x^\prime\ d^4z\ d^4z^\prime\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.6cm}\cdot\ \Bigg[\ \left( \bar\psi (x)\ +\ \bar\Psi (x)\right)\ \left[ i\not\hspace{-0.07cm}\partial_x\ a_{Ig}\left(x-x^\prime\right) \ -\ m\ b_{Ig}\left(x-x^\prime\right) \right] \left( \psi (x^\prime)\ +\ \Psi (x^\prime)\right) \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.6cm}\cdot\ \left(\bar\psi (z)\ +\ \bar\Psi (z)\right)\ \left[ i\not\hspace{-0.07cm}\partial_z\ a_{Ig}\left(z-z^\prime\right) \ -\ m\ b_{Ig}\left(z-z^\prime\right) \right] \left( \psi (z^\prime)\ +\ \Psi (z^\prime)\right)\ \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.6cm}\cdot\ \left. \left. 
\left( {\rm e}^{\displaystyle\ -i \int d^4y\ d^4y^\prime\ \ \bar{J}_\mu (x,x^\prime;y)\ D_I^{\mu\nu}(y - y^\prime)\ \bar{J}_\nu (z,z^\prime;y^\prime)}\ -\ 1\ \right)\right]\ +\ \ldots\ \right\} .\ \ \end{eqnarray} \parindent0.em In the last term of eq.\ (\ref{DI11}) we have already put $A_\mu = 0$, because we will consider only 1-loop contributions (i.e., those stemming in eq.\ (\ref{DI11}) from the quadratic kernel of the fermion action in the presence of the arbitrary gauge potential $A_\mu$) to the quadratic kernel of the gauge field action $\Gamma^G_{II}$ \footnote{As long as $\alpha$ is sufficiently small, higher loop contributions will only lead to small quantitative changes.}. In eq.\ (\ref{DI11}) the remaining fermionic integration is now carried out (in the sense of perturbation theory, the series being formally summed up again after integration). In performing the Gaussian integration (i.e., treating the last term (and all further terms) in eq.\ (\ref{DI11}) as a perturbation), for calculational simplicity we neglect those source terms (linear in $\Psi$, $\bar\Psi$; others are not of present interest) that contain $[ g(x) - 1 ]$ factors. For our envisaged study these source terms are irrelevant in the asymptotic UV region (due to $g(0) = 1$), while in the asymptotic IR region they lead to certain changes which, however, are small as long as $\alpha$ is sufficiently small. Without appealing to the eventual range of $\alpha$, we simply understand this neglect as a further modification of the map $\tilde f$ which preserves all important features (in particular, it does not lead to any change in the asymptotic UV region). So we obtain for eq.\ (\ref{DI11}) the following result \footnote{We display only the non-interaction terms of $\Gamma_{II}$, in which we are exclusively interested. Furthermore, on the r.h.s.\ only the term containing one photon propagator is shown.}.
\begin{eqnarray} \label{DI12} &&\hspace{-1.2cm}{\rm e}^{\displaystyle\ i \Gamma_{II} [A,\Psi,\bar\Psi]}\ \ =\hfill\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}=\ C\ \ {\rm e}^{\displaystyle\ i\Gamma^G_{I} [A]\ +\ i \Delta\Gamma^G_I [A]}\ \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.9cm}\ \ \ \cdot\ \exp\left\{\ i\int d^4x\ d^4x^\prime\ \bar\Psi (x)\ \left[ i\not\hspace{-0.07cm}\partial_x\ a_{Ig}\left(x-x^\prime\right) \ -\ m\ b_{Ig}\left(x-x^\prime\right) \right] \Psi (x^\prime) \right. -\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.5cm} -\ \int d^4x\ d^4x^\prime\ d^4z\ d^4z^\prime\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.7cm} \ \ \cdot\ \bar\Psi (x)\ \left[ i\not\hspace{-0.07cm}\partial_x\ a_{Ig}\left(x-x^\prime\right) \ -\ m\ b_{Ig}\left(x-x^\prime\right) \right]\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.7cm} \ \ \cdot\ S_{I(g)} (x^\prime - z)\ \left[ i\not\hspace{-0.07cm}\partial_z\ a_{Ig}\left(z-z^\prime\right) \ -\ m\ b_{Ig}\left(z-z^\prime\right) \right]\ \Psi (z^\prime) \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.7cm} \ \ \cdot\ \left. \int d^4y\ d^4y^\prime\ \ \bar{J}_\mu (x,x^\prime;y)\ D_I^{\mu\nu}(y - y^\prime)\ \bar{J}_\nu (z,z^\prime;y^\prime)\ +\ \ldots\ \ \right\} \end{eqnarray} Here, $\Delta\Gamma^G_I$ is defined by eq.\ (\ref{ZA1}). However, as is clear from eq.\ (\ref{DI11}) for the present purpose in eq.\ (\ref{ZA2}) $a_I$, $b_I$ have to be replaced by $a_{Ig}$, $b_{Ig}$ respectively. Accordingly, the fermion propagator $S_{I(g)} (x)$ used here reads \begin{eqnarray} \label{DI13} S_{I(g)} (x)\ =\ -\ \int {d^4p\over (2\pi)^4}\ {\rm e}^{\ ipx}\ \ {\not\hspace{-0.07cm}p\ \tilde a_{Ig}(p)\ -\ m\ \tilde b_{Ig}(p) \over p^2\ \tilde a_{Ig}^2 (p)\ -\ m^2\ \tilde b_{Ig}^2 (p)\ +\ i\epsilon} \hspace{1.cm} . \end{eqnarray} Eq.\ (\ref{DI12}) provides us with those terms of the image $\Gamma_{II}$ of $\Gamma_I$ we need to know for our further investigation. 
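The 'rationalized' structure of the propagator (\ref{DI13}) rests only on $\not\hspace{-0.07cm}p \not\hspace{-0.07cm}p = p^2$ and can be checked in a low-dimensional toy model (our own illustration with Pauli matrices and sample numbers): since $(\sigma\cdot p)^2 = \vert p\vert^2$, the inverse of $\sigma\cdot p\ \tilde a + m \tilde b$ is $(\sigma\cdot p\ \tilde a - m \tilde b)/(\vert p\vert^2 \tilde a^2 - m^2 \tilde b^2)$.

```python
# Pauli matrices and the 2x2 identity, as nested tuples of complex numbers
SIGMA = (
    ((0, 1), (1, 0)),
    ((0, -1j), (1j, 0)),
    ((1, 0), (0, -1)),
)
I2 = ((1, 0), (0, 1))

def mul(A, B):
    """2x2 matrix product."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def comb(c1, A, c2, B):
    """c1*A + c2*B for 2x2 matrices."""
    return tuple(tuple(c1 * A[i][j] + c2 * B[i][j] for j in range(2))
                 for i in range(2))

p = (0.3, -1.2, 0.7)       # sample 3d momentum -- toy values
a, b, m = 0.8, 1.1, 1.0    # sample kernel values at this momentum

# slash(p) -> sigma . p ;  (sigma . p)^2 = |p|^2 * identity
slash_p = tuple(tuple(sum(p[n] * SIGMA[n][i][j] for n in range(3))
                      for j in range(2)) for i in range(2))

K    = comb(a, slash_p,  m * b, I2)   # kernel  a*slash(p) + m*b
Kbar = comb(a, slash_p, -m * b, I2)   # 'rationalizing' partner
denom = sum(x * x for x in p) * a**2 - (m * b)**2

K_inv = tuple(tuple(Kbar[i][j] / denom for j in range(2)) for i in range(2))
prod = mul(K, K_inv)

for i in range(2):
    for j in range(2):
        assert abs(prod[i][j] - I2[i][j]) < 1e-12
print("rationalized inverse confirmed in the Pauli-matrix toy")
```

The cross terms cancel because the scalar piece commutes with $\sigma\cdot p$; exactly the same algebra produces the denominator $p^2 \tilde a_{Ig}^2 - m^2 \tilde b_{Ig}^2$ in eq.\ (\ref{DI13}).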
So we may now proceed to apply the fixed point condition to the kernel of the fermion action.\\ \subsubsection{\label{DS32}\label{SH4}The Integral Equation for the Kernel of the Fermion Action} Considering $\Gamma_{II} [0,\Psi,\bar\Psi] = \Gamma^F_{II} [0,\Psi,\bar\Psi]$ and writing the quadratic terms as \begin{eqnarray} \label{DK1a} &&\hspace{-2.cm}\Gamma^F_{II} [0,\Psi,\bar\Psi]\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}=\ \int d^4x\ d^4x^\prime\ \ \bar\Psi (x) \ \left[ i\ a_{II}\left(x-x^\prime\right) \ \not\hspace{-0.07cm}\partial_{x^\prime}\ -\ m\ b_{II}\left(x-x^\prime\right) \right]\Psi (x^\prime) \end{eqnarray} eq.\ (\ref{DI12}) provides us with expressions for $a_{II}$, $b_{II}$. Consequently, we may explicitly write down the fixed point condition $a_I = a_{II}$, $b_I = b_{II}$. For convenience, we do it in terms of $a_g$, $b_g$, but any information obtained for these quantities can be translated back into terms of $a$, $b$ by means of relations (\ref{DI10a}), (\ref{DI10b}). The integral equation reads\footnote{Note that the fixed point condition has already been multiplied by $g(x-z^\prime)$.} \begin{eqnarray} \label{DK1} &&\hspace{-1.5cm}\left[ g(x-z^\prime)\ -\ 1 \right] \left[ i\not\hspace{-0.07cm}\partial_x\ a_g\left(x-z^\prime\right) \ -\ m\ b_g\left(x-z^\prime\right) \right]\ = \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}=\ -i\ g(x-z^\prime)\ \left\{\ \ \int d^4x^\prime\ d^4z\ \left[ i\not\hspace{-0.07cm}\partial_x\ a_g\left(x-x^\prime\right) \ -\ m\ b_g\left(x-x^\prime\right) \right]\right. \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.5cm}\cdot\ S_{(g)} (x^\prime - z)\ \left[ i\not\hspace{-0.07cm}\partial_z\ a_g\left(z-z^\prime\right) \ -\ m\ b_g\left(z-z^\prime\right) \right]\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.5cm}\cdot\ \left. \int d^4y\ d^4y^\prime\ \ \bar{J}_\mu (x,x^\prime;y)\ D_I^{\mu\nu}(y - y^\prime)\ \bar{J}_\nu (z,z^\prime;y^\prime)\ +\ \ldots\ \ \right\}\ \ \ .
\end{eqnarray} Eq.\ (\ref{DK1}) represents two coupled integral equations for $a_g$, $b_g$ and now needs to be solved. In general, this is a complicated task and we will restrict ourselves to the solution of eq.\ (\ref{DK1}) in the asymptotic UV (i.e., $-m^2 (x-z^\prime)^2 \rightarrow 0$) and IR (i.e., $-m^2 (x-z^\prime)^2 \rightarrow \infty $) regions respectively \footnote{We always have the Euclidean region in mind, of course.}. Before studying these cases let us mention that eq.\ (\ref{DK1}) has an exact but trivial solution, namely \begin{eqnarray} \label{DK2} a_g(x)\ =\ a(x)\ \equiv & \ 0\ &\ \ \ \ , \\ \vspace{0.3cm}\nonumber\\ \label{DK3} b_g(x)\ =\ b(x)\ =&\ \tilde b(\infty)\ \delta^{(4)}(x)&\ \ \ \ , \end{eqnarray} where $\tilde b(\infty)$ is some arbitrary real constant. Of course, this solution corresponds to the non-interacting case where the gauge and fermion sectors are decoupled, and it is therefore not very interesting. In the following we will instead seek the interacting solution of eq.\ (\ref{DK1}) as the sum of the trivial solution (\ref{DK2}), (\ref{DK3}) and some additional nontrivial contribution. As already mentioned, finding a nontrivial exact solution of eq.\ (\ref{DK1}) seems rather complicated, but it appears possible to analyze it almost exactly in the asymptotic UV region and, for small $\alpha$, to leading order in the IR region, based solely on those terms explicitly displayed in eq.\ (\ref{DK1}). First we will turn to the asymptotic UV region.\\ \paragraph{\label{DS321}Solving the Integral Equation in the Asymptotic UV Region} \hspace{4.cm}\\ Experimenting with eq.\ (\ref{DK1}) one soon recognizes that, to find a solution correct in the asymptotic UV region, one needs to assume that the photon propagator $D_I^{\mu\nu}(x)$ is finite in the coincidence limit $x \rightarrow 0$.
Consequently, the photon propagator written as \footnote{An appropriate gauge fixing term $\Gamma_{gf}$ has been added to the gauge field action $\Gamma^G_I$, i.e., it has been chosen $\tilde n_\mu(p) = i p_\mu\ \tilde d(p)^{1/2}$.} \begin{eqnarray} \label{DV1} D_I^{\mu\nu}(x)\ =\ -\ \int {d^4p\over (2\pi)^4}\ {{\rm e}^{\ ipx}\over p^2\ +\ i\epsilon}\ {1\over \tilde d(p)}\ \left[\ g^{\mu\nu}\ -\ (1-\lambda)\ {p^\mu p^\nu\over p^2\ +\ i\epsilon}\ \right] \end{eqnarray} reads in the coincidence limit \begin{eqnarray} \label{DV2} D_I^{\mu\nu}(0)\ &=&\ i\ g^{\mu\nu}\ {3+\lambda\over 4}\ K_A\ m^2\ \ \ \ ,\\ \vspace{0.3cm}\nonumber\\ &&\hspace{1.7cm}K_A\ =\ {1\over 4\pi^2}\ \int_0^\infty\ ds\ {1\over \tilde d(s)}\ \ \ \ ,\nonumber \end{eqnarray} where $K_A$ is some finite, real constant.\\ \parindent1.5em The analysis of the integral equation (\ref{DK1}) in the asymptotic UV region now starts by replacing the photon propagator (\ref{DV1}) by its leading short distance term (\ref{DV2}). Consequently, the current-current interaction then reads in the short distance limit \begin{eqnarray} \label{DV3} &&\hspace{-2.5cm}\int d^4y\ d^4y^\prime\ \ \bar{J}_\mu (x,x^\prime;y)\ D_I^{\mu\nu}(y - y^\prime)\ \bar{J}_\nu (z,z^\prime;y^\prime)\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ =\ i\ \alpha\pi\ (3+\lambda)\ K_A\ m^2\ (x-x^\prime)(z-z^\prime)\ +\ \ldots\hspace{1.cm}, \end{eqnarray} \parindent0.em and the function $g$ has the short distance behaviour \begin{eqnarray} \label{DV4} g(x)\ =\ 1\ +\ {\alpha\pi\over 2}\ (3+\lambda)\ K_A\ m^2 x^2\ +\ \ldots\hspace{1.cm}. 
\end{eqnarray} The leading short distance terms (\ref{DV3}), (\ref{DV4}) have to be inserted into the integral equation (\ref{DK1}), yielding \begin{eqnarray} \label{DV5} &&\hspace{-1.5cm}{1\over 2}\ (x-z^\prime)^2\ \left[ i\not\hspace{-0.07cm}\partial_x\ a_g\left(x-z^\prime\right) \ -\ m\ b_g\left(x-z^\prime\right) \right]\ = \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}=\ \int d^4x^\prime\ d^4z\ \left[ i\not\hspace{-0.07cm}\partial_x\ a_g\left(x-x^\prime\right) \ -\ m\ b_g\left(x-x^\prime\right) \right]\ S_{(g)} (x^\prime - z)\ \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.5cm}\cdot\ \left[ i\not\hspace{-0.07cm}\partial_z\ a_{g}\left(z-z^\prime\right) \ -\ m\ b_{g}\left(z-z^\prime\right) \right]\ (x-x^\prime)(z-z^\prime)\ \ +\ \ldots\ \ \ \ \ . \end{eqnarray} Here, certain constants have been divided out. It is convenient to consider the above integral equation further in momentum space. For this purpose we translate the occurring coordinate difference factors (i.e., $(x-z^\prime)^2$, $(x-x^\prime)(z-z^\prime)$) into momentum space derivatives. With this in mind one may convince oneself that, to leading order, terms (indicated by dots $\ldots$) containing more than just one photon propagator, not all of which are coupled to a closed fermion loop, do not contribute, because they correspond to a higher number of derivatives in momentum space (and such terms fall off faster in the UV (i.e., high momentum) region). Effectively, the only diagrams contributing in addition to the terms shown in eq.\ (\ref{DV5}) are those where all photon propagators are coupled to closed fermion loops. However, these closed fermion loops can always be summed up to give an effective (modified) photon propagator. As long as its coincidence limit remains finite, eq.\ (\ref{DV5}) stays in effect. So the UV analysis can be done almost exactly.
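The summation of closed fermion loops into an effective photon propagator invoked above is the usual geometric (Dyson) series, $D_{\rm eff} = D + D\Pi D + D\Pi D\Pi D + \ldots = D/(1-\Pi D)$; a scalar numerical sketch with toy numbers (not the actual loop function) confirms the resummation:

```python
D, Pi = 0.7, 0.9          # toy propagator and loop-insertion values, |Pi*D| < 1

# partial sums of the insertion chain D + D*Pi*D + D*Pi*D*Pi*D + ...
term, total = D, 0.0
for _ in range(200):
    total += term
    term *= Pi * D

D_eff = D / (1.0 - Pi * D)   # closed-form resummation
assert abs(total - D_eff) < 1e-12
print(f"resummed effective propagator: {D_eff:.6f}")
```

For the argument in the text only one property of $D_{\rm eff}$ matters: whether its coincidence limit remains finite.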
In addition, already from eq.\ (\ref{DV5}) we recognize that the leading UV term of the solution we are searching for is independent of the coupling constant $\alpha$ as well as of the structure of the gauge field action (beyond condition (\ref{DG4})) determining the constant $K_A$, which gives the UV behaviour a kind of universal character.\\ \parindent1.5em Eq.\ (\ref{DV5}) now reads in momentum space (the subscript $_g$ is omitted for the moment) \begin{eqnarray} \label{DV6} &&\hspace{-0.6cm}\not\hspace{-0.07cm}p\ \left[\ s\ \tilde a^{\prime\prime}\ +\ 3\ \tilde a^\prime\ \right]\ +\ m\ \left[\ s\ \tilde b^{\prime\prime}\ +\ 2\ \tilde b^\prime\ \right]\ = \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.3cm}=\ {2\over s \tilde a^2 + \tilde b^2}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \cdot\left\{\ \not\hspace{-0.07cm}p\ \left[\ s^2\ \tilde a \left( \tilde a^\prime\right)^2\ +\ s\ \tilde a^2 \tilde a^\prime\ +\ 2\ s\ \tilde a^\prime \tilde b \tilde b^\prime\ -\ s\ \tilde a \left( \tilde b^\prime\right)^2\ -\ {1\over 2}\ \tilde a^3\ +\ \tilde a \tilde b \tilde b^\prime\ \right]\right.\ +\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ +\ m\ \left.\left[\ 2\ s^2\ \tilde a \tilde a^\prime \tilde b^\prime\ -\ s^2\ \left( \tilde a^\prime\right)^2 \tilde b\ +\ s\ \tilde a^2 \tilde b^\prime\ -\ s\ \tilde a \tilde a^\prime \tilde b\ +\ s\ \tilde b \left( \tilde b^\prime\right)^2\ -\ \tilde a^2 \tilde b\ \right]\ \right\}\ +\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ +\ \ldots\ \ \end{eqnarray} \parindent0.em Here, the notation is $\tilde a = \tilde a (s)$, $\tilde a^\prime = d/ds\ \tilde a$, $s = -p^2/m^2$. We will now solve the two coupled differential equations represented by eq.\ (\ref{DV6}) in the asymptotic UV region $s\rightarrow\infty$.
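As a numerical cross-check (ours) on the asymptotic analysis that follows, one can verify directly that the power-law behaviour obtained below, $\tilde a_s = C_{\tilde a}/s^2$, $\tilde b_s = -C^2_{\tilde a}/s^3$, balances the leading-order equations: the combination $\frac{1}{2} s \tilde a_s^{\prime\prime} + \frac{3}{2} \tilde a_s^\prime$ cancels identically, while $\frac{1}{2} s \tilde b_s^{\prime\prime} + \tilde b_s^\prime$ reproduces $-s^2 (\tilde a_s^\prime)^2 - s \tilde a_s \tilde a_s^\prime - \tilde a_s^2$, and the displayed right-hand side terms of the $\tilde a$-equation sum to $-\frac{15}{2}\, C^3_{\tilde a}/s^6$, the source of the next-to-leading correction.

```python
C = 2.0                       # arbitrary value for the constant C_a
s = 50.0                      # sample point deep in the UV region

a   = C / s**2                # leading power-law a_s
ap  = -2.0 * C / s**3         # a_s'
app =  6.0 * C / s**4         # a_s''
b   = -C**2 / s**3            # leading power-law b_s
bp  =  3.0 * C**2 / s**4      # b_s'
bpp = -12.0 * C**2 / s**5     # b_s''

# l.h.s. of the a-equation cancels identically for a_s ~ 1/s^2
lhs_a = 0.5 * s * app + 1.5 * ap
assert abs(lhs_a) < 1e-12 * abs(ap)

# b-equation: (1/2) s b'' + b' = -s^2 (a')^2 - s a a' - a^2
lhs_b = 0.5 * s * bpp + bp
rhs_b = -s**2 * ap**2 - s * a * ap - a**2
assert abs(lhs_b - rhs_b) < 1e-12 * abs(rhs_b)

# the displayed r.h.s. terms of the a-equation sum to -(15/2) C^3 / s^6
rhs_a = (s**2 * a * ap**2 + s * a**2 * ap - 0.5 * a**3
         + (2.0 * s * ap + a) * bp)
assert abs(rhs_a + 7.5 * C**3 / s**6) < 1e-12 * abs(rhs_a)
print("power-law UV asymptotics consistent")
```

The check also makes visible why the next-to-leading term of $\tilde a_s$ enters only at relative order $s^{-3}$: the leading l.h.s.\ cancellation leaves the $O(s^{-6})$ right-hand side as the first non-vanishing source.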
Our Ansatz in accordance with conditions (\ref{DF9}), (\ref{DF10}) will be $\tilde a = \tilde a_s$, $\tilde b = \tilde b(\infty) + \tilde b_s$, where $\tilde a_s$, $\tilde b_s$ are assumed to vanish power-like in leading order for $s\rightarrow\infty$. Neglecting all clearly nonleading terms the two coupled differential equations yielded by eq.\ (\ref{DV6}) then read \footnote{Note, that also a temporary transition $\tilde a_s\rightarrow \tilde b(\infty)\ \tilde a_s$, $\tilde b_s\rightarrow \tilde b(\infty)\ \tilde b_s$ has been applied and then the factor $2\ \tilde b(\infty)$ has been divided out of the equations below.} \begin{eqnarray} \label{DV7} &&\hspace{-1.5cm} {1\over 2}\ s\ \tilde a^{\prime\prime}_s\ +\ {3\over 2}\ \tilde a^\prime_s\ \ \ \stackrel{s \longrightarrow \infty}{=}\nonumber\\ \vspace{0.2cm}\nonumber\\ &&=\ s^2\ \tilde a_s \left(\tilde a^\prime_s\right)^2\ +\ s\ \tilde a^2_s \tilde a^\prime_s\ -{1\over 2}\ \tilde a^3_s\ +\ \left[\ 2\ s\ \tilde a^\prime_s\ +\ \tilde a_s\ \right]\ \tilde b^\prime_s\ +\ \ldots\ \ \ \ ,\\ \vspace{0.6cm}\nonumber\\ \label{DV8} &&{1\over 2}\ s\ \tilde b^{\prime\prime}_s\ +\ \tilde b^\prime_s\ \ \ \stackrel{s \longrightarrow \infty}{=}\ \ \ -\ s^2\ \left(\tilde a^\prime_s\right)^2\ -\ s\ \tilde a_s \tilde a^\prime_s\ -\ \tilde a^2_s\ +\ \ldots\ \ \ \ . \end{eqnarray} Let us first discuss eq.\ (\ref{DV7}) and its consequences on the asymptotic UV behaviour of $\tilde b_s$. As long as the term on the l.h.s.\ of eq.\ (\ref{DV7}) does not vanish to leading order we are forced to conclude that $\tilde b^\prime_s \stackrel{s \longrightarrow \infty}{\sim}\ 1/s $, i.e., $\tilde b_s \stackrel{s \longrightarrow \infty}{\sim}\ \ln s$ \footnote{Of course, one could also try the assumption that the term in front of $\tilde b^\prime_s$ vanishes (i.e., $\tilde a_s \stackrel{s \longrightarrow \infty}{\sim}\ s^{-1/2}$), however eq.\ (\ref{DV8}) immediately leads to the same result then.}. 
But, such a behaviour is in conflict with gauge invariance because it is not in line with condition (\ref{DF10}). So, we are forced to conclude that the l.h.s.\ of eq.\ (\ref{DV7}) should vanish to leading order, consequently it must hold ($C_{\tilde a}$ is some constant) \begin{eqnarray} \label{DV9} \tilde a_s\ &\stackrel{s \longrightarrow \infty}{=} &\ {C_{\tilde a}\over s^2}\ +\ \ldots \ \ \ . \end{eqnarray} This information is sufficient to determine the leading behaviour of $\tilde b_s$ from eq.\ (\ref{DV8}), and we find \begin{eqnarray} \label{DV10} \tilde b_s\ &\stackrel{s \longrightarrow \infty}{=} &\ -\ {C^2_{\tilde a}\over s^3}\ +\ \ldots\ \ \ . \end{eqnarray} We may now come back to eq.\ (\ref{DV7}) and determine the next-to-leading term of $\tilde a_s$. Writing $\tilde a_s$ without any loss of generality as \begin{eqnarray} \label{DV11} \tilde a_s &=&{C_{\tilde a}\over s^2}\ \ \tilde v(s) \hspace{2.cm},\\ \vspace{0.3cm}\nonumber\\ &&\hspace{4.5cm}\tilde v(\infty)\ =\ 1\ \ \ \ ,\nonumber \end{eqnarray} and taking into account (\ref{DV9}), (\ref{DV10}) eq.\ (\ref{DV7}) then reads \footnote{To be more precise, the vanishing of the leading term on the l.h.s.\ of eq.\ (\ref{DV12}) (eq.\ (\ref{DV11}) inserted) rests on the relation ($\hat{p} = (-p_0,{\bf p})$) \begin{eqnarray} _p\Box\ {\not\hspace{-0.07cm}p\over\left[ p^2\right]^2}&=& i\ 2\pi^2\ \not\hspace{-0.07cm}\partial_{\hat{p}}\ \delta^{(4)}(p)\ \ \ ,\nonumber \end{eqnarray} \parindent0.em accompanied by certain reasonable assumptions about $\tilde a_s(s\rightarrow 0)$ (i.e., $\tilde v \sim s^2, s\rightarrow 0$; or even some weaker condition).} \begin{eqnarray} \label{DV12} {C_{\tilde a}\over 2}\ {1\over s^2}\ \left[\ s\ \tilde v^{\prime\prime}\ -\ \tilde v^\prime\ \right]\ \ &\stackrel{s \longrightarrow \infty}{=} &\ -\ {15\over 2}\ {C^3_{\tilde a}\over s^6}\ +\ \ldots\ \ \ \ . 
\end{eqnarray} And we find \begin{eqnarray} \label{DV13} \tilde v(s)\ &\stackrel{s \longrightarrow \infty}{=} &\ 1\ -\ {C^2_{\tilde a}\over s^3}\ +\ \ldots\ \ \ \ . \end{eqnarray} \parindent1.5em Summarizing the above results, one can say that eq.\ (\ref{DK1}) admits a (unique) solution respecting conditions (\ref{DF9}), (\ref{DF10}). It behaves in the asymptotic UV region as follows: \begin{eqnarray} \label{DV14} \tilde a_g(s)\ &\stackrel{s \longrightarrow \infty}{=} &\ {C_{\tilde a}\over s^2}\ \ \tilde b(\infty)\ \left[\ 1\ -\ {C^2_{\tilde a}\over s^3}\ +\ \ldots\ \right]\\ \vspace{0.3cm}\nonumber\\ \label{DV15} \tilde b_g(s)\ &\stackrel{s \longrightarrow \infty}{=} &\ \tilde b(\infty)\ \left[\ 1\ -\ {C^2_{\tilde a}\over s^3}\ +\ \ldots\ \right] \end{eqnarray} \parindent0.em Most importantly, in qualitative respects this asymptotic UV behaviour is independent of the coupling constant $\alpha$ and of any specific details of the photon propagator structure beyond condition (\ref{DG4}). Furthermore, due to $g(0) = 1\ $ (cf.\ eq.\ (\ref{DI9})) $\tilde a$, $\tilde b$ exhibit the same leading UV behaviour as $\tilde a_g$, $\tilde b_g$. We will discuss consequences of the above results further below (see subsection 4.3.3 and chapter 5). In the next subsection we will study eq.\ (\ref{DK1}) in the asymptotic IR region.\\ \paragraph{\label{DS322}Solving the Integral Equation in the Asymptotic IR Region} \hspace{3.cm}\\ For the IR analysis of the integral equation (\ref{DK1}) we need to apply our Ansatz (\ref{DG5}) to the photon propagator \footnote{To obtain this propagator a gauge fixing term $\Gamma_{gf}$ with $\tilde n_\mu = i p_\mu\ \tilde d(p)^{1/2}$ has been added to the gauge field action $\Gamma^G_I$.}.
Consequently, the current-current interaction reads in the long distance limit to leading order \footnote{Of course, this leading term is not specifically related to the Ansatz (\ref{DG5}); only next-to-leading terms are influenced by it.} \begin{eqnarray} \label{DR1} &&\hspace{-1.5cm}\int d^4y\ d^4y^\prime\ \ \bar{J}_\mu (x,x^\prime;y)\ D_I^{\mu\nu}(y - y^\prime)\ \bar{J}_\nu (z,z^\prime;y^\prime)\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&=\ i\ {\alpha\over\pi}\ \left\{\ {(1+\lambda)\over 2}\ {(x-x^\prime)(z-z^\prime)\over (x-z^\prime)^2}\ +\right.\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{1.cm}+\ \left. (1-\lambda)\ {(x-x^\prime)(x-z^\prime)\ (x-z^\prime)(z-z^\prime)\ \over \left[\ (x-z^\prime)^2\ \right]^2}\ \right\}\ +\ \ldots\hspace{1.cm}. \end{eqnarray} Here, $(x-x^\prime)^2$, $(z-z^\prime)^2$ are understood to be small compared with $(x-z^\prime)^2$ \footnote{We always have in mind the region $-m^2(x-z^\prime)^2 \rightarrow\infty $. More precisely, for any large but fixed value of $(x-z^\prime)^2$ contributions from integration regions in the integral equation (\ref{DK1}) where $(x-x^\prime)^2$, $(z-z^\prime)^2$ are not small compared to $(x-z^\prime)^2$ can be expected to be small due to the expected decay of $a_g$, $b_g$ there. Furthermore, terms containing higher powers of $1/(x-z^\prime)^2$ are suppressed in the asymptotic IR region whatever their numerical coefficients might be.}. The function $g$ has the long distance behaviour (we give it here directly for Euclidean space; for the full expression and its derivation see Appendix B)
\begin{eqnarray} \label{DR2} g(x_E)&=&C_g\ \left(m^2 x_E^2\right)^{\displaystyle\ \alpha (3-\lambda)/4\pi}\ \ {\rm e}^{\displaystyle\ -\ {\alpha\over 2 \sqrt{\beta}} \ m \vert x_E \vert}\ \ \Big[\ 1\ + \ \ldots\ \Big]\ \ \ \ ,\ \ \ \\ \vspace{0.3cm}\nonumber\\ C_g&=&\left( 4\beta\right)^{\displaystyle\ - \alpha (3-\lambda)/4\pi}\ \ \exp\left\{\ {\alpha\over 4\pi}\left[\ (3+\lambda)\ +\ 2\ (3-\lambda)\ \gamma\ \right]\ \right\}\ \ .\nonumber \end{eqnarray} Please note that eq.\ (\ref{DR2}) contains the Bloch-Nordsieck contribution (cf.\ \cite{h},\cite{i} and references therein) exhibiting a power-like behaviour with the well-known exponent $\alpha (3-\lambda)/ 4\pi$. It appears justified to assume that the leading IR behaviour displayed in eqs.\ (\ref{DR1}), (\ref{DR2}) depends only very weakly on the additional terms that would have to be introduced into the Ansatz (\ref{DG5}) in order to also satisfy condition (\ref{DG4}). For the purpose of calculational simplicity those terms can therefore be safely disregarded.\\ \parindent1.5em We may now insert eqs.\ (\ref{DR1}), (\ref{DR2}) into the integral equation (\ref{DK1}). Having in mind the IR analysis in Euclidean space, on the l.h.s.\ of eq.\ (\ref{DK1}) we replace the factor $[ 1 - g(x-z^\prime) ]$ simply by $1$ because this is the leading contribution due to the exponential decay (i.e., oscillation in Minkowski space) of $g(x_E)$ for $m^2 x^2_E\rightarrow\infty$. Furthermore, coordinate difference factors (i.e., $(x-x^\prime)_\mu$, $(z-z^\prime)_\nu$) occurring on the r.h.s.\ of eq.\ (\ref{DR1}) are translated into momentum space derivatives acting on the Fourier transform of the kernel $S^{-1}$ of the fermion action.
So, eq.\ (\ref{DK1}) now reads \begin{eqnarray} \label{DR3} &&\hspace{-1.5cm}\left[ i\not\hspace{-0.07cm}\partial_x\ a_g\left(x-z^\prime\right) \ -\ m\ b_g\left(x-z^\prime\right) \right]\ = \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}=\ {\alpha\over\pi}\ \ g(x-z^\prime)\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\cdot\ \left[\ {(1+\lambda)\over 2}\ {g^{\mu\nu}\over (x-z^\prime)^2}\ +\ (1-\lambda)\ {(x-z^\prime)^\mu\ (x-z^\prime)^\nu \ \over \left[\ (x-z^\prime)^2\ \right]^2}\ \right]\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\cdot\ \int {d^4p\over (2\pi)^4}\ \ {\rm e}^{\ ip(x-z^\prime)}\ \left[\ _p\partial_\mu\ \left(\ \not\hspace{-0.07cm}p\ \tilde a_g(p)\ +\ m\ \tilde b_g(p)\ \right)\ \right]\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\cdot\ \ {\not\hspace{-0.07cm}p\ \tilde a_g(p)\ -\ m\ \tilde b_g(p) \over p^2\ \tilde a_g^2 (p)\ -\ m^2\ \tilde b_g^2 (p)\ +\ i\epsilon}\ \left[\ _p\partial_\nu\ \left(\ \not\hspace{-0.07cm}p\ \tilde a_g(p)\ +\ m\ \tilde b_g(p)\ \right)\ \right]\ +\nonumber\\ \vspace{0.3cm}\nonumber\\ &&+\ \ldots\hspace{8.cm}. \end{eqnarray} \parindent0.em Concerning the contribution of terms containing more than just one photon propagator (indicated by the dots $\ldots $), the following comments are in order. Most of those terms will finally yield at least higher powers of $1/(x-z^\prime)^2$ and can therefore be neglected in the asymptotic IR region. One should, however, also expect terms which are of the same order as the 1-loop term given above. Such terms can nevertheless be expected to contribute only weakly numerically as long as $\alpha$ is sufficiently small, because each additional photon propagator is accompanied by an additional factor of $\alpha$. Within the present approximative approach, this argument is what remains of the line of reasoning applied in standard QED perturbation theory.
Of course, the belief based on this reasoning may turn out to be wrong due to nonperturbative mechanisms which are not easily seen at the present stage of the investigation. In any case, in the region where $\alpha$ is of order 1, terms containing more than just one photon propagator can in principle no longer be neglected. But for the purpose of the present model calculation (without appealing to the eventual range of $\alpha$) we simply ignore all terms containing more than just one photon propagator also in the region where $\alpha$ is not small.\\ \parindent1.5em To determine the IR tail of $a_g$, $b_g$ (i.e., the l.h.s.\ of eq.\ (\ref{DR3})) it remains to find the leading long distance contribution of the Fourier integral on the r.h.s.\ of eq.\ (\ref{DR3}). To proceed further we would preferably need to know the analytic structure of the integrand, in particular that of the denominator. We do not have any reliable information on this, but, as appears reasonable, we will assume that the integrand has a simple pole at some $p_0 = \pm \sqrt{{\bf p}^2 - s_0\ m^2}$ with \begin{eqnarray} \label{DR4} s_0 &=& -\ {\tilde b^2_g(s_0)\over\tilde a^2_g(s_0)}\ \ \ \ \ \ ,\ \ (\ s_0\ <\ 0\ )\ \ \ \ , \end{eqnarray} \parindent0.em and that just this pole determines the leading long distance behaviour of the Fourier integral.
Consequently, we may exploit the residue of this pole and the leading long distance contribution of the Fourier integral is simply given by the product of the numerator of its integrand (appropriately treated by considering $p_\kappa$ factors occurring as configuration space derivatives acting on eq.\ (\ref{DR5})) taken at $p^2/m^2 = - s_0$ and the leading long distance term of \begin{eqnarray} \label{DR5} {1\over \tilde a_g^2 (s_0)}\ \int {d^4p\over (2\pi)^4}\ {{\rm e}^{\ ip(x-z^\prime)} \over p^2\ +\ s_0\ m^2\ +\ i\epsilon}\hspace{2.cm}.\ \end{eqnarray} \parindent1.5em The explicit calculation is now straightforward but somewhat tedious and we will comment on a few points only. As one intermediate step one derives the following useful relation ($\tilde a^\prime = d/ds\ \tilde a(s)$, $s = - p^2/m^2$). \begin{eqnarray} \label{DR6} &&\hspace{-1.6cm}\left[\ _p\partial_\mu\ \left(\ \not\hspace{-0.07cm}p\ \tilde a(p)\ +\ m\ \tilde b(p)\ \right)\ \right]\ \left[\ \not\hspace{-0.07cm}p\ \tilde a(p)\ -\ m\ \tilde b(p)\ \right]\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.4cm}\cdot\ \left[\ _p\partial_\nu\ \left(\ \not\hspace{-0.07cm}p\ \tilde a(p)\ +\ m\ \tilde b(p)\ \right)\ \right]\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.8cm}=\ \left[\ \not\hspace{-0.07cm}p\ \tilde a\ -\ m\ \tilde b\ \right]\ \left[\ \gamma_\mu \gamma_\nu\ \tilde a^2\ +\ 4\ {p_\mu p_\nu\over m^2}\ {p^2\over m^2}\ \left(\tilde a^\prime\right)^2\ +\ 4\ {p_\mu p_\nu\over m^2}\ \left(\tilde b^\prime\right)^2\ \right. -\nonumber\\ \vspace{0.3cm}\nonumber\\ && -\ {2\over m^2}\ \left(\gamma_\mu p_\nu\ +\ \gamma_\nu p_\mu\right)\ \not\hspace{-0.07cm}p\ \tilde a \tilde a^\prime\ -\ {2\over m^2}\ \left(\gamma_\mu p_\nu\ +\ \gamma_\nu p_\mu\right)\ \tilde a \tilde b^\prime\ + \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\left.
+\ 8\ \not\hspace{-0.07cm}p\ {p_\mu p_\nu\over m^3}\ \tilde a^\prime \tilde b^\prime\ \right]\ +\ \left(\gamma_\mu \not\hspace{-0.07cm}p \gamma_\nu\ -\ \not\hspace{-0.07cm}p \gamma_\mu \gamma_\nu\right)\ \tilde a^3\ -\ 2\ \gamma_\mu p_\nu\ {p^2\over m^2}\ \tilde a^2\tilde a^\prime\ +\nonumber\\ \vspace{0.3cm}\nonumber\\ && +\ 2\ \not\hspace{-0.07cm}p \gamma_\mu \not\hspace{-0.07cm}p\ {p_\nu\over m^2}\ \tilde a^2 \tilde a^\prime\ -\ 2\ \left(\gamma_\mu \not\hspace{-0.07cm}p\ -\ \not\hspace{-0.07cm}p \gamma_\mu\right)\ {p_\nu\over m}\ \tilde a^2 \tilde b^\prime\ \end{eqnarray} \parindent0.em In performing the calculation we keep track only of those terms contributing to leading order in the long distance region. In particular, the leading long distance term of eq.\ (\ref{DR5}) is read off from the relation (written for Euclidean space here) \begin{eqnarray} \label{DR7}\hspace{-0.4cm} \int {d^4p_E\over (2\pi)^4}\ {{\rm e}^{\ ip_E x_E} \over p^2_E\ +\ m^2 }\ &=&\ {m\over 4\pi^2\ \vert x_E\vert}\ K_1\left( m\vert x_E\vert\right) \nonumber\\ \vspace{0.8cm}\nonumber\\ &\stackrel{m^2 x^2_E \gg 1}{=}&\ {\sqrt{m}\over 2\ (2\pi)^{3/2}\ \vert x_E\vert^{3/2}}\ \ {\rm e}^{\displaystyle\ -\ m \vert x_E\vert}\ \ \Big[\ 1\ +\ \ldots\ \Big]\ .\ \ \ \end{eqnarray} The result obtained this way for the IR tail of $a_g$, $b_g$ then is (We give this and all further results for Euclidean space.)
\begin{eqnarray} \label{DR8} a_g(x_E)&\stackrel{m^2 x^2_E \rightarrow \infty}{=}& m^4\ {\alpha\ C_g\ G\over (2\pi)^{5/2}}\ \ {(-s_0)^{3/4}\ \tilde a_g(s_0)\over \sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta}}\ \ (m \vert x_E\vert)^{-7/2\ +\ \alpha (3-\lambda)/2\pi} \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \cdot\ {\rm e}^{\displaystyle\ -\ (\ \sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta}\ )\ m \vert x_E\vert}\ \ \Big[\ 1\ +\ \ldots\ \Big]\\ \vspace{0.3cm}\nonumber\\ \label{DR9} b_g(x_E)&\stackrel{m^2 x^2_E \rightarrow \infty}{=}& m^4\ {\alpha\ C_g\ H\over (2\pi)^{5/2}}\ \ (-s_0)^{3/4}\ \tilde a_g(s_0)\ \ (m \vert x_E\vert)^{-7/2\ +\ \alpha (3-\lambda)/2\pi} \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \cdot\ {\rm e}^{\displaystyle\ -\ (\ \sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta}\ )\ m \vert x_E\vert}\ \ \Big[\ 1\ +\ \ldots\ \Big]\\ \vspace{0.3cm}\nonumber\\ \label{DR10} G&=&-{3\over 2}\ (1+\lambda)\ + \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ +\ 2\ (3-\lambda)\ \left[\ s_0\ \ {\tilde a^\prime_g(s_0)\over \tilde a_g(s_0)}\ +\ \sqrt{-s_0}\ \ {\tilde b^\prime_g(s_0)\over \tilde a_g(s_0)}\ +\ {1\over 2}\ \right]^2 \\ \vspace{0.3cm}\nonumber\\ \label{DR11} H&=&-3\ (1+\lambda)\ -\ G \end{eqnarray} It should be mentioned that eq.\ (\ref{DR10}) provides us with an implicit expression for $G$ only, because in view of eqs.\ (\ref{DR9}), (\ref{DR11}) its r.h.s.\ also depends on $G$ via the term $\tilde b^\prime_g(s_0)/\tilde a_g(s_0)$. Therefore, eq.\ (\ref{DR10}) represents a cubic equation for the value of $G$ which always has at least one (real) solution. From eq.\ (\ref{DR10}) one recognizes that $G$ is an RG invariant quantity, i.e., it is invariant against (finite) mass and (fermion) wave function renormalizations (We will discuss the normalization issue further below.).\\ \parindent1.5em Taking into account the definitions (\ref{DI10a}), (\ref{DI10b}) we find from eqs.\ (\ref{DR8}), (\ref{DR9}) the IR tail of $a$, $b$.
\begin{eqnarray} \label{DR12} a(x_E)&\stackrel{m^2 x^2_E \rightarrow \infty}{=}& m^4\ {\alpha\ G\over (2\pi)^{5/2}}\ \ (-s_0)^{1/4}\ \tilde a_g(s_0)\ \ \ (m \vert x_E\vert)^{-7/2}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \cdot\ {\rm e}^{\displaystyle\ -\ \sqrt{-s_0}\ m \vert x_E\vert}\ \ \Big[\ 1\ +\ \ldots\ \Big]\\ \vspace{0.3cm}\nonumber\\ \label{DR13} b(x_E)&\stackrel{m^2 x^2_E \rightarrow \infty}{=}& m^4\ {\alpha\ H\over (2\pi)^{5/2}}\ \ (-s_0)^{3/4}\ \tilde a_g(s_0)\ \ \ (m \vert x_E\vert)^{-7/2}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \cdot\ {\rm e}^{\displaystyle\ -\ \sqrt{-s_0}\ m \vert x_E\vert}\ \ \Big[\ 1\ +\ \ldots\ \Big] \end{eqnarray} \parindent0.em From the above equations we see that the IR tails of $a$, $b$ agree qualitatively (The same is true for $a_g$, $b_g$.).\\ \parindent1.5em After having obtained the functional dependence of the kernel of the fermion action in the asymptotic IR region we still need to fix the arbitrary constants involved (In particular, this will require discussing the normalization issue, not touched upon so far.). For this purpose we have to calculate the Fourier transforms of $a_g$, $b_g$, and those of $a$, $b$, the latter of which are determined by the solution of the integral equation (\ref{DK1}) via eqs.\ (\ref{DI10a}), (\ref{DI10b}). It appears reasonable to represent these Fourier transforms in the low $s$ region \footnote{In the following we will deliberately leave open the precise meaning of this term and we will return to the issue in section \ref{SH2} only.} appropriate for the normalization purposes we are aiming at by the sum of the Fourier transforms of the trivial solution (\ref{DK2}), (\ref{DK3}) and the Fourier transforms $\tilde a_{sg}$, $\tilde b_{sg}$, $\tilde a_s$, $\tilde b_s$ of the IR tails of $a_g$, $b_g$ and $a$, $b$ given in eqs.\ (\ref{DR8}), (\ref{DR9}) and (\ref{DR12}), (\ref{DR13}) respectively.
So, we simply extend the long distance representations (\ref{DR8}), (\ref{DR9}), (\ref{DR12}), (\ref{DR13}) to the whole configuration space and expect that this procedure will give reasonable results in the low $s$ region at least.\\ For the calculation of the Fourier transforms the following formula applies \cite{g}. \begin{eqnarray} \label{DR14} &&\hspace{-1.5cm}\int d^4x_E\ \ {\rm e}^{\ -ip_E x_E}\ \ \left( x^2_E\right)^\kappa\ \ {\rm e}^{\displaystyle\ - \rho \vert x_E\vert}\ \ \ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&=\ -\ {4\pi^2\ \Gamma\left(4+2\kappa\right)\over \vert p_E\vert\ \left(\rho^2+p^2_E\right)^{3/2\ +\ \kappa}}\ \ P^{-1}_{2(1+\kappa)}\left({\rho\over\sqrt{ \rho^2+p^2_E}}\right)\nonumber\\ \vspace{0.5cm}\nonumber\\ &&=\ \ \ {4\pi^2\ \Gamma(3+2\kappa)\ \ \over p^2_E\ \left(\rho^2+p^2_E\right)^{3/2\ +\ \kappa} }\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \ \cdot\ \left[\ \sqrt{ \rho^2+p^2_E}\ P_{1+2\kappa}\left({\rho\over\sqrt{ \rho^2+p^2_E}}\right)\ -\ \rho\ P_{2(1+\kappa)}\left({\rho\over\sqrt{ \rho^2+p^2_E}}\right)\ \right]\\ \vspace{0.5cm}\nonumber\\ &&\hspace{7.cm}Re\ \rho\ >\ 0\ ,\ \ Re\ \kappa\ > -2\nonumber \end{eqnarray} \parindent0.em Having in mind continuation to Minkowski space, please note that (more precisely) the condition $\left\vert Im\ \vert p_E \vert \right\vert\ <\ Re\ \rho$ is to be respected. Although it is less compact, in the following we will always use the lower representation of eq.\ (\ref{DR14}) because we find it more convenient for the eventual transition back to Minkowski space.\\ \parindent1.5em For $\tilde a_g$, given in the low $s$ region as the Fourier transform of eq.\ (\ref{DR8}), we obtain the following result.
\begin{eqnarray} \label{DR15} \hspace{-0.5cm} \tilde a_g(s)&=&{\alpha\ C_g\ G\over \sqrt{2\pi}}\ \ \Gamma\left(-{1\over 2}\ +\ \alpha\ {(3-\lambda)\over 2\pi}\ \right)\ (-s_0)^{3/4}\ \tilde a_g(s_0)\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \cdot\ {1\over s}\ \left[\ (\sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta})^2\ + s\ \right]^{1/4\ -\ \alpha (3-\lambda)/4\pi}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \cdot\ \left[\ \sqrt{\ 1\ +\ {s\over (\sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta})^2}}\ \right.\cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \ \cdot\ P_{-5/2\ +\ \alpha (3-\lambda)/2\pi}\left(\left( 1\ +\ {s\over (\sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta})^2}\right)^{-1/2} \right) \ -\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \ -\ \left. P_{-3/2\ +\ \alpha (3-\lambda)/2\pi}\left(\left( 1\ +\ {s\over (\sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta})^2}\right)^{-1/2} \right)\ \right] \end{eqnarray} \parindent0.em By specifying $s = s_0$ (this corresponds to an analytic continuation to Minkowski space) the above equation leads to a consistency equation (the value of $\tilde a_g(s_0)$ drops out) yielding a first relation among the parameters of the IR solution. It reads \begin{eqnarray} \label{DR16} 1&&=\ -\ \alpha^{1\ -\ \alpha(3-\lambda)/2\pi}\ \ {G\over \sqrt{2\pi}}\ \ \Gamma\left(-{1\over 2}\ +\ \alpha\ {(3-\lambda)\over 2\pi}\ \right)\ \ \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \cdot\ \exp\left\{\ {\alpha\over 4\pi}\left[\ (3+\lambda)\ +\ 2\ (3-\lambda)\ \gamma\ \right]\ \right\}\ w^{-1/2}\ \ (1 + 2 w)^{1/4\ -\ \alpha (3-\lambda)/4\pi}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \cdot\ \left[\ {\sqrt{1 + 2 w}\over 1 + w}\ P_{-5/2\ +\ \alpha (3-\lambda)/2\pi}\left(\ {1 + w\over\sqrt{1 + 2 w} }\ \right) \ - \right.\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{3.5cm} -\ \left.
P_{-3/2\ +\ \alpha (3-\lambda)/2\pi}\left(\ {1 + w\over\sqrt{1 + 2 w} }\ \right)\ \right]\ \ \ \ ,\ \\ \vspace{1.3cm}\nonumber\\ \label{DR17} &&\hspace{7.5cm}w\ =\ {2\over\alpha}\ \sqrt{-s_0\ \beta}\ \ \ \ . \end{eqnarray} Here, $G$ is understood as a function of $w$ and $\alpha$ (and $\lambda$). It is given as the solution of the following cubic equation derived from eq.\ (\ref{DR10}). \begin{eqnarray} \label{DR18} &&\hspace{-1.5cm} G^3\ +\ \left\{\ {3\over 2}\ (1 + \lambda)\ -\ 2\ (3 - \lambda)\ \left[\ \left( 2 + {1\over w}\right)\ L(w,\alpha)\ +\ {1\over 2}\ \right]^2\ \right\}\ G^2\ -\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}-\ 12\ (3 - \lambda)\ (1 + \lambda)\ \left( 1 + {1\over w}\right)\ \left[\ \left( 2 + {1\over w}\right)\ L(w,\alpha)\ +\ {1\over 2}\ \right]\ L(w,\alpha)\ \ G -\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}-\ 18\ (3 - \lambda)\ (1 + \lambda)^2\ \left(1 + {1\over w}\right)^2\ L(w,\alpha)^2\ \ =\ \ 0\\ \vspace{0.5cm}\nonumber\\ &&\hspace{6.5cm} L(w,\alpha)\ =\ s_0\ \ {\tilde a^\prime_g(s_0)\over \tilde a_g(s_0)}\nonumber \end{eqnarray} To obtain this cubic equation we have made use of the relation \begin{eqnarray} \label{DR19} \tilde b_g(s)&=&-\ \sqrt{-s_0}\ \left(\ 1 + {1\over w}\ \right)\ \left[1\ +\ {3\ (1 + \lambda)\over G}\right]\ \tilde a_g(s)\ +\ \tilde b(\infty) \end{eqnarray} based on eqs.\ (\ref{DR8}), (\ref{DR9}) and therefore valid in the low $s$ region only. We see that solutions $G$ of equation (\ref{DR18}) are functions of $w$, $\alpha$ while solutions $w$ of eq.\ (\ref{DR16}) depend exclusively on $\alpha$ (and on $\lambda$, in principle, if for conceptual reasons we were not to set it to zero as outlined in chapter 3). Clearly, they do not depend on $\tilde b(\infty)$. Although numerically the discriminant of eq.\ (\ref{DR18}) always turns out to be negative in the relevant domain, only one of the three real solutions of eq.\ (\ref{DR18}) then proves appropriate for finding a solution of eq.\ (\ref{DR16}).
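Since eq.\ (\ref{DR18}) is an ordinary cubic in $G$, its roots for given $w$, $L(w,\alpha)$, $\lambda$ can be obtained with any standard polynomial solver. The following sketch (in Python; the values of $w$ and $L$ are purely illustrative, since the true $L(w,\alpha)$ is itself part of the IR solution) assembles the coefficients of eq.\ (\ref{DR18}) and checks the computed roots:

```python
import numpy as np

def cubic_coefficients_G(w, L, lam=0.0):
    """Coefficients of the cubic equation (DR18) for G, given w and
    L = L(w, alpha) = s0 * a_g'(s0) / a_g(s0)."""
    A = (2.0 + 1.0 / w) * L + 0.5
    return [1.0,
            1.5 * (1.0 + lam) - 2.0 * (3.0 - lam) * A**2,
            -12.0 * (3.0 - lam) * (1.0 + lam) * (1.0 + 1.0 / w) * A * L,
            -18.0 * (3.0 - lam) * (1.0 + lam)**2 * (1.0 + 1.0 / w)**2 * L**2]

# purely illustrative values for w and L; the physical ones follow from
# solving eqs. (DR16), (DR18) self-consistently
coeffs = cubic_coefficients_G(w=10.0, L=-0.3)
roots = np.roots(coeffs)
residuals = np.abs(np.polyval(coeffs, roots))
real_roots = [r.real for r in roots if abs(r.imag) < 1e-9]
```

A cubic with real coefficients always has at least one real root, in accordance with the remark above that eq.\ (\ref{DR18}) always admits at least one real solution $G$.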
In general, solutions $G$, $w(\alpha)$ of the above equations can only be found numerically (For a plot of numerical results see figs.\ 1, 2.). But for sufficiently small $\alpha$ ($\alpha \ll 1$), $w(\alpha)$ turns out to be large ($w(\alpha) \gg 1$) and eq.\ (\ref{DR16}) admits an analytical solution in this region. This asymptotic solution will be studied now.\\ \parindent1.5em We investigate the case $\alpha \ll 1$ (We assume that the solution $w(\alpha)$ in this region will be much larger than one.). Let us start with the following asymptotic representation \cite{j}. \begin{eqnarray} \label{DR20} &&\hspace{-1.5cm}z^{-1/2\ +\ \kappa}\ \left[\ z^{-1}\ P_{-5/2\ +\ \kappa}(z)\ -\ P_{-3/2\ +\ \kappa}(z)\ \right]\ = \nonumber\\ \vspace{0.3cm}\nonumber\\ &&=\ \left( {1\over 2}\ -\ \kappa \right)\ {\Gamma\left(1\ -\ \kappa\right)\over \Gamma\left({5\over2}\ -\ \kappa\right)}\ {2^{1/2\ -\ \kappa}\over \sqrt{\pi}}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \cdot\ \left[\ 1\ -\ {1\over 4 \kappa\ z^2}\ \left( {1\over 2}\ -\ \kappa \right)\ \left( {3\over 2}\ -\ \kappa \right)\ \ \right. \cdot \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \cdot\ \left.\left(\ {(2z)^{2\kappa}\over 1 - \kappa}\ {\Gamma\left({1\over2}\ -\ \kappa\right)\over \Gamma\left({1\over2}\ +\ \kappa\right)}\ {\Gamma\left(1\ +\ \kappa\right)\over \Gamma\left(1\ -\ \kappa\right)}\ -\ 1\ \right)\ +\ O\left( z^{-2(2\ -\ \kappa)}\right)\ \right]\\ \vspace{0.3cm}\nonumber\\ &&\hspace{7.5cm}\kappa\ >\ 0\ \ ,\ \ \vert z\vert \ \gg\ 1\nonumber\\ \vspace{0.3cm}\nonumber\\ \label{DR21} &&=\ {2\ \sqrt{2}\over 3\pi}\ \left[\ 1\ -\ {3\over 16}\ z^{-2}\ \left[\ 2\ \ln 8z\ +\ 1\ \right]\ +\ O(z^{-4}\ln z)\ \right]\ \ ,\\ \vspace{0.5cm}\nonumber\\ &&\hspace{7.5cm}\kappa\ =\ 0\ \ ,\ \ \vert z\vert \ \gg\ 1\nonumber \end{eqnarray} \parindent0.em Then, from eq.\ (\ref{DR16}) one finds (Here, $\ln w(\alpha)$ is assumed to grow for small $\alpha$ like $\alpha^{-1/2}$ at most.)
\begin{eqnarray} \label{DR22} G&=&{3\pi\over 4\alpha}\ \left\{\ 1\ -\ {\alpha\over 4\pi}\ \left[\ (3+\lambda)\ +\ 2\ (3-\lambda)\ \left({8\over 3}\ -\ \ln\left[{2^5 w(\alpha)\over\alpha}\right]\ \right)\ \right]\right.\ +\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{1.5cm}+\left. {1\over 2}\ \left({\alpha (3 - \lambda)\over 2\pi}\right)^2\ \ln w(\alpha)\ \ln\left[\alpha^4 w(\alpha)\right]\ +\ O(\alpha^{3/2})\ \right\}\ \ . \end{eqnarray} Taking into account (\ref{DR22}), eq.\ (\ref{DR19}) can then be inserted on the r.h.s.\ of eq.\ (\ref{DR10}) and eq.\ (\ref{DR22}) on its l.h.s. The solution of the resulting equation for $w(\alpha)$ is now straightforward. One finds for small $\alpha$ \begin{eqnarray} \label{DR23} \hspace{-2.cm}w(\alpha)&=&{1\over 32}\ \exp\left\{\ {2\over 3}\ \sqrt{{2\pi\over \alpha\ (1-\lambda/3)}}\ +\ 4\ +\ \sqrt{{\alpha\ (1-\lambda/3)\over 2\pi}}\ \ln \alpha\ -\ \right.\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{1.5cm}-\left. {1\over 6}\ \sqrt{{\alpha\ (1-\lambda/3)\over 2\pi}}\ \left[\ {59\over 3}\ +\ {38\lambda\over (3-\lambda)}\ \right]\ +\ O\left(\alpha\right)\ \right\}\ \ ,\\ \vspace{0.5cm}\nonumber\\ &&\hspace{8.5cm}\alpha\ \ll\ 1\ \ \ .\nonumber \end{eqnarray} Note that higher-loop contributions possibly to be taken into account in the integral equation (\ref{DK1}) will influence the above result via the last term in the exponent only. To see this simply replace in the first term in the exponent $\alpha$ by $\alpha [1 + O(\alpha)]$. Finally, using (\ref{DR23}) one finds from eq.\ (\ref{DR22}) the following expression for $G(\alpha)$. \begin{eqnarray} \label{DR24} \hspace{-0.5cm} G(\alpha)&=&{3\pi\over 4\alpha}\ \left\{\ 1\ +\ 2\ \sqrt{{\alpha\ (1 - \lambda/3)\over 2\pi}}\ +\ {\alpha\ (3 - \lambda)\over 2\pi}\ \ln \alpha\ +\right.\nonumber\\ \vspace{0.3cm}\nonumber\\ &&+\left.
\ {\alpha\ (9 - 5 \lambda)\over 4\pi}\ +\ 15\ \left({\alpha\ (1 - \lambda/3)\over 2\pi}\right)^{3/2}\ \ln \alpha\ +\ O\left(\alpha^{3/2}\right)\ \right\}\\ \vspace{0.5cm}\nonumber\\ &&\hspace{8.5cm}\alpha\ \ll\ 1\ \ \ .\nonumber \end{eqnarray} \parindent1.5em The next task is to find the solution $s_0$ of eq.\ (\ref{DR4}). But any solution $s_0$ can sensibly be related to physics only if the mass normalization to be used is specified. So, before attempting to find $s_0$ we now discuss the normalization issue in somewhat greater detail.\\ Let us assume we had determined $s_0$. Then, whatever normalization of $\tilde a_g(s_0)$ is applied, eq.\ (\ref{DR4}) yields the value of $\tilde b_g(s_0)$, and in our specific case the value of $\tilde b(\infty)$ because eq.\ (\ref{DR9}) is not independent of eq.\ (\ref{DR8}). Now, let a certain function $\hat{g}\ =\ \hat{g}(-m^2x^2)$ with $\hat{g}(0) = 1$ define a map $\hat{{\bf g}}: a_g\longrightarrow a_{g\hat{g}},\ b_g\longrightarrow b_{g\hat{g}}$ by applying the prescriptions (\ref{DI10a}), (\ref{DI10b}) to $\hat{g}$. Considering the equation \begin{eqnarray} \label{DR25} s_1 &=& -\ {\tilde b^2_{g\hat{g}}(s_1)\over \tilde a^2_{g\hat{g}}(s_1)}\ \ \ \ \ \ ,\ \ (\ s_1\ <\ 0\ )\ \ \ \ , \end{eqnarray} \parindent0.em the map $\hat{\bf g}$ obviously induces a map $\hat{{\bf g}}_s: s_0 \longrightarrow s_1$. If $\hat{g} \equiv 1$, $\hat{\bf g}$ and $\hat{{\bf g}}_s$ are the identity maps. If we specifically choose $\hat{g} = g^{-1}$, then $\hat{\bf g}$ is the inverse of $\bf g$ and it holds that $a_{g\hat{g}} = a$, $b_{g\hat{g}} = b$ (cf.\ eqs.\ (\ref{DI10a}), (\ref{DI10b})).
However, $a$, $b$ are related to physics and we would like to formulate normalization conditions in their terms, i.e., we naturally prefer to impose standard normalization conditions on $\tilde a$, $\tilde b$ (i.e., mass shell normalization at the physical electron mass $m$): \begin{eqnarray} \label{DR26} \tilde a(s_1=-1)&=\ \pm\ \tilde b(s_1=-1)\ =&N^{-1}_2\ =\ 1\hspace{2.cm}. \end{eqnarray} In other words, we of course require that the fermion propagator derived from the effective action we are in search of has a pole related to the physical electron mass $m$. In eq.\ (\ref{DR26}) $N_2$ is the (fermion) wave function normalization constant \footnote{A (finite) wave function renormalization corresponds to a change in $N_2$.}. Note that it is always possible to choose $s_1 = -1$ because in our set-up there exists a scaling symmetry $m \rightarrow \tau m$ ($s \rightarrow s/\tau^2 $), $\beta \rightarrow \tau^2 \beta$, $b \rightarrow b/\tau $ for any non-zero real parameter $\tau$ (RG invariance against (finite) mass renormalizations).
Consequently, we now apply the inverse map $\ \hat{{\bf g}}^{-1}_s: s_1 \longrightarrow s_0$ to determine $s_0$.\\ \parindent1.5em Taking into account (cf.\ eqs.\ (\ref{DR12}), (\ref{DR13})) \begin{eqnarray} \label{DR27} \tilde b(s)&=&-\ \sqrt{-s_0}\ \left[1\ +\ {3\ (1 + \lambda)\over G}\right]\ \tilde a(s)\ +\ \tilde b(\infty) \end{eqnarray} \parindent0.em (valid in the low $s$ region) and the low $s$ result for the Fourier transform of $a$ \begin{eqnarray} \label{DR28} &&\hspace{-1.cm}\tilde a(s)\ =\ \sqrt{2}\ \ \alpha\ G(\alpha)\ \tilde a(s_0)\ \ {s_0\over s}\ \left(1-{s\over s_0}\right)^{1/4}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \cdot\ \left[\ \sqrt{1-{s\over s_0}}\ P_{-5/2}\left(\left(1-{s\over s_0}\right)^{-1/2}\right)\ -\ P_{-3/2}\left(\left(1-{s\over s_0}\right)^{-1/2}\right)\ \right] \end{eqnarray} we conveniently calculate, using the $s_1$-pole condition $\sqrt{-s_1}\ \tilde a(s_1) = \pm \tilde b(s_1)$, the value of the RG invariant quantity $\tilde b(\infty)/(\sqrt{-s_0}\ \tilde a_g(s_0))$ (i.e., the value of the RG variant quantity $\tilde b(\infty)$ expressed in terms of $s_0$ and $\tilde a_g(s_0)$).
We find \begin{eqnarray} \label{DR29} &&\hspace{-1.cm}{\tilde b(\infty)\over\sqrt{-s_0}\ \tilde a_g(s_0)}\ = \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.5cm} =\sqrt{2\ u}\ \alpha\ G(\alpha)\ \left\{\ \pm 1\ +\ \sqrt{u}\ \left[1\ +\ {3 (1 + \lambda)\over G(\alpha)}\right]\right\} \left(1-u^{-1}\right)^{1/4}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \cdot\ \left[\ \sqrt{1-u^{-1}}\ P_{-5/2}\left(\left(1-u^{-1}\right)^{-1/2}\right)\ -\ P_{-3/2}\left(\left(1-u^{-1}\right)^{-1/2}\right)\ \right]\ ,\ \\ \vspace{0.5cm}\nonumber\\ &&\hspace{9.5cm}u\ =\ {s_0\over s_1}\hspace{1.5cm} .\nonumber \end{eqnarray} The same quantity can now be found from the $s_0$-pole via $\sqrt{-s_0}\ \tilde a_g(s_0) = \tilde b_g(s_0)$ \footnote{We omit the other root $\sqrt{-s_0}\ \tilde a_g(s_0) = - \tilde b_g(s_0)$ because one does not find any solution $s_0$ in this case.}, and both values have to agree, of course, which provides us with an equation for $s_0$ measured in units of $s_1$; the latter, in our case ($s_1 = -1$), is related to the physical electron mass $m$. The equation reads \begin{eqnarray} \label{DR30} &&\hspace{-1.cm} 1\ +\ \left(1\ +\ {1\over w(\alpha)}\right)\ \left[1\ +\ {3 (1 + \lambda)\over G(\alpha)}\right]\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.5cm} =\ \sqrt{-2\ s_0 }\ \alpha\ G(\alpha)\ \left\{\ \pm 1\ +\ \sqrt{-s_0}\ \left[1\ +\ {3 (1 + \lambda)\over G(\alpha)}\right]\right\} \left(1+s_0^{-1}\right)^{1/4}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.5cm} \ \ \ \cdot\ \left[\ \sqrt{1+s_0^{-1}}\ P_{-5/2}\left(\left(1+s_0^{-1}\right)^{-1/2}\right)\ -\ P_{-3/2}\left(\left(1+s_0^{-1}\right)^{-1/2}\right)\ \right]\ \ . \end{eqnarray} Again, in general solutions $s_0(\alpha)$ of this equation can only be studied numerically (see fig.\ 3).
However, for very small $\alpha$ ($\alpha \ll 1$), where $s_0$ is very close to $-1$, it can also be investigated analytically and one finds (choose the upper sign in eq.\ (\ref{DR30})) \begin{eqnarray} \label{DR31} \hspace{-0.7cm} \sqrt{\alpha}\ \left[1\ +\ O\left(\sqrt{\alpha} \ln \alpha\right)\right]&=& \sqrt{{2\pi\over (1-\lambda/3)}}\ {3(1+s_0)\over 32}\ \left[\ \ln {-(1+s_0)\over 64}\ +\ 3\ \right]\ ,\ \\ \vspace{0.5cm}\nonumber\\ &&\hspace{6.cm}\alpha\ \ll\ 1\ \ .\nonumber \end{eqnarray} It should be noted that for eq.\ (\ref{DR30}) a critical value $\alpha = \alpha_c$ exists which separates the $\alpha$ regions in which the upper and lower signs in eq.\ (\ref{DR30}) apply. For $\alpha < \alpha_c$ a solution $s_0$ exists only for the upper sign \footnote{It is clear that for small $\alpha$ (i.e., $\alpha \rightarrow 0$) a smooth transition from $\sqrt{-s_0}\ \tilde a_g(s_0) = \tilde b_g(s_0)$ to $\sqrt{-s_1}\ \tilde a(s_1) = \pm \tilde b(s_1)$ must exist; consequently the upper sign holds.}, while for $\alpha > \alpha_c$ only the lower sign admits a solution $s_0$. This critical value $\alpha_c$ corresponds to the singularity $s_0(\alpha\rightarrow \alpha_c) \longrightarrow -\infty$. Consequently, we find from (\ref{DR30}) the equation for determining $\alpha_c$ by considering $s_0 \longrightarrow -\infty$. It reads \begin{eqnarray} \label{DR32a}\hspace{-0.5cm} 1\ +\ \left[\ 1\ +\ {1\over w(\alpha_c)}\ -\ {\alpha_c\over 2 \sqrt{2}}\ G(\alpha_c)\ \right]\ \left[1\ +\ {3\ (1 + \lambda)\over G(\alpha_c)}\right]&=&0\ \ \ . \end{eqnarray} Numerically, one finds $\alpha_c \simeq 0.70$ (see fig.\ 3). Furthermore, there exists a maximal value $\alpha = \alpha_{max} > \alpha_c$ beyond which no solution $s_0$ can be found. The value of $\alpha_{max}$ corresponds to the limit $s_0(\alpha\rightarrow \alpha_{max}) \longrightarrow -1$.
The corresponding equation for $\alpha_{max}$ reads \begin{eqnarray} \label{DR32b}\hspace{-0.5cm} &&\hspace{-1.cm} \left[\ 1\ +\ {4\over 3 \pi}\ \alpha_{max}\ G(\alpha_{max})\ \right]\ + \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.4cm}+\ \left[\ 1\ +\ {1\over w(\alpha_{max})}\ -\ {4\over 3 \pi}\ \alpha_{max}\ G(\alpha_{max})\ \right]\ \left[1\ +\ {3\ (1 + \lambda)\over G(\alpha_{max})}\right]\ =\ 0\ . \end{eqnarray} The numerical calculation yields $\alpha_{max} \simeq 2.64$ (see fig.\ 3).\\ \parindent1.5em From the above considerations it is clear that finding a consistent IR solution of the integral equation (\ref{DK1}) requires understanding the parameter $\beta$ of our Ansatz (\ref{DG5}) as some function of $\alpha$; therefore it cannot be left arbitrary up to the point where we are going to impose the fixed point condition for the kernel of the gauge field action. It will be true in general that one parameter of any Ansatz (containing, say, $n$ parameters) for the kernel of the gauge field action needs to be reserved in order to allow for a consistent IR solution of the integral equation (\ref{DK1}). We have only one parameter at hand and from eq.\ (\ref{DR17}) we immediately find its dependence on $\alpha$ (for a plot see fig.\ 4). \begin{eqnarray} \label{DR33} \beta &=\ \beta(\alpha)\ =& -\ {\alpha^2\ w(\alpha)^2\over 4\ s_0(\alpha)} \end{eqnarray} \parindent0.em Here, $w(\alpha)$, $s_0(\alpha)$ are solutions of eqs.\ (\ref{DR16}), (\ref{DR30}) respectively.
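As a rough numerical illustration (a sketch only: $\lambda$ is set to zero, $s_0$ is approximated by $-1$ as appropriate for $\alpha \ll 1$, and only the terms displayed in eq.\ (\ref{DR23}) are kept), one may evaluate the small-$\alpha$ asymptotics of $w(\alpha)$ and the resulting $\beta(\alpha)$ of eq.\ (\ref{DR33}):

```python
import numpy as np

def w_asymptotic(alpha, lam=0.0):
    """Leading small-alpha behaviour of w(alpha), eq. (DR23)."""
    r = alpha * (1.0 - lam / 3.0) / (2.0 * np.pi)
    exponent = ((2.0 / 3.0) / np.sqrt(r) + 4.0 + np.sqrt(r) * np.log(alpha)
                - np.sqrt(r) / 6.0 * (59.0 / 3.0 + 38.0 * lam / (3.0 - lam)))
    return np.exp(exponent) / 32.0

def beta_of_alpha(alpha, s0=-1.0, lam=0.0):
    """beta(alpha) = -alpha^2 w(alpha)^2 / (4 s0), eq. (DR33);
    s0 = -1 is the alpha << 1 approximation."""
    return -alpha**2 * w_asymptotic(alpha, lam)**2 / (4.0 * s0)

alpha = 0.01
w = w_asymptotic(alpha)      # consistency assumption: w(alpha) >> 1
beta = beta_of_alpha(alpha)  # grows very large at small coupling
```

For $\alpha = 0.01$ this gives $w \sim 10^7$ and $\beta \sim 10^{10}$, illustrating both the consistency assumption $w(\alpha) \gg 1$ and the very large values of $\beta$ at small coupling.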
One easily recognizes (cf.\ fig.\ 4) that for small $\alpha$ the parameter $\beta$ assumes unrealistically large values, which underscores the point that the present approximative calculation has to be understood as a model calculation only.\\ \parindent1.5em After having applied the normalization condition (\ref{DR26}) and having fixed the parameters $G$, $s_0$, $\beta$, the functions $\tilde a = \tilde a_s$, $\tilde b = \tilde b(\infty) + \tilde b_s$ can be written in the low $s$ region as follows ($s_0 \le -1$). \begin{eqnarray} \label{DR34} &&\hspace{-1.3cm}\tilde a(s)\ =\ -\ {1\over s}\ \left(1-{s\over s_0}\right)^{1/4}\ \left(1+{1\over s_0}\right)^{-1/4}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \cdot\ \left[\ \sqrt{1-{s\over s_0}}\ P_{-5/2}\left(\left(1-{s\over s_0}\right)^{-1/2}\right)\ -\ P_{-3/2}\left(\left(1-{s\over s_0}\right)^{-1/2}\right)\ \right]\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \cdot\ \left[\ \sqrt{1+{1\over s_0}}\ P_{-5/2}\left(\left(1+{1\over s_0}\right)^{-1/2}\right)\ -\ P_{-3/2}\left(\left(1+{1\over s_0}\right)^{-1/2}\right)\ \right]^{-1}\ ,\\ \vspace{1.5cm}\nonumber\\ \label{DR35} &&\hspace{-1.cm}\tilde b(s)\ =\ ( \pm 1 -\tilde b(\infty))\ \tilde a(s)\ +\ \tilde b(\infty)\ \ \ \ . \end{eqnarray} \parindent0.em The parameter $\tilde b(\infty)$ in the normalization applied reads (for a plot see fig.\ 5) \begin{eqnarray} \label{DR36} \tilde b(\infty)\ =&\pm 1\ -\ \sqrt{-s_0}\ \ \displaystyle{H\over G}&=\ \pm 1\ +\ \sqrt{-s_0}\left[\ 1\ +\ {3(1+\lambda)\over G(\alpha)}\right]\ \ \ \ \ \ .
\end{eqnarray} For small $\alpha$ we immediately find from eq.\ (\ref{DR24}) \begin{eqnarray} \label{DR37} &&\hspace{-1.cm}\tilde b(\infty)\ =\ 1\ +\ \sqrt{-s_0} \left[\ 1\ +\ 4\ (1+\lambda)\ {\alpha\over\pi}\ +\ O\left(\alpha^{3/2}\right)\ \right]\ \ \ ,\ \\ \vspace{0.5cm}\nonumber\\ &&\hspace{7.5cm}\alpha\ \ll\ 1\ \ ,\ \ \alpha\ <\ \alpha_c \ .\nonumber \end{eqnarray} Taking into account eq.\ (\ref{DR31}) ($s_0 \simeq -1$, $\alpha \ll 1$) we recognize that for small $\alpha$ ($\alpha \ll 1$, $\alpha < \alpha_c$) it holds that $\tilde b(\infty) \simeq 2$. From a physical point of view this might be interpreted in such a way that at low energies the fermion action merely describes individual real fermions ($\tilde b \simeq 1$), i.e., a single particle interpretation is possible, while at high energies it reflects collective properties of the vacuum which are related to fermion (electron-positron) pairs, consequently $\tilde b \sim \tilde b(\infty) \simeq 2$. Apparently, such an interpretation breaks down at stronger coupling.\\ \parindent1.5em Now, the appropriately normalized $\tilde a_g(s)$ (eq.\ (\ref{DR15})) reads in the low $s$ region \begin{eqnarray} \label{DR38} &&\hspace{-1.cm}\tilde a_g(s)\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&=\ {C_g\over 2\sqrt{\pi}}\ \ \Gamma\left(-{1\over 2}\ +\ \alpha\ {(3-\lambda)\over 2\pi}\ \right)\ \left(-s_0 - 1\right)^{-1/4}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \cdot\ \left[\ \sqrt{1+{1\over s_0}}\ P_{-5/2}\left(\left(1+{1\over s_0}\right)^{-1/2}\right)\ -\ P_{-3/2}\left(\left(1+{1\over s_0}\right)^{-1/2}\right)\ \right]^{-1}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \cdot\ {1\over s}\ \left[\ (\sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta})^2\ +\ s\ \right]^{1/4\ -\ \alpha (3-\lambda)/4\pi}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \cdot\ \left[\ \sqrt{\ 1\ +\ {s\over (\sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta})^2}}\ \right.\cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \ \ \ \cdot\ P_{-5/2\ 
+\ \alpha (3-\lambda)/2\pi}\left(\left( 1\ +\ {s\over (\sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta})^2}\right)^{-1/2} \right) \ -\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ \ \ \ \ \ -\ \left. P_{-3/2\ +\ \alpha (3-\lambda)/2\pi}\left(\left( 1\ +\ {s\over (\sqrt{-s_0}\ +\ \alpha/2\sqrt{\beta})^2}\right)^{-1/2} \right)\ \right]\ \ \ \ .\ \end{eqnarray} \parindent0.em And eq.\ (\ref{DR19}) can be written as \begin{eqnarray} \label{DR39} \tilde b_g(s)&=&\left( \pm 1-\tilde b(\infty)\right)\ \left(\ 1\ +\ {1\over w(\alpha)}\ \right)\ \tilde a_g(s)\ +\ \tilde b(\infty)\ \ \ .\ \end{eqnarray} Clearly, $s_0$, $\beta$, $\tilde b(\infty)$ are functions of $\alpha$ ($\lambda = 0$ as explained in chapter 3).\\ \parindent1.5em Finally, the correctly normalized IR tails of $a$, $b$ characterizing the kernel of the fermion action are \begin{eqnarray} \label{DR40} &&\hspace{-3.cm}a(x_E)\stackrel{m^2 x^2_E \rightarrow \infty}{=} \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-2.cm}=\ {m^4\over\sqrt{2}\ (2\pi)^{5/2}}\ \ \left(-s_0 - 1\right)^{-1/4}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm} \cdot\ \left[\ \sqrt{1+{1\over s_0}}\ P_{-5/2}\left(\left(1+{1\over s_0}\right)^{-1/2}\right)\ -\ P_{-3/2}\left(\left(1+{1\over s_0}\right)^{-1/2}\right)\ \right]^{-1}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm} \cdot \ (m \vert x_E\vert)^{-7/2}\ \ {\rm e}^{\displaystyle\ -\ \sqrt{-s_0}\ m \vert x_E\vert}\ \ \Big[\ 1\ +\ \ldots\ \Big]\hspace{1.cm},\\ \vspace{0.6cm}\nonumber\\ \label{DR41} b(x_E)&\stackrel{m^2 x^2_E \rightarrow \infty}{=}& \left(\pm 1-\tilde b(\infty)\right)\ a(m^2 x^2_E \rightarrow \infty)\hspace{1.cm}, \end{eqnarray} \parindent0.em where $s_0$ and $\tilde b(\infty)$ are to be considered as functions of $\alpha$. 
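As a cross-check of the low $s$ representation, eqs.\ (\ref{DR34}), (\ref{DR35}) can be evaluated numerically. The sketch below is our own illustration (all helper names are ours); it represents the Legendre function through $P_\nu(z) = {}_2F_1(-\nu,\nu+1;1;(1-z)/2)$, which covers the arguments occurring here, and reproduces the structural identity $\tilde a(-1) = 1$ that follows directly from eq.\ (\ref{DR34}).

```python
# Numerical sketch (ours) of the low-s representation (DR34), (DR35).
import mpmath as mp

def legendre_P(nu, z):
    # P_nu(z) = 2F1(-nu, nu+1; 1; (1-z)/2)  (DLMF 14.3.1, mu = 0)
    return mp.hyp2f1(-nu, nu + 1, 1, (1 - z) / 2)

def bracket(u):
    # sqrt(u) P_{-5/2}(u^{-1/2}) - P_{-3/2}(u^{-1/2})
    x = 1 / mp.sqrt(u)
    return mp.sqrt(u) * legendre_P(mp.mpf(-5) / 2, x) \
           - legendre_P(mp.mpf(-3) / 2, x)

def a_tilde(s, s0):
    # eq. (DR34); s0 <= -1 and s > s0 assumed
    u, u0 = 1 - s / s0, 1 + 1 / s0
    return -(1 / s) * (u / u0) ** mp.mpf('0.25') * bracket(u) / bracket(u0)

def b_tilde(s, s0, b_inf, sign=+1):
    # eq. (DR35); upper or lower sign selected by `sign`
    return (sign - b_inf) * a_tilde(s, s0) + b_inf
```

By construction $\tilde a(-1) = 1$ for any $s_0 < -1$, and correspondingly $\tilde b(-1) = \pm 1$, which appears consistent with the normalization condition (\ref{DR26}).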
Clearly, in qualitative respects eqs.\ (\ref{DR40}), (\ref{DR41}) agree with the long distance representation of the 1-loop fermion self-energy calculated in standard QED perturbation theory.\\ \parindent1.5em To conclude this subsection, it should be emphasized that, by analyzing the integral equation (\ref{DK1}) for the kernel of the fermion action in the asymptotic UV and IR regions respectively, on the basis of certain reasonable assumptions, we have obtained a qualitative, nonperturbative understanding of the behaviour of its solution. Furthermore, the IR analysis even yields approximate quantitative, nonperturbative results which, combined with the information obtained about the UV behaviour of the kernel of the fermion action, allow us to attempt the approximative calculation of the QED coupling constant $\alpha$. This we will study now.\\ \subsubsection{\label{DS33}\label{SH2}The Fixed Point Condition for the Kernel of the Gauge Field Action and the Approximative Calculation of the QED Coupling Constant $\alpha$} From eq.\ (\ref{DI12}) we recognize that the functional integration induces a change $\Delta\Gamma^G_I [A]$ which has to be added to the gauge field action $\Gamma^G_I [A]$ to obtain $\Gamma^G_{II} [A]$. In accordance with our approximation strategy we display only those terms that match our Ansatz (\ref{DG5}). \begin{eqnarray} \label{DL1} \hspace{-0.5cm}\Delta\Gamma^G_I [A]\ &=&\nonumber\\ \vspace{0.3cm}\nonumber\\ \hspace{-0.7cm} = \ {\alpha\over 4\pi} &\displaystyle{\int} d^4x &A^\mu(x)\ \left[ g_{\mu\nu} \Box\ -\ \partial_\mu \partial_\nu\right]\ \left[\ C_{1a}\ +\ C_{2a}\ {\Box\over m^2}\ +\ \ldots\ \right] \ A^\nu(x) \end{eqnarray} \parindent.0em Because $a_g$, $b_g$ respect conditions (\ref{DF9}), (\ref{DF10}) (cf.\ eqs.\ (\ref{DV14}), (\ref{DV15})), no terms violating gauge invariance occur and eq.\ (\ref{ZA10}) applies.
$C_{1a}$ reads (see Appendix A; as explained in section \ref{SH1} we confine ourselves to 1-loop contributions) \begin{eqnarray} \label{DL2a} C_{1a}\ &=&\ {2\over 3}\ \ln\ \left[ {\tilde b (\infty)\over\tilde b_g(0)} \right]^2\ - \ \int\limits_0^\infty ds\ M(s)\hspace{0.5cm},\\ \vspace{0.3cm}\nonumber\\ \label{DL2b} &&\ M(s)\ =\ \ {1\over s \tilde a^2_g + \tilde b^2_g }\ \left[\ {s\ \tilde a^2_g \over s \tilde a^2_g + \tilde b^2_g}\ \left[\ s\ \tilde a_g \tilde a^\prime_g + \tilde b_g \tilde b^\prime_g\ \right] \ +\right.\nonumber\\ \vspace{0.2cm}\nonumber\\ &&\hspace{2cm} +\ {2\over 3}\ s^3\ \tilde a_g \tilde a^{\prime\prime\prime}_g\ +\ 3\ s^2\ \tilde a_g \tilde a^{\prime\prime}_g\ +\ {2\over 3}\ s^2\ \tilde b_g \tilde b^{\prime\prime\prime}_g\ +\nonumber\\ &&\hspace{2cm}+\ 2\ s\ \tilde a_g \tilde a^\prime_g\ +\ 3\ s\ \tilde b_g \tilde b^{\prime\prime}_g\ -\ s\ (\tilde b^\prime_g)^2\ +\ 3\ \tilde b_g \tilde b^\prime_g\ \Bigg] \ \ . \end{eqnarray} From the above expression one recognizes that $C_{1a}$ is an RG invariant quantity, i.e., it is invariant under (finite) mass and (fermion) wave function renormalizations. $C_{2a}$ has not yet been calculated in terms of $\tilde a_g$, $\tilde b_g$, but it will have an analogous representation. Because $\tilde a_g$, $\tilde b_g$ depend exclusively on $\alpha$, the coefficients $C_{1a}$, $C_{2a}$ can both be understood as functions of this parameter. Then, according to our approximation strategy, the fixed point condition $d_I = d_{II}$ reads (cf.\ subsection 4.2.3) \begin{eqnarray} \label{DL3} C_{1a}(\alpha)\ &=&\ 0\hspace{2.cm} ,\\ \vspace{-0.1cm}\nonumber\\ \label{DL4} C_{2a}(\alpha)\ &=&\ 0\hspace{2.cm} . \end{eqnarray} It is clear that within our approximative approach we do not have enough parameters left to satisfy both of these equations (if they are not degenerate, perhaps by accident).
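To make the evaluation of eqs.\ (\ref{DL2a}), (\ref{DL2b}) concrete, here is a schematic numerical implementation of our own (not the code behind the actual calculation): derivatives of the kernels are taken numerically, and the integral is cut at an upper value $s_x$. For constant kernels every derivative term vanishes, so $M \equiv 0$, which provides a minimal consistency check.

```python
# Schematic evaluation (ours) of C_{1a}, eqs. (DL2a)/(DL2b).
import mpmath as mp

def M(s, a, b):
    # integrand of eq. (DL2b); a, b are callables for the kernels
    a0, a1, a2, a3 = (mp.diff(a, s, n) for n in range(4))
    b0, b1, b2, b3 = (mp.diff(b, s, n) for n in range(4))
    den = s * a0**2 + b0**2
    inner = ((s * a0**2 / den) * (s * a0 * a1 + b0 * b1)
             + mp.mpf(2) / 3 * s**3 * a0 * a3 + 3 * s**2 * a0 * a2
             + mp.mpf(2) / 3 * s**2 * b0 * b3
             + 2 * s * a0 * a1 + 3 * s * b0 * b2
             - s * b1**2 + 3 * b0 * b1)
    return inner / den

def C1a(a, b, b_inf, b_at_0, s_x):
    # eq. (DL2a) with the integral cut at s = s_x;
    # b_inf, b_at_0 stand for b~(infinity) and b~_g(0)
    return mp.mpf(2) / 3 * mp.log((b_inf / b_at_0)**2) \
           - mp.quad(lambda s: M(s, a, b), [0, s_x])
```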
We decide to choose eq.\ (\ref{DL3}) as the fixed point equation because we require that, at least in the asymptotic IR (long distance, long wavelength) region, the fixed point condition for the map $f$ should be fulfilled. Consequently, to determine the QED coupling constant $\alpha$ we have to find the zero(s) of $C_{1a}(\alpha)$.\\ \parindent1.5em The explicit calculation of $C_{1a}$ has of course to be based on information obtained in the preceding sections. The first point to be made is that we will take eq.\ (\ref{DL2a}) as it stands. In principle, one could identically reformulate it by exploiting partial integrations for functions that obey conditions (\ref{DF9}), (\ref{DF10}) (or the even somewhat weaker conditions $a_g(s) = O(s^\kappa),\ \kappa < -1/2,\ b_g(s) = O(1), \ s \rightarrow \infty$). We choose the present representation for its 'minimal' shape (Of course, this is merely a matter of taste.). Let us also emphasize that it turns out advantageous because a certain piece is already integrated out and therefore depends on the boundary values of $\tilde b_g$ only. This term contains certain nonperturbative information from the solution of the integral equation (\ref{DK1}) for the kernel of the fermion action that is not easily incorporated otherwise. Finally, one should keep in mind that although different representations of eq.\ (\ref{DL2a}) are equivalent in a rigorous mathematical sense, they may lead to different answers if only approximative information is taken into account (and this is what we will do).\\ Now, the first guess might be simply to insert into eq.\ (\ref{DL2a}) the IR representation found for $\tilde a_g$, $\tilde b_g$ (eqs.\ (\ref{DR38}), (\ref{DR39})). But, as should come as no surprise, the integral in eq.\ (\ref{DL2a}) is then not convergent for $\alpha \le \pi/3$ (it is logarithmically UV divergent). In other words, this approximation would be so crude that it would not even deliver finite results.
So, in the parameter region $\alpha \le \pi/3$ at least one has to proceed differently. Without any problem we may always insert the value of $\tilde b(\infty)$ determined by the normalization conditions applied within the IR analysis. For $\tilde b_g(0)$ and in the low $s$ integration region of the integral we will insert $\tilde a_g$, $\tilde b_g$ as given by eqs.\ (\ref{DR38}), (\ref{DR39}). In the large $s$ region $\tilde a_g$, $\tilde b_g$ will be taken from eqs.\ (\ref{DV14}), (\ref{DV15}). One immediately recognizes that this is a better approximation because the integral in eq.\ (\ref{DL2a}) then gives finite results. Now, of course, the practical question arises which intermediate value of the integration variable $s$ in eq.\ (\ref{DL2a}) should split the application regions of the IR and UV representation of $\tilde a_g$, $\tilde b_g$. Perhaps, one could choose to fit together the IR and UV representations at some value of $s$ to be determined by a certain condition. For the purpose of the present numerical calculation we select another way. The UV tail of the integrand $M(s)$ in eq.\ (\ref{DL2a}) will not contribute significantly and we therefore ignore it by simply cutting the integration over the IR representation of the integrand at some upper value $s = s_x$. This value is determined as follows. Observe that the exact integrand $M(s)$ in eq.\ (\ref{DL2a}) is positive for $s\rightarrow\infty$. To see this one may insert eqs.\ (\ref{DV14}), (\ref{DV15}) into (\ref{DL2b}) and one finds to leading order \begin{eqnarray} \label{DL6} M(s)&=& 11\ \ {C^2_{\tilde a}\over s^4}\ \ +\ \ \ldots\ \ \ >\ 0\ \ , \ \ \ s\ \longrightarrow\ \infty\ \ \ . \end{eqnarray} \parindent.0em On the other hand, one may easily convince oneself that for $\alpha \le \pi/3$ the integrand $M(s)$ of eq.\ (\ref{DL2a}) turns negative for $s \longrightarrow \infty$ if the low $s$ representations (\ref{DR38}), (\ref{DR39}) are inserted. 
One now observes that the integrand with the low $s$ representation inserted is positive at $s = 0$. Consequently, there exists a zero of the integrand taken in the IR representation (cf.\ fig.\ 6). Obviously, this zero determines the point beyond which the IR (low $s$) representation starts to strongly misrepresent the true integrand, and we therefore choose this zero as the upper cut-off $s_x$ of the numerical integration (See fig.\ 7 for the dependence of $s_x$ on $\alpha$.) \footnote{Another choice might be to fit the IR and UV representations of the integrand together at some $s_y < s_x$. Here, one way is to require continuity of the integrand at $s = s_y$ and to determine $s_y$ by extremizing the value of the integral. However, in doing so one detects that the contribution of the UV tail is negligible numerically.}. It is clear that this recipe leads to a slightly lower value of the integral than if the UV region were not neglected.\\ \parindent1.5em Now, the result of the numerical calculation of $C_{1a}(\alpha)$ is shown in fig.\ 8, while fig.\ 9 displays the behaviour of the two contributions from which $C_{1a}(\alpha)$ derives (cf.\ eq.\ (\ref{DL2a})). Unfortunately, within the approximation applied we do not find any zero of $C_{1a}(\alpha)$, but from fig.\ 9 one recognizes that the two contributions to be taken into account are indeed numerically comparable. We believe that the contribution of the integral in eq.\ (\ref{DL2a}) is underestimated within the approximation applied, compared with the exact one derived from the exact solution of the integral equation (\ref{DK1}). The contribution of the first term in eq.\ (\ref{DL2a}) is probably determined more reliably because only the boundary values of $\tilde b_g(s)$ contribute to it. Furthermore, the smaller $\alpha$ is, the more strongly the approximation applied miscalculates the second term in eq.\ (\ref{DL2a}). This can easily be seen from fig.\ 6 (and fig.\ 7).
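The cut-off prescription described above amounts to locating the first zero of the IR-represented integrand. A simple scan-and-bisect routine suffices for that; the sketch below is a generic helper of our own, not tied to the specific integrand:

```python
# Generic first-zero finder (ours) for choosing the cut-off s_x.
def find_cutoff(M, s_hi, n=400, tol=1e-12):
    """Scan (0, s_hi] for the first sign change of M, then bisect."""
    h = s_hi / n
    lo = h
    for k in range(2, n + 1):
        hi = k * h
        if M(lo) * M(hi) <= 0:          # sign change bracketed
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if M(lo) * M(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        lo = hi
    return None                         # no zero found below s_hi
```

For the integrand of fig.\ 6 one would pass the IR representation of $M(s)$; the routine returns `None` if no sign change occurs below `s_hi`.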
The true integrand (i.e., with the exact solution of eq.\ (\ref{DK1}), which we do not know presently, inserted) would likely contribute more because we expect the integrand $M(s)$ to be positive for large $s$. This would shift curve 2 in fig.\ 9 to larger values, and consequently a zero of $C_{1a}(\alpha)$ might occur.\\ To conclude, the mechanism proposed has explicitly been shown to be capable of attempting the calculation of the QED coupling constant $\alpha$. However, the approximation applied still turns out too simple to yield any specific value of $\alpha$. In particular, for small values of $\alpha$, where most of the approximations applied within the calculation given in the present chapter appear to be most justified, no zero of $C_{1a}(\alpha)$ is found. But it is clear that more advanced approximations may lead to a different picture. This needs to be studied in the future. We postpone further discussion of this issue to chapter 6.\\ \newpage \section{\label{ES}The Vacuum Energy, and Related Problems} \setcounter{equation}{0} In this chapter we discuss the vacuum energy issue and some related problems we have not mentioned so far. The consideration will not aim at the most general theoretical set-up eventually possible, which would very likely turn out fruitless; rather, we restrict consideration to QED and in particular to the approximative approach to it studied in chapter 4. It might be hoped that this special case yields certain new insight into the problem, useful at least for gauge field theories in general.\\ In standard QED in 4D Minkowski space the vacuum energy density originating from fermion as well as from photon fluctuations and their interactions is a divergent quantity, but it is considered unimportant because it can either be removed by applying normal ordering (in operator quantization) or by appropriately normalizing the functional integral defining the theory. No physical quantity depends on it.
But, it is also known that modifications of the vacuum energy density, as occurring when external conditions are applied (boundary conditions, temperature, external fields), do matter, and in certain cases the consequences are even observable in experiment (e.g., the Casimir effect) \cite{k}-\cite{kapu}. A few changes of the vacuum energy density turn out to be finite immediately (e.g., the Casimir energy density, or the free energy density for QED at finite temperature). Others require renormalization, like the QED effective potential for (say) a constant magnetic field. Even more care is needed in the study of QED in a gravitational background field, to which we will return later. However, a large part of the motivation for studying the vacuum energy density derives from this situation because it gives rise to the concept of induced (classical) gravity \cite{l}, understood as some kind of gravitational (metric) Casimir effect (for a review of recent work and further references see \cite{adle},\cite{novo}, also note \cite{davi1},\cite{davi2}).\\ First, let us compare the calculation of the vacuum energy density in standard QED and within the present approach. We restrict ourselves to the 1-loop level, which contains all the important features. We apply the simplest regularization possible, namely cut-off regularization (with a (radial) momentum space UV cut-off at $\Lambda$), which is most suited for our purposes. The vacuum energy density $\rho_{vac}$ is given by \parindent0.em \begin{eqnarray} \label{E1} \Gamma_{II} [0,0,0]\ &=&-\ V_4\ \rho_{vac}\nonumber\\ \vspace{0.3cm}\nonumber\\ &=&\ const.\ - \ i\ln\ {\rm Det}_\Lambda\ \left( S^{-1}_I\right)\ -\nonumber\\ &&\hspace{0.8cm} -\ i\ln\ {\rm Det}_\Lambda\ \left( D^{-1}_{gh\ I}\right)\ + {i\over 2}\ \ln\ {\rm Det}_\Lambda\ \left( D^{-1}_{I\ \mu\nu}\right)\ \ \ .
\end{eqnarray} Here, \begin{eqnarray} \label{E2} S^{-1}_I(x-x^\prime)\ &=&\ i\not\hspace{-0.07cm}\partial_x\ a_I\left(x-x^\prime\right)\ -\ m\ b_I\left(x-x^\prime\right)\\ \vspace{0.3cm}\nonumber\\ \label{E3} D^{-1}_{gh\ I}(x-x^\prime)\ &=&{1\over\sqrt{\lambda}}\ _x\partial_\mu\ n^\mu(x-x^\prime)\ \\ \vspace{0.3cm}\nonumber\\ \label{E4} D^{-1}_{I\ \mu\nu}(x-x^\prime)\ &=& \left[ g_{\mu\nu}\ _x\Box - \ _x\partial_\mu \ _x\partial_\nu\right] d_I\left(x-x^\prime\right)\ -\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ -\ {1\over\lambda}\ \int d^4 y\ \ n_\mu(y-x)\ n_\nu(y-x^\prime) \end{eqnarray} are the quadratic kernels of the fermion, ghost (contributing in QED to the vacuum energy only), and gauge field actions respectively \footnote{ $n_\mu$ can be here any vector-valued distribution, e.g., perhaps a derivative $\partial_\mu$ acting on some scalar function leading to a Lorentz type gauge, or any constant vector times a scalar function yielding an axial type gauge.}. From eq.\ (\ref{E1}) we find accordingly \begin{eqnarray} \label{E5} &&\hspace{-1.4cm}\Gamma_{II} [0,0,0]\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.2cm} =\ const.\ -\ 2i\ V_4 \int\limits_\Lambda {d^4p\over (2\pi)^4}\ \ \ln\left[ \ -p^2\ \tilde a_I (p)^2\ +\ m^2\ \tilde b_I (p)^2\ \right]\ - \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.7cm} -\ i\ V_4 \int\limits_\Lambda {d^4p\over (2\pi)^4}\ \ \ln\left[\ i\ \lambda^{-1/2}\ p\tilde n(p)\ \right]\ + \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.7cm} +\ {i\over 2}\ V_4 \int\limits_\Lambda {d^4p\over (2\pi)^4}\ \ \ln\left[ \det\left[ (g_{\mu\nu}\ p^2\ -\ p_\mu p_\nu)\ \tilde d_I(p)\ -\ \lambda^{-1}\ \tilde n_\mu(p)\ \tilde n_\nu(p)\ \right]\ \right]\ . 
\end{eqnarray} Taking into account the relation \begin{eqnarray} \label{E6} \det\left[ (g_{\mu\nu}\ p^2\ -\ p_\mu p_\nu)\ \tilde d\ -\ \lambda^{-1}\ \tilde n_\mu\tilde n_\nu \right]\ &=& -\ {\tilde d^{\ 3}\over\lambda}\ \ \left[p\tilde n\right]^2 \left[p^2\right]^2 \end{eqnarray} and applying a Wick rotation one finds after some manipulations \footnote{We have absorbed certain $\ \ln m$ terms into the first (normalization) constant on the r.h.s.\ of eq.\ (\ref{E5}).} \begin{eqnarray} \label{E7} &&\hspace{-0.8cm}\Gamma_{II} [0,0,0]\ =\ const.\ +\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.4cm}+\ {V_4\over 8\pi^2}\ m^4\ \int\limits_0^{\Lambda^2\over m^2} ds\ s\ \left\{\ \ln\left[ \ s\ \tilde a_I (s)^2\ +\ \tilde b_I (s)^2\ \right]\ -\ {1\over 2}\ \ln\left[\ s\ \tilde d_I(s)^{3/2}\ \right]\ \right\}\ . \end{eqnarray} There is no trace left of the gauge condition because we have correctly included the gauge parameter $\lambda$ in the kernel of the ghost action (\ref{E3}) (For a related discussion see \cite{alle},\cite{niel}.). One immediately recognizes the well-known fact that in standard QED ($\tilde a_I = \tilde b_I = \tilde d_I \equiv 1$) the vacuum energy density $\rho_{vac}$ diverges \footnote{Incidentally, one may always formally (by ignoring the finiteness/convergence requirements of properly applied mathematics) transform eq.\ (\ref{E5}) into the 'sum over the spectrum' formula for the vacuum energy density by exploiting the cut in the appropriate variable (i.e., $p_0$) connected with the logarithms, starting at the lowest energy eigenvalue of the spectrum and extending to infinity.}.
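The divergence for $\tilde a_I = \tilde b_I = \tilde d_I \equiv 1$ can be made tangible numerically. The following sketch (ours; the overall factor $m^4 V_4/8\pi^2$ and the normalization constant are dropped) evaluates the cut-off integral of eq.\ (\ref{E7}) and shows it growing without bound as $\Lambda^2/m^2$ is raised:

```python
# Cut-off integral of eq. (E7) for general kernels (our sketch).
import mpmath as mp

def vac_integral(cut, a=lambda s: 1, b=lambda s: 1, d=lambda s: 1):
    # cut = Lambda^2 / m^2; prefactor m^4 V_4 / 8 pi^2 dropped
    f = lambda s: s * (mp.log(s * a(s)**2 + b(s)**2)
                       - mp.mpf('0.5') * mp.log(s * d(s)**mp.mpf('1.5')))
    return mp.quad(f, [0, 1, cut])
```

With the standard QED kernels the integrand behaves like $\tfrac{1}{2}\,s\ln s$ for large $s$, so the result grows roughly like the fourth power of the cut-off (up to logarithms).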
Now, QED in a background field (electromagnetic or gravitational; we restrict consideration to these external conditions, which are the most interesting ones in view of the difficulties of standard QED) will change the quantity $s$ (stemming from differential operators in configuration space) appearing in the argument of the logarithms above to some $s + \Delta s$, where for large $s$ the change $\Delta s$ behaves like $\Delta s \stackrel{s \longrightarrow \infty}{\sim} const.$ \footnote{Considering a connection in the covariant derivatives this naively yields $\Delta s \stackrel{s \longrightarrow \infty}{\sim} \sqrt{s}$, but symmetry reasons finally lead to the somewhat weaker behaviour $\Delta s \stackrel{s \longrightarrow \infty}{\sim} const.$.}. Of course, as already mentioned, one can always absorb the divergent terms characteristic of 4D Minkowski space and displayed on the r.h.s.\ of eq.\ (\ref{E7}) into the normalization constant of the functional integral. But, for QED in a background field the logarithm in the integrand of eq.\ (\ref{E7}) then reads for large $s$ \begin{eqnarray} \label{E8} \ln\left[\ 1\ +\ {\Delta s\over s}\ +\ \ldots\ \right]& \stackrel{s \longrightarrow \infty}{=}& \ln\left[\ 1\ +\ O(s^{-1})\ \right]\ =\ O(s^{-1}) \end{eqnarray} and the vacuum energy density depending on the background field is still divergent (This even holds up to $\Delta s \stackrel{s \longrightarrow \infty}{\sim} 1/s$.).\\ \parindent1.5em Now, compare this with our approximative approach to the equation for the complete effective action of QED. From eqs.\ (\ref{DV14}), (\ref{DV15}) we know that \begin{eqnarray} \label{E9} s\ \tilde a_I(s)^2\ +\ \tilde b_I(s)^2\ &\stackrel{s \longrightarrow \infty}{=} &\tilde b(\infty)^{2}\ \left[\ 1\ -\ {C^2_{\tilde a}\over s^3}\ +\ \ldots\ \right]\hspace{1.5cm}.
\end{eqnarray} \parindent0.em Absorbing a $\ \ln \tilde b(\infty)$ term into the normalization constant of the functional integral, we see that the part of the vacuum energy density originating from fermion fluctuations (the first term in the integrand of eq.\ (\ref{E7})) is finite even without any further appeal to this constant. As we have explained in subsection \ref{SH4}, this is true irrespective of the particular approximation applied (i.e., whether we first perform the gauge field integration or the fermionic integration). Consequently, any change of the fermionic part of the vacuum energy density under the influence of external (electromagnetic as well as gravitational) fields will also be finite. But, in view of condition (\ref{DG4}), the part of the vacuum energy density originating from photon fluctuations (the second term in the integrand of eq.\ (\ref{E7})) is still divergent, and just as in standard QED we need to absorb this divergency for 4D Minkowski space into the normalization constant of the functional integral in order to properly define the equation for the complete effective action of QED. This can be done without any problem. The only remaining concern is the behaviour of the gauge field determinant in the presence of a gravitational background field. We do not have any quick answer to this, but let us speculate for a moment. Assume we had, for 4D Minkowski space, absorbed the UV divergency stemming from the gauge field determinant into the normalization constant of the functional integral by using a certain power of the determinant of the d'Alembertian \footnote{One may well imagine that $d_I$ behaves for high energies in such a way that this recipe removes all divergencies. If not, perhaps the determinant of $d_I$ as a whole has to be included in the normalization constant.}.
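The finiteness of the fermionic piece can be illustrated with a toy kernel realizing the falloff (\ref{E9}) (a model choice of ours, not the exact solution): once the $\ln \tilde b(\infty)$ term is absorbed, the integrand behaves like $s\,\ln(1 - C^2_{\tilde a}/s^3) = O(s^{-2})$ and the cut-off integral saturates.

```python
# Toy illustration (ours) of the finite fermionic piece, using (E9).
import mpmath as mp

def fermion_piece(cut, b_inf=2.0, C=1.0):
    # s a^2 + b^2 modelled as b_inf^2 (1 - C^2/s^3); the ln b_inf^2 term
    # is subtracted (absorbed into the normalization constant)
    f = lambda s: s * (mp.log(b_inf**2 * (1 - C**2 / s**3))
                       - mp.log(b_inf**2))
    return mp.quad(f, [2, 10, 100, cut])  # UV region, above s = C^(2/3)
```

Raising the cut-off by two orders of magnitude changes the result only at the level of the neglected $O(1/\mathrm{cut})$ tail, in contrast to the photonic piece, which still diverges.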
If one now generalizes the 4D Minkowski space functional integral to an arbitrary gravitational background, this has to be done for the whole functional integral measure, i.e., the (normalization) determinant of the d'Alembertian also has to be generalized covariantly. Then of course, using this recipe, the vacuum energy density of QED would be finite in electromagnetic as well as in gravitational background fields. If one is to reject the above recipe, one has to further discuss the determinant of the d'Alembertian in the presence of a gravitational background field, which is a problem of long-standing concern, in particular the gauge field conformal anomaly and its regularization dependence \cite{birr}. Finally, it appears not unreasonable to expect that the above discussion continues to apply if further contributions (higher loops) are taken into account.\\ \parindent1.5em The above consideration now allows us to compare standard QED in a gravitational background field with the present approach. In standard QED the structure of the first few terms of the effective gravitational action (i.e., up to a minus sign, the (time integrated) vacuum energy) is known \cite{k},\cite{birr},\cite{full}. \parindent0.em \begin{eqnarray} \label{E10} &&\hspace{-1.5cm}\Gamma_{II} [0,0,0]\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&=\ \int d^4x\ \sqrt{-g}\ \left\{\ m^4\ c_1\ +\ m^2\ c_2\ R\ +\ c_3\ \Box\ R\ +\ c_4\ R^2\ +\right.\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{2.5cm}\left.+\ c_5\ R_{\mu\nu}\ R^{\mu\nu}\ +\ c_6\ R_{\mu\nu\alpha\beta}\ R^{\mu\nu\alpha\beta}\ +\ \ldots\ \right\} \end{eqnarray} $c_1$ to $c_6$ are certain divergent dimensionless constants. We have already discussed $c_1$ above (i.e., $-\rho_{vac}$ for 4D Minkowski space); $c_2$ is a quadratically (in the cut-off $\Lambda$) divergent quantity, while $c_3$ to $c_6$ diverge logarithmically. All further terms are finite.
Consistency requires starting in the standard QED functional integral with a certain bare gravitational action (included in $\Gamma_I$) containing all terms displayed in eq.\ (\ref{E10}) in order to be able to absorb the divergencies into the bare constants in front of them. Consequently, induced gravity is not a consistent concept within standard QED. In contradistinction to standard QED, by taking into account the UV behaviour of the quadratic kernel of the fermion action (a consequence of the equation for the complete effective action of QED) we have demonstrated above that, whatever the technical approach to calculating $c_2$ to $c_6$ will be in detail \footnote{In cut-off regularization they will have representations analogous to eqs.\ (\ref{ZA6})--(\ref{ZA8}).}, these coefficients will come out finite (at least at the 1-loop level). The contribution from the determinant of the gauge field kernel will depend on the choice one is willing to make for the normalization of the functional integral. Therefore, within the present framework induced gravity might under certain circumstances turn out to be a valid concept. Of course, as has been pointed out by {\sc Sakharov} in his pioneering paper \cite{l}, the (induced) gravitational action will very likely not be dominated by contributions stemming from QED but by those from the heaviest excitations (particles) existing in nature. If one wishes to attempt the calculation of the induced gravitational action within the concept proposed in the present paper, one first has to study the equation for the complete effective action of at least the standard model. This will require much effort, and results certainly cannot be obtained quickly.
But, in view of the possible outcome, it might perhaps be worth doing.\\ \newpage \section{\label{FS}Discussion and Conclusions} \setcounter{equation}{0} Before turning to some matters of principle, let us further discuss the approximative approach to the functional integral equation for the complete effective action of QED. We have seen that the general approximative approach chosen (cf.\ section 4.1) allows one to find certain nonperturbative information about the quadratic kernels of the QED action. The fact that the information found indicates that only a unique solution to the functional integral equation exists (at least within the approximative approach studied) deserves particular emphasis. Of course, this point has to be studied further using more advanced approximations in order to see whether only one admissible value for the QED coupling constant $\alpha$ exists (if any at all -- but nature appears to allow for some). Furthermore, within the approximative approach divergencies characteristic of standard QED do not show up (at least, as far as the present study goes). It should perhaps also be said that the nonlocal character of the fermion action makes it possible to employ nonperturbative techniques which are not readily applicable in standard QED. For example, as we have seen, this way the well-known Bloch-Nordsieck contribution can be obtained easily, and it contains important IR (long distance) information crucial to the further calculation.\\ \parindent1.5em However, so far the concept proposed in the present article has not yet successfully passed the crucial test attempted in subsection 4.3.3, namely the approximative calculation of the QED coupling constant $\alpha$. As we have seen, the approach used is indeed suited for explicit calculation, but inasmuch as we did not find any zero of $C_{1a}(\alpha)$ within the simple approximation applied, the question remains open at present. What might a better approximation look like?
First, it should be noted that by imposing eq.\ (\ref{DR16}) independently of the value of $\alpha$, a strong coupling condition has been enforced which destroys the hope that higher loop contributions can really be neglected in the integral equation for the quadratic kernel of the fermion action (\ref{DK1}). But taking into account higher loop contributions would add complications to the formalism not easily resolved in analytical calculations. One way out of this dilemma might be to relax, for approximative purposes, the fixed point condition for the quadratic kernel of the fermion action to $a_{II} = C\ a_I$, $b_{II} = C\ b_I$, where $C$ is some arbitrary real constant, instead of immediately enforcing $C = 1$. This requirement of structural similarity could perhaps be sufficient to keep the conceptual content alive and would at the same time allow arguments relying on the eventual smallness of $\alpha$ to count genuinely (not only seemingly). The parameter $\beta$ would then also be unconstrained as long as the fixed point condition $d_{II} = d_I$ is not enforced. To finally fix both $\alpha$ and the parameter $\beta$, the conditions (\ref{DP2}), (\ref{DP3}) can be applied simultaneously. Whether this recipe yields a more effective approximation remains to be seen in future investigations. It might perhaps also be necessary to include some higher loop contributions to $C_{1a}$ and $C_{2a}$. Certainly, the solution of the integral equation for the kernel of the fermion action (\ref{DK1}) has to be studied further. Maybe it will also be advisable to improve the Ansatz (\ref{DG5}). These are a few of the changes in the approximation strategy which can be implemented most easily along the lines of chapter 4. Perhaps still more severe changes are required.
Finally, it should be said that the calculation discussed in chapter 4 should merely be understood as a first (naive) attempt to extract information out of the functional integral equation for the complete effective action by means of a simple approximation which, however, admits a mostly analytical investigation. It is clear, of course, that the present understanding is poor and much remains to be learned.\\ Throughout the paper we have preliminarily adopted the standard point of view that the space-time structure is prescribed to the functional integral equation for the complete effective action. In a certain sense, it is considered as 'classical' and as prior to quantum effects (at least for flat space-time). However, the criticism spelled out in section 2.3 with respect to the artificial distinction between classical action and effective action also applies to this view of the space-time structure. Therefore, the structure of space-time should more adequately be understood as a characteristic of the quantum field theoretic vacuum. Basically, this is the point of view taken within the concept of induced gravity, although this aspect is hardly discussed in the literature. But the idea applies in flat space-time as well. Recent investigations of the propagation of light in a Casimir vacuum indicate that this concept is already implicitly entailed in standard QED \cite{scha}-\cite{bart2}. As discussed in ref.\ \cite{bart2}, although the lack of appropriate nonperturbative calculational tools so far leaves the question unsettled in the strict sense, the only conceptually viable one (as far as present knowledge is concerned) of the alternatives allowed by the Kramers-Kronig relation for the refractive index $n(\omega)$ of the Casimir vacuum ($\omega$ is the frequency of the test wave) is that $n(\infty) < 1$ holds for the propagation of light perpendicular to two parallel mirrors in the slab between them (This entails a signal velocity of light larger than in the free space vacuum.).
While this result is often viewed as something like a paradox in standard QED, it is easily understandable by means of the concept put forward in the present article (where it may count as a special application). If the map $f$ is modified in such a way that it is no longer fully Lorentz invariant \footnote{For an appropriate functional integral formulation of standard QED in the presence of two parallel mirrors see \cite{bord}.}, then the solution of the functional integral equation for the complete effective action is also no longer fully Lorentz invariant, and the dispersion analysis in accordance with the effective Maxwell action may well reveal a change in the signal velocity of light. The point is that only one situation can be considered as the one where normalization is performed (and we typically choose free Minkowski space as the reference situation and the signal velocity of light there as the reference standard, although of course any less symmetrical set-up could also be used). But, in view of the discussion performed in section 2.3, it makes no sense to consider any normalized value of a certain quantity (mass, charge, velocity of light, e.g.) as classical, because this is a concept not accessible to experiment. We can only denote certain values defined by a certain measurement scenario under defined circumstances as reference values. Any changes of these values measured under different circumstances are certainly of quantum nature, but these values could equally well have served as initial reference values.
Consequently, it appears most sensible to consider these quantities from the very beginning as characteristics of the quantum field theoretic vacuum and their changes as parameterizing changes of it with respect to some reference situation.\\ Summarizing the concept proposed in the present article, let us point out that it proposes a view on quantum field theory which differs from the established one, but the established standard paradigm finds its natural explanation and place within this new approach. In particular, it incorporates and continues in modified shape certain ideas used in local renormalizable quantum field theory, such as the unobservability of bare quantities and the hypothesis that the vanishing of the beta function(s) (corresponding to a fixed point of the renormalization group) defines the physical coupling constant(s) of a model. The functional integral equation for the complete effective action proposed ensures (merely by definition) that any of its solutions is finite (otherwise, it is not a solution). This removes to a certain extent the concern about the divergences that standard quantum field theory is beset by, but the price to pay is the present uncertainty whether the functional integral equation proposed has, beyond free field theories, any other nontrivial solution (i.e., any nonlinear (interacting) field theory). The most natural place to find out whether the proposed concept is physically correct should be QED, because unlike some other model theories it is a theory for phenomena definitely present in nature. QED is certainly structurally more complex than scalar model field theories, e.g., but if something new can be learned for QED we may feel sure that our physical understanding has advanced. The approximative approach to the functional integral equation for the QED effective action presented here has proved its calculational accessibility.
Although the particular approximation studied is still quite simple, it has yielded certain nonperturbative information, which indicates that the present approach also has certain calculational advantages. However, only further investigation will show whether any obviously appropriate approximation can be found which yields, with reasonable calculational effort, the correct value of the fine structure constant. In a certain sense this should be viewed as a crucial test, because the present approach, if really physically correct and adequate, should in principle be able to pass it.\\ \newpage \subsubsection*{Acknowledgements} \parindent0.em The author thanks D.\ Robaschik, E.\ Wieczorek and M.\ Bordag for helpful discussions on the subject and on an earlier version of the paper. A large part of the present investigation was performed in 1992 at the INTSEM, University of Leipzig, and the author is grateful to A.\ Uhlmann who made this research possible. Finally, kind hospitality during a stay at the Naturwissenschaftlich-Theoretisches Zentrum (NTZ), University of Leipzig, where this preprint has been produced, is herewith acknowledged.\\ \newpage \setcounter{section}{1} \section*{Appendix A} \addcontentsline{toc}{section}{Appendix A} \label{GS} \renewcommand{\theequation}{\mbox{\Alph{section}.\arabic{equation}}} \setcounter{equation}{0} Consider the following formula \parindent0.em \begin{eqnarray} \label{ZA1} {\rm e}^{\displaystyle\ i \Delta\Gamma^G_I [A]}\ \ &=&\ C\ \int D\psi D\bar\psi\ \ {\rm e}^{\displaystyle\ i\Gamma_I^F [A,\psi,\bar\psi]}\ \ \ , \end{eqnarray} where $\Gamma_I^F [A,\psi,\bar\psi]$ is given by eq.\ (\ref{D3}). In the present Appendix we calculate the coefficients of the first two quadratic terms of the derivative expansion of $\Delta\Gamma^G_I [A]$, i.e., the coefficient of the mass term $A_\mu A^\mu$ and the coefficients of $(\partial_\mu A^\mu)^2$ and $\partial_\mu A_\nu \partial^\mu A^\nu$.
For this purpose we rewrite $\Gamma_I^F [A,\psi,\bar\psi]$ in the following symmetrized form. \begin{eqnarray} \label{ZA2} &&\hspace{-1.cm}\Gamma_I^F [A,\psi,\bar\psi]\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-1.cm}=\ {1\over2}\ \int d^4x\ d^4x^\prime\ \ \bar\psi (x) \ \ {\rm e}^{\displaystyle\ ie \int^{x^\prime}_x dy_\mu\ A^\mu(y)}\ \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.2cm}\cdot\ \left[ a_I\left( x-x^\prime\right) \ \left( i\stackrel{\rightarrow}{\not\hspace{-0.07cm}\partial_{x^\prime}} - e \not\hspace{-0.13cm}A(x^\prime) \right)\ -\ m\ b_I\left( x-x^\prime\right) \right]\psi (x^\prime)\ + \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.3cm} +\ {1\over2}\ \int d^4x\ d^4x^\prime\ \ \bar\psi (x) \ \left[ -\left( i \stackrel{\leftarrow}{\not\hspace{-0.07cm}\partial_x} + e \not\hspace{-0.13cm}A(x) \right)\ \ a_I\left( x-x^\prime\right)\ -\ m\ b_I\left( x-x^\prime\right) \right]\ \ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.8cm}\cdot \ {\rm e}^{\displaystyle\ ie \int^{x^\prime}_x dy_\mu\ A^\mu(y)}\ \psi (x^\prime) \end{eqnarray} We now expand the r.h.s.\ of eq.\ (\ref{ZA2}) in powers of $A_\mu$ up to $O(A^2)$ (i.e., $O(e^2)$) and insert the following expansions (the first of them obtained by using $y_\mu(\tau)\ =\ (x^\prime - x)_\mu\ \tau + x_\mu$, $\tau\ \in\ [0,1]$). \begin{eqnarray} \label{ZA3} &&\hspace{-1.5cm}\int^{x^\prime}_x dy_\mu\ A^\mu(y)\ =\ (x^\prime - x)^\mu\ \bigg\{\ A_\mu(y)\ +\nonumber\\ &&\ +\ \left. {1\over 24}\ (x^\prime - x)^\nu (x^\prime - x)^\lambda \ \partial_\nu \partial_\lambda\ A_\mu(y) \ +\ \ldots\ \right\}_{y={(x+x^\prime)\over 2}}\ \ \ ,\\ \vspace{0.3cm}\nonumber\\ \label{ZA4} &&\hspace{-1.5cm} A_\mu (x)\ +\ A_\mu (x^\prime)\ =\ \nonumber\\ &&=\ 2\ \left\{\ A_\mu (y)\ +\ {1\over 8}\ (x^\prime - x)^\nu(x^\prime - x)^\lambda\ \partial_\nu \partial_\lambda\ A_\mu (y) \ +\ \ldots\ \right\}_{y={(x+x^\prime)\over 2}}\ .
\end{eqnarray} For calculating the coefficients of $A_\mu A^\mu$, $(\partial_\mu A^\mu)^2$, and $\partial_\mu A_\nu \partial^\mu A^\nu$ in $\Delta \Gamma_I^G [A]$ it is sufficient to keep at most two derivatives acting on the gauge potentials in $\Gamma_I^F [A,\psi,\bar\psi]$. The expression obtained this way for $\Gamma_I^F$ (we will not give this rather long expression) now serves as the starting point for deriving Feynman rules and calculating the desired effective action terms. One should note that $\Gamma_I^F$ also contains terms quadratic in $A_\mu$, which means that, besides the standard photon polarization diagram, a tadpole contribution to the photon self-energy also has to be taken into account.\\ \parindent1.5em The explicit calculation of the terms we are aiming at is quite tedious and shall not be displayed here. We only comment on a few points of the calculation. Coordinate differences, as occurring in eqs.\ (\ref{ZA3}), (\ref{ZA4}), are translated into momentum space as derivatives with respect to a corresponding momentum variable acting on certain functions in momentum space. This of course involves partial integrations in momentum space for which, as usual, boundary contributions are assumed not to occur. The photon polarization function is a nonlocal distribution. Therefore, the local structures we are interested in have to be extracted from the formal expression derived by the Feynman rules. In order to properly define this procedure we apply a (radial) momentum-space UV cut-off at $\Lambda$ for the loop integration. The final result will be given within this gauge non-invariant cut-off regularization. Furthermore, a Wick rotation for the loop integration is performed and equivalences like (\ref{DF5}), (\ref{DF6}) are used.
Then, the final result reads \parindent0.em \begin{eqnarray} \label{ZA5} &&\hspace{-0.5cm}\Delta\Gamma^G_I [A]\ =\ const.\ +\ {e^2\over 16\pi^2} \ \int d^4x\ \bigg\{\ C_0\ m^2 A_\mu(x) A^\mu(x)\ + \nonumber\\ \vspace{0.3cm}\nonumber\\ &&+\ \Big[\ C_{1s}\ [g_{\mu\nu} g_{\alpha\beta}\ +\ g_{\mu\alpha} g_{\nu\beta}]\ +\ C_{1a}\ [g_{\mu\nu} g_{\alpha\beta}\ -\ g_{\mu\alpha} g_{\nu\beta}]\ \Big]\ A^\mu(x) \partial^\alpha \partial^\beta A^\nu(x)\ + \nonumber\\ \vspace{0.3cm}\nonumber\\ &&+\ \ldots \bigg\} \end{eqnarray} where ($h^\prime= d/ds\ h$) \begin{eqnarray} \label{ZA6} C_0\ \ &=&\ -\ s^2\ h^\prime\ \ \Bigg\vert_0^{\Lambda^2\over m^2}\\ \vspace{1.2cm}\nonumber\\ \label{ZA7} \hspace{-1.cm}C_{1s}\ &=&\ -\ {1\over 6}\ s^3\ h^{\prime\prime\prime}\ -\ {1\over 2}\ s^2\ h^{\prime\prime}\ +\nonumber\\ \vspace{0.2cm}\nonumber\\ &&\ +\ {1\over 2}\ \left(\ {\rm e}^{\displaystyle -h}\ \left[\ s^4\ \tilde a \tilde a^{\prime\prime}\ +\ 2\ s^3\ \tilde a \tilde a^\prime\ +\ s^3\ \tilde b \tilde b^{\prime\prime}\ +\ s^2\ \tilde b \tilde b^\prime\ \right]\right)^\prime\ -\nonumber\\ \vspace{0.2cm}\nonumber\\ &&\ -\ {\rm e}^{\displaystyle -h}\ \left[\ {1\over 3}\ s^4\ \tilde a \tilde a^{\prime\prime\prime}\ +\ 2\ s^3\ \tilde a \tilde a^{\prime\prime}\ +\ {1\over 3}\ s^3\ \tilde b \tilde b^{\prime\prime\prime}\ +\ 2\ s^2\ \tilde a \tilde a^\prime\ +\right.\nonumber\\ &&\hspace{1.8cm} +\left. 
{3\over 2}\ s^2\ \tilde b \tilde b^{\prime\prime}\ +\ s\ \tilde b \tilde b^\prime\ \right]\ \ \ \Bigg\vert_0^{\Lambda^2\over m^2}\\ \vspace{1.2cm}\nonumber\\ \label{ZA8} C_{1a}\ &=&\ {1\over 18}\ s^3\ h^{\prime\prime\prime}\ -\ {1\over 6}\ s^2\ h^{\prime\prime}\ -\ {2\over 3}\ s\ h^\prime\ +\ {2\over 3}\ h\ +\nonumber\\ \vspace{0.2cm}\nonumber\\ &&\ +\ {1\over 2}\ \left(\ {\rm e}^{\displaystyle -h}\ \left[\ -\ {1\over 3}\ s^4\ \tilde a \tilde a^{\prime\prime}\ +\ {2\over 3}\ s^3\ \tilde a \tilde a^\prime\ -\ {1\over 3}\ s^3\ \tilde b \tilde b^{\prime\prime}\ +\ s^2\ \tilde b \tilde b^\prime\ \right]\right)^\prime\ +\nonumber\\ \vspace{0.2cm}\nonumber\\ &&\ +\ {\rm e}^{\displaystyle -h}\ \left[\ {1\over 9}\ s^4\ \tilde a \tilde a^{\prime\prime\prime}\ +\ {4\over 3}\ s^3\ \tilde a \tilde a^{\prime\prime}\ +\ {1\over 9}\ s^3\ \tilde b \tilde b^{\prime\prime\prime}\ +\ 2\ s^2\ \tilde a \tilde a^\prime\ +\right.\nonumber\\ &&\hspace{1.8cm}+\left.\ {7\over 6}\ s^2\ \tilde b \tilde b^{\prime\prime}\ +\ 2\ s\ \tilde b \tilde b^\prime\ \right]\ \ \ \Bigg\vert_0^{\Lambda^2\over m^2}\ - \nonumber\\ &&\ -\ \int\limits_0^{\Lambda^2\over m^2} ds\ {1\over s \tilde a^2 + \tilde b^2}\ \left[\ {s\ \tilde a^2 \over s \tilde a^2 + \tilde b^2}\ \left[\ s\ \tilde a \tilde a^\prime + \tilde b \tilde b^\prime\ \right] \ +\right.\nonumber\\ \vspace{0.2cm}\nonumber\\ &&\hspace{1.5cm} +\ {2\over 3}\ s^3\ \tilde a \tilde a^{\prime\prime\prime}\ +\ 3\ s^2\ \tilde a \tilde a^{\prime\prime}\ +\ {2\over 3}\ s^2\ \tilde b \tilde b^{\prime\prime\prime}\ +\nonumber\\ &&\hspace{1.5cm}+\ 2\ s\ \tilde a \tilde a^\prime\ +\ 3\ s\ \tilde b \tilde b^{\prime\prime}\ -\ s\ (\tilde b^\prime)^2\ +\ 3\ \tilde b \tilde b^\prime\ \Bigg] \ \ \ ,\\ \vspace{0.3cm}\nonumber\\ h\ &=&\ h(s)\ =\ \ln\left[ s \tilde a^2 +\tilde b^2 \right] \hspace{0.5cm},\hspace{0.5cm}\tilde a\ = \tilde a (s) \hspace{0.5cm},\hspace{0.5cm}\tilde b\ = \tilde b (s)\ \ \ .\nonumber \end{eqnarray} For convenience, in the equations we have 
omitted the index $I$ for $\tilde a$ and $\tilde b$. The result given above is exact for any value of the cut-off $\Lambda$; so far, no term vanishing when the cut-off is removed has been neglected. A comparison of the mass term with eq.\ (\ref{DF7}) shows that both results, although obtained by different methods, agree as expected. Also the first line of eq.\ (\ref{ZA7}) can be re-identified in eq.\ (\ref{DF7}). For $\tilde a\ =\ \tilde b\ \equiv\ 1$ the standard QED result is reproduced (cf.\ \cite{ach}; \cite{jau}, eq.\ (9-64); for $\Lambda \longrightarrow \infty$ the coefficient $C(0)$ there is related to our expressions by the equation $C(0) = - e^2\ (5 C_{1s} + 3 C_{1a})/24\pi^2$).\\ \parindent1.5em Now, if conditions (\ref{DF9}), (\ref{DF10}) are fulfilled, the above result simplifies significantly. Then, the UV cut-off can be lifted without any problem ($\Lambda\longrightarrow\infty$), the coefficients $C_0$ and $C_{1s}$ connected with terms spoiling gauge invariance vanish, and the final, completely gauge invariant result reads \parindent0.em \begin{eqnarray} \label{ZA9} \hspace{-0.5cm}\Delta\Gamma^G_I [A]\ &=&\ const.\ +\nonumber\\ &&\ \ \ +\ \ C_{1a}\ \ {e^2\over 16\pi^2} \ \int d^4x \ A^\mu(x)\ [g_{\mu\nu} \Box\ -\ \partial_\mu \partial_\nu]\ A^\nu(x)\ + \ \ldots\ ,\ \end{eqnarray} with \begin{eqnarray} \label{ZA10} C_{1a}\ &=&\ {2\over 3}\ \ln\ \left[ {\tilde b (\infty)\over\tilde b(0)} \right]^2\ - \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \ -\ \int\limits_0^\infty ds\ {1\over s \tilde a^2 + \tilde b^2 }\ \left[\ {s\ \tilde a^2 \over s \tilde a^2 + \tilde b^2}\ \left[\ s\ \tilde a \tilde a^\prime + \tilde b \tilde b^\prime\ \right] \ +\right.\nonumber\\ \vspace{0.2cm}\nonumber\\ &&\hspace{1.5cm} +\ {2\over 3}\ s^3\ \tilde a \tilde a^{\prime\prime\prime}\ +\ 3\ s^2\ \tilde a \tilde a^{\prime\prime}\ +\ {2\over 3}\ s^2\ \tilde b \tilde b^{\prime\prime\prime}\ +\nonumber\\ &&\hspace{1.5cm}+\ 2\ s\ \tilde a \tilde a^\prime\ +\ 3\ s\ \tilde b \tilde
b^{\prime\prime}\ -\ s\ (\tilde b^\prime)^2\ +\ 3\ \tilde b \tilde b^\prime\ \Bigg] \ \ \ . \end{eqnarray} It is worth noting that the coefficient $C_{1a}$ is finite due to conditions (\ref{DF9}), (\ref{DF10}). Gauge invariance and UV finiteness are closely related here \footnote{Were it not for the first term ($\sim (s \tilde a^2 + \tilde b^2)^{-2}$) in the integral in eqs.\ (\ref{ZA8}), (\ref{ZA10}), also the weaker condition given in footnote \ref{foot1} on p.\ \pageref{foot1} then replacing (\ref{DF9}) would lead to gauge invariance and UV finiteness at the same time.} (For a further discussion see \cite{scharn2}.).\\ \newpage \setcounter{section}{2} \section*{Appendix B} \addcontentsline{toc}{section}{Appendix B} \label{HS} \setcounter{equation}{0} \parindent0.em In this Appendix we explicitly calculate the function \begin{eqnarray} \label{ZB1} g(x - x^\prime)\ =\ {\rm e}^{\displaystyle\ -{i\over 2}\int d^4y\ d^4y^\prime\ \bar{J}_\mu (x,x^\prime;y)\ D_I^{\mu\nu}(y - y^\prime)\ \bar{J}_\nu (x,x^\prime;y^\prime)} \end{eqnarray} for the Ansatz (\ref{DG5}) \begin{eqnarray} \label{ZB2} d_I(x)\ &=&\ \left[\ 1 + \beta\ {\Box\over m^2}\ \right]\ \delta^{(4)}(x) \ \ \ . \end{eqnarray} Eq.\ (\ref{ZB1}) can easily be rewritten as \footnote{Of course, this transformation is not specific to the Ansatz (\ref{ZB2}). To obtain eq.\ (\ref{ZB3}) a gauge fixing term $\Gamma_{gf}$ with $\tilde n_\mu = i p_\mu\ \tilde d_I(p)^{1/2}$ has been added to the gauge field action $\Gamma^G_I$.} \begin{eqnarray} \label{ZB3} g(x - x^\prime)\ &=&\ \ \ {\rm e}^{\displaystyle\ -\ i e^2\ (x - x^\prime)^2\ \int_0^1 d\tau\ (1-\tau)\ D_I((x - x^\prime)\ \tau)}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\ \ \cdot\ {\rm e}^{\displaystyle\ \ i e^2\ (1-\lambda)\ \left[\ D^\ast_I(x - x^\prime ) - D^\ast_I(0)\ \right]}\ \ \ \ , \end{eqnarray} where \footnote{The IR divergency can be regularized and drops then out for $g(x)$. 
The spurious pole generated by the model Ansatz $\tilde d_I(p)$ is understood as also supplied with the $i\epsilon$-prescription.} \begin{eqnarray} \label{ZB4} D^\ast_I(x)\ &=&\ \int {d^4p\over(2\pi)^4}\ {{\rm e}^{\ ipx}\over (p^2\ + \ i\epsilon)^2}\ {1\over \tilde d_I(p)}\ \ \ ,\ \\ \vspace{0.3cm}\nonumber\\ &&\hspace{3.cm}\tilde d_I(p)\ =\ 1\ -\ \beta\ {p^2\over m^2}\ \ \ ,\ \nonumber \end{eqnarray} and \begin{eqnarray} \label{ZB5} D_I(x)\ &=&\ \Box\ D^\ast_I(x)\hspace{2.cm}. \end{eqnarray} For simplicity, let us perform the calculation for $g$ in Euclidean space. Results can then be read off for Minkowski space, whenever needed, by rotating back the fourth coordinate. In Euclidean space $D^\ast_I$ and $D_I$ read \begin{eqnarray} \label{ZB6} D^\ast_I(x_E)\ &=&\ -\ {i\over 16\pi^2}\ \ln\left(\mu^2 x_E^2\right)\ -\ {\beta\over m^2}\ D_I(x_E) \end{eqnarray} ($\mu^2$ is the temporary IR cut-off applied), and \begin{eqnarray} \label{ZB7} D_I(x_E)\ &=&\ {i\over 4\pi^2\ x_E^2}\ -\ {i\ m\over 4\pi^2\ \sqrt{\beta}\ \vert x_E \vert}\ K_1\left( {m \vert x_E \vert\over\sqrt{\beta}}\right)\ \ \ . \end{eqnarray} For the further calculation the following integral turns out to be useful (${\bf L}_\nu$ are Struve functions) \cite{g}, vol.\ 2.
\begin{eqnarray} \label{ZB8} &&\hspace{-1.2cm}\int {d\tau\over \tau}\ K_1(\tau)\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.3cm}=\ -\ K_1(\tau)\ -\ \tau\ K_0(\tau)\ -\ {\pi\over 2}\ \tau\ \left[\ K_1(\tau)\ {\bf L}_0(\tau)\ +\ K_0(\tau)\ {\bf L}_1(\tau)\ \right] \end{eqnarray} Consequently, we find ($\gamma$ is the Euler constant) \begin{eqnarray} \label{ZB9} &&\hspace{-1.2cm}-\ x_E^2\ \int_0^1 d\tau\ (1-\tau)\ D_I(x_E\ \tau)\ =\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.3cm}=\ {i\ m\over 4\pi^2}\ \left\{\ 1\ +\ \gamma\ +\ {1\over 2}\ \ln\left({m^2 x_E^2\over 4\beta}\right)\ + \right.\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.1cm}\ \ +\ \left( 1\ -\ {m^2 x_E^2\over \beta}\right)\ K_0\left( {m \vert x_E \vert\over\sqrt{\beta}}\right)\ -\ {m \vert x_E \vert\over\sqrt{\beta}}\ K_1\left( {m \vert x_E \vert\over\sqrt{\beta}}\right)\ -\ {\pi\over 2}\ {m^2 x_E^2\over\beta}\ \cdot\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.4cm}\ \cdot\ \left.\left[\ K_1\left( {m \vert x_E \vert\over\sqrt{\beta}}\right)\ {\bf L}_0\left( {m \vert x_E \vert\over\sqrt{\beta}}\right)\ + \ K_0\left( {m \vert x_E \vert\over\sqrt{\beta}}\right)\ {\bf L}_1\left( {m \vert x_E \vert\over\sqrt{\beta}}\right)\ \right]\right\}\ \ .\ \end{eqnarray} The final result for $g(x_E)$ is then ($\ t = m \vert x_E \vert /\sqrt{\beta}\ $) \begin{eqnarray} \label{ZB10} &&\hspace{-1.3cm}g(x_E)\ =\ \nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{-0.9cm}=\ \exp\left\{\ {\alpha\over\pi}\ \left[\ 1\ +\ \gamma\ +\ {1\over 2}\ \ln\ {t^2\over 4}\ +\ (1-t^2)\ K_0(t)\ -\ t\ K_1(t)\ - \right.\right.\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{2.5cm}\left. -\ {\pi\over 2}\ t^2\ \left[\ K_1(t)\ {\bf L}_0(t)\ +\ K_0(t)\ {\bf L}_1(t)\ \right]\ \right]\ \ +\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{0.4cm}\left. 
+\ {\alpha\over\pi}\ (1-\lambda)\ \left[\ {1\over t^2}\ -\ {1\over t}\ K_1(t)\ +\ {1\over 4}\ (2\gamma\ -\ 1)\ +\ {1\over 4}\ \ln\ {t^2\over 4}\ \right]\ \right\} \ .\ \end{eqnarray} In the long-distance limit ($t \gg 1$) eq.\ (\ref{ZB10}) reads \begin{eqnarray} \label{ZB11} g(x_E)\ &=&\ \exp\left\{\ {\alpha\over 2\pi}\ \left[\ -\ \pi\ t\ +\ {(3-\lambda)\over 2}\ \ln\ {t^2\over 4}\ \right.\right. +\nonumber\\ \vspace{0.3cm}\nonumber\\ &&\hspace{3.cm}\left.\left. +\ {(3+\lambda)\over 2}\ +\ (3-\lambda)\ \gamma\ +\ \ldots\ \right]\right\}\ \ \ .\ \end{eqnarray} So, in the long-distance region we are mainly interested in, the function $g(x_E)$ can be written as follows. \begin{eqnarray} \label{ZB12} g(x_E)&=&C_g\ \left(m^2 x_E^2\right)^{\displaystyle\ \alpha (3-\lambda)/4\pi}\ \ {\rm e}^{\displaystyle\ -\ {\alpha\over 2 \sqrt{\beta}} \ m \vert x_E \vert}\ \ \Big[\ 1\ + \ \ldots\ \Big]\\ \vspace{0.3cm}\nonumber\\ C_g&=&\left( 4\beta\right)^{\displaystyle\ - \alpha (3-\lambda)/4\pi}\ \ \exp\left\{\ {\alpha\over 4\pi}\left[\ (3+\lambda)\ +\ 2\ (3-\lambda)\ \gamma\ \right]\ \right\} \end{eqnarray} One easily recognizes in eq.\ (\ref{ZB12}) the well-known exponent of the (power-like) Bloch-Nordsieck contribution (cf.\ \cite{h},\cite{i} and references therein).\\ \newpage
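As a numerical cross-check of the step from eq.\ (\ref{ZB10}) to eq.\ (\ref{ZB11}), both exponents can be evaluated directly. The following sketch (assuming SciPy's implementations of the modified Bessel and Struve functions; all names are illustrative) does so:

```python
import numpy as np
from scipy.special import kv, modstruve  # modified Bessel K_nu and modified Struve L_nu

gamma_E = np.euler_gamma  # Euler constant

def log_g_exact(t, alpha, lam):
    """Exponent of g(x_E) in eq. (ZB10), with t = m|x_E|/sqrt(beta)."""
    K0, K1 = kv(0, t), kv(1, t)
    L0, L1 = modstruve(0, t), modstruve(1, t)
    A = (1 + gamma_E + 0.5 * np.log(t**2 / 4)
         + (1 - t**2) * K0 - t * K1
         - 0.5 * np.pi * t**2 * (K1 * L0 + K0 * L1))
    B = (1 / t**2 - K1 / t
         + 0.25 * (2 * gamma_E - 1) + 0.25 * np.log(t**2 / 4))
    return alpha / np.pi * (A + (1 - lam) * B)

def log_g_asymptotic(t, alpha, lam):
    """Exponent of the long-distance form, eq. (ZB11)."""
    return alpha / (2 * np.pi) * (-np.pi * t
                                  + 0.5 * (3 - lam) * np.log(t**2 / 4)
                                  + 0.5 * (3 + lam) + (3 - lam) * gamma_E)
```

For $t \gtrsim 15$ the two exponents agree to roughly $10^{-5}$, with the residual dominated by the $1/t^2$ term dropped in the long-distance form.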
\section*{Results and Discussion} \subsection*{Modelling flows on hypergraphs} We model flows on hypergraphs with random walks, using hypergraphs with nodes $V$, hyperedges $E$ with weights $\omega$, and hyperedge-dependent node weights $\gamma$. Each hyperedge $e$ has a weight $\omega(e)$. Each node $u$ with incident hyperedges $E(u) = \{ e \in E : u \in e \}$ has a weight $\gamma_e(u)$ for each incident hyperedge $e$. To simplify the notation when normalising weights into probabilities, we denote node $u$'s total incident hyperedge weight $d(u) = \sum_{e \in E(u)}\omega(e)$ and hyperedge $e$'s total node weight $\delta(e) = \sum_{u \in e}\gamma_e(u)$~\cite{chitra2019random}. With these weights, a lazy random walker moves from node $u$ at time $t$ to node $v$ at time $t+1$ in three steps~\cite{chitra2019random}: \begin{enumerate} \item Picking hyperedge $e$ among node $u$'s hyperedges $E(u)$ with probability $\frac{\omega(e)}{d(u)}$. \item Picking one of the hyperedge $e$'s nodes $v$ with probability $\frac{\gamma_e(v)}{\delta(e)}$. \item Moving to node $v$. \end{enumerate} Variations include non-lazy walks, which never visit the same node twice in a row thanks to a modified second step \begin{enumerate}\setcounter{enumi}{1} \item[2b.] Picking one of the hyperedge $e$'s nodes $v \ne u$ with probability $\frac{\gamma_e(v)}{\delta(e)-\gamma_e(u)}$, \end{enumerate} and teleporting walks, which jump to a random node at some rate to ensure that all nodes can be reached from any node in a finite number of moves, making the walks ergodic. In hyperedge-similarity walks, we pick the next hyperedge based on its similarity to the previously picked hyperedge; such walks are useful for modelling flows that tend to stay among similar hyperedges, such as among research papers with similar author lists and likely similar topics. These walks require memory and correspond to a higher-order Markov chain model because they depend on the previously picked hyperedge.
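The three-step walk and its non-lazy variant can be sketched in a few lines of code; the toy hypergraph, its weights, and all function names below are illustrative, not part of the model specification:

```python
import random
from collections import defaultdict

# Toy hypergraph (illustrative): hyperedge weights omega(e) and
# hyperedge-dependent node weights gamma_e(u).
omega = {"e1": 2.0, "e2": 1.0}
gamma = {"e1": {"a": 2, "b": 1, "c": 1}, "e2": {"c": 2, "d": 1}}

E_of = defaultdict(list)  # E(u): hyperedges incident to node u
for e, members in gamma.items():
    for u in members:
        E_of[u].append(e)

def step(u, lazy=True):
    """One move of the lazy (or non-lazy) hypergraph random walk."""
    # Step 1: pick hyperedge e with probability omega(e)/d(u).
    edges = E_of[u]
    e = random.choices(edges, [omega[f] for f in edges])[0]
    # Step 2 (or 2b): pick node v with probability gamma_e(v)/delta(e),
    # excluding u itself for non-lazy walks.
    targets = {v: w for v, w in gamma[e].items() if lazy or v != u}
    # Step 3: move to node v.
    return random.choices(list(targets), list(targets.values()))[0]
```

Iterating `step` yields node-visit counts whose normalised frequencies approximate the walk's stationary distribution; `random.choices` normalises the weights internally, so no explicit division by $d(u)$ or $\delta(e)$ is needed.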
The bipartite, unipartite, and multilayer network representations have different advantages and limitations (Fig.~\ref{fig:hypergraphRepresentations}). A weighted, undirected network suffices for memoryless lazy random walks without hyperedge-dependent node weights; hyperedge-dependent node weights require directed networks; and hyperedge-similarity walks require multilayer networks. Bipartite networks offer the most direct representation of the three-step random-walk process above. We represent the hyperedges with hyperedge nodes, and the three steps become a two-step walk between the nodes at the bottom and the hyperedge nodes at the top in Fig.~\ref{fig:hypergraphRepresentations}b. For simplicity, we refer to them as nodes and hyperedge nodes. First a step from a node~$u$ to a hyperedge~node~$e$, \begin{align} P_{ue} = \frac{\omega(e)}{d(u)}, \end{align} and then a step from the hyperedge node to a node~$v$, \begin{align} P_{ev} = \frac{\gamma_e(v)}{\delta(e)}. \end{align} By starting the random walk on the nodes and taking two steps at a time, corresponding to a two-step Markov process~\cite{kheirkhahzadeh2016efficient}, hyperedge nodes become intermediate stops that carry zero flow once the random walk is back on the nodes after two steps. The stationary distribution of the random walk is concentrated on the nodes. For non-lazy walks represented with bipartite networks, we use so-called state nodes~\cite{edler2017mapping} in the hyperedge nodes. One state node for each incoming link, with out-links to all nodes in the hyperedge except the incoming link's source, ensures that the walks do not backtrack (Fig.~\ref{fig:bipartiteNonBacktracking}). \begin{figure}[htp] \centering \includegraphics[width=\columnwidth]{figs/bipartite-non-backtracking.pdf} \caption{Bipartite network with state nodes for non-lazy random walks.
To prevent random walks on bipartite networks from visiting the same node at the bottom twice in a row by backtracking from the hyperedge node at the top, we use state nodes in the hyperedge nodes. Each hyperedge node requires one state node for each node in the hyperedge. Each state node has one incoming link from its source node and outgoing links to all other nodes in the hyperedge. Colours indicate the optimised partition in Fig.~\ref{fig:schematicalluvial}(b).} \label{fig:bipartiteNonBacktracking} \end{figure} To represent the random walk on a unipartite network, we project the three-step random-walk process down to a one-step process between the nodes and describe it with the transition rate matrix \begin{align} P_{uv} = \hspace{-1em}\sum_{e \in E(u,v)}\hspace{-1em} P_{ue}P_{ev} = \hspace{-1em}\sum_{e \in E(u,v)} \frac{\omega(e)}{d(u)}\frac{\gamma_e(v)}{\delta(e)}, \end{align} where $E(u,v) = \{ e \in E : u \in e, v \in e \}$ is the set of hyperedges incident to both nodes $u$ and $v$. Each hyperedge forms a fully connected group of nodes (Fig.~\ref{fig:hypergraphRepresentations}c). Unipartite networks for non-lazy walks have no self-links. Compared with the bipartite representation, the unipartite representation with fully connected groups of nodes requires more links. To represent the random walk on a multilayer network, we project the three-step random-walk process down to a one-step process on state nodes in separate layers $\layerE$ for each hyperedge $e$. A state node $u^{\layerE}$ represents $u$ in each layer $\layerE \in {E}(u)$ that contains the node. All state nodes in the same layer form a fully connected set (Fig.~\ref{fig:hypergraphRepresentations}d). The transition rate between state node $u^{\layerE}$ in layer $\layerE$ and state node $v^{\layerF}$ in layer $\layerF$ is \begin{align} P_{uv}^{\layerE\layerF} = \frac{\omega(\layerF)}{d(u)}\frac{\gamma_{\layerF}(v)}{\delta(\layerF)}\text{ for } \layerF \in E(u,v).
\end{align} Node $u$'s state node visit rates in different layers sum to $u$'s visit rate in the unipartite and bipartite representations. With one state node per hyperedge layer that contains the node, the multilayer representation requires the most nodes and links to describe the walk. But this cost comes with benefits: the multilayer representation can describe higher-order Markov chains, which can capture more regularities in the data. For example, a useful variant of the basic hypergraph random walk is to pick a hyperedge not only proportional to its weight but also proportional to its similarity to the hyperedge picked in the previous step. To include hyperedge-dependent node weight information in the similarity measure, we use one minus the Jensen-Shannon divergence (JSD) between the transition rate vectors $\mathbf{P}_{\layerE v}$ and $\mathbf{P}_{\layerF v}$ to nodes at layers $\layerE$ and $\layerF$ as the hyperedge coupling strength, \begin{align} D_{u}^{\layerE\layerF} &= \omega(\layerF)\left[1 - JSD(\layerE,\layerF)\right] \nonumber\\ &=\omega(\layerF)\biggl[1 - H\left(\frac{1}{2}\mathbf{P}_{\layerE v}+\frac{1}{2}\mathbf{P}_{\layerF v}\right)\nonumber \\ &\phantom{=\omega(\layerF)\biggl[}+\frac{1}{2}H\left(\mathbf{P}_{\layerE v}\right) +\frac{1}{2}H\left(\mathbf{P}_{\layerF v}\right)\biggr] \end{align} for $\layerF \in E(u,v)$. With node~$u$'s total incident hyperedge weight in layer~$\layerE$ \begin{align} S_{u}^{\layerE} = \sum_{\layerF \in E(u)} D_{u}^{\layerE\layerF}, \end{align} the hyperedge-similarity walk has the transition rates \begin{align} P_{uv}^{\layerE\layerF} = \frac{D_{u}^{\layerE\layerF}}{S_{u}^{\layerE}}\frac{\gamma_{\layerF}(v)}{\delta(\layerF)}\text{ for } \layerF \in E(u,v). \end{align} Because the transition rates at a node depend on the current layer, the random walks generate non-Markovian dynamics that a unipartite or bipartite network representation cannot capture. 
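The hyperedge coupling strength $D_{u}^{\layerE\layerF}$ can be illustrated with a minimal sketch (made-up transition-rate vectors; we assume base-2 logarithms, for which $0 \le JSD \le 1$):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits; zero entries contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def jsd(p, q):
    """Jensen-Shannon divergence between two distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return H(0.5 * (p + q)) - 0.5 * H(p) - 0.5 * H(q)

# Illustrative transition-rate vectors from two hyperedge layers to a
# common node ordering; zeros mark nodes absent from a hyperedge.
P_e = np.array([0.5, 0.25, 0.25, 0.0])
P_f = np.array([0.0, 0.0, 0.5, 0.5])
omega_f = 1.0
D_ef = omega_f * (1.0 - jsd(P_e, P_f))  # coupling strength, one minus JSD times the hyperedge weight
```

Identical rate vectors give the maximum coupling $\omega(\layerF)$, while hyperedges with disjoint node sets give zero coupling.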
To ensure ergodic node-visit rates, we derived an unrecorded teleportation scheme that leaves the node-visit rates unchanged when teleportation is superfluous (hypergraphs with hyperedge-independent node weights), is robust to changes in the teleportation rate when teleportation is needed~\cite{lambiotte2012ranking}, and is independent of the representation (see Methods). \bigskip\subsection*{Mapping flows on hypergraphs} To identify flow-based communities or modules in hypergraphs, we seek to compress a modular description of random walks on the network representations, guided by their links. We cast the problem of finding flow-based communities in hypergraphs as a minimum-description-length problem with the map equation framework~\cite{rosvall2008maps}. With this compression-based framework, we can compare how much the different representations compress modular flows. When used to detect communities, the representation matters because bipartite, unipartite, and multilayer networks provide the community-detection algorithm Infomap with different degrees of freedom~\cite{edler2017mapping}. Infomap assigns only nodes to communities in a unipartite network, but also assigns hyperedge nodes in a bipartite network. The multilayer network, with a state node for each hyperedge a node belongs to, implies even more node assignments and possibly overlapping communities. When mapping flows modelled by lazy and non-lazy random walks on the schematic network in Fig.~\ref{fig:hypergraphRepresentations}, the optimal partitions of the bipartite networks have two communities, whereas the unipartite and multilayer networks have three communities (Table~\ref{table:schematic} and Fig.~\ref{fig:schematicalluvial}).
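For reference, the two-level map equation that scores a candidate partition can be sketched as follows (a simplified illustration with illustrative flow values; Infomap's actual objective additionally handles hierarchical partitions and teleportation):

```python
import numpy as np

def H(p):
    """Entropy in bits of the rates p after normalisation."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))

def map_equation(modules, visit, exit_flow):
    """Two-level map equation codelength in bits per step.

    modules: list of lists of node ids; visit: node -> visit rate;
    exit_flow: per-module rate of exiting that module (illustrative inputs).
    """
    q = sum(exit_flow)  # total inter-module flow
    index_term = q * H(exit_flow) if q > 0 else 0.0
    module_term = sum(
        (sum(visit[u] for u in m) + exit_flow[i])
        * H([visit[u] for u in m] + [exit_flow[i]])
        for i, m in enumerate(modules))
    return index_term + module_term
```

Putting all nodes in one module with zero exit flow reduces the codelength to the entropy of the node-visit rates; partitions pay for inter-module transitions through the index codebook and the per-module exit rates.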
The bipartite network favours fewer modules -- using the optimal three-module partition of the unipartite network on the bipartite network gives codelength 3.29 bits instead of 2.90 bits for two modules -- because the random walker transitions more frequently between modules when they include hyperedges: Even if a hyperedge node carries no flow at the end of each two-step walk from node through hyperedge node to node, assigning it to a module costs extra bits when it has nodes in multiple modules. For example, if nodes $a$, $b$, and $c$ in the bipartite network in Fig.~\ref{fig:hypergraphRepresentations}(b) belonged to a third green module, as in the optimal unipartite solution, and the random walker at node $c$ returned to the hyperedge it came from before revisiting node $c$, it would first need to exit the green module and enter the orange module, then exit the orange module and re-enter the green module. The corresponding walk on the unipartite network stays within the green module. As a result, the unipartite network representation favours more, smaller modules than the bipartite network representation for lazy and non-lazy walks (Table~\ref{table:schematic}). \begin{table}[tbp] \caption{Optimal flow-based communities of the schematic hypergraph in Fig.~\ref{fig:hypergraphRepresentations} represented with different networks. The number of nodes includes state nodes for the multilayer representations and the bipartite non-lazy representation. We measure the overlap as the perplexity of the optimal solutions (see Methods).
\label{table:schematic}} \centering \setlength{\tabcolsep}{4pt} \begin{small} \begin{tabular}{@{}lccccc@{}} \specialrule{0.1em}{0em}{0em}\noalign{\smallskip} Representation & Nodes & Links & Modules & Codelength & Overlap \\ { } & & & & (bits) &\\ \noalign{\smallskip} \specialrule{0.05em}{0em}{0em}\noalign{\smallskip} \footnotesize{\emph{Lazy}} &&&&& \\ \quad Bipartite & 15 & 32 & 2 & 2.90 & -- \\ \quad Unipartite & 10 & 40 & 3 & 2.35 & -- \\ \quad Multilayer & 16 & 98 & 3 & 2.35 & 1.00 \\ \quad Multilayer h-s\footnotemark[1] & 16 & 98 & 4 & 2.28 & 1.09\\ \noalign{\smallskip} \footnotesize{\emph{Non-lazy}} &&&&&\\ \quad Bipartite & 26 & 52 & 2 & 3.00 & -- \\ \quad Unipartite & 10 & 30 & 3 & 2.63 & -- \\ \quad Multilayer & 16 & 68 & 3 & 2.62 & 1.10 \\ \quad Multilayer h-s\footnotemark[1] & 16 & 68 & 4 & 2.32 & 1.29 \\ \mybottomrule \end{tabular} \footnotetext[1]{hyperedge-similarity} \end{small} \end{table} Multilayer networks enable further compression with overlapping modules. But for this small network, only non-lazy walks give overlapping modules with 0.01 bits compression gain (Table~\ref{table:schematic}). With walks that preferentially move to similar hyperedges, the optimal partitions of the multilayer hyperedge-similarity network representations for lazy and non-lazy random walks both have more overlap in four modules (Table~\ref{table:schematic} and Fig.~\ref{fig:schematicalluvial}). The hyperedge-similarity walks favour these overlapping modules because they stay longer within them than the regular walks. \begin{figure}[htb] \centering \includegraphics[width=0.95\columnwidth]{figs/alluvial-small.pdf} \caption{Alluvial diagrams of optimal partitions for the schematic hypergraph in Fig.~\ref{fig:hypergraphRepresentations}. (a) Optimal partitions for lazy walks represented with the networks in Fig.~\ref{fig:hypergraphRepresentations}(b-d). (b) Optimal partitions for non-lazy walks. 
\label{fig:schematicalluvial}} \end{figure} For a given random-walk model, the representations give equivalent node-visit rates but alter the link flows, and with different link flows, the optimal partition can change. The bipartite network representation favours partitions with fewer modules than the unipartite network representation because assigning hyperedge nodes to modules implies encoding more transitions between modules. Multilayer representations, especially with walks that spend more time among similar hyperedges, favour more overlapping modules. The random-walk model determines how much the multilayer network modules overlap. Non-lazy and hyperedge-similarity walks favour overlap because they lead to longer persistence times among nodes in possibly overlapping groups. \bigskip\subsection*{Experiments} To illustrate how the network representation affects detected communities in real hypergraphs, we generated a collaboration hypergraph from the 734 references in \emph{Networks beyond pairwise interactions: Structure and dynamics} by F.~Battiston et~al.\cite{battiston2020networks} We modelled the referenced articles as hyperedges and their authors as nodes. Authors with multiple articles form connections between the hyperedges. We analysed the largest connected component with $|V| = 361$ author nodes in $|E| = 220$ hyperedges. The median number of authors in a hyperedge is 3, and the authors have contributed to 2.2 articles on average, though most have only contributed to one. We assigned the relative importance of references based on their number of citations $c$ in December 2020. Some references had no citations and some were highly cited. One such example is \emph{Diffusion of innovations} by Everett M.~Rogers, with more than $120,000$ citations. 
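The citation-based hyperedge weights and author-contribution node weights used for this hypergraph can be sketched as follows (a minimal illustration; the function names are ours, the formulas are those given in the text):

```python
import math

def hyperedge_weight(citations):
    """omega(e) = ln(c + 1) + 1: the inner +1 handles uncited
    references, the outer +1 keeps every weight at least 1."""
    return math.log(citations + 1) + 1

def author_weight(is_first_or_last, alphabetical=False):
    """gamma_e(v): first and last authors count double, except when
    the author list is alphabetically sorted."""
    if alphabetical:
        return 1
    return 2 if is_first_or_last else 1
```

For example, an uncited reference gets weight 1, while a reference with over $120,000$ citations gets about 12.7, damping a five-orders-of-magnitude citation spread to roughly one order of magnitude.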
To avoid disproportionately large or small hyperedge weights~$\omega(e)$, we weighted the edges by the logarithm of the number of citations and added unit constants to avoid the zero-citation problem, \begin{equation} \label{eq:citations} \omega(e) = \ln\left( c + 1 \right) + 1. \end{equation} We modelled the authors' different contributions to articles by assigning higher weights to the first and last author~\cite{chitra2019random}. We used the edge-dependent node weights \begin{equation} \label{eq:contributions} \gamma_e(v) = \begin{cases} 2 \quad \text{if node $v$ is first or last author,} \\ 1 \quad \text{otherwise.} \end{cases} \end{equation} We assumed equal contribution for alphabetically sorted authors, and assigned all of them weight $\gamma(v) = 1$. This model ranks a \mbox{co-corresponding} author's contributions lower than those of the corresponding authors. To study how hypergraph representations and random-walk models affect the community structure, we generated bipartite, unipartite, and multilayer representations for lazy and non-lazy random walks on the collaboration network. We identified nested hierarchical partitions in each network with Infomap, using 100 independent searches for each network. Infomap's running time depends on the number of nodes, links, and solution levels: The bipartite and unipartite representations finished 3--7 times faster than the multilayer representations. The non-lazy bipartite representation with many state nodes ran almost as long as the multilayer representations. \begin{table}[tbp] \caption{Optimised flow-based multilevel communities of the collaboration hypergraph represented with different networks. The number of nodes includes state nodes for the multilayer representations and the bipartite non-lazy representation. Shortest codelength of 100 trials with the variance in parentheses. We measure the overlap as the perplexity of the optimised solutions (see Methods). 
\label{table:citations}} \centering \setlength{\tabcolsep}{1.8pt} \begin{small} \begin{threeparttable} \begin{tabular}{@{}lrrccccl@{}} \toprule\noalign{\smallskip} Representation & Nodes & Links & \multicolumn{4}{c}{Modules} & \multicolumn{1}{c}{Codelength} \\ { } & & & Top & Leaf & Levels & Overlap & \multicolumn{1}{c}{(bits)} \\ \noalign{\smallskip} \midrule\noalign{\smallskip} \footnotesize{\emph{Lazy}} &&&&& \\ \quad Bipartite & 581 & 1,560 & 4 & 23 & 3 & -- & 5.178(1) \\ \quad Unipartite & 361 & 2,607 & 9 & 69 & 4 & -- & 3.82557(2) \\ \quad Multilayer & 780 & 17,193 & 9 & 76 & 4 & 1.003 & 3.82730(2) \\ \quad Multilayer h-s\tnote{a} & 780 & 17,193 & 8 & 90 & 4 & 1.127 & 3.54939(3) \\ \noalign{\smallskip} \footnotesize{\emph{Non-lazy}} &&&&&\\ \quad Bipartite & 1,141 & 3,548 & 5 & 25 & 3 & -- & 5.1733(2) \\ \quad Unipartite & 361 & 2,246 & 7 & 49 & 4 & -- & 4.25104(8) \\ \quad Multilayer & 780 & 12,843 & 7 & 54 & 4 & 1.098 & 4.16349(8) \\ \quad Multilayer h-s\tnote{a} & 780 & 12,843 & 9 & 66 & 4 & 1.181 & 3.70432(1) \\ \bottomrule\addlinespace[1ex] \end{tabular} \begin{tablenotes}\footnotesize \item[a] hyperedge-similarity \end{tablenotes} \end{threeparttable} \end{small} \end{table} The optimised partitions for the lazy and non-lazy representations behave as in the schematic example: The bipartite representations have the fewest leaf modules and the highest codelengths, and the multilayer hyperedge-similarity representations have the most leaf modules and the shortest codelengths, with the unipartite and the regular multilayer representations in between (Table~\ref{table:citations}). Except for the non-lazy bipartite representation with its many state nodes, the lazy representations have more leaf modules and shorter codelengths than their corresponding non-lazy representations because the lazy random walk is more confined than the non-lazy random walk. With more nodes than in the schematic example, the solutions have more depth. 
The bipartite solutions have three, and the unipartite and multilayer solutions have four hierarchical levels. The unipartite and multilayer solutions also have more top modules. With non-lazy dynamics, they split the largest top module, and with lazy dynamics, they split the two largest top modules. But the second-largest top module reunites in the hyperedge-similarity representation, with stronger connections between similar hyperedges (Fig.~\ref{fig:alluvialCitations} and Fig.~\ref{fig:collaboration-map} in Appendix~\ref{appendix:hypergraph}). The unipartite and multilayer solutions are also most similar at the leaf level (Fig.~\ref{fig:leafmoduleSimilarity} in Appendix~\ref{appendix:similarity}). \begin{figure}[!tbp] \centering \includegraphics[width=\columnwidth]{figs/alluvial-citations.pdf} \caption{Alluvial diagrams of optimised partitions for different representations of the collaboration hypergraph. Lazy walks in (a) and non-lazy walks in (b). Module names from the top-ranked author within each module.} \label{fig:alluvialCitations} \end{figure} In this larger example, the multilayer hyperedge-similarity representations give more overlap. The non-lazy representations result in higher average overlap because random walkers visiting a node must continue to other nodes, often in the same or a similar hyperedge layer. When random walkers from dissimilar hyperedges come together at a node, they tend to return to where they came from and favour overlapping modules. The non-lazy representations also result in higher maximum overlap, with the same authors topping all representations (Fig.~\ref{fig:multipleassignedauthors}). 
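The effective number of module assignments behind these overlap values (the perplexity defined in Methods) can be computed as, for example (a sketch with our own function name):

```python
import numpy as np

def effective_assignments(state_node_modules):
    """Perplexity 2^H(a) of a node's module-assignment vector a,
    where a_m is the fraction of the node's state nodes assigned
    to module m."""
    _, counts = np.unique(state_node_modules, return_counts=True)
    a = counts / counts.sum()
    entropy = -(a * np.log2(a)).sum()
    return 2.0 ** entropy
```

It returns 1 when all of a node's state nodes share one module and $k$ when they split evenly over $k$ modules.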
\begin{figure}[tbhp] \centering \includegraphics[width=\columnwidth]{figs/slopegraph.pdf} \caption{\label{fig:multipleassignedauthors}Authors in the collaboration hypergraph with the highest average effective number of assignments in the lazy and non-lazy multilayer representations (see Methods).} \end{figure} In line with the information-theoretic duality between finding regularities in data and compressing those data, representations that enable deeper solutions with more modules have shorter codelengths (Table \ref{table:citations}). The lazy multilayer representation is an exception. Its optimised codelength is bounded above by the lazy unipartite representation's codelength -- they have the same codelength for the same hard partition -- and overlapping modules can potentially reduce the codelength. Infomap's best codelength was instead 0.05 percent longer than for the lazy unipartite representation. Multilayer representations with their many state nodes and links aggravate the search problem, and Infomap could not find a better solution in 100 attempts. But the gain from overlapping modules is higher for the non-lazy multilayer representation, and Infomap finds a solution with a significantly shorter codelength. \bigskip\subsection*{A case study on fossil data} Palaeontologists classify major groups of marine animals archived in the fossil record into global-scale faunas that change over time\cite{sepkoski_factor_1981}. They have used different network representations to understand the macroevolutionary pattern of marine biodiversity\cite{rojas_multiscale_2019, muscente_quantifying_2018}. However, it is still unclear how such an organisation of marine animals into modules representing global faunas changes with the random-walk model and network representation. 
To illustrate how the network representation of the underlying paleontological data affects empirical estimates of this macroevolutionary pattern, we generated a hypergraph from genus-level fossil occurrences presented in ref.~\citen{rojas_multiscale_2019} and retrieved from the PaleoDB\cite{peters_paleobiology_2016}. We restricted our analysis to fossil occurrences from the Cambrian (541 MY) to the Cretaceous period (66 MY) and modelled 77 geological stages as hyperedges and 13,276 genera as nodes. Genera occurring in multiple geological stages form connections between hyperedges. We weighted the hyperedges by dividing the number of samples where a genus occurs in a given geological stage by the total number of samples recorded at the stage, a procedure modified from ref.~\citen{rojas_global_2017}. We generated bipartite, unipartite, and multilayer network representations for lazy and non-lazy random walks from the underlying palaeontology data and identified optimised partitions in the assembled networks using Infomap. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{figs/paleo-alluvial.pdf} \caption{Alluvial diagrams of optimised partitions for the fossil hypergraph represented with different networks. Lazy walks in (a) and non-lazy walks in (b). We show top modules when a partition lacks deeper levels and leaf modules marked with dashed lines when they exist. Module names from the geological period or era represented by the fauna assemblage. \label{fig:fossilalluvial}} \end{figure} For lazy random walks, Infomap partitioned only the multilayer representations into multilevel communities: three modules at the first hierarchical level [Fig. \ref{fig:fossilalluvial}(a)]. Similar to the schematic example and the collaboration hypergraph, the bipartite representation for the lazy random walks has the fewest leaf modules and the highest codelength. 
The multilayer hyperedge-similarity representation has the most leaf modules and the shortest codelength (Table \ref{table:fossil}). \begin{table*}[tbp] \caption{Optimised flow-based multilevel communities of the fossil hypergraph represented with different networks. The number of nodes includes state nodes for the multilayer representations and the bipartite non-lazy representation. The number of non-trivial top and leaf modules. Average number of levels weighted by the flow volume. We measure the overlap as the perplexity of the optimised solutions (see Methods). Shortest codelength of 20 trials with the variance in parentheses. \label{table:fossil}} \centering \setlength{\tabcolsep}{6pt} \begin{small} \begin{threeparttable} \begin{tabular}{@{}lcrcccclc@{}} \toprule\noalign{\smallskip} Representation & Nodes & \multicolumn{1}{c}{Links} & \multicolumn{4}{c}{Modules} & \multicolumn{1}{c}{Codelength} & Time \\ { } & $(\times 10^3)$ & $(\times 10^3)$ & Top & Leaf & Levels & Overlap & \multicolumn{1}{c}{(bits)} & (hh:mm:ss) \\ \noalign{\smallskip} \midrule\noalign{\smallskip} \footnotesize{\emph{Lazy}} &&&&& \\ \quad Bipartite & 13 & 79 & 5 & 8 & 2.02 & -- & 10.50927(5) & 00:00:06 \\ \quad Unipartite & 13 & 16,155 & 6 & 13 & 2.02 & -- & 10.3953503(1) & 00:13:24 \\ \quad Multilayer & 40 & 174,490 & 3 & 17 & 3.00 & 1.011 & 10.39819(1) & 09:08:43 \\ \quad Multilayer h-s\tnote{a} & 40 & 174,490 & 3 & 19 & 3.28 & 1.135 & \,\,\,9.84170(1) & 14:19:39 \\ \noalign{\smallskip} \footnotesize{\emph{Non-lazy}} &&&&&\\ \quad Bipartite & 53 & 25,937 & 2 & 15 & 3.02 & -- & 10.34889(3) & 01:14:25 \\ \quad Unipartite & 13 & 16,141 & 6 & 12 & 2.02 & -- & 10.4031798(6) & 00:13:04 \\ \quad Multilayer & 40 & 174,209 & 3 & 15 & 3.00 & 1.010 & 10.406141(9) & 08:55:03 \\ \quad Multilayer h-s\tnote{a} & 40 & 174,209 & 3 & 16 & 3.00 & 1.135 & \,\,\,9.84912(1) & 13:23:13 \\ \bottomrule\addlinespace[1ex] \end{tabular} \begin{tablenotes}\footnotesize \item[a] hyperedge-similarity \end{tablenotes} 
\end{threeparttable} \end{small} \end{table*} For non-lazy random walks, Infomap partitioned the bipartite representation into a multilevel solution with a shorter codelength than the unipartite representation and the standard multilayer representation [Fig.~\ref{fig:fossilalluvial}(b)]. The multilayer hyperedge-similarity representation once more provides the most leaf modules and the highest overlap. The multilayer network representations, for both lazy and non-lazy random walks, reproduce modules reminiscent of the Cambrian, Paleozoic, and modern evolutionary faunas widely used in macroevolutionary research\cite{sepkoski_factor_1981}. Also, leaf modules in the multilayer representations capture subfaunas from specific geological periods as nested modules, such as Silurian, Triassic, Jurassic, and Cretaceous. Infomap applied to the bipartite representation of the non-lazy random walks identified similar subfaunas but combined Cambrian and Paleozoic faunas into a single top module, obscuring the large-scale pattern. Overall, our results indicate some advantages of using multilayer over bipartite and unipartite representations of fossil occurrence data to quantify the marine biodiversity's macroevolutionary patterns, with lazy and non-lazy random walks providing similar solutions. \section*{Conclusions} We have derived unipartite, bipartite, and multilayer network representations of hypergraph flows with different advantages. We used the information-theoretic and flow-based community detection method Infomap to explore how different hypergraph random-walk models and network representations change the number, size, depth, and overlap of identified multilevel communities. 
By identifying flow-based communities in both a schematic hypergraph and real hypergraphs -- a small collaboration hypergraph of researchers working on networks beyond pairwise interactions and a large faunal hypergraph of sampled species across geological stages -- we found that the bipartite network representation is the most compact and enables the fastest community detection. A multilayer network representation that reinforces flows within similar layers -- one for each hyperedge -- gave the deepest modular structures with the most module overlap. But this module-detection gain comes at a high computational cost: Combining fully connected layers with other layers requires many more nodes and links than in the bipartite network representation. If the research question does not require hyperedge assignments or overlapping modules, the unipartite network representation provides a trade-off with intermediate compactness, speed, and the ability to reveal modular regularities. Among the random-walk models, lazy walks typically give more modules in deeper nested structures, and non-lazy walks provide higher modular overlap. Our methods and results help researchers model and map flows on hypergraphs to study the effects of multibody interactions in complex systems. \section*{Methods} \subsection*{Unrecorded teleportation} With hyperedge-independent node weights, where $\gamma_e(u) = \gamma(u)$ for all hyperedges $e \in E(u)$, undirected weighted networks can represent the dynamics, and the stationary distribution of the random walk $\pi_u$ is proportional to the product of node $u$'s total incident hyperedge weight $d(u)$ and weight $\gamma(u)$. With normalised node-visit rates\cite{chitra2019random}, \begin{align}\label{eq:stat_dist_node} \pi_u = \frac{d(u)\gamma(u)}{\sum_{v \in V}d(v)\gamma(v)}. 
\end{align} For the multilayer network representation, the node-visit rates split between layers based on node $u$'s incident hyperedge weight in each layer, giving the state-node visit rates \begin{align}\label{eq:stat_dist_statenode} \pi_{u}^{\layerE} = \frac{\omega(\layerE)\gamma(u)}{\sum_{v \in V}d(v)\gamma(v)}. \end{align} With hyperedge-dependent node weights $\gamma_e(u)$, only directed weighted networks can represent the dynamics. We use random teleportation to ensure ergodic walks when deriving the node-visit rates with the power-iteration method. Unrecorded teleportation to links minimises the distortion\cite{lambiotte2012ranking}: In each iteration of the power-iteration method, we distribute a fraction $\tau=0.15$ of each node's flow volume among all nodes proportional to their out-link weights. The remaining flow volume moves on the links proportional to their weights. In the last iteration, we move all flows on the links proportional to their weights and record all flows on links and nodes to obtain the ergodic node- and link-visit rates with unrecorded teleportation. This procedure gives the same visit rates as simulating a random walker that only records moves on links: With probability $1-\tau$, the random walker moves to a node by following the links proportional to their weights and records the link and the target node. With probability $\tau$, the random walker teleports, without recording, to a link's source node proportional to the link weight. The normalised number of recordings of each node and link gives the visit rates. We want teleportation applied to undirected networks -- where it is unnecessary -- to leave the node- and link-visit rates unchanged. 
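A minimal sketch of this power iteration with unrecorded teleportation (our own illustration, not the Infomap implementation; $W$ is a matrix of link weights with row $u$ holding node $u$'s out-links):

```python
import numpy as np

def visit_rates(W, tau=0.15, n_iter=200):
    """Power iteration with unrecorded teleportation.

    Teleportation targets are drawn proportional to the nodes' total
    out-link weights (smooth teleportation), so on undirected networks
    the visit rates are unchanged by teleportation."""
    out = W.sum(axis=1)
    P = W / out[:, None]            # row-stochastic transition matrix
    t = out / out.sum()             # teleport ~ total out-link weight
    p = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(n_iter):
        p = (1 - tau) * (p @ P) + tau * t
    p = p @ P                       # last step: record link moves only
    return p / p.sum()
```

For a symmetric $W$, the result equals the weighted-degree distribution, illustrating that smooth teleportation leaves undirected visit rates unchanged.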
We achieve this smooth teleportation by scaling the transition rates from nodes by the node-visit rates: Then unrecorded teleportation proportional to the nodes' total out-link weights followed by recorded moves on the links proportional to their weights distributes on the nodes according to the ergodic visit rates on undirected networks\cite{lambiotte2012ranking}. For the general case when the node weights can depend on the hyperedge, and the network may be directed, we use Eq.~\ref{eq:stat_dist_node} without assuming $\gamma_e(u) = \gamma(u)$ as an approximation of the node-visit rates: \begin{align} \tilde{\pi}_u = \frac{\sum_{e \in E(u)}\omega(e)\gamma_e(u)}{\sum_{v \in V\!\!, e \in E(v)}\omega(e)\gamma_e(v)} \end{align} for nodes and \begin{align} \tilde{\pi}_{u}^{\layerE} = \frac{\omega(\layerE)\gamma_\layerE(u)}{\sum_{v \in V\!\!, e \in E(v)}\omega(e)\gamma_e(v)}\text{ for } \layerE \in E(u) \end{align} for state nodes. With exact node-visit rates, we would obtain the stationary flow volumes on links by multiplying the transition rates by the source nodes' visit rates. With approximate node-visit rates, instead, we obtain the link weights \begin{align} w_{ue} = \tilde{\pi}_u P_{ue} \end{align} for bipartite networks, \begin{align} w_{uv} = \tilde{\pi}_u P_{uv} \end{align} for unipartite networks, and \begin{align} w_{uv}^{\layerE\layerF} = \tilde{\pi}_{u}^{\layerE} P_{uv}^{\layerE\layerF}\text{ for } \layerF \in E(u,v) \end{align} for multilayer networks. With unrecorded teleportation proportional to these link weights, modelling flows on hypergraphs gives node-visit rates robust to changes in the teleportation rate and independent of the representation. \bigskip\subsection*{Overlap metric} Modules overlap when Infomap assigns a node's state nodes in the multilayer network representations to different modules. 
Measuring the overlap through the absolute number of assignments is misleading because the overlap would be 2 regardless of how many state nodes are assigned to a different module than the rest. Instead, we used the effective number of assignments. If a fraction~$f$ of node~$u$'s state nodes is assigned to the $m$th module in $u$'s module assignment set, the $m$th element of $u$'s assignment vector is $a^u_m = f$ and the effective number of assignments measured by the perplexity of $u$'s module assignments is \begin{equation} o_u = 2^{H(\mathbf{a}^u)}, \end{equation} where $H$ denotes the Shannon entropy. The effective number of assignments is one if all of $u$'s state nodes are in one module, and it is equal to the number of assignments when the state nodes are divided evenly among $u$'s module assignments. We averaged over all nodes for the partition overlap. \section*{Data and code availability} All data and source code are available on GitHub: \small{\url{http://github.com/mapequation/mapping-hypergraphs}}. \bibliographystyle{naturemag}
\section{Introduction and Preliminary} Long-Guang and Xian in \cite{Xian} generalized the concept of a metric space, replacing the set of real numbers by an ordered Banach space, and obtained some fixed point theorems for mappings satisfying different contractive conditions.\\ Recently, Wei-Shih Du in \cite{Du1} proved that the Banach contraction principle in general metric spaces and in TVS-cone metric spaces are equivalent, and in \cite{Du2} obtained new-type fixed point theorems for nonlinear multivalued maps in metric spaces as well as generalizations of Mizoguchi-Takahashi's fixed point theorem and Berinde-Berinde's fixed point theorem. In this paper, using the metric introduced by Feng and Mao in \cite{Feng} and by Asadi and Vaezpour in \cite{AVS-3}, we show that the equivalent metric satisfies the same contractive conditions as the corresponding cone metric.\\ Let $E$ be a real Banach space. A subset $P\subset E$ is called a cone in $E$ if it satisfies: \begin{enumerate} \item[$(i)$] {$P$ is closed, nonempty and $P\neq \{0\}$,} \item[$(ii)$] {$a,b\in \mathbb{R},$ $a,b\geq 0$ and $x,y \in P$ imply that $ax+by \in P,$} \item[$(iii)$] {$x \in P$ and $-x \in P$ imply that $x = 0.$} \end{enumerate} The space $E$ can be partially ordered by the cone $P\subset E$; that is, $x \le y$ if and only if $y-x \in P$. Also, we write $x\ll y$ if $y-x \in P^o$, where $P^o$ denotes the interior of $P$.\\ A cone $P$ is called normal if there exists a constant $k>0$ such that $0\le x \le y$ implies $\|x\| \le k\|y\|$.\\ In the sequel we always suppose that $E$ is a real Banach space, $P$ is a cone in $E$ with nonempty interior, i.e. $P^o\neq \emptyset$, and $\leq$ is the partial ordering with respect to $P$. \begin{defn} (\cite{Xian}) Let $X$ be a nonempty set. 
Assume that the mapping $D:X\times X\rightarrow E$ satisfies \begin{enumerate} \item[(i)] {$0\leq D(x,y)$ for all $x,y \in X$ and $D(x,y)=0$ iff $x=y$,} \item[(ii)] {$D(x,y)=D(y,x)$ for all $x,y \in X$,} \item[(iii)] {$D(x,y)\leq D(x,z)+D(z,y)$ for all $x,y,z \in X$.} \end{enumerate} Then $D$ is called a cone metric on $X$, and $(X,D)$ is called a cone metric space. \end{defn} \begin{defn} A metric $d$ is equivalent to a cone metric $D$ if the topologies generated by $d$ and $D$ coincide and, further, the equivalent metric satisfies the same contractive conditions as the cone metric. \end{defn} In other words, convergence in one of them implies convergence in the other, i.e. $$x_n \overset{d}{\longrightarrow} x\iff x_n \overset{D}{\longrightarrow} x.$$ \begin{thm}(\cite{Feng}) For every cone metric $D:X\times X\rightarrow E$ there exists a metric $d:X\times X\rightarrow \mathbb{R}^+$ which is equivalent to $D$ on $X$. \end{thm} Indeed, the metric $d$ defined in \cite{Feng,AVS-3} is $d(x,y)=\inf\{\|u\|: D(x,y)\leq u\}.$ Also, recall that for all $\{x_n\}\subseteq X$ and $x\in X$, $x_n\rightarrow x$ in $(X,d)$ if and only if $x_n\rightarrow x$ in $(X,D)$ (\cite{Feng,AVS-3}).\\ \newline Throughout this paper we show that the equivalent metric satisfies the same contractive conditions as the cone metric, so most of the fixed point theorems that have been proved in cone metric spaces are straightforward consequences of the metric case. For more details see \cite{HSh,AR,IR1,IR2,R.Sh,Wardowski,deim,Rh,RR,KRR,JKRR,JRRR,R,KRV,Rh2}. \section{Main Results} \begin{lem}\label{b} Let $D,D^*:X\times X\rightarrow E$ be cone metrics, $d,d^*:X\times X\rightarrow \mathbb{R}^+$ their equivalent metrics, respectively, and $T:X\rightarrow X$ a self map. If $D(Tx,Ty)\leq D^*(x,y)$, then $d(Tx,Ty)\leq d^*(x,y).$ \end{lem} {\bf Proof.} By the definition of $d^*$, $$\forall \varepsilon>0 \quad\exists v \quad \|v\|<d^*(x,y)+\varepsilon,\quad D^*(x,y)\leq v. 
$$ Therefore, if $D(Tx,Ty)\leq D^*(x,y)\leq v,$ then we have $$d(Tx,Ty)\leq \| v\|\leq d^*(x,y)+\varepsilon.$$ Since $\varepsilon>0$ was arbitrary, $d(Tx,Ty)\leq d^*(x,y).$ $\Box$\newline \begin{exa} Let $E:=\Bbb R^+$, $P:=\Bbb R^+$ and $D:X\times X\rightarrow E$ be a cone metric, $d:X\times X\rightarrow \mathbb{R}^+$ be its equivalent metric. Also let $T:X\rightarrow X$ be a self map and $\varphi:\Bbb R^+\rightarrow \Bbb R^+$ be defined by $\varphi (x)=\frac{x}{1+x}$. If $D^*:=\varphi(D)$, then it is easy to see that $D^*(x,y)=\varphi (D(x,y))=\frac{D(x,y)}{1+D(x,y)}$ is a cone metric and its equivalent metric is $d^*=\varphi(d)$, and if $D(Tx,Ty)\leq \varphi (D(x,y))=\frac{D(x,y)}{1+D(x,y)},$ then by Lemma \ref{b}, $d(Tx,Ty)\leq \varphi (d(x,y))=\frac{d(x,y)}{1+d(x,y)}.$ We can see that $x_n\to x$ in $(X,d)$ if and only if $x_n\to x$ in $(X,D)$. \end{exa} \begin{defn} A self map $\varphi$ on a normed space $X$ is bounded if $$\|\varphi\|:=\sup_{0\neq x\in X}\frac{\|\varphi(x)\|}{\|x\|}<\infty.$$ \end{defn} \begin{thm} Let $D:X\times X\rightarrow E$ be a cone metric, $d:X\times X\rightarrow \mathbb{R}^+$ its equivalent metric, $T:X\rightarrow X$ a self map and $\varphi:P\rightarrow P$ a bounded map. Then there exists $\psi:\mathbb{R}^+\rightarrow \mathbb{R}^+$ such that $D(Tx,Ty)\leq \varphi(D(x,y))$ for every $x,y\in X$ implies $d(Tx,Ty)\leq \psi(\|D(x,y)\|)$ for all $x,y\in X$. Moreover, if $\psi$ is a decreasing map or $\varphi$ is a linear and increasing map, then $d(Tx,Ty)\leq \psi(d(x,y))$ for all $x,y\in X$. \end{thm} {\bf Proof.} Put $\psi(t):=\sup_{0\neq x\in P}\left\|\varphi\left(\frac{t}{\|x\|}x\right )\right\|$ for all $t\in \Bbb R^+$ and note that $\psi(t)\leq t\|\varphi\|$ for all $t\in\Bbb R^+$. So $\|\varphi(x)\|\leq\psi(\|x\|)$ for all $x\in P$. Therefore, if $D(Tx,Ty)\leq \varphi(D(x,y)),$ then we have $d(Tx,Ty)\leq \| \varphi(D(x,y))\|\leq \psi(\|D(x,y)\|).$ \newline By the definition of $d$ we have $d(x,y)\leq \|D(x,y)\|$. 
Now if $\psi$ is a decreasing map, then $$d(Tx,Ty)\leq \psi(\|D(x,y)\|)\leq \psi(d(x,y)).$$ If $\varphi$ is a linear increasing map, then $\psi(t)=t\|\varphi\|.$ The definition of $d$ implies that $$\forall \varepsilon>0 \quad\exists v \quad \|v\|<d(x,y)+\varepsilon,\quad D(x,y)\leq v. $$ Therefore, if $D(Tx,Ty)\leq \varphi(D(x,y))\leq \varphi(v),$ then we have $$d(Tx,Ty)\leq \| \varphi(v)\|\leq \psi(\|v\|)\leq \psi(d(x,y))+\psi(\varepsilon).$$ Since $\varepsilon>0$ was arbitrary and $\psi(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0$, we obtain $d(Tx,Ty)\leq \psi(d(x,y)).$ $\Box$\newline \par In the following, a summary of our results is listed. \begin{cor}\label{2.5} Let $D$ be a cone metric, $d$ its equivalent metric, $T :X\rightarrow X$ a map, $\lambda\in[0,\frac{1}{2})$ and $\alpha,\beta\in [0,1)$. For $x, y \in X$, \begin{enumerate} \item[i.] $D(Tx, Ty)\leq \alpha D(x,y) \Rightarrow d(Tx, Ty)\leq \alpha d(x, y).$ \item[ii.] $D(Tx, Ty)\leq \lambda (D(Tx, x) + D(Ty, y)) \Rightarrow d(Tx, Ty)\leq \lambda (d(Tx, x) + d(Ty, y)).$ \item[iii.] $D(Tx, Ty)\leq \lambda (D(Tx, y) + D(Ty, x)) \Rightarrow d(Tx, Ty)\leq \lambda (d(Tx, y) + d(Ty, x)).$ \item[iv.] $D(Tx, Ty)\leq \alpha D(x, Ty) + \beta D(Tx, y) \Rightarrow d(Tx, Ty)\leq \alpha d(x, Ty) +\beta d(Tx, y).$ \item[v.] $D(Tx, Ty)\leq \alpha D(x, Tx) + \beta D(y, Ty) \Rightarrow d(Tx, Ty)\leq \alpha d(x, Tx) +\beta d(y, Ty).$ \end{enumerate} \end{cor} \begin{cor} Let $D$ be a cone metric, $d$ its equivalent metric, $T :X\rightarrow X$ a map and $\alpha,\beta\in [0,1)$. For $x, y \in X$, \begin{enumerate} \item[a.] if there exists $u\in \{D(x, y); D(x, Tx); D(y, Ty); \frac{1}{2}[D(x, Ty)+ D(y, Tx)]\}$ such that $D(Tx, Ty)\leq \alpha u$ where $\alpha \in(0, 1),$ then $$d(Tx, Ty)\leq \alpha\max\{d(x, y); d(x, Tx); d(y, Ty); \frac{1}{2}[d(x, Ty)+ d(y, Tx)]\}.$$ \item[b.] 
if there exists $u\in \{D(x, y); D(x, Tx); D(y, Ty); \frac{1}{2} D(x, Ty);\frac{1}{2}D(y, Tx)\}$ such that $D(Tx, Ty)\leq \beta u$ where $\beta \in(0, 1),$ then $$d(Tx, Ty)\leq \beta\max\{d(x, y); d(x, Tx); d(y, Ty); \frac{1}{2}d(x, Ty);\frac{1}{2}d(y, Tx)\}.$$ \item[c.] if there exists $u\in \{D(x, y); \frac{1}{2}[D(x, Tx)+ D(y, Ty)]; \frac{1}{2}[D(x, Ty)+ D(y, Tx)]\}$ such that $D(Tx, Ty)\leq \beta u$ where $\beta \in(0, 1),$ then $$d(Tx, Ty)\leq \beta\max\{d(x, y); \frac{1}{2}[d(x, Tx)+ d(y, Ty)]; \frac{1}{2}[d(x, Ty)+ d(y, Tx)]\}.$$ \end{enumerate} \end{cor} {\bf Proof.} To prove $(a)$, if $u\in \{D(x, y); D(x, Tx); D(y, Ty)\}$, then by Corollary \ref{2.5}, $(i)$; and if $u= \frac{1}{2}(D(x, Ty)+ D(y, Tx))$ by Corollary \ref{2.5}, $(iv)$; with $\alpha=\beta=\frac{1}{2}$ we obtain the desired results.\newline $(b)$ and $(c)$ are clear, by Corollary \ref{2.5}, $(i)$ and $(iv),(v)$ respectively. $\Box$ \begin{cor} Let $D$ be a cone metric, $d$ its equivalent metric, $T :X\rightarrow X$ a map. For $x, y \in X$, \begin{enumerate} \item[a.] if $$D(Tx, Ty)\leq a_1D(x, y)+a_2 D(x, Tx)+a_3 D(y, Ty)+a_4D(x, Ty)+a_5D(y, Tx),$$ then $$d(Tx, Ty)\leq a_1d(x, y)+a_2 d(x, Tx)+a_3 d(y, Ty)+a_4d(x, Ty)+a_5d(y, Tx)$$ where $\sum_{i=1}^5a_i< 1.$ \item[b.] if there exists $$u\in \{D(x, y); D(x, Tx); D(y, Ty); D(x, Ty);D(y, Tx)\}$$ such that $D(Tx, Ty)\leq \lambda u,$ then $$d(Tx, Ty)\leq \lambda\max\{d(x, y); d(x, Tx); d(y, Ty); d(x, Ty);d(y, Tx)\}$$ where $\lambda\in[0,\frac{1}{2})$. \item[c.] if $$D(Tx, Ty)\leq a_1D(x, y)+a_2 D(x, Tx)+a_3 D(y, Ty)+a_4[D(x, Ty)+D(y, Tx)],$$ then $$d(Tx, Ty)\leq a_1d(x, y)+a_2 d(x, Tx)+a_3 d(y, Ty)+a_4[d(x, Ty)+d(y, Tx)]$$ where $a_1+a_2+a_3+2a_4< 1.$ \end{enumerate} \end{cor} {\bf Proof.} To prove $(a)$, for convenience, put $$D^T:=D(Tx,Ty),D_1:=D(x,y),$$ $$D_2:=D(x,Tx),D_3:=D(y,Ty),D_4:=D(x,Ty),D_5:=D(y,Tx)$$ and similarly $$d^T:=d(Tx,Ty),d_1:=d(x,y),$$ $$d_2:=d(x,Tx),d_3:=d(y,Ty),d_4:=d(x,Ty),d_5:=d(y,Tx).$$ So $D^T\leq \sum_{i=1}^5a_iD_i$. 
Now by the definition of $d$, $$\forall i~(1\leq i\leq 5)~\forall \varepsilon>0~\exists v_i\quad\text{s.t.} \quad \|v_i\|<d_i+\varepsilon$$ and $D_i\leq v_i$. Therefore $$D^T\leq \sum_{i=1}^5a_iD_i\leq \sum_{i=1}^5a_iv_i,$$ thus $$d^T\leq \|\sum_{i=1}^5a_iv_i\|\leq \sum_{i=1}^5a_i\|v_i\|< \sum_{i=1}^5a_id_i+(\sum_{i=1}^5a_i)\varepsilon.$$ Since $\varepsilon>0$ is arbitrary, we have $$d^T\leq \sum_{i=1}^5a_id_i.$$ To prove $(b)$ and $(c)$ we use Corollary \ref{2.5}. $\Box$ \begin{cor} Let $D,D^*$ be cone metrics, $d,d^*$ their equivalent metrics, $T :X\rightarrow X$ a map. If there exist $m,n\in\mathbb{N}$ and $ k\in[0,1)$ such that $$D(T^mx,T^ny)\leq kD(z,t)$$ for all $x,y\in X$, $z\neq t$ and $z,t\in \{x,y,T^px,T^qy\}$ where $1\leq p\leq m$ and $1\leq q\leq n,$ then $$d(T^mx,T^ny)\leq k d(z,t).$$ \end{cor} {\bf Proof.} By the definition of $d$ we have $$\forall z,t\in \{x,y,T^px,T^qy\}~\forall \varepsilon>0~\exists v\quad\text{s.t.} \quad \|v\|<d(z,t)+\varepsilon$$ where $z\neq t$ and $D(z,t)\leq v$. So $$D(T^mx,T^ny)\leq kD(z,t)\leq kv,$$ therefore $$d(T^mx,T^ny)\leq \|kv\|< kd(z,t)+k\varepsilon.$$ Since $\varepsilon>0$ is arbitrary, we have $$d(T^mx,T^ny)\leq k d(z,t). \Box$$ {\bf Acknowledgements}\newline The author is indebted to the referee for carefully reading the paper and for making useful suggestions. This paper has been supported by the Zanjan Branch, Islamic Azad University, Zanjan, Iran. The author would like to acknowledge this support.
2004.07494
\section{Introduction}\label{intro} Let $\mathcal{H}$ be a complex Hilbert space with inner product $\langle \cdot,\cdot\rangle$, let $\|\cdot\|$ be the norm induced by $\langle \cdot,\cdot\rangle$, and let $\mathcal{B}(\mathcal{H})$ be the $C^*$-algebra of all bounded linear operators on $\mathcal{H}$. For $T\in \mathcal{B(H)}$, the {\it numerical range} of $T$ is defined as $$W(T)=\{\langle Tx, x \rangle: x\in \mathcal{H}, \|x\|=1 \}.$$ The {\it numerical radius} of $T$, denoted by $w(T)$, is defined as $ w(T)=\displaystyle\sup\{|z|: z\in W(T) \}.$ It is well-known that $w(\cdot)$ defines a norm on $\mathcal{B(H)}$ which is equivalent to the usual operator norm $\|T\|=\displaystyle \sup \{ \|Tx \|: x\in \mathcal{H}, \|x\|=1 \}.$ In fact, for every $T \in \mathcal{B(H)}$, \begin{align}\label{p3100} \frac{1}{2}\|T\|\leq w(T)\leq \|T\|. \end{align} One may refer to \cite{MBKS,TY,SND,SND1,SND2,HirKit} for several generalizations, refinements and applications of numerical radius inequalities in different settings which appeared in the last decade. A selfadjoint operator $A\in\mathcal{B}(\mathcal{H})$ is called {\it positive} if $\langle Ax, x\rangle \geq 0$ for all $x \in \mathcal{H}$, and is called {\it strictly positive} if $\langle Ax, x\rangle > 0$ for all non-zero $x\in \mathcal{H}$. We denote a positive (strictly positive) operator $A$ by $A \geq 0$ ($ A > 0$). Let $B$ be a $2\times 2$ diagonal operator matrix, in which each of the diagonal entries is a positive operator $A$. Throughout this article, $A$ is always assumed to be a positive operator.
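As a concrete illustration of the sharpness of both bounds in \eqref{p3100} (a standard textbook example, added here for the reader rather than taken from the cited sources), consider the following $2\times 2$ matrices:

```latex
% Left bound of \eqref{p3100} attained: the nilpotent Jordan block.
% Here Tx=(x_2,0), so \langle Tx,x\rangle=x_2\overline{x_1} and
% |x_2\overline{x_1}| \le (|x_1|^2+|x_2|^2)/2 = 1/2 on the unit sphere.
T=\begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix},\qquad
W(T)=\bigl\{z\in\mathbb{C}: |z|\le \tfrac12\bigr\},\qquad
w(T)=\tfrac12=\tfrac12\|T\|.
% Right bound attained: for any selfadjoint S one has w(S)=\|S\|,
% e.g. S=\mathrm{diag}(1,0) with w(S)=\|S\|=1.
```

Equality $|x_2\overline{x_1}|=\tfrac12$ is attained at $x=\tfrac{1}{\sqrt2}(1,1)$, so both extremes in \eqref{p3100} occur already in dimension two.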
Clearly, if $A$ is a positive operator, it induces a positive semidefinite sesquilinear form, $\langle \cdot,\cdot\rangle_A: \mathcal{H}\times\mathcal{H}\rightarrow\mathbb{C}$ defined by $\langle x, y \rangle_A=\langle Ax,y\rangle,$ $x,y\in\mathcal{H}.$ Let $\|\cdot\|_A$ denote the semi-norm on $\mathcal{H}$ induced by $\langle \cdot, \cdot \rangle_A,$ i.e., $\|x\|_A=\sqrt{\langle x, x \rangle_A}$ for all $x \in \mathcal{H}.$ It is easy to verify that $\|\cdot\|_A$ is a norm if and only if $A$ is a strictly positive operator. Also, $(\mathcal{H}, \|\cdot\|_A)$ is complete if and only if the range of $A$ ($\mathcal{R}(A)$) is closed in $\mathcal{H}.$ For $T \in\mathcal{B}\mathcal{(H)}$, the $A$-operator seminorm of $T$, denoted by $\|T\|_A,$ is defined as $$\|T\|_A:=\sup_{x\in \overline{\mathcal{R}(A)},~x\neq 0}\frac{\|Tx\|_A}{\|x\|_A}=\inf\left\{c>0: \|Tx\|_A\leq c\|x\|_A,x\in \overline{\mathcal{R}(A)}\right\}.$$ We set $\mathcal{B}^A\mathcal{(H)}:=\{T\in \mathcal{B(H)}:\|T\|_A<\infty\}.$ It can be seen that $\mathcal{B}^A\mathcal{(H)}$ is not generally a subalgebra of $\mathcal{B(H)}$, and $\|T\|_A=0$ if and only if $ATA=0.$ For $T\in\mathcal{B}^A\mathcal{(H)},$ we also have $$\|T\|_A=\sup \{|\langle Tx,y\rangle_A|: x,y\in \overline{\mathcal{R}(A)}, ~\|x\|_A=\|y\|_A=1\}.$$ If $AT\geq 0$, then the operator $T$ is called {\it $A$-positive}. Note that if $T$ is $A$-positive, then $$\|T\|_A=\sup \{\langle Tx,x\rangle_A: x\in \mathcal{H}, \|x\|_A=1\}.$$ For $T\in \mathcal{B(H)},$ an operator $R\in \mathcal{B(H)}$ is called an {\it $A$-adjoint operator} of $T$ if for every $x,y\in \mathcal{H},$ we have $\langle Tx,y\rangle_A=\langle x, Ry\rangle_A,$ i.e., $AR=T^*A.$ By Douglas' theorem \cite{Doug}, the existence of an $A$-adjoint operator is not guaranteed. In fact, an operator $T\in \mathcal{B(H)}$ may admit none, one or many $A$-adjoints.
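A minimal finite-dimensional example (our own illustration, using only the definitions above) shows why strict positivity is needed for $\|\cdot\|_A$ to be a norm, and how $\|T\|_A$ can vanish for a nonzero $T$:

```latex
% On \mathcal{H}=\mathbb{C}^2 take the positive, but not strictly positive,
A=\begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix},\qquad
\|x\|_A=\sqrt{\langle Ax,x\rangle}=|x_1|,
% which vanishes on x=(0,1): a seminorm, not a norm.
% For the shift T with Te_1=0 and Te_2=e_1,
T=\begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix},\qquad
TA=0,\quad\text{hence}\quad ATA=0
\quad\Longrightarrow\quad \|T\|_A=0\ \text{although}\ T\neq 0.
```

Indeed $\overline{\mathcal{R}(A)}=\operatorname{span}\{e_1\}$ and $T$ vanishes there, consistent with the criterion $\|T\|_A=0$ if and only if $ATA=0$.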
The set of all operators which admit an $A$-adjoint is denoted by $\mathcal{B}_A\mathcal{(H)}.$ Note that $\mathcal{B}_A\mathcal{(H)}$ is a subalgebra of $\mathcal{B(H)}$ which is neither closed nor dense in $\mathcal{B(H)}.$ Moreover, the inclusions $\mathcal{B}_A\mathcal{(H)}\subseteq \mathcal{B}^A\mathcal{(H)}\subseteq\mathcal{B}\mathcal{(H)}$ hold with equality if $A$ is injective and has a closed range. For $A\in \mathcal{B(H)}$ with closed range $\mathcal{R}(A)$, the {\it Moore-Penrose inverse} of $A$ \cite{gro} is the operator $X\in \mathcal{B(H)}$ which satisfies the following four Penrose equations: \begin{center} (1) $AXA = A$,~ (2) $XAX = X$,~ (3) $(A X)^* = A X$,~ (4) $(X A)^*= X A.$ \end{center} It is unique, and is denoted by $A^\dagger.$ If $T\in \mathcal{B}_A\mathcal{(H)},$ the reduced solution of the equation $AX=T^*A$ is a distinguished $A$-adjoint operator of $T,$ which is denoted by $T^{\#_A}$ (see \cite{Mos}). Note that $T^{\#_A}=A^\dagger T^* A$. If $T\in \mathcal{B}_A(\mathcal{H}),$ then $AT^{\#_A}=T^*A.$ An operator $T\in \mathcal{B(H)}$ is said to be {\it $A$-selfadjoint} if $AT$ is selfadjoint, i.e., $AT=T^*A.$ Observe that if $T$ is $A$-selfadjoint, then $T\in \mathcal{B}_A(\mathcal{H}).$ However, in general, $T\neq T^{\#_A}.$ For $T\in \mathcal{B}_A(\mathcal{H}),$ $T=T^{\#_A}$ if and only if $T$ is $A$-selfadjoint and $\mathcal{R}(T)\subseteq \overline{\mathcal{R}(A)}.$ Note that if $T\in \mathcal{B}_A(\mathcal{H}),$ then $T^{\#_A}\in \mathcal{B}_A(\mathcal{H}),$ $(T^{\#_A})^{\#_A}=PTP,$ where $P$ is the orthogonal projection onto $\overline{\mathcal{R}(A)},$ and $\left((T^{\#_A})^{\#_A}\right)^{\#_A}=T^{\#_A}.$ Also $T^{\#_A}T$ and $TT^{\#_A}$ are $A$-selfadjoint and $A$-positive operators. So, \begin{align}\label{ineq0} \|T^{\#_A}T\|_A=\|TT^{\#_A}\|_A=\|T\|_A^2=\|T^{\#_A}\|_A^2.
\end{align} An operator $U\in \mathcal{B}_A(\mathcal{H})$ is said to be $A$-unitary if $\|Ux\|_A=\|U^{\#_A}x\|_A=\|x\|_A$ for all $x\in \mathcal{H}.$ If $T\in \mathcal{B}_A(\mathcal{H})$ and $U$ is $A$-unitary, then $w_A(U^{\#_A}TU)=w_A(T).$ Again, for $T,S\in \mathcal{B}_A(\mathcal{H}),$ $(TS)^{\#_A}=S^{\#_A}T^{\#_A},$ $\|TS\|_A\leq \|T\|_A\|S\|_A$ and $\|Tx\|_A\leq \|T\|_A\|x\|_A$ for all $x\in \mathcal{H}.$ For $T\in \mathcal{B}_A(\mathcal{H})$, we write $Re_A(T)=\frac{T+T^{\#_A}}{2}$ and $Im_A(T)=\frac{T-T^{\#_A}}{2i}$. For further details, we refer the reader to \cite{ARIS,ARIS2}. In 2012, Saddi \cite{Saddi} defined the {\it $A$-numerical radius} of $T,$ denoted by $w_A(T),$ for $T\in \mathcal{B(H)}$ as follows: $$w_A(T)=\sup\{|\langle Tx,x\rangle_A|:x\in \mathcal{H}, \|x\|_A=1\}. $$ In 2019, Zamani \cite{Zam} showed that if $T\in \mathcal{B}_A\mathcal{(H)}$, then \begin{align}\label{ineq00} w_A(T)=\sup_{\theta\in \mathbb{R}}\left\|\frac{e^{i\theta}T+(e^{i\theta}T)^{\#_A}}{2}\right\|_A. \end{align} The author then extended inequality \eqref{p3100} to the $A$-numerical radius: \begin{align}\label{ineq1} \frac{1}{2}\|T\|_A\leq w_A(T)\leq \|T\|_A. \end{align} Furthermore, if $T$ is $A$-selfadjoint, then $w_A(T)=\|T\|_A$. In 2019, Moslehian {\it et al.} \cite{MOS} continued the study of the $A$-numerical radius and established some further inequalities for it.\\ For a $2\times 2$ operator matrix $T,$ the $B$-numerical radius of $T$ is defined as $$w_B(T)=\sup\{|\langle Tx,x\rangle_B|:x\in \mathcal{H}\oplus\mathcal{H}, \|x\|_B=1\},$$ where $B=\begin{bmatrix} A & 0\\ 0 & A \end{bmatrix}$.\\ In 2019, Bhunia {\it et al.} \cite{PINTU} studied $B$-numerical radius inequalities of $2\times 2$ operator matrices, where $B$ is a $2\times 2$ diagonal operator matrix whose diagonal entries are $A$.
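To fix ideas, here is a small worked example (added for illustration; the matrices are ours) with a non-trivial weight $A$, in which the first inequality in \eqref{ineq1} becomes an equality:

```latex
% On \mathcal{H}=\mathbb{C}^2 take a strictly positive weight and a nilpotent T:
A=\begin{pmatrix} 1 & 0\\ 0 & 2 \end{pmatrix},\qquad
T=\begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix},\qquad
T^{\#_A}=A^{-1}T^{*}A=\begin{pmatrix} 0 & 0\\ \tfrac12 & 0 \end{pmatrix},
% and one checks directly that AT^{\#_A}=T^{*}A. Moreover
\langle Tx,x\rangle_A=x_2\overline{x_1},\qquad
\|x\|_A^{2}=|x_1|^{2}+2|x_2|^{2},
% so maximizing |x_1||x_2| subject to |x_1|^2+2|x_2|^2=1 gives
w_A(T)=\frac{1}{2\sqrt{2}},\qquad
\|T\|_A=\frac{1}{\sqrt{2}},\qquad
w_A(T)=\frac{1}{2}\,\|T\|_A.
```

Here $A^\dagger=A^{-1}$ since $A$ is invertible, and the maximum of $|x_1||x_2|$ on the $A$-unit sphere is attained at $|x_1|^2=\tfrac12$, $|x_2|^2=\tfrac14$.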
In this direction, several authors have studied generalizations and refinements of the $A$-numerical radius; for more details, one can refer to \cite{Pintu1, Feki, Pintu2}. This motivates us to study this topic further. The objective of this paper is to present new $B$-numerical radius inequalities for $2\times 2$ operator matrices. Furthermore, two refinements of the first inequality in \eqref{ineq1} are presented in this article. The article is structured as follows. In Section 2, we recall some upper and lower bounds for the $B$-numerical radius of a $2\times 2$ operator matrix. The next section contains our main results and is twofold: the first part establishes some upper and lower bounds for $2\times 2$ operator matrices, while the second part deals with certain refinements of \eqref{ineq1}. \section{Preliminaries} In 2020, Bhunia {\it et al.} \cite{PINTU} proved the following lemma for $2\times 2$ operator matrices. \begin{lemma}\label{lem0001}\textnormal{[Lemma 2.4, \cite{PINTU}]} \\ Let $T_1, T_2\in \mathcal{B}_A(\mathcal{H}).$ Then the following results hold: \begin{enumerate} \item [\textnormal{(i)}] $w_B\left(\begin{bmatrix} T_1 & 0\\ 0 & T_2 \end{bmatrix}\right)= \max\{w_A(T_1), w_A(T_2)\}.$\\ \item [\textnormal{(ii)}] If $A>0$, then $w_B\left(\begin{bmatrix} 0 & T_1\\ T_2 & 0 \end{bmatrix}\right)=w_B\left(\begin{bmatrix} 0 & T_2\\ T_1 & 0 \end{bmatrix}\right).$\\ \item [\textnormal{(iii)}] If $A>0,$ then~for~any~$\theta\in\mathbb{R}, w_B\left(\begin{bmatrix} 0 & T_1\\ e^{i\theta}T_2 & 0 \end{bmatrix}\right)=w_B\left(\begin{bmatrix} 0 & T_1\\ T_2 & 0 \end{bmatrix}\right).$\\ \item [\textnormal{(iv)}] If $A>0$,~ then~ $w_B\left(\begin{bmatrix} T_1 & T_2\\ T_2 & T_1 \end{bmatrix}\right)=\max\{w_A(T_1+T_2),w_A(T_1-T_2)\}.$\\ In particular, $w_B\left(\begin{bmatrix} 0 & T_1\\ T_1 & 0 \end{bmatrix}\right)=w_A(T_1).$ \end{enumerate} \end{lemma} In 2019, the authors of \cite{Pintu1} established an upper and lower bound for a $2\times 2$
operator matrix. \begin{lemma}\label{lem0002}\textnormal{[Theorem 4.3, \cite{Pintu1}]} \\ Let $T_1, T_2\in \mathcal{B}_A(\mathcal{H}),$ where $A>0.$ If $T=\begin{bmatrix} 0 & T_1\\ T_2 & 0 \end{bmatrix}$ and $B=\begin{bmatrix} A & 0\\ 0 & A \end{bmatrix},$ then \begin{align*} \frac{1}{2} \max\{w_A(T_1+T_2), w_A(T_1-T_2)\}&\leq w_B(T)\\ &\leq \frac{1}{2}\{w_A(T_1+T_2)+ w_A(T_1-T_2)\}. \end{align*} \end{lemma} In 2020, Feki \cite{Feki1} proved the following result. \begin{lemma}\label{lem00003}\textnormal{[Lemma 2.1, \cite{Feki1}]} \\ Let $T=(T_{ij})_{n\times n}$ such that $T_{ij}\in \mathcal{B}_A(\mathcal{H})$ for all $i,j.$ Then $$\|T\|_A\leq \|\widehat{T}\|,$$ where $\widehat{T}=(\|T_{ij}\|_A)_{n\times n}.$ \end{lemma} \section{Main Results} This section is twofold. First, we present some generalizations of $A$-numerical radius inequalities and prove some upper and lower bounds for the $B$-numerical radius of operator matrices. Second, we provide different refinements of $A$-numerical radius inequalities. \subsection{Upper and lower bounds for the $B$-numerical radius of a $2\times 2$ operator matrix.} In this subsection, we establish different upper and lower bounds for the $B$-numerical radius of a $2\times 2$ block operator matrix. We start with the following lemma.
\begin{lemma}\label{l001} Let $T_1, T_2, T_3, T_4\in \mathcal{B}_A(\mathcal{H}).$ Then \begin{enumerate} \item [\textnormal{(i)}] $w_B\left(\begin{bmatrix} T_1 & 0\\ 0 & T_4 \end{bmatrix}\right)\leq w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right).$ \item [\textnormal{(ii)}] $w_B\left(\begin{bmatrix} 0 & T_2\\ T_3 & 0 \end{bmatrix}\right)\leq w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right).$ \end{enumerate} \end{lemma} \begin{proof} Let $ T= \begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}$ and the $B$-unitary operator $U=\begin{bmatrix} I & 0\\ 0 & -I \end{bmatrix}.$\\ Here, $ \begin{bmatrix} T_1 & 0\\ 0 & T_4 \end{bmatrix}=\frac{1}{2}(T+U^{\#_B}TU).$ So, we have \\ (i) \begin{align*} w_B\left( \begin{bmatrix} T_1 & 0\\ 0 & T_4 \end{bmatrix}\right)&= \frac{1}{2}w_B(T+U^{\#_B}TU)\\ &\leq \frac{1}{2}[w_B(T)+w_B(U^{\#_B}TU)]\\ &= \frac{1}{2}[w_B(T)+w_B(T)]\\ &=w_B(T)=w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right). \end{align*} (ii) \begin{align*} w_B\left( \begin{bmatrix} 0 & T_2\\ T_3 & 0 \end{bmatrix}\right)&= \frac{1}{2}w_B(T-U^{\#_B}TU)\\ &\leq \frac{1}{2}[w_B(T)+w_B(U^{\#_B}TU)]\\ &= \frac{1}{2}[w_B(T)+w_B(T)]\\ &=w_B(T)=w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right). \end{align*} \end{proof} The following inequality generalizes \eqref{ineq1}. \begin{thm} Let $T_1, T_2\in\mathcal{B}_A(\mathcal{H}),$ where $A>0.$ If $B=\begin{bmatrix} A & 0\\ 0 & A \end{bmatrix} ,$ then \begin{equation}\label{eq01} \max\{w_A(T_1), w_A(T_2)\}\leq w_B\left(\begin{bmatrix} T_1 & T_2\\ -T_2 & -T_1 \end{bmatrix}\right)\leq w_A(T_1)+w_A(T_2). 
\end{equation} \end{thm} \begin{proof} By using Lemmas \ref{lem0001} and \ref{l001}, we get $$w_A(T_1)= w_B\left(\begin{bmatrix} T_1 & 0\\ 0 & -T_1 \end{bmatrix}\right)\leq w_B\left(\begin{bmatrix} T_1 & T_2\\ -T_2 & -T_1 \end{bmatrix}\right)$$ and $$w_A(T_2)= w_B\left(\begin{bmatrix} 0 & T_2\\ -T_2 & 0 \end{bmatrix}\right)\leq w_B\left(\begin{bmatrix} T_1 & T_2\\ -T_2 & -T_1 \end{bmatrix}\right).$$ Therefore, $$\max\{w_A(T_1), w_A(T_2)\}\leq w_B\left(\begin{bmatrix} T_1 & T_2\\ -T_2 & -T_1 \end{bmatrix}\right).$$ On the other hand, by using Lemma \ref{lem0001}, we have $$w_B\left(\begin{bmatrix} T_1 & T_2\\ -T_2 & -T_1 \end{bmatrix}\right)\leq w_B\left(\begin{bmatrix} T_1 & 0\\ 0 & -T_1 \end{bmatrix}\right)+w_B\left(\begin{bmatrix} 0 & T_2\\ -T_2 & 0 \end{bmatrix}\right)=w_A(T_1)+w_A(T_2).$$ \end{proof} A particular case of the inequality \eqref{eq01} is the following. \begin{remark} If we choose $T_2=T_1$ in inequality \eqref{eq01}, then $$w_A(T_1)\leq w_B\left(\begin{bmatrix} T_1 & T_1\\ -T_1 & -T_1 \end{bmatrix}\right)\leq 2w_A(T_1).$$ \end{remark} We need the following lemma to prove Theorem \ref{t002}. \begin{lemma}\label{l002} Let $T_1, T_2\in \mathcal{B}_A(\mathcal{H}),$ where $A>0.$ If $B=\begin{bmatrix} A & 0\\ 0 & A \end{bmatrix},$ then $$ w_B\left(\begin{bmatrix} T_2 & -T_1\\ T_1 & T_2 \end{bmatrix}\right)= \max\{w_A(T_1+iT_2), w_A(T_1-iT_2)\}.$$ \end{lemma} \begin{proof} Let $T=\begin{bmatrix} iT_2 & -T_1\\ T_1 & iT_2 \end{bmatrix}$ and let $U=\frac{1}{\sqrt{2}}\begin{bmatrix} I & iI\\ iI & I \end{bmatrix}$, which is $B$-unitary. Then $U^{\#_B}TU=\begin{bmatrix} -i(T_1-T_2) & 0\\ 0 & i(T_1+T_2) \end{bmatrix}.$ Using the fact that $w_B(T)=w_B(U^{\#_B}TU),$ we get \begin{align*} w_B(T)=w_B(U^{\#_B}TU)&= w_B\left(\begin{bmatrix} -i(T_1-T_2) & 0\\ 0 & i(T_1+T_2) \end{bmatrix}\right)\\ &=\max\{w_A(-i(T_1-T_2)),w_A(i(T_1+T_2))\}\\ &=\max\{w_A(T_1-T_2),w_A(T_1+T_2)\}.
\end{align*} Replacing $T_2$ by $-iT_2$ in the above identity, we have $$ w_B\left(\begin{bmatrix} T_2 & -T_1\\ T_1 & T_2 \end{bmatrix}\right) =\max\{w_A(T_1+iT_2),w_A(T_1-iT_2)\}.$$ \end{proof} Theorem \ref{t002} provides an upper bound for a block operator matrix of the form $\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}.$ \begin{thm}\label{t002} Let $T_1, T_2, T_3, T_4\in \mathcal{B}_A(\mathcal{H}),$ where $A>0.$ If $T=\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}$ and $B=\begin{bmatrix} A & 0\\ 0 & A \end{bmatrix},$ then \begin{align*} w_B(T)\leq \max\bigg\{\frac{1}{2}w_A(T_1+T_4+i(T_2-T_3)),\frac{1}{2}w_A(T_1+&T_4-i(T_2-T_3))\bigg\} \\ &+\frac{1}{2}(w_A(T_4-T_1)+w_A(T_2+T_3)). \end{align*} \end{thm} \begin{proof} Let $U=\frac{1}{\sqrt{2}}\begin{bmatrix} I & -I\\ I & I \end{bmatrix}$ be $B$-unitary. Using the identity $w_B(T)=w_B(U^{\#_B}TU)$, we have \begin{align*} w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right)&= w_B\left(U^{\#_B}\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}U\right)\\ &=\frac{1}{2}w_B\left(\begin{bmatrix} T_1+T_2+T_3+T_4 & -T_1+T_2-T_3+T_4\\ -T_1-T_2+T_3+T_4 & T_1-T_2-T_3+T_4 \end{bmatrix}\right)\\ &=\frac{1}{2}w_B\left(\begin{bmatrix} T_1+T_4 & T_2-T_3\\ T_3-T_2 & T_1+T_4 \end{bmatrix}+\begin{bmatrix} T_2+T_3 & T_4-T_1\\ T_4-T_1 & -T_3-T_2 \end{bmatrix}\right)\\ &\leq \frac{1}{2}\left\{w_B\left(\begin{bmatrix} T_1+T_4 & T_2-T_3\\ T_3-T_2 & T_1+T_4 \end{bmatrix}\right)+w_B\left(\begin{bmatrix} T_2+T_3 & T_4-T_1\\ T_4-T_1 & -T_3-T_2 \end{bmatrix}\right)\right\}\\ &\leq \frac{1}{2}\{\max(w_A(T_3-T_2+i(T_1+T_4)),w_A(T_3-T_2-i(T_1+T_4)))\\ &\hspace{2.5cm}+w_A(T_4-T_1)+w_A(T_2+T_3)\}\mbox{~~by Lemmas \ref{l002} and \ref{lem0001}.} \end{align*} \end{proof} The following result demonstrates an upper bound for the $B$-numerical radius of a $2\times 2$ operator matrix.
\begin{thm} Let $T_1,T_2,T_3,T_4\in \mathcal{B}_A(\mathcal{H}),$ where $A>0.$ If $B=\begin{bmatrix} A & 0\\ 0 & A \end{bmatrix},$ then $$w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right)\leq \max\{w_A(T_1), w_A(T_4)\}+\frac{w_A(T_2+T_3)+w_A(T_2-T_3)}{2}.$$ \end{thm} \begin{proof} Using a similar argument as in the previous theorem, we have \begin{align*} w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right)&=\frac{1}{2}w_B\left(\begin{bmatrix} T_1+T_4 & T_4-T_1\\ T_4-T_1 & T_1+T_4 \end{bmatrix}+\begin{bmatrix} T_2+T_3 & T_2-T_3\\ T_3-T_2 & -T_3-T_2 \end{bmatrix}\right)\\ &\leq \frac{1}{2}\left[w_B\left(\begin{bmatrix} T_1+T_4 & T_4-T_1\\ T_4-T_1 & T_1+T_4 \end{bmatrix}\right)+w_B\left(\begin{bmatrix} T_2+T_3 & T_2-T_3\\ T_3-T_2 & -T_3-T_2 \end{bmatrix}\right)\right]\\ &\leq \frac{1}{2}\max\{w_A(T_1+T_4+T_4-T_1),w_A(T_1+T_4-T_4+T_1)\}\\ & \hspace{4cm}+\frac{1}{2}\{w_A(T_2+T_3)+w_A(T_2-T_3)\}\mbox{ by Lemma \ref{lem0001}(iv)}\\ &=\max\{w_A(T_1),w_A(T_4)\}+\frac{w_A(T_2+T_3)+w_A(T_2-T_3)}{2}. \end{align*} \end{proof} The following result provides a lower bound for the $B$-numerical radius of a $2\times 2$ operator matrix. \begin{thm} Let $T_1, T_2, T_3,T_4\in \mathcal{B}_A(\mathcal{H}),$ where $A>0.$ If $B=\begin{bmatrix} A & 0\\ 0 & A \end{bmatrix},$ then \begin{align*} w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right)\geq \max\left\{w_A(T_1), w_A(T_4),\frac{w_A(T_2+T_3)}{2}, \frac{w_A(T_2-T_3)}{2}\right\}.
\end{align*} \end{thm} \begin{proof} It follows from Lemma \ref{l001} that \begin{align*} w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right) & \geq \max \left\{w_B\left(\begin{bmatrix} T_1 & 0\\ 0 & T_4 \end{bmatrix}\right), w_B\left(\begin{bmatrix} 0 & T_2\\ T_3 & 0 \end{bmatrix}\right)\right\}\\ &=\max\left\{\max\{w_A(T_1), w_A(T_4)\}, w_B\left(\begin{bmatrix} 0 & T_2\\ T_3 & 0 \end{bmatrix}\right)\right\}\\ & \geq \max\left\{\max\{w_A(T_1), w_A(T_4)\}, \frac{\max(w_A(T_2+T_3),w_A(T_2-T_3))}{2}\right\} \mbox{by Lemma \ref{lem0002}}\\ &=\max\left\{w_A(T_1), w_A(T_4), \frac{w_A(T_2+T_3)}{2},\frac{w_A(T_2-T_3)}{2}\right\}. \end{align*} \end{proof} To prove the next lemma, we need the following identities: \begin{equation}\label{eq001} \frac{a+b}{2}=\max(a,b)-\frac{|a-b|}{2} \end{equation} and \begin{equation}\label{eq002} \frac{a+b}{2}=\min(a,b)+\frac{|a-b|}{2}, \end{equation} for any two real numbers $a$ and $b$. \begin{lemma} Let $T_1,T_2\in \mathcal{B}_A(\mathcal{H}).$ Then \begin{align*} \max(\|T_1&+T_2\|_A^2, \|T_1-T_2\|_A^2)\\ &\leq \min(\|T_1^{\#_A}T_1+T_2^{\#_A}T_2\|_A+\|T_1^{\#_A}T_2+ T_2^{\#_A}T_1\|_A, \|T_1T_1^{\#_A}+T_2T_2^{\#_A}\|_A+\| T_1T_2^{\#_A}+T_2T_1^{\#_A}\|_A) \end{align*} and \begin{align*} \min(\|T_1&+T_2\|_A^2, \|T_1-T_2\|_A^2)\\ &\geq \max(\|T_1^{\#_A}T_1+T_2^{\#_A}T_2\|_A-\|T_1^{\#_A}T_2+ T_2^{\#_A}T_1\|_A, \|T_1T_1^{\#_A}+T_2T_2^{\#_A}\|_A-\| T_1T_2^{\#_A}+T_2T_1^{\#_A}\|_A). \end{align*} \end{lemma} \begin{lemma}\label{lem0004} Let $T_1, T_2\in \mathcal{B}_A(\mathcal{H}).$ Then \begin{align*} \max(\|T_1+T_2\|_A^2, \|T_1-T_2\|_A^2)\geq \max(\|T_1^2+T_2^2\|_A, &\|T_1^{\#_A}T_1+T_2^{\#_A}T_2\|_A,\|T_1T_1^{\#_A}+T_2T_2^{\#_A}\|_A)\\ &\hspace{3cm}+\frac{|\|T_1+T_2\|_A^2-\|T_1-T_2\|_A^2|}{2}.
\end{align*} and \begin{align*} \min(\|T_1+T_2\|_A^2, \|T_1-T_2\|_A^2)\geq \max(\|T_1^2+T_2^2\|_A, &\|T_1^{\#_A}T_1+T_2^{\#_A}T_2\|_A,\|T_1T_1^{\#_A}+T_2T_2^{\#_A}\|_A)\\ &\hspace{3cm}-\frac{|\|T_1+T_2\|_A^2-\|T_1-T_2\|_A^2|}{2}. \end{align*} \end{lemma} The following theorem demonstrates an upper bound for the $B$-numerical radius of a $2\times 2$ operator matrix using \eqref{ineq1} and Lemma \ref{lem0004}. \begin{thm}\label{thm20005} Let $T_1,T_2,T_3,T_4\in\mathcal{B}_A(\mathcal{H}).$ Then $$w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right)\leq \min(\alpha,\beta),$$ where $$\alpha=\left(\frac{\|T_1+T_2\|_A^2+\|T_1-T_2\|_A^2}{2}\right)^{\frac{1}{2}}+\left(\frac{\|T_4+T_3\|_A^2+\|T_4-T_3\|_A^2}{2}\right)^{\frac{1}{2}}$$ and $$\beta=\left(\frac{\|T_1+T_3\|_A^2+\|T_1-T_3\|_A^2}{2}\right)^{\frac{1}{2}}+\left(\frac{\|T_2+T_4\|_A^2+\|T_2-T_4\|_A^2}{2}\right)^{\frac{1}{2}}.$$ \end{thm} \begin{proof} We know that \begin{align*} w_B\left(\begin{bmatrix} T_1& T_2\\ 0& 0 \end{bmatrix}\right)&\leq \left\|\begin{bmatrix} T_1& T_2\\ 0 & 0 \end{bmatrix}\right\|_B \\ &=\left\|\begin{bmatrix} T_1& T_2\\ 0& 0 \end{bmatrix}\begin{bmatrix} T_1& T_2\\ 0& 0 \end{bmatrix}^{\#_B}\right\|_B^{\frac{1}{2}}\\ &=\left\|\begin{bmatrix} T_1& T_2\\ 0& 0 \end{bmatrix}\begin{bmatrix} T_1^{\#_A} & 0\\ T_2^{\#_A}& 0 \end{bmatrix}\right\|_B^{\frac{1}{2}}\\ &=\left\|\begin{bmatrix} T_1T_1^{\#_A}+T_2T_2^{\#_A} & 0\\ 0& 0 \end{bmatrix}\right\|_B^{\frac{1}{2}}\\ &=\|T_1T_1^{\#_A}+T_2T_2^{\#_A}\|_A^{\frac{1}{2}}. \end{align*} By using Lemma \ref{lem0004}, we get \begin{align*} w_B\left(\begin{bmatrix} T_1 & T_2\\ 0& 0 \end{bmatrix}\right)&\leq \left(\max(\|T_1+T_2\|_A^2,\|T_1-T_2\|_A^2)-\frac{|\|T_1+T_2\|_A^2-\|T_1-T_2\|_A^2|}{2}\right)^{\frac{1}{2}}\\ &=\left(\frac{\|T_1+T_2\|_A^2+\|T_1-T_2\|_A^2}{2}\right)^{\frac{1}{2}}. \end{align*} Let us take $U=\begin{bmatrix} 0 & I\\ I & 0 \end{bmatrix},$ where $U$ is $B$-unitary.
Now, we have \begin{align*} w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right)&\leq w_B\left(\begin{bmatrix} T_1 & T_2\\ 0 & 0 \end{bmatrix}\right)+w_B\left(\begin{bmatrix} 0 & 0\\ T_3 & T_4 \end{bmatrix}\right)\\ &= w_B\left(\begin{bmatrix} T_1 & T_2\\ 0 & 0 \end{bmatrix}\right)+w_B\left(U^{\#_B}\begin{bmatrix} T_4 & T_3\\ 0 & 0 \end{bmatrix}U\right) \\ &=w_B\left(\begin{bmatrix} T_1 & T_2\\ 0 & 0 \end{bmatrix}\right)+w_B\left(\begin{bmatrix} T_4 & T_3\\ 0 & 0 \end{bmatrix}\right)\\ &\leq \left(\frac{\|T_1+T_2\|_A^2+\|T_1-T_2\|_A^2}{2}\right)^{\frac{1}{2}}+\left(\frac{\|T_4+T_3\|_A^2+\|T_4-T_3\|_A^2}{2}\right)^{\frac{1}{2}}=\alpha. \end{align*} Applying the previous calculation to $\begin{bmatrix} T_1^{\#_A} & T_3^{\#_A}\\ T_2^{\#_A} & T_4^{\#_A} \end{bmatrix}$ in the place of $\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}$, we obtain \begin{align*} w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right)&= w_B\left(\begin{bmatrix} T_1^{\#_A} & T_3^{\#_A}\\ T_2^{\#_A} & T_4^{\#_A} \end{bmatrix}\right)\\ &\leq \left(\frac{\|T_1^{\#_A}+T_3^{\#_A}\|_A^2+\|T_1^{\#_A}-T_3^{\#_A}\|_A^2}{2}\right)^{\frac{1}{2}}+\left(\frac{\|T_2^{\#_A}+T_4^{\#_A}\|_A^2+\|T_2^{\#_A}-T_4^{\#_A}\|_A^2}{2}\right)^{\frac{1}{2}}\\ &=\left(\frac{\|T_1+T_3\|_A^2+\|T_1-T_3\|_A^2}{2}\right)^{\frac{1}{2}}+\left(\frac{\|T_2+T_4\|_A^2+\|T_2-T_4\|_A^2}{2}\right)^{\frac{1}{2}}=\beta. \end{align*} Hence, we get the desired result. \end{proof} The next result gives a lower bound for the $B$-numerical radius of a $2\times 2$ operator matrix whose second row is zero. \begin{thm} Let $T_1,T_2\in\mathcal{B}_A(\mathcal{H}).$ Then $$w_B\left(\begin{bmatrix} T_1 & T_2\\ 0 & 0 \end{bmatrix}\right)\geq \frac{1}{2}\max(w_A(T_1\pm T_2),w_A(T_1\pm iT_2)).$$ \end{thm} \begin{proof} Let $U=\begin{bmatrix} 0 & I\\I & 0 \end{bmatrix}$ be a $B$-unitary operator.
Then \begin{align*} \max(w_A(T_1+T_2),w_A(T_1-T_2))&=w_B\left(\begin{bmatrix} T_1 & T_2\\T_2 & T_1 \end{bmatrix}\right) \mbox{~~by Lemma \ref{lem0001}}\\ &=w_B\left(\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}+U^{\#_B}\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}U\right)\\ &\leq w_B\left(\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}\right)+w_B\left(U^{\#_B}\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}U\right)\\ &=2w_B\left(\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}\right). \end{align*} Setting $V=\begin{bmatrix} I & 0\\0 & -I \end{bmatrix},$ it is not difficult to see that $V$ is $B$-unitary. Now, \begin{align*} \max(w_A(T_1+iT_2), w_A(T_1-iT_2))&=w_B\left(\begin{bmatrix} T_1 & -T_2\\T_2 & T_1 \end{bmatrix}\right) \mbox{~~by Lemma \ref{l002}}\\ &=w_B\left(V^{\#_B}\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}V+U^{\#_B}\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}U\right)\\ &\leq w_B\left(V^{\#_B}\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}V\right )+w_B\left(U^{\#_B}\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}U\right)\\ &=2w_B\left(\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}\right). \end{align*} Hence, we get $$w_B\left(\begin{bmatrix} T_1 & T_2\\0 & 0 \end{bmatrix}\right)\geq \frac{1}{2}\max(w_A(T_1\pm T_2), w_A(T_1\pm iT_2)).$$ \end{proof} A further upper bound for the $B$-numerical radius of $\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}$ is proved next using Lemma \ref{lem00003}.
\begin{thm}\label{thm20006} Let $T_1,T_2,T_3,T_4\in \mathcal{B}_A(\mathcal{H}).$ Then $$w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right) \leq \min\{\alpha,\beta\},$$ where \begin{align*} \alpha=&\frac{1}{\sqrt{2}}\sqrt{\|T_1\|_A^2+\|T_2\|_A^2+\sqrt{(\|T_1\|_A^2-\|T_2\|_A^2)^2+4\|T_1^{\#_A}T_2\|_A^2}}\\&+\frac{1}{\sqrt{2}}\sqrt{\|T_3\|_A^2+\|T_4\|_A^2+\sqrt{(\|T_3\|_A^2-\|T_4\|_A^2)^2+4\|T_4T_3^{\#_A}\|_A^2}} \end{align*} and \begin{align*} \beta=&\frac{1}{\sqrt{2}}\sqrt{\|T_1\|_A^2+\|T_3\|_A^2+\sqrt{(\|T_1\|_A^2-\|T_3\|_A^2)^2+4\|T_1^{\#_A}T_3\|_A^2}}\\&+\frac{1}{\sqrt{2}}\sqrt{\|T_2\|_A^2+\|T_4\|_A^2+\sqrt{(\|T_2\|_A^2-\|T_4\|_A^2)^2+4\|T_4^{\#_A}T_2\|_A^2}}. \end{align*} \end{thm} We now give a special case of Theorem \ref{thm20006} in the following corollary. \begin{cor} Let $T_1,T_2,T_3,T_4\in \mathcal{B}_A(\mathcal{H}).$\\ (a) If $T_1^{\#_A}T_2=0= T_4T_3^{\#_A},$ then $$w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right) \leq \max(\|T_1\|_A, \|T_2\|_A)+\max(\|T_3\|_A,\|T_4\|_A).$$ (b) If $T_1^{\#_A}T_3=0= T_4^{\#_A}T_2,$ then $$w_B\left(\begin{bmatrix} T_1 & T_2\\ T_3 & T_4 \end{bmatrix}\right) \leq \max(\|T_1\|_A, \|T_3\|_A)+\max(\|T_2\|_A,\|T_4\|_A).$$ \end{cor} \begin{remark} Note that equality holds in \textnormal{Theorem \ref{thm20005}} and \textnormal{Theorem \ref{thm20006}} by setting $T_1=I, T_2=T_3=T_4=0$. So the $A$-numerical radius inequalities for $2\times 2$ operator matrices in \textnormal{Theorem \ref{thm20005}} and \textnormal{Theorem \ref{thm20006}} are sharp. \end{remark} \subsection{Refinements of $A$-numerical radius inequality for an operator} In this subsection, we present two refinements of \eqref{ineq1}. To do this, we need the identity \eqref{eq001}. The first refinement of inequality \eqref{ineq1} is proved next. \begin{thm} Let $T_1,T_2\in \mathcal{B}_A(\mathcal{H}).$ Then \begin{align*} w_B\left(\begin{bmatrix} 0 & T_1\\ T_2 & 0 \end{bmatrix}\right) &\leq w_A(T_1)+w_A(T_2)-\frac{1}{2}|w_A(T_1+T_2)-w_A(T_1-T_2)|.
\end{align*} In particular, $$\frac{\|T_1\|_A}{2}+\frac{\|Re T_1^{\#_A}\|_A-\|Im T_1^{\#_A}\|_A}{2}\leq w_A(T_1).$$ \end{thm} \begin{proof} By Lemma \ref{lem0002} and equality \eqref{eq001}, we have \begin{align*} w_B\left(\begin{bmatrix} 0 & T_1\\ T_2 & 0 \end{bmatrix}\right) &\leq \frac{1}{2} \{w_A(T_1+T_2)+w_A(T_1-T_2)\}\\ &=\max\{w_A(T_1+T_2), w_A(T_1-T_2)\}-\frac{1}{2}|w_A(T_1+T_2)-w_A(T_1-T_2)|\\ &\leq w_A(T_1)+w_A(T_2)-\frac{1}{2}|w_A(T_1+T_2)-w_A(T_1-T_2)|. \end{align*} Replacing $T_1$ by $T_1^{\#_A}$ and $T_2$ by $(T_1^{\#_A})^{\#_A}$, we get \begin{align*} & w_B\left(\begin{bmatrix} 0 & T_1^{\#_A}\\ (T_1^{\#_A})^{\#_A} & 0 \end{bmatrix}\right)\\ &\leq w_A(T_1^{\#_A})+w_A((T_1^{\#_A})^{\#_A})-\frac{1}{2}|w_A(T_1^{\#_A}+(T_1^{\#_A})^{\#_A})-w_A(T_1^{\#_A}-(T_1^{\#_A})^{\#_A})|\\ & =2w_A(T_1^{\#_A})-|\|Re T_1^{\#_A}\|_A-\|Im T_1^{\#_A}\|_A|. \end{align*} So, $$\frac{\|T_1\|_A}{2}+\frac{\|Re T_1^{\#_A}\|_A-\|Im T_1^{\#_A}\|_A}{2}\leq w_A(T_1^{\#_A})=w_A(T_1).$$ \end{proof} Another refinement of inequality \eqref{ineq1} is presented next. \begin{thm} Let $T_1, T_2\in \mathcal{B}_A(\mathcal{H}).$ Then \begin{align*} w_B\left(\begin{bmatrix} 0 & T_1\\ T_2 & 0 \end{bmatrix}\right)+\frac{\|T_1\|_A+\|T_2\|_A}{2}+\frac{1}{2}\left|w_A(T_1+T_2)-\frac{\|T_1\|_A+\|T_2\|_A}{2}\right|+&\frac{1}{2}\left|w_A(T_1-T_2)-\frac{\|T_1\|_A+\|T_2\|_A}{2}\right|\\ &\leq2(w_A(T_1)+w_A(T_2)). \end{align*} In particular, $$\frac{\|T_1\|_A}{2}+\frac{1}{4}\left|\|Re( T_1^{\#_A})\|_A-\frac{\|T_1\|_A}{2}\right|+\frac{1}{4}\left|\|Im( T_1^{\#_A})\|_A-\frac{\|T_1\|_A}{2}\right|\leq w_A(T_1).$$ \end{thm} \vspace{.6cm} \noindent {\small {\bf Acknowledgments.}\\ We thank the {\bf Government of India} for introducing the {\it work from home initiative} during the COVID-19 crisis. } \section{References} \bibliographystyle{amsplain}
1801.06178
\section{Introduction} A number of strongly correlated materials with a metallic parent state exhibit a variety of non-Fermi liquid (NFL) properties. Some of the best known examples of such behavior occur in the ruthenates \cite{hussey,gegenwart,kapitulnik,allen}, cobaltates~\cite{Taillefer1,Ong1}, iron-based superconductors~\cite{Matsuda14} and heavy-fermion materials~\cite{Stewart}, amongst others. Some of these materials display striking non-Fermi liquid behavior over a broad range of temperatures above an emergent low energy scale but develop Fermi liquid-like properties and well defined Landau quasiparticles below this scale, while others remain non-Fermi liquid-like down to the lowest temperatures. Perhaps the most striking example of the latter behavior occurs in the ``strange-metal'' regime~\cite{Takagi,Keimer15} of the cuprate superconductors and some quantum critical heavy-fermion systems~\cite{rmpqcp,Stewart}. One of the most dramatic properties associated with many of these materials is a linear dependence of the dc resistivity on temperature without any sign of saturation. In the cuprates, much of the phenomenology of the normal state is apparently well described by the ``marginal Fermi liquid'' (MFL) model~\cite{Varma}, which postulates the existence of marginally defined quasiparticles, whose scattering rate is comparable to their energy. Broadly speaking, a few theoretical frameworks have been proposed to explain the phenomenology of strange metals: (i) Quantum critical fluctuations of a bosonic degree of freedom coupled to a Fermi-surface leading to a non-Fermi liquid ground state, which dominates the properties of the system in a range of temperatures above the critical point. Concrete examples of such theories involve the situation where an order-parameter field (such as a nematic or antiferromagnetic order-parameter) at its critical point couples to an electronic Fermi-surface \cite{rmpqcp}.
Much progress has been made in understanding the properties of this class of metallic quantum critical points in recent years~\cite{lee_review}. (ii) A distinct class of non-Fermi liquids arises at a critical point driven by electronic fluctuations associated with the destruction of the Fermi surface. Examples include a Kondo breakdown transition\footnote{The onset of antiferromagnetism (as a function of some tuning parameter) in several heavy Fermi liquids is known to have a striking non-Hertz-Millis character and is accompanied by a dramatic change in the Fermi-surface volume.} in a heavy Fermi liquid~\cite{Coleman2000,Colemanetal,Si1,Si2,TSFL} and a Mott transition between a metal and a quantum disordered insulator~\cite{TS08,TSFL}. Such non-Fermi liquid quantum critical points have been argued~\cite{TS08,TSmott} to possess a critical Fermi surface, {\it i.e.}, the electronic excitations at the critical point are characterized by the presence of a sharply defined Fermi surface but with no sharp Landau quasiparticles.\footnote{Critical Fermi surfaces are also expected to occur at some quantum critical points driven by fluctuations of a Landau order parameter associated with ordering at zero momentum.} Currently known concrete low-energy theories for such quantum critical points involve fractionalized degrees of freedom and associated dynamical gauge fields. Theoretical progress has been possible on a few examples of such theories \cite{TSFL,TS08,TSmott,CFL_FL,OM,DChiggs}. While these concretely tractable examples are extremely useful, much more remains mysterious about the general theory of quantum critical points associated with the `death' of a Fermi surface.\footnote{In particular, in all the examples so far, there is a remnant `ghost' Fermi surface of fractionalized degrees of freedom once the electronic Fermi surface dies.
It is not known if continuous quantum phase transitions can occur to phases where there is no such ghost.} (iii) Instead of appearing just at a critical point, a non-Fermi liquid can arise as a stable zero temperature phase, as has been observed for instance in numerical studies of lattice models \cite{mishmash}. A classic example of such non-Fermi liquid behavior occurs in a two-dimensional electron gas under high magnetic fields, when a compressible metallic phase is realized at a filling of $\nu=1/2$ \cite{HLR}. Indications of such non-Fermi liquid quantum phases have also been reported in correlated mixed-valence materials \cite{ybalb1,ybalb2}. (iv) Finally, in the limit of sufficiently strong interactions and at intermediate temperatures, it is possible that strange metal behavior arises generically without tuning to the vicinity of a quantum critical point. However, the ground state is a Landau Fermi liquid or some other conventional state (e.g. a superconductor) and the strange metal regime appears only as a crossover at higher temperatures. Despite all this progress in the theory of non-Fermi liquids, there is no clear mechanism that produces a linear in $T$ resistivity over a broad range of temperature in quantum critical or other non-Fermi liquids in translationally invariant models as a result of strong local electronic interactions. The phenomenological ``marginal Fermi liquid'' theory assumes coupling to a bosonic fluctuating mode that gives linear resistivity~\cite{Varma}; however, it is not clear how to derive such a bosonic spectrum from a microscopic model. The results of recent quantum Monte Carlo (QMC) simulations of an Ising nematic transition~\cite{Lederer2017} are consistent with a linear behavior of the resistivity at the quantum critical point.\footnote{These results are subject to uncertainties associated with analytical continuation from imaginary to real time.
From the imaginary time data, one can extract ``resistivity proxies'' that coincide with the dc resistivity under certain assumptions, such as the absence of sharp features in the frequency-dependent conductivity over a scale $\omega \lesssim T$. The validity of these assumptions is hard to assess from imaginary-time data, and has to be checked independently.} There is currently no theoretical understanding of these results. Empirically, it is likely that these different routes to non-Fermi liquid physics are realized in different materials. Our focus in this paper is on route (iv) above. In a number of different systems (for example, in some cobaltates~\cite{Taillefer1,Ong1} and ruthenates~\cite{Tyler1998,Bruin13}) it is indeed seen that there is a wide intermediate temperature range $T_{UV} \gg T \gg T_{\textnormal{coh}}$ where strange metallic transport is observed, including a non-Fermi liquid temperature-dependent resistivity with values exceeding the Mott-Ioffe-Regel limit. As the temperature drops below a low `coherence scale' $T_{\textnormal{coh}}$, there is a crossover to more conventional behavior. Importantly, it does not appear that $T_{\textnormal{coh}}$ can be pushed close to zero by tuning some parameter,\footnote{It is worth pointing out that this is likely {\it not} the situation for the cuprate strange metal and in some heavy electron materials like YbRh$_2$Si$_2$ \cite{rmpqcp}. In both these systems, by tuning one parameter, it has been possible to stabilize the NFL physics to ultra-low $T$, suggesting that $T_{\textnormal{coh}}$ can, in principle, be tuned to zero.} suggesting that it may be fundamentally impossible to stabilize such NFL states at zero temperature. In other words, the intermediate-$T$ NFL physics of these systems may not in principle be controlled by $T = 0$ Infra-Red (IR) fixed points with a finite number of relevant perturbations.
We call such intermediate-$T$ non-Fermi liquid states examples of ``IR-incomplete'' states of matter (see Ref. \cite{tskitp2011} for a possibly useful exposition). By themselves, they cannot be the deep IR theory of any state of matter and hence require IR-completion. \newpage Examples include electron-phonon systems above their Debye temperature~\cite{ziman}, lattice models with bounded kinetic energy at high $T$~\cite{lindner,Oganesyan}, spin-incoherent Luttinger liquids~\cite{sill}, electrons coupled to a lattice of bound-states \cite{PC98}, holographic non-Fermi liquids~\cite{Liu1,Liu2}, and some states found in DMFT calculations at finite temperature~\cite{DMFT,kotliar}. Common to many of these examples of IR-incomplete theories is that they have extensive residual low-$T$ entropy ({\it i.e.} the entropy extrapolated to $T = 0$ from the regime in which the theory applies is non-zero), which is then relieved below $T_{\textnormal{coh}}$, leading to a crossover to a conventional state. Progress in understanding strongly interacting IR-incomplete non-Fermi liquids has been hindered by the lack of suitable controlled theoretical techniques. The Sachdev-Ye-Kitaev (SYK) model~\cite{SY,kitaev_talk,Parcollet1,Parcollet2,FuSS,SS15,Maldacena_syk,kitaevsuh,Altman17}, consisting of a large number of degrees of freedom coupled via a random all-to-all interaction, provides a window into the behavior of strongly coupled systems with no quasiparticles. The model is $(0+1)-$dimensional, and thus it does not contain information about transport. Higher dimensional generalizations of the model have been considered~\cite{Parcollet1,Burdin2002,Gu17,SS17,DVK17,Balents,hongyao}. Refs.~\cite{Parcollet1,Burdin2002} studied lattice models of itinerant fermions coupled to spins with long-ranged all-to-all interactions. Refs.~\cite{Gu17,Balents,SS17} considered lattice models with an SYK dot placed on every site, with random short-ranged inter-site couplings.
The charge and thermal transport properties have been computed. The solutions of these models have many appealing characteristics, such as a locally quantum critical, non-Fermi liquid crossover regime where the resistivity is linear in temperature and quasi-particles are destroyed. In all of the above models, translational symmetry is strongly broken, raising a number of questions: (i) Does quenched disorder play an essential role in the behavior of strange metals, as suggested in Ref. \cite{subir_kitptalk}, or could it be realized even in a perfectly crystalline system? (ii) Can a non- (or marginal-)Fermi liquid with a critical Fermi-surface (to be defined below) appear within this class of models, and what are its transport and other related properties? (iii) Does a non-Fermi liquid with a critical Fermi surface show quantum oscillations in an external applied magnetic field? In order to address these questions, in this work we construct a set of {\it translationally invariant} models that can be solved exactly in the large $N$ limit, where $N$ is the number of fermion flavors (or ``orbitals'') per site, coupled by a frustrated on-site interaction. Our construction is therefore different from other constructions of higher-dimensional generalizations of SYK-type models at a fundamental level. The crucial new ingredient, namely the exact translation symmetry (instead of a statistical symmetry) at the level of each realization, will allow us to address many interesting questions beyond the scope of previous works. Specifically, we will address questions related to the possibility of obtaining non-Fermi liquid behavior in models without disorder, the existence of a sharp Fermi surface (or lack thereof) in translation invariant non-Fermi liquids, the fate of quantum oscillations due to critical Fermi surfaces beyond semiclassical quantization of quasiparticle-based theories, and other related issues.
Our paper will also lead to new insights into a class of non-Fermi liquid metals, namely the ``IR-incomplete'' NFLs (of which there are numerous examples, as highlighted later), and will potentially be useful for future developments in the field. Within our construction, if there is a single band of bandwidth $W$, and the typical interaction strength is $U$, we find that the system crosses over at a temperature $T\sim W^2/U (\equiv\Omega^*)$ from a low-temperature Landau Fermi liquid ground state to a locally quantum critical non-Fermi liquid state, where the Fermi surface is completely destroyed, but there still is a well-defined Fermi energy. The resistivity crosses over from $\rho \sim T^2$ at $T\ll \Omega^*$ to $\rho \sim T$ at $T\gg \Omega^*$; the value of the resistivity at the crossover scale $(T\sim\Omega^*)$ is $\rho\approx h/Ne^2$. In addition, the two salient features of the one-band model are as follows: (i) At strong coupling (i.e. $U\gg W$) and at low temperatures compared to $\Omega^*$, the momentum dependence of the electron self-energy becomes parametrically smaller in $(W/U)$ compared to the frequency dependence. The resulting Fermi liquid has a sharp Fermi surface but an essentially momentum-independent self-energy. At temperatures higher than $\Omega^*$, this sharp Fermi surface is lost and the electronic excitations become incoherent. (ii) In the incoherent regime, even though the system is translationally invariant, as a result of the locally critical structure of the correlation functions and strong momentum dissipation on the lattice, the previously established mechanism for incoherent transport in disordered SYK-like models \cite{Parcollet1,Balents} continues to be applicable to our one-band model. In the Fermi liquid regime, the resistivity is finite and arises from umklapp scattering.
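As a quick arithmetic check of the quoted asymptotics (with all $O(1)$ prefactors set to one, purely for illustration), the Fermi-liquid form $\rho \sim (h/Ne^2)(T/\Omega^*)^2$ and the incoherent form $\rho \sim (h/Ne^2)(T/\Omega^*)$ indeed match at the crossover $T \sim \Omega^* = W^2/U$, where both equal $h/Ne^2$:

```python
# Schematic check of the resistivity asymptotics; O(1) prefactors set to 1,
# parameter values purely illustrative.
W, U, N = 1.0, 10.0, 100       # bandwidth, interaction strength, orbitals per site
h_over_Ne2 = 1.0 / N           # crossover resistivity, in units of h/e^2
Omega_star = W**2 / U          # coherence scale Omega* = W^2/U

def rho_FL(T):                 # Fermi-liquid regime, T << Omega*: rho ~ T^2
    return h_over_Ne2 * (T / Omega_star)**2

def rho_LICM(T):               # incoherent regime, T >> Omega*: rho ~ T
    return h_over_Ne2 * (T / Omega_star)

# the two asymptotics meet at T = Omega*, where rho ~ h/(N e^2):
assert abs(rho_FL(Omega_star) - h_over_Ne2) < 1e-15
assert abs(rho_LICM(Omega_star) - h_over_Ne2) < 1e-15
```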
Our results for the translationally invariant one-band model shed interesting light on the validity of `locally critical' theories in a microscopic setting, where the self-energy is allowed to be momentum dependent a priori but becomes unimportant in the large-$N$ and strong coupling regime. If there are multiple bands with parametrically different bandwidths (or an itinerant band coupled to localized electrons, as in a Kondo lattice), a richer behavior is observed. In addition to the low temperature Fermi liquid and the high temperature incoherent regime, we find an intermediate range of temperatures where the correlations in the narrow band are locally quantum critical, while the band with the larger bandwidth forms a marginal Fermi liquid, with a single-particle inverse lifetime proportional to $\max(\varepsilon, T)$, where $\varepsilon$ is the energy. This region realizes the marginal Fermi liquid phenomenological model proposed in Ref.~\cite{Varma}, with the density (or flavor) fluctuations of the narrow, incoherent band (which have SYK-like correlations) playing the role of the critical bosonic degree of freedom. Importantly, our microscopic electronic model defined on the lattice has only {\it local} interactions and preserves translational symmetry. Moreover, even though the light electrons have a feedback on the heavy electrons, there remains a parametrically broad regime of temperatures where the SYK form of the correlations in the heavy band survives. In the regime where the heavy electrons become incoherent, there can be strong momentum dissipation in the lattice model, leading to a finite $T-$linear resistivity. Within the multi-band setup, we also consider models where the on-site interactions for one of the bands involve $q>4$-body terms, which allows us to obtain non-Fermi liquids with a singular self-energy and a critical Fermi-surface.
Interestingly, upon applying a magnetic field, both the marginal Fermi liquid and the non-Fermi liquid regimes are characterized by quantum oscillations of the magnetization as a function of the inverse of the field. The period of the oscillations is the same as that of an ordinary Fermi liquid, but the temperature dependence of their amplitude is different from that of a Fermi liquid. It has been proposed that transport in the strange metal regime \cite{Bruin13} can be understood in terms of the conjectured ``Planckian'' bound on relaxation rates, $1/\tau \lesssim k_B T / \hbar$~\cite{QPT, Zaanen04}. It is interesting to examine our results in the context of this proposal; however, there is no unique definition for a ``transport scattering rate''. One can naively choose to define it from the dc conductivity by fitting it to a `Drude-like' form $\sigma = ne^2 \tau_{\textnormal{dc}}/m^*$, where $m^*$ is the effective mass of the low-temperature Fermi liquid state, and expect a bound on $\tau_{\textnormal{dc}}~ (\sim 1/T)$.\footnote{This is the definition used in Ref.~\cite{Bruin13}. One may alternatively define a scattering rate by expressing $\sigma \propto \kappa v_F^{*2} \tau_{\textnormal{d}}$, where $\kappa$ is the compressibility, or $\sigma \propto \omega_p^2 \tau_\textnormal{p}$, where $\omega_p$ is the plasma frequency.} In the two-band non-Fermi liquid state described in Sec.~\ref{nfl}, we find that $1/\tau_{\textnormal{dc}}$ has a non-Planckian form: $1/\tau_{\mathrm{dc}} \sim T^\alpha$ with $\alpha<1$. Alternatively, a natural way of defining the transport scattering rate is to use the temperature-dependent crossover frequency scale, $1/\tau_{\textnormal{opt}}$, across which the optical conductivity crosses over from its high frequency regime to the dc limit.
For the models considered below, we find that $\tau_{\textnormal{opt}}$ satisfies a Planckian-type bound with $\tau_{\textnormal{opt}}^{-1} \le a k_BT/\hbar$, where $a$ is an $O(1)$ number.\footnote{There are examples of models that violate this bound on $\tau_{\mathrm{opt}}$, however. See, e.g., Ref.~\cite{Werman}.} Thus, the question of the existence of a bound requires a sharp definition of what one means by the ``scattering rate.'' In light of the phenomenologically appealing features of the solution of these models, it is interesting to ask what lessons we might learn and apply to real correlated materials described by some generic model. Restricting to IR-incomplete non-Fermi liquids, it is interesting to consider the structure of a coarse-grained description. We expect that there will be a few distinct universality classes of such non-Fermi liquids with different coarse-grained descriptions. The models studied in this paper suggest one possible universal route to non-Fermi liquid behavior. Specifically, we propose that in a class of generic systems that show intermediate-$T$ NFL physics, there is an emergent large length scale $\ell \gg a$ (the microscopic scale) such that within patches of size $\ell$ the system is maximally chaotic (in the sense that it saturates the chaos bound of Ref.~\cite{Maldacena2016}; see Appendix \ref{chaos} for details), though globally, {\it i.e.}, at longer scales, it may not be. Further, we expect that the assumption of maximal chaos severely restricts the structure of correlators within such a patch. A coarse-grained description of the macroscopic physics - appropriate at scales much longer than $\ell$ - can then be built by coupling together maximally chaotic bubbles with generic interactions. Note that the $(0+1)$-dimensional SYK models are well known to be maximally chaotic. Thus, the models we study may be viewed as a concrete example of such a coarse-grained effective model.
In general, the appropriate description of a maximally chaotic bubble in such a metal will not likely be an SYK-like model, and will in the future have to be replaced by a better theory that takes into account spatial locality within each bubble. Nevertheless, these solvable models point to the importance of maximally chaotic intermediate-scale bubbles as a possible universal route to a class of non-Fermi liquids. The rest of this paper is organized as follows: we introduce our model of a strongly interacting translationally invariant one-band metal in section \ref{mod} and compute the fermion Green's function, thermodynamic and transport properties in sections \ref{spec}, \ref{thermo} and \ref{trans1}, respectively. In section \ref{scaling} we provide a very simple qualitative understanding of these one-band models which demystifies their properties and provides a complementary approach to analyzing the key features of the model. We introduce an additional band with a parametrically smaller bandwidth and study the effect of inter-band interactions in section \ref{sec:MFL}. We compute the fermion Green's function in section \ref{MFLgreen} and find a regime with marginal Fermi liquid behavior. We explore the thermodynamic and transport properties associated with the MFL in sections \ref{thermomfl} and \ref{transmfl}, respectively. The two-band model is generalized in section \ref{nfl}, where we find a regime with non-Fermi liquid behavior and a singular self-energy with a variable exponent; the thermodynamic and transport behavior are discussed in sections \ref{thermonfl} and \ref{transnfl}. For the generalized model, we explore the ``$2K_F$'' singularities and quantum oscillations in the magnetization as a function of an external magnetic field as a result of the presence of the critical Fermi surface in sections \ref{2kfnfl} and \ref{qo}, respectively.
On the basis of our study of all the models with locally critical degrees of freedom, we propose some general constraints on models with local quantum criticality in section \ref{LC}. Finally, in section \ref{sec:discussion} we conclude with a summary of our results and their relation to other recent works. In section \ref{conj} we also present our conjectures for intermediate-scale non-Fermi liquid physics in generic strongly correlated models and explore their consequences for the phenomenology of a wide variety of non-Fermi liquid metals. As an interesting exercise, in Appendix~\ref{syk2} we study the exactly solvable toy problem with $q=2$ (i.e., a random matrix) in the presence of uniform hopping terms, in order to shed some light on issues related to transport. A number of accompanying technical details appear in the appendices. \section{One-Band Model} \label{mod} Let us begin with a microscopic model in $d$ dimensions on a hypercubic lattice ($d=2$ will be of primary interest) with $N$ orbitals per site and fermionic operators defined by $c^{\dagger}_{{\vec r},\ell}$, $c_{{\vec r},\ell}$ ($\ell=1,...,N$). The fermions satisfy the usual anti-commutation algebra $\{c_{{\vec r},\ell}, c_{{\vec r}',\ell'}^{\dagger}\} = \delta_{\ell\ell'} \delta_{{\vec r}\r'}$. We assume that there is a global $U(1)$ symmetry corresponding to a {\it single} conserved density ($V\equiv$ volume), $Q_c = \sum_{{\vec r},\ell} \langle c_{{\vec r}\ell}^{\dagger} c_{{\vec r}\ell}\rangle/(NV)$. The value of $0<Q_c<1$ can be tuned by a chemical potential $\mu_c$.
The Hamiltonian is given by \begin{eqnarray} H_c = \sum_{{\vec r},{\vec r}'} \sum_{\ell}(-t^c_{{\vec r},{\vec r}'} - \mu_c \delta_{{\vec r}\r'}) c_{{\vec r} \ell}^\dagger c_{{\vec r}'\ell} + \frac{1}{(2N)^{3/2}}\sum_{{\vec r}}\sum_{ijk\ell} U^c_{ijk\ell} c^\dagger_{{\vec r} i}c^\dagger_{{\vec r} j} c_{{\vec r} k} c_{{\vec r} \ell}, \label{hc1} \end{eqnarray} where the hopping terms between sites ${\vec r}$ and ${\vec r}'$, $t^c_{{\vec r}\r'}$, are diagonal in the orbital subspace and depend only on $|{\vec r}-{\vec r}'|$ (assumed to be identical for all orbitals). The interaction term, $U^c_{ijk\ell}$, is purely on-site and is properly antisymmetrized, with $U^c_{ijk\ell} = -U^c_{jik\ell} = -U^c_{ij\ell k}$ and $U^c_{ijk\ell} = U^{c}_{k\ell ij}$. The values of $U^c_{ijk \ell}$ are assumed to be independent of the site-label, ${\vec r}$ (see Fig.~\ref{model}(a) for a caricature of the model; Fig.~\ref{model}(b) elucidates the structure of interactions within each site). The model can be viewed as a lattice of Sachdev-Ye-Kitaev (SYK)~\cite{SY,kitaev_talk,Parcollet1,Parcollet2,FuSS,SS15} quantum dots with identical on-site interactions, connected by orbital-diagonal, translationally invariant hopping matrix elements.\footnote{A one-dimensional field theory with similar translationally-invariant interactions has been considered in Ref.~\cite{berkooz2017comments}.} The model (\ref{hc1}) is difficult to solve. However, just as in the SYK model, if we consider the interaction terms $U^c_{ijk\ell}$ to be random, independent variables with zero mean, and take the limit $N\rightarrow \infty$, then it is possible to compute properties of the model {\it averaged over realizations of $U^c_{ijk\ell}$}. It is important to note that we are not only assuming that the coupling constants on different sites have the same distribution; rather, in every realization they are identical to each other, and hence the Hamiltonian defined in Eq.~(\ref{hc1}) is translationally invariant.
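As a concrete illustration of this construction (the Gaussian draw and the particular symmetrization below are our own convenient choices, not unique), one can generate a single coupling tensor obeying the stated antisymmetry properties and then place the {\it same} tensor on every site, so that translation invariance holds realization by realization:

```python
import numpy as np

def draw_couplings(N, Uc, seed=0):
    """One realization of the on-site tensor U^c_{ijkl}: Gaussian with zero mean
    and variance of order Uc^2 (up to the O(1) renormalization introduced by the
    symmetrization below), antisymmetric in (i,j) and in (k,l), and symmetric
    under the pair exchange (ij) <-> (kl)."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0.0, Uc, size=(N, N, N, N))
    U = U - U.transpose(1, 0, 2, 3)     # antisymmetrize in (i, j)
    U = U - U.transpose(0, 1, 3, 2)     # antisymmetrize in (k, l)
    U = U + U.transpose(2, 3, 0, 1)     # symmetrize under (ij) <-> (kl)
    return U

N, Uc, n_sites = 8, 1.0, 4
U_onsite = draw_couplings(N, Uc)
# The SAME tensor sits on every site: translation invariance is exact in each
# realization, not merely statistical.
U_lattice = [U_onsite] * n_sites
```

The subsequent symmetrizations preserve the earlier ones, so the final tensor carries all three properties simultaneously.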
For convenience, we set the distribution of the coupling constants to be Gaussian. The distribution satisfies $\overline{U^c_{ijk\ell}}=0$ and $\overline{(U^c_{ijk\ell})^2} \equiv U^2_c$, where $U_c$ characterizes the strength of the interactions. The other energy scale in our problem is the free electrons' bandwidth, which we denote by $W_c$. It is believed that the properties of the SYK model are self-averaging, in the sense that the correlation functions of a typical realization are close to those of the mean, up to $1/N$ corrections. In Appendix \ref{app:selfaverage}, we demonstrate that the standard deviations and higher cumulants of the correlation functions in our model are suppressed by powers of $1/N$. We therefore expect that the correlation functions in our model are self-averaging in the large $N$ limit, as in the single-site SYK model. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{fig.pdf} \end{center} \caption{(a) A two-dimensional lattice where each site contains $N$ orbitals (represented by different colors). The hoppings, $t^c_{{\vec r}\r'}$, between any neighboring sites (colored arrows) are diagonal in orbital-index. Each site is identical and the system is translationally invariant. (b) The internal structure of a single site with $N$ orbitals. 
The on-site interactions, $U^c_{ijk\ell}$, are quartic in the fermion operators, with all orbital indices unequal.} \label{model} \end{figure} \subsection{Fermion Green's Function} \label{spec} The fermion Green's function can be analyzed diagrammatically, such that the large-$N$ saddle-point solution reduces to studying the following set of equations self-consistently, \begin{subequations} \begin{eqnarray} \label{SP1} G_c({\vec k},i\omega) &=& \frac{1}{i\omega - \varepsilon_{\vec k} - \Sigma_c({\vec k},i\omega)}, \label{SPc_a}\\ \Sigma_c({\vec k},i\omega) &=& - U_c^2 \int_{{\vec k}_1} \int_{\omega_1} G_c({\vec k}_1,i\omega_1)~\Pi_c({\vec k}+{\vec k}_1,i\omega+i\omega_1), \label{SPc_b}\label{selfenergyc} \\ \Pi_c({\vec q},i\Omega) &=& \int_{{\vec k}} \int_{\omega} G_c({\vec k},i\omega)~G_c({\vec k}+{\vec q},i\omega+i\Omega), \label{SPc_c} \end{eqnarray} \end{subequations} where $\int_{\vec k}\equiv\int d^d{\vec k}/(2\pi)^d$ and $\varepsilon_{\vec k}$ is the dispersion for the $c-$band. Formally, the above set of equations corresponds to resumming an infinite class of `watermelon-diagrams', as shown in Fig.~\ref{se}. One can arrive at the same set of saddle-point equations by starting from the path-integral formulation, as described in Appendix \ref{pif}. In Sec. \ref{scaling}, we provide a simple alternate derivation of the results for the one band model using scaling-type arguments which provide much physical insight. \begin{figure} \begin{center} \includegraphics[width=0.6\columnwidth]{SE_one.pdf} \end{center} \caption{The self-energy diagram, $\Sigma_c$, for $c-$fermions with orbital index $i$ in the single-band model due to $U_c$. The solid black lines represent fully dressed Green's functions, $G_c({\vec k},\omega)$; see Eq.~(\ref{SP1}). The dashed line corresponds to $U_c^2$ contraction and carries no frequency/momentum. 
} \label{se} \end{figure} As we shall now show, the fermionic spectral function has qualitatively different behavior at different temperatures. When the temperature is much lower than the characteristic crossover scale $\Omega^*_c \equiv W_c^2 / U_c$, the spectral function has a Fermi-liquid-like form. In the interesting case $U_c \gg W_c$, there is a second regime defined by $\Omega^*_c \ll T \ll U_c$, where the spectral function has an incoherent, local form without any remnant of a Fermi-surface. To make this statement more precise, we can take the limit of $U_c\rightarrow\infty$ keeping $W_c$ finite (such that $\Omega_c^*$ collapses to zero), and then take the limit of $T\rightarrow0$, thus obtaining a compressible phase of electronic matter without quasiparticle excitations in a clean system, lacking any sharp momentum-space structure. We refer to this state as a local incoherent critical metal (LICM). To analyze the equations~(\ref{SPc_a}-\ref{SPc_c}), we focus on the two extreme limits of $T$ (or $\omega$) that are either much larger or much smaller than $\Omega^*_c$. In the limit $T\ll \Omega^*_c$, we find that the system follows Fermi liquid behavior at sufficiently low frequencies. To show this, let us use a Fermi liquid-like {\it ansatz} for the fermionic self-energy. At low frequencies, we assume that $\Sigma_c$ has the following form near the Fermi surface: \begin{eqnarray} \Sigma_c({\vec k},i\omega) = -i(Z^{-1} - 1) \omega + (\tilde{v}_F - v_F)k + \dots, \label{Sigma_c} \end{eqnarray} where $Z$ is the quasiparticle residue, to be determined self-consistently, $k=|{\vec k}-{\vec k}_F|$ (${\vec k}_F$ is the Fermi momentum), $\tilde{v}_F~(v_F)$ are the renormalized (bare) Fermi-velocities with the renormalization $\tilde{v}_F/v_F=A$ to be determined self-consistently, and the $\dots$ denote higher power terms in an expansion in $\omega,~k$.
We stress that $\tilde{v}_F$ is different from the effective Fermi velocity $v^*_F = Z \tilde{v}_F$, which is the physical speed with which quasi-particles propagate. For simplicity, we have dropped the constant term, which can be absorbed in the chemical potential. Inserting this form into the self-consistency equations~(\ref{SPc_a}-\ref{SPc_c}), we obtain after a standard computation (see Appendix {\ref{M1A}} for details) \begin{eqnarray} \Pi_c({\vec q}, i\Omega) = Z \nu_0 \left(1 - \frac{|\Omega|}{\sqrt{(Z \tilde{v}_F q)^2 + \Omega^2}} +O(q^2)\right). \label{Pi_c} \end{eqnarray} Here, $\nu_0 \sim k_F^d/W_c$ is the bare density of states at the Fermi energy. (We set the units of length such that the lattice spacing $a=1$.) In Eq.~(\ref{Pi_c}) we have taken into account the contribution of the quasi-particle poles of the Green's functions at $i\omega = Z \tilde\varepsilon_{\vec k}$, and ignored the additional branch cut singularities, which turn out not to change the final result qualitatively. Next, we feed Eq.~(\ref{Pi_c}) back into~(\ref{selfenergyc}), giving \begin{eqnarray} \Sigma_c({\vec k},i\omega) = \nu_0^2 U_c^2 \left[ i Z \omega + i \alpha \nu_0 |\omega|^2 \ln\bigg(\frac{Z \tilde{v}_F k_F}{|\omega|} \bigg) \mathrm{sgn}(\omega) - Z^2 \zeta v_F k \right], \label{eq:sigmac} \end{eqnarray} where $\alpha,~\zeta$ are numerical factors of order unity that depend on the geometry of the Fermi surface (Appendix~\ref{M1A}). The factor of $\ln\left(\frac{Z\tilde{v}_F k_F}{|\omega|}\right)$ in (\ref{eq:sigmac}) is special to $d=2$; it is absent in higher dimensions. Equating this to Eq.~(\ref{Sigma_c}), we get that \begin{eqnarray} \nu_0^2 U_c^2 Z &=& (Z^{-1} - 1),\\ \nu_0^2 U_c^2 Z^2 \zeta &=& (A - 1). \label{Z} \end{eqnarray} In particular, in the weak coupling limit, $\nu_0 U_c \ll 1$, we get that $Z \approx 1 - (\nu_0 U_c)^2 $. In the opposite limit, $\nu_0 U_c \gg 1$, we get to logarithmic accuracy that $Z = 1/(\nu_0 U_c)$, and $A$ is $O(1)$.
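The first of Eqs.~(\ref{Z}) closes by itself: writing $u=\nu_0 U_c$, it is the quadratic $u^2 Z^2 + Z - 1 = 0$, whose positive root $Z = (-1+\sqrt{1+4u^2})/(2u^2)$ interpolates between the two limits just quoted. A minimal numerical check (with all $O(1)$ factors set to one, purely for illustration):

```python
import numpy as np

def quasiparticle_residue(u):
    """Positive root of u^2 Z^2 + Z - 1 = 0 (the first of the self-consistency
    relations, with u = nu_0 * U_c and O(1) factors suppressed)."""
    return (-1.0 + np.sqrt(1.0 + 4.0 * u**2)) / (2.0 * u**2)

# weak coupling: Z ~= 1 - (nu_0 U_c)^2
assert abs(quasiparticle_residue(1e-2) - (1.0 - 1e-4)) < 1e-7
# strong coupling: Z ~= 1/(nu_0 U_c), i.e. mass enhancement m*/m = 1/Z ~= nu_0 U_c
assert abs(quasiparticle_residue(1e3) * 1e3 - 1.0) < 1e-2
```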
In this strong coupling limit, even though the electronic self-energy is allowed to be a priori momentum dependent, the frequency dependence is parametrically larger in $(U_c/W_c)$ compared to the momentum dependence. Hence, the ground state is a Fermi liquid for any coupling strength; in the strong coupling limit, the quasi-particle weight becomes small, and the effective mass increases as $m^* = m/Z \approx m\nu_0 U_c$, where $m$ is the bare mass, while the momentum dependence of the self-energy is independent of $U_c$. This state is therefore a {\it heavy} Fermi-liquid. Moreover, since the self-energy is only weakly dependent on the momentum but strongly frequency dependent, the resulting state is reminiscent of a DMFT description \cite{DMFT} of a heavily renormalized Fermi liquid. Note, however, that while DMFT is exact in the limit of infinite dimension, in our case $d$ is finite; instead, we have to take the large $N$ and strong coupling limits. Next, we turn to the behavior of $\Sigma_c(\omega)$ at high frequencies. We focus on the strong coupling limit, $\nu_0 U_c \gg 1$. In this regime, $\Sigma_c(\omega)$ exceeds the Fermi energy for sufficiently large $\omega$. Extrapolating $\Sigma_c(\omega)$ from Eq.~(\ref{Sigma_c}) with $Z = 1/(\nu_0 U_c)$, we get that this occurs at frequencies larger than $\Omega^*_c = W_c^2/U_c$. Then, to zeroth order, we can neglect $\varepsilon_{\vec k}$ relative to $\Sigma_c(\omega)$ in Eq.~(\ref{SPc_a}). In this limit, the self-consistent equations~(\ref{SPc_a}-\ref{SPc_b}) reduce to those of the single-site SYK model~\cite{SY,kitaev_talk,Parcollet1,Parcollet2,FuSS,SS15,Maldacena_syk}. In particular, we get that at frequencies smaller than $U_c$, $\Sigma_c(\omega) \sim i\mathrm{sgn}(\omega)\sqrt{U_c|\omega|}$~\cite{SY,Parcollet1,Parcollet2}.
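In imaginary time, the single-site equations reduce to $\Sigma_0(\tau) = -U_c^2\, G_0(\tau)^2 G_0(-\tau)$ together with Dyson's equation, and can be iterated numerically on a Matsubara grid. The sketch below (an illustrative particle-hole symmetric implementation, with slow direct transforms and parameters chosen for robustness rather than precision) converges to a solution with $|\Sigma(i\omega)| \gg |\omega|$ at low frequencies, consistent with the $\sqrt{U_c|\omega|}$ form:

```python
import numpy as np

Uc, beta, Nw = 1.0, 200.0, 1024    # coupling, inverse temperature (beta*Uc >> 1), grid size
T = 1.0 / beta
wn = (2 * np.arange(Nw) + 1) * np.pi / beta    # positive fermionic Matsubara frequencies
tau = (np.arange(Nw) + 0.5) * beta / Nw        # midpoint imaginary-time grid on (0, beta)
dtau = beta / Nw
phase_tau = np.exp(-1j * np.outer(tau, wn))    # frequency -> time (positive wn only)
phase_w = np.exp(1j * np.outer(wn, tau))       # time -> frequency

G = 1.0 / (1j * wn)                            # free propagator as starting point
for _ in range(4000):
    # G(tau): the slowly decaying 1/(i wn) tail is summed analytically (it gives -1/2);
    # negative frequencies are included via G(-i wn) = G(i wn)* at particle-hole symmetry
    Gtau = 2 * T * np.real(phase_tau @ (G - 1.0 / (1j * wn))) - 0.5
    # q = 4 self-energy Sigma(tau) = -Uc^2 G(tau)^2 G(-tau), with G(-tau) = -G(beta - tau)
    Stau = Uc**2 * Gtau**2 * Gtau[::-1]
    Sw = dtau * (phase_w @ Stau)
    G_new = 1.0 / (1j * wn - Sw)
    res = np.max(np.abs(G_new - G))
    G = 0.5 * G + 0.5 * G_new                  # linear mixing for stability
    if res < 1e-9:
        break

# Incoherent low-frequency regime: the self-energy dwarfs the bare frequency,
# as expected when |Sigma| ~ sqrt(Uc |w|) >> |w| for |w| << Uc.
assert res < 1e-6
assert np.abs(Sw[0]) > 3 * wn[0]
```

The midpoint $\tau$ grid makes the antiperiodicity reflection $\tau \to \beta - \tau$ an exact array reversal, which keeps the sketch short.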
Extrapolating $\Sigma_c(\omega)$ from high to intermediate frequencies, we reproduce the result that $\Sigma_c(\omega) \gg W_c$ for $\omega\gg W_c^2/U_c$, consistent with the extrapolation from low frequencies. To find the residual momentum dependence of the Green's function in the strong coupling incoherent regime, we expand the self-consistent equation~(\ref{SPc_c}) in powers of $\varepsilon_{\vec k}$\footnote{The results below are also readily obtained by simply calculating the Green's function in perturbation theory in the hopping $t_c$ along the lines of Sec. \ref{scaling}.}. To leading order, we get that $G_c({\vec k},\omega)-G_{0}(\omega) \sim \varepsilon_{\vec k}/[\Sigma_0(\omega)]^2$, where $G_{0}(\omega)$ and $\Sigma_0(\omega)$ are the Green's function and the self-energy of the single-site SYK model, respectively (see Appendix \ref{licm} for details). Importantly, we see that although the momentum dependence of the Green's function decreases with increasing frequency, the correlation length over which $G_c({\vec r},\omega)$ decays (obtained by taking the Fourier transform of $G_c({\vec k},\omega)$) remains frequency-independent and is determined by the spatial extent of the hopping parameters, $t^c_{{\vec r}\r'}$. To summarize, we get that for strong coupling, $G_c({\vec k},\omega)$ has the following form in the two extreme frequency limits: \begin{equation} G_c({\vec k},i\omega) \sim \begin{cases} \frac{Z}{i\omega - Z \tilde\varepsilon_{\vec k} + i \alpha \nu_0^2 U_c|\omega|^2 \ln(\frac{1}{|\omega|})\mathrm{sgn}(\omega)}, &\omega \ll W_c^2/U_c,\\ \frac{i\mathrm{sgn}(\omega)}{\sqrt{U_c |\omega|}} - B(\omega) \frac{\varepsilon_{{\vec k}}}{U_c |\omega|}, &W_c^2/U_c \ll \omega \ll U_c, \end{cases} \label{limits} \end{equation} where $Z \sim 1/(\nu_0 U_c)$, and $\alpha$ is a number of order unity. $B(\omega)$ is a constant independent of frequency for both $\omega > 0$ and $\omega < 0$, though its precise value is different for the two signs of $\omega$.
Indeed, it is a direct descendant of the ``spectral asymmetry'' that characterizes the Green's function of a single SYK island \cite{SY,Parcollet1}. At low frequencies, there is a Fermi surface with well-defined, albeit strongly renormalized, quasiparticles. The renormalized bandwidth is $W_c^* \sim \Omega^*_c = W_c^2/U_c$. The $\omega^2$ term in the denominator of $G_c$ becomes the imaginary part of the self-energy after an analytic continuation to real frequency. It can be written in a revealing form: $\Sigma''(\omega) \sim \omega^2 \ln\left(\frac{W_c^*}{|\omega|}\right)/W^*_c$. At finite temperatures, the zero-frequency imaginary part is $\Sigma''(0,T)\sim \pi^2 T^2 \ln\left(\frac{W_c^*}{T}\right) /W_c^*$. Note that, upon extrapolating this form to the crossover scale, $\Sigma''(0,T\sim \Omega^*_c) \sim W_c^*$, {\it i.e.} at this scale, the scattering rate of quasiparticles is comparable to the effective bandwidth, and we expect the quasi-particle picture to break down. At energies much higher than the renormalized bandwidth, the Fermi surface is destroyed, and the single-particle spectral function has no sharp features in momentum space. Instead, it is well approximated by $A_c({\vec k},\omega)\sim 1/\sqrt{U_c\, \mathrm{max}(|\omega|,T)}$. This is the LICM regime. \subsection{Thermodynamic Properties} \label{thermo} We now turn to discuss the thermodynamic properties of the one-band model. As we saw in the previous subsection, at sufficiently low temperatures, $T\ll\Omega_c^*$, the system is well described by Fermi-liquid theory. This implies, in particular, that the entropy per unit cell follows a linear temperature dependence, $S(T\ll \Omega_c^*) = N\gamma T$, where $\gamma \propto m^* \sim U_c/W_c^2$.
At temperatures much higher than $\Omega_c^*$, we can calculate the thermodynamic properties perturbatively in the inter-site hopping\footnote{Such a perturbative expansion breaks down at sufficiently low temperatures, since the hopping is a relevant perturbation.}. Then, the entropy is given by that of a single SYK dot, up to a correction of the order of $(W_c/U_c)^2$. The entropy takes the form $S(T\gg \Omega_c^*) = N (S_0 + \gamma_0 T)$, where $S_0$ and $\gamma_0$ are known constants~\cite{Parcollet1}. At temperatures of the order of $\Omega^*_c$, we expect the entropy to interpolate between these two behaviors. Based on our analysis of the saddle-point equations in this section, as well as the simpler scaling analysis of Sec.~\ref{scaling} below, where we study the perturbative effects of the relevant hopping terms as a function of decreasing energy, starting from the decoupled SYK-like regime, we find a strong indication of a single crossover separating the two regimes at the coherence scale $\Omega_c^*$. All of the thermodynamic quantities, as well as the frequency-dependent self-energies, evolve smoothly through this crossover without any associated phase transitions; we have checked this explicitly by solving the saddle-point equations numerically for small system sizes (results not shown). These aspects of our results are thus qualitatively similar to the results reported in Ref.~\cite{Balents} for the disordered version of the one-band model. Next, we turn to discuss the compressibility, given by $N\kappa = (\partial n/\partial \mu)$, where $n = \sum_{{\vec r},\ell} c_{{\vec r},\ell}^\dagger c_{{\vec r},\ell}$ is the total density for all the orbitals. We begin by noting that each site ({\it i.e.} SYK island) has a finite compressibility, which is given by $\kappa_0 \sim 1/U_c$ \cite{SS17,Balents}.
As a result of the finite hopping and bandwidth, there is a correction to this result, and at strong coupling we obtain \begin{eqnarray} \kappa = \frac{c_0}{U_c}\bigg[1+ O\bigg(\frac{W_c^2}{U_c^2}\bigg)\bigg], \end{eqnarray} where $c_0$ is a constant of order unity. As discussed earlier, in this regime the mass enhancement factor is $m^*/m =Z^{-1} \approx U_c/W_c$. This can be reconciled with the Fermi-liquid description of the state if one introduces a large dimensionless `Landau parameter', $F_0\sim(U_c/W_c)^2$. \subsection{Transport} \label{trans1} Let us now discuss both the optical conductivity and the dc resistivity of the metallic phases introduced above. The real part of the optical conductivity is given by the Kubo formula \begin{equation} \sigma_{xx}'(\omega) = \frac{\textnormal{Im}~\Pi_{J_x}^{\textnormal{ret}}(\omega)}{\omega}, \end{equation} where $\Pi^{\textnormal{ret}}_{J_x}(\omega)$ is the retarded current-current correlation function for the current in the $x$ direction. The total current operator is given by \begin{eqnarray} {\vec J} = \sum_i {\vec J}_i = \sum_{i,{\vec k}} {\vec v}^i_{\vec k} c_{{\vec k} i}^\dagger c_{{\vec k} i}, \end{eqnarray} where ${\vec J}_i$ denotes the current from orbital $i$ and ${\vec v}^i_{\vec k} = \nabla_{\vec k}\varepsilon^i_{\vec k}$. For the previously assumed identical dispersions for all the orbitals, the velocities are also the same. The leading diagrams which contribute to $\Pi^{\textnormal{ret}}_{J_x}(\omega)$ are shown in Fig.~\ref{opt}. In Fig.~\ref{opt}(a), we show the leading graph without vertex corrections. \begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{opt.pdf} \end{center} \caption{Current-current correlation function for evaluating the conductivity in the one-band model. The wiggly line denotes the insertion of the current operator. The solid lines represent the fully dressed propagators. (a) Feynman diagram without vertex corrections.
(b) The lowest order vertex correction diagram, which is subleading in the high temperature (LICM) regime. The dashed line represents a $U_c^2$ contraction, as before.} \label{opt} \end{figure} In the high temperature ($T\gg\Omega_c^*$) regime, the vertex corrections (Fig.~\ref{opt}b) are subleading. To see this, note that the electron velocity is odd in momentum, while all the Green's functions are momentum independent to lowest order in $W_c/U_c$. Each of the loops over orbitals $i$ and $j$ then vanishes individually. We therefore consider only the diagram in Fig.~\ref{opt}(a), which results in \begin{equation} \sigma(\omega,T) = \sum_i \frac{1}{\omega}\int d\omega' \int_{\vec k} v_{\vec k}^2~A_{\vec k}(\omega')~A_{\vec k}(\omega+\omega')~[f(\omega') - f(\omega+\omega')], \end{equation} where $A_{\vec k}(\omega)$ is the electron spectral function and $f(...)$ represents the Fermi-Dirac distribution function. In the high temperature SYK-like regime ($T\gg\Omega_c^*$), the optical conductivity clearly satisfies $(\omega/T)$ scaling. At frequencies much higher than the temperature (i.e., in the window $\Omega_c^*\ll T\ll\omega\ll U_c$), we find $\sigma(\omega)\propto Nv^2/(U_c~\omega)$. Focusing on the dc conductivity in this regime, we find (in units of $e^2/h$): \begin{eqnarray} \sigma_{dc} \propto \frac{N v^2}{U_c T}, \end{eqnarray} where $v^2~(\sim t_c^2)$ represents an average over the Fermi-surface. As a result of the $\omega/T$ scaling, the crossover scale from the high-frequency to the dc limit is of order $T$. We note that in this incoherent regime, once the electronic correlation functions become locally critical, the mechanism previously established for incoherent transport in disordered SYK-like models \cite{Parcollet1,Balents} continues to apply, as a result of the strong momentum dissipation in the lattice model.
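As a sanity check on this $\omega/T$ scaling and the resulting $\sigma_{dc}\propto 1/T$ behavior, the conductivity integral above can be evaluated numerically with a mock scaling spectral function. This is a sketch only: the function $g(x)$ below is a smooth stand-in, not the exact SYK scaling function, and all parameter values are illustrative.

```python
import math

U, t = 1.0, 0.01   # illustrative: interaction U_c and hopping t_c, with t << U

def A(w, T, q=4):
    # Mock LICM scaling form A(w,T) = (1/U)(U/T)^{1-2/q} g(w/T);
    # g is a smooth stand-in, NOT the exact SYK scaling function F_q.
    x = w / T
    g = 1.0 / ((1.0 + abs(x)) ** (1 - 2.0 / q)) / (1.0 + (x / 10.0) ** 4)
    return (1.0 / U) * (U / T) ** (1 - 2.0 / q) * g

def sigma_dc(T, n=4001, cut=40.0):
    # dc limit of the conductivity integral: sigma ~ pi t^2 \int dw A(w)^2 (-df/dw)
    s, dw = 0.0, 2 * cut * T / n
    for i in range(n):
        w = -cut * T + (i + 0.5) * dw
        e = math.exp(w / T)
        s += A(w, T) ** 2 * (e / (T * (1 + e) ** 2)) * dw   # -df/dw insertion
    return math.pi * t * t * s

# For q = 4 the scaling form implies sigma_dc ∝ 1/T: halving T doubles sigma_dc
print(sigma_dc(2e-3) / sigma_dc(1e-3))   # ≈ 0.5
```

The ratio is fixed by the scaling dimensions alone, independent of the detailed shape of $g$.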
\section{A simple view on the one-band results} \label{scaling} In this section, we provide a simple alternate understanding of the physics of the one-band model that does not require a detailed analysis of the saddle-point equations in Eqs.~(\ref{SP1})-(\ref{SPc_b}). We carry out a simple scaling analysis for the extension of the usual SYK model (as defined above in Eq.~\ref{hc1}), as well as an extension of the model that involves higher than $2-$body interactions. The latter will be used later in Sec.~\ref{nfl} to obtain a non-Fermi liquid with a critical Fermi surface. We begin by considering the limit where the hopping $t_c \ll U_c$. When $t_c = 0$ the different SYK islands are decoupled from each other. Further, we know that within each island the electron has power-law correlations in time, with a scaling dimension $\Delta_c = \frac{1}{4}$. For small hopping $t_c$, we can study the relevance/irrelevance of the hopping term about the decoupled SYK theories. In the action, the hopping term becomes \begin{equation} S_{\textnormal{hopping}} = -t_c \int d\tau \sum_{i, \langle {\vec r} {\vec r}' \rangle} c^\dagger_{{\vec r} i}(\tau) c_{{\vec r}'i}(\tau) + c^\dagger_{{\vec r}'i}(\tau)c_{{\vec r} i}(\tau). \end{equation} Clearly, under a scaling transformation $\tau \rightarrow \tau' = \frac{\tau}{s}$, the hopping rescales as $t_c \rightarrow t_c' = t_c ~s^{\frac{1}{2}}$, so that it is relevant. To study the system at a non-zero temperature $T$, we run the scaling until a scale $s_T = \frac{U_c}{T}$. The effective renormalized hopping at this scale is then $t_c(s_T) = t_c \left( \frac{U_c}{T} \right)^{\frac{1}{2}}$. With decreasing temperature we stay in the regime of weak hopping until a temperature such that $t_c(s_T) \sim U_c$. This corresponds to a temperature scale $T_{\textnormal{coh}} \sim \frac{ t_c^2}{U_c}$, which matches exactly the coherence scale identified in section \ref{spec}.
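The flow above can be iterated numerically. The sketch below (illustrative parameter values, with a simple bisection just to locate the scale) runs the rescaling until the renormalized hopping reaches $U_c$ and recovers $T_{\textnormal{coh}}\sim t_c^2/U_c$:

```python
import math

U_c, t_c = 1.0, 0.05   # illustrative couplings with t_c << U_c

def t_eff(T):
    # renormalized hopping at scale s_T = U_c/T; hopping dimension 1/2 for q = 4
    return t_c * math.sqrt(U_c / T)

# geometric bisection for the temperature where t_eff(T) ~ U_c
lo, hi = 1e-12, U_c
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if t_eff(mid) > U_c:
        lo = mid       # below T_coh: the hopping has already grown strong
    else:
        hi = mid
print(hi, t_c**2 / U_c)   # both ≈ 2.5e-3: T_coh = t_c^2/U_c
```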
For $T \gg T_{\textnormal{coh}}$, the physics will be that of weakly coupled SYK islands, and we can calculate physical properties in perturbation theory in $t_c$. For $T \ll T_{\textnormal{coh}}$, it is natural to expect that the coupling between the different islands leads to a Fermi liquid phase. We can now understand the thermodynamics and transport through simple physical arguments. First we recall that for the $(0+1)$-dimensional SYK model, the entropy is known \cite{Parcollet1,Maldacena_syk} to obey \begin{equation} S(T) = N(S_0 + \gamma_0 T +...). \end{equation} The ground state entropy $S_0$ is nonzero when the limit $N \rightarrow\infty$ is taken before $T \rightarrow 0$. As argued in Sec.~\ref{thermo}, in the limit $t_c = 0$ this is obviously the entropy per site of the lattice model. When $t_c \neq 0$ and at sufficiently high temperatures such that $T \gg T_{\textnormal{coh}}$, both the entropy and the compressibility only receive small corrections when we perturb in $t_c$. For $T \ll T_{\textnormal{coh}}$, however, the ground state entropy of the decoupled limit is relieved, and $\frac{S(T \rightarrow 0)}{M} \rightarrow 0$ ($M=N V$ is the total number of fermion flavors, with $V$ the number of lattice sites). In the low-$T$ Fermi liquid we expect $\frac{S(T)}{M} = \gamma T$. An estimate for $\gamma$ can be obtained by matching this entropy, extrapolated to $T = T_{\textnormal{coh}}$, with the residual entropy of the high temperature phase. This gives \begin{equation} \gamma \approx \frac{S_0}{T_{\textnormal{coh}}}~\sim \frac{U_c}{t_c^2}. \end{equation} In Fermi liquid theory the $\gamma$ coefficient directly gives the quasiparticle effective mass $m^* \sim \frac{U_c}{t_c^2 a^2}$ ($a$ is the lattice spacing). Note that the ``bare'' mass determined from the hopping Hamiltonian is $m \sim \frac{1}{t_c a^2}$. Therefore the mass enhancement is $\frac{m^*}{m} \sim \frac{U_c}{t_c} \gg 1$, in exact agreement with the solution of the self-consistency equations in section \ref{spec}.
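The matching argument amounts to a few lines of arithmetic. A minimal sketch with illustrative numbers (the value $S_0 = 0.46$ is a stand-in for the known SYK constant, not a quoted result):

```python
U_c, t_c, a = 1.0, 0.05, 1.0   # illustrative couplings and lattice spacing
S0 = 0.46                       # stand-in for the O(1) SYK residual entropy per flavor

T_coh = t_c**2 / U_c
gamma = S0 / T_coh              # match gamma*T_coh to the residual entropy S0
m_star = U_c / (t_c**2 * a**2)  # effective mass inferred from gamma
m_bare = 1.0 / (t_c * a**2)     # bare band mass from the hopping

print(m_star / m_bare)          # ≈ U_c/t_c = 20: large mass enhancement
```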
The behavior of the compressibility in both the high-$T$ and low-$T$ limits has already been described in section \ref{thermo}. Let us now turn to transport. In the high-$T$ regime, to leading order in perturbation theory in $t_c$, the conductivity $\sigma_{dc}$ will be $\propto t_c^2$. In $d = 2$, $\sigma_{dc}$ is dimensionless in units of $\frac{e^2}{h}$. We thus expect that for $T \gg T_{\textnormal{coh}}$, $\sigma_{dc} \sim \frac{Ne^2}{h}\left( \frac{t_c(s_T)}{U_c}\right)^2$, where $t_c(s_T)$ is the effective renormalized hopping at a temperature $T$ introduced above. We therefore get \begin{equation} \sigma_{dc} \sim \frac{Ne^2}{h} \frac{t_c^2}{U_cT}~\sim \frac{Ne^2}{h} \frac{T_{\textnormal{coh}}}{T}. \end{equation} This is again in exact agreement with the calculations in section \ref{trans1}. For $T \ll T_{\textnormal{coh}}$, if the Fermi surface is big enough to allow umklapp scattering of the low energy quasiparticles, we will get a resistivity $\rho(T) = \frac{\tilde{A}}{N} T^2$. To estimate $\tilde{A}$, we require that, when extrapolated to $T = T_{\textnormal{coh}}$, this matches the extrapolation of the high-$T$ result down to $T_{\textnormal{coh}}$. This leads to $\tilde{A} \sim \frac{h}{e^2} \frac{1}{T_{\textnormal{coh}}^2}$. Note that in the low-$T$ Fermi liquid $\tilde{A} \sim \gamma^2$, thereby obeying the Kadowaki-Woods relationship~\cite{KW}. The understanding above readily generalizes to the physics of coupled SYK models where the on-site interaction is composed of $q$ ($q \geq 4$ and even) fermion operators \cite{Gross17}; we study a generalized two-band version of this model in section~\ref{nfl}.
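The same matching can be written out numerically (illustrative units with $h/e^2 = 1$ and per-flavor quantities; the value used for $S_0$ is a stand-in):

```python
T_coh = 1e-2          # coherence scale, in units where h/e^2 = 1 (illustrative)

def rho_high(T):      # T >> T_coh: linear-in-T resistivity ~ (h/e^2) T/T_coh
    return T / T_coh

A_tilde = 1.0 / T_coh**2   # fixed by matching rho_low(T_coh) = rho_high(T_coh)

def rho_low(T):       # T << T_coh: Fermi-liquid T^2 resistivity
    return A_tilde * T**2

# continuity of the two estimates at the coherence scale
print(rho_high(T_coh), rho_low(T_coh))   # both ≈ 1.0, i.e. of order h/e^2

# Kadowaki-Woods: A_tilde ~ gamma^2, since gamma ~ S0/T_coh with S0 of order one
gamma = 0.46 / T_coh       # 0.46 is a stand-in for S0
print(A_tilde / gamma**2)  # O(1) ratio, independent of T_coh
```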
Specifically, consider the model of just a single band of electrons with the Hamiltonian \begin{eqnarray} H_c = \sum_{{\vec r},{\vec r}'} \sum_{\ell} (-t^c_{{\vec r},{\vec r}'} - \mu_c \delta_{{\vec r},{\vec r}'}) c_{{\vec r} \ell}^\dagger c_{{\vec r}'\ell} + \frac{(q/2)!}{N^{\frac{q-1}{2}}}\sum_{{\vec r}}\sum_{\{i_\ell\}}U^c_{i_1i_2...i_q} \bigg[c^\dagger_{{\vec r},i_1}c^\dagger_{{\vec r},i_2}...c^\dagger_{{\vec r},i_{q/2}}c_{{\vec r},i_{q/2+1}}...c_{{\vec r},i_{q-1}}c_{{\vec r},i_q} \bigg]. \nonumber\\ \label{hc} \end{eqnarray} As before, we take $U_{i_1i_2...i_q}$ and the hopping $t_c$ to be translationally invariant, with $\overline{U_{i_1i_2...i_q}}= 0$ and $\overline{(U_{i_1i_2...i_q})^2}=U_c^2$. We focus on the small $t_c$ regime. For general $q$, the scaling dimension of the fermion is $\Delta(q) = 1/q$. It follows that a small $t_c$ is relevant at the decoupled fixed point and scales as \begin{eqnarray} t_c(s) = t_c~ s^{1- \frac{2}{q}}. \end{eqnarray} Following the discussion above, we determine that the physics will be that of weakly coupled islands down to a coherence scale $T_{\textnormal{coh}} = t_c \left(t_c/U_c\right)^{\frac{2}{q-2}}$. In the high-$T$ regime, the entropy and compressibility have the same qualitative behavior as for $q = 4$. Importantly, there is a residual entropy $S_0$ (with a linear-in-$T$ correction) and a finite non-zero compressibility. At $T \ll T_{\textnormal{coh}}$ we again expect a Fermi liquid. The residual entropy is relieved, and the low-$T$ heat capacity coefficient is $\gamma \sim \frac{S_0}{T_{\textnormal{coh}}}$. This can be converted into an estimate for the quasiparticle effective mass in the Fermi liquid. The electrical resistivity in the high-$T$ regime, estimated as above, is of the form \begin{eqnarray} \rho_{dc} \sim \frac{h}{Ne^2} \left(\frac{U_c}{t_c}\right)^2 \left(\frac{T}{U_c}\right)^{\frac{2(q-2)}{q}}. \end{eqnarray} Note that for $q > 4$, $\rho_{dc}$ increases faster than linearly with $T$, but slower than $T^2$.
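The $q$-dependence of these estimates is simple to tabulate; a short sketch (dropping all $O(1)$ prefactors):

```python
from fractions import Fraction

def rho_exponent(q):
    # high-T resistivity exponent: rho_dc ~ T^{2(q-2)/q}
    return Fraction(2 * (q - 2), q)

def T_coh(q, t_c, U_c):
    # coherence scale from t_c(s) = t_c s^{1-2/q} reaching U_c at s = U_c/T
    return t_c * (t_c / U_c) ** (2.0 / (q - 2))

# q = 4 is the only case with a linear-in-T high-temperature resistivity
print(rho_exponent(4))         # 1
print(rho_exponent(6))         # 4/3: faster than linear, slower than T^2
print(T_coh(4, 0.05, 1.0))     # ≈ 2.5e-3 = t_c^2/U_c, as in the q = 4 analysis
```

The exponent approaches $2$ from below as $q$ grows, so the Fermi-liquid-like $T^2$ form is only reached asymptotically.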
Thus the high-$T$ linear resistivity is not a generic property of coupled SYK models and requires $q = 4$. As before (with umklapp scattering), the low-$T$ resistivity is $\rho(T) = \tilde{A} T^2$ with $\tilde{A} \sim \gamma^2$. \subsection{Explicit transport calculation at high temperature} \label{perttrnsprt} It is instructive to explicitly calculate the conductivity in the high-$T$ regime in perturbation theory in the hopping, taking special care with issues regarding disorder averaging. As the leading temperature dependence is $\propto t_c^2$, a second order perturbative calculation should give the exact answer for this leading term. The imaginary frequency current-current correlator is readily related to the electron Green's function of each SYK island (details of such perturbative calculations are straightforward -- see, for example, Appendix E of Ref.~\cite{TSDCP}): \begin{eqnarray} \label{Pipert0} \Pi_{J_x}(q = 0, i\omega_n) = \frac{(et_c)^2}{\beta} \sum_{\omega_\nu} \sum_{ij} G_{ij} ({\vec r}, {\vec r}; i\omega_\nu) \left( G_{ji} ({\vec r}', {\vec r}'; i(\omega_n + \omega_\nu)) - G_{ji}({\vec r}', {\vec r}'; i\omega_\nu) \right). \end{eqnarray} Note that we have not carried out the disorder averaging yet. $G_{ij} ({\vec r}, {\vec r}; i\omega_\nu)$ is the frequency-dependent fermion Green's function within the SYK island at site ${\vec r}$, and ${\vec r}'$ is the site neighboring ${\vec r}$ in the positive $x$ direction. We now wish to average this over disorder realizations. If the SYK interactions were independently random at different sites (as in the models studied in Ref.~\onlinecite{Balents}), then obviously upon disorder averaging (indicated with an overline) the products $G_{ij} G_{ji}$ that appear above can be replaced by $\overline{G}_{ij} \overline{G}_{ji}$ for any $N$. In our translation invariant models, the SYK interactions are the same at every site. Thus, strictly speaking, we must instead take $\overline{G_{ij} G_{ji}}$.
Fortunately (as shown in Appendix \ref{app:selfaverage}), for $q \geq 4$ in the large-$N$ limit\footnote{The $q = 2$ case is special and will be discussed in detail in Appendix \ref{syk2}.}, the property \begin{eqnarray} \label{gfctr} \overline{G_{ij} G_{ji}} = \overline{G}_{ij} \overline{G}_{ji} \end{eqnarray} holds, and we can continue to make this replacement in the products entering the correlation function. Further, we know that when $N \rightarrow \infty$, only $\overline{G}_{ii}$ is $O(1)$ and $\overline{G}_{ij}$ for $i \neq j$ is suppressed.\footnote{Note that there are $N$ ``diagonal'' terms with $i = j$ while there are $O(N^2)$ off-diagonal terms where $i \neq j$. Thus it is necessary that the off-diagonal terms are suppressed by sufficiently high powers of $N$. For SYK$_q$ we show in Appendix \ref{app:selfaverage} that the off-diagonal contributions are of order $\sim N^{3-q}$ and hence can be ignored compared to the $O(N)$ diagonal contributions for $q > 4$. Clearly, however, they cannot be ignored at $q = 2$, and they play an important role in obtaining the correct physics, as we discuss in Appendix \ref{syk2}.} Therefore we will henceforth replace all Green's functions by their averages (and drop the overlines). Implicitly, this has been done in all of the discussions in this paper. Analytically continuing Eq.~(\ref{Pipert0}) to real frequencies, we get the familiar form for the real part of the conductivity \begin{eqnarray} \label{sigmapert} \sigma_{xx}'(\omega, T) = N\pi (et_c)^2 \int d\Omega A(\Omega) A(\omega + \Omega) \left(\frac{f(\Omega) - f(\omega + \Omega)}{\omega}\right) . \end{eqnarray} Here $A(\omega)$ is the spectral function for the Green's function within a single SYK island. For SYK$_q$ (with $q \geq 4$) this satisfies $\omega/T$ scaling, \begin{eqnarray} A(\omega, T) = \frac{1}{U_c} \left(\frac{U_c}{|\omega|}\right)^{1- \frac{2}{q}} F_q\left(\frac{\omega}{T} \right), \end{eqnarray} with $F_q(...)$ a known universal function.
It follows from Eq.~(\ref{sigmapert}) that the conductivity itself satisfies $\omega/T$ scaling. We get \begin{eqnarray} \sigma'_{xx}(\omega, T) = \frac{N e^2 t_c^2 }{U_c^2} \left(\frac{U_c}{T}\right)^{2- \frac{4}{q}}{\cal S}_q \left(\frac{\omega}{T} \right), \end{eqnarray} with ${\cal S}_q(...)$ a universal function determined in terms of $F_q(...)$ by Eq.~(\ref{sigmapert}). In particular, in the dc limit we reproduce the temperature dependence previously obtained for general $q \geq 4$. As a result of the $\omega/T$ scaling, it is easy to see that the frequency scale over which $\sigma'_{xx}(\omega)$ reaches its dc value is $\tau_{\textnormal{opt}}^{-1}\sim a T$ (in units of $k_B/\hbar$), with $a$ an $O(1)$ number. Moreover, the scaling function ${\cal{S}}_q(x)\sim 1/x^{2-4/q}$ at large $x$. Therefore, at frequencies much larger than the temperature, the conductivity has the form $\sigma'_{xx}(\omega)\sim 1/\omega^{2-4/q}$. \section{Two-Band Model --- Marginal Fermi Liquid} \label{sec:MFL} In the previous section, we saw an example of a crossover from a Fermi liquid to an incoherent metal, without any remnant of a Fermi surface, in a one-band model. It is interesting to ask if a critical Fermi surface~\cite{TS08} can emerge in the general class of translationally invariant models being considered here. Before proceeding further, it is useful to define precisely what we mean by a critical Fermi surface. Within our definition, the criticality is associated with the gapless single-particle excitations of physical electrons over the entire Fermi surface, which remains sharply defined\footnote{In contrast, the phrase `critical Fermi surface' is sometimes used in the literature to denote a Fermi surface of emergent fermions (not locally related to microscopic degrees of freedom) which themselves may be critical.}. However, there are no Landau quasiparticles across the critical Fermi surface and the quasiparticle residue $Z$ is zero.
We describe two classes of models in the next two sections that host such a critical Fermi surface. Let us begin with a model where we introduce an additional band of $f-$fermions with operators $f^{\dagger}_{{\vec r},\ell}$, $f_{{\vec r},\ell}$ ($\ell=1,...,N$) and an associated conserved $U(1)$ charge density, $Q_f$, that may be tuned by a chemical potential $\mu_f$, which we set to zero. The modified Hamiltonian (with a $U(1)_c\times U(1)_f$ symmetry) is \begin{eqnarray} H = H_c + H_f + H_{cf}, \label{Htot} \end{eqnarray} where $H_c$ is as described in Eq.~(\ref{hc}), and $H_f$ is defined in an identical fashion with translationally invariant hoppings $t^f_{{\vec r}{\vec r}'}$ and on-site interactions $U^f_{ijk\ell}$. The form of the inter-band interaction is chosen to be \begin{eqnarray} H_{cf} = \frac{1}{N^{3/2}}\sum_{{\vec r}}\sum_{ijk\ell} V_{ijk\ell} c^\dagger_{{\vec r} i}f^\dagger_{{\vec r} j} c_{{\vec r} k} f_{{\vec r} \ell}, \label{hcf} \end{eqnarray} where the coefficients, $V_{ijk\ell}$, are chosen to be identical at every site with $\overline{U^f_{ijk\ell}} = \overline{V_{ijk\ell}} = 0$, and where the distribution of the couplings satisfies $\overline{(U_{ijk\ell}^{f})^2}=U_f^2,~\overline{(V_{ijk\ell})^2}=U_{cf}^2$. We now assume that $t^f_{{\vec r},{\vec r}'}\ll t^c_{{\vec r},{\vec r}'}$, i.e. the bandwidth for the $f-$fermions is much smaller than the bandwidth for the $c-$fermions ($W_f\ll W_c$). The model described by (\ref{Htot}) therefore has some similarity to models for `heavy-fermion' systems, with a specific form of the interaction terms, and where the direct hybridization term, $H_{\textnormal{hyb}} = \sum_{{\vec r}, ij} M_{ij}~c_{{\vec r} i}^\dagger f_{{\vec r} j}$, has been set to zero.
To leading order in $1/N$, the saddle point equations for the Hamiltonian defined in Eq.~(\ref{Htot}) are given by \begin{subequations} \begin{eqnarray} G_c({\vec k},i\omega) &=& \frac{1}{i\omega - \varepsilon_{\vec k} - \Sigma_c({\vec k},i\omega) - \Sigma_{cf}({\vec k},i\omega)},\label{sceq_a} \\ G_f({\vec k},i\omega) &=& \frac{1}{i\omega - \xi_{\vec k} - \Sigma_f({\vec k},i\omega) - \Sigma'_{cf}({\vec k},i\omega)} ,\label{sceq_b} \\ \Sigma_{f}({\vec k},i\omega) &=& - U_f^2 \int_{{\vec k}_1} \int_{\omega_1} G_f({\vec k}_1,i\omega_1)~\Pi_f({\vec k}+{\vec k}_1,i\omega+i\omega_1),\label{sceq_c} \\ \Sigma_{cf}({\vec k},i\omega) &=& - U_{cf}^2 \int_{{\vec k}_1} \int_{\omega_1} G_c({\vec k}_1,i\omega_1)~\Pi_f({\vec k}+{\vec k}_1,i\omega+i\omega_1),~ \label{sceq_d} \\ \Pi_f({\vec q},i\Omega) &=& \int_{{\vec k}} \int_{\omega} G_f({\vec k},i\omega)~G_f({\vec k}+{\vec q},i\omega+i\Omega),~\textnormal{and} \label{sceq_e} \\ \Sigma'_{cf}({\vec k},i\omega) &=& - U_{cf}^2 \int_{{\vec k}_1} \int_{\omega_1} G_f({\vec k}_1,i\omega_1)~\Pi_c({\vec k}+{\vec k}_1,i\omega+i\omega_1).\label{sceq_f} \end{eqnarray} \end{subequations} We have introduced $\xi_{\vec k}$ as the dispersion for the $f$ fermions, and $\Sigma_c$, $\Pi_c$ are as defined earlier in Eqs.~(\ref{SPc_a}-\ref{SPc_c}). The watermelon diagrams for the self-energies are shown in Fig.~\ref{se2}. \begin{figure} \begin{center} \includegraphics[width=1.0\columnwidth]{SE_two.pdf} \end{center} \caption{The self-energy diagrams in the two-band model due to inter-band scattering for (a) $c-$fermions, $\Sigma_{cf}$, and (b) $f-$fermions, $\Sigma'_{cf}$, with orbital index $i$. The solid black (red) lines represent fully dressed Green's functions, $G_c({\vec k},\omega)~(G_f({\vec k},\omega))$; see Eqs.~(\ref{sceq_a}) and (\ref{sceq_b}).
The dashed lines correspond to $U_{cf}^2$ contractions and carry no frequency/momentum.} \label{se2} \end{figure} Based on our analysis for the one-band model in section \ref{mod}, we see immediately that if $H_{cf}=0$, the two decoupled subsystems have a Fermi-liquid to LICM crossover at frequencies or temperatures of the order of $\Omega_c^*$ and $\Omega_f^*$, respectively ($\Omega_f^*\ll \Omega_c^*$). In the high-temperature regime, $T>\Omega_c^*\gg \Omega_f^*$, when both bands are in a LICM phase, adding $H_{cf}$ does not alter any of the features qualitatively, and the resulting state is thus still described by a LICM phase. Similarly, at low temperatures, $T<\Omega_f^*\ll \Omega_c^*$, when both species are in a Fermi-liquid phase, a finite $H_{cf}$ does not modify the qualitative aspects. There are two Fermi surfaces for the $c$ and $f$ fermions, each consistent with its individual Luttinger count; adding a finite $H_{\textnormal{hyb}}$ hybridizes the two Fermi surfaces, breaking the two independent $U(1)_c\times U(1)_f$ symmetries down to a single conserved $U(1)$ charge corresponding to the total fermion density. The low-energy description of the Fermi-liquid phase is similar to our considerations from the previous section. The key question that remains concerns the fate of the system in the intermediate regime $\Omega_f^*< T <\Omega_c^*$. For the purpose of our subsequent discussion, we can set $U_c=0$, such that the $c$ band is uncorrelated and the scale $\Omega_c^*$ is pushed out to infinity (the bandwidth, $W_c$, is the physical UV scale).\footnote{All of our results below remain qualitatively the same in the presence of a finite $U_c$.} In order to give a sharp meaning to the notion of a non-Fermi liquid with a critical Fermi surface, it is useful to also send the scale $\Omega_f^*$ to zero. This is conveniently done by setting $t^f = 0$ while keeping $U_f$ finite.
In this limit, we can pose sharp questions about the presence or absence of quasiparticles and Fermi surfaces in the $T \rightarrow 0$ limit. \subsection{Fermion Green's Function} \label{MFLgreen} In the window of intermediate energies, where the $f$ fermions enter the LICM regime while the $c$ fermions do not, one may find a Fermi surface formed by the lighter band, with an anomalous single-particle lifetime due to scattering off the heavy band. As we show below, it is precisely in this window that we obtain a {\it marginal Fermi liquid} with a critical Fermi surface of the $c$ fermions. In the next section, we will generalize the model to obtain a critical Fermi surface of the $c$ fermions with a singular frequency-dependent self-energy with variable exponents. In order to obtain the structure of the solution in the intermediate frequency regime, $\Omega_f^* < \omega <\Omega_c^*$, we begin by considering the effect of the inter-band interaction perturbatively. Later, we will check that the behavior we find is self-consistent. As emphasized earlier, our conclusions hold in this regime for a finite $U_c$, but we set $U_c=0$ for simplicity. We assume that the $f$ fermions are in the LICM regime, such that $G_f({\vec k},\tau)\sim i\mathrm{sgn}(\tau)/\sqrt{U_f |\tau|}$ for imaginary times $|\tau| \gg 1/U_f$, which is the familiar form in the SYK model~\cite{SY,Parcollet1,Parcollet2}. We ignore here the weak momentum-dependent correction to the $f$ Green's function in the LICM phase (as discussed in section \ref{spec}) by considering the limit $W_f/U_f\rightarrow0$ with $U_f$ fixed (the crossover scale $\Omega_f^*$ also goes to zero in this limit).
Then, $\Pi_f({\vec q},\omega)$ has the momentum independent form\footnote{Note that the spectral asymmetry in the $f$-Green's function cancels out in the product below.}, \begin{eqnarray} \Pi_f({\vec q},i\omega) = \int d\tau ~e^{i\omega \tau}~ G_f(\tau)~G_f(-\tau) \sim \frac{1}{U_f} \log\left( \frac{U_f}{|\omega|} \right). \label{mflpif} \end{eqnarray} Inserting $\Pi_f(\omega)$ into Eq.~(\ref{sceq_d}), we get for the $c$ self-energy in Fig.~\ref{se2}(a) (see Appendix~\ref{MFLapp}) \begin{eqnarray} \Sigma_{cf}(i\omega) \sim - \frac{\nu_0 U^2_{cf}}{2\pi^2 U_f}\, i \omega \log\left( \frac{U_f}{|\omega|} \right). \label{SEmfl} \end{eqnarray} The self-energy of the $c$ fermions then has a `marginal Fermi liquid' (MFL)~\cite{Varma} form. {It is important to note that the above result is valid at most up to scales at which the self-energy becomes of the order of the bandwidth, i.e. $\Sigma_{cf}\sim W_c$. This scale can easily be seen to be $\Omega_{cf}^*\sim U_f(W_c/U_{cf})^2$. In order to check that the form of $\Sigma_{cf}(i\omega)$ in (\ref{SEmfl}) is self-consistent, we need to verify that it does not change qualitatively if it is evaluated using the full Green's functions. Moreover, we also need to evaluate $\Sigma'_{cf}(\omega)$ (the self-energy for the $f$ fermions due to coupling to the $c$ fermions; Fig.~\ref{se2}(b)) using the renormalized $G_c$, and verify that its behavior is subleading to that of $\Sigma_f(\omega) \sim \sqrt{U_f \omega}$. We demonstrate in Appendix~\ref{SCse} that this solution is indeed self-consistent.
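The logarithm in the polarization bubble can be checked directly by Fourier transforming $G_f(\tau)G_f(-\tau)\sim 1/(U_f|\tau|)$ with a short-time cutoff at $|\tau|\sim 1/U_f$. A numerical sketch (pure midpoint quadrature; the cutoffs are illustrative):

```python
import math

def Pi_f(w, U_f=1.0, ucut=200.0):
    # Pi_f(w) ~ (2/U_f) \int_{w/U_f}^{ucut} cos(u)/u du, with u = w*tau rescaled;
    # ucut is a UV truncation of the oscillatory tail (illustrative choice)
    u0 = w / U_f
    # small-u part, u in [u0, 1]: substitute u = e^v for a well-behaved integrand
    n1, v0 = 20000, math.log(u0)
    dv = -v0 / n1
    I = sum(math.cos(math.exp(v0 + (i + 0.5) * dv)) for i in range(n1)) * dv
    # oscillatory tail, u in [1, ucut]
    n2 = 200000
    du = (ucut - 1.0) / n2
    I += sum(math.cos(1.0 + (i + 0.5) * du) / (1.0 + (i + 0.5) * du)
             for i in range(n2)) * du
    return (2.0 / U_f) * I

# Pi_f grows by (2/U_f) ln(10) per decade of decreasing frequency
print(Pi_f(1e-4) - Pi_f(1e-3))   # ≈ 2 ln(10) ≈ 4.605
```

The per-decade increment, rather than the absolute value, is the clean diagnostic of the $\log(U_f/|\omega|)$ behavior, since it is insensitive to the cutoff choices.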
In particular, focusing for simplicity on the case where $U_{cf} \ll W_c$, the correction to the $f$ fermion self-energy due to the inter-species interaction consists of two pieces -- an analytic correction, which is given by \begin{eqnarray} \Sigma_{cf,1}'(i\omega) \sim \nu_0\frac{U^2_{cf}}{\sqrt{U_fW_c}}i\omega, \end{eqnarray} that renormalizes the bare `$i\omega$' term, and a singular (but subleading) correction (see Appendix \ref{MFLapp}) \begin{eqnarray} \Sigma'_{cf,2}(i\omega)\sim \frac{U_{cf}^2}{W_c^2\sqrt{U_f}}~ i|\omega|^{3/2}\textnormal{sgn}(\omega). \end{eqnarray} $\Sigma_{cf,1}'(\omega)$ is negligible compared to the bare $i\omega$ term if $U_f\gg W_c (U_{cf}/W_c)^4$. In the limit $U_{cf} \ll W_c$, the MFL regime extends over a frequency (or temperature) window $\Omega_f^* \ll \omega ~(\textnormal{or}~ T) \ll \mathrm{min}(W_c, U_f)$. It is interesting to consider the case where $U_{cf} \sim U_f \gg W_c$; then, the MFL extends only up to temperatures of the order of $\Omega^*_{cf}$. We leave the behavior above $\Omega^*_{cf}$ for a future study. To conclude, we find a broad temperature regime above $\Omega_f^*$ where the $f$ fermions behave as a LICM, while the $c$ fermions follow MFL behavior, with a well-defined Fermi surface and marginally defined fermionic quasiparticles. As we discuss in section~\ref{nfl} below, a generalized version of the two-band model gives a critical Fermi surface where quasiparticles are not even marginally defined (a full-fledged non-Fermi liquid). In Sec.~\ref{nfl} we also analyze the density response at the ``$2K_F$'' wavevector and quantum oscillations in the magnetization of such a critical Fermi surface. Interestingly, in both the marginal Fermi liquid of the present section and the non-Fermi liquid of Sec.~\ref{nfl}, the structure of the $c-$fermion self-energy is such that it is singular in the limit $\omega\rightarrow0$ for all momenta, even far away from the Fermi surface.
This is a consequence of the fact that the fluctuations of the $f$ fermions are critical at all momenta. We analyze this structure more carefully in Appendix~\ref{SCse}. \subsection{Thermodynamic Properties} \label{thermomfl} Let us begin by analyzing the specific heat in the MFL regime, where the entropy density has contributions from the critical Fermi surface of the $c$ fermions as well as from the $f$ fermions, which are in an SYK-like regime. The total extrapolated zero temperature entropy is finite as a result of the finite entropy density of the SYK sector. However, this excess entropy is relieved as a result of the crossover to the Fermi liquid below $\Omega_f^*$. In order to extract the contribution from the critical Fermi surface, we can compute the free energy at a finite temperature using the standard Luttinger-Ward (LW) analysis (Appendices \ref{pif} and \ref{LWapp}). Let us consider the different contributions to the free energy, written as \begin{eqnarray} F = \textnormal{Tr}[\textnormal{log} ~G_c^{-1}] + \textnormal{Tr}[\Sigma_{cf}~G_c] + \textnormal{Tr}[\textnormal{log} ~G_f^{-1}] + \textnormal{Tr}[(\Sigma_f + \Sigma'_{cf})~G_f] - \Phi_{\mathrm{LW}}[G_c, G_f], \label{FLW} \end{eqnarray} where $\Phi_{\mathrm{LW}}[G_c,G_f]$ is the Luttinger-Ward functional, which depends on the exact Green's functions of the $c$ and $f$ electrons. The first two terms, with the form of the MFL self-energy in Eq.~(\ref{SEmfl}), give rise to a singular logarithmic correction~\cite{MFLsp} to the linear-in-$T$ specific heat at low temperature, i.e. it has a $T \ln(1/T)$ form. This feature is reminiscent of the results of Refs.~\cite{Varma,MFLsp}. However, we note that the self-energy alone does not fix the thermodynamic properties.
In particular, the other contribution to the free energy arises from the LW term, \begin{align} \Phi_{\mathrm{LW}}[G_c,G_f] & = \frac{U_{f}^2}{4} \sum_{\vec r} \int_0^\beta d\tau~ |G_f({\vec r},\tau) G_f({\vec r},-\tau)|^2 \nonumber \\ & + \frac{U_{cf}^2}{2} \sum_{\vec r} \int_0^\beta d\tau~ G_c({\vec r},\tau)G_c({\vec r},-\tau)G_f({\vec r},\tau)G_f({\vec r},-\tau). \label{LW} \end{align} [The derivation of (\ref{LW}) closely follows the derivation in the single-band case, outlined in Appendix~\ref{pif}.] Given the local character of the $f$ Green's function, we only need the local form of the $c$ fermion bubble above (which is the same as in a Fermi liquid). At low temperature, the first term in the LW functional above is proportional to $T$, and the second is proportional to $T^2$. Hence, the LW term does not lead to any singular modification of the results for the specific heat, $c_V = -T \partial^2 F/\partial T^2$. The MFL has a critical Fermi surface that satisfies Luttinger's theorem: $n_c = {\mathcal{A}_{\mathrm{FS}}}/{(2\pi)^d}$, where $\mathcal{A}_{\mathrm{FS}}$ is the area of the Fermi surface and $n_c$ is the density of $c$ fermions. This follows from a Luttinger-Ward analysis applied to the $c-$fermion Green's function, $G_c = [i\omega - (\varepsilon_{\vec k} - \mu_c) - \Sigma_{cf}(i \omega)]^{-1}$, accounting for the fact that our model has two conserved densities, corresponding to the $U(1)_c\times U(1)_f$ symmetries of the $c$ and $f$ fermions (see Appendix \ref{LWapp} for details of this analysis). The same analysis shows that the $c-$fermion compressibility $\chi_c \equiv \partial n_c/\partial \mu_c$ is finite and non-singular as a function of $U_{cf}$. In particular, for small $U_{cf}$ (and $U_c=0$) it is given by $\chi_c = \chi_0 + O[U_{cf}^2/(U_f W_c^2)]$, where $\chi_0$ is the non-interacting compressibility. The LW analysis for the conserved $f$ fermion density has been carried out in Ref.~\cite{Parcollet2}.
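Heuristically, the $T\ln(1/T)$ form of the specific heat discussed above can be traced to the logarithmically enhanced effective mass implied by the MFL self-energy. A schematic chain of estimates, suppressing all $O(1)$ constants and writing $\lambda \sim \nu_0 U_{cf}^2/U_f$ and $\Lambda \sim U_f$ for the coupling and cutoff suggested by Eq.~(\ref{SEmfl}):

```latex
% MFL self-energy => log-enhanced mass => log-enhanced specific-heat coefficient
\Sigma'(\omega) \sim -\lambda\,\omega \ln\!\left(\frac{\Lambda}{|\omega|}\right)
\quad\Longrightarrow\quad
\frac{m^*(T)}{m} \sim 1 - \left.\frac{\partial \Sigma'}{\partial\omega}\right|_{|\omega|\sim T}
\sim 1 + \lambda\ln\!\left(\frac{\Lambda}{T}\right)
\quad\Longrightarrow\quad
\frac{c_V}{T} \sim \gamma_0\left[1 + \lambda\ln\!\left(\frac{\Lambda}{T}\right)\right].
```

This shortcut reproduces the $T\ln(1/T)$ behavior of the first two terms of Eq.~(\ref{FLW}); the controlled statement is, of course, the Luttinger-Ward computation above.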
\subsection{Transport} \label{transmfl} Next, we consider the charge transport properties in the MFL regime. The arguments below will also apply to the discussion of transport for the non-Fermi liquid metals described in section~\ref{nfl}. We set $U_c=0$ and only consider the effects of inter-band scattering ($U_{cf}$) when the $f$ electrons are in an incoherent SYK-like regime. We are interested in the real part of the charge conductivity, which is given by \begin{eqnarray} \sigma'_{xx}(\Omega) = \frac{\mathrm{Im}~\Pi^{\mathrm{ret}}_{J_x}(\Omega)}{\Omega}, \end{eqnarray} where $\Pi^{\mathrm{ret}}_{J_x}(\Omega)$ is the retarded correlation function of the $x$ component of the current at a non-zero frequency. In particular, it is important to explore the role of vertex corrections to the current. In the two-band model there are two independent global $U(1)$ symmetries associated with the separate conservation of the $c$ and $f$ fermions. Correspondingly, there are two independent conductivities associated with transport of the $c$ and $f$ fermions. Here we will be interested in the conductivity due to the $c$ fermions. The conductivity due to the $f$ fermions will be essentially identical to the discussion in the one-band model, and we will not elaborate further on it here. \begin{figure} \begin{center} \includegraphics[width=0.6\columnwidth]{opt_twoband.pdf} \end{center} \caption{(a) Diagram for the computation of $\sigma_{xx}(\Omega)$. Wiggly lines represent current operators and solid black (red) lines represent the full Green's functions, $G_c({\vec k},\omega)$ ($G_f({\vec k},\omega)$). (b) The self-consistent equation for the current vertex.
} \label{v2kf} \end{figure} To leading order in $1/N$, $\sigma_{xx}(\Omega)$ is given by the sum over the set of ladder diagrams shown in Fig.~\ref{v2kf}(a), where the self-consistent equation for the current vertex $\Gamma_{J_x}(\omega, \Omega)$ is described diagrammatically in Fig.~\ref{v2kf}(b): \begin{equation} \Gamma_{J_x}({\vec k},\omega, \Omega) = v^x_{\vec k} + U_{cf}^2 \int_{\vec l}\int_{\omega'} \Gamma_{J_x}({\vec l},\omega',\Omega)~ G_c({\vec l},\omega')~ G_c({\vec l},\omega'+\Omega)~ \Pi_f({\vec l}-{\vec k},\omega'-\omega), \label{Gammanfl} \end{equation} where $v^x_{\vec k} = \partial \varepsilon_{\vec k} / \partial k_x$ is the `velocity' along $x$ and we have assumed an identical dispersion for all the orbitals. It is important to recall that in a system that preserves inversion symmetry, the velocity (or equivalently, the current) vertex itself is odd with respect to the momentum label, ${\vec k}$, i.e. $v^x_{-{\vec k}} = -v^x_{{\vec k}}$ and $\Gamma_{J_x}(-{\vec k},\omega,\Omega) = -\Gamma_{J_x}({\vec k},\omega,\Omega)$. At the same order in $1/N$, there is another contribution to the set of ladder diagrams and the current vertex{\footnote{This diagram is reminiscent of an `Aslamazov-Larkin' type contribution.}}, as shown in Fig. \ref{al}. However, this correction vanishes as a result of the local structure of the $f$ fermions, as explained below. \begin{figure} \begin{center} \includegraphics[width=0.5\columnwidth]{AL_opt.pdf} \end{center} \caption{Correction to the current vertex in Fig. \ref{v2kf}(b) at the same order in $1/N$.} \label{al} \end{figure} In general, the above self-consistent equation is difficult to solve for the full current vertex. However, the ladder insertions in the above series of diagrams, $\Pi_f$, have a simple local structure that greatly simplifies the problem.
At leading order in $\xi_{\vec k}/U_f$, as we have discussed above, $\Pi_f({\vec q},\omega)$ is independent of the momentum ${\vec q}$ and has the form shown in Eq.~(\ref{mflpif}) [see Eq.~(\ref{nflpif}) in section~\ref{nfl} for the generalization to the non-Fermi liquid case], which arises from the completely local form of the Green's function, $G_f$. If we ignore the momentum dependence of $\Pi_f$ in Eq.~(\ref{Gammanfl}) above, it is straightforward to see that the momentum integral in the second term vanishes, as the integrand is odd in ${\vec l}$. The correction in Fig. \ref{al} vanishes for the same reason. In this limit, we can therefore ignore the vertex corrections altogether, such that the conductivity is given only by the diagram in Fig. \ref{v2kf}(a) without any rungs, reducing the expression for the conductivity to \begin{eqnarray} \sigma'_{xx}(\Omega,T) = \frac{1}{\Omega}\int d\omega \int_{\vec k} (v^x_{\vec k})^2~ A_{\vec k}(\omega)~A_{\vec k}(\Omega+\omega)~[f(\omega) - f(\Omega+\omega)], \label{sigmaxx} \end{eqnarray} where $A_{\vec k}(\omega)$ is the spectral function for the $c$ fermions and $f(...)$ represents the Fermi-Dirac distribution function. At frequencies much higher than the temperature ($\Omega\gg T$), this leads to: \begin{eqnarray} \sigma'_{xx}(\Omega) \propto \frac{Nv^2~U_f}{U_{cf}^2} \frac{1}{\Omega~\ln^2(1/\Omega)}, \end{eqnarray} where $v^2$ is the average of $(v^x_{\vec k})^2$ over the Fermi surface. Let us now focus on the dc limit. We find that the transport scattering rate entering the dc resistivity is set by the single-particle scattering rate of the $c-$fermions. This result is not surprising: in the regime considered here, the $f-$fermions provide a momentum-independent scattering channel, acting as an effective `momentum sink' for the $c-$fermions{\footnote{ One might be tempted to associate the momentum relaxing scattering in the clean system above with `umklapp' scattering.
However, we note that in the regime of interest here, there is no restriction on the respective $c$ or $f-$fermion densities. }}. Therefore, the resistivity (in units of $h/e^2$) is given by, \begin{eqnarray} \rho_{dc}(T) \propto \frac{U_{cf}^2}{Nv^2 ~U_f} T. \end{eqnarray} We can now estimate the frequency scale at which the high-frequency form of the optical conductivity matches the low-frequency dc result. A simple analysis immediately reveals the crossover scale (in units of $k_B/\hbar$) to be \begin{eqnarray} \tau_{\textnormal{opt}}^{-1}\sim T/\ln^2(1/T). \label{toptmfl} \end{eqnarray} Note that the coefficient of the $T-$linear term in the scattering rate need not be $O(1)$, due to the $\ln^2(1/T)$ factor in the denominator. In the regime $T \ll \Omega^*_{cf} = U_f (W_c/U_{cf})^2$ that we are considering here, the dc resistivity is always smaller than the Mott-Ioffe-Regel limit, $\rho_{dc} \ll h/(Ne^2)$. At higher temperatures, we expect the MFL behavior to break down. We shall not treat this regime here, leaving it to a future investigation. \section{Two-Band Model --- Non Fermi Liquid} \label{nfl} In the previous section, we demonstrated an example of a metal with a critical Fermi-surface at which the electronic quasiparticles are only marginally defined. Is it possible to realize a more singular non-Fermi liquid, with no well-defined quasiparticle excitations across a critical Fermi-surface? In this section we show that by generalizing the $f-$band Hamiltonian to the `SYK$_q$' form considered in section \ref{scaling} above, it is possible to obtain a non-Fermi liquid with a critical Fermi-surface and a more singular (and variable) self-energy. Let us reintroduce the $f-$electron operators $f_i,~f_i^\dagger$ with $i=1,\ldots,N$ orbitals, as before. We generalize the interaction term to a $q-$body form~\cite{Gross17} (with $q$ even; the models considered thus far correspond to $q=4$).
The Hamiltonian is still given by $H = H_c + \tilde{H}_f + H_{cf}$, where the modified Hamiltonian for the $f-$electrons is given by, \begin{eqnarray} \tilde{H}_f = -\sum_{{\vec r},{\vec r}'} \sum_{i} (t_{{\vec r},{\vec r}'}^f - \mu_f\delta_{{\vec r}\r'}) f_{{\vec r} i}^\dagger f_{{\vec r}' i} + \frac{(q/2)!}{N^{\frac{q-1}{2}}}\sum_{\vec r}\sum_{\{i_\ell\}}U^f_{i_1i_2...i_q} \bigg[f^\dagger_{{\vec r},i_1}f^\dagger_{{\vec r},i_2}...f^\dagger_{{\vec r},i_{q/2}}f_{{\vec r},i_{q/2+1}}...f_{{\vec r},i_{q-1}}f_{{\vec r},i_q} \bigg].\nonumber\\ \end{eqnarray} The hopping matrix-elements $t^f_{{\vec r},{\vec r}'}$ are translationally invariant and diagonal in orbital-space. The on-site inter-orbital interactions, $U^f_{i_1...i_q}$, are assumed to be random with $\overline{U^f_{i_1i_2...i_q}}= 0$, $\overline{(U^f_{i_1i_2...i_q})^2}=U_f^2$ and taken to be identical on every site. The model is therefore a translationally invariant generalization of the SYK$_q$ model \cite{Gross17} with uniform hoppings. Moreover, since we have already discussed the special case of $q=4$ in the previous section, we shall only consider the case of $q>4$ from now on. \subsection{Fermion Green's Function} As before, we are interested in the regime above a crossover-scale, $\Omega_{f}^*(q)$, where the $f$ band realizes an incoherent metallic state without any remnant of a Fermi-surface. This crossover scale for the $q-$body interactions is given by $\Omega_{f}^*(q) = W_f (W_f/U_f)^{2/(q-2)}$, which reduces to the standard expression for $q=4$. In this regime, the scaling dimension of the $f-$operators is $\Delta(q) = 1/q$, such that the Green's function has the form $G_f(\tau) \sim \textnormal{sgn}(\tau)/(U_f~|\tau|)^{2\Delta(q)}$, or equivalently, $G_f(i\omega)\sim i\textnormal{sgn}(\omega)/(U_f^{2\Delta(q)} |\omega|^{1-2\Delta(q)})$. Let us now address the self-energy of the $c$ fermions as a result of the quartic inter-band scattering in $H_{cf}$.
The bubble, $\Pi_f({\vec q},\omega)$, has a momentum-independent form, \begin{eqnarray} \Pi_f({\vec q},i\omega) &=& \int \frac{d\Omega}{2\pi}~G_f(i\omega+i\Omega)~G_f(i\Omega)\nonumber\\ &\sim& \frac{1}{U_f^{4\Delta(q)}}\frac{1}{|\omega|^{1-4\Delta(q)}}. \label{nflpif} \end{eqnarray} Solving for the $c$ fermion self-energy self-consistently (see appendix \ref{SCse}), we obtain \begin{eqnarray} \Sigma_{cf}(i\omega) \sim \frac{\nu_0 U_{cf}^2}{U_f^{4\Delta(q)}} ~i |\omega|^{4\Delta(q)}\textnormal{sgn}(\omega), \label{SEnfl} \end{eqnarray} which has a strong non-Fermi liquid form, with an exponent $4\Delta(q)<1$ for $q>4$. This behavior is valid up to the scale at which the self-energy becomes of the order of the bandwidth, which immediately gives $\Omega_{cf}^*(q) \sim U_f (W_c/U_{cf})^{2/[4\Delta(q)]}$. Once again, for simplicity we restrict our attention to the case where $U_{cf}\ll W_c$ (which implies $\Omega_{cf}^*(q)\gg U_f$). Just like in the case of the MFL, a natural question to ask is whether the feedback of the $c$ fermions on the $f$ fermions as a result of the inter-band scattering modifies the SYK form of their self-energy. There will be an analytic correction that renormalizes the bare `$i\omega$' term, with a coefficient $(W_c/U_f)^{2\Delta(q)} (U_{cf}/W_c)^2$. However, this correction can be made small compared to the bare $i\omega$ term if $U_f\gg W_c(U_{cf}/W_c)^{1/\Delta(q)}$. In addition, an explicit computation of the singular (but subleading) correction to the $f$ self-energy as a result of the inter-species interaction leads to \begin{eqnarray} \Sigma_{cf}'(i\omega) \sim \frac{\nu_0 U_{cf}^2}{W_c U_f^{2\Delta(q)}} i|\omega|^{1+2\Delta(q)} \textnormal{sgn}(\omega), \end{eqnarray} which, as before, is subleading to $\Sigma_f(i\omega)$ at frequency (or temperature) scales small compared to $\Omega_{cf}^*(q)$.
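The exponents in Eqs.~(\ref{nflpif}) and (\ref{SEnfl}), and the crossover scales, can be recovered from frequency power counting, using $G_f(i\omega)\sim |\omega|^{-(1-2\Delta)}/U_f^{2\Delta}$ and the local $c$ propagator $G_c^{\mathrm{loc}}(i\omega)\sim -i\pi\nu_0\,\textnormal{sgn}(\omega)$ (a sketch, with $\Delta\equiv\Delta(q)$):

```latex
% Each propagator contributes its frequency power; each internal frequency
% integral contributes one power of |omega|:
\begin{align}
\Pi_f(i\omega) &\sim \frac{1}{U_f^{4\Delta}}\,
  |\omega|^{-(1-2\Delta)} \cdot |\omega|^{-(1-2\Delta)} \cdot |\omega|
  = \frac{1}{U_f^{4\Delta}}\,\frac{1}{|\omega|^{1-4\Delta}}, \nonumber\\
\Sigma_{cf}(i\omega) &\sim \nu_0 U_{cf}^2\,
  \frac{|\omega|^{-(1-4\Delta)}}{U_f^{4\Delta}} \cdot |\omega|
  = \frac{\nu_0 U_{cf}^2}{U_f^{4\Delta}}\,|\omega|^{4\Delta}. \nonumber
\end{align}
% Crossovers: \Sigma_f \sim U_f^{2\Delta}|\omega|^{1-2\Delta} \sim W_f gives
% \Omega_f^*(q) = W_f (W_f/U_f)^{2/(q-2)}, while \Sigma_{cf} \sim W_c
% (with \nu_0 \sim 1/W_c) gives \Omega_{cf}^*(q) \sim U_f (W_c/U_{cf})^{1/(2\Delta)}.
```

For $q=4$ ($\Delta=1/4$) these exponents reduce to the marginal Fermi liquid results of the previous section, up to the logarithms special to that case.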
We therefore conclude that in the intermediate regime between the scales $\Omega_f^*(q)$ and $\textnormal{min}(U_f,W_c)$, the $f$ fermions have a local SYK$_q$ form of correlations, while the $c$ fermions have a non-Fermi liquid character, with a well-defined Fermi surface but no sharply defined Landau quasiparticles. \subsection{Thermodynamic Properties} \label{thermonfl} We now discuss the thermodynamic properties of the intermediate non-Fermi liquid regime. The free energy for general $q$ can be computed similarly to the $q=4$ case, using a Luttinger-Ward formulation [see Eq.~(\ref{FLW})]. The entropy is then obtained through $S=-\partial F/\partial T$. This gives three contributions to the entropy density ${\cal{S}}(T) = S(T)/(2NV)$: \begin{eqnarray} {\cal{S}}(T) &=& {\cal{S}}_c(T) + {\cal{S}}_f(T) + {\cal{S}}_{int}(T),\\ {\cal{S}}_f(T) &=& {\cal{S}}_{0,q} + \gamma_q ~T,\\ {\cal{S}}_c(T) &\sim& T^{1/z} \sim T^{4\Delta(q)},\\ {\cal{S}}_{int}(T) &\sim& T^{1 + 4 \Delta(q)}. \end{eqnarray} Here, ${\cal{S}}_f(T)$ is the entropy of a single SYK$_q$ model (where ${\cal{S}}_{0,q}$ and $\gamma_q$ have been computed in Ref.~\cite{SS17}), ${\cal{S}}_c(T)$ comes from the first and second terms in Eq.~(\ref{FLW}), and ${\cal{S}}_{int}(T)$ originates from the inter-species interaction term $\delta \Phi_{\mathrm{LW}}\propto U_{cf}^2 \int d\tau G_f^2 G_c^2$ in the LW functional. The extrapolated zero temperature entropy ${\cal{S}}(T\rightarrow0)={\cal{S}}_{0,q}$ is finite in the above regime. However, as described earlier, there is a crossover to a Fermi-liquid regime below the scale $\Omega_f^*$, where the excess entropy is relieved. In the non-Fermi liquid regime, the specific heat scales as $c_V = T\partial {\cal{S}}/\partial T \sim T^{4\Delta(q)}$. The compressibility associated with the conserved densities for both species of fermions follows from our discussion in section~\ref{thermomfl}.
In particular, the $c-$fermions continue to satisfy Luttinger's theorem, such that their density is given by the area inside the critical Fermi-surface (see Appendix \ref{LWapp}). Moreover, the compressibility of the $c-$fermions, which scatter off the incoherent $f$ fermions, is a non-singular function of $U_{cf}$. For $U_c=0$, the compressibility is given by that of non-interacting $c$-fermions, up to a correction of the order of $U_{cf}^2$. \subsection{Transport} \label{transnfl} The transport properties of the non-Fermi liquid regime considered here follow from a straightforward generalization of our results in section \ref{transmfl}. As a result of the completely local form of the Green's function, $G_f$, at temperatures above the crossover scale ($\Omega_f^*$), we can continue to ignore the corrections to the current vertex in Fig. \ref{v2kf}(a). The optical conductivity for the $c$ fermions is then given by Eq.~(\ref{sigmaxx}). It is clear from the form of the spectral function in the non-Fermi liquid regime that the optical conductivity satisfies $\Omega/T$ scaling. At frequencies much higher than the temperature ($\Omega\gg T$), \begin{eqnarray} \sigma'_{xx}(\Omega) \propto \frac{Nv^2~U_f^{4\Delta(q)}}{U_{cf}^2} \frac{1}{\Omega^{4\Delta(q)}}, \end{eqnarray} which is determined by the single-particle scattering rate of the $c$ fermions. Following the discussion in section \ref{transmfl}, the dc resistivity is given by, \begin{eqnarray} \rho_{dc}(T) \propto \frac{U_{cf}^2}{Nv^2 ~U_f^{4\Delta(q)}}~ T^{4\Delta(q)}, \label{rhonfl} \end{eqnarray} which reduces to the marginal Fermi liquid form for $q=4$. However, note that just as we discussed in the context of the MFL in Sec.~\ref{transmfl} above, in the regime $T\ll\Omega_{cf}^*(q)$ that we consider here, the dc resistivity is always smaller than the Mott-Ioffe-Regel limit.
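The scaling in Eq.~(\ref{rhonfl}) can be illustrated with a minimal numerical sketch (not the full Kubo computation). As an assumption for illustration only, we model the $c-$fermion scattering rate by a thermal completion $\Gamma(\omega,T)\propto[\omega^2+(\pi T)^2]^{2\Delta}$ of the $|\omega|^{4\Delta(q)}$ self-energy, and estimate $\sigma_{dc}\sim\int d\omega\,(-\partial f/\partial\omega)/\Gamma(\omega,T)$:

```python
import math

# Assumed model form: Gamma(w, T) = (w^2 + (pi*T)^2)^(2*Delta), a thermal
# completion of the NFL self-energy |w|^{4 Delta(q)}; the precise crossover
# function is an illustrative choice, not derived in the text.

def sigma_dc(T, Delta, n=20000, cut=40.0):
    """Midpoint-rule estimate of int dw (-df/dw) / Gamma(w, T) over |w| < cut*T."""
    d = 2.0 * cut * T / n
    total = 0.0
    for i in range(n):
        w = -cut * T + (i + 0.5) * d
        mdf = 1.0 / (4.0 * T * math.cosh(w / (2.0 * T)) ** 2)  # -df/dw
        gamma = (w * w + (math.pi * T) ** 2) ** (2.0 * Delta)
        total += mdf / gamma * d
    return total

Delta = 1.0 / 8.0                                  # q = 8, so 4*Delta = 1/2
s1, s2 = sigma_dc(0.01, Delta), sigma_dc(0.1, Delta)
slope = math.log(s2 / s1) / math.log(10.0)         # d(log sigma)/d(log T)
assert abs(slope + 4.0 * Delta) < 1e-6             # sigma_dc ~ T^{-4 Delta}
```

The exponent comes out exactly here because the modeled $\Gamma$ is a pure scaling function of $\omega/T$; in the microscopic calculation the same power is fixed by Eq.~(\ref{SEnfl}).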
As a result of the $\Omega/T$ scaling that holds in the same window of temperature and frequency scales, the conductivity can be expressed as, \begin{eqnarray} \sigma_{xx}'(\Omega,T) \propto \frac{1}{\Omega^{4\Delta(q)}}~H\bigg(\frac{\Omega}{T}\bigg), \end{eqnarray} where $H(...)$ is a universal scaling function. The dc resistivity in Eq.~(\ref{rhonfl}) displays a strong departure from the ``Planckian'' form, since $\rho_{dc}\sim T^{4\Delta(q)}$ with $4\Delta(q)<1$. However, the scattering rate associated with the temperature dependent frequency scale that determines the crossover from the high frequency to the dc behavior, $1/\tau_{\mathrm{opt}}$, saturates the Planckian bound: $1/\tau_{\mathrm{opt}} \sim a k_B T/\hbar$ with $a=O(1)$. \subsection{$2K_F$ Singularities} \label{2kfnfl} The sharp structure associated with a Fermi-surface in momentum space in a conventional Fermi-liquid leads to a singular $2K_F$ response. A natural question is whether the non-Fermi liquid states considered in this paper have a $2K_F$ response that differs from other known examples of (non-)Fermi liquids \cite{AIM,mross}. In this section, we analyze the modification to the singularity for the non-Fermi liquid regime discussed above and find that it is fixed by the scaling dimension of the fermions, $\Delta(q)$. We assume that the temperature, while higher than the crossover scale $\Omega_f^*$, continues to be much smaller than the Fermi-energy of the $c$ fermions, such that effects of thermal smearing can be ignored. We are interested in studying the response of the critical Fermi-surface to an external source field that couples to the fermion density (for any orbital $i$) at a $2K_F$ momentum. Naively, we expect there to be a suppression of the $2K_F$ response as a result of the smearing of the Landau quasiparticle due to scattering off the `local', incoherent $f-$electrons.
Moreover, we have here a situation where the vertex corrections from the interactions, which are usually important in determining the $2K_F$ response, are only weakly momentum dependent (since $\Pi_f({\vec q},\omega)$ has a weak dependence on ${\vec q}$). Let us then consider an external source field that couples to the $2K_F$ fermion density, i.e. a particle at $K_F$ and a hole at $-K_F$, through the following term in the action (it will be sufficient to consider a pair of antipodal patches for this purpose), \begin{eqnarray} \delta H = u\sum_i \int d^2{\vec x}~d\tau \bigg[c^\dagger_{i,L} c_{i,R} + \textnormal{H.c.}\bigg], \label{delH} \end{eqnarray} where $R$ and $L$ correspond to the two antipodal patches at $\pm K_F$. The $2K_F$ operator is defined as $\rho_{2K_F}({\vec x},\tau) = \sum_i c^\dagger_{i,L}({\vec x},\tau) c_{i,R}({\vec x},\tau)$ and we are interested in the long-distance behavior of the correlation function $C_{2K_F}({\vec x},\tau) = \langle \rho_{2K_F}^* ({\vec x},\tau)~\rho_{2K_F}({\vec{0}},0)\rangle$. We can obtain the singular structure of the correlation function from a scaling analysis, assuming the low-energy physics is scale invariant under the following scaling transformation ($z$ is the dynamical exponent), \begin{eqnarray} \omega' &=& \omega~b^{z/2},\\ p_x' &=& p_x~b,\\ p_y' &=& p_y~b^{1/2}. \end{eqnarray} Let us suppose that $u' = u~b^{\phi}$ ($\phi=1$ under the above rescaling, but we allow for a general $\phi$ to account for the possible singular renormalization from corrections to be considered below). Then the $2K_F$ operator satisfies, \begin{eqnarray} \rho'_{2K_F}({\vec x}',\tau') = b^\alpha~ \rho_{2K_F}({\vec x},\tau), \end{eqnarray} where \begin{eqnarray} \alpha = \frac{z+3}{2} - \phi.
\end{eqnarray} The Fourier transform then satisfies the scaling relation, \begin{eqnarray} C_{2K_F}({\vec p},\omega) = b^{(3+z)/2 - 2\alpha} ~C'_{2K_F}({\vec p}',\omega'), \end{eqnarray} where ${\vec p}$ in this case represents the deviation of the full momentum away from $2K_F \hat{x}$, i.e. $p_x~(p_y)$ is the direction perpendicular (parallel) to the Fermi-surface. Then we can immediately write the scaling form, \begin{eqnarray} C_{2K_F}({\vec p},\omega) = \frac{1}{\omega^{1 + \frac{3 - 4\alpha}{z}}} ~Y\bigg(\frac{\omega}{|p_y|^z},\frac{p_x}{p_y^2} \bigg). \end{eqnarray} For a conventional Fermi liquid, $\phi=1$ and $z=2$, which leads to the famous $\sqrt{\omega}$ singularity in the $2K_F$ correlations. For the non-Fermi liquid considered above, {if we ignore vertex corrections (which will be shown to be negligible below)}, $\phi=1$ and $z=1/(2\Delta(q))$, which leads to the singular dependence \begin{eqnarray} C_{2K_F}({\vec p},\omega) \sim \omega^{1-2\Delta(q)}, \end{eqnarray} which is what we would naively obtain by computing the density-density response $\sim\int_{\vec k} \int_\Omega~G_c({\vec p}+{\vec k},\omega+\Omega)~G_c({\vec k},\Omega)$ with the above self-energy in Eq.~(\ref{SEnfl}). \begin{figure} \begin{center} \includegraphics[width=0.4\columnwidth]{2kf.pdf} \end{center} \caption{One-loop vertex correction to the $2k_F$ operator, $\delta H$ in Eq.~(\ref{delH}) (denoted by cross). The solid and dashed lines correspond to the $c$ fermion Green's functions for the two antipodal patches $R$ and $L$. Red lines denote $f$ Green's functions and red dots represent the $V_{ijkl}$ vertex.} \label{2kf} \end{figure} Let us now compute the one-loop vertex correction (Fig. \ref{2kf}), which may {\it a priori} change the singular structure. For simplicity, we set all the external momenta and frequencies to zero.
The expression for the diagram is then given by, \begin{eqnarray} \delta u \sim U_{cf}^2 \int_{{\vec k},\Omega}~\frac{\Pi_f({\vec k},\Omega)}{i\textnormal{sgn}(\Omega)~|\Omega|^{4\Delta(q)} - \varepsilon_{{\vec k}}^+} \frac{1}{i\textnormal{sgn}(\Omega)~|\Omega|^{4\Delta(q)} - \varepsilon_{{\vec k}}^-}, \nonumber\\ \end{eqnarray} where $\varepsilon_{\vec k}^\pm = \pm v k_x + k_y^2$ are the dispersions near the $R/L$ patches and $v$ is the Fermi velocity; we have set the curvature to unity. Then, \begin{eqnarray} \delta u \sim \frac{U_{cf}^2}{v~U_f^{4\Delta(q)}} \int_{k_y,\Omega} \frac{|\Omega|^{4\Delta(q)}}{|\Omega|^{8\Delta(q)} + k_y^4}~ \frac{1}{|\Omega|^{1-4\Delta(q)}}. \end{eqnarray} The above $k_y$ integral is convergent and leads to, \begin{eqnarray} \delta u \sim \frac{U_{cf}^2}{v~U_f^{4\Delta(q)}} \int_{\Omega} |\Omega|^{2\Delta(q)-1} \sim |\omega|^{2\Delta(q)}. \end{eqnarray} We may now include the effect of this vertex on the density-density response, in order to compute the correction $\delta C_{2K_F}({\vec p},\omega)\sim\int_{\vec k}\int_\Omega \delta u ~G_c({\vec p}+{\vec k},\omega+\Omega)~G_c({\vec k},\Omega)\sim \omega$. This is clearly less singular than the result obtained from scaling (or equivalently, the bare correlation function without vertex corrections). This is simple to understand within our model, where the completely local form of the fluctuations associated with the incoherent $f-$fermions leads to scattering at all momenta for the $c-$fermions; in particular, no additional singularities arise as a result of any special scattering across the antipodal patches. Note, however, that the vertex correction will be important for density correlations near $q \approx 0$, and is indeed needed to obtain the finite non-zero compressibility that we argued characterizes these states.
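The exponent algebra of this subsection can be verified mechanically with exact rational arithmetic; the function below simply encodes the scaling form for $C_{2K_F}$ derived above (a bookkeeping check, not new physics):

```python
from fractions import Fraction as F

def c2kf_exponent(z, phi):
    """Exponent e in C_{2K_F} ~ omega^e, from the scaling analysis:
    alpha = (z+3)/2 - phi and C_{2K_F} ~ 1/omega^{1 + (3 - 4*alpha)/z}."""
    alpha = (z + 3) / F(2) - phi
    return -(1 + (3 - 4 * alpha) / z)

# Conventional Fermi liquid: phi = 1, z = 2  ->  the sqrt(omega) singularity.
assert c2kf_exponent(F(2), F(1)) == F(1, 2)

# Non-Fermi liquid: phi = 1, z = 1/(2*Delta(q))  ->  C ~ omega^{1 - 2*Delta(q)}.
for q in (6, 8, 10):
    Delta = F(1, q)
    z = 1 / (2 * Delta)
    assert c2kf_exponent(z, F(1)) == 1 - 2 * Delta
```

The same routine makes explicit that the vanishing one-loop correction ($\phi$ unrenormalized, $\phi=1$) is what pins the singularity to $\omega^{1-2\Delta(q)}$.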
\subsection{Quantum Oscillations} \label{qo} A hallmark of a Fermi liquid with well-defined quasiparticle excitations across the Fermi-surface is the observation of quantum oscillations, as a function of the inverse of an external magnetic field ($B$), in a number of physical observables that depend on the density of states (e.g. magnetization). In all of the translationally invariant models of non-Fermi liquids considered in this paper, there is a sharply defined Fermi-surface of electrons at ${\vec k}={\vec k}_F$ in momentum space, but the electronic quasiparticles are destroyed as a result of coupling to the locally critical degrees of freedom. Here we address the question of whether the non-Fermi liquids considered display quantum oscillations periodic in $1/B$ \cite{YBK95}, and whether they differ in character from oscillations in Fermi liquids. The fate of quantum oscillations in the marginal Fermi liquid has been addressed before \cite{wasserman,pelzer_qo,shekhter}. It is useful to treat the problem of oscillations in three dimensions, where all of our previously obtained results for the self-consistent solutions to the saddle point equations continue to hold. We focus on the example of the non-Fermi liquid (with $q>4$). We note that, strictly speaking, we should work at fixed density and account for the oscillations of the chemical potential as a function of the magnetic field. However, our calculations below will be done at a fixed chemical potential rather than a fixed density. In three dimensions, this is a justified approximation as an expansion in leading powers of the ratio of the magnetic field to the cross-sectional area of the Fermi-surface. At a fixed density, the chemical potential has an oscillatory correction to its value at zero field, whose amplitude vanishes linearly in the field \cite{shoenberg}. We do not include the effect of such chemical potential oscillations, which are subleading in powers of the ratio described above.
Let us now study the effect of a uniform magnetic field, $B$, along the $z-$direction through its orbital coupling to the $c$ fermions (we assume that there is no orbital coupling to the $U(1)$ charge associated with the $f$ fermions, which is explicitly true when we set $W_f\rightarrow0$). We analyze the structure of the saddle-point equations in the presence of the magnetic field in Appendix \ref{appqo:saddle}. A key property of the solution is that even at $B \neq 0$, both the $c$ and $f$ self-energies are completely local in space. In the NFL regime, the $f$ fermions continue to be described in terms of the $(0+1)$ dimensional SYK model, and the self-energy for the $c$ fermions as a result of the coupling to the $f$ fermions can be written as, \begin{eqnarray} \Sigma_{cf}(i\omega) = \Sigma_{cf}(i\omega,B=0) + \tilde\Sigma_{cf}(i\omega,B\neq0). \end{eqnarray} We study the effect of the first term above (independent of $B$) on all of the oscillatory phenomena; the effects arising from the explicit dependence of $\tilde\Sigma_{cf}$ on $B$ are of higher order in $(\omega_c/\mu_c)$, where $\omega_c=eB/m^*$ is the cyclotron frequency{\footnote{ For simplicity, we assume a spherical Fermi-surface for the $c$ fermions with $\varepsilon_{\vec k} = {\vec k}^2/(2m^*) - \mu_c$.}}. The magnetic field reorganizes the kinetic energy of the $c$ fermions into Landau ``bands'' in three dimensions, which disperse along the direction of the field{\footnote{Unlike Landau levels (LL) in two dimensions.}}. The Green's function for the $c$ fermions in the LL basis is given by, \begin{subequations} \begin{eqnarray} \label{LLG} G_c(n,p_z,i\omega_m) &=& \frac{1}{i\omega_m - \epsilon_n(p_z) + \mu_c -\Sigma_{cf}(i\omega_m)},~\textnormal{where}\\ \epsilon_n(p_z) &=& \bigg(n+\frac{1}{2}\bigg) \omega_c + \frac{p_z^2}{2m^*}.
\label{enp} \end{eqnarray} \end{subequations} We are interested in the oscillatory contribution to two quantities: (i) the spectral density of states, and (ii) the (orbital) magnetization. The oscillatory component of the spectral density of states in the limit of $\omega\rightarrow0$ at a finite $T$ is of the form, \begin{eqnarray} N_{\textnormal{osc}}(\omega\rightarrow0,T) = \frac{N(0)}{2\pi} \sum_{k=1}^{\infty} \frac{(-1)^k}{(2k)^{1/2}} ~ \sin\bigg[\frac{2\pi k \mu_c}{\omega_c} - \frac{\pi}{4}\bigg] ~e^{-\frac{2\pi k}{\omega_c} \frac{N(0)U_{cf}^2T^{4\Delta(q)}}{U_f^{4\Delta(q)}}}\sqrt{\frac{\omega_c}{\mu_c}}. \label{dosqo} \end{eqnarray} ($N(0)$ is the density of states of the non-interacting problem in the absence of $B$). Interestingly, we find that the density of states at zero energy has oscillations in $1/B$ even in the absence of quasiparticle excitations, with the period set by the standard cross-sectional area of the critical Fermi surface. The damping of the amplitude of the oscillations is determined by the imaginary part of the self-energy, which has an unconventional form compared to standard Fermi liquids. The details appear in Appendix \ref{app:qodos}. Let us now focus on the oscillatory component of the orbital magnetization, $M_{\textnormal{osc}}$, which is a thermodynamic quantity. It is possible to write down the oscillatory component of the free energy and compute the magnetization by taking appropriate derivatives. Instead, we compute the magnetization in a different manner here, by noticing that the dependence on $B$ enters only through the kinetic energy of the $c$ fermions (as already described above).
The magnetization density, defined per unit area, is then given by, \begin{eqnarray} M(B) = -\frac{1}{NA} \bigg\langle \frac{\partial H_c}{\partial B}\bigg\rangle, \end{eqnarray} where only the kinetic part of $H_c$ in the presence of the magnetic field enters the above expression: $H_c = -\sum_{{\vec r}\r'} h_{{\vec r}\r'} c^\dagger_{{\vec r}} c_{{\vec r}'} + \textnormal{H.c.}$, where after the Peierls substitution $h_{{\vec r}\r'} = t^c_{{\vec r}\r'}~e^{iA_{{\vec r}\r'}}$ ($A_{{\vec r}\r'}$ being the vector potential corresponding to a uniform $B$ along the $z-$direction). In the LL basis, the magnetization is then given by, \begin{eqnarray} M(B) = \sum_{n,\alpha,p_z} \langle c_{n\alpha,p_z}^\dagger c_{n\alpha,p_z}\rangle \sum_{{\vec r}\r'} \phi_{n\alpha}^*({\vec r}) \phi_{n\alpha}({\vec r}') \frac{\partial h_{{\vec r}\r'}}{\partial B}, \end{eqnarray} where $n$ labels the LL index, $\alpha$ denotes all of the degenerate states within each LL and $\phi_{n\alpha}({\vec r})$ is the LL wave function. The sum over ${\vec r},{\vec r}'$ can be carried out to yield, \begin{eqnarray} M(B) = \sum_{n,\alpha,p_z} \langle c_{n\alpha,p_z}^\dagger c_{n\alpha,p_z}\rangle \frac{\partial \epsilon_n(p_z)}{\partial B}, \end{eqnarray} where $\epsilon_n(p_z)$ is as defined in Eq.~(\ref{enp}). Equivalently, this can be obtained directly by writing the Hamiltonian in the LL basis as, \begin{eqnarray} H_c = \sum_{n,\alpha,p_z} \epsilon_n(p_z) c_{n\alpha,p_z}^\dagger c_{n\alpha,p_z}. \end{eqnarray} We then have, \begin{eqnarray} M(B) = \frac{1}{2\pi\beta}\sum_{\omega_m}\sum_n\int_{-\infty}^\infty \frac{dp_z}{2\pi} \frac{(n+1/2)\frac{B}{m^*}}{i\omega_m - (n+1/2)\omega_c + \mu_c - \frac{p_z^2}{2m^*} - \Sigma_{cf}(i\omega_m)}.
\label{mag} \end{eqnarray} Using the Poisson resummation formula, the oscillatory component of the magnetization is then, \begin{eqnarray} M_{\textnormal{osc}}(B) = \frac{1}{2\pi\beta}\sum_{\omega_m}\int_{-\infty}^\infty \frac{dp_z}{2\pi} \sum_{k=-\infty}^\infty \int_0^\infty dn \frac{(n+1/2)\frac{B}{m^*}~e^{2\pi i kn}}{i\omega_m - (n+1/2)\omega_c + \mu_c - \frac{p_z^2}{2m^*} - \Sigma_{cf}(i\omega_m)}. \end{eqnarray} After some standard manipulations, details of which appear in Appendix \ref{appqo:mag}, we obtain, \begin{eqnarray} M_{\textnormal{osc}}(B) \approx \frac{N(0)}{4\pi} \sum_{k=-\infty}^\infty (-1)^k e^{i\pi/4} \frac{1}{(2k^3)^{1/2}}\bigg(\frac{\mu_c}{m^*}\bigg)\sqrt{\frac{\omega_c}{\mu_c}} e^{2\pi ik\mu_c/\omega_c}{\cal{A}}\bigg(\frac{2\pi k}{\omega_c}\bigg). \label{mosc} \end{eqnarray} Here, ${\cal{A}}(...)$ is a purely real amplitude for the oscillation of the $k^{\textnormal{th}}$ harmonic [an explicit expression for the amplitude appears in Eq.~(\ref{alambda})]. We find that the period of oscillations is determined by the cross-sectional area of the Fermi-surface and remains unaffected by the form of the self-energy. The amplitude, on the other hand, is affected by the non-Fermi liquid form of the self-energy and has a non Lifshitz-Kosevich form {\footnote{Non Lifshitz-Kosevich forms for the amplitude of magnetization oscillations have been obtained in earlier holographic calculations \cite{HartnollQO}. }}. The universal scaling structure for the temperature dependence of the amplitude of the oscillations can be determined as follows (see Appendix \ref{appqo:mag}) \begin{eqnarray} {\cal{A}}(\lambda_k) = \bigg(\frac{2\pi k}{\omega_c}\bigg)^{1-\frac{1}{4\Delta(q)}} R(2\pi k x), \label{ampscal} \end{eqnarray} where $\lambda_k \equiv 2\pi k/\omega_c$, and $R(x)$ is a scaling function of $x= U_{cf}^2T^{4\Delta(q)}/(W_cU_f^{4\Delta(q)}\omega_c)$ that decays exponentially at large $x$.
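To illustrate that the oscillation period survives the loss of quasiparticles, the harmonic sum in Eq.~(\ref{dosqo}) can be evaluated numerically. The parameters below are illustrative assumptions: units with $e=m^*=1$ (so $\omega_c=B$), $\mu_c=10$, and a fixed damping rate standing in for the $T-$dependent exponent; the period in $1/B$ is then $1/\mu_c$ regardless of the damping:

```python
import math

MU_C, GAMMA = 10.0, 2.0   # chemical potential; stand-in damping rate (assumed)

def n_osc(u):
    """Oscillatory DOS as a function of u = 1/B (so omega_c = 1/u), summing
    the first few harmonics of the damped series in Eq. (dosqo)."""
    s = 0.0
    for k in range(1, 6):
        amp = (-1) ** k / math.sqrt(2 * k) * math.sqrt(1.0 / (MU_C * u))
        s += amp * math.sin(2 * math.pi * k * MU_C * u - math.pi / 4) \
                 * math.exp(-GAMMA * k * u)
    return s

# Estimate the oscillation frequency in u = 1/B from zero crossings.
us = [0.5 + 0.001 * i for i in range(2501)]        # u in [0.5, 3.0]
vals = [n_osc(u) for u in us]
crossings = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
f_est = crossings / (2.0 * (us[-1] - us[0]))
assert abs(f_est - MU_C) / MU_C < 0.05   # period set by mu_c, not the damping
```

The estimated frequency in $1/B$ tracks $\mu_c$ (i.e. the cross-sectional area of the Fermi surface), while the damping only suppresses the amplitude, consistent with the statement below Eq.~(\ref{mosc}).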
The scale for the damping of the amplitude of any given harmonic at finite temperature is then given by $T^*\sim \omega_c^{1/[4\Delta(q)]}$. \section{General Constraints on Local Criticality} \label{LC} Both the incoherent metal regime in the single band model (Sec.~\ref{mod}) and the marginal/non Fermi liquid regimes in the two-band model (Secs.~\ref{sec:MFL},\ref{nfl}) display ``local quantum critical'' behavior. By that, we mean that the temporal correlation functions decay as power laws (up to a correlation time $\xi_\tau \sim 1/T$), whereas the spatial correlations decay exponentially over a temperature-independent length-scale of a few lattice constants.\footnote{Note that this is different from the scenario where the system has long range spatial correlations in addition to the power law correlations in time, but only the frequency dependent correlations have anomalous dimensions~\cite{Si2} - a situation also referred to as ``local quantum criticality.'' This behavior has been invoked in the context of heavy Fermion quantum criticality~\cite{Si1}. Here, we refer only to the situation where the spatial correlations are strictly local.} In both of our models, the local quantum critical regime is unstable at sufficiently low temperatures: below a certain ``coherence temperature,'' a crossover to a different, more conventional behavior occurs. This is consistent with the fact that in both models, the entropy in the local quantum critical regime extrapolates to a non-zero value in the limit $T\rightarrow 0$, violating the third law of thermodynamics. Instead, the one- and two-band models cross over to a Fermi liquid regime below the energy scales $\Omega_c^*$ and $\Omega_f^*$, respectively, and relieve the excess entropy. This raises the question of whether, in generic lattice models with a finite number $N$ of degrees of freedom per unit cell, local quantum critical behavior can be stable down to $T=0$ (either as a quantum phase or at a quantum critical point).
On scaling grounds, it has been argued that local quantum criticality must be accompanied by a finite entropy density in the limit $T\rightarrow 0$~\cite{Kristan2011}, although some caveats have been pointed out~\cite{Hartnoll2012}.\footnote{For a hyperscaling-violating theory in $d$ spatial dimensions, the entropy density scales as $S\sim T^{(d-\theta)/z}$, where $\theta$ is the hyperscaling violation exponent. Naively, $z\rightarrow \infty$ implies a finite ground state entropy density. Ref.~\cite{Hartnoll2012} pointed out that this can be avoided if $\theta \rightarrow -\infty$.} Here, we argue that in any translationally invariant lattice model with finite $N$, local quantum criticality is not possible down to arbitrarily low temperature. More generally, systems with a weaker form of quantum criticality, where the correlation length diverges sub-polynomially in $1/T$ (as in Ref.~\cite{aji}), must have an entropy that scales as a power of the linear dimension $L$ in the limit $T\rightarrow 0$. We expect this large residual entropy to lead to an instability at sufficiently low temperature, resulting in a lower entropy state. First, consider a translationally invariant system where the correlation time $\xi_\tau \sim 1/T$, while the correlation length $\xi$ is independent of $T$. In a finite cluster of linear size $L = \alpha \xi$, the temporal correlation functions of local operators approach their values in the thermodynamic limit for sufficiently large $\alpha$. Hence, the temporal correlations decay as a power law up to times of the order of $\xi_\tau$. Since the system is finite, $\xi_\tau$ cannot exceed the inverse of the mean level spacing near the ground state, $\xi_\tau \leq 1/\delta(L)$; i.e., $\delta(L) \leq T$. Therefore, in a generic system with a finite $N$, local quantum critical behavior cannot persist to arbitrarily low $T$: the level spacing $\delta(L)$ of the finite cluster is non-zero, so the bound $\delta(L)\leq T$ must eventually be violated as $T\rightarrow 0$.
Next, we consider systems with a weaker version of local quantum criticality in which the correlation time, $\xi_\tau$, grows faster than polynomially as a function of the correlation length, $\xi$. The dynamical critical exponent $z$ (defined via $\xi_\tau \sim \xi^z$) is still infinite. Repeating the argument above for a finite cluster of linear size $L = \alpha \xi$ at $T=0$, we get that $\xi_\tau$ cannot exceed the inverse of the level spacing near the ground state, $\xi_\tau \leq 1/\delta(L)$. Hence, $\delta(L)$ must decrease faster than polynomially in $L$. In contrast, the level spacing near the ground state in generic many-body systems with local interactions is expected to depend polynomially on the system size \cite{RMT}. The anomalously small level spacing near the ground state has consequences for the entropy in the limit $T\rightarrow 0$. In the microcanonical ensemble, the low-temperature entropy scales as $S(T\rightarrow 0) \sim \log[\Delta E /\delta(L)]$, where $\Delta E$ is a sub-extensive energy shell. As a concrete example, suppose that $\xi \sim \log(\xi_\tau)$, as proposed in Refs.~\cite{chakravarty,aji} for certain quantum critical points. In this case, following the considerations above, $\delta(L) \leq e^{-L / \alpha}$. Therefore, we find that $S(T\rightarrow 0) \sim L/\alpha$. Even though such behavior does not violate the third law of thermodynamics in spatial dimension $d>1$, we do not expect it to hold down to $T=0$. The high density of low energy states generically leads to an instability that lifts the near-degeneracy of the ground state. Similarly, if the correlation length scales as $\xi \sim [\log(\xi_\tau)]^\gamma$, we get that $S(T\rightarrow 0) \sim L^{1/\gamma}$. We note some interesting exceptions to this rule. The disorder-averaged correlations of disordered systems at infinite randomness fixed points~\cite{DSF} are known to display $z=\infty$ behavior.
This behavior comes from rare regions where the correlation time is much longer than the typical one. However, we do not expect such rare region effects in generic translationally invariant systems. Another exception is found in certain three-dimensional topologically ordered states, called ``fracton states''~\cite{haah,fracton1,fracton2}, which have $S(T\rightarrow 0) \sim L$ without any fine tuning. However, this property probably does not lead to quantum critical behavior of local correlation functions, since local operators have vanishingly small matrix elements between the topologically distinct near-degenerate states that are responsible for the low-temperature entropy. \section{Discussion} \label{sec:discussion} In this work, we have defined a class of translationally invariant models that can be solved in the large $N$ limit. Even though the ground states of these models are conventional (although strongly renormalized) Fermi liquids, they exhibit a crossover at an intermediate energy scale - which can be parametrically smaller than the microscopic coupling constants - into a non-Fermi liquid regime. This regime is characterized by local quantum critical scaling of certain correlation functions - i.e., the correlation time diverges as $\xi_\tau \sim 1/T$, while the correlation length is nearly temperature-independent. Interestingly, many of the properties of the non-Fermi liquid regimes are reminiscent of those seen in different quantum materials. In the one-band model of Sec.~\ref{mod}, the resistivity grows linearly with temperature, and does not saturate at the Mott-Ioffe-Regel limit. The two-band version of the model (Sec.~\ref{sec:MFL},\ref{nfl}) exhibits a regime where the light band has a critical Fermi surface - either a marginal Fermi liquid or a non-Fermi liquid, depending on the precise nature of the interactions between the heavy and light bands.
The resistivity grows as $\rho \propto T$ in the MFL and as $\rho\propto T^{4/q}$ (with $q>4$) in the NFL. In this section, we will put these results in the context of previous work, and discuss their possible implications both for more generic models (in particular, ones that do not involve the limit of a large number of degrees of freedom per unit cell) and for strongly correlated materials. \subsection{Relation to other work} Several models composed of lattices of coupled SYK dots have been studied recently~\cite{Gu17,SS17, Yao, Balents, mcgreevy,Zhang17,shenoy}. Of these, the one-band model we introduce here is closest to the model solved by Song, Jian, and Balents~\cite{Balents}, who studied a lattice of SYK dots coupled by single-particle hopping. The main difference between that work and the present one is that the model studied here is translationally invariant, whereas the model of Ref.~\cite{Balents} is strongly disordered - both the interactions and the hopping matrix elements vary from site to site. The translational invariance allows us to address the properties of the Fermi surface in the low-temperature Fermi liquid regime. In the strong coupling limit, we find a strongly renormalized Fermi-liquid with a momentum independent self-energy. Interestingly, however, the properties of the high-temperature ($T\gg W_c^2/U_c$) LICM phase are similar in the two models. This is a consequence of the fact that, in this regime, the correlations become short-range in space; hence, the presence of translational invariance does not modify the properties of the system in a fundamental way. For example, even with translational symmetry, there is no remnant of a Fermi surface, and the resistivity is linear in temperature in both cases.
In our model, the resistivity scales as $T^2$ in the low temperature regime ($T\ll W_c^2/U_c$), as expected in a Fermi liquid; in contrast, in the model studied in Ref.~\cite{Balents}, we expect the resistivity to saturate to a temperature-independent constant, due to the presence of strong disorder. Earlier work~\cite{Parcollet1} considered a model of localized moments with long-ranged interactions of random sign, coupled to a band of itinerant electrons. Even though this model is very different from the two-band model considered here - in particular, our model is translationally invariant and has only local interactions - the properties of the intermediate temperature ``marginal Fermi liquid'' regime realized in both models are similar. Hence, our model demonstrates that this regime - as well as the non-Fermi liquid regime discussed in Sec.~\ref{nfl} - can be realized even in the clean limit and can host a critical Fermi-surface of electrons. Finally, our results are -- not surprisingly\footnote{It is the possibility of a simple holographic description of the $0+1$-D SYK model that has partly contributed to the tremendous recent interest in this model. Our two-band model is roughly similar in spirit to the ``semi-holographic'' theory in Ref.~\cite{SH11}, although of course the details are very different.} -- similar to those found in strongly coupled theories that can be solved using holographic dualities~\cite{SH11,Liu1,Liu2,tong}. These models give locally quantum critical behavior associated with a non-vanishing entropy in the limit $T\rightarrow 0$. Upon coupling the locally quantum critical degrees of freedom to itinerant fermions, marginal Fermi liquid and non-Fermi liquid states can result (see also Ref.~\cite{mcgreevy}). As in the case of lattices of SYK dots, these models involve taking the limit of a large number of local degrees of freedom.
Moreover, as in our model, the locally quantum critical regime is unstable at low energies to the formation of either long-range ordered states or a heavy Fermi liquid. \subsection{Bounds on transport} It is interesting to discuss our results in the context of possible ``universal bounds'' on transport coefficients. It has been proposed~\cite{QPT,Zaanen04} that the relaxation time (or ``dephasing time''~\cite{QPT}) is bounded by the Planckian time, $1/\tau_P \le a k_B T/\hbar$, where $a$ is an unknown constant of order unity. Following this idea, a number of bounds on transport coefficients have been proposed~\cite{son,Hartnoll15,Blake16,Hartnoll17}. An interesting conjectured bound on the heat and charge diffusion constants in Ref. \cite{Blake16} involved the many-body `chaotic' properties of the system (see Appendix \ref{chaos}). However, explicit calculations~\cite{Lucas1,Lucas2} have demonstrated violations of such bounds in different settings (at present there is no known counterexample to the bound proposed in Ref. \cite{Hartnoll17}). Empirically, the transport lifetime of many metals where the resistivity is linear in $T$ has been found to be not far from $\hbar/(k_B T)$~\cite{Bruin13}. As we already described in the introduction, there is no unique choice of a transport scattering rate that may have an associated universal bound. In order to compare with the procedure adopted in Ref.~\cite{Bruin13}, where the scattering rate was extracted by fitting the transport data to a Drude-like form, let us focus on the case of the MFL and NFL states discussed in sections~\ref{sec:MFL},\ref{nfl} above. For the model in section~\ref{sec:MFL}, we may extract the renormalized mass $m^*/m \sim (\nu_0 U_{cf}^2/U_f) \ln(1/T_{\textnormal{coh}})$ from the low temperature FL regime (as measured in quantum oscillations) below $T_{\textnormal{coh}} \sim \Omega_f^*$.
Using $\sigma=ne^2 \tau_{\textnormal{dc}}/m^*$ to define $\tau_{\textnormal{dc}}$ in the MFL regime at high temperatures leads to $1/\tau_{\textnormal{dc}} \sim T/\ln(1/T_{\textnormal{coh}}) \ll T$, which satisfies a Planckian bound for the particular choice of the dc scattering rate. Note, however, that the resistivity in the MFL is $\rho \propto T$ with no logarithmic corrections, i.e., it is not simply proportional to $1/\tau_{\textnormal{dc}}$. A similar procedure adapted to the NFL regime of the two-band model with $q>4$ in section~\ref{nfl} leads to a lifetime with a strongly non-Planckian form, \begin{eqnarray} \frac{1}{\tau_{\textnormal{dc}}} \sim \frac{T^{4\Delta(q)}}{T_{\textnormal{coh}}^{4\Delta(q) - 1}}. \end{eqnarray} However, it is still true that $1/\tau_{\textnormal{dc}} < T$ (since in the NFL regime, $T > T_{\textnormal{coh}}$). It is interesting to point out that in all the cases studied here, the ``optical scattering rate,'' $1/\tau_{\textnormal{opt}}$, defined as the frequency scale at which the high frequency optical conductivity approaches its dc value, satisfies $1/\tau_{\textnormal{opt}} < a k_B T/\hbar$ with $a=O(1)$. In the incoherent regime of the one-band model and in the two-band non-Fermi liquid, $1/\tau_{\textnormal{opt}} \sim T$; in the two-band MFL, $1/\tau_{\textnormal{opt}} \sim T/\ln^2(1/T)$ [see Eq. (\ref{toptmfl})]. Thus, $1/\tau_{\mathrm{opt}}$ satisfies a Planckian-type bound, but the temperature dependence of the dc resistivity does not necessarily follow that of $1/\tau_{\mathrm{opt}}$. \subsection{Implications for generic models} Clearly, the models (\ref{hc1},\ref{Htot}) are fine-tuned in many ways. In particular, the number of local degrees of freedom, $N$, is taken to be large, and the interactions $U_{ijk\ell}$ are taken to be independent, random variables whose average is precisely zero.
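As a minimal numerical sketch of the last point (our own illustration; the index structure and the variance normalization $U^2/N^3$ are schematic assumptions, not the precise conventions of the models (\ref{hc1},\ref{Htot})), couplings of this type are independent, zero-mean Gaussian random variables:

```python
import numpy as np

# Schematic sketch (not the paper's exact conventions): draw SYK-like
# two-body couplings U_{ijkl} as independent, zero-mean Gaussian random
# variables, with the variance scaled as U^2/N^3 so that the typical
# interaction energy per site stays finite as N -> infinity.

def draw_couplings(N, U, seed=0):
    """Return an (N, N, N, N) array of zero-mean Gaussian couplings."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=U / N**1.5, size=(N, N, N, N))

N, U = 12, 1.0
Uijkl = draw_couplings(N, U)

# The ensemble average is precisely zero; the sample mean is small
# (bounded here by ~10 standard errors of the mean).
assert abs(Uijkl.mean()) < 10 * (U / N**1.5) / np.sqrt(N**4)
# The sample variance matches the chosen normalization up to sampling error
assert np.isclose(Uijkl.var(), (U / N**1.5) ** 2, rtol=0.1)
```

Only the zero mean and the $N$-scaling of the variance matter for the large-$N$ solution; any distribution with these moments would do.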
It is thus important to ask which of the properties of the solution are peculiar to these models, and which are expected to hold more generically, even in less fine-tuned models with a finite number of degrees of freedom per unit cell. Here, we will discuss possible implications of our results for such generic models. In particular, we describe how local quantum critical behavior may arise in an intermediate temperature window in systems where the coherence scale (e.g., the effective Fermi energy or the Bose condensation temperature) is much smaller than the microscopic scale. We then formulate a conjecture for an effective ``coarse grained'' description of non-Fermi liquid states in generic models, inspired by the models constructed in this work, based on notions of many-body quantum chaos. \subsubsection{Local quantum criticality in generic models} The local quantum critical behavior found in some regimes of our models is unlikely to be stable in generic models down to zero temperature. A diverging correlation time without a corresponding diverging correlation length clearly requires an infinite number of local degrees of freedom. Moreover, as we have argued in Sec.~\ref{LC}, even a correlation length that diverges sub-polynomially with the correlation time implies a divergent (although not necessarily macroscopic) entropy in the $T\rightarrow 0$ limit. Hence, an instability is likely to occur at a sufficiently low temperature. Nevertheless, we speculate that local quantum critical behavior (with a correlation time that scales as $\hbar/T$ and a nearly temperature-independent correlation length) can appear generically in strongly correlated metals, over a finite but broad temperature window. To see how such behavior can arise, consider a model with a metallic (Fermi-liquid) ground state.
At low temperature, the single-particle lifetime scales as $1/\tau \sim T^2/\Omega^*$, where $\Omega^*$ is a non-universal ``coherence scale'' that depends on the strength and form of the inter-particle interactions. If the interactions are sufficiently strong, $\Omega^*$ may be much smaller than the microscopic coupling constants of the model, such as the hopping or the interaction strength. We expect that $\Omega^* \sim E_F^* \sim v_F^* k_F$, where $E_F^*$, $v_F^*$ are the renormalized Fermi energy and Fermi velocity, respectively\footnote{This is certainly not universally the case; for example, in the vicinity of a metallic quantum critical point with a ${\mathbf Q}=0$ order parameter, $\Omega^*$ and $v_F^* k_F$ are parametrically different. Here, we are assuming that there is a single energy scale $\Omega^*$, which is small not because of the proximity to a quantum critical point, but due to strong microscopic interactions.}. This is indeed the case in the one-band model of Sec.~\ref{mod}. In the Fermi liquid regime, temporal correlations decay exponentially over a timescale $\xi_\tau \sim 1/T$. Spatial correlations decay over the thermal length, $\xi_T \sim v_F^*/T$. Thus, crudely extrapolating to $T \sim \Omega^*$, where the Fermi liquid behavior starts to break down, we get that the correlation length at the crossover temperature becomes $\xi_T \sim v_F^*/\Omega^* \sim \lambda_F$, i.e., it reaches the microscopic length scale set by the Fermi wavelength. (In a typical metal, this is of the same order of magnitude as the lattice spacing.) On the other hand, the correlation time at this temperature is of the order of $1/T$.
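Collecting the estimates above into one line (using the assumption $\Omega^* \sim v_F^* k_F$ stated earlier):

```latex
\frac{1}{\tau} \sim \frac{T^2}{\Omega^*}, \qquad
\xi_T \sim \frac{v_F^*}{T}
\quad \xrightarrow{\; T \sim \Omega^* \;} \quad
\xi_T \sim \frac{v_F^*}{\Omega^*} \sim \frac{1}{k_F} \sim \lambda_F ,
\qquad \xi_\tau \sim \frac{1}{T} .
```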
If the renormalized Fermi energy is much smaller than the microscopic energy scales (set by the interaction strength and the hopping), then at $T \sim \Omega^*$, the correlations extend over a time which is much longer than the inverse of the ``bare'' Fermi energy.\footnote{A classic example of a Fermi system with a low coherence temperature is the normal state of $^3$He; the renormalized Fermi energy is significantly smaller than the bare one, due to the strong inter-particle interactions. The temperature window above the renormalized Fermi energy, where Fermi liquid behavior breaks down but the system is still quantum mechanical, has been termed a ``semi-quantum liquid''~\cite{Andreev1979}.} What happens at temperatures higher than $\Omega^*$? The spatial correlations already decay over a microscopic length scale at $T\sim \Omega^*$, so it is natural to assume that the correlation length is not strongly temperature dependent in this regime. We argued above that the correlation time at $T\sim \Omega^*$ is $\xi_\tau \sim 1/T$. Further, we assume that the ``scrambling rate'' (discussed in Appendix~\ref{chaos}) at this temperature is close to saturating the bound~\cite{Maldacena2016}, $\lambda_L \sim T$. Therefore, one can guess that the bound remains nearly saturated at $T>\Omega^*$. The natural appearance of a Planckian time scale implies that the correlation time $\xi_\tau$ remains of the order of $1/T$ even above $\Omega^*$. 
Hence, if the temperature window between the renormalized Fermi energy and the bare one can be made very large, then we expect this window to exhibit some form of ``local quantum criticality.''\footnote{Interestingly, the scenario discussed here is similar to the behavior found in the ``spin-incoherent Luttinger liquid'' regime~\cite{sill}; above the spin coherence temperature, the single-particle Green's function decays in space over a length scale set by the inter-particle spacing, but displays power-law behavior in time, up to $\tau \sim 1/T$. However, in this case, the spatial correlations of other operators - such as the density - still decay as a power law. In contrast, we are assuming that not only the single particle correlation functions become short ranged at $T\sim\Omega^*$, but {\it all} correlation functions do.} \newpage \subsubsection{Towards a ``coarse-grained'' description of non-Fermi liquid behavior in correlated materials} \label{conj} As outlined in the introduction, there is a zoo of materials that display non-Fermi liquid behavior, in terms of their single-particle properties and transport, over a broad range of temperatures. However, even amongst all of these materials there is a varying degree to which the non-Fermi liquid behavior persists down to the lowest temperatures. Some general observations across the wide variety of systems displaying non-Fermi liquid properties are as follows: \begin{enumerate} \item In many correlated metals, the dc resistivity is often linear in temperature{\footnote{ Other power-laws have also been reported, e.g. in some families of the ruthenates \cite{allen}.}}, i.e. $\rho_{\textnormal{dc}}\sim T$, and persists over a broad intermediate range of temperatures with a temperature independent slope. Moreover, it shows no sign of saturation and exceeds the Mott-Ioffe-Regel limit.
\item In a number of materials where the above is true, there is a low {\it coherence scale} below which there is a departure from the non-Fermi liquid behavior and a crossover to more conventional Fermi liquid type behavior (and possibly to other ordered phases). Moreover, the extrapolated zero temperature entropy from the finite temperature non-Fermi liquid regime is finite and has been reported in certain members of the ruthenates family \cite{allen} and in the cobaltates \cite{bruhwiler}. This excess entropy is relieved below the coherence scale associated with the low temperature Fermi liquid. There are a number of outliers to the above description, most prominent amongst them being the optimally doped cuprates and certain quantum critical heavy-Fermion materials, where the non-Fermi liquid behavior observed at intermediate temperatures persists down to the lowest temperatures without any changes or characteristic crossovers. Similarly, the extrapolated zero temperature entropy in the non-Fermi liquid regime is zero (see e.g. Ref. \cite{loram} for the cuprates). \item The intermediate scale behavior is remarkably similar in a wide variety of these systems, in spite of the microscopic details being totally distinct. This is particularly surprising, since it appears that there is an emergent universal behavior and the details of the microscopic physics are somehow not important. However, the coefficient of the $T$-linear transport scattering rate, for example, can generically differ between systems and depend on the underlying details. \end{enumerate} The above experimental observations pose an interesting theoretical challenge. In particular, the apparent universality of the phenomena suggests that the explanation does not rely too much on the precise microscopic details of any single system but instead is generic to strong correlations between the electrons at the lattice scale.
The theoretical models studied by us in this paper are consistent with a number of these empirical observations. Is it then possible to draw some general lessons from this exercise in order to bridge the gap between a realistic description of materials and the solvable models considered by us? Below, we propose one possible route to a coarse-grained description of non-Fermi liquid metals in a general setting, valid over scales much longer than any microscopic scale in the problem and built on a few key assumptions, that allows us to reproduce the features described above. This will allow us to place the specific models studied in this paper in context within a conceptual framework that applies to generic strongly correlated materials. The apparent universality of intermediate scale non-Fermi liquid physics in diverse correlated systems naturally leads to the possibility that there is a universal coarse-grained description. After all, if the macroscopic behavior is universal, it makes sense that the universality has set in at some finite length/time scale large compared to microscopic scales. This length/time scale will itself be non-universally related to the microscopic scales, but the subsequent behavior at even longer scales will be universal. There will thus be a universal coarse-grained description (much like in hydrodynamics or other theories of universal macroscopic phenomena). We will use the notion of `many-body' quantum chaos to formulate our conjectures below (see Appendix \ref{chaos} for a brief exposition of the subject).
\begin{itemize} \item {\bf Conjecture 1 ~ (C1)---} {\it For systems that display non-Fermi liquid behavior over a wide range of temperatures above a low crossover scale $(\Omega^*)$, there is an intermediate emergent lengthscale $\ell$, with $a\ll \ell\ll L$ ($a\equiv$ lattice spacing and $L\equiv$ system size), such that a sub-system defined within a region of size $\ell$ is maximally chaotic. The entire system may or may not be maximally chaotic globally (on scales $\sim L$).} \item {\bf Conjecture 2 ~ (C2)---} {\it For a patch of size $\ell$, the assumption of maximal chaos severely restricts the structure of general $n$-point correlators, i.e., it restricts them to a small set of universality classes}. \end{itemize} Let us state the first conjecture a bit more sharply. Consider the squared commutator for generic local operators, $W$ and $V$, \begin{eqnarray} {\cal{C}}(t,{\vec r}) = \langle [V({\vec r},t),~W(\vec{0},0)]^2 \rangle_\beta \sim \epsilon ~e^{\lambda_L t}, \end{eqnarray} where $\epsilon$ depends on ${\vec r}$. The statement of Conjecture C1 is that for ``normal'' non-Fermi liquid systems, there is a length scale $\ell \gg a$ ($a$ = microscopic length scale) such that for $ a \ll |{\vec r}| \ll \ell$, and for times $\ell/v_B \gg t \gg |{\vec r}|/v_B$, the Lyapunov exponent is $\lambda_L = 2\pi T$, thereby saturating the chaos bound. These time scales are long enough for two local operators at ${\vec r}, {\vec r}'$ within a patch to mix but short enough that information has not moved between patches. On the other hand, for $|{\vec r}|\gg\ell$, the system need not be maximally chaotic with $\lambda_L\leq 2\pi T$. Conjecture C2 simply says that the physics of a maximally chaotic bubble is restricted to some universality classes. A coarse-grained description of the system would then consist of ``islands'' of typical size $\ell$ that are maximally chaotic, which are coupled to each other by generic hopping and interaction terms.
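A back-of-the-envelope consistency check of Conjecture C1 can be phrased numerically (all parameter values below are hypothetical; units with $\hbar=k_B=1$): the squared commutator ${\cal C}(t)\sim \epsilon\, e^{2\pi T t}$ reaches $O(1)$ at the scrambling time $t_* = \ln(1/\epsilon)/2\pi T$, which should fit inside the window $|{\vec r}|/v_B \ll t \ll \ell/v_B$:

```python
import numpy as np

# Hypothetical illustration of Conjecture C1 (hbar = k_B = 1): inside a
# maximally chaotic patch the squared commutator grows as
# C(t) ~ eps * exp(lambda_L * t), with lambda_L saturating the chaos bound.

def lyapunov_max(T):
    """Bound-saturating Lyapunov exponent, lambda_L = 2*pi*T."""
    return 2.0 * np.pi * T

def scrambling_time(T, eps):
    """Time at which eps * exp(lambda_L * t) reaches O(1)."""
    return np.log(1.0 / eps) / lyapunov_max(T)

# Made-up parameters: temperature, small prefactor, butterfly velocity,
# patch size ell, and operator separation r (all in arbitrary units).
T, eps = 0.1, 1e-6
v_B, ell, r = 1.0, 500.0, 5.0

t_star = scrambling_time(T, eps)
# C1 requires the growth window r/v_B << t << ell/v_B to contain t_star
assert r / v_B < t_star < ell / v_B
```

The point of the sketch is only that, for a sufficiently large patch $\ell$, maximal scrambling can complete well before information leaks between patches.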
Why might we expect these conjectures to be true? Let us start with C1. We have already noted that it is natural that there exists a long length scale $\ell$ at which universality first emerges in a ``normal'' NFL system. Sufficiently complex and generic strong local interactions may make it natural that the dynamics is maximally chaotic at these length scales (with no guarantee of course that maximal chaos persists out to macroscopic scales). We regard this as roughly analogous to the assumption of molecular chaos in the kinetic theory of gases. As for Conjecture C2, given the existence of a bound on the Lyapunov exponent, it is again natural that systems that saturate the bound are very special and have universal properties. Further inspiration for these conjectures comes from current ideas on strongly coupled continuum quantum field theories and their relationship to quantum black holes. Consider a UV field theory with a conserved global $U(1)$ symmetry that is sufficiently strongly coupled that it has a classical gravity dual. We assume the theory is at a non-zero density of the global $U(1)$ charge. This UV theory will flow under the RG to some IR behavior that in general will describe different physics. As the temperature is decreased, there will be a change from a regime controlled by the strongly coupled UV theory to whatever IR theory emerges under the RG flow. In the high temperature regime, in the gravity description of the UV theory we should include a charged black hole. It is well known that this black hole has a residual zero temperature entropy. Thus the corresponding high temperature behavior of the boundary quantum field theory is IR-incomplete and has an extrapolated ground state entropy. Now, it is believed that black holes are the ``fastest scramblers'' \cite{sekino}, {\it i.e.}, they saturate the chaos bound. Thus in the high-$T$ regime the quantum field theory we are considering will saturate the chaos bound.
However, there is no guarantee that this will continue to be the case as the temperature is decreased. The restricted behavior of maximally chaotic systems can then be plausibly related to the different universality classes of systems captured holographically by charged black holes. This mimics the situation we envisage for generic, complex, strongly coupled lattice models. Of course, the presence of the lattice (and the concomitant finite number of degrees of freedom/unit cell) requires that maximal chaos can only develop on some length scale much bigger than $a$. We of course leave the exploration of these conjectures, and their development into a useful coarse-grained description of non-Fermi liquids, for the future. Here they provide a conceptual context within which we can place the solvable models studied in this paper. Each SYK island is a specific example of a maximally chaotic system. Thus we can view our models as a toy example of a macroscopic system made out of coupled maximally chaotic bubbles. We note, however, that a future development of a universal coarse-grained NFL description will need to be more refined than simply modeling each bubble by an SYK island \footnote{It may be tempting to contemplate that the large number - $O[(\ell/a)^d]$ - of degrees of freedom within each bubble provides the large-$N$ necessary to obtain the solvable SYK island as a model for the bubble. However, we believe this is incorrect and it is too naive to simply expect exactly SYK-like physics to emerge at the scale $\ell$. Note that, though the charged SYK model strictly speaking has only a global $U(1)$ internal symmetry, it has a statistical $U(N)$ internal symmetry after averaging over realizations. This will not be the case in generic models. One manifestation of the $U(N)$ symmetry is the emergence - in the low-$T$ Fermi liquid phase - of $N$ degenerate Fermi surfaces of electrons.
We certainly do not expect this to happen in generic models even if they develop intermediate scale maximal chaos. }. The refinement will need to include spatial locality within each bubble. Further, it will need to include microscopic lattice symmetries as effective internal symmetries at scale $\ell$. Finally, it will have to incorporate the right microscopic Luttinger/Lieb-Schultz-Mattis constraints involving both charge conservation and the microscopic lattice symmetries. Despite these deficiencies, we find it encouraging that the toy model of coupled SYK islands leads to behavior reminiscent of experiments. \section{Acknowledgements} We thank E. Altman, R. Gopakumar, D. Jafferis, A. Nahum, S. Sachdev, and B. Swingle for discussions. DC is supported by a postdoctoral fellowship from the Gordon and Betty Moore Foundation, under the EPiQS initiative, Grant GBMF-4303 at MIT. DC acknowledges the hospitality of the Aspen Center for Physics, which is supported by NSF grant PHY-1607611. DC, EB, and TS acknowledge the hospitality of KITP at UCSB, where this work was initiated, which is supported by NSF grant PHY-1125915. TS is supported by a US Department of Energy grant DE-SC0008739, and in part by a Simons Investigator award from the Simons Foundation. \newpage {\it Note added:} As this paper was being completed for submission, we became aware of a related work \cite{SSmagneto} that studies {\it disordered} higher dimensional generalizations of the SYK model. Our point of strongest overlap is in the discussion of the two-band model where both constructions find a marginal Fermi liquid (in addition, we also obtain a critical Fermi surface in this example). In Ref.~\cite{SSmagneto}, the authors analyze the magnetotransport properties of such a disordered metallic regime; we study the fate of such a critical Fermi surface under the effect of a magnetic field, which gives rise to quantum oscillations even in the absence of quasiparticles.
\section{{\bf 1. INTRODUCTION\/}} Li$_2$CuO$_2$ is the first \cite{Hoppe1970} and the most frequently studied compound of the growing class of edge-shared spin-chain cuprates \cite{Matsuda1996,Matsuda2001,MasudaPRB2005,Enderle2005,Drechsler2007PRL,Drechsler2007JP}. Owing to its structural simplicity with ideally planar CuO$_2$ chains (see Fig.\ 1), it has been considered a model quasi-one-dimensional (1D) frustrated quantum spin system. In almost all edge-shared cuprate chain compounds the spins are expected to be coupled along the chains via nearest neighbor (NN) ferromagnetic (FM) and next-nearest neighbor (NNN) antiferromagnetic (AFM) exchange interactions, $J_1$ and $J_2$, respectively. Due to the induced frustration, FM and spiral in-chain correlations compete. While in the 1D model the ground state is governed by the ratio $\alpha=-J_2/J_1$, the actual 3D magnetic order sensitively depends on the strength of the inter-chain couplings and anisotropy. In Li$_2$CuO$_2${}, below $T_\mathrm{N} \approx 9 \,\mathrm{K}$ \cite{Sapina1990,Chung2003}, a long-range collinear commensurate AFM inter-chain with FM in-chain (CC-AFM-FM) magnetic ordering evolves. However, a proper understanding, necessary for \begin{figure}[!b] \begin{center} \begin{minipage}{0.9\textwidth} \hspace{.03\textwidth} \includegraphics[width=0.13\textwidth]{Fig1alorenz.eps} \hspace{.005\textwidth} \includegraphics[width=0.35\textwidth,angle=0]{Fig1blorenz.eps} \end{minipage} \end{center} \caption{(Color online) Left: The crystallographic structure of Li$_2$CuO$_2$\ comprises two AFM coupled CuO$_2$ spin-chains per unit cell running along the $b$-axis (orange {\large \color{orange} $\bullet$} -- Cu$^{2+}$, red {\large \color{red} $\bullet$} -- O$^{2-}$, bright blue {\large \color{brblue} $\bullet$} -- Li$^{+}$). The unit cell is indicated by the outer black cuboid.
Right: the main intra- and inter-chain exchange paths, $J_1$, $J_2$, and $\tilde{J}_2$ marked by blue arcs and dashed lines, respectively. Notice the frustration introduced by an AFM inter-chain coupling for any non-FM in-chain ordering. } \label{fig::Struct} \end{figure} a critical evaluation of theoretical studies, especially of electronic/magnetic structure calculations \cite{Mizuno1998,Weht1998,deGraaf2002,DrechslerJMMM2007,Xiang2007}, is still missing. \begin{figure*}[t] \begin{minipage}{0.99\textwidth} \begin{minipage}{0.49\textwidth} \begin{flushleft} \includegraphics[width=\textwidth]{Fig2alorenz.eps} \end{flushleft} \end{minipage} \hspace{0.75cm} \begin{minipage}{0.40\textwidth} \begin{flushright} \includegraphics[width=\textwidth]{Fig2blorenz.eps} \end{flushright} \end{minipage} \end{minipage} \caption{(Color online) (a) Constant energy scans for momentum transfer along the chains with $\mathbf{q}=(0$ $K$ $1)$ at IN12 ($\Delta E\le 3\,\mathrm{meV}$; open symbols) and $\mathbf{q}=(0$ $1+K$ $0)$ for IN8 data ($\Delta E \ge 5 \,\mathrm{meV}$; filled symbols). (b) Intensity map of the low energy spectrum, interpolated from the IN12 data points ({\small $\bullet$}). The sizeable intensity below the gap energy, i.e.\ at energy transfer $\lesssim 1$\,meV and $q_b$ around $-0.05$, is a spurious \textsc{Bragg} tail. (c) The dispersion perpendicular to the chains was measured by constant $Q$-scans; lines are \textsc{Gauss}ian fits. Filled symbols: the measured intensity at the zone center for $T > T_\mathrm{N}$.} \label{fig::INS} \end{figure*} In particular, this concerns a sufficiently precise knowledge of the main exchange interactions.
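The $J_1$-$J_2$ frustration introduced above can be illustrated with a minimal classical sketch (our own toy calculation, not part of the analysis in this paper): for a classical spiral of pitch $Q$, the energy per spin is $E(Q)\propto J_1\cos Q + J_2\cos 2Q$, so with FM $J_1<0$ and AFM $J_2>0$ the FM state ($Q=0$) is stable for $\alpha=-J_2/J_1<1/4$, while a spiral with $\cos Q = -J_1/4J_2$ wins beyond that:

```python
import numpy as np

# Toy classical J1-J2 chain (illustration only): spiral energy per spin
# E(Q) = J1*cos(Q) + J2*cos(2Q), with FM J1 < 0 and AFM J2 > 0.
# The FM minimum Q = 0 survives for alpha = -J2/J1 < 1/4; beyond that
# the minimum moves to a spiral with cos(Q) = -J1/(4*J2).

def pitch_angle(J1, J2):
    """Classical pitch angle Q minimizing E(Q); Q = 0 means FM order."""
    alpha = -J2 / J1
    if alpha <= 0.25:
        return 0.0
    return np.arccos(-J1 / (4.0 * J2))

def energy(Q, J1, J2):
    return J1 * np.cos(Q) + J2 * np.cos(2.0 * Q)

assert pitch_angle(-1.0, 0.2) == 0.0                  # alpha = 0.2 < 1/4: FM
assert np.isclose(pitch_angle(-1.0, 0.5), np.pi / 3)  # alpha = 0.5 > 1/4: spiral
# The analytic minimum agrees with a dense numerical scan over Q
Qs = np.linspace(0.0, np.pi, 10001)
assert energy(pitch_angle(-1.0, 0.5), -1.0, 0.5) <= energy(Qs, -1.0, 0.5).min() + 1e-9
```

In the real material, of course, the inter-chain couplings and anisotropy discussed above decide the actual 3D order, not this purely 1D classical criterion.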
Knowledge of realistic values is also helpful for the understanding of related "frustrated ferromagnets" \cite{Dmitriev2008,Plekhanov2009} such as Ca$_2$Y$_2$Cu$_5$O$_{10}$ \cite{Matsuda2005} and La$_6$Ca$_8$Cu$_{24}$O$_{41}$ \cite{Matsuda1996} with FM in-chain ordering, as well as LiVCuO$_4$ and LiCu$_2$O$_2$ with helimagnetism and multiferroicity, all being of considerable current interest. A previous inelastic neutron scattering (INS) study aimed at determining the exchange integrals was not conclusive \cite{Boehm1998}. It revealed an anomalous low-lying branch of hardly dispersive and overdamped spin excitations in the chain direction. Its linear spin-wave (LSW) analysis results in unrealistically small in-chain exchange integrals. The missing, but expected, dispersive quasi-1D spin-chain excitation remained a challenging puzzle for the community \cite{Weht1998,Mizuno1998,Mizuno1999}. In Sec.\ 3 we present new INS data which unambiguously show the presence of a strongly dispersive in-chain spin mode. The main exchange integrals are derived applying LSW-theory \cite{Oguchi1960}, and in Sec.\ 4 we compare our results with those of related chain cuprates as well as with predictions of band structure and cluster calculations. A criticism of improper \textsc{Curie-Weiss} analyses of spin susceptibility data is provided and consequences for the direct FM Cu-O exchange parameter $K_{pd}$ entering extended \textsc{Hubbard}{} models are discussed. \section{{\bf 2. EXPERIMENTAL\/}} A single crystal of $^7$Li$_2$CuO$_2${} was grown for the INS experiments by the travelling solvent floating zone technique under high pressure \cite{Behr2008}. In order to avoid vaporization of Li$_2$O during growth a 4:1 Ar:O$_2$ atmosphere at $50\,\ab{bar}$ was chosen. A fast growth rate of $10\,\ab{mm/h}$ inhibits the growth of impurity phases. Isotope-enriched $^7$Li was employed to avoid the significant neutron absorption of $^6$Li.
The sample was characterized by X-ray powder diffraction, polarized light microscopy, magnetization and specific heat measurements. By X-ray powder diffraction no impurity phase was found. The macroscopic magnetization and specific heat data of the sample agree with literature data, i.e.\ AFM order is found below $T_\mathrm{N} = 9.2\,\mathrm{K}$ and a weak FM component evolves below $T_2 \approx 3\,\mathrm{K}$ \cite{Ortega1998,Staub2000,Chung2003}. INS experiments were performed with thermal and cold neutrons at the three-axis spectrometers IN8 and IN12 at the ILL, Grenoble, France. Four single crystals with a total mass of 3.8\,g were mounted together in the $(0$ $K$ $L)$ scattering plane with a resulting sample mosaicity of $3^\circ$. For both instruments, a focusing PG(002) monochromator and analyzer were utilized. The measurements at IN8 were taken with fixed final momentum $k_\ab{f}=2.662\,\mathrm{\AA}^{-1}$ and a PG-filter on $k_\ab{f}$. IN12 was configured with $k_\ab{f}=1.5\,\mathrm{\AA}^{-1}$ and a Be-filter on $k_\ab{f}$. Most scans were performed in the CC-AFM-FM phase at $T=4.1\,\mathrm{K}$, well above another, not yet well understood, magnetic phase below $T_{\rm 2}$. In any case, the observed changes of the INS spectra in this low-$T$ phase (not shown here) are weak. \section{{\bf 3. RESULTS AND DATA ANALYSIS}} The results of our INS studies are summarized in \figref{fig::INS}. Representative spectra of constant energy scans as taken at IN12 and IN8 for momentum transfer along the chains ($b^*$) are displayed in \figref{fig::INS}(a). The main result is the observation of a highly dispersive excitation which is strong at the magnetic zone center and weakens significantly at higher energies. With the chosen experimental setup the magnetic branch could be traced up to energy transfers of $25\,\mathrm{meV}$. The measured data points along $(0$ $K$ $1)$ taken at IN12 with cold neutrons are summarized in the color map of \figref{fig::INS}(b).
Note that the reflections are periodic with the magnetic unit cell and that their strongly reduced intensity above $T_\mathrm{N}$, observed up to energy transfers of $15\,\mathrm{meV}$, confirms their magnetic nature (see Fig.\ 2 (c)). At the magnetic zone center $(0$ $0$ $1)$, a gap of $\Delta = 1.36\,\mathrm{meV}$ is observed. The excitations for momentum transfer along $(0$ $0$ $1+L)$ are only weakly dispersive, in agreement with the results of Ref.\ \cite{Boehm1998}. Respective constant $\mathbf{q}$-scans are shown in \figref{fig::INS}(c). Note that the mosaicity of the sample broadens the excitations along $L$, which is less pronounced for momentum transfer along $K$ due to the longer $b^*$-axis. As shown in \figref{fig::INS}(b) we observe further inelastic features for momentum transfer along the chain. In addition, the data in \figref{fig::INS}(b) also exhibit weak and presumably incommensurate (IC) magnetic scattering below the magnon gap energy. The origin of these low-energy excitations is not yet clear and will be addressed in future studies. Furthermore, there is a continuous feature appearing at double the energy of the anisotropy gap which is possibly attributed to two-magnon scattering. We also mention that we have observed low-lying and strongly broadened excitations along $b^*$, similarly as in Ref.\ \cite{Boehm1998}. However, the intensity of these excitations was roughly two orders of magnitude weaker than that reported ibidem. According to ESR measurements \cite{Ohta1993} the exchange interactions show a uniaxial anisotropy with the easy-axis directed along the crystallographic $a$-axis.
We describe the corresponding Cu moments by the spin-\textsc{Hamilton}{}ian \begin{equation} \hat{H}=\frac{1}{2}\sum_{\mathbf{m},\mathbf{r}} \left[J_{\mathbf{r}}^{z}\hat{S}_{\mathbf{m}}^{z}\hat{S}_{\mathbf{m} +\mathbf{r}}^{z}+J_{\mathbf{r}}^{xy} \hat{S}_{\mathbf{m}}^{+}\hat{S}_{\mathbf{m}+\mathbf{r}}^{-} \right]\label{eq:H} \end{equation} where $\mathbf{m}$ enumerates the sites in the magnetic (Cu) lattice and the vector $\mathbf{r}$ connects sites with an exchange coupling $J_{\bf r}$ \cite{remarkboehm}. The $z$-axis is taken along the easy-axis, i.e.\ the $a$-axis. Within LSW-theory \cite{Oguchi1960}, the dispersion law reads \begin{eqnarray} \omega_{\mathbf{q}} & = & \sqrt{\left(J_{\mathbf{q}}^{xy}-J_{\mathbf{0}}^{xy}+\tilde{J}_{\mathbf{0}}^{xy} -D\right)^{2}-\left(\tilde{J}_{\mathbf{q}}^{xy}\right)^{2}},\label{eq:wq}\end{eqnarray} where $J_{\mathbf{q}}\equiv(1/2)\sum_{\mathbf{r}}J_{\mathbf{r}}\exp\left(\imath\mathbf{qr}\right)$ is the Fourier transform of the in-chain exchange integrals, and analogously for the inter-chain integrals $\tilde{J}_{\mathbf{q}}$. The exchange anisotropy $D\equiv J_{0}^{z}-J_{0}^{xy}-\tilde{J}_{0}^{z}+\tilde{J}^{xy}_0$ causes the above-mentioned spin gap $\Delta =\omega_{\mathbf{0}}$ in our case (see Fig.\ 3). The relation reads \begin{equation} \Delta=\sqrt{D\left(D-2\tilde{J}_{0}^{xy}\right)},\quad \mbox {or} \quad D=\tilde{J}_{\mathbf{0}}^{xy}-\sqrt{\left(\tilde{J}_{\mathbf{0}}^{xy}\right)^{2}+\Delta^{2}} \ . \label{eq:D} \end{equation} In the summations over $\mathbf{r}$ we retain only the leading terms (see Fig.\ 1).
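The gap relation of Eq.\ (\ref{eq:D}) and the size of the in-chain dispersion implied by Eq.\ (\ref{eq:wq}) for these retained leading terms can be checked numerically. The following minimal Python sketch is not part of the derivation: it assumes an inter-chain coordination number $z=8$ (i.e.\ $\tilde{J}_{\mathbf{0}}^{xy}=4\tilde{J}_2$) and uses the fitted couplings quoted later in Table 1.

```python
import math

# Numerical check of Eqs. (eq:wq)/(eq:D), a sketch using the fitted values
# from Table 1 (all couplings in kelvin).  The inter-chain coordination
# number z = 8 and J~_0^xy = z*J~_2/2 are our assumptions.
J1, J2, J3 = -228.0, 76.0, 3.8       # in-chain couplings
Jt2, D = 9.04, -3.29                  # inter-chain coupling and anisotropy
Jt0 = 0.5 * 8 * Jt2                   # J~_0^xy = 36.16 K

def omega(qb):
    """LSW dispersion along the chain (q_a = q_c = 0), in kelvin."""
    Jq = J1*math.cos(qb) + J2*math.cos(2*qb) + J3*math.cos(3*qb)
    J0 = J1 + J2 + J3
    Jtq = Jt0 * math.cos(1.5*qb)      # 8 neighbours at (+-a +-3b +-c)/2
    return math.sqrt((Jq - J0 + Jt0 - D)**2 - Jtq**2)

gap_meV = omega(0.0) * 0.0861733      # zone-center gap Delta, K -> meV
W = omega(math.pi) - omega(0.0)       # total in-chain bandwidth (K)
print(round(gap_meV, 2))              # -> 1.36, the observed spin gap
print(round(W / (2*abs(J1)), 2))      # -> 1.04, i.e. W is close to 2|J1|
```

With these inputs the zone-center gap reproduces the observed $\Delta=1.36$~meV, and the in-chain bandwidth comes out within a few percent of $2\mid J_1\mid$; note that $J_2$ drops out of the bandwidth since $\cos(2\pi)=1$.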
According to LSDA+$U$ based magnetic structure calculations they are given by the following in-chain integrals: $J_{1},\: J_{2},\: J_{3}$ (corresponding to $\mathbf{r}=\mathbf{b},\:2\mathbf{b},\:3\mathbf{b}$, respectively) and inter-chain integrals: $\tilde{J}_{111},\:\tilde{J}_{131}$ (corresponding to $\mathbf{r}_{111} =\left(\mathbf{a}+\mathbf{b}+\mathbf{c}\right)/2$, $\mathbf{r}_{131}=\left(\mathbf{a}+3\mathbf{b}+\mathbf{c}\right)/2$). For $q_aa=\pi$ or $q_cc=\pi$ the inter-chain dispersion caused by $\tilde{J}_{\mathbf{q}}^{xy}$ vanishes and Eqn.\ (\ref{eq:wq}) simplifies to \begin{equation} \omega_{\mathbf{q}} = J_{\mathbf{q}}^{xy}-J_{\mathbf{0}}^{xy}+\tilde{J}_{\mathbf{0}}^{xy} -D=J_{\mathbf{q}}^{xy}-J_{\mathbf{0}}^{xy}+\Delta_1 , \end{equation} i.e.\ the single-chain dispersion can be read off {\it directly}, simply by subtracting the effective gap $\Delta_1$. Our INS data are well fitted by \textsc{Gauss}ian distributions. The maxima of the main branch of the spectrum were analyzed within LSW-theory \eqnref{eq:wq}. The inspection of Fig.\ 3 \begin{figure*}[t] \onefigure[width=.83\textwidth]{Fig3lorenz.eps} \caption{(Color online) (a) Constant energy scans for the data points shown in part (b) (right panel) near the minimum. (b,c) INS data points and LSW fits (red line) of the magnon dispersion, (b): along $\vec{q} =$(0 $K$ 1), (0 0 $1+L$), (0 $K$ 1.5) for $E$-transfer below $5\,\mathrm{meV}$, compared with Ref.\ \cite{Boehm1998} (black dotted line); (c): along $Q=(0$~$K$~$0)$ up to high $E$-transfer. Solid red line: our fit (see Tab.\ 1). For comparison the LSW dispersions predicted in Ref.\ \cite{Mizuno1999} (green dashed-dotted and dashed double dotted lines), by our L(S)DA+$U$ (blue dashed line) calculation and by the five-band \textsc{Hubbard} model (blue dotted line), both with the added experimental spin gap, are shown, too.
} \label{fig::DispersionHigh} \end{figure*} reveals a strong in-chain dispersion and a much weaker one in the perpendicular $c$ and $a$ (not shown here) directions. The results of the fit are given in \tabref{tab::ExchangeParameters}. The total width $W=\omega_{\pi/b}-\Delta= 2\mid J_1+J_3+J_5+\cdots-0.5\left( \tilde{J}^{xy}_0-D \right) \mid -\Delta \approx 2\mid J_1\mid $, i.e.\ the in-chain dispersion yields a direct measure of the NN coupling since $W$ is {\it un}affected by $J_2, J_4, J_6 \cdots$. Supposing a monotonic behaviour of $\omega_q$ up to the zone boundary at $Q=0.5$ (corresponding to $q=\pi/b$), from the measured part of the full $\omega_q$-curve one already obtains a rigorous lower bound $\mid J_1\mid \stackrel{>}{\sim}$ 150~K. Our analysis shows that the in-chain NN interaction $J_1$ is strongly FM, but frustrated by an AFM NNN-coupling $J_2$ which affects the shape of $\omega_q$. In comparison, the AFM inter-chain coupling is weak, clearly demonstrating the magnetically quasi-1D character of the compound. For the in-chain coupling we find $\alpha = -J_2/J_1 = 0.33$, {\it unambiguously above} the critical ratio $\alpha_{\ab{crit}}=1/4$ for an isotropic \textsc{Heisenberg}-chain \cite{Bursill1995}. Note that the dispersion near the zone center behaves like $\omega(q)\propto q^4$, to be discussed below. We also confirm the theoretical predictions \cite{deGraaf2002,Xiang2007} (see also Tab.\ 1 for our results) that the main inter-chain coupling is indeed the NNN coupling along $(\frac{a}{2},\frac{3b}{2},\frac{c}{2})$. The NN inter-chain exchange has a negligible effect on the dispersion in the (0 $K$ $L$)-plane. Although further couplings cannot be accessed from our fits, the main $J$ values given here do not change much if the INS data are analyzed in more complex models with additional exchange paths, especially $J_3$ and $\tilde{J}_1$.
When taken into account, both remain small ($\sim 4$~K and 1~K, respectively), in full accord with L(S)DA+$U$ (see Tab.\ 1). \section{{\bf 4. DISCUSSION\/}} The actual CC-AFM-FM ordering seemingly contradicts the IC "spiral" phase expected for a frustration ratio of $\alpha \approx 0.33$ in a 1D approach (in the sense of the wave vector $q_0\neq 0,\pi/b$ where the magnetic structure factor $S(q)$ becomes maximal \cite{Bursill1995}). Hence, the obtained relatively small but frustrated AFM inter-chain coupling may hinder the spiral formation. Thus, the 3D critical point is shifted upward compared with $\alpha_c^{1D}$: \begin{equation} \alpha_c^{\rm \tiny 3D,iso}=\alpha_c^{1D}\left(1+\beta_1 +9\beta_2 +25\beta_3 + \cdots \right) \ , \label{interchain} \end{equation} where $\beta_n= -\tilde{J}_n/J_1$. Eqn.\ (\ref{interchain}) has been derived in the isotropic (iso) case \cite{Drechsler2005}. Ignoring all other very weak inter-chain couplings $\tilde{J}_n$ we arrive with our results $\tilde{J}_2=9.04$~K and $J_1=-228$~K at $\alpha^{\rm \tiny 3D, iso}_c=0.339$. Anisotropies (aniso), as found here, further stabilize the CC-AFM-FM state. From Eqns.\ (2,3) we finally estimate $\alpha_c^{\rm \tiny 3D,aniso}\approx 0.39$. The combined effect of AFM inter-chain coupling and easy-axis anisotropy is also responsible for the anomalous $q^4$-dependence of the spin excitations mentioned above. In the limit $q_bb \rightarrow 0$ we expand Eqn.\ (2) and obtain \begin{equation} \omega(q)\approx\Delta +A_{\Gamma}(q_bb)^2+B_{\Gamma}(q_bb)^4, \end{equation} \begin{table*}[] \caption{The fitted exchange integrals (in K) as determined from the INS data using Eqns.\ (2,3), compared with microscopic theory (see text) and other recent theoretical results. Values in parentheses are estimates from less accurate fits.
} \label{tab::ExchangeParameters} \begin{center} \begin{tabular}{r|c|c|c|c|c|c|c} \hline\hline & $J_1$ & $\alpha$ & $J_2=-\alpha\cdot J_1$ & $J_3$ & $\tilde{J}_2$ & $D$ & $\tilde{J_1}$\\ INS / present work & $-228\pm 5$ & $0.332\pm 0.005$ & $76\pm 2$ & (3.8) & $9.04\pm 0.05$ & $-3.29\pm 0.2$ & (1) \\ \hline 3$d$O2$p$ / present work\cite{remarkmapping}: & $-218$ & $0.30$ & $66$ & $-0.4$ & $-$ & $-$&$-$\\ 3$d$O2$p$ \cite{Malek2008}: & $-143$ & $0.23$ & $33$ & $-1$ &$-$ & $-$ & $-$\\ 3$d$O2$p$ \cite{Mizuno1998}: & $-103$ & $0.47$ & $49$ & $-2$ & $-$& $-$ & $-$\\ two-chain phenomenol. \cite{Mizuno1999}: & $-100$ & $0.40$ & $40$ & $-$ & 16& $-$ & 16\\ LSDA+$U$, $U=$~6 eV \cite{Drechsler2009}: & $-216 \pm 2$ & 0.31 & $66\pm 2$ & $5\pm 2$ &$13\pm 2$& $-$ & 0$\pm 2$\\ GGA+$U$, $U=$~6 eV \cite{Xiang2007}: & $-171$ & 0.60 & 98 & $-$ & 18& $-$ & 0.23\\ RFPLO, LAPW+SO \cite{Mertz2005}: & $-$ & $-$ & $-$ & $-$ & $-$& $-15.6 $ & $-$\\ \hline\hline \end{tabular} \end{center} \end{table*} $A_{\Gamma}$ and the quadratic dispersion vanish exactly at \begin{equation} \alpha^q_0=\frac{1}{4}\left(1+\frac{9\beta_2}{\delta}\right) = 0.33098, \quad \delta =1-\frac{D}{4\tilde{J}_2}. \end{equation} Li$_2$CuO$_2$\ happens to lie very close to this point, and its dispersion near the zone center is {\it quasi-quartic}. But in the presence of a spin gap $\Delta$ caused by the anisotropy $D$, this vanishing of the quadratic dispersion does not yet signal an instability of the CC-AFM-FM state. Near the Z-point $\left(0,0,\pi/c \right)$ the quadratic coefficient $A_Z$ is already essentially negative (see Fig.\ 3 (b, right panel)). Since along the line $Z-R(0,\pi/b,\pi/c)$ the inter-chain dispersion vanishes, one can easily read off the 1D Fourier components of the exchange interactions $J^{xy}_{\mathbf{q}}$ (see Eqn.\ (4)). A similar rare situation occurs in the 2D frustrated Cs$_2$CuCl$_4$ system in a high magnetic field above its saturation limit \cite{coldea2002}.
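Both critical ratios can be reproduced from the fitted couplings in a few lines. The sketch below evaluates Eqn.\ (\ref{interchain}) (retaining $\beta_2$ only) and the condition for $\alpha^q_0$ above, using the Table 1 values; the tiny offset from the quoted 0.33098 presumably reflects rounding of the fit parameters.

```python
# Hedged numeric check of the 3D critical point and of the point where the
# quadratic dispersion vanishes, using the fitted Table 1 values (in K).
J1, J2, Jt2, D = -228.0, 76.0, 9.04, -3.29
alpha  = -J2 / J1                        # fitted frustration ratio ~ 0.333
beta2  = -Jt2 / J1
alpha_c_3D = 0.25 * (1 + 9*beta2)        # Eqn. (interchain), isotropic case
delta  = 1 - D / (4*Jt2)
alpha_q0 = 0.25 * (1 + 9*beta2/delta)    # quadratic coefficient A_Gamma = 0
print(round(alpha_c_3D, 3))              # -> 0.339, as quoted in the text
print(round(alpha_q0, 3))                # -> 0.332, close to the fitted alpha
```

The fitted $\alpha=0.332$ indeed lies just below $\alpha_c^{\rm 3D,iso}$ and essentially on top of $\alpha^q_0$, which is the quantitative content of the "quasi-quartic" statement above.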
The clearly visible minima at $q_{b,0}b=\cos^{-1}\left[1/(4\alpha) \right]\approx\pm 0.72$~rad $=\pm 0.11$~(r.l.u.) correspond to the two equivalent propagation vectors of a low-lying spiral excitation \cite{Kuzian2007} with a pitch angle of about 41.2$^{\circ}$ above a CC-AFM-FM ground state, observed, to the best of our knowledge, for the first time. Next, we briefly compare our results with those obtained so far by INS studies for Ca$_2$Y$_2$Cu$_5$O$_{10}$ (CYCO) \cite{Matsuda2005}, which shows a similar FM in-chain ordering and a frustrating AFM inter-chain interaction. There the reported $J_1$ read $-80$~K and $-93$~K for fits where $J_2=0$ and $J_2= 4.6$~K, respectively. However, such tiny values of $J_2$ are unlikely \cite{Mizuno1998,Mizuno1999,Malek2008}. For a standard Cu-O hybridization a much larger value is expected \cite{Kuzian2009}, in accord with the observed sizable part of the total magnetic moment (22\%) residing at O \cite{Matsuda2002}. Re-fitting their INS data yields $J_1$, $J_2$ values of the same order as we found for Li$_2$CuO$_2$ \ (LCO) ($J_1^{\tiny \rm CYCO}\sim J_1^{\tiny \rm LCO}$ and $J_2^{\tiny \rm CYCO}\sim 0.5J_{2}^{\tiny \rm LCO}$). A detailed comparison of both systems will be given elsewhere \cite{Kuzian2009}. In view of these large $J_1$ values the question may arise why they have not been recognized so far in analyses of thermodynamic properties. In this context a critical evaluation of the reported AFM "\textsc{Curie-Weiss}" (CW) temperatures $\Theta_{\tiny \rm CW}^{\tiny \rm CYCO}\approx -15$~K \cite{Yamaguchi1999} or small FM values: 5-10~K \cite{Kudo2005}, and $\Theta_{\tiny \rm CW}^{\tiny \rm LCO}\approx -40$~K \cite{Sapina1990,Boehm1998} or $-8$~K \cite{Ebisu1998}, is very instructive.
All these data have been derived from linear fits \begin{figure}[!b] \begin{center} \includegraphics[width=0.29\textwidth,angle=-90]{Fig4lorenz.ps} \end{center} \caption{(Color online) The inverse spin susceptibility as measured ($\circ$) vs.\ the high-$T$ series expansion in 1$^{st}$ (correct CW-law) and 2$^{nd}$ order. Dashed and dotted lines: "pseudo"-CW-laws. } \label{fig::InvChi} \end{figure} of inverse susceptibility plots below 300--400~K (of the type denoted as "pseudo"-CW-lines in Fig.\ 4). But here we estimate $$\Theta_{\tiny \rm CW}\approx \frac{1}{2}\left[\mid J_1\mid-J_2- \frac{z_{\rm int-ch}}{2}\left(\tilde{J}_1+\tilde{J}_{2} \right) \right] > +54 \ \mbox{K},$$ where the inter-chain coordination number $z_{\rm int-ch}=4,8$ for CYCO and LCO, respectively. Thus, we arrive at about $ 60$~K and 58$\pm 4$~K, respectively. The inspection of \figref{fig::InvChi} clearly shows that very high temperatures \cite{remarkCW}, {\it far above} any available data, would be required to extract $\Theta_{\tiny \rm CW}$ from $1/\chi(T)$-data. To reproduce the incorrect $\Theta_{\tiny \rm CW}$-values, strongly underestimated $J_1$-values have been adopted in Refs.\ \cite{Yamaguchi1999,Kudo2005,Ebisu1998,Boehm1998,Mizuno1998}. Even the 2$^{nd}$ order of the high-$T$ series expansion (HTS) approaches the experimental curve only above 400~K. Hence, any attempt to detect even a plain \textsc{Curie} law \cite{Mizuno1998} near 300~K must fail all the more. Our $J_1$-values for LCO and CYCO from INS data provide support for the large value we found in Li$_2$ZrCuO$_4$ from thermodynamic properties \cite{Drechsler2007PRL}, but render puzzling the tiny value reported for LiVCuO$_4$ \cite{Enderle2005} with a similar Cu-O-Cu bond angle. Finally, we turn to a microscopic analysis. In Tab.\ 1 and Fig.\ 3 we compare our INS-derived exchange integrals with theoretical results.
First, we list the in-chain couplings $J_n$ as obtained from the mapping of a five-band extended \textsc{Hubbard} $pd$ model (on open-chain Cu$_n$O$_{2n+2}$ clusters, $n=5,6$) \cite{remarkmapping} onto a corresponding $J_1$-$J_2$-$J_3$-\textsc{Heisenberg}{} model (see Fig.\ 1). Here, to reproduce the main experimental exchange integrals, a refinement has been performed, most importantly by considering a larger direct FM Cu-O exchange $K_{pd}=81$~meV compared with the 50~meV adopted in Ref.\ \cite{Mizuno1999}. We note that practically only $J_1$ is significantly affected by $K_{pd}$. Thereby $\mid J_1 \mid \propto K_{pd}$ holds approximately. Notice that the contribution of $K_{pd}$ is much more important for the large negative (FM) value of $J_1$ than that of the intra-atomic FM Hund's rule coupling on O. Since the available spectroscopic data at 300~K depend only weakly on $K_{pd}$, not much is known about its magnitude. In the past $K_{pd}$ has mostly been used as a fitting parameter for thermodynamic properties, with values ranging from 50 to 110~meV for CuGeO$_3$ \cite{Mizuno1998,Braden1996}. The INS data reported here provide a unique way to restrict its value phenomenologically and open the door to systematic studies of this very important interaction and to well-founded comparisons with other edge-shared CuO$_2$ chain compounds. Secondly, in the LSDA+$U$ approach there is practically only one adjustable parameter, $U_d-J_H$, where $U_d$ denotes the Coulomb onsite repulsion (between 6~and 10~eV) and $J_H$ the intra-atomic exchange ($ \approx 1$~eV), both on Cu sites. Comparing the total energies of various ordered magnetic states, a set of in-chain and inter-chain integrals can be derived \cite{Xiang2007}. As a result one arrives again at numbers very close to our INS-derived set \cite{Drechsler2009}. Notably, both exact diagonalization (ED) as well as LSDA+$U$ provide a justification to neglect any long-range exchange beyond the third NN.
The latter also explains why there is only one important inter-chain exchange integral $\tilde{J}_2$ ($\beta_2$). The excellent agreement between the INS data analyzed within simple LSW-theory and the theoretical results/predictions suggests that in the present case the effects of quantum fluctuations as well as of the spin-phonon interaction are rather weak. The former point is also supported by the relatively large value of the magnetic moment $m\approx 0.96\mu_{\tiny \rm B}$ \cite{Sapina1990,Chung2003} in the ordered state below $T_N$, and is a consequence of the fact that the FM state is an eigenstate of the 1D spin-model, in contrast to the \textsc{N\'eel} state. Concerning the value of the anisotropy, there is no good agreement between contemporary DFT calculations \cite{Mertz2005} and the much smaller values obtained in various experiments \cite{Ohta1993,Boehm1998}, including our data (see Tab.\ 1). \section{\bf 5.\ SUMMARY} The main results of our INS study are (i) the relatively large dispersion of spin excitations in the CuO$_2$ chains of Li$_2$CuO$_2$ \ due to the large value of the FM NN in-chain coupling $J_1$ and (ii) the observation of a low-energy spiral excitation above a commensurate collinear \textsc{N\'eel} ground state in the vicinity of the 3D critical point, which is upshifted relative to the corresponding 1D point. The obtained main exchange integrals can be approximately reproduced adopting an enhanced value for the direct FM exchange $K_{pd}$ between Cu 3$d$ and O 2$p$ states within an extended five-band \textsc{Hubbard}{}-model. Further support for the empirical exchange integrals comes from L(S)DA+$U$ calculations, if a moderate value of $U$, somewhat smaller than the $U_d$ in exact diagonalization for the extended \textsc{Hubbard}{}-model, is employed.
The achieved detailed knowledge of the main exchange couplings derived from the INS data provides a good starting point for an improved general theoretical description of other CuO$_2$-chain systems and for addressing a microscopic theory of their exchange anisotropy. \acknowledgments \noindent We thank the DFG [grants KL1824/2 (BB, RK \& WEAL), DR269/3-1 (S-LD \& JM), \& the E.-Noether-progr.\ (HR)], the progr.\ PICS [contr.\ CNRS 4767, NASU 243 (ROK)], and ASCR(AVOZ10100520) (JM) for financial support as well as M.\ Boehm, M.\ Matsuda, A.\ Boris, H.\ Eschrig, V.Ya.\ Krivnov, D.\ Dmitriev, S.\ Nishimoto, E.\ Plekhanov and J.\ Richter for valuable discussions.
\section{Formalism} The width difference between the mass eigenstates is given by~\cite{Beneke:1996gn} \begin{equation} \label{dgdef} \Delta \Gamma_{B_s}\equiv\Gamma_L-\Gamma_H = -2\Gamma_{12}=-2\Gamma_{21}, \end{equation} where $\Gamma_{ij}$ are the elements of the decay-width matrix, $i,j=1,2$ ($|1\rangle=|B_s\rangle$, $|2\rangle=|\overline{B_s}\rangle$). Using the optical theorem, the off-diagonal elements of the decay-width matrix can be related to the imaginary part of the forward scattering amplitude: \begin{eqnarray} \label{rate} \Gamma_{21}(B_s)&=&\frac{1}{2 M_{B_s}}\langle \overline{B}_s |{\cal T} | B_s \rangle,\quad\\ \label{tdef}{\cal T} &=& {\mbox{Im}}~ i \int d^4 x T \left\{ H_{\mbox{\scriptsize eff}}(x) H_{\mbox{\scriptsize eff}}(0) \right \}, \end{eqnarray} where ${\cal H}_{eff}$ is the effective weak Hamiltonian, defined as follows: \begin{equation}\label{hpeng} {\cal H}_{eff}=\frac{G_F}{\sqrt{2}}V^*_{cb}V^{\phantom{*}}_{cs} \left(\,\sum^6_{r=1} C_r Q_r + C_8 Q_8\right), \end{equation} with the four-quark operators defined in the following way: \begin{eqnarray}\label{q1q2} Q_1&=& (\bar b_ic_j)_{V-A}(\bar c_js_i)_{V-A},\\ Q_2&=& (\bar b_ic_i)_{V-A}(\bar c_js_j)_{V-A}, \\ \label{q3q4} Q_3&=& (\bar b_is_i)_{V-A}(\bar q_jq_j)_{V-A},\\ Q_4&=& (\bar b_is_j)_{V-A}(\bar q_jq_i)_{V-A},\\ \label{q5q6} Q_5&=& (\bar b_is_i)_{V-A}(\bar q_jq_j)_{V+A},\\ Q_6&=&(\bar b_is_j)_{V-A}(\bar q_jq_i)_{V+A},\\ \label{q8} Q_8&=& \frac{g}{8\pi^2}m_b\, \bar b_i\sigma^{\mu\nu}(1-\gamma_5)T^a_{ij} s_j\, G^a_{\mu\nu}. \end{eqnarray} In the heavy-quark limit the energy release is large and the process is dominated by short-distance physics. An operator product expansion can then be constructed, which results in a series of operators suppressed by powers of $1/m_b$: \begin{eqnarray} \label{expan} \Gamma(B_s)_{21}&=&\frac{1}{2 M_{B_s}} \sum_k \langle B_s |{\cal T}_k | B_s \rangle\nonumber\\ &=&\sum_{k} \frac{C_k(\mu)}{m_b^{k}} \langle B_s |{\cal O}_k^{\Delta B=2}(\mu) | B_s \rangle.
\end{eqnarray} The most recent calculations of the $B_s$ lifetime difference \cite{Beneke:1996gn} and of the QCD corrections to $\Delta\Gamma_s$ \cite{Beneke:1998sy, Ciuchini:2003ww} do not provide a definitive theoretical prediction of its value. Heavy quark expansion corrections of order $1/m_b$ appear to be about $25\%$ of the leading order, and QCD corrections are as large as $30\%$.\footnote{It was proposed in Ref.\ \cite{Lenz:2006hd} that the four-quark operators governing this interaction can be redefined in such a way that the corrections become small.} We compute the $1/m_b^2$ corrections in the heavy quark expansion to directly check the convergence of this series. In other words, we compute the matching coefficients of an effective $\Delta B=2$ Lagrangian. The computation of the matrix elements of most of these operators is a rather difficult task due to the lack of results from lattice QCD and light-cone QCD calculations. We use a factorization approach to estimate the matrix elements of such operators. Expanding the operator product (\ref{tdef}) for small $x\sim 1/m_b$, the transition operator ${\cal T}$ can be written, to leading order in the $1/m_b$ expansion, as~\cite{Beneke:1996gn,Beneke:1998sy} \begin{equation}\label{tfq} {\cal T}=-\frac{G^2_F m^2_b}{12\pi}(V^*_{cb}V^{\phantom{*}}_{cs})^2 \, \left[ F(z) Q(\mu_2)+ F_S(z) Q_S(\mu_2) \right], \end{equation} which results in~\cite{Ciuchini:2003ww} \begin{eqnarray}\label{tres} &&\Gamma_{21}(B_s)= -\frac{G^2_F m^2_b}{12\pi (2 M_{B_s})}(V^*_{cb}V^{\phantom{*}}_{cs})^2 \sqrt{1-4z}\times \nonumber\\ &\times&\left\{\left[(1-z)\,\left(2\, C_1 C_2+N_c C^2_2\right)+(1-4z) C^2_1/2 \right]\right. \langle Q\rangle\nonumber\\ &+& \left.(1+2z)\left(2\, C_1 C_2+N_c C^2_2-C^2_1\right) \langle Q_S \rangle \right\}, \end{eqnarray} where $z=m_c^2/m_b^2$ and the $\Delta B=2$ operators are as follows: \begin{eqnarray}\label{qqs} Q &=& (\bar b_is_i)_{V-A}(\bar b_js_j)_{V-A},\nonumber\\ Q_S&=& (\bar b_is_i)_{S-P}(\bar b_js_j)_{S-P} ~.
\end{eqnarray} The color-rearranged operators $\tilde Q=(\bar b_is_j)_{V-A}(\bar b_js_i)_{V-A}$ and $\tilde Q_S= (\bar b_is_j)_{S-P}(\bar b_js_i)_{S-P}$ that appear during the calculation were eliminated using Fierz identities and the equations of motion. The Wilson coefficients $F$ and $F_S$ are obtained by computing the matrix elements of ${\cal T}$ in (\ref{tdef}) between quark states. The coefficients in the transition operator (\ref{tfq}) at next-to-leading order, still neglecting the penguin sector, can be written as~\cite{Beneke:1998sy}: \begin{eqnarray}\label{fz} F(z)&=&F_{11}(z) C^2_2(\mu_1)+ F_{12}(z) C_1(\mu_1) C_2(\mu_1)+\nonumber\\ &+&F_{22}(z) C^2_1(\mu_1), \end{eqnarray} \begin{equation}\label{fij} F_{ij}(z)=F^{(0)}_{ij}(z) +\frac{\alpha_s(\mu_1)}{4\pi}F^{(1)}_{ij}(z), \end{equation} and $F_S(z)$ has a similar structure. The leading order functions $F^{(0)}_{ij}$, $F^{(0)}_{S,ij}$ read explicitly \begin{eqnarray} \label{f011} F^{(0)}_{11}(z)&=&3\sqrt{1-4z}(1-z),\nonumber\\ F^{(0)}_{S,11}(z)&=&3\sqrt{1-4z}(1+2z),\\ \label{f012} F^{(0)}_{12}(z)&=&2\sqrt{1-4z}(1-z),\nonumber\\ F^{(0)}_{S,12}(z)&=&2\sqrt{1-4z}(1+2z),\\ \label{f022} F^{(0)}_{22}(z)&=&\frac{1}{2}(1-4z)^{3/2},\nonumber\\ F^{(0)}_{S,22}(z)&=&-\sqrt{1-4z}(1+2z). \end{eqnarray} The next-to-leading order (NLO) QCD expressions for $F^{(1)}_{ij}$, $F^{(1)}_{S,ij}$ and the corrections to Eq.~(\ref{tfq}) arising from penguin diagrams are given in Ref.~\cite{Beneke:1998sy}. \section{$1/m_b^n$ corrections} The general expression for the lifetime difference of $B_s$ mesons can be written in the following way: \begin{eqnarray}\label{corr} &&\Gamma_{21}(B_s) = -\frac{G^2_F m^2_b}{12\pi (2 M_{B_s})}(V^*_{cb}V^{\phantom{*}}_{cs})^2\times\nonumber\\ &\times&\,\left\{\left[F(z)+P(z)\right]Q+ \left[F_S(z)+P_S(z)\right]Q_S\right.\\ &+&\left.\delta_{1/m} + \delta_{1/m^2}\right\}\nonumber \end{eqnarray} where $\delta_{1/m}$ and $\delta_{1/m^2}$ denote the contributions from operators suppressed as $1/m_b$ and $1/m_b^2$, respectively.
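Before evaluating the corrections, the leading-order functions above can be recombined via Eq.~(\ref{fz}) and checked against the bracketed coefficients of $\langle Q\rangle$ and $\langle Q_S \rangle$ in Eq.~(\ref{tres}); the two forms must agree identically. The sketch below verifies this; the values of $z$, $C_1$, $C_2$ are purely illustrative placeholders, not fitted inputs.

```python
import math

# Consistency check: F(z) and F_S(z) built from the LO functions F^(0)_ij
# must equal the coefficient combinations multiplying <Q> and <Q_S> in
# Gamma_21.  z, C1, C2 are illustrative numbers (assumption), Nc = 3.
z, C1, C2, Nc = 0.085, -0.25, 1.08, 3
s = math.sqrt(1 - 4*z)
F11, F12, F22 = 3*s*(1 - z), 2*s*(1 - z), 0.5*(1 - 4*z)**1.5
FS11, FS12, FS22 = 3*s*(1 + 2*z), 2*s*(1 + 2*z), -s*(1 + 2*z)
F  = F11*C2**2 + F12*C1*C2 + F22*C1**2           # Eq. (fz) at LO
FS = FS11*C2**2 + FS12*C1*C2 + FS22*C1**2
Q_coeff  = s*((1 - z)*(2*C1*C2 + Nc*C2**2) + (1 - 4*z)*C1**2/2)
QS_coeff = s*(1 + 2*z)*(2*C1*C2 + Nc*C2**2 - C1**2)
print(abs(F - Q_coeff) < 1e-12, abs(FS - QS_coeff) < 1e-12)  # True True
```

The agreement is exact term by term (e.g.\ $F^{(0)}_{11}C_2^2 = N_c C_2^2 (1-z)\sqrt{1-4z}$ for $N_c=3$), so either form may be used below.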
These terms and their numerical values are computed below. The matrix elements of $Q$ and $Q_S$ can be parametrized in the following way~\cite{Beneke:1996gn,Beneke:1998sy,Ciuchini:2003ww} \begin{eqnarray} \label{meqs} \langle \overline B_s\vert Q\vert B_s\rangle &=& f^2_{B_s}M^2_{B_s}2\left(1+\frac{1}{N_c}\right)B, \\[0.1cm] \langle \overline B_s\vert Q_S \vert B_s\rangle &=& -f^2_{B_s}M^2_{B_s} \frac{M^2_{B_s}}{(m_b+m_s)^2}\left(2-\frac{1}{N_c}\right)B_S, \nonumber \end{eqnarray} \noindent where $M_{B_s}$ and $f_{B_s}$ are the mass and decay constant of the $B_s$ meson and $N_c$ is the number of colors. $B$ and $B_S$ are defined such that $B=B_S=1$ corresponds to the factorization (or `vacuum insertion') approach, which can provide a first estimate. Their numerical values are known from lattice QCD calculations. The $1/m_b$ corrections are obtained by expanding the amplitude of Eq.~(\ref{rate}) in the light-quark momentum and matching it onto four-quark operators that contain derivatives \cite{Beneke:1996gn, Ciuchini:2003ww}. The $\delta_{1/m}$ term can be written in the following form: \begin{eqnarray} \label{OurCorrection} \delta_{1/m} &=& \sqrt{1-4z}\left\{ (1 + 2z)\left[C^2_1\left(R_2 + 2 R_4\right)- \right.\right.\nonumber\\ &-&2\left.(2C_1 C_2+N_c C_2^2)\left(R_1+R_2\right)\right]\frac{z^2}{4}\nonumber \\ &-&\frac{12 z^2}{1 - 4z}\left[(2 C_1 C_2+N_c C^2_2)\left(R_2+2 R_3\right)\right.\nonumber\\ &+&\left.\left.
2C^2_1 R_3\right]\right\} \end{eqnarray} where additional operators that contain derivatives appear: \begin{eqnarray} R_1 &=& \dfrac{m_s}{m_b}\bar b_i \gamma^{\mu} (1-\gamma_5) s_i ~\bar b_j \gamma_{\mu} (1+\gamma_5) s_j\,,\nonumber\\ R_2 &=& \dfrac{1}{m^2_b}\bar b_i {\overleftarrow D}_{\!\rho} \gamma^\mu (1-\gamma_5)\overrightarrow{D}^\rho s_i ~\bar b_j\gamma_\mu(1-\gamma_5)s_j\,,\nonumber \\ R_3 &=& \dfrac{1}{m^2_b}\bar b_i{\overleftarrow D}_{\!\rho} (1-\gamma_5)\overrightarrow{D}^\rho s_i~\bar b_j(1-\gamma_5)s_j\,,\\ R_4 &=& \dfrac{1}{m_b}\bar b_i(1-\gamma_5)i\overrightarrow{D}_\mu s_i ~\bar b_j\gamma^\mu(1-\gamma_5)s_j\,. \nonumber\label{rrt1} \end{eqnarray} Their matrix elements are~\cite{Beneke:1996gn, Ciuchini:2003ww}: \begin{eqnarray} \label{corr1} \langle \overline B_s | R_1 | B_s \rangle &=& \left(2+\frac{1}{N_c}\right)\,\dfrac{m_s}{m_b}\, f_{B_s}^2 M_{B_s}^2\,B^{s}_1,\\ \langle \overline B_s | R_2 | B_s \rangle &=& \left(-1+\frac{1}{N_c}\right)\,f_{B_s}^2 M_{B_s}^2 \left( \dfrac{M_{B_s}^2}{m_b^2} - 1 \right)\,B^{s}_2, \nonumber\\ \langle \overline B_s | R_3 | B_s \rangle &=& \left(1+\frac{1}{2N_c}\right)\,f_{B_s}^2 M_{B_s}^2 \left( \dfrac{M_{B_s}^2}{m_b^2} - 1 \right)\,B^{s}_3,\nonumber\\ \langle \overline B_s | R_4 | B_s \rangle &=& -f_{B_s}^2 M_{B_s}^2 \left( \dfrac{M_{B_s}^2}{m_b^2} - 1 \right)\,B^{s}_4\,.\nonumber \end{eqnarray} Among these $B$-parameters, $B^{s}_1$ and $B^{s}_2$ are the most widely studied and well known in lattice and light-cone QCD. In this paper we use the results of Ref.~\cite{Becirevic:2001xt}. The remaining ``bag'' parameters are estimated using the vacuum insertion approximation. The color-rearranged operators $\widetilde{R}_i$ were eliminated using Fierz identities and the equations of motion, as in Eq.~(\ref{qqs}). As mentioned earlier, the $O(1/m_b)$ corrections are quite large \cite{Beneke:1996gn, Ciuchini:2003ww}. By computing the $O(1/m_b^2)$ terms we directly check the convergence of the $1/m_b$ expansion in the lifetime-difference calculation.
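For orientation, the factorized ($B=B_S=1$) matrix elements of Eq.~(\ref{meqs}) can be evaluated in a few lines. This is only a sketch: the numerical inputs $f_{B_s}$, $M_{B_s}$, $m_b$, $m_s$ below are illustrative values of our choosing, not the ones used in the final analysis.

```python
# Vacuum-insertion estimate of the Delta B = 2 matrix elements <Q>, <Q_S>
# of Eq. (meqs) with B = B_S = 1.  All inputs in GeV and illustrative
# (assumption): f_Bs ~ 0.23, M_Bs ~ 5.37, m_b ~ 4.2, m_s ~ 0.1.
fBs, MBs, mb, ms, Nc = 0.23, 5.3669, 4.2, 0.10, 3
Q_me  = fBs**2 * MBs**2 * 2*(1 + 1/Nc)                        # <Q>,   GeV^4
QS_me = -fBs**2 * MBs**2 * MBs**2/(mb + ms)**2 * (2 - 1/Nc)   # <Q_S>, GeV^4
print(round(Q_me, 2), round(QS_me, 2))   # -> 4.06 -3.96
```

The two matrix elements are comparable in magnitude and opposite in sign, which is why both terms of Eq.~(\ref{tres}) matter numerically for $\Delta\Gamma_{B_s}$.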
At this order additional operators contribute to $\Delta\Gamma(B_s)$. They are of two different types. The first class is obtained by further expansion of Eq.~(\ref{rate}); these are called kinetic corrections. The second type arises from the interaction of the intermediate quarks with the background gluon field. The kinetic corrections can be written as: \begin{eqnarray}\label{OurCorrection1} &&\delta_{1/m^2} = \dfrac{24 z^2}{(1-4 z)^2}(3-10z)\left[ C_1^2 W_3+\right.\nonumber\\ &+&\left.(2\, C_1 C_2 +N_c C_2^2) (W_3+W_2/2)\right]\nonumber \\ &-&\dfrac{12 z^2}{1-4 z}\dfrac{m_s^2}{m_b^2}\left[ C_1^2 Q_S-(2\, C_1 C_2 +N_c C_2^2) (Q_S+Q/2)\right]\nonumber \\ &+&\dfrac{24 z^2}{1-4 z}\left[ 2 C_1^2 W_4-2\, (2\, C_1 C_2 +N_c C_2^2) (W_1+W_2/2)\right]\nonumber \\ &-&(1-2 z)\dfrac{m_s^2}{m_b^2}(C_1^2+2\, C_1 C_2 +N_c C_2^2)Q_R. \end{eqnarray} The operators in Eq.~(\ref{OurCorrection1}) are defined as \begin{eqnarray} Q_R &=& (\bar b_is_i)_{S+P}(\bar b_js_j)_{S+P}\,,\nonumber \\ W_1 &=& \dfrac{m_s}{m_b} \bar b_i\overleftarrow{D}^{\alpha}(1-\gamma_5)\overrightarrow{D}_{\alpha}s_i~\bar b_j(1+\gamma_5)s_j\,,\nonumber \\ W_2 &=& \dfrac{1}{m_b^4} \bar b_i\overleftarrow{D}^{\alpha}\overleftarrow{D}^{\beta}\gamma^{\mu}(1-\gamma_5)\overrightarrow{D}_{\alpha}\overrightarrow{D}_{\beta}s_i ~\bar b_j\gamma_{\mu}(1-\gamma_5)s_j\,, \nonumber \\ W_3 &=& \dfrac{1}{m_b^4} \bar b_i\overleftarrow{D}^{\alpha}\overleftarrow{D}^{\beta}(1-\gamma_5)\overrightarrow{D}_{\alpha}\overrightarrow{D}_{\beta}s_i ~\bar b_j (1-\gamma_5) s_j\,, \nonumber \\ W_4 &=& \dfrac{1}{m_b^4} \bar b_i\overleftarrow{D}^{\alpha}(1-\gamma_5)i \overrightarrow{D}_{\mu}\overrightarrow{D}_{\alpha}s_i ~\bar b_j\gamma^{\mu}(1-\gamma_5)s_j\,, \end{eqnarray} where, as before, we have eliminated the color-rearranged operators $\widetilde{W}_i$ in favor of the operators $W_i$. In the absence of results from lattice and light-cone QCD, we parametrize the matrix elements of these operators as follows.
In the pure factorization approach all the bag parameters $\alpha_i$ should be set to 1: \begin{eqnarray} \label{corr2} \langle \overline B_s |Q_R| B_s \rangle &=& -f^2_{B_s}M^2_{B_s} \frac{M^2_{B_s}}{(m_b+m_s)^2}\alpha_1\,, \\ \langle \overline B_s |W_1| B_s \rangle &=& \left(1+\frac{1}{2 N_c}\right)\,f_{B_s}^2 M_{B_s}^2 \left( \dfrac{M_{B_s}^2}{m_b^2} - 1 \right)\alpha_2\,, \nonumber \\ \langle \overline B_s |W_2| B_s \rangle &=& \frac{1}{2}\left(-1+\frac{1}{N_c}\right)\,f_{B_s}^2 M_{B_s}^2 \left( \dfrac{M_{B_s}^2}{m_b^2} - 1 \right)^2\alpha_3\,, \nonumber \\ \langle \overline B_s |W_3| B_s \rangle &=& \frac{1}{2} \left(1+\frac{1}{2 N_c}\right)\,f_{B_s}^2 M_{B_s}^2 \left( \dfrac{M_{B_s}^2}{m_b^2} - 1 \right)^2\alpha_4\,, \nonumber \\ \langle \overline B_s |W_4| B_s \rangle &=& -\frac{1}{2}\,f_{B_s}^2 M_{B_s}^2 \left( \dfrac{M_{B_s}^2}{m_b^2} - 1 \right)^2\alpha_5\,. \nonumber \end{eqnarray} \begin{figure}[h] \centering \includegraphics[width=80mm]{Gluon.eps} \caption{Diagrams contributing to corrections due to interaction with background gluon field.} \label{gluon} \end{figure} In addition to the set of kinetic corrections considered above, the effects of the interactions of the intermediate quarks with background gluon fields should also be included at this order. The contribution of those operators can be computed from the diagrams in Fig.~\ref{gluon}, resulting in \begin{eqnarray} &&{\cal T}_{{\rm spec}, G} = - \dfrac{G_F^2(V^*_{cb}V^{\phantom{*}}_{cs})^2}{4\pi\sqrt{1-4z}} \left\{C_1^2\left[(1-4z) P_1\right.\right.\nonumber\\ &-&\left.(1-4z) P_2+4z P_3-4z P_4 \right]\nonumber\\ &+& \left. 4\; C_1 C_2 z\left[P_5+P_6-P_7-P_{8}\right]\right\}. \nonumber \end{eqnarray} The local four-quark operators contributing to this correction are given in Eq.~(\ref{poper}).
\begin{eqnarray} \label{poper} P_1 &=& \bar b_i\gamma^{\mu}(1-\gamma_5)s^{\phantom{l}}_i~ \bar b^{\phantom{l}}_k\gamma^{\nu}(1-\gamma_5)t_{kl}^a\widetilde{G}_{\mu\nu}^a s^{\phantom{l}}_l\\ P_2 &=& \bar b_k\gamma^{\mu}(1-\gamma_5)t_{kl}^a\widetilde{G}_{\mu\nu}^a s_l~ \bar b^{\phantom{l}}_i\gamma^{\nu}(1-\gamma_5)s^{\phantom{l}}_i\,, \nonumber\\ P_3 &=& \dfrac{1}{m_b^2}\bar b_i\overleftarrow{D}^{\mu}\overleftarrow{D}^{\alpha}\gamma^{\alpha}(1-\gamma_5) s^{\phantom{l}}_i~ \bar b^{\phantom{l}}_k\gamma_{\nu}(1-\gamma_5)t_{kl}^a\widetilde{G}_{\mu\nu}^a s^{\phantom{l}}_l\,, \nonumber\\ P_4 &=& \dfrac{1}{m_b^2}\bar b_k\overleftarrow{D}^{\nu}\overleftarrow{D}^{\alpha}\gamma^{\mu}(1-\gamma_5) t_{kl}^a\widetilde{G}_{\mu\nu}^a s^{\phantom{l}}_l ~\bar b^{\phantom{l}}_i\gamma_{\alpha}(1-\gamma_5)s^{\phantom{l}}_i\,, \nonumber \\ P_5 &=& \dfrac{1}{m_b^2}\bar b_k\overleftarrow{D}^{\nu}\overleftarrow{D}^{\alpha}\gamma^{\mu}(1-\gamma_5) s^{\phantom{l}}_i~t_{kl}^a\widetilde{G}_{\mu\nu}^a ~\bar b^{\phantom{l}}_i\gamma_{\alpha}(1-\gamma_5)s^{\phantom{l}}_l\,, \nonumber \\ P_6 &=& \dfrac{1}{m_b^2}\bar b_i\overleftarrow{D}^{\nu}\overleftarrow{D}^{\alpha}\gamma^{\mu}(1-\gamma_5) s^{\phantom{l}}_k~t_{kl}^a\widetilde{G}_{\mu\nu}^a ~\bar b^{\phantom{l}}_l\gamma_{\alpha}(1-\gamma_5) s^{\phantom{l}}_i\,, \nonumber \\ P_7 &=& \dfrac{1}{m_b^2}\bar b_k\overleftarrow{D}^{\mu}\overleftarrow{D}^{\alpha}\gamma^{\alpha}(1-\gamma_5) s^{\phantom{l}}_i~t_{kl}^a\widetilde{G}_{\mu\nu}^a ~\bar b^{\phantom{l}}_i\gamma_{\nu}(1-\gamma_5)s^{\phantom{l}}_l\,, \nonumber \\ P_8 &=& \dfrac{1}{m_b^2}\bar b_i\overleftarrow{D}^{\mu}\overleftarrow{D}^{\alpha}\gamma^{\alpha}(1-\gamma_5) s^{\phantom{l}}_k~t_{kl}^a\widetilde{G}_{\mu\nu}^a ~\bar b^{\phantom{l}}_l\gamma_{\nu}(1-\gamma_5) s^{\phantom{l}}_i. 
\nonumber \end{eqnarray} Following \cite{Gabbiani:2004tp}, these operators are parametrized in the following way: \begin{equation} \label{glucorr} \langle \overline B_s | P_i | B_s \rangle = \dfrac{1}{4} f^2_{B_{s}} M^2_{B_{s}} \left(\dfrac{M_{B_s}^2}{m^2_b}-1\right)^2\beta_i. \end{equation} It is hard to obtain a precise prediction for the lifetime difference with so many operators contributing. Nevertheless, the contributions from $\delta_{1/m}$ and $\delta_{1/m^2}$ can be evaluated. In our numerical calculations we take the pole mass of the $b$-quark to be $m_b=4.8 \pm 0.2~{\rm GeV}$ and $f_B = 230 \pm 25~{\rm MeV}$. In order to see the effect of the $O(1/m_b^2)$ corrections we fix all perturbative parameters at the middle of their allowed ranges and display the dependence of $\Delta\Gamma_{B_s}$ on the non-perturbative parameters $B_i,\ \alpha_i,\ \beta_i$ defined in Eqs.~(\ref{meqs}), (\ref{corr1}), (\ref{corr2}) and (\ref{glucorr}): \begin{eqnarray} &&\Delta\Gamma_{B_s}=\left[ 0.0005 B + 0.1732 B_s + 0.0024 B_1\right.\nonumber\\ &&- 0.0237 B_2-0.0024B_3-0.0436B_4\nonumber\\ &&+2\times10^{-5} \alpha_1+4\times10^{-5} \alpha_2 + 4\times10^{-5} \alpha_3 \nonumber\\ &&+0.0009\alpha_4 -0.0007\alpha_5\nonumber\\ &&+0.0002\beta_1- 0.0002\beta_2+6\times10^{-5}\beta_3\\ &&-6\times10^{-5}\beta_4-1\times10^{-5}\beta_5\nonumber\\ &&\left.-1\times10^{-5}\beta_6+1\times10^{-5}\beta_7 + 1\times10^{-5}\beta_8\right]~{\rm ps}^{-1}\nonumber \end{eqnarray} It is obvious that the $O(1/m_b^2)$ corrections have only a minor effect on the calculation of the $B_s - \overline{B}_s$ lifetime difference. The contribution from the interaction with the background gluon field is essentially negligible. To obtain the full SM prediction for $\Delta\Gamma_{B_s}$ we vary the values of the matrix-element parameters. We generate a 100000-point probability distribution of the lifetime difference by randomly varying our parameters within a $\pm 30\%$ range around their factorization values, or within $\pm 1 \sigma$ for parameters known from experimental data or lattice QCD calculations.
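The linear formula above is easy to evaluate and scan numerically. The toy sketch below uses the coefficients exactly as quoted and, as a simplification of the scan just described, varies every parameter with a flat $\pm30\%$ distribution (the actual analysis uses $1\sigma$ ranges for the parameters known from data or the lattice):

```python
import random

# Coefficients (ps^-1) of the bag parameters, copied from the linear
# formula for Delta Gamma_{B_s} quoted above.
COEFFS = {
    "B": 0.0005, "B_S": 0.1732,
    "B1": 0.0024, "B2": -0.0237, "B3": -0.0024, "B4": -0.0436,
    "a1": 2e-5, "a2": 4e-5, "a3": 4e-5, "a4": 0.0009, "a5": -0.0007,
    "b1": 0.0002, "b2": -0.0002, "b3": 6e-5, "b4": -6e-5,
    "b5": -1e-5, "b6": -1e-5, "b7": 1e-5, "b8": 1e-5,
}

def delta_gamma(params):
    """Delta Gamma_{B_s} in ps^-1 for a given set of bag parameters."""
    return sum(c * params[name] for name, c in COEFFS.items())

# Factorization point: all bag parameters equal to 1.
central = delta_gamma({name: 1.0 for name in COEFFS})

# Toy Monte Carlo scan: flat +-30% variation of every parameter.
random.seed(1)
samples = [
    delta_gamma({name: random.uniform(0.7, 1.3) for name in COEFFS})
    for _ in range(10000)
]
```

With all parameters at 1 the formula gives roughly $0.107~{\rm ps}^{-1}$; the spread of the scan is dominated by the large $B_S$ coefficient.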
The resulting distribution is presented in Fig.~\ref{fig:result}. \begin{figure}[h] \centering \includegraphics[width=80mm]{Delta_gamma.eps} \caption{Histogram of the distribution of the $B_s$ lifetime difference $\Delta\Gamma_{B_s}$ obtained by randomly varying the contributing parameters around their central values.} \label{fig:result} \end{figure} There is no theoretically rigorous way to interpret this distribution, since theoretical predictions are not expected to be Gaussian-distributed. Nevertheless, we can give a numerical prediction by estimating the position of the peak as the most probable value and the width of the peak at half maximum as the theoretical uncertainty: \begin{eqnarray} \label{main_result} \Delta\Gamma_{B_s}&=&0.072^{+0.034}_{-0.030}~{\rm ps}^{-1}\nonumber\\ \frac{\Delta\Gamma_{B_s}}{\Gamma_{B_s}}&=&0.104\pm0.049 \end{eqnarray} where in the latter result we added in quadrature the theoretical error of our calculation of $\Delta\Gamma_{B_s}$ and the experimental error of the determination of $\Gamma_{B_s}$. Further improvement in the lattice or QCD sum rules determination of the ``bag'' parameters would make this prediction even more solid. \section{New Physics contributions to lifetime difference} In the previous section it was shown that the $O(1/m_b^2)$ corrections to the lifetime difference of $B_s$ and $\overline{B}_s$ mesons are small. Since we have a reliable prediction of $\Delta\Gamma_{B_s}$, it is interesting to consider possible effects of physics beyond the Standard Model on the lifetime difference in the $B_s$ system. As was pointed out a long time ago~\cite{Grossman:1996er,Dunietz:2000cr}, CP-violating contributions to $M_{12}$ must {\it reduce} the lifetime difference in the $B_s$ system, as \begin{equation} \Delta \Gamma_s = \Delta \Gamma_s^{SM} \cos^2 2 \theta_s, \end{equation} where $\theta_s$ is a CP-violating phase of $M_{12}$, which is thought to be dominated by some $\Delta B =2$ New Physics.
On the other hand, a CP-conserving $\Delta B=1$ NP amplitude can interfere with the SM contribution constructively or destructively, depending on the NP model. No spectacular NP phases have been observed in $B_s$ mixing, so it is important to estimate the CP-conserving contribution to $\Delta \Gamma_s$. We shall consider it using a generic set of effective operators, and then apply our results to popular extensions of the SM. Using the completeness relation, the NP contribution to the $B^0_s$-${\overline B}^0_s$ lifetime difference becomes \begin{eqnarray}\label{gammaope} y \ &=& \ \frac{2}{M_{\rm B_s} \Gamma_{\rm B_s}}\, \langle \overline{B}_s | {\rm Im}\, {\cal T} | B_s \rangle \ \ , \\ {\cal T} \ &=& \ \,i\! \int\! {\rm d}^4 x\, T \left( {\cal H}^{\Delta B=-1}_{SM} (x)\, {\cal H}^{\Delta B=-1}_{NP}(0) \right) \ \ .\nonumber \end{eqnarray} We represent the generic NP $\Delta B=1$ Hamiltonian as \begin{eqnarray}\label{HamNP} && {\cal H}^{\Delta B=-1}_{NP} = \sum_{q,q'} \ D_{qq'} \left[\overline {\cal C}_1(\mu) Q_1 + \overline {\cal C}_2 (\mu) Q_2 \right]\ , \\ && Q_1 = \overline{b}_i \overline\Gamma_1 q_j' ~ \overline{q}_j \overline\Gamma_2 s_i \ , \ \ Q_2 = \overline{b}_i \overline\Gamma_1 q_i' ~ \overline{q}_j \overline\Gamma_2 s_j\ , \nonumber \end{eqnarray} where $\overline\Gamma_{1,2}$ are arbitrary combinations of Dirac matrices and $\overline {\cal C}_{1,2}(\mu)$ are Wilson coefficients evaluated at the energy scale $\mu$.
This gives the following contribution to the lifetime difference: \begin{eqnarray} \label{New Physics result} &&\Delta\Gamma_{NP}= \frac{4G_F \sqrt{2}}{M_{B_s}}\sum_{qq'}D_{qq'}V^{\ast}_{qb}V_{q's}\\ &\times&\left(K_1\delta_{i\beta}\delta_{k\gamma}+ K_2\delta_{k\beta}\delta_{i\gamma}\right)\sum_{j=1}^{5}I_j(z,z') \langle\overline{B_s}|O_j^{i\beta k\gamma}|B_s\rangle\nonumber \end{eqnarray} where $i, \beta, k, \gamma$ stand for color indices, and the operators $O_j^{i\beta k\gamma}$ are the following: \begin{eqnarray} O_1^{i\beta k\gamma}&=&\left(\bar{b}_i\Gamma^{\nu}\gamma^{\rho} \Gamma_2s_{\gamma}\right)\left(\bar{b}_k\Gamma_{1} \gamma_{\rho}\Gamma_{\nu}s_{\beta}\right)\nonumber \\ O_2^{i\beta k\gamma}&=&\left(\bar{b}_i\Gamma^{\nu}\hat{p}\Gamma_2s_{\gamma}\right) \left(\bar{b}_k\Gamma_{1}\hat{p}\Gamma_{\nu}s_{\beta}\right)\nonumber\\ \label{New Physics operators} O_3^{i\beta k\gamma}&=&\left(\bar{b}_i\Gamma^{\nu}\Gamma_2s_{\gamma}\right) \left(\bar{b}_k\Gamma_{1}\hat{p}\Gamma_{\nu}s_{\beta}\right)\\ O_4^{i\beta k\gamma}&=&\left(\bar{b}_i\Gamma^{\nu}\hat{p}\Gamma_2s_{\gamma} \right)\left(\bar{b}_k\Gamma_{1}\Gamma_{\nu}s_{\beta}\right)\nonumber\\ O_5^{i\beta k\gamma}&=&\left(\bar{b}_i\Gamma^{\nu}\Gamma_2s_{\gamma}\right) \left(\bar{b}_k\Gamma_{1}\Gamma_{\nu}s_{\beta}\right)\nonumber, \end{eqnarray} with $p$ being the $b$-quark momentum, and $K_i$ are the following combinations of Wilson coefficients: \begin{eqnarray} K_1&=&C_2\overline{C}_2 N_c + C_2\overline{C}_1 + C_1\overline{C}_2\nonumber\\ K_2&=&C_1\overline{C}_1 \end{eqnarray} with the number of colors $N_c=3$.
Defining $z\equiv m_q^2/m_b^2$ and $z'\equiv m_{q'}^2/m_b^2$, the coefficients $I_j(z,z')$ can be written as follows: \begin{eqnarray} I_1(z,z')&=&-\frac{\Phi m_c}{48\pi}\left[1-2(z+z')+(z-z')^2\right]\nonumber\\ I_2(z,z')&=&-\frac{\Phi}{24m_c\pi}\left[1+(z+z')-2(z-z')^2\right]\nonumber\\ I_3(z,z')&=&\frac{\Phi }{8\pi}\sqrt{z}\left[1+z'-z\right]\\ I_4(z,z')&=&-\frac{\Phi }{8\pi}\sqrt{z'}\left[1-z'+z\right]\nonumber\\ I_5(z,z')&=&\frac{\Phi m_c}{4\pi}\sqrt{zz'}\nonumber, \end{eqnarray} where $\Phi=\frac{m_c}{2}\left[1-2(z+z')+(z-z')^2\right]^{1/2}$ is the available phase space of the process. \subsection{Multi-Higgs model}\label{Multi-Higgs} One possible realization of New Physics is the charged-Higgs doublet model proposed in \cite{Golowich:1979hd}. This model provides a new flavor-changing interaction mediated by charged Higgs bosons. It leads to the following four-fermion interaction: \begin{equation} \label{Higgs doublet hamiltonian} {\cal{H}}^{\Delta B=1}_{ChH}=-\frac{\sqrt{2}G_F}{M_H^2}\ \overline{b}_i \overline\Gamma_1 q_i' ~ \overline{q}_j \overline\Gamma_2 s_j\,, \end{equation} where the $\overline\Gamma_i$, $i=1,2$, are \begin{eqnarray} \overline\Gamma_1 &=& m_bV_{cb}^{\ast}\cot\beta P_L - m_cV_{cb}^{\ast}\tan\beta P_R\nonumber\\ \overline\Gamma_2 &=& m_sV_{cs}\cot\beta P_R - m_cV_{cs}\tan\beta P_L\,. \end{eqnarray} This leads to three operators with various coefficients, whose matrix elements contribute to $y_{ChH}$: \begin{eqnarray} &y_{ChH}=\frac{8G_F^2m_b^2}{M_B\Gamma_B}\frac{(V_{cb}^{\ast}V_{cs})^2}{M_H^2}\times\nonumber\\ &\left[\langle Q_1\rangle\left(4K_2x_sI_1\cot^2\beta+2(\cot^2\beta m_b^2x_sI_2-m_bxI_4)(K_2-K_1)\right)\right.+\nonumber\\ &\left.+\langle Q_2\rangle\left(-2K_1x_sI_1\cot^2\beta+(\cot^2\beta m_b^2x_sI_2-m_bxI_4)(K_2-K_1)\right)+\right.\nonumber\\ &\left.+\langle Q_3\rangle(K_1+K_2)\left(x^2\tan^2\beta I_5-m_bxI_3\right)\right] \end{eqnarray} Here $I_i$ and $K_i$ are as defined above, $x=m_c/m_b$, and $x_s=m_s/m_b$.
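The phase-space factor $\Phi$ and the kinematic coefficients $I_{1\ldots5}$ defined above are straightforward to evaluate; a minimal sketch, transcribed directly from the formulas (with the overall $m_c$ factors kept exactly as written, so the normalization is the one used in the text):

```python
from math import pi, sqrt

def phase_space(z, zp, m_c):
    """Phi = (m_c/2) * sqrt(1 - 2(z+z') + (z-z')^2), as defined above."""
    return 0.5 * m_c * sqrt(1.0 - 2.0 * (z + zp) + (z - zp) ** 2)

def I(j, z, zp, m_c):
    """Kinematic coefficients I_1..I_5(z, z') from the text."""
    F = phase_space(z, zp, m_c)
    if j == 1:
        return -F * m_c / (48 * pi) * (1 - 2 * (z + zp) + (z - zp) ** 2)
    if j == 2:
        return -F / (24 * m_c * pi) * (1 + (z + zp) - 2 * (z - zp) ** 2)
    if j == 3:
        return F / (8 * pi) * sqrt(z) * (1 + zp - z)
    if j == 4:
        return -F / (8 * pi) * sqrt(zp) * (1 - zp + z)
    return F * m_c / (4 * pi) * sqrt(z * zp)  # j == 5
```

In the massless limit $z=z'=0$ the factors $\sqrt{z}$, $\sqrt{z'}$, $\sqrt{zz'}$ switch off $I_3$, $I_4$, $I_5$, and $\Phi$ reduces to $m_c/2$, as expected from the expressions above.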
$\langle Q_i\rangle$ are as follows: \begin{eqnarray} Q_1 &=& (\overline b_i)_L (s_i)_R (\overline b_k)_R ( s_k)_L \\ \langle Q_1\rangle &=&-\frac{1}{4}f_B^2M_B^2\frac{M_B^2}{(m_b+m_s)^2} \left(2+\frac{1}{N_c}\right) \nonumber\\ Q_2 &=& (\overline b_i)_R\gamma^{\nu} (s_i)_R (\overline b_k)_L\gamma_{\nu} (s_k)_L \\ \langle Q_2\rangle &=& -\frac{1}{2}f_B^2M_B^2\left(1+\frac{2}{N_c}\right)\nonumber\\ Q_3 &=& (\overline b_i)_L\gamma^{\nu} (s_i)_L (\overline b_k)_L\gamma_{\nu} (s_k)_L \\ \langle Q_3\rangle &=&\frac{1}{2}f_B^2M_B^2\left(1+\frac{1}{N_c}\right) \nonumber \end{eqnarray} For $M_H = 85~{\rm GeV}$ and $\cot\beta=0.05$ \cite{PDG} this gives $y_{ChH}\approx 0.0034$, which is about 10\% of the Standard Model value. The dependence of $y_{ChH}$ on the charged-Higgs mass is shown in Fig.~\ref{fig:Higgs}. \begin{figure}[h] \centering \includegraphics[width=80mm]{Higgs.eps} \caption{Dependence of $y_{ChH}$ on the charged-Higgs mass: solid line, $\tan\beta=20$; dashed line, $\tan\beta=10$; dotted line, $\tan\beta=5$; dash-dotted line, $\tan\beta=3$.} \label{fig:Higgs} \end{figure} \subsection{Left-Right Models} One possible extension of the SM is the Left-Right Symmetric Model (LRSM), which assumes an extended $SU(2)_L \times SU(2)_R$ symmetry of the theory. In this model an additional flavor-changing interaction is mediated by right-handed $W^{(R)}$ bosons.
In this case flavor mixing is described by a right-handed CKM matrix $V_{ik}^{(R)}$ and \begin{eqnarray} \overline \Gamma_{1,2}&=&\gamma^{\mu}P_R\\ D_{qq'}&=&V_{cb}^{\ast(R)}V_{cs}^{(R)}\frac{G_F^{(R)}}{\sqrt 2}\,, \end{eqnarray} where $\frac{G_F^{(R)}}{\sqrt 2}=g_R^2/(8M_{W^{(R)}}^2)$, and in the calculations below we take $g_L=k\, g_R$. This model gives the following prediction for $y$: \begin{eqnarray} y_{LR}&=&-V_{cb}^{\ast}V_{cs}V_{cb}^{\ast(R)}V_{cs}^{(R)}\ \frac{G_F^2m_b^2 x}{\pi M_B\Gamma_B}\left(\frac{M_W}{M_W^{(R)}}\right)^2\nonumber\\ &\times&\left[C_1 \langle Q_2\rangle + C_2 \langle \tilde{Q_2}\rangle\right] \end{eqnarray} The largest value of $y_{LR}$ is obtained in the ``non-manifest LR'' scenario ($V_{ij}^{(R)}\approx 1$): for $M_{W^{(R)}}=1~{\rm TeV}$ we find $y_{LR}\approx -0.015$. In the ``manifest LR'' case ($V_{ij}^{(R)}= V_{ij}$) the contribution is smaller. The dependence of $y_{LR}$ on the $W^{(R)}$ mass is shown in Fig.~\ref{fig:LRSM}. \begin{figure} \centering \subfigure[Solid line: non-manifest LRSM ($k=1$); dashed line: manifest LRSM] { \label{fig:sub:a} \includegraphics[width=8cm]{LR.eps} } \hspace{1cm} \subfigure[Solid line: $k=1$; dashed line: $k=1.5$; dotted line: $k=2$ ] { \label{fig:sub:b} \includegraphics[width=8cm]{Non_manifest.eps} } \caption{Contribution to $\Delta\Gamma_{B_s}/\Gamma_{B_s}$ in the left-right symmetric models. (a): dependence on $M_W^{(R)}$. (b): dependence on $M_W^{(R)}$ in the non-manifest LRSM.} \label{fig:LRSM} \end{figure} \section{Conclusions} We computed the subleading $O(1/m_b^2)$ corrections to the lifetime difference of $B_s$ mesons. The corrections depend on 13 non-perturbative parameters $\alpha_i$ and $\beta_i$. We generated a probability distribution of the lifetime difference by varying the parameters within $\pm30\%$ of their ``factorization'' values, or within $1\sigma$ for parameters known from lattice QCD.
The results are presented in Fig.~\ref{fig:result}. Translating this distribution into a numerical prediction for $\Delta\Gamma_{B_s}/\Gamma_{B_s}$, we obtained the most precise theoretical prediction for the lifetime difference available today: \begin{eqnarray} \Delta\Gamma_{B_s}&=&0.072^{+0.034}_{-0.030}~{\rm ps}^{-1}\nonumber\\ \frac{\Delta\Gamma_{B_s}}{\Gamma_{B_s}}&=&0.104\pm0.049 \end{eqnarray} The effect of the $1/m_b^2$ corrections on the lifetime difference is small. We also considered the generic $\Delta B=1$ New Physics contribution to the lifetime difference in the $B_s$ system: we took the four-fermion effective Hamiltonian of a generic Standard Model extension and computed its contribution to $\Delta \Gamma_{B_s}$. It can reduce or increase the SM contribution, depending on the particular choice of the model. Two models of physics beyond the Standard Model were considered. The contribution of charged Higgses to the lifetime difference is negligible. The LRSM contribution is significant, and the parameters of this model can be constrained by $\Delta \Gamma_{B_s}$ measurements.
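As an illustration of the last point, the quadratic scaling $y_{LR}\propto (M_W/M_W^{(R)})^2$ lets one translate a bound on $|y_{LR}|$ into a lower bound on the right-handed $W$ mass. A toy sketch, assuming the non-manifest benchmark $|y_{LR}(1~{\rm TeV})|\approx 0.015$ quoted above (the helper name and the inputs are ours):

```python
from math import sqrt

def m_wr_lower_bound(y_max, y_ref=0.015, m_ref_tev=1.0):
    """Lower bound on M_W^(R) in TeV from |y_LR| <= y_max, assuming the
    pure quadratic scaling |y_LR| = y_ref * (m_ref / M)^2."""
    return m_ref_tev * sqrt(y_ref / y_max)
```

For example, pushing the allowed $|y_{LR}|$ down by a factor of ten strengthens the mass bound by $\sqrt{10}\approx 3.2$.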
\section{Introduction} Maxwell's equations for electromagnetic waves in Kerr nonlinear dielectric materials read \begin{subequations}\label{E:Maxwell} \begin{align} \label{E:Ampere} \pa_t \CD &= \nabla \times \CH,\\ \label{E:Faraday} \mu_0\pa_t \CH &= -\nabla \times \CE,\\ \label{E:Gauss} \nabla \cdot \CD &= 0,\\ \label{E:Gauss_magn} \nabla \cdot \CH &= 0 \end{align} \end{subequations} for the \emph{electric field\/} $\CE$, \emph{magnetic field\/} $\CH$, the \emph{electric displacement field\/} $\CD$ with the \emph{constitutive relations} \beq\label{E:constit_rel-1} \begin{split} \CD &= \eps_0 \left(n^2\CE+\CP_{\text{NL}}\right),\\ \CP_{\text{NL},i} &= \sum_{j,l,m=1}^3 \chi_{ijlm}^{(3)}\CE_j\CE_l\CE_m \qquad\text{for }i=1,2,3. \end{split} \ee $\eps_0, \mu_0$ are the \emph{electric permittivity\/} and \emph{magnetic permeability of vacuum}, respectively, $x\mapsto n(x)$ is the \emph{refractive index} of the medium, and $x\mapsto\chi^{(3)}(x)$ is the \emph{cubic electric susceptibility\/} of the medium. We consider a 2D photonic crystal, i.e.{} we assume that the material coefficients change periodically on a plane and are independent of the orthogonal component on that plane. Let $a^{(1)},a^{(2)}\in\R^3$ be linearly independent lattice vectors defining the \emph{Bravais lattice\/} $\Lam:=\operatorname{span}_{\Z}\{a^{(1)},a^{(2)}\}$ of the crystal. Then the required periodicity reads \beq\label{E:constit_rel-2} \begin{split} n(x) &= n(x+R)\in\R,\\ \chi^{(3)}(x) &= \chi^{(3)}(x+R)\in\R^{3\times 3\times 3\times 3} \qquad\text{for all }x\in\R^3\text{ and all }R\in\Lam. \end{split} \ee Without loss of generality we assume that the crystal is homogeneous in the $x_3$-direction, i.e.{} $a^{(1)}_3=a^{(2)}_3=0$ and $\pa_{x_3}n = \pa_{x_3}\chi_{ijlm}^{(3)}=0$ for all $i,j,l,m$. We denote by $U$ the \emph{Wigner--Seitz cell\/} corresponding to the Bravais lattice.
We use $b^{(1)}, b^{(2)}$ to denote the pair of vectors satisfying $a^{(i)}\cdot b^{(j)}=2\pi \delta_{i,j}$ for $i,j\in \{1,2\}$, and let the \emph{reciprocal lattice\/} be $\Lam^*:=\operatorname{span}_{\Z}\{b^{(1)},b^{(2)}\}$. $\B$ denotes the first \emph{Brillouin zone}, i.e.{} the Wigner--Seitz cell of the reciprocal lattice. Note that from the relations in \eqref{E:constit_rel-1} it is clear that we are neglecting losses, material dispersion as well as higher order nonlinearities and assuming that the third order nonlinear response of the medium is instantaneous. We will consider monochromatic waves propagating in the $x_3$-direction, i.e.{} waves propagating \emph{out of the plane of periodicity} of the 2D crystal, and use the ansatz \beq\label{E:field_form} (\CE,\CH,\CD)(x,t)= e^{\ri(\kappa x_3-\omega t)} (E,H,D)(x_1,x_2;\omega) + \text{c.c.}, \ee where $\kappa \in \R$ and c.c.{} denotes the complex conjugate of the first term on the right. The ansatz \eqref{E:field_form} contains no higher harmonics, which is valid if the above form of $\CP_{\text{NL}}$ is replaced by a time averaged one, see below. Alternatively, a physical justification of neglecting higher harmonics is based on the lack of phase matching and absorption. Note that for the field \eqref{E:field_form} the divergence free conditions \eqref{E:Gauss} and \eqref{E:Gauss_magn} are automatically satisfied provided $\omega \neq 0$ since the spatially dependent parts \[ \big(\hat{\CE},\hat{\CH},\hat{\CD}\big)(x;\omega) :=e^{\ri\kappa x_3}\big(E,H,D\big)(x_1,x_2;\omega) \] satisfy \beq\label{E:solenoid} \hat{\CD}=\frac{\ri}{\omega}\nabla\times\hat{\CH} \quad \text{and} \quad \mu_0\hat{\CH}=-\frac{\ri}{\omega}\nabla\times\hat{\CE}, \ee and thus $\nabla\cdot\hat{\CD} = \nabla\cdot\hat{\CH}=0$. Since our analysis below is for gap solitons with $\omega$ close to a band edge, the condition $\omega\neq 0$ is for us restrictive only when $\omega=0$ is in a gap and lies near a band edge.
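For concreteness, the reciprocal vectors $b^{(1)},b^{(2)}$ introduced above can be computed from the in-plane components of $a^{(1)},a^{(2)}$ by inverting a $2\times2$ matrix; a minimal pure-Python sketch (the square-lattice demo at the end is our example, not from the text):

```python
from math import pi

def reciprocal_2d(a1, a2):
    """Return b1, b2 with a_i . b_j = 2*pi*delta_ij for 2D lattice
    vectors a1, a2 given by their in-plane components.

    This is B = 2*pi * (A^{-1})^T for the matrix A with rows a1, a2,
    written out for the 2x2 case.
    """
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if det == 0:
        raise ValueError("lattice vectors are linearly dependent")
    b1 = (2 * pi * a2[1] / det, -2 * pi * a2[0] / det)
    b2 = (-2 * pi * a1[1] / det, 2 * pi * a1[0] / det)
    return b1, b2

# demo: square lattice with a = 1 gives b^(1) = (2*pi, 0)
bsq1, bsq2 = reciprocal_2d((1.0, 0.0), (0.0, 1.0))
```

The same two-liner handles oblique lattices, e.g.{} the hexagonal choice $a^{(1)}=(1,0)$, $a^{(2)}=(1/2,\sqrt{3}/2)$.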
Note also that even if higher harmonics are accounted for, the divergence free conditions are still satisfied for $\omega \neq 0$ as \eqref{E:solenoid} then holds for each generated harmonic. Clearly, only odd, i.e., $(2n+1)$-th, $n\in\Z$, harmonics are generated. We will assume a centrosymmetric and isotropic $\chi^{(3)}$-tensor, which leads to the simplification \begin{align*} \CP_{\text{NL}}= \chici (\CE \cdot \CE) \CE, \end{align*} where $\chici := \chi^{(3)}_{1111} = \chi^{(3)}_{2222} = \chi^{(3)}_{3333}$ for $\chici:(x_1,x_2)\in \R^2 \rightarrow \R$, see \cite[Sec.~2d]{MN04}. Inserting the ansatz \eqref{E:field_form} in the nonlinearity $\CP_{\text{NL}}$ clearly generates the harmonics $e^{\pm 3\ri(\kappa x_3-\omega t)}$. These are, however, typically neglected based on the physical arguments that the fundamental harmonics $e^{\pm \ri(\kappa x_3-\omega t)}$ and the higher harmonics are not phase matched and that at the higher values of frequency (i.e. at $\pm 3\omega$) material absorption is usually large preventing the generation of significant fields at these frequencies, see e.g.{} \cite{BS01}. Considering only the fundamental harmonics, the nonlinear polarization for the ansatz \eqref{E:field_form} becomes \beq\label{E:PNL_ansatz} \CP_{\text{NL}} =\chici \big(2|E|^2\,E+E\cdot E \,\barl{E}\big) e^{\ri(\kappa x_3-\omega t)} + \text{c.c.}\,, \ee i.e.{} \beq\label{E:PNL_ans_full} \CP_{\text{NL}} =\chici \bspm (3|E_1|^2+2|E_2|^2+2|E_3|^2)E_1+(E_2^2+E_3^2)\bar{E}_1\\ (2|E_1|^2+3|E_2|^2+2|E_3|^2)E_2+(E_1^2+E_3^2)\bar{E}_2\\ (2|E_1|^2+2|E_2|^2+3|E_3|^2)E_3+(E_1^2+E_2^2)\bar{E}_3 \espm e^{\ri(\kappa x_3-\omega t)} + \text{c.c.}. \ee Another widely used model for the nonlinear polarization is \begin{align*} \CP_{\text{NL}}= \chici [ \CE \cdot \CE ]^{\rm av} \CE, \end{align*} where $[f]^{\rm av}$ denotes the time average of $f$ over the period of $f$, i.e.{} over $t\in [0,\pi/\omega]$ for $f=\CE \cdot \CE$, cf.{} \cite{Stuart_93,Sutherland03}.
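The componentwise form \eqref{E:PNL_ans_full} follows from $2|E|^2 E+(E\cdot E)\barl{E}$ via the identity $E_i^2\bar E_i=|E_i|^2E_i$; the algebra can be checked numerically for an arbitrary complex field (a toy check with a hand-picked $E$, not part of the derivation):

```python
# Check that 2|E|^2 E + (E.E) conj(E) agrees with the componentwise
# expansion of the nonlinear polarization quoted in the text.
E = (0.3 + 0.7j, -0.2 + 0.1j, 0.5 - 0.4j)  # arbitrary complex test field

norm2 = sum(abs(c) ** 2 for c in E)   # |E|^2
EdotE = sum(c * c for c in E)         # E . E, no conjugation

vector_form = [2 * norm2 * c + EdotE * c.conjugate() for c in E]

a = [abs(c) ** 2 for c in E]          # |E_1|^2, |E_2|^2, |E_3|^2
component_form = [
    (3*a[0] + 2*a[1] + 2*a[2]) * E[0] + (E[1]**2 + E[2]**2) * E[0].conjugate(),
    (2*a[0] + 3*a[1] + 2*a[2]) * E[1] + (E[0]**2 + E[2]**2) * E[1].conjugate(),
    (2*a[0] + 2*a[1] + 3*a[2]) * E[2] + (E[0]**2 + E[1]**2) * E[2].conjugate(),
]
```

The two expressions agree to machine precision, since the identity holds exactly component by component.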
The averaging generates no higher harmonics so that in this model \eqref{E:PNL_ansatz} is exact. Note that the Kerr nonlinear problem including all higher harmonics has been recently considered for a 1D periodic structure in \cite{PSW11}. In the following we rescale the frequency by defining $$ \wt{\omega}:=\frac{\omega}{c} $$ but drop the tilde again for better readability. For convenience we will denote the square of the refractive index by $$ \eta(x):=n^2(x)\qquad\text{for all }x\in\R^3. $$ With the ansatz \eqref{E:field_form} equations \eqref{E:Ampere} and \eqref{E:Faraday} become \begin{subequations}\label{E:Maxw_1st_harm} \begin{align} -\ri c \omega D &= \nabla \times H + \ri \bspm 0\\0\\\kappa \espm\times H,\\ \ri c \omega \mu_0 H &= \nabla \times E + \ri \bspm 0\\0\\\kappa \espm\times E. \end{align} \end{subequations} Since all our functions are independent of $x_3$, we let from now on $x=(x_1,x_2)\in\R^2$. Using the fact that $E$ depends only on $x_1$ and $x_2$, a second order formulation of \eqref{E:Maxw_1st_harm} reads \beq\label{E:NL_Maxw} \left(L-\omega^2\eta\right)E = \omega^2 P_{\text{NL}}, \ee where \beq\label{E:L_op} L E := \nabla\times \nabla \times E + \ri \kappa \bspm\pa_{x_1}E_3\\\pa_{x_2}E_3\\\pa_{x_1}E_1+\pa_{x_2}E_2\espm +\kappa^2 \bspm E_1\\E_2\\0\espm, \ee and \[ P_{\text{NL}} = \chici \big(2|E|^2\,E+E\cdot E \,\barl{E}\big). \] Having determined $E$, the magnetic field can be recovered by \[ H = -\tfrac{\ri}{\omega \mu_0}\left(\nabla \times E + \ri \bspm 0\\0\\\kappa\espm\times E\right). \] Based on the analogy with the periodic nonlinear Schr\"odinger equation \cite{Pankov05}, equation \eqref{E:NL_Maxw} is expected to have localized $H(\text{curl},\R^2)$-solutions $E$ for any $\omega$ in a spectral gap of the linear problem $Lu=\omega^2 \eta u$. Such solutions are called \textit{gap solitons}.
The aim of this paper is to provide an approximation of gap solitons $E$ of \eqref{E:NL_Maxw} for $\omega$ in an $\eps^2$-vicinity ($0<\eps\ll 1$) of a gap edge using a slowly varying envelope approximation. As we show, envelopes of such gap solitons satisfy a system of nonlinear constant coefficient equations, so-called \textit{coupled mode equations} (CMEs), posed in the slow variables $y=\eps x$. The CMEs can be numerically solved with less effort than the nonlinear Maxwell system \eqref{E:NL_Maxw} in the variable $x$. An asymptotic approximation of a gap soliton of \eqref{E:NL_Maxw} near a gap edge is then the sum of linear Bloch waves at the edge, modulated by the corresponding envelopes. Asymptotic approximations via CMEs have been analyzed for gap solitons of the stationary periodic nonlinear Schr\"odinger equation in 1D \cite{PSn07} as well as in 2D \cite{DPS09,DU09,DU_err11}. In these works the approximation via CMEs was also rigorously justified using Lyapunov--Schmidt reductions. Gap solitons of the nonlinear Maxwell's equations have been approximated by CMEs in the case of 1D photonic crystals with a small (infinitesimal) contrast in the periodicity \cite{GWH01,PSn07,PSW11}, where \cite{GWH01} considers gap solitons modulated also in time. To our knowledge the problem of a systematic CME approximation of gap solitons of nonlinear Maxwell's equations describing 2D or 3D photonic crystals does not appear in the literature. Although CMEs have been formally derived for pulses in Maxwell's equations with a 2D periodic medium of small contrast \cite{AJ98,AP05,DA05}, these pulses cannot be true gap solitons because in 2D and 3D a large enough contrast is necessary for the opening of spectral gaps. In this paper we consider a 2D photonic crystal with a finite contrast in the periodicity. For our examples we use a photonic crystal which has several spectral gaps \cite{AM03}.
Besides the above-cited works on coupled mode modeling of gap solitons there are a number of papers on the slowly varying envelope approximation of nonlinear pulses in periodic structures with the pulse frequency lying within the spectral bands. The envelope in this case can typically be modeled by the time dependent nonlinear Schr\"odinger equation and the approximation holds on large but finite time intervals \cite{BS01,BabinFigotin:2005,BSTU06}. The rest of the paper is organized as follows. In Section \ref{S:band_str} we study the linear band structure $\omega_n(k)$ of \eqref{E:NL_Maxw} (with $\chici=0$) and thus obtain the linear spectrum of the problem. We also discuss possible symmetries in the band structure and among the corresponding Bloch waves. An example of a photonic crystal from \cite{AM03} is then provided, for which the band structure is numerically computed and three band gaps are observed on the positive half axis $\omega>0$. In Section \ref{S:CME_deriv} we present a slowly varying envelope approximation of gap solitons of \eqref{E:NL_Maxw} for $\omega$ in the vicinity of a spectral edge and carry out a systematic formal derivation of CMEs describing the envelopes. Next, examples of CMEs are presented for the concrete photonic crystal given in Section \ref{S:band_str} as well as for other theoretical situations. Here the symmetries in the band structure and among the Bloch waves play an important role in determining properties of the CME coefficients. In Section \ref{S:numerics} we plot the approximation of two gap solitons in the chosen photonic crystal. The approximation requires computing the Bloch waves at the edge and solving the corresponding CMEs. \section{Linear Band Structure}\label{S:band_str} \subsection{The periodic eigenvalue problem}\label{S:evp} We study first the linear problem \beq\label{E:lin_Maxw_2nd} Lu=\omega^2 \eta u\qquad\text{on }\R^2 \eeq and define the band structure as well as the linear Bloch waves.
By the Bloch--Floquet theory, see \cite{Kuchment_1993} or \cite[Ch.~3]{DLPSW_2011}, solution modes of \eqref{E:lin_Maxw_2nd} are given by the \emph{Bloch waves\/} $u_n(k;\pkt)$ for $n\in\N$ that satisfy \begin{equation}\label{E:omega_nk_def} \begin{split} Lu_n(k;\pkt)&=\omega_n(k)^2\eta u_n(k;\pkt),\\ u_n(k;\pkt+R)&=u_n(k;\pkt)e^{\ri k\cdot R} \qquad\quad\text{for all }R\in\Lam, \end{split} \end{equation} where $k=(k_1,k_2)$ sweeps the first Brillouin zone $\B\subset \R^2$. It is well-known that $L$ is self-adjoint and has a compact inverse and that there thus exists a sequence of eigenvalues $\{\omega_n\}_{n\ge1}$ with $\lim_{n\to\infty}\omega_n=\infty$ and each eigenspace is of finite dimension. These eigenvalues are nonnegative and we use the natural ordering $\omega_{n-1}\le\omega_n$ for $n\ge1$. The mapping $k\mapsto\omega_n(k)$ is called the \emph{$n$-th band} of the spectral problem \eqref{E:omega_nk_def}. Of course, \eqref{E:omega_nk_def} allows also non-positive bands $-\omega_n$. These are typically labeled via $\omega_{-n}=-\omega_n$ and will play no role in our analysis. We therefore restrict ourselves to $\omega_n \geq 0$ for $n \in \N$. The Bloch waves in \eqref{E:omega_nk_def} can be written in the form \[ u_n(k;x) = p_n(k;x)e^{\ri k\cdot x}, \] where the $p_n$ are $\Lambda$-periodic in $x$, i.e.{} $p_n(k;x+R)=p_n(k;x)$ for all $x\in U$, $R\in\Lam$. These satisfy the eigenvalue problem \beq\label{E:shifted} \begin{split} \left(\wt{L}(k) -\omega_n^2(k)\eta(x)\right) p_n(k;x) &=0\qquad\qquad\quad\text{for all }x\in U,\\ p_n(k;x+R)&=p_n(k;x)\qquad\text{for all }x\in\partial U \text{ and all }R\in\Lam, \end{split} \ee with $$ \wt{L}(k) p_n(k;x) = (\nabla + \ri k')\times(\nabla + \ri k')\times p_n(k;x), $$ where $k=(k_1,k_2)\in\B$, $k'=(k_1,k_2,\kappa)^T$.
Since $p_n$ is $x_3$-independent, $\wt{L}(k)$ can be written as \[ \wt{L}(k) = \bspm\kappa ^2-(\pa_{x_2}+\ri k_2)^2 & (\pa_{x_1}+\ri k_1)(\pa_{x_2}+\ri k_2) & \ri \kappa (\pa_{x_1}+\ri k_1)\\ (\pa_{x_1}+\ri k_1)(\pa_{x_2}+\ri k_2) & \kappa ^2-(\pa_{x_1}+\ri k_1)^2 & \ri \kappa (\pa_{x_2}+\ri k_2)\\ \ri \kappa (\pa_{x_1}+\ri k_1) & \ri \kappa (\pa_{x_2}+\ri k_2) & -(\pa_{x_1}+\ri k_1)^2-(\pa_{x_2}+\ri k_2)^2 \espm. \] In the variable $k$ the Bloch waves $u_n$ and the eigenvalues $\omega_n$ are easily proved to fulfill \beq\label{E:k-per} \omega_n(k) = \omega_n(k+K), \quad p_n(k+K;x)=p_n(k;x)e^{-\ri K\cdot x} \qquad \text{for all }x\in U,\,K\in\Lam^*. \ee Due to the self-adjoint nature of $\wt{L}(k)$ we can normalize the Bloch functions via \beq\label{E:normalize_Bloch} \left\langle p_n(k;\pkt),\eta p_m(k;\pkt)\right\rangle = \delta_{n,m}, \ee where $\langle f,g \rangle = \langle f,g \rangle_{L^2(U)^3} = \int_{U} f(x)\cdot\barl{g}(x)\dd x$ for $f,g:\R^2\rightarrow\C^3$. For purposes of the later asymptotic analysis of gap solitons we also present calculations of first and second order derivatives of the bands at extremal points. Suppose the band $\omega_{n_*}$ has an extremum at $k=k_*\in \B$ and denote $\omega_*:=\omega_{n_*}(k_*)$. By direct differentiation of \eqref{E:shifted} we see that the ``generalized Bloch functions'' $\pa_{k_j}p_{n_*}$, for $j \in \{1,2\}$, are solutions of the system \beq\label{E:deriv_eq} \left(\wt{L}(k_*)-\omega_*^2\eta\right)\pa_{k_j}p_{n_*}(k_*;\pkt) =- \pa_{k_j}\wt{L}(k_*) p_{n_*}(k_*;\pkt). \ee Applying the differentiation $\pa_{k_i,k_j}^2$, for $i,j\in\{1,2\}$, to \eqref{E:shifted} and evaluation at $n=n_*$, $k=k_*$ yields \begin{align*} &\left(\wt{L}(k_*)-\omega_*^2\eta(x)\right) \pa_{k_i,k_j}^2p_{n_*}(k_*;x)\\ &\qquad = 2\omega_*\eta(x)\pa_{k_i,k_j}^2 \omega_{n_*}(k_*)p_{n_*}(k_*;x) -\pa_{k_i,k_j}^2\wt{L}(k_*)p_{n_*}(k_*;x)\\ &\qquad\quad{}-\pa_{k_i}\wt{L}(k_*)\pa_{k_j}p_{n_*}(k_*;x) -\pa_{k_j}\wt{L}(k_*)\pa_{k_i}p_{n_*}(k_*;x).
\end{align*} By the Fredholm alternative, the right-hand side is necessarily $L^2$-orthogonal to $p_{n_*}(k_*;\pkt)$, which lies in the kernel of $\wt{L}(k_*)-\omega_*^2\eta$ with periodic boundary conditions on $U$. This yields the formula \begin{align}\label{E:om_deriv_2}\notag &\left(\pa_k^2\omega_{n_*}(k_*)\right)_{i,j} =\pa_{k_i,k_j}^2\omega_{n_*}(k_*)\\ &\quad=\frac{1}{2\omega_*}\left\langle \pa_{k_i,k_j}^2 \wt{L}(k_*)p_{n_*}(k_*;\pkt) +\pa_{k_i}\wt{L}(k_*)\pa_{k_j}p_{n_*}(k_*;\pkt) +\pa_{k_j}\wt{L}(k_*)\pa_{k_i}p_{n_*}(k_*;\pkt), p_{n_*}(k_*;\pkt)\right\rangle. \end{align} A straightforward differentiation of $\wt{L}(k)$ yields \begin{align*} \pa_{k_1}\wt{L}(k_*) &=\bspm 0 & \ri (\pa_{x_2}+\ri k_{*,2}) & -\kappa \\ \ri (\pa_{x_2}+\ri k_{*,2}) & -2\ri (\pa_{x_1}+\ri k_{*,1}) & 0\\ -\kappa & 0 & -2\ri(\pa_{x_1}+\ri k_{*,1}) \espm,\\ \pa_{k_2}\wt{L}(k_*) &=\bspm -2\ri (\pa_{x_2}+\ri k_{*,2}) & \ri (\pa_{x_1}+\ri k_{*,1}) & 0\\ \ri (\pa_{x_1}+\ri k_{*,1}) & 0& -\kappa \\ 0& -\kappa & -2\ri(\pa_{x_2}+\ri k_{*,2}) \espm, \end{align*} \beq\label{E:Ltil_2nd_der} \pa_{k_1}^2\wt{L}\equiv \bspm 0&0&0\\0&2&0\\0&0&2\espm, \quad \pa_{k_2}^2\wt{L}\equiv \bspm 2&0&0\\0&0&0\\0&0&2\espm, \quad \text{and} \quad \pa_{k_1,k_2}^2\wt{L}\equiv \bspm \hphm 0&-1&\hphm 0\\-1&\hphm 0&\hphm 0\\\hphm 0&\hphm 0&\hphm 0\espm, \ee where $k_{*,j}$, for $j\in\{1,2\}$, is the $j$-th component of $k_*$.
With these the explicit forms of \eqref{E:om_deriv_2} read \beq\label{E:om_2deriv_k1} \pa_{k_1}^2\omega_{n_*}(k_*) = \frac{1}{\omega_*} \left\langle \bspm \ri(\pa_{x_2}+\ri k_{*,2})\pa_{k_1}p_{n_*,2}(k_*;\pkt) -\kappa \pa_{k_1}p_{n_*,3}(k_*;\pkt)\\ \ri(\pa_{x_2}+\ri k_{*,2})\pa_{k_1}p_{n_*,1}(k_*;\pkt) -2\ri(\pa_{x_1}+\ri k_{*,1}) \pa_{k_1}p_{n_*,2}(k_*;\pkt)+p_{n_*,2}(k_*;\pkt)\\ -2\ri(\pa_{x_1}+\ri k_{*,1})\pa_{k_1}p_{n_*,3}(k_*;\pkt) -\kappa \pa_{k_1}p_{n_*,1}(k_*;\pkt)+p_{n_*,3}(k_*;\pkt) \espm,p_{n_*}(k_*;\pkt) \right\rangle, \ee \beq\label{E:om_2deriv_k2} \pa_{k_2}^2\omega_{n_*}(k_*) = \frac{1}{\omega_*}\left\langle \bspm -2\ri(\pa_{x_2}+\ri k_{*,2})\pa_{k_2}p_{n_*,1}(k_*;\pkt) +\ri(\pa_{x_1}+\ri k_{*,1}) \pa_{k_2}p_{n_*,2}(k_*;\pkt)+p_{n_*,1}(k_*;\pkt)\\ \ri(\pa_{x_1}+\ri k_{*,1})\pa_{k_2}p_{n_*,1}(k_*;\pkt) -\kappa \pa_{k_2}p_{n_*,3}(k_*;\pkt)\\ -\kappa \pa_{k_2}p_{n_*,2}(k_*;\pkt)-2\ri(\pa_{x_2}+\ri k_{*,2}) \pa_{k_2}p_{n_*,3}(k_*;\pkt)+p_{n_*,3}(k_*;\pkt) \espm,p_{n_*}(k_*;\pkt) \right\rangle, \ee and \begin{align}\label{E:om_deriv_k12} \barr{rl} \pa_{k_1,k_2}^2\omega_{n_*}(k_*) = &\frac{1}{2\omega_*} \left\langle \bspm -2\ri(\pa_{x_2}+\ri k_{*,2})\pa_{k_1}p_{n_*,1}(k_*;\pkt) +\ri(\pa_{x_1}+\ri k_{*,1})\pa_{k_1}p_{n_*,2}(k_*;\pkt)+\ri(\pa_{x_2} +\ri k_{*,2})\pa_{k_2}p_{n_*,2}(k_*;\pkt)\\ \ri(\pa_{x_1}+\ri k_{*,1})\pa_{k_1}p_{n_*,1}(k_*;\pkt)+\ri(\pa_{x_2} +\ri k_{*,2})\pa_{k_2}p_{n_*,1}(k_*;\pkt)-2\ri(\pa_{x_1}+\ri k_{*,1}) \pa_{k_2}p_{n_*,2}(k_*;\pkt)\\ -2\ri(\pa_{x_2}+\ri k_{*,2})\pa_{k_1}p_{n_*,3}(k_*;\pkt)-2\ri(\pa_{x_1} +\ri k_{*,1})\pa_{k_2}p_{n_*,3}(k_*;\pkt) \espm \right.\\ & \qquad \left. {}+ \bspm -\kappa \pa_{k_2}p_{n_*,3}(k_*;\pkt)-p_{n_*,2}(k_*;\pkt)\\ -\kappa \pa_{k_1}p_{n_*,3}(k_*;\pkt)-p_{n_*,1}(k_*;\pkt)\\ -\kappa \big(\pa_{k_1}p_{n_*,2}(k_*;\pkt)+\pa_{k_2}p_{n_*,1}(k_*;\pkt)\big) \espm, p_{n_*}(k_*;\pkt) \right\rangle.
\earr \end{align} \subsection{Symmetries of the Band Structure and the Bloch Waves} Symmetries in the refractive index function $\eta$ yield symmetries in the band structure and among Bloch waves. We restrict our attention to the cases of discrete rotational and axial reflection symmetry, which are relevant for the example we present below. The results of this section will be important when determining properties of the coefficients of coupled mode equations in Section \ref{S:CME_examples}. \subsubsection{Rotational symmetry}\label{S:rot_sym} Assume that the photonic crystal satisfies the rotational symmetry \beq\label{E:material_rot_sym} \eta(x) = \eta(r_\alpha(x)) \qquad \text{for all }x\in \R^2 \ee for some $\alpha \in (-\pi,\pi]$ with the rotation $r_\alpha$ defined by \[ r_\alpha(x) = \bspm \cos(\alpha)x_1 -\sin(\alpha) x_2\\ \sin(\alpha)x_1 +\cos(\alpha) x_2 \espm. \] Below we use the notation $r_\alpha(v) = (\cos(\alpha)v_1 -\sin(\alpha) v_2, \sin(\alpha)v_1 +\cos(\alpha) v_2)^T$ if $v$ is a two-dimensional vector $v\in \C^2$ and $r_\alpha(v) = (\cos(\alpha)v_1 -\sin(\alpha) v_2, \sin(\alpha)v_1 +\cos(\alpha) v_2, v_3)^T$ if $v$ is a three-dimensional vector $v\in \C^3$. The symmetry \eqref{E:material_rot_sym} implies a symmetry of the Rayleigh quotient corresponding to the eigenvalue problem \eqref{E:shifted} and thus a symmetry of the band structure. In detail, for $k\in \B$ we have \[ \omega_n^2(k)=\min_{\substack{V\subset H^{\text{curl}}_{\text{per}}(U) \\ \dim V=n}} \quad \max_{w\in V,\,w\neq 0} \frac{\int_U |(\nabla +\ri k')\times w(x)|^2\dd x}{\int_U\eta(x) |w(x)|^2\dd x}, \] and the extremum is attained at $w=p_n(k;\pkt)$.
Due to the relation $$ \left((\nabla+\ri r_\alpha(k'))\times f\right)(r_{\alpha}(x)) = r_{\alpha}\left[(\nabla +\ri k')\times r_{-\alpha} \left(f(r_{\alpha}(x))\right)\right] \quad \text{for all smooth }f:\R^2\to\R^3 $$ we get $$ \int_U |(\nabla +\ri r_\alpha(k'))\times w(x)|^2\dd x = \int_U |(\nabla +\ri k')\times r_{-\alpha} \left(w(r_{\alpha}(x))\right)|^2\dd x, $$ and symmetry \eqref{E:material_rot_sym} yields $$ \int_U \eta(x) |w(x)|^2\dd x = \int_U \eta(x) |r_{-\alpha}(w(r_{\alpha}(x))) |^2\dd x. $$ As a result we obtain that \beq\label{E:rot_sym_bands} \omega_n(k) = \omega_n(r_\alpha(k)) \qquad \text{for all }n \in \N \text{ and all }k\in\B. \ee If $\omega_n(k)$ has geometric multiplicity one as an eigenvalue of \eqref{E:shifted}, we also have a symmetry of the corresponding Bloch functions, namely \beq\label{E:rot_sym_Bloch} p_n(r_\alpha(k);x) = e^{\ri a} r_{-\alpha} \left(p_n(k;r_{\alpha}(x))\right) \qquad \text{for all } n \in \N \text{ and some } a=a(n)\in \R . \ee Note that a renormalization of $p_n(r_\alpha(k);x)$, in order to obtain $a=0$ in \eqref{E:rot_sym_Bloch}, is in general impossible when $r_\alpha(k)\doteq k$, where $k\doteq l$ reads ``$k$ congruent to $l$'' and means $k=l+K$ for some $K\in\Lam^*$. This is because in this case $p_n(r_\alpha(k);x)$ and $p_n(k;x)$ are related by \eqref{E:k-per} and a renormalization of the left hand side of \eqref{E:rot_sym_Bloch} would affect the right hand side in the same way. When $r_\alpha(k)$ is not congruent to $k$, e.g.{} when $k\in \operatorname{int}(\B)\setminus \{0\}$, then one can set $a=0$ in \eqref{E:rot_sym_Bloch}. From the symmetry \eqref{E:rot_sym_bands} we can deduce a symmetry of the second derivatives of $\omega_n$.
Using the identity $\partial_{k}\omega_n(k) =\partial_{k}(\omega_n(r_\alpha(k))) =(r_\alpha)^T(\partial_{k}\omega_n)(r_\alpha(k))$, we get by further differentiation \beq\label{E:der2_rot_sym} \begin{pmatrix} \partial_{k_1}^2\omega_n(r_\alpha(k))\\ \partial_{k_2}^2\omega_n(r_\alpha(k))\\ \partial_{k_1,k_2}^2\omega_n(r_\alpha(k)) \end{pmatrix} =\begin{pmatrix} \cos^2(\alpha) & \sin^2(\alpha) & -\sin(2\alpha)\\ \sin^2(\alpha)&\cos^2(\alpha)&\sin(2\alpha)\\ \tfrac{1}{2}\sin(2\alpha)&-\tfrac{1}{2}\sin(2\alpha)&\cos(2\alpha) \end{pmatrix} \begin{pmatrix}\partial_{k_1}^2\omega_n(k)\\ \partial_{k_2}^2\omega_n(k)\\ \partial_{k_1,k_2}^2\omega_n(k) \end{pmatrix} \ee for all $k\in \B$ and $n\in \N$. \subsubsection{Reflection symmetry}\label{S:refl_sym} If the photonic crystal satisfies the reflection symmetry \beq\label{E:material_refl_sym} \eta(x) = \eta(S_1(x)) \qquad \text{for all }x\in \R^2, \text{ where } S_1(x) = (-x_1,x_2)^T, \ee then similarly to Section \ref{S:rot_sym} we have \beq\label{E:refl_sym_bands} \omega_n(k) = \omega_n(-k_1,k_2) \qquad \text{for all } k\in \B \text{ and } n \in \N. \ee Again, if $\omega_n(k)$ has geometric multiplicity one as an eigenvalue of \eqref{E:shifted}, then \beq\label{E:refl_sym_Bloch} p_n(S_1(k);x) = e^{\ri a} S_1\left(p_n(k;S_1(x))\right) \qquad\text{for all } n \in \N \text{ and some } a=a(n)\in \R, \ee where $S_1(v)=(-v_1,v_2,v_3)^T$ for $v\in \C^3$. Just as above, unless $k \doteq S_1(k)$, we can set $a = 0$ in \eqref{E:refl_sym_Bloch}. The symmetry \eqref{E:refl_sym_bands} implies \beq\label{E:der2_refl_sym} \begin{split} &\partial_{k_1}^2\omega_n(k)=(\partial_{k_1}^2\omega_n)(-k_1,k_2), \quad\partial_{k_2}^2\omega_n(k)=(\partial_{k_2}^2\omega_n)(-k_1,k_2),\\ &\partial_{k_1,k_2}^2\omega_n(k) =-(\partial_{k_1,k_2}^2\omega_n)(-k_1,k_2) \end{split} \ee for all $k\in \B$ and $n\in \N$. An analogous discussion, of course, applies for the reflection symmetry $\eta(x) = \eta(S_2(x))$ for all $x\in \R^2$, where $S_2(x) = (x_1,-x_2)^T$.
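The transformation law \eqref{E:der2_rot_sym} is the chain rule for a band satisfying $\omega_n(k)=\omega_n(r_\alpha(k))$, written out in the components $(\partial_{k_1}^2,\partial_{k_2}^2,\partial_{k_1,k_2}^2)$. The following sketch checks it with finite differences on a group-averaged test function with exact $\pi/3$-rotation symmetry; the test function and all names are illustrative choices, not taken from the text.

```python
import numpy as np

alpha = np.pi / 3
c, s = np.cos(alpha), np.sin(alpha)
R = np.array([[c, -s], [s, c]])                 # the rotation r_alpha

def w(k):
    # average an arbitrary smooth function over the 6-element rotation group,
    # which enforces w(k) = w(R k) exactly (up to roundoff)
    g = lambda q: np.cos(q[0] + 0.3 * q[1] ** 2) + 0.1 * q[0] * q[1]
    q, total = np.asarray(k, float), 0.0
    for _ in range(6):
        total += g(q)
        q = R @ q
    return total / 6

def hessian(f, k, h=1e-4):
    # second-order central differences for the 2x2 Hessian
    k, H = np.asarray(k, float), np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            ei, ej = np.eye(2)[i] * h, np.eye(2)[j] * h
            H[i, j] = (f(k + ei + ej) - f(k + ei - ej)
                       - f(k - ei + ej) + f(k - ei - ej)) / (4 * h * h)
    return H

k = np.array([0.37, -0.21])
H, Hr = hessian(w, k), hessian(w, R @ k)
T = np.array([[c**2, s**2, -np.sin(2*alpha)],            # matrix from the text
              [s**2, c**2,  np.sin(2*alpha)],
              [0.5*np.sin(2*alpha), -0.5*np.sin(2*alpha), np.cos(2*alpha)]])
lhs = np.array([Hr[0, 0], Hr[1, 1], Hr[0, 1]])
rhs = T @ np.array([H[0, 0], H[1, 1], H[0, 1]])
print(np.max(np.abs(lhs - rhs)))    # agreement up to finite-difference error
```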
One then obtains \beq\label{E:der2_refl_symB} \begin{split} &\partial_{k_1}^2\omega_n(k)=(\partial_{k_1}^2\omega_n)(k_1,-k_2), \quad \partial_{k_2}^2\omega_n(k)=(\partial_{k_2}^2\omega_n)(k_1,-k_2),\\ &\partial_{k_1,k_2}^2\omega_n(k) =-(\partial_{k_1,k_2}^2\omega_n)(k_1,-k_2) \end{split} \ee for all $k\in \B$ and $n\in \N$, and if $\omega_n(k)$ has geometric multiplicity one as an eigenvalue of \eqref{E:shifted}, then \beq\label{E:refl2_sym_Bloch} p_n(S_2(k);x) = e^{\ri a} S_2\left(p_n(k;S_2(x))\right) \quad\text{for all } n \in \N \text{ and some } a=a(n)\in \R . \ee \subsubsection{Combination of rotational and reflection symmetries} If both the reflection symmetry \eqref{E:material_refl_sym} and the rotational symmetry \eqref{E:material_rot_sym} for some $\alpha \in (-\pi,\pi]$, $|\alpha|\neq \pi/2$, hold, then for $k$ along the rays with angles $\pi/2-\alpha/2$ and $-(\pi/2+\alpha/2)$ the mixed derivative $\partial_{k_1,k_2}^2\omega_n(k)$ can be expressed in terms of $\partial_{k_1}^2\omega_n(k)$ and $\partial_{k_2}^2\omega_n(k)$. This is because for $k$ along these rays we have $(-k_1,k_2)=r_{\alpha}(k)$ or $(k_1,-k_2)=r_{\alpha}(k)$, so that both \eqref{E:der2_rot_sym} and \eqref{E:der2_refl_sym} or \eqref{E:der2_refl_symB} apply. In detail, suppose $$ (-k_1,k_2)=r_{\alpha}(k), \text{ i.e.{} } k =|k|e^{\ri (\pi/2-\alpha/2)} \quad\text{or}\quad k =|k|e^{-\ri (\pi/2+\alpha/2)}=-|k|e^{\ri (\pi/2-\alpha/2)}. $$ Then it follows that \[ \partial_{k_1}^2\omega_n(k) =(\partial_{k_1}^2\omega_n)(-k_1,k_2) =\cos^2(\alpha)\partial_{k_1}^2\omega_n(k) -\sin(2\alpha)\partial_{k_1,k_2}^2\omega_n(k) +\sin^2(\alpha)\partial_{k_2}^2\omega_n(k), \] where the first equality holds due to \eqref{E:der2_refl_sym} and the second due to \eqref{E:der2_rot_sym}.
As a result, for $\alpha\in (-\pi,\pi]$, $|\alpha| \neq \pi/2$, and $k=\pm |k|e^{\ri (\pi/2-\alpha/2)}$ we get \beq\label{E:mix_der_slave} \partial_{k_1,k_2}^2\omega_n(k) =\frac{1}{2}\tan(\alpha)\left(\partial_{k_2}^2\omega_n(k) -\partial_{k_1}^2\omega_n(k)\right). \ee Identity \eqref{E:mix_der_slave} applies also in the case when the $S_2$ reflection symmetry and the rotational symmetry \eqref{E:material_rot_sym} are both present for some $\alpha \in (-\pi,\pi]$, $|\alpha|\neq \pi/2$. Then \eqref{E:mix_der_slave} holds for $k$ that satisfy $$ (k_1,-k_2)=r_{\alpha}(k), \text{ i.e.{} } k=\pm |k|e^{-\ri \alpha/2}. $$ \subsection{Example: Hexagonal Lattice with a Circular Material Structure\label{S:hex_struct}} As an example we consider the hexagonal lattice in the $(x_1,x_2)$-plane generated by the vectors $$ a^{(1)}= a_0\bspm\cos(\pi/3)\\ \sin(\pi/3)\espm \quad \text{and} \quad a^{(2)} =a_0\bspm 1\\0\espm \quad \text{with} \quad a_0>0. $$ In the Wigner--Seitz cell $U$ the material structure is given by the annulus centered at the lattice point in the origin and having outer and inner radii $a_0/2$ and $a_0(1.31/4.9)$, respectively. The material properties are given by $\eta(x) = 2.1025$ for $a_0(1.31/4.9)\leq |x|\leq a_0/2 $ and $\eta(x)=1$ otherwise. This is the same as the crystal used in \cite{AM03}, where the corresponding band structure was also computed. One choice of vectors generating the reciprocal lattice is $$ b^{(1)}=\tfrac{2\pi}{J_{12}} \bspm a^{(2)}_2\\ -a^{(2)}_1\espm= \tfrac{2\pi}{a_0\sin(\pi/3)} \bspm 0 \\1\espm, \quad b^{(2)} =\tfrac{2\pi}{J_{12}} \bspm -a^{(1)}_2\\ a^{(1)}_1\espm= \tfrac{2\pi}{a_0}\bspm 1 \\-\cot(\pi/3) \espm, $$ where $J_{12}=\det(a^{(1)},a^{(2)}) = a^{(1)}_1a^{(2)}_2-a^{(1)}_2a^{(2)}_1$.
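As a quick sanity check of the reciprocal vectors $b^{(1)},b^{(2)}$, one can verify the defining duality relation $a^{(i)}\cdot b^{(j)}=2\pi\delta_{ij}$ numerically (illustrative code; the value of $a_0$ is arbitrary):

```python
import numpy as np

a0 = 1.0
a1 = a0 * np.array([np.cos(np.pi/3), np.sin(np.pi/3)])
a2 = a0 * np.array([1.0, 0.0])
J12 = a1[0]*a2[1] - a1[1]*a2[0]                 # det(a^(1), a^(2))
b1 = 2*np.pi/J12 * np.array([ a2[1], -a2[0]])   # formulas from the text
b2 = 2*np.pi/J12 * np.array([-a1[1],  a1[0]])
G = np.array([[a1 @ b1, a1 @ b2],
              [a2 @ b1, a2 @ b2]]) / (2*np.pi)
print(np.round(G, 12))                           # identity matrix
```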
These vectors have been obtained via the formulas $\tilde{b}^{(1)}=2\pi \tfrac{\tilde{a}^{(2)}\times \tilde{a}^{(3)}}{J_{12}}$ and $\tilde{b}^{(2)}=2\pi \tfrac{\tilde{a}^{(3)}\times \tilde{a}^{(1)}}{J_{12}}$, where $\tilde{a}^{(j)} = ({a^{(j)}}^T, 0)^T$, $\tilde{b}^{(j)} = ({b^{(j)}}^T, 0)^T$ for $j\in\{1,2\}$ and $\tilde{a}^{(3)} = (0,0,1)^T$, cf.{} \cite[Ch.~5]{Ash_Mer_1976}. Figure \ref{F:crystal} shows the crystal geometry and the corresponding Brillouin zone. In this case the rotational symmetry \eqref{E:material_rot_sym} with $\alpha = \pi/3$, the reflection symmetry \eqref{E:material_refl_sym}, and the analogous symmetry $S_2$ all hold. The band structure and Bloch waves can therefore be recovered via \eqref{E:rot_sym_bands}, \eqref{E:rot_sym_Bloch} and \eqref{E:refl_sym_bands}, \eqref{E:refl_sym_Bloch} from the irreducible Brillouin zone $\B_0$ in Figure \ref{F:crystal}, i.e.{} the triangle with vertices $\Gamma,M,K$, where $\Gamma=(0,0)^T$, $M = \tfrac{1}{2}b^{(2)}$, and $K=\tfrac{1}{\sqrt{3}}|b^{(2)}|(1,0)^T$. These points are called \emph{high symmetry points}. \begin{figure}[h!]
\begin{center} \begin{minipage}[c]{0.49\textwidth} \centering (a)\\ \psset{unit=2.9cm} \pspicture(-1.4,-1.4)(1.7,1.7) \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=lightgray](0,0){0.5} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=white](0,0){0.36} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=lightgray](0.5,0.866){0.5} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=white](0.5,0.866){0.36} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=lightgray](-0.5,0.866){0.5} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=white](-0.5,0.866){0.36} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=lightgray](-0.5,-0.866){0.5} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=white](-0.5,-0.866){0.36} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=lightgray](0.5,-0.866){0.5} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=white](0.5,-0.866){0.36} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=lightgray](1,0){0.5} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=white](1,0){0.36} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=lightgray](-1,0){0.5} \pscircle[linecolor=lightgray,fillstyle=solid,fillcolor=white](-1,0){0.36} \psline(0.5,0.2887)(0,0.5774)\psline(0,0.5774)(-0.5,0.2887) \psline(-0.5,0.2887)(-0.5,-0.2887)\psline(-0.5,-0.2887)(0,-0.5774) \psline(0,-0.5774)(0.5,-0.2887)\psline(0.5,-0.2887)(0.5,0.2887) \psdots*(0,0)(0.5,0.866)(-0.5,0.866)(-1,0)(-0.5,-0.866)(0.5,-0.866)(1,0) \psline{->}(0,0)(1,0) \psline{->}(0,0)(0.5,0.866) \psline[linewidth=0.5pt,linestyle=dashed](0,0)(-0.433,-0.25) \psline[linewidth=0.5pt,linestyle=dashed](0,0)(0.3118,-0.18) \put(-0.17,-0.2){$\tfrac{a_0}{2}$} \put(0.1,-0.18){$r_1$} \put(0.74,0.06){$a^{(2)}$} \put(0.41,0.59){$a^{(1)}$} \put(-0.59,0.26){$U$} \endpspicture \end{minipage} \begin{minipage}[c]{0.49\textwidth} \centering (b)\\ \bigskip \bigskip \bigskip \psset{unit=2.9cm} \pspicture(-1,-1)(1,1) 
\psdots*(0,0)(0.866,0.5)(0.866,-0.5)(-0.866,0.5)(-0.866,-0.5)(0,1)(0,-1) \put(0.21,0.26){$\B$} \psline(0.5774,0)(0.2887,0.5) \psline(0.2887,0.5)(-0.2887,0.5) \psline[linestyle=dashed](-0.2887,0.5)(-0.5774,0) \psline[linestyle=dashed](-0.5774,0)(-0.2887,-0.5) \psline[linestyle=dashed](-0.2887,-0.5)(0.2887,-0.5) \psline(0.2887,-0.5)(0.5774,0) \psline{->}(0,0)(0,1) \psline{->}(0,0)(0.866,-0.5) \put(0.04,0.7){$b^{(1)}$} \put(0.8,-0.38){$b^{(2)}$} \put(-0.12,0){$\Gamma$}\put(0.62,0){$K$}\put(0.49,-0.26){$M$} \pspolygon[fillstyle=solid,fillcolor=lightgray](0,0)(0.433,-0.25)(0.5774,0) \put(0.3,-0.12){$\B_0$} \endpspicture \end{minipage} \end{center} \caption{\label{F:crystal}\small (a) Hexagonal lattice with a cylindrical material structure, (b) the corresponding first Brillouin zone $\B$ with a shaded irreducible Brillouin zone $\B_0$. Note that the Brillouin zone has been scaled to fit the figure.} \end{figure} Next we provide some specific information about the values of the second derivatives of $\omega_n$ at the high symmetry points of this Brillouin zone using symmetries \eqref{E:der2_rot_sym}, \eqref{E:der2_refl_sym}, and \eqref{E:der2_refl_symB}. This information will be used in Section \ref{S:CME_examples}. Identity \eqref{E:der2_rot_sym} with $k=0$ and $\alpha=\pi/3$ yields \begin{align}\label{E:der2_Gamma} \partial_{k_2}^2\omega_n(\Gamma)=\partial_{k_1}^2\omega_n(\Gamma) \quad\text{and }\quad \partial_{k_1,k_2}^2\omega_n(\Gamma)=0 \qquad \text{for all }n\in \N. \end{align} Symmetry \eqref{E:der2_refl_symB} implies \beq\label{E:mix_K} \partial_{k_1,k_2}^2\omega_n(K)=0 \qquad \text{for all }n\in \N. \ee At $k=r_{2\pi/3}(M)$ $(=\tfrac{1}{2}b^{(1)})$ we have $k_1=0$ so that \eqref{E:der2_refl_sym} implies \beq\label{E:mix_Mprime} \partial_{k_1,k_2}^2\omega_n(r_{2\pi/3}(M))=0 \qquad \text{for all }n\in \N.
\ee Relation \eqref{E:mix_der_slave} then yields \beq\label{E:2nd_der_Mprime} \partial_{k_2}^2\omega_n(r_{2\pi/3}(M)) = \partial_{k_1}^2\omega_n(r_{2\pi/3}(M)) \qquad \text{for all }n\in \N. \ee Applying now \eqref{E:der2_rot_sym} with $\alpha=2\pi/3$, we get \beq\label{E:M_der} \partial_{k_1}^2\omega_n(M) = \partial_{k_1}^2\omega_n(r_{2\pi/3}(M)), \ \partial_{k_2}^2\omega_n(M) = \partial_{k_1}^2\omega_n(r_{2\pi/3}(M)), \text{ and } \partial_{k_1,k_2}^2\omega_n(M)=0 \ee for all $n\in\N$. Because $r_{\pi/3}(M)$ is obtained from $r_{2\pi/3}(M)$ by the reflection $(k_1,k_2)\rightarrow (k_1,-k_2)$, we also have \begin{align*} \partial_{k_1}^2\omega_n(M) = \partial_{k_1}^2\omega_n(r_{\pi/3}(M)), \ \partial_{k_2}^2\omega_n(M) = \partial_{k_1}^2\omega_n(r_{\pi/3}(M)), \text{ and } \partial_{k_1,k_2}^2\omega_n(r_{\pi/3}(M))=0. \end{align*} As an example we took the configuration from \cite{AM03} as described in Section \ref{S:hex_struct}. The computations were done with a finite element Maxwell solver that uses lowest-order N\'ed\'elec elements \cite{Monk:2003}. These elements were implemented in the software deal.II \cite{BHK:2007}. The eigenvalue problems were solved by a Krylov--Schur method.\footnotemark \footnotetext{SLEPc package (\texttt{http://www.grycap.upv.es/slepc/})} We computed the eigenvalues $\{\omega_n(k)\}_{n=1}^{14}$ and corresponding eigenfunctions $\{p_n(k;\pkt)\}_{n=1}^{14}$ for each vertex $k$ in a discretization of the Brillouin zone $\B$. The error level of these computations is about $10^{-3}$ in the curl-norm; it is estimated from a series of computations on a sequence of nested grids. In Figure \ref{F:band_str} we present the numerically computed band structure over $\partial\B_0$ (following tradition) for the crystal described above and for $\kappa=5(2\pi/a_0)$. Here, $\partial\B_0$ is represented by 128 $k$-points. It has, however, been checked that the observed gaps do not get narrower in the interior of $\B$.
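Operationally, a gap in such sampled band data is an interval between $\max_k\omega_n(k)$ and $\min_k\omega_{n+1}(k)$ whenever the former is smaller. The following sketch illustrates this bookkeeping on synthetic two-band data (the data and names are invented for illustration, not the actual computed bands):

```python
import numpy as np

def find_gaps(bands):
    # bands: rows = sampled k-points, columns = band index (ascending)
    lo = bands.min(axis=0)                    # min_k omega_n(k) per band
    hi = bands.max(axis=0)                    # max_k omega_n(k) per band
    gaps = []
    for n in range(bands.shape[1] - 1):
        if hi[n] < lo[n + 1]:                 # open interval between bands n, n+1
            gaps.append((float(hi[n]), float(lo[n + 1])))
    return gaps

# synthetic two-band example with a single gap
ks = np.linspace(-np.pi, np.pi, 65)
bands = np.column_stack([np.abs(np.sin(ks / 2)),       # band 1, range [0, 1]
                         2.0 + 0.3 * np.cos(ks)])      # band 2, range [1.7, 2.3]
gaps = find_gaps(bands)
print(gaps)
```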
Three band gaps appear on the positive half of the $\omega$ axis, one between $0$ and $\omega_1$, another one between $\omega_6$ and $\omega_7$ and the last one between $\omega_{12}$ and $\omega_{13}$. To get the extremal points at the band edges we used a bisection method in $k$, initialised with data obtained from the band structure computation. The approximations to first and second order derivatives of $k\mapsto\omega_{n}(k)$ at the extremal values were obtained by first projecting $k\mapsto\omega_{n}(k)$ onto a locally quadratic finite element space and then taking mean values of the derivatives around vertices. \begin{figure}[h!] \begin{center} \epsfig{figure = FIGURES/band_structure2.eps,scale=0.62} \caption{\label{F:band_str} \small Band structure $k\mapsto\omega_n(k)$ for the described hexagonal lattice with the cylindrical material structure: the first 14 eigenvalues along $\partial\B_0$. Three band gaps appear on the positive half axis $\omega\geq 0$: one between $0$ and $\omega_1$, one between $\omega_6$ and $\omega_7$, and one between $\omega_{12}$ and $\omega_{13}$. Gap edges are marked by $s_1, \ldots, s_5$.} \end{center} \end{figure} \section{Derivation of Coupled Mode Equations for Gap Solitons near Band Edges}\label{S:CME_deriv} \subsection{Slowly varying envelope approach\label{S:slowly envelope}} We seek gap solitons $E$ of \eqref{E:NL_Maxw}. Afterward, the full electric field can be recovered via \eqref{E:field_form}.
In the following let us assume that \begin{itemize} \item[(A1)] the spectrum $\{\omega_n(k):k\in\B,\,n\in\N\}$ possesses a gap, \item[(A2)] one of the gap edges, denoted by $\omega_*$, is attained at precisely $N\in\N$ points $k^{(1)},\ldots,k^{(N)}\in \B$ by bands with indices $n_1,\ldots,n_N$, respectively, where the $k$-points and/or band indices are not necessarily all distinct, \item[(A3)] for each $j\in \{1, \ldots, N\}$ the band $\omega_{n_j}$ is twice continuously differentiable in $k$ at $k=k^{(j)}$, \item[(A4)] $\pa_k^2\omega_{n_j}(k^{(j)})$, the Hessian of $\omega_{n_j}$ at $k=k^{(j)}$, is (positive or negative) definite for each $j\in \{1, \ldots, N\}$. \end{itemize} The smoothness assumption (A3) is needed to justify our Taylor expansions of $\omega_{n_j}$ near $k^{(j)}$. Bands $\omega_n$ are generally only Lipschitz continuous due to possible transversal intersections of bands and their numbering according to size \cite{MP_1996}. Away from points of intersection or tangency, bands can be shown to be analytic in $k$ by standard perturbation theory \cite{Kato_1995}. The simplest situation when (A3) is satisfied is thus when each band $\omega_{n_j}$ is isolated near $k^{(j)}$, which is equivalent to $n_1=\ldots=n_N$ due to our ordering of bands according to size of $\omega_n(k)$ at each $k$. Note that since each band $\omega_{n_j}$ has an extremum at $k=k^{(j)}$, we have $\pa_{k_1}\omega_{n_j}(k^{(j)})=\pa_{k_2}\omega_{n_j}(k^{(j)})=0$ for $j\in \{1,\ldots,N\}$. Assumption (A4) then guarantees that the leading order terms in the Taylor expansion of the band $\omega_{n_j}$ around $k=k^{(j)}$ are in fact quadratic.
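Since the Hessian in (A4) is a symmetric $2\times2$ matrix, definiteness amounts to its two eigenvalues having the same sign. A minimal helper for checking this on numerically estimated Hessian entries (the function name and the sample values are made up for illustration):

```python
import numpy as np

def classify_extremum(H):
    # eigenvalues of the symmetric 2x2 Hessian decide definiteness
    ev = np.linalg.eigvalsh(np.asarray(H, float))
    if np.all(ev > 0):
        return "minimum (positive definite)"
    if np.all(ev < 0):
        return "maximum (negative definite)"
    return "saddle or degenerate -- (A4) fails"

print(classify_extremum([[2.0, 0.3], [0.3, 1.5]]))   # minimum
print(classify_extremum([[1.0, 2.0], [2.0, 1.0]]))   # saddle
```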
The asymptotic expansion for the electric field $E$ of gap solitons with $\omega$ in the gap and in the vicinity of the edge $\omega_*$ is expected \cite{DPS09,DU09} to be of the following \emph{slowly varying envelope} form \begin{gather}\label{E:ansatz_phys} \begin{split} &\eps\sum_{j=1}^NA_j(y)u_{n_j}(k^{(j)};x) +\eps^2\psi^{(1)}(x) +\eps^3\psi^{(2)}(x)+\mathcal{O}(\eps^4),\\ &\omega = \omega_*+\Omega\eps^2, \quad y = \eps x, \quad 0<\eps \ll 1, \end{split} \end{gather} where the $A_j:\R^2\to\C$ are fast decaying smooth functions and $\Omega = \pm 1$. The sign of $\Omega$ is determined by the condition that $\omega_*+\eps^2 \Omega$ lies in the gap. Performing a multiple scales analysis in the physical variables $(x,y)$ is impossible. The reason is that in order to solve the resulting equations at each order of the expansion, one has to ensure that inhomogeneous terms are orthogonal to the kernel of $L-\omega_*^2\eta$, i.e., to $u_{n_j}(k^{(j)};\pkt)$ for all $j \in \{1, \ldots, N\}$. This orthogonality needs to be checked on the common period of those $u_{n_j}$. If, however, one of the components of $k^{(j)}$ is irrational, the corresponding $u_{n_j}$ is not even periodic and this approach fails similarly to \cite{DU09}. We therefore perform the asymptotic analysis in Bloch variables where all functions are $U$-periodic in $x$ and orthogonality conditions are always posed over $U$.
Let us define the \emph{Bloch transform} $\CT:E\mapsto\wt{E}$ and its inverse, cf.{} \cite[Ch.~7]{BCM_2001}, by \begin{gather} \begin{split}\notag \wt{E}(k;x) &= (\CT E)(k;x) = \sum_{K\in\Lam^*} e^{\ri K\cdot x}\wh{E}(k+K), \quad E(x) = (\CT^{-1} \wt{E})(x) = \int_{\B}e^{\ri k\cdot x} \wt{E}(k;x)\dd k \end{split} \end{gather} for all $x,k\in\R^2$, where $\wh{E}$ denotes the \emph{Fourier transform\/} of $E$ \begin{gather}\label{E:FT} \notag \wh{E}(k):= (\CF E)(k) :=\frac{1}{(2\pi)^2}\int_{\R^2}E(x)e^{-\ri k\cdot x}\dd x, \quad E(x)=(\CF^{-1}\wh{E})(x) :=\int_{\R^2}\wh{E}(k)e^{\ri k\cdot x}\dd k. \end{gather} By definition we have the following properties of the Bloch transform \begin{alignat}{2} \notag \wt{E}(k;x+R) &= \wt{E}(k;x) &&\qquad\text{for all }R \in \Lam, \\ \label{E:Bloch_B_per_in_k} \wt{E}(k+K;x) &= e^{-\ri K\cdot x}\wt{E}(k;x) &&\qquad\text{for all }K \in \Lam^*. \end{alignat} Multiplication of two functions $f,g$ in physical space corresponds to convolution in Bloch space, i.e., \begin{gather} \notag \big(\CT (fg)\big)(k;x) = \int_{\B}\wt{f}(k-l;x)\wt{g}(l;x)\dd l =: \big(\wt{f} *_{\Bsm}\wt{g}\big)(k;x), \end{gather} where \eqref{E:Bloch_B_per_in_k} is used if $k-l\notin\B$. In particular, if $x\mapsto f(x)$ is $U$-periodic, then \[ \big(\CT (fg)\big)(k;x) = f(x)(\CT g)(k;x). \] This can be easily checked by writing $f$ in the form of a Fourier series, i.e.{} $f(x) = \sum_{K\in\Lam^*}c_K e^{\ri K\cdot x}$, cf.{} \cite[Ch.~7]{BCM_2001}.
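The product rule $\CT(fg)=f\,\CT g$ for periodic $f$ can also be checked numerically straight from the defining series. The sketch below does this in one dimension (lattice $\Z$, dual lattice $2\pi\Z$) with a Gaussian, whose Fourier transform in the convention above is known in closed form; the Gaussian, the factor $f$, and the truncation level are our illustrative choices.

```python
import numpy as np

# FT of exp(-x^2/2) in the convention  Ehat(k) = (1/2pi) * int E e^{-ikx} dx
ghat = lambda k: np.exp(-k**2 / 2) / np.sqrt(2*np.pi)

def bloch(Ehat, k, x, K_max=40):
    # (T E)(k;x) = sum_K e^{iKx} Ehat(k+K),  K in 2*pi*Z (truncated sum)
    K = 2*np.pi*np.arange(-K_max, K_max + 1)
    return np.sum(np.exp(1j*K*x) * Ehat(k + K))

f = lambda x: np.cos(2*np.pi*x)                           # 1-periodic factor
# FT of f*g by the modulation rule: hat(e^{iax} g)(k) = ghat(k - a)
fg_hat = lambda k: 0.5*(ghat(k - 2*np.pi) + ghat(k + 2*np.pi))

k, x = 0.7, 0.31
lhs = bloch(fg_hat, k, x)        # T(fg)(k;x)
rhs = f(x) * bloch(ghat, k, x)   # f(x) * (T g)(k;x)
print(abs(lhs - rhs))            # truncation/roundoff error only
```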
Exploiting this observation and applying the Bloch transform to \eqref{E:NL_Maxw} leads to \beq \notag \left(\wt{L}(k)-\omega^2\eta(x)\right)\wt{E}(k;x) = \omega^2 \wt{P}_\text{NL}(k;x)\quad\text{for all }x,k\in\R^2, \ee where \begin{align*} \wt{P}_\text{NL}(k;\pkt) = \chici\CT\big(2|E|^2\,E+E\cdot E \,\barl{E}\big) = \chici\big(2(E\,.\!*_\Bsm\barl{E})*_\Bsm E +(E\,.\!*_\Bsm E)*_\Bsm\barl{E}\big), \end{align*} with $f\,.\!*_\Bsm g:=\sum_jf_j*_\Bsm g_j$ for vector valued $f,g$, while $f*_\Bsm g$ is understood componentwise for scalar $f$ and vector valued $g$. By the definitions of the Bloch and Fourier transforms one immediately finds \[ \CT\big(A_j(\eps\pkt)e^{\ri k^{(j)}\cdot(\pkt)}\big)(k;x) = \eps^{-2}\sum_{K\in\Lam^*}\wh{A}_j \left(\tfrac1{\eps}(k-k^{(j)}+K)\right)e^{\ri K\cdot x}, \] so that the asymptotic ansatz \eqref{E:ansatz_phys} is transformed to \beq\label{E:ansatz_Bloch} \eps\wt{\psi}^{(0)}(k;x) +\eps^2\wt{\psi}^{(1)}(k;x)+\eps^3\wt{\psi}^{(2)}(k;x) +\mathcal{O}(\eps^4), \ee where $$ \wt{\psi}^{(0)}(k;x)=\eps^{-2}\sum_{j=1}^N \sum_{K \in \Lam^*}\wh{A}_j\left(\tfrac1{\eps}(k-k^{(j)}+K)\right) e^{\ri K \cdot x}p_{n_j}(k^{(j)};x). $$ Similarly to \cite{DU09} and \cite{DU_err11}, due to the fast decay of the Bloch transform of $A_j$ in $k$, we approximate $\wh{A}_j\left(\tfrac1{\eps}(k-k^{(j)}+K)\right)$ by $\chi_{D_{\eps^r}}^{}\left(k-k^{(j)}+K\right) \wh{A}_j\left(\tfrac1{\eps}(k-k^{(j)}+K)\right)$ for some $r\in(0,1)$, where $\chi_{S}$ is the indicator function of a set $S$ and $D_\delta:=B_\delta(0)$ with $B_\delta(z) :=\{k\in\R^2:|k-z|<\delta\}$ for $\delta>0$, $z\in\R^2$. We will therefore introduce the approximation \[ \wt{E}(k;x) = \eps^{-1}\wt{E}^{(0)}(k;x) + \wt{E}^{(1)}(k;x)+\eps \wt{E}^{(2)}(k;x) + \mathcal{O}(\eps^2) \] with \beq \notag \wt{E}^{(0)}(k;x) =\sum_{j=1}^N\sum_{K\in\Lam^*} \chi_{D_{\eps^r}}^{}\left(k-k^{(j)}+K\right) \wh{A}_j\left(\tfrac1{\eps}(k-k^{(j)}+K)\right)e^{\ri K\cdot x} p_{n_j}(k^{(j)};x) \ee for all $k\in\B$ and $x\in\R^2$.
In the following we will use the notation $K^m=m_1b^{(1)}+m_2b^{(2)}\in\Lam^*$ for $m\in\Z^2$ for convenience. As an abbreviation we let $\ell^{(j,m)}(k):=\tfrac1{\eps}(k-k^{(j)}+K^m)$ for $k\in\R^2$ and $m\in\Z^2$, so that $\wt{E}^{(0)}$ is given as \beq \label{E:E0_approx} \wt{E}^{(0)}(k;x) =\sum_{j=1}^N\sum_{m\in\Z^2} \chi_{D_{\eps^r}}^{}\big(\ell^{(j,m)}(k)\big) \wh{A}_j\big(\ell^{(j,m)}(k)\big)e^{\ri K^m\cdot x} p_{n_j}(k^{(j)};x). \ee Note that $\wt{E}^{(0)}(\pkt;x)$ is supported on a set of (for sufficiently small $\eps$) disjoint balls $B_{\eps^r}(k^{(j)}-K^m)$, $j\in\{1,\ldots,N\}$, $m\in\Z^2$. \subsection{Formal asymptotic analysis\label{S:asymptotic_analysis}} Let us proceed with a formal asymptotic analysis of \eqref{E:NL_Maxw}. First, we consider $k$ close to $k^{(j)}-K^m$, i.e., $k\in B_{\eps^r}(k^{(j)}-K^m)$ for some $j\in \{1, \ldots, N\}$ and $m\in \Z^2$. Then \beq\label{E:Ltil_expand} \begin{split} \wt{L}(k) &= \wt{L}\big(k^{(j)}-K^m+\eps\ell^{(j,m)}(k)\big)\\ &= \wt{L}(k^{(j)}-K^m) +\eps \ell^{(j,m)}(k)\cdot\pa_{k}\wt{L}(k^{(j)}-K^m) +\frac12\eps^2Q(\ell^{(j,m)}(k)), \end{split} \ee where we have used the fact that the second derivatives of $\wt{L}$ are constant in $k$, see \eqref{E:Ltil_2nd_der}, and where \[ \begin{split} \ell^{(j,m)}(k)\cdot\pa_{k}\wt{L}(k^{(j)}-K^m) &= \sum_{i=1}^2\ell^{(j,m)}_i(k)\pa_{k_i}\wt{L}(k^{(j)}-K^m), \text{ and}\\ Q(\ell^{(j,m)}(k)) & = \sum_{a,b=1}^2\ell_a^{(j,m)}(k)\ell_b^{(j,m)}(k)\pa^2_{k_a,k_b}\wt{L}. \end{split} \] Using \eqref{E:ansatz_Bloch}, \eqref{E:E0_approx}, \eqref{E:Ltil_expand} and $\omega = \omega_* + \Omega \eps^2$, we get a hierarchy of equations at each power of $\eps$ for $x\in U$ and $k\in B_{\eps^r}(k^{(j)}-K^m)$. We now study the equations related to $\eps^{-1},\eps^0,\eps^1$ under the condition that the nonlinear term contributes at $\eps^1$, which is confirmed later in \eqref{E:NL_term_conv}.
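Because $\wt{L}(k)$ is quadratic in $k$, the expansion \eqref{E:Ltil_expand} is exact, with no remainder. This can be verified on the Fourier symbol of $\wt{L}(k)$ (acting on a plane wave $e^{\ri\xi\cdot x}$, i.e.{} $\pa_{x_j}\mapsto\ri\xi_j$); the numerical values of $\kappa$, $\xi$, $k_0$, and $h$ below are arbitrary illustrative choices.

```python
import numpy as np

kappa, xi = 5.0, np.array([1.3, -0.4])

def L(k):
    # symbol of Ltilde(k) on the plane wave e^{i xi.x}: d_j = i(xi_j + k_j)
    d1, d2 = 1j * (xi + np.asarray(k, float))
    return np.array([[kappa**2 - d2**2, d1*d2,            1j*kappa*d1],
                     [d1*d2,            kappa**2 - d1**2, 1j*kappa*d2],
                     [1j*kappa*d1,      1j*kappa*d2,      -d1**2 - d2**2]])

k0, h = np.array([0.2, -0.7]), np.array([0.31, 0.17])   # h need not be small
# central differences with step 0.5 (divisor 2*0.5 = 1) are exact for quadratics
dL = [L(k0 + 0.5*e) - L(k0 - 0.5*e) for e in np.eye(2)]
# constant second derivatives of Ltilde, as listed in the text
D11 = np.diag([0.0, 2.0, 2.0])
D22 = np.diag([2.0, 0.0, 2.0])
D12 = np.array([[0.0, -1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
Q = h[0]**2 * D11 + h[1]**2 * D22 + 2*h[0]*h[1] * D12
taylor = L(k0) + h[0]*dL[0] + h[1]*dL[1] + 0.5*Q
print(np.max(np.abs(L(k0 + h) - taylor)))               # zero up to roundoff
```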
$\mbf{O(\eps^{-1})}$: The resulting equation is \begin{align*} &\wh{A}_j\big(\ell^{(j,m)}(k)\big) \left(\wt{L}(k^{(j)}-K^m)-\omega_*^2\eta(x)\right) \big(p_{n_j}(k^{(j)};x)e^{\ri K^m\cdot x}\big)\\ &\qquad=\wh{A}_j\big(\ell^{(j,m)}(k)\big)e^{\ri K^m\cdot x} \left(\wt{L}(k^{(j)})-\omega_*^2\eta(x)\right) p_{n_j}(k^{(j)};x)\solleq 0. \end{align*} This holds by the definitions of $\omega_*=\omega_{n_j}(k^{(j)})$ and $p_{n_j}(k^{(j)};\pkt)$. $\mbf{O(1)}$: The resulting equation is \begin{align*} &\left(\wt{L}(k^{(j)}-K^m)-\omega_*^2\eta(x)\right)\wt{E}^{(1)}(k;x)\\ &\qquad=-\wh{A}_j\big(\ell^{(j,m)}(k)\big)\big(\ell^{(j,m)}(k)\cdot \pa_{k}\wt{L}(k^{(j)}-K^m)\big)\big(p_{n_j}(k^{(j)};x)e^{\ri K^m\cdot x}\big)\\ &\qquad=-\wh{A}_j\big(\ell^{(j,m)}(k)\big)e^{\ri K^m\cdot x} \big(\ell^{(j,m)}(k)\cdot\pa_{k}\wt{L}(k^{(j)})\big)p_{n_j}(k^{(j)};x) \solleq 0. \end{align*} Using \eqref{E:deriv_eq}, the solution is found to be \beq\label{E:E1} \wt{E}^{(1)}(k;x) =\wh{A}_j\big(\ell^{(j,m)}(k)\big)e^{\ri K^m\cdot x} \big(\ell^{(j,m)}(k)\cdot\pa_{k}p_{n_j}(k^{(j)};x)\big), \ee where $\ell^{(j,m)}(k)\cdot\pa_{k}p_{n_j}(k^{(j)};x) = \sum_{i=1}^2\ell^{(j,m)}_i(k)\pa_{k_i}p_{n_j}(k^{(j)};x)$. $\mbf{O(\eps)}$: The contribution of $\wt{L}(k)\wt{E}$ is \begin{align*} &\left(\wt{L}(k^{(j)}-K^m)-\omega_*^2\eta(x)\right)\wt{E}^{(2)}(k;x)\\ &\qquad{}+\tfrac{1}{2}Q(\ell^{(j,m)}(k))\wt{E}^{(0)}(k;x) -2\omega_*\Omega\eta(x)\wt{E}^{(0)}(k;x)\\ &\qquad{}+\big(\ell^{(j,m)}(k)\cdot\pa_{k}\wt{L}(k^{(j)}-K^m)\big) \wt{E}^{(1)}(k;x).
\end{align*} By insertion of the previous results this gives (for $k\in B_{\eps^r}(k^{(j)}-K^m)$) \begin{align} \notag &\left(\wt{L}(k^{(j)}-K^m)-\omega_*^2\eta(x)\right)\wt{E}^{(2)}(k;x)\\ \notag &\qquad\quad{}+\Big[\tfrac{1}{2}Q(\ell^{(j,m)}(k))p_{n_j}(k^{(j)};x) -2\omega_*\Omega\eta(x)p_{n_j}(k^{(j)};x)\\ \notag &\qquad\qquad\quad{}+\big(\ell^{(j,m)}(k)\cdot\pa_{k}\wt{L}(k^{(j)}-K^m)\big) \big(\ell^{(j,m)}(k)\cdot\pa_{k}p_{n_j}(k^{(j)};x)\big)\Big]\wh{A}_j \big(\ell^{(j,m)}(k)\big)e^{\ri K^m\cdot x}\\ \label{E:Oeps} &\qquad\solleq\omega_*^2\chici(x)\frac1{\eps^4} \Big(2\big(\wt{E}^{(0)}\,.\!*_\Bsm\wt{\barl{E^{(0)}}}\big)*_\Bsm \wt{E}^{(0)} +\big(\wt{E}^{(0)}\,.\!*_\Bsm \wt{E}^{(0)}\big)*_\Bsm\wt{\barl{E^{(0)}}}\Big) (k;x)\\ \notag &\qquad=:\omega_*^2\chici(x)\wt{G}_j(k;x). \end{align} The remainder of the section is devoted to the analysis of the structure of $\wt{G}_j$ in \eqref{E:Oeps} and to the derivation of a solvability condition for \eqref{E:Oeps}. Let us first analyze the nonlinearity. The convolutions in \eqref{E:Oeps} can be expanded into the form \beq\label{E:NL_termE} \wt{E}^{(0)}_a *_{\Bsm} \wt{E}^{(0)}_b *_{\Bsm} \wt{\barl{E^{(0)}_c}} = \sum_{\alpha,\beta,\gamma=1}^N \xi_{\alpha,a} *_{\Bsm} \xi_{\beta,b} *_{\Bsm} \xi_{\gamma,c}^\times, \ee where $a,b,c \in \{1,2,3\}$, and the functions $\xi_{\alpha,a}$ and $\xi_{\alpha,a}^\times$ are given by \[ \begin{array}{l} \xi_{\alpha,a}(k;x) := p_{n_\alpha,a}(k^{(\alpha)};x) \sum\limits_{z\in \Z^2} \chi_{D_{\eps^r}}^{}\big(k-k^{(\alpha)}+K^{z}\big)\wh{A}_\alpha \left(\tfrac1{\eps}(k-k^{(\alpha)}+K^{z})\right)e^{\ri K^{z}\cdot x}, \\[6pt] \xi_{\alpha,a}^\times(k;x) := \barl{p_{n_\alpha,a}}(k^{(\alpha)};x) \sum\limits_{z\in\Z^2} \chi_{D_{\eps^r}}^{}\big(k+k^{(\alpha)}-K^{z}\big)\wh{\barl{A}}_\alpha \left(\tfrac1{\eps}(k+k^{(\alpha)}-K^{z})\right)e^{-\ri K^{z}\cdot x}. \end{array} \] Note that \eqref{E:NL_termE} represents all the nonlinear terms in \eqref{E:Oeps} due to commutativity of $*_{\Bsm}$.
The summands in \eqref{E:NL_termE} have the form \beq\label{E:double_convol} (\xi_{\alpha,a}*_{\Bsm}\xi_{\beta,b}*_{\Bsm}\xi_{\gamma,c}^\times)(k;x) = \sum_{n,o,q\in\Z^2}g_{noq}(k;x) = \sum_{n\in M_\alpha^{(2)},\,o\in M_\beta^{(2)},\,q\in M_\gamma} g_{noq}(k;x), \ee where (with indices $\alpha,\beta,\gamma,a,b,c$ suppressed) \beq\label{E:NL_term} \begin{split} g_{noq}(k;x) &= e^{\ri(K^n+K^o-K^q)\cdot x} p_{n_{\alpha},a}(k^{(\alpha)};x) p_{n_{\beta},b}(k^{(\beta)};x)\ov{p_{n_{\gamma},c}}(k^{(\gamma)};x)\\ &\quad \int\limits_{\B}\int\limits_{\B} \chi_{D_{\eps^r}}^{}\big(k-t-k^{(\alpha)}+K^n\big) \wh{A}_\alpha\left(\tfrac1{\eps}(k-t-k^{(\alpha)}+K^n)\right)\\ &\qquad\qquad \times \chi_{D_{\eps^r}}^{}\big(t-s-k^{(\beta)}+K^o\big) \wh{A}_\beta\left(\tfrac1{\eps}(t-s-k^{(\beta)}+K^o)\right)\\[4pt] &\qquad\qquad \times \chi_{D_{\eps^r}}^{}\big(s+k^{(\gamma)}-K^q\big) \wh{\barl{A}}_\gamma\left(\tfrac1{\eps}(s+k^{(\gamma)}-K^q)\right) \dd s\dd t \end{split} \ee and with \begin{align*} M_\gamma &= \{z\in\Z^2 : k-k^{(\gamma)}+K^{z}\in B_{\eps^r}(0) \text{ for some } k\in \B \text{ and all } \eps>0 \},\\ M_\flat^{(2)} &= \{z\in\Z^2 : k-k^{(\flat)}+K^{z}\in B_{\eps^r}(0) \text{ for some } k\in \B + \B \text{ and all } \eps>0 \} \end{align*} for $\flat\in\{\alpha,\beta\}$. The truncation of the series in \eqref{E:double_convol} comes from the fact that for $s,t,k \in \B$ we have $t-s\in\B + \B$ and $k-t \in \B + \B$ so that the three characteristic functions in \eqref{E:NL_term} can be nonzero only for $n\in M_\alpha^{(2)}$, $o\in M_\beta^{(2)}$, and $q\in M_\gamma$. More precisely, this is seen as follows. Only those combinations of $n,o,q$ which produce nonzero values of all the three characteristic functions in \eqref{E:NL_term} and of the function $\chi_{D_{\eps^r}}^{}\big(\pkt-k^{(j)}+K^m\big)$ in \eqref{E:Oeps} for given $j$ and some $k,t,s\in \B$ are of relevance.
Firstly, $\chi_{D_{\eps^r}}^{}\big(s+k^{(\gamma)}-K^q\big)$ is nonzero for some $s\in \B$ and for arbitrary $\eps>0$ if and only if $s_0:=-k^{(\gamma)}+K^q\in \overline{\B}$ (the closure of $\B$) for some $q\in\Z^2$, which is equivalent to \beq\label{E:q_cond} q\in M_\gamma. \ee Secondly, for a fixed $q$ the factor $\chi_{D_{\eps^r}}^{}\big(t-s-k^{(\beta)}+K^o\big)$ is nonzero for all $\eps >0$ and some $t\in\B$ and $s$ obtained in the first step if and only if $t_0:=s_0+k^{(\beta)}-K^o \in \overline{\B}$, i.e., \beq\label{E:o_cond} k^{(\beta)}-k^{(\gamma)}+K^q-K^o \in \barl{\B}. \ee This can always be satisfied by a choice of $o\in M^{(2)}_\beta$. Finally, for fixed $q$ and $o$ we need that $\chi_{D_{\eps^r}}^{}\big(k-t-k^{(\alpha)}+K^n\big)$ does not vanish for some $k\in\B$ with $k-k^{(j)}+K^m\in D_{\eps^r}$ and all $\eps>0$, where this latter restriction is due to the restriction $k\in B_{\eps^r}(k^{(j)}-K^m)$ in \eqref{E:Oeps}. In other words, we need that $k_0:=k^{(j)}-K^m\in\barl{\B}$ and $0=k_0-t_0-k^{(\alpha)}+K^n$, i.e.{} \beq\label{E:n_cond} k^{(\alpha)}+k^{(\beta)}-k^{(\gamma)}+K^q-K^o-K^n = k^{(j)}-K^m \in\barl{\B} \ee for some $n\in \Z^2$. In fact, all solutions for $n$ of \eqref{E:n_cond} lie in $M_\alpha^{(2)}$. In summary, for $\alpha,\beta, \gamma \in \{1, \ldots, N\}$ the term $g_{noq}$ is nonzero in \eqref{E:NL_term} if $n,o,q$ satisfy \eqref{E:q_cond}, \eqref{E:o_cond} and \eqref{E:n_cond}. So the term $\xi_{\alpha,a} *_{\Bsm} \xi_{\beta,b} *_{\Bsm} \xi_{\gamma,c}^\times$ enters $\wt{G}_j$ provided \[ \CA_{\alpha,\beta,\gamma,j} :=\Big\{(n,o,q)\in (\Z^2)^3 : n\in M_\alpha^{(2)},\,o\in M_\beta^{(2)}, \,q\in M_\gamma\text{ and } \eqref{E:o_cond},\,\eqref{E:n_cond} \text{ hold}\Big\} \] is nonempty for some $m\in M_j$. Note that we omitted an index $m$ in this definition, because the set is either nonempty or empty for all $m\in M_j$.
Indeed, if $m$ is one index that meets the requirements with $(n,o,q)$ and $z$ is any other index in $M_j$, then $z$ meets the requirements for $(n+m-z,o,q)$. $\CA_{\alpha,\beta,\gamma,j}$ can be constructed by a computer code that scans all possible combinations of $n,o,q$. This will be discussed in Section \ref{S:CME_examples}. Due to the characteristic function, the integration domains in \eqref{E:NL_term} can be reduced to $s\in B_{\eps^r}(-k^{(\gamma)}+K^q)\cap \B$ and $t\in B_{2\eps^r}(k^{(\beta)}-k^{(\gamma)}-K^o+K^q) \cap \B$. Now we introduce the change of variables $\tilde{s} :=(s+k^{(\gamma)}-K^q)/\eps$ and $\tilde{t} := (t-k^{(\beta)}+k^{(\gamma)}+K^o-K^q)/\eps$ to get \beq\label{E:NL_term_conv} \begin{split} &g_{noq}(k;x) =\eps^4 e^{\ri(K^n+K^o-K^q)\cdot x}p_{n_{\alpha},a}(k^{(\alpha)};x) p_{n_{\beta},b}(k^{(\beta)};x)\ov{p_{n_{\gamma},c}}(k^{(\gamma)};x)\\ &\qquad\times\int \limits_{D_{2\eps^{r-1}}\cap\tfrac{\B-k^{(\beta)}+k^{(\gamma)}+K^o-K^q}{\eps}} \int\limits_{D_{\eps^{r-1}}\cap\tfrac{\B+k^{(\gamma)}-K^q}{\eps}} \chi_{D_{\eps^{r-1}}}^{}\left( \tfrac{k-(k^{(\alpha)}+k^{(\beta)}-k^{(\gamma)})+K^n+K^o-K^q}{\eps} -\tilde{t}\right)\\ &\qquad\quad\quad\times\wh{A}_\alpha\left( \tfrac{k-(k^{(\alpha)}+k^{(\beta)}-k^{(\gamma)})+K^n+K^o-K^q}{\eps} -\tilde{t}\right) \chi_{D_{\eps^{r-1}}}^{}(\tilde{t}-\tilde{s})\wh{A}_\beta(\tilde{t}-\tilde{s}) \chi_{D_{\eps^{r-1}}}^{}(\tilde{s})\wh{\barl{A}}_\gamma(\tilde{s}) \dd\tilde{s} \dd\tilde{t}. \end{split} \ee The factor $\eps^4$ in this formula shows that $\wt{G}_j=O(1)$ as required for the consistent asymptotic expansion.
If \eqref{E:n_cond} is satisfied, \eqref{E:NL_term_conv} becomes \beq\label{E:NL_term_conv2} \begin{split} &g_{noq}(k;x) =\eps^4 e^{\ri(k^{(\alpha)}+k^{(\beta)}-k^{(\gamma)}-k^{(j)}+K^m)\cdot x} p_{n_{\alpha},a}(k^{(\alpha)};x) p_{n_{\beta},b}(k^{(\beta)};x)\ov{p_{n_{\gamma},c}}(k^{(\gamma)};x) \\ &\qquad\times\int \limits_{D_{2\eps^{r-1}}\cap\frac{\B-k^{(\beta)}+k^{(\gamma)}+K^o-K^q}\eps} \int\limits_{D_{\eps^{r-1}}\cap\frac{\B+k^{(\gamma)}-K^q}\eps} \chi_{D_{\eps^{r-1}}}^{}\left(\tfrac{k-k^{(j)}+K^m}{\eps}-\tilde{t}\right)\\ &\qquad\qquad\times\wh{A}_\alpha\left(\tfrac{k-k^{(j)}+K^m}{\eps}-\tilde{t}\right) \chi_{D_{\eps^{r-1}}}^{}(\tilde{t}-\tilde{s})\wh{A}_\beta(\tilde{t}-\tilde{s}) \chi_{D_{\eps^{r-1}}}^{}(\tilde{s})\wh{\barl{A}}_\gamma(\tilde{s}) \dd\tilde{s} \dd\tilde{t} \end{split} \ee for $k\in B_{\eps^r}(k^{(j)}-K^m)$. As we show in Remark \ref{R:convol_sum}, summing, for fixed $k,j,m$, the terms \eqref{E:NL_term_conv2} over $(n,o,q)\in \CA_{\alpha,\beta,\gamma,j}$ yields a double convolution integral in $\tilde{s},\tilde{t}$ over the full discs $D_{2\eps^{r-1}}$ and $D_{\eps^{r-1}}$, i.e., \beq\label{E:NL_term_conv_sum} \begin{split} &\big(\xi_{\alpha,a} *_{\Bsm} \xi_{\beta,b} *_{\Bsm} \xi_{\gamma,c}^\times\big)(k;x) =\eps^4 e^{\ri(k^{(\alpha)}+k^{(\beta)}-k^{(\gamma)}-k^{(j)}+K^m)\cdot x} p_{n_{\alpha},a}(k^{(\alpha)};x)p_{n_{\beta},b}(k^{(\beta)};x) \barl{p_{n_{\gamma},c}}(k^{(\gamma)};x) \\ &\quad\times\int \limits_{D_{2\eps^{r-1}}}\int\limits_{D_{\eps^{r-1}}} \chi_{D_{\eps^{r-1}}}^{}\big(\ell^{(j,m)}(k)-\tilde{t}\big) \wh{A}_\alpha\big(\ell^{(j,m)}(k)-\tilde{t}\big) \chi_{D_{\eps^{r-1}}}^{}(\tilde{t}-\tilde{s}) \wh{A}_\beta(\tilde{t}-\tilde{s}) \chi_{D_{\eps^{r-1}}}^{}(\tilde{s})\wh{\barl{A}}_\gamma(\tilde{s}) \dd\tilde{s}\dd\tilde{t}\\ &\quad=:\eps^4 e^{\ri(-k^{(j)}+K^m)\cdot x} u_{n_\alpha,a}(k^{(\alpha)};x)u_{n_\beta,b}(k^{(\beta)};x) \barl{u_{n_\gamma,c}}(k^{(\gamma)};x)\; \tilde{h}_{\alpha,\beta,\gamma}^{(\eps)}(\ell^{(j,m)}(k)) \end{split} \ee for
$k\in B_{\eps^r}(k^{(j)}-K^m)$. Here we have used $u_{n_{\alpha},a}(k^{(\alpha)};x) =p_{n_{\alpha},a}(k^{(\alpha)};x)e^{\ri k^{(\alpha)}\cdot x}$, etc., and we defined $\tilde{h}_{\alpha,\beta,\gamma}^{(\eps)}(\ell^{(j,m)}(k))$ as an abbreviation for the integral on the right hand side. \brem\label{R:convol_sum} To show that the sum of $g_{noq}$ over $(n,o,q)\in \CA_{\alpha,\beta,\gamma,j}$ yields a double convolution integral over full discs, let us first note that the definitions of $M_\gamma$ and $M^{(2)}_\beta$ ensure \beq\label{E:Mgamma_sum} \bigcup_{q\in M_\gamma} \left((\B+k^{(\gamma)}-K^q)\cap D_{\eps^r}\right)=D_{\eps^r}, \ee and \beq\label{E:M2beta_sum} \bigcup_{o\in M^{(2)}_\beta} \left((\B-k^{(\beta)}+k^{(\gamma)}+K^o-K^q)\cap D_{2\eps^r}\right) =D_{2\eps^r}. \ee These are obvious when $k^{(\gamma)} \in \operatorname{int}(\B)$ and $k^{(\beta)},k^{(\gamma)} \in \operatorname{int}(\B)$, respectively, because then $M_\gamma=M^{(2)}_\beta=\{(0,0)^T\}$. But when $k^{(\gamma)}\in\pa\B$, then only a fraction of $-k^{(\gamma)}+D_{\eps^r}$ lies in $\B$ (in our example with a hexagonal $\B$ the fraction is a half unless $k^{(\gamma)}$ is a vertex of $\B$, in which case it is a third) and the rest lies in periodicity cells centered at neighboring reciprocal lattice points. Each point $\ell$ in this rest is therefore mapped to $\B$ via $\ell+K^q$ with some $q\in M_\gamma$, and we thus have \eqref{E:Mgamma_sum}. By an analogous argument, observing that $k^{(\beta)}-(k^{(\gamma)}-K^q)\in\B +D_{\eps^r}$ for all $q\in M_\gamma$, we get \eqref{E:M2beta_sum} from the definition of $M^{(2)}_\beta$. Let us now assume \eqref{E:n_cond} and show that for each $K^q$ fixed, i.e.{} for each fixed integration domain in the inner integral in \eqref{E:NL_term_conv}, the sum of $g_{noq}$ over $(n,o,q)\in \CA_{\alpha,\beta,\gamma,j}$ yields an integration over the full disc $D_{2\eps^r}$ in the outer integral. 
If this were not the case, i.e.{} if $\exists\ell\in D_{2\eps^r}$ such that $\ell\notin \B-k^{(\beta)}+k^{(\gamma)}+K^o-K^q$ for any such $(n,o,q)\in \CA_{\alpha,\beta,\gamma,j}$, then by \eqref{E:M2beta_sum} there would be $o\in M_\beta^{(2)}$ such that $(n,o,q)\notin \CA_{\alpha,\beta,\gamma,j}$ while \eqref{E:o_cond} and \eqref{E:n_cond} are satisfied. This is a contradiction to the definition of $\CA_{\alpha,\beta,\gamma,j}$. After that we sum over all $q\in M_\gamma$ and the result follows from \eqref{E:NL_term_conv}. \ere We now write the $d$-th component ($d\in \{1,2,3\}$) of $\wt{G}_j$ as \begin{align} \label{E:Gamma_def} \wt{G}_{j,d}(k;x)=\eps^{-4}\chi_{D_{\eps^r}}(k-k^{(j)}+K^m) \sum_{a,b,c=1}^3\Gamma_{a,b,c}^{(d)} \left(\wt{E}^{(0)}_a *_{\Bsm} \wt{E}^{(0)}_b *_{\Bsm} \wt{\barl{E^{(0)}_c}}\right)(k;x), \end{align} where the integer coefficients $\Gamma_{a,b,c}^{(d)}$ can be easily derived from \eqref{E:PNL_ans_full}. In detail we have $\Gamma_{1,1,1}^{(1)}=\Gamma_{2,2,2}^{(2)}=\Gamma_{3,3,3}^{(3)}=3$, $\Gamma_{1,2,2}^{(1)}=\Gamma_{2,1,2}^{(1)}=\Gamma_{1,3,3}^{(1)} =\Gamma_{3,1,3}^{(1)} =\Gamma_{2,2,1}^{(1)}=\Gamma_{3,3,1}^{(1)}=1$, $\Gamma_{1,2,1}^{(2)}=\Gamma_{2,1,1}^{(2)}=\Gamma_{3,2,3}^{(2)} =\Gamma_{2,3,3}^{(2)}=\Gamma_{1,1,2}^{(2)}=\Gamma_{3,3,2}^{(2)}=1$, $\Gamma_{1,3,1}^{(3)}=\Gamma_{3,1,1}^{(3)}=\Gamma_{2,3,2}^{(3)} =\Gamma_{3,2,2}^{(3)}=\Gamma_{1,1,3}^{(3)}=\Gamma_{2,2,3}^{(3)}=1$, and the remaining $\Gamma_{a,b,c}^{(d)}$ are zero. Finally, using \eqref{E:NL_term_conv_sum}, we get for $k\in B_{\eps^r}(k^{(j)}-K^m)$ \begin{align}\label{E:NL_term_structure} \begin{split} \wt{G}_{j,d}(k;x)=e^{\ri(-k^{(j)}+K^m)\cdot x} \sum_{a,b,c=1}^3\Gamma_{a,b,c}^{(d)} \sum_{\substack{\alpha,\beta,\gamma\in \{1,\ldots,N\} \text{ s.t.} \\ \CA_{\alpha,\beta,\gamma,j}\neq \emptyset}} & u_{n_\alpha,a}(k^{(\alpha)};x)u_{n_\beta,b}(k^{(\beta)};x)\\ \times & \barl{u_{n_\gamma,c}}(k^{(\gamma)};x) \tilde{h}_{\alpha,\beta,\gamma}^{(\eps)}(\ell^{(j,m)}(k)). 
\end{split} \end{align} In order to make the discussion of the asymptotic hierarchy complete, we also have to consider the part of the $k$-domain outside the neighborhoods of $k^{(j)}$. For $k \in \B$ such that $k-k^{(j)}+K^m \in \B\setminus D_{\eps^r}$ for all $m \in M_j$ we have $\bigl(\wt{L}(k^{(j)}-K^m;x)-\omega_*^2\eta(x)\bigr)\wt{E}^{(l)}(k;x)=0$ for $l\in\{0,1\}$ so that $\wt{E}^{(0)}(k;\pkt)\equiv \wt{E}^{(1)}(k;\pkt)\equiv 0$ for such $k$. \subsection{Coupled mode equations}\label{S:cme} We return now to equation \eqref{E:Oeps}. Due to the Fredholm alternative the existence of $\Lambda$-periodic solutions $\wt{E}^{(2)}$ of equation \eqref{E:Oeps} is equivalent to $L^2$-orthogonality of \eqref{E:Oeps} to $p_{n_j}(k^{(j)};x)e^{\ri K^m\cdot x}$, which needs to be ensured for all $m \in M_j$ and $j\in \{1,\ldots, N\}$. The range of $\ell^{(j,m)}$ is a different section of the disc $D_{\eps^{r-1}}$ for each $m\in M_j$. Each section is a $(1/|M_j|)$-th of the full disc, so that these $|M_j|$ equations combine into one equation in $\ell\in D_{\eps^{r-1}}$. Figure \ref{F:Dj_Mj} shows these sections for two example points $k^{(j)}$, one with $|M_j|=2$ and one with $|M_j|=3$. \begin{figure}[h!]
\begin{center} \begin{minipage}[c]{.49\textwidth} \centering (a) \medskip \psset{unit=2.9cm} \pspicture(-1,-1)(1,1) \psdots*(0,0)(0.866,0.5)(0.866,-0.5)(-0.866,0.5)(-0.866,-0.5)(0,1)(0,-1) \put(-0.3,0.29){\Large$\B$} \pscircle[linestyle=dotted,linewidth=0.5pt](0.433,0.25){0.16} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=gray](-0.433,-0.25){0.16}{-60}{120} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=lightgray](0.433,0.25){0.16}{120}{300} \psline(0.5774,0)(0.2887,0.5) \psline(0.2887,0.5)(-0.2887,0.5) \psline[linestyle=dashed](-0.2887,0.5)(-0.5774,0) \psline[linestyle=dashed](-0.5774,0)(-0.2887,-0.5) \psline[linestyle=dashed](-0.2887,-0.5)(0.2887,-0.5) \psline(0.2887,-0.5)(0.5774,0) \psdots*[dotstyle=+](0.433,0.25) \put(0.45,0.29){$k^{(j)}$} \put(0.59,0.14){$B_{\eps^r}(k^{(j)})$} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=gray](0,0){0.16}{-60}{120} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=lightgray](0,0){0.16}{120}{300} \psdots*(0,0) \put(0.08,0.325){$m^{(1)}$} \put(-0.66,-0.39){$m^{(2)}$} \psline{->}(0.09,-0.02)(0.19,-0.16) \psline{->}(-0.09,-0.04)(-0.07,-0.3) \put(0.21,-0.2){$\eps l^{(j,m^{(2)})}$} \put(-0.14,-0.45){$\eps l^{(j,m^{(1)})}$} \endpspicture \end{minipage} \begin{minipage}[c]{0.49\textwidth} \centering (b) \medskip \psset{unit=2.9cm} \pspicture(-1,-1)(1,1) \psdots*(0,0)(0.866,0.5)(0.866,-0.5)(-0.866,0.5)(-0.866,-0.5)(0,1)(0,-1) \put(-0.43,0.05){\Large$\B$} \pscircle[linestyle=dotted,linewidth=0.5pt](0.5774,0){0.16} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=lightgray](0.5774,0){0.16}{120}{240} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=gray](-0.2887,0.5){0.16}{240}{360} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=darkgray](-0.2887,-0.5){0.16}{0}{120} \psline(0.5774,0)(0.2887,0.5) \psline(0.2887,0.5)(-0.2887,0.5) \psline[linestyle=dashed](-0.2887,0.5)(-0.5774,0) \psline[linestyle=dashed](-0.5774,0)(-0.2887,-0.5) \psline[linestyle=dashed](-0.2887,-0.5)(0.2887,-0.5) \psline(0.2887,-0.5)(0.5774,0)
\psdots*[dotstyle=+](0.5774,0) \put(0.595,0.04){$k^{(j)}$} \put(0.72,-0.16){$B_{\eps^r}(k^{(j)})$} \put(0.24,0.07){$m^{(1)}$} \put(-0.3,0.55){$m^{(2)}$} \put(-0.3,-0.65){$m^{(3)}$} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=lightgray](0,0){0.16}{120}{240} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=gray](0,0){0.16}{240}{360} \pswedge[linestyle=dotted,fillstyle=solid,fillcolor=darkgray](0,0){0.16}{0}{120} \psdots*(0,0) \psline{->}(0.06,-0.05)(0.19,-0.2) \psline{->}(0.05,0.09)(0.14,0.24) \psline{->}(-0.1,0)(-0.19,-0.1) \put(0.21,-0.37){$\eps l^{(j,m^{(2)})}$} \put(-0.65,-0.25){$\eps l^{(j,m^{(1)})}$} \put(0.08,0.29){$\eps l^{(j,m^{(3)})}$} \endpspicture \end{minipage} \end{center} \caption{\label{F:Dj_Mj}\smal Two example points $k^{(j)}$ in the case of the hexagonal lattice and the corresponding ranges of $\eps\ell^{(j,m)}$ for all $m\in M_j$. In (a) we have $M_j=\{(0,0)^T,(1,1)^T\}=:\{m^{(1)},m^{(2)}\}$ and in (b) $M_j=\{(0,0)^T, (0,1)^T, (1,1)^T\}=:\{m^{(1)},m^{(2)},m^{(3)}\}$. The shaded sections along the boundary of $\B$ are those $k\in \B$ for which $\chi_{D_{\eps^r}}(k-k^{(j)}+K^m)\neq 0$ for the $m\in M_j$ written next to the respective section.} \end{figure} When imposing the orthogonality condition, the common factor $e^{\ri K^m\cdot x}$ of the right hand side of \eqref{E:Oeps} is canceled in the complex inner product with $p_{n_j}(k^{(j)};x)e^{\ri K^m\cdot x}$, so that the same solvability condition holds for all $m \in M_j$. 
Using \eqref{E:Oeps}, \eqref{E:normalize_Bloch}, and \eqref{E:om_deriv_2} (with $n_*$ and $k_*$ replaced by $n_j$ and $k^{(j)}$), we obtain \beq\label{E:CME_Four} \Omega\wh{A}_j(\ell) -\frac{1}{2}\left(\ell_1^2\pa_{k_1}^2\omega_{n_j}(k^{(j)}) +\ell_2^2\pa_{k_2}^2\omega_{n_j}(k^{(j)}) +2\ell_1\ell_2\pa_{k_1,k_2}^2\omega_{n_j}(k^{(j)})\right)\wh{A}_j(\ell) + \wh{\CN}_j(\ell) = 0 \ee for $\ell\in D_{\eps^{r-1}}$, where \begin{align*} \wh{\CN}_j(\ell) &=\frac{\omega_*}{2}\big\langle\chici(\pkt)\wt{G}_j(\ell;\pkt), p_{n_j}(k^{(j)};\pkt)e^{-\ri K^m\cdot(\pkt)}\big\rangle\\ &=\frac{\omega_*}{2}\sum_{a,b,c,d=1}^3\Gamma_{a,b,c}^{(d)} \sum_{\substack{\alpha,\beta,\gamma\in \{1,\ldots,N\} \text{ s.t.}\\ \CA_{\alpha,\beta,\gamma,j}\neq \emptyset}}\int_U\chici(x) u_{n_\alpha,a}(k^{(\alpha)};x)u_{n_\beta,b}(k^{(\beta)};x)\\ & \hspace{6.5cm}\times \barl{u_{n_\gamma,c}}(k^{(\gamma)};x) \barl{u_{n_j,d}}(k^{(j)};x)\dd x\; \tilde{h}_{\alpha,\beta,\gamma}^{(\eps)}(\ell)\\ &=:\sum_{\substack{\alpha,\beta,\gamma\in \{1,\ldots,N\} \text{ s.t.}\\ \CA_{\alpha,\beta,\gamma,j}\neq \emptyset}} I_{\alpha,\beta,\gamma,j}\tilde{h}_{\alpha,\beta,\gamma}^{(\eps)}(\ell), \end{align*} i.e.{} with \eqref{E:PNL_ans_full} and the definition of $\Gamma$ in \eqref{E:Gamma_def} \beq \label{E:I_coeffs} \begin{array}{rl} I_{\alpha,\beta,\gamma,j} :=& \displaystyle \frac{\omega_*}{2}\sum_{a,b,c,d=1}^3\Gamma_{a,b,c}^{(d)} \int_U\chici u_{n_\alpha,a}(k^{(\alpha)};\pkt)u_{n_\beta,b}(k^{(\beta)};\pkt) \barl{u_{n_\gamma,c}}(k^{(\gamma)};\pkt)\barl{u_{n_j,d}}(k^{(j)};\pkt)\\ =& \displaystyle \frac{\omega_*}{2} \int_U \chici \left[2(u_{n_\alpha}(k^{(\alpha)};\pkt) \cdot \barl{u_{n_\gamma}}(k^{(\gamma)};\pkt)) u_{n_\beta}(k^{(\beta)};\pkt) \right.\\ &\left. \qquad\; + (u_{n_\alpha}(k^{(\alpha)};\pkt) \cdot u_{n_\beta}(k^{(\beta)};\pkt)) \barl{u_{n_\gamma}}(k^{(\gamma)};\pkt)\right]\cdot\barl{u_{n_j}}(k^{(j)};\pkt)\;.
\end{array} \ee The symmetries in $\Gamma_{a,b,c}^{(d)}$ imply symmetries in $I_{\alpha,\beta,\gamma,j}$. Namely, due to the symmetries $\Gamma_{a,b,c}^{(d)}=\Gamma_{b,a,c}^{(d)}$ and $\Gamma_{a,b,c}^{(d)}=\Gamma_{a,b,d}^{(c)}$ we have \beq \label{E:I_sym1} I_{\alpha,\beta,\gamma,j}=I_{\beta,\alpha,\gamma,j} \text{ and } I_{\alpha,\beta,\gamma,j}=I_{\alpha,\beta,j,\gamma} \qquad\text{for all } \alpha,\beta,\gamma,j\in\{1,\ldots,N\}, \ee and due to $\Gamma_{a,b,c}^{(d)}=\Gamma_{c,d,a}^{(b)}$ we have \beq\label{E:I_sym2} I_{\alpha,\beta,\gamma,j}=\overline{I_{\gamma,j,\alpha,\beta}} \qquad\text{for all } \alpha,\beta,\gamma,j\in\{1,\ldots,N\}. \ee Symmetries \eqref{E:I_sym1} and \eqref{E:I_sym2} imply, in particular, that $I_{\alpha,\beta,\alpha,\beta}=I_{\alpha,\beta,\beta,\alpha}\in\R$ for all $\alpha,\beta\in\{1,\ldots,N\}$. Let the crystal satisfy the rotational symmetry $\eta(x)=\eta(r_\nu(x))$ and $\chici(x)=\chici(r_\nu(x))$ for all $x\in \R^2$ and some $\nu\in (-\pi,\pi]$ and let $U$ be chosen so that $r_\nu(U)=U$. If for each $m\in\{\alpha,\beta,\gamma,j\}\subset\{1,\dots,N\}$ there exists $m'\in\{1,\dots,N\}$ such that \[ k^{(m')}=r_\nu(k^{(m)}), \] and if $\omega_n(k^{(m)})$ is a geometrically simple eigenvalue of \eqref{E:shifted} for each $m\in \{\alpha,\beta,\gamma,j\}$, then \beq\label{E:I_sym_rot} I_{\alpha,\beta,\gamma,j}=I_{\alpha',\beta',\gamma',j'}. \ee This is seen by the change of variables $y=r_\nu(x)$ in \eqref{E:I_coeffs}, using the facts $r_\nu(U)=U$ and $r_\nu(v)\cdot r_\nu(w)=v\cdot w$ for all $v,w\in \C^3$, and employing the symmetry \eqref{E:rot_sym_Bloch}. Additional symmetries in $I_{\alpha,\beta,\gamma,j}$ arise when a spatial reflection symmetry in $\eta$ and $\chici$ is present.
For instance when $\eta(x)=\eta(S_2(x))$, $\chici(x)=\chici(S_2(x))$ for all $x\in \R^2$ (see Section \ref{S:refl_sym}) and if for each $m\in\{\alpha,\beta,\gamma,j\}\subset\{1,\dots,N\}$ there exists $m'\in\{1,\dots,N\}$ such that \[ k^{(m')} = S_2(k^{(m)}) \] and such that $S_2(k^{(m)}) \doteq k^{(m)}$ does \textit{not} hold for any $m\in \{\alpha,\beta,\gamma,j\}$, then \beq\label{E:I_sym_refl} I_{\alpha,\beta,\gamma,j} = I_{\alpha',\beta',\gamma',j'}. \ee This is proved via a change of variables in \eqref{E:I_coeffs} using \eqref{E:refl2_sym_Bloch}, where $a=0$ due to our assumptions. A similar result holds for the reflection symmetry in $x_1$. Returning now to \eqref{E:CME_Four}, for smooth envelopes $A_j$ we can neglect the contribution of $\wh{A}_j$ from $\ell\in \R^2\setminus D_{\eps^{r-1}}$ or simply assume that $\wh{A}_j$ satisfy \eqref{E:CME_Four} also there. This step can be rigorously justified via a persistence argument similar to that in \cite{DU09,DU_err11}. $\tilde{h}_{\alpha,\beta,\gamma}^{(\eps)}$ will then be replaced by $\wh{A}_{\alpha}*\wh{A}_{\beta}*\wh{\barl{A}}_{\gamma}$. The inverse Fourier transform then produces the \emph{coupled mode equations} \beq\label{E:CME} \Omega A_j +\frac{1}{2}\left(\pa_{k_1}^2\omega_{n_j}(k^{(j)})\pa_{y_1}^2 +\pa_{k_2}^2\omega_{n_j}(k^{(j)})\pa_{y_2}^2 +2\pa^2_{k_1,k_2}\omega_{n_j}(k^{(j)})\pa_{y_1,y_2}^2\right)A_j +\CN_j =0 \ee on $\R^2$, where $\CN_j$ is given by \begin{align*} \CN_j &=\sum_{\substack{\alpha,\beta,\gamma\in \{1,\ldots,N\} \text{ s.t.}\\ \CA_{\alpha,\beta,\gamma,j}\neq \emptyset}} I_{\alpha,\beta,\gamma,j}A_\alpha A_\beta\barl{A_\gamma}. \end{align*} Note that the coupled mode equations have the same general structure as those for gap solitons of the scalar Gross--Pitaevskii equation \cite{DU09}. A localized solution $A$ of \eqref{E:CME} should produce via \eqref{E:ansatz_phys} an approximation of a gap soliton of the Maxwell problem \eqref{E:NL_Maxw}.
A rigorous justification of this statement can be done via the Lyapunov--Schmidt reduction similarly to \cite{DPS09,DU09,DU_err11} and will be the subject of a future project. System \eqref{E:CME} does not have localized solutions for arbitrary values of the coefficients. The coefficients of the derivative terms are given by the band structure and $\Omega=\pm 1$ is determined by the condition that $\omega=\omega_* + \eps^2 \Omega$ lies in the gap. But the function $\chici$ in $I_{\alpha,\beta,\gamma,j}$ has not been fixed and remains free at this point. The linear part of the operator in \eqref{E:CME} is definite due to our assumption (A4) in Section \ref{S:slowly envelope} and the fact that $\Omega<0$ at upper edges and $\Omega>0$ at lower edges: it is positive definite at lower edges $\omega_*$, where the $k^{(j)}$ are points of maxima, and negative definite at upper edges. In the case $N=1$, where $\CN_1 =\gamma |A_1|^2A_1$ and $\gamma =\tfrac{3\omega_*}{2}\int_U \chici|u_{n_1}(k^{(1)};\pkt)|^4$, a localized solution exists in the upper edge case only if $\chici$ is such that $\gamma>0$, while in the lower edge case $\chici$ has to produce $\gamma<0$. Physically it makes sense to set $\chici=0$ where $\eta=1$ (i.e.{} in vacuum/air). In the annulus regions, where $\eta=2.1025$, we set $\chici=1$ (a focusing nonlinearity) if $\gamma>0$ is needed and $\chici=-1$ (a defocusing nonlinearity) if $\gamma<0$ is required. This is in agreement with previous results on bifurcation of gap solitons from spectral edges in the periodic nonlinear Schr\"odinger equation \cite{HKS92,AL92,Pankov05,DPS09,DU09}, where bifurcation from upper/lower edges occurs for the focusing/defocusing nonlinearity respectively. In the case $N>1$ our numerical examples produce all $I_{\alpha,\beta,\gamma,j}$ of the same sign, so that we once again set, in the annulus regions, $\chici=1$ if $\omega_*$ is an upper edge of a gap and $\chici=-1$ if it is a lower edge.
\subsection{Examples of Coupled Mode Equations}\label{S:CME_examples} We present next coupled mode equations for gap solitons in the vicinity of spectral edges for the example in Section \ref{S:hex_struct} as well as for other canonical examples. As seen in Figure \ref{F:band_str}, there are 3 spectral gaps $(0,s_1)$, $(s_2,s_3)$ and $(s_4,s_5)$ on the positive part of the spectral $\omega$ axis for this specific example. We have the numerical values \[ \begin{array}{c} s_1=\omega_1(\Gamma)\approx 3.610,\; s_2=\omega_6(\Gamma)\approx 3.701,\; s_3=\omega_7(\Gamma)\approx 3.750,\\ s_4=\omega_{12}(0,2.351)\approx 3.873,\; s_5=\omega_{13}(0,2.407)\approx 3.882. \end{array} \] At $s_1$ and $s_2$ several bands lie very close to each other at the extremal point $k=\Gamma$. It is, however, not known whether these truly touch and the eigenvalues have higher multiplicity than one. Numerical tests have shown that varying the value of $\eta$ for the annulus material does not change the ordering of bands at $k=\Gamma$ near $s_1$ and $s_2$. We, therefore, assume that the edges $s_1$ and $s_2$ are simple eigenvalues at $k=\Gamma$ leading to $N=1$ at $s_1$ and $s_2$. If it can be proved that, for instance, $s_1$ is indeed a double eigenvalue, then $N=2$ at $s_1$. Likewise, $N$ would change if the multiplicity could be established for $s_2$. Similarly, the band $\omega_{12}$ is close to $\omega=s_5$ at four distinct $k$-points along $\pa \B_0$. At the point $k=(0,2.351)$ the numerical value is maximal and an analogous test shows that it remains maximal for a range of values of $\eta$. We thus assume that within $\B_0$ the value $\omega=s_5$ is attained only at $k=(0,2.351)$. Due to the discrete rotational symmetry of the band structure we thus have $N=6$ at $s_5$. Analogously, we have $N=6$ at $s_4$. Except for the simplest case with $N=1$, like in Section \ref{S:CME_N1}, we determine the sets $\CA_{\alpha,\beta,\gamma,j}$ using a Matlab program. 
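A minimal Python stand-in for such a scanning routine may illustrate the idea (the Matlab code itself is not shown here; the concrete hexagonal reciprocal basis below, with $a_0=1$, and the reduction of the ``for all $\eps>0$'' conditions in the definitions of $M_\gamma$, $M_j$ to membership in $\barl{\B}$ are our assumptions):

```python
import itertools
import numpy as np

# Assumed hexagonal reciprocal lattice basis (lattice constant a_0 = 1);
# K^z = z_1 b^(1) + z_2 b^(2).  Any basis with the same Wigner-Seitz
# hexagon gives the same membership test.
b1 = 2 * np.pi * np.array([1.0, -1.0 / np.sqrt(3.0)])
b2 = 2 * np.pi * np.array([0.0, 2.0 / np.sqrt(3.0)])
Z = list(itertools.product((-1, 0, 1), repeat=2))  # candidate integer shifts

def K(z):
    """Reciprocal lattice point K^z."""
    return z[0] * b1 + z[1] * b2

def in_closed_BZ(k, tol=1e-9):
    """Wigner-Seitz test: k lies in the closure of the Brillouin zone iff
    it is at least as close to 0 as to every other lattice point."""
    return all(np.linalg.norm(k) <= np.linalg.norm(k - K(z)) + tol
               for z in Z if z != (0, 0))

def A_set(ka, kb, kg, kj, tol=1e-9):
    """Triples (n, o, q) passing the analogues of (E:q_cond), (E:o_cond)
    and (E:n_cond) for the extremal points ka, kb, kg, kj."""
    Mj = [m for m in Z if in_closed_BZ(kj - K(m))]
    triples = []
    for q in Z:
        if not in_closed_BZ(-kg + K(q)):                 # (E:q_cond)
            continue
        for o in Z:
            if not in_closed_BZ(kb - kg + K(q) - K(o)):  # (E:o_cond)
                continue
            for n in Z:
                v = ka + kb - kg + K(q) - K(o) - K(n)
                if any(np.linalg.norm(v - (kj - K(m))) < tol
                       for m in Mj):                     # (E:n_cond)
                    triples.append((n, o, q))
    return triples

# At the edges s_1, s_2, s_3 all extremal points equal Gamma = (0,0)^T:
Gamma = np.zeros(2)
print(A_set(Gamma, Gamma, Gamma, Gamma))
```

For $k^{(1)}=\Gamma$ the scan returns the single triple of zero vectors, in agreement with the set $\CA_{1,1,1,1}$ found below for $N=1$.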
First of all, it is clear that for any $k^{(\flat)}\in\B$ the sets $M_\flat$ and $M_\flat^{(2)}$ contain only vectors $z\in \Z^2$ with $z_1,z_2\in \{-1,0,1\}$. To determine $\CA_{\alpha,\beta,\gamma,j}$, we therefore need to test only finitely many integer vectors $(n,o,q)$ for conditions \eqref{E:o_cond}, \eqref{E:n_cond}. For an example with $N=3$ we show in Section \ref{S:CME_3_extrema} the resulting sets $\CA_{\alpha,\beta,\gamma,j}$ computed using this routine. \subsubsection{Coupled Mode Equations near Edges for the Example in Section \ref{S:hex_struct}}\label{S:CME_hex} \paragraph{Coupled Mode Equations near the Edges $s_1,s_2$ and $s_3$ ($N=1$)} \label{S:CME_N1}\ At the edges $s_1$, $s_2$ and $s_3$ in Figure \ref{F:band_str} the situation is particularly simple. As discussed at the beginning of Section \ref{S:CME_examples}, we have $N=1$ and $k^{(1)} =\Gamma =\bspm 0 \\0 \espm$. Since $k^{(1)}\in \operatorname{int}(\B)$, any (small) neighborhood of $k^{(1)}$ lies completely within $\B$ and thus $M_1=\{\bspm 0 \\0 \espm\}$. A simple inspection determines that we have $\CA_{1,1,1,1} = \big\{(\bspm 0 \\0 \espm, \bspm 0\\0 \espm, \bspm 0 \\0 \espm)\big\}$. The resulting coupled mode equation for $A=A_1$ is \beq\label{E:CME_1} \left(\Omega +\alpha (\partial_{y_1}^2 +\partial_{y_2}^2)\right)A+\gamma |A|^2A=0, \ee where $\alpha =\tfrac{1}{2}\partial_{k_1}^2\omega_{n_1}(\Gamma) =\tfrac{1}{2}\partial_{k_2}^2\omega_{n_1}(\Gamma)$ (cf.{} \eqref{E:der2_Gamma}) and $\gamma=I_{1,1,1,1}$. The three cases $s_1,s_2$ and $s_3$ differ by the value of $n_1$, i.e.{} the band index. At $\omega_*=s_1$ we have $n_1=1$, at $\omega_*=s_2$ we have $n_1=6$ and at $\omega_*=s_3$ we have $n_1=7$. As discussed at the end of Section \ref{S:cme}, at the upper edges $s_1,s_3$ we have $\Omega=-1$ and the function $\chici$ has the value $1$ in the annulus regions and $0$ otherwise. At $s_2$ we have $\Omega=1$ and $\chici=-1$ in the annuli.
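As a side remark (not part of the derivation above), whenever the sign conditions just discussed hold, i.e.{} $\Omega\alpha<0$ and $\Omega\gamma<0$, equation \eqref{E:CME_1} can be rescaled to a parameter-free form:

```latex
% Standard rescaling of (E:CME_1), assuming \Omega\alpha<0 and \Omega\gamma<0.
\[
A(y)=\sqrt{-\tfrac{\Omega}{\gamma}}\;
B\Big(\sqrt{-\tfrac{\Omega}{\alpha}}\,y\Big)
\qquad\Longrightarrow\qquad
\Delta B-B+|B|^2B=0.
\]
```

Localized solutions of \eqref{E:CME_1} thus correspond to solutions of the stationary two-dimensional nonlinear Schr\"odinger equation, whose positive, radially symmetric ground state is well known; at $s_2$, for instance, $\Omega=1$, $\alpha<0$ and $\gamma<0$ satisfy both sign conditions.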
In Section \ref{S:numerics_s2} we present a numerical example of a gap soliton approximation near $s_2$. We list here, therefore, the numerical values of the CME coefficients for the case $s_2$: \[ \omega_*=s_2\approx 3.701: \alpha\approx -0.0107,\ \gamma\approx -3.057. \] \paragraph{Coupled Mode Equations near the Edge $s_5$ ($N=6$)}\ At the upper edge $s_5$ in Figure \ref{F:band_str} we have $N=6$, $n_1=n_2=\ldots=n_6=13$, $k^{(1)} \approx (0,2.458)$ lying on the line from $\Gamma$ to $r_{2\pi/3}(M)$, and $k^{(j)}$, $j=2,\ldots,6$, obtained via a rotation of $k^{(1)}$. In detail \[ k^{(j)} = r_{(j-1)\tfrac{\pi}{3}}(k^{(1)}) \qquad\text{for }j=2,\ldots,6. \] The symmetry properties \eqref{E:der2_rot_sym}, \eqref{E:der2_refl_sym}, and \eqref{E:der2_refl_symB} produce relations among the linear coefficients of the CMEs. The sets $\CA_{\alpha,\beta,\gamma,j}$ are either empty or contain solely the element $\big(\bspm 0 \\0 \espm, \bspm 0 \\0 \espm, \bspm 0 \\0 \espm\big)$ as checked by the Matlab routine.
The resulting CMEs are \beq\label{E:CME_s5} \begin{array}{rl} (\Omega+\alpha_1\pa_{y_1}^2+\beta_1\pa_{y_2}^2)A_1+N_1 =&0,\\ (\Omega+\alpha_2\pa_{y_1}^2+\beta_2\pa_{y_2}^2 +\mu\pa^2_{y_1,y_2})A_2+N_2 =& 0,\\ (\Omega+\alpha_2\pa_{y_1}^2+\beta_2\pa_{y_2}^2 -\mu\pa^2_{y_1,y_2})A_3+N_3 =& 0,\\ (\Omega+\alpha_1\pa_{y_1}^2+\beta_1\pa_{y_2}^2)A_4+N_4 =& 0,\\ (\Omega+\alpha_2\pa_{y_1}^2+\beta_2\pa_{y_2}^2 +\mu\pa^2_{y_1,y_2})A_5+N_5 =& 0,\\ (\Omega+\alpha_2\pa_{y_1}^2+\beta_2\pa_{y_2}^2 -\mu\pa^2_{y_1,y_2})A_6+N_6 =& 0, \end{array} \ee where $\Omega=-1$, $\alpha_1 = \pa_{k_1}^2\omega_{13}(k^{(1)})$, $\beta_1 = \pa_{k_2}^2\omega_{13}(k^{(1)})$, $\alpha_2 = \tfrac{1}{4}(\alpha_1+3\beta_1)$, $\beta_2=\tfrac{1}{4}(3\alpha_1+\beta_1)$, $\mu = \tfrac{\sqrt{3}}{4}(\alpha_1-\beta_1) = \pa_{k_1,k_2}^2\omega_{13}(k^{(2)})$, and \[ \begin{split} N_1 & = 2\sum_{i=1}^6 I_{i,1,i,1}|A_i|^2A_1-I_{1,1,1,1}|A_1|^2A_1 +2(I_{2,5,4,1}A_2A_5+I_{3,6,4,1}A_3A_6)\bar{A}_4,\\ N_2 & = 2\sum_{i=1}^6 I_{i,2,i,2}|A_i|^2A_2-I_{2,2,2,2}|A_2|^2A_2 +2(I_{1,4,5,2}A_1A_4+I_{3,6,5,2}A_3A_6)\bar{A}_5,\\ N_3 & = 2\sum_{i=1}^6 I_{i,3,i,3}|A_i|^2A_3-I_{3,3,3,3}|A_3|^2A_3 +2(I_{1,4,6,3}A_1A_4+I_{2,5,6,3}A_2A_5)\bar{A}_6,\\ N_4 & = 2\sum_{i=1}^6 I_{i,4,i,4}|A_i|^2A_4-I_{4,4,4,4}|A_4|^2A_4 +2(I_{2,5,1,4}A_2A_5+I_{3,6,1,4}A_3A_6)\bar{A}_1,\\ N_5 & = 2\sum_{i=1}^6 I_{i,5,i,5}|A_i|^2A_5-I_{5,5,5,5}|A_5|^2A_5 +2(I_{1,4,2,5}A_1A_4+I_{3,6,2,5}A_3A_6)\bar{A}_2,\\ N_6 & = 2\sum_{i=1}^6 I_{i,6,i,6}|A_i|^2A_6-I_{6,6,6,6}|A_6|^2A_6 +2(I_{1,4,3,6}A_1A_4+I_{2,5,3,6}A_2A_5)\bar{A}_3. \end{split} \] Due to symmetries, many of the coefficients in the nonlinear terms are equal.
Symmetry \eqref{E:I_sym_rot} with $\nu=\pi/3$ and symmetry \eqref{E:I_sym1} imply \[ \begin{split} \gamma_0:=&I_{1,1,1,1}= I_{2,2,2,2}=\ldots=I_{6,6,6,6},\\ \gamma_1:=&I_{2,1,2,1}=I_{3,2,3,2}=\ldots=I_{6,5,6,5}=I_{1,6,1,6}\\ =&I_{1,2,1,2}=I_{2,3,2,3}=\ldots=I_{5,6,5,6}=I_{6,1,6,1},\\ \gamma_2:=&I_{3,1,3,1}=I_{4,2,4,2}=I_{5,3,5,3}=I_{6,4,6,4}=I_{1,5,1,5}=I_{2,6,2,6}\\ =&I_{1,3,1,3}=I_{2,4,2,4}=I_{3,5,3,5}=I_{4,6,4,6}=I_{5,1,5,1}=I_{6,2,6,2},\\ \gamma_3:=&I_{4,1,4,1}=I_{5,2,5,2}=I_{6,3,6,3} =I_{1,4,1,4}=I_{2,5,2,5}=I_{3,6,3,6},\\ \gamma_4:=&I_{2,5,1,4}=I_{3,6,2,5}=I_{1,4,3,6} =I_{2,5,4,1}=I_{3,6,5,2}=I_{1,4,6,3}. \end{split} \] Using \eqref{E:I_sym2} and \eqref{E:I_sym1}, we get \[ \overline{\gamma_4}=I_{3,6,4,1}=I_{3,6,1,4}=I_{1,4,5,2}=I_{1,4,2,5} =I_{2,5,6,3}=I_{2,5,3,6}. \] We have $\gamma_0,\gamma_1,\gamma_2,\gamma_3\in \R$ as explained below \eqref{E:I_sym2}. Finally, because $k^{(1)}=(k^{(4)}_1,-k^{(4)}_2)^T$, $k^{(2)}=(k^{(3)}_1,-k^{(3)}_2)^T$, and $k^{(5)}=(k^{(6)}_1,-k^{(6)}_2)^T$ with $k^{(1)}, k^{(2)}$ and $k^{(5)}$ lying in the interior of $\B$ away from the line $k_2=0$, the symmetry \eqref{E:I_sym_refl} applies and we get \[ I_{2,5,4,1}=I_{3,6,1,4}. \] Therefore \beq\label{E:I_real} I_{2,5,4,1}=I_{3,6,1,4}=I_{3,6,4,1}=I_{4,1,5,2}=\overline{I_{2,5,4,1}} \ee so that also $\gamma_4\in\R$. The second, third and fourth equalities in \eqref{E:I_real} hold due to \eqref{E:I_sym1}, \eqref{E:I_sym_rot}, and \eqref{E:I_sym2}.
As a result the nonlinear terms in \eqref{E:CME_s5} can be simplified to \[ \begin{split} N_1 & := 2\left(\frac{\gamma_0}{2}|A_1|^2+\gamma_1(|A_2|^2+|A_6|^2)+\gamma_2(|A_3|^2+|A_5|^2) +\gamma_3|A_4|^2\right)A_1 +2\gamma_4(A_2A_5+A_3A_6)\barl{A_4},\\ N_2 & := 2\left(\frac{\gamma_0}{2}|A_2|^2+\gamma_1(|A_1|^2+|A_3|^2)+\gamma_2(|A_4|^2+|A_6|^2) +\gamma_3|A_5|^2\right)A_2 +2\gamma_4(A_1A_4+A_3A_6)\barl{A_5},\\ N_3 & := 2\left(\frac{\gamma_0}{2}|A_3|^2+\gamma_1(|A_2|^2+|A_4|^2)+\gamma_2(|A_1|^2+|A_5|^2) +\gamma_3|A_6|^2\right)A_3 +2\gamma_4(A_1A_4+A_2A_5)\barl{A_6},\\ N_4 & := 2\left(\frac{\gamma_0}{2}|A_4|^2+\gamma_1(|A_3|^2+|A_5|^2)+\gamma_2(|A_2|^2+|A_6|^2) +\gamma_3|A_1|^2\right)A_4 +2\gamma_4(A_2A_5+A_3A_6)\barl{A_1},\\ N_5 & := 2\left(\frac{\gamma_0}{2}|A_5|^2+\gamma_1(|A_4|^2+|A_6|^2)+\gamma_2(|A_1|^2+|A_3|^2) +\gamma_3|A_2|^2\right)A_5 +2\gamma_4(A_1A_4+A_3A_6)\barl{A_2},\\ N_6 & := 2\left(\frac{\gamma_0}{2}|A_6|^2+\gamma_1(|A_1|^2+|A_5|^2)+\gamma_2(|A_2|^2+|A_4|^2) +\gamma_3|A_3|^2\right)A_6 +2\gamma_4(A_1A_4+A_2A_5)\barl{A_3} \end{split} \] with $\gamma_0,\gamma_1,\gamma_2,\gamma_3,\gamma_4\in \R$. A system of six CMEs with the same structure as above arises also at the edge $s_4$. In Section \ref{S:numerics_s5} a numerical example of gap soliton asymptotics near $s_5$ is given. The numerical values of the coefficients in the CMEs \eqref{E:CME_s5} for $s_5$ are \[ \begin{split} \omega_*=s_5 \approx 3.882:\ & \alpha_1 \approx 0.0189,\,\alpha_2 \approx 0.146,\,\beta_1 \approx 0.189,\, \beta_2 \approx 0.0614,\,\mu\approx -0.0736,\\ & \gamma_0\approx 1.282,\,\gamma_1\approx 0.789, \gamma_2\approx 0.757,\, \gamma_3\approx 1.193,\,\gamma_4\approx 0.714. \end{split} \] As $s_5$ is an upper edge, the coefficients $\gamma_j$, $j\in\{0,\dots,4\}$, were computed using $\chici=1$ in the annulus regions.
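As a quick consistency check (ours, not from the text), the reported values of $\alpha_2$, $\beta_2$ and $\mu$ indeed follow from $\alpha_1$ and $\beta_1$ via the rotation-symmetry relations stated below \eqref{E:CME_s5}:

```python
import math

# Reported second-derivative coefficients at the edge s_5.
alpha1, beta1 = 0.0189, 0.189

# Relations alpha_2 = (alpha_1 + 3 beta_1)/4, beta_2 = (3 alpha_1 + beta_1)/4,
# mu = sqrt(3)/4 (alpha_1 - beta_1).
alpha2 = (alpha1 + 3 * beta1) / 4
beta2 = (3 * alpha1 + beta1) / 4
mu = math.sqrt(3) / 4 * (alpha1 - beta1)

# Close to the quoted values 0.146, 0.0614, -0.0736.
print(alpha2, beta2, mu)
```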
\subsubsection{Additional CME Examples} \paragraph{Example of Coupled Mode Equations for $N=2$.}\label{S:CME_N2}\ An example of a situation for $N=2$ is when the locations of the extrema are $k^{(1)}=K$, $k^{(2)}=r_{\pi/3}(K)$. With $b^{(1)}, b^{(2)}$ as in Section \ref{S:hex_struct} we then have $k^{(1)} =\tfrac{4\pi}{3a_0}\bspm 1 \\0 \espm$ and $k^{(2)} =\tfrac{2\pi}{3a_0}\bspm 1\\\sqrt{3} \espm$. The corresponding integer shift sets are $M_1 = \{\bspm 0 \\0 \espm, \bspm 0\\ 1 \espm, \bspm 1 \\1 \espm\}$, $M_2 = \{\bspm 0 \\0 \espm, \bspm 1 \\0 \espm, \bspm 1 \\1 \espm\}$. Due to the rotation symmetry of the bands and their labeling according to size, we necessarily have $n_1=n_2$. We define $n_*:=n_1=n_2$. From \eqref{E:mix_K} we have \[ \partial_{k_1,k_2}^2\omega_{n_*}(k^{(1)})=0 \] and using \eqref{E:der2_rot_sym} with $\alpha=\pi/3$, we obtain \begin{align*} \partial_{k_1}^2\omega_{n_*}(k^{(2)}) &=\tfrac{1}{4}\big(\partial_{k_1}^2\omega_{n_*}(k^{(1)}) +3\partial_{k_2}^2\omega_{n_*}(k^{(1)})\big),\\ \partial_{k_2}^2\omega_{n_*}(k^{(2)}) &=\tfrac{1}{4}\big(3\partial_{k_1}^2\omega_{n_*}(k^{(1)}) +\partial_{k_2}^2\omega_{n_*}(k^{(1)})\big),\\ \partial_{k_1,k_2}^2\omega_{n_*}(k^{(2)}) &=\tfrac{\sqrt{3}}{4}\big(\partial_{k_1}^2\omega_{n_*}(k^{(1)}) -\partial_{k_2}^2\omega_{n_*}(k^{(1)})\big).
\end{align*} After having numerically checked the sets $\CA_{\alpha,\beta,\gamma,j}$ for all combinations of $\alpha,\beta,\gamma,j$ to determine the nonlinear terms, we arrive at the CMEs \beq\label{E:CME_K_Kprime} \begin{split} \big(\Omega +\alpha_1\partial_{y_1}^2 +\beta_1\partial_{y_2}^2\big)A_1+\big(\gamma_0|A_1|^2 +2\gamma_1|A_2|^2\big)A_1 &=0, \\ \big(\Omega +\alpha_2\partial_{y_1}^2+\beta_2\partial_{y_2}^2 +\mu\partial_{y_1,y_2}^2\big)A_2+\big(\gamma_0|A_2|^2 +2\gamma_1|A_1|^2\big)A_2 &=0, \end{split} \ee where $\alpha_1=\tfrac{1}{2}\partial_{k_1}^2\omega_{n_*}(k^{(1)})$, $\beta_1=\tfrac{1}{2}\partial_{k_2}^2\omega_{n_*}(k^{(1)})$, and $\alpha_2=\tfrac{1}{4}(\alpha_1+3\beta_1)$, $\beta_2=\tfrac{1}{4}(3\alpha_1+\beta_1)$, $\mu=\tfrac{\sqrt{3}}{2}(\alpha_1-\beta_1)$, $\gamma_0:= I_{1,1,1,1}=I_{2,2,2,2}$ using symmetry \eqref{E:I_sym_rot} with $\nu=\pi/3$, and $\gamma_1:= I_{1,2,1,2}=I_{1,2,2,1}$ using \eqref{E:I_sym1}. \paragraph{Example of Coupled Mode Equations for $N=3$.}\label{S:CME_3_extrema}\ Let us assume that a gap edge for $N=3$ has extremal points at $k^{(1)}=M$, $k^{(2)}=r_{\pi/3}(M)$, $k^{(3)}=r_{2\pi/3}(M)$. With the choice of the reciprocal lattice vectors $b^{(1)}, b^{(2)}$ as in Section \ref{S:hex_struct} we have $k^{(1)}=\tfrac{1}{2}b^{(2)}$, $k^{(2)}=\tfrac{1}{2}\big(b^{(1)}+b^{(2)}\big)$, and $k^{(3)}=\tfrac{1}{2}b^{(1)}$ with the corresponding integer shift sets $M_1 = \{\bspm 0\\ 0 \espm, \bspm 0\\ 1 \espm\}$, $M_2 = \{\bspm 0 \\0 \espm, \bspm 1\\ 1 \espm\}$, and $M_3 = \{\bspm 0\\ 0 \espm, \bspm 1\\ 0 \espm\}$. Similarly to Section \ref{S:CME_N2} we have $n_1=n_2=n_3=:n_*$.
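The relations $\alpha_2=\tfrac{1}{4}(\alpha_1+3\beta_1)$, $\beta_2=\tfrac{1}{4}(3\alpha_1+\beta_1)$ and $\mu=\tfrac{\sqrt{3}}{2}(\alpha_1-\beta_1)$ in the $N=2$ example express nothing more than the transformation of the Hessian of $\omega_{n_*}$ under the rotation $r_{\pi/3}$: if $\omega_{n_*}(r_\nu k)=\omega_{n_*}(k)$, the Hessian at $r_\nu k^{(1)}$ is $R\,H\,R^T$. A short numerical check (Python sketch; the sample values of $\alpha_1,\beta_1$ are arbitrary):

```python
import numpy as np

def rotate_hessian(H, nu):
    """Hessian of omega at r_nu(k), given the Hessian H at k, assuming the
    band structure satisfies omega(r_nu k) = omega(k)."""
    c, s = np.cos(nu), np.sin(nu)
    R = np.array([[c, -s], [s, c]])
    return R @ H @ R.T

alpha1, beta1 = 0.3, 0.7                      # arbitrary sample values
H1 = np.diag([2 * alpha1, 2 * beta1])         # Hessian at k^(1); mixed term zero
H2 = rotate_hessian(H1, np.pi / 3)            # Hessian at k^(2) = r_{pi/3}(k^(1))

assert np.isclose(H2[0, 0] / 2, (alpha1 + 3 * beta1) / 4)       # alpha_2
assert np.isclose(H2[1, 1] / 2, (3 * alpha1 + beta1) / 4)       # beta_2
assert np.isclose(H2[0, 1], np.sqrt(3) / 2 * (alpha1 - beta1))  # mu
```

The factor $1/2$ accounts for $\alpha_i,\beta_i$ being half the corresponding second derivatives, while the mixed-derivative coefficient $\mu$ enters without it.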
Using \eqref{E:der2_rot_sym} and \eqref{E:mix_Mprime}--\eqref{E:M_der}, we get \begin{align*} &\partial_{k_1}^2\omega_{n_*}(k^{(1)}) =\partial_{k_1}^2\omega_{n_*}(k^{(2)}) =\partial_{k_1}^2\omega_{n_*}(k^{(3)}) =\partial_{k_2}^2\omega_{n_*}(k^{(1)}) =\partial_{k_2}^2\omega_{n_*}(k^{(2)}) =\partial_{k_2}^2\omega_{n_*}(k^{(3)})=:\alpha,\\ &\partial_{k_1,k_2}^2\omega_{n_*}(k^{(1)}) =\partial_{k_1,k_2}^2\omega_{n_*}(k^{(2)}) =\partial_{k_1,k_2}^2\omega_{n_*}(k^{(3)})=0. \end{align*} \begin{table}[h!] \footnotesize \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $j$ & term & $\bspm\alpha \\ \beta\\ \gamma\espm$ & $k^{(\alpha)}+k^{(\beta)}$& \multicolumn{2}{|c|}{$(n,o,q)^T$ from $\CA_{\alpha,\beta,\gamma,j}$ }& coefficient of \\ & in $\CN_j$ & & $-k^{(\gamma)}-k^{(j)}$ & $m=M_j(:,1)$ & $m=M_j(:,2)$ & the term\\ \hline 1 & $|A_1|^2A_1$ & $\bspm 1\\1\\1\espm$ & $\bspm 0\\0\espm$ & $\bspm 0 & 0 \\0 &0\\0&0\espm, \bspm 0&0\\0&1\\0&1\espm$ & $\bspm 0&1\\0&0\\0&0\espm, \bspm 0&1\\0&1\\0&1\espm$ & $I_{1,1,1,1}$\\ \cline{2-7} & $|A_2|^2A_1$ & $\bspm 1\\2\\2\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&1\\1&1\espm$ & $\bspm 0&1\\0&0\\0&0\espm, \bspm 0&1\\1&1\\1&1\espm$ & $2 I_{1,2,2,1}$\\ \cline{3-6} & & $\bspm 2\\1\\2\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&1\\1&1\espm, \bspm 1&0\\0&1\\1&1\espm, \bspm 1&0\\-1&0\\0&0\espm$ & $\bspm 0&1\\0&0\\0&0\espm, \bspm 0&1\\1&1\\1&1\espm, \bspm 1&1\\0&1\\1&1\espm, \bspm 1&1\\-1&0\\0&0\espm$ & \\ \cline{2-7} & $|A_3|^2A_1$ & $\bspm 1\\3\\3\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&0\\1&0\espm$ & $\bspm 0&1\\0&0\\0&0\espm, \bspm 0&1\\1&0\\1&0\espm$ & $2 I_{1,3,3,1}$\\ \cline{3-6} & & $\bspm 3\\1\\3\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&-1\\0&1\\0&0\espm, \bspm 0&-1\\1&1\\1&0\espm, \bspm 1&0\\0&0\\1&0\espm, \bspm 1&0\\-1&0\\0&0\espm$ & $\bspm 0&0\\0&1\\0&0\espm, \bspm 0&0\\1&1\\1&0\espm, \bspm 1&1\\0&0\\1&0\espm, \bspm 1&1\\-1&0\\0&0\espm$ & \\ \cline{2-7} &
$A_2^2\barl{A_1}$ & $\bspm 2\\2\\1\espm$ & $b^{(1)}$ & $\bspm 0&0\\1&0\\0&0\espm, \bspm 0&0\\1&1\\0&1\espm, \bspm 1&0\\0&0\\0&0\espm, \bspm 1&0\\0&1\\0&1\espm$ & $\bspm 0&1\\1&0\\0&0\espm, \bspm 0&1\\1&1\\0&1\espm, \bspm 1&1\\0&0\\0&0\espm, \bspm 1&1\\0&1\\0&1\espm$ & $I_{2,2,1,1}$\\ \cline{2-7} & $A_3^2\barl{A_1}$ & $\bspm 3\\3\\1\espm$ & $b^{(1)}-b^{(2)}$ & $\bspm 0&-1\\1&0\\0&0\espm, \bspm 0&-1\\1&1\\0&1\espm, \bspm 1&0\\0&0\\0&1\espm, \bspm 1&0\\0&-1\\0&0\espm$ & $\bspm 0&0\\1&0\\0&0\espm, \bspm 0&0\\1&1\\0&1\espm, \bspm 1&1\\0&0\\0&1\espm, \bspm 1&1\\0&-1\\0&0\espm$ & $I_{3,3,1,1}$\\ \hline 2 & $A_1^2\barl{A_2}$ & $\bspm 1\\1\\2\espm$ & $-b^{(1)}$ & $\bspm 0 & 0 \\0 &1\\1&1\espm, \bspm 0&0\\-1&0\\0&0\espm, \bspm -1&0\\0 &0\\0&0\espm, \bspm -1&0 \\1 &1\\1&1\espm$ & $\bspm 0&1\\0&0\\0&0\espm, \bspm 0&1\\1&1\\1&1\espm, \bspm 1&1\\0&1\\1&1\espm, \bspm 1&1\\-1&0\\0&0\espm$ & $I_{1,1,2,2}$\\ \cline{2-7} & $|A_1|^2A_2$ & $\bspm 1\\2\\1\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\0&1\\0&1\espm, \bspm -1&0\\1&0\\0&0\espm, \bspm -1&0\\1&1\\0&1\espm$& $\bspm 0&1\\1&0\\0&0\espm, \bspm 0&1\\1&1\\0&1\espm, \bspm 1&1\\0&0\\0&0\espm, \bspm 1&1\\0&1\\0&1\espm $ & $2 I_{1,2,1,2}$\\ \cline{3-6} & & $\bspm 2\\1\\1\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\0&1\\0&1\espm$ & $\bspm 1&1\\0&0\\0&0\espm, \bspm 1&1\\0&1\\0&1\espm$ & \\ \cline{2-7} & $|A_2|^2A_2$ & $\bspm 2\\2\\2\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&1\\1&1\espm$ & $\bspm 1&1\\0&0\\0&0\espm, \bspm 1&1\\1&1\\1&1\espm$ & $I_{2,2,2,2}$\\ \cline{2-7} & $|A_3|^2A_2$ & $\bspm 2\\3\\3\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&0\\1&0\espm$& $\bspm 1&1\\0&0\\0&0\espm, \bspm 1&1\\1&0\\1&0\espm$ & $2 I_{2,3,3,2}$\\ \cline{3-6} & & $\bspm 3\\2\\3\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&0\\1&0\espm, \bspm 0&-1\\0&1\\0&0\espm, \bspm 0&-1\\1&1\\1&0\espm$& $\bspm 1&0\\0&1\\0&0\espm, \bspm 1&0\\1&1\\1&0\espm, 
\bspm 1&1\\0&0\\0&0\espm, \bspm 1&1\\1&0\\1&0\espm $ & \\ \cline{2-7} & $A_3^2\barl{A_2}$ & $\bspm 3\\3\\2\espm$ & $-b^{(2)}$ & $\bspm 0&0\\1&0\\1&1\espm, \bspm 0&0\\0&-1\\0&0\espm, \bspm 0&-1\\0&0\\0&0\espm, \bspm 0&-1\\1&1\\1&1\espm$ & $\bspm 1&0\\0&0\\0&0\espm, \bspm 1&0\\1&1\\1&1\espm, \bspm 1&1\\1&0\\1&1\espm, \bspm 1&1\\0&-1\\0&0\espm$ & $I_{3,3,2,2}$\\ \hline 3 & $A_1^2\barl{A_3}$ & $\bspm 1\\1\\3\espm$ & $b^{(2)}-b^{(1)}$ & $\bspm 0&1\\0&0\\1&0\espm, \bspm 0&1\\-1&0\\0&0\espm, \bspm -1&0\\0&1\\0&0\espm, \bspm -1&0\\1&1\\1&0\espm$& $\bspm 0&0\\0&1\\0&0\espm, \bspm 0&0\\1&1\\1&0\espm, \bspm 1&1\\0&0\\1&0\espm, \bspm 1&1\\-1&0\\0&0\espm$ & $I_{1,1,3,3}$\\ \cline{2-7} & $|A_1|^2 A_3$ & $\bspm 1\\3\\1\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&1\\0&0\\0&1\espm, \bspm 0&1\\0&-1\\0&0\espm, \bspm -1&0\\1&0\\0&0\espm, \bspm -1&0\\1&1\\0&1\espm$& $\bspm 0&0\\1&0\\0&0\espm, \bspm 0&0\\1&1\\0&1\espm, \bspm 1&1\\0&0\\0&1\espm, \bspm 1&1\\0&-1\\0&0\espm$ & $2I_{1,3,1,3}$\\ \cline{3-6} & & $\bspm 3\\1\\1\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\0&1\\0&1\espm$& $\bspm 1&0\\0&0\\0&0\espm, \bspm 1&0\\0&1\\0&1\espm$ & \\ \cline{2-7} & $A_2^2 \barl{A_3}$ & $\bspm 2\\2\\3\espm$ & $b^{(2)}$ & $\bspm 0&0\\0&1\\0&0\espm, \bspm 0&0\\1&1\\1&0\espm, \bspm 0&1\\0&0\\0&0\espm, \bspm 0&1\\1&0\\1&0\espm$& $\bspm 1&0\\0&1\\0&0\espm, \bspm 1&0\\1&1\\1&0\espm, \bspm 1&1\\0&0\\0&0\espm, \bspm 1&1\\1&0\\1&0\espm$ & $I_{2,2,3,3}$\\ \cline{2-7} & $|A_2|^2 A_3$ & $\bspm 2\\3\\2\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&1\\1&1\espm, \bspm 0&1\\1&0\\1&1\espm, \bspm 0&1\\0&-1\\0&0\espm$& $\bspm 1&0\\0&0\\0&0\espm, \bspm 1&0\\1&1\\1&1\espm, \bspm 1&1\\1&0\\1&1\espm, \bspm 1&1\\0&-1\\0&0\espm$ & $2I_{2,3,2,3}$\\ \cline{3-6} & & $\bspm 3\\2\\2\espm$ & $\bspm 0\\0\espm$ & $\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&1\\1&1\espm$& $\bspm 1&0\\0&0\\0&0\espm, \bspm 1&0\\1&1\\1&1\espm$ & \\ \cline{2-7} & $|A_3|^2A_3$ & $\bspm 3\\3\\3\espm$ & $\bspm 0\\0\espm$ & 
$\bspm 0&0\\0&0\\0&0\espm, \bspm 0&0\\1&0\\1&0\espm$& $\bspm 1&0\\0&0\\0&0\espm, \bspm 1&0\\1&0\\1&0\espm$ & $I_{3,3,3,3}$\\ \hline \end{tabular} \caption{\label{T:NL_terms_3extrema}\small Calculation of the nonlinear terms for Section \ref{S:CME_3_extrema}.} \end{center} \end{table} The sets $\CA_{\alpha,\beta,\gamma,j}$ are, once again, determined using the Matlab routine and the results are listed for illustration in Table \ref{T:NL_terms_3extrema}. The resulting CMEs are \beq\label{E:CME_3extrema} \begin{split} \left(\Omega +\alpha(\partial_{y_1}^2+\partial_{y_2}^2)\right)A_1 +\left(\gamma_0|A_1|^2+2\gamma_1(|A_2|^2+|A_3|^2)\right)A_1 +\gamma_2(A_2^2+A_3^2)\barl{A_1} & = 0,\\ \left(\Omega+\alpha(\partial_{y_1}^2+\partial_{y_2}^2)\right)A_2 +\left(\gamma_0|A_2|^2+2\gamma_1(|A_1|^2+|A_3|^2)\right)A_2 +\gamma_2(A_1^2+A_3^2)\barl{A_2} & = 0,\\ \left(\Omega +\alpha(\partial_{y_1}^2+\partial_{y_2}^2)\right)A_3 +\left(\gamma_0|A_3|^2+2\gamma_1(|A_1|^2 +|A_2|^2)\right)A_3 +\gamma_2(A_1^2+A_2^2)\barl{A_3} & = 0, \end{split} \ee where the following symmetries have been used: $\gamma_0:=I_{1,1,1,1}=I_{2,2,2,2}=I_{3,3,3,3}$ due to \eqref{E:I_sym_rot} with $\nu=\pi/3$; $\gamma_1:=I_{1,2,2,1}=I_{2,3,3,2}=I_{1,2,1,2}=I_{2,3,2,3}$ due to \eqref{E:I_sym_rot} with $\nu=\pi/3$, and \eqref{E:I_sym1}. Moreover, $\gamma_1=I_{2,1,1,2}=I_{1,3,3,1}=I_{3,1,1,3}$, where the second equality follows from \eqref{E:I_sym_rot} with $\nu=\pi/3$ and the facts that $k^{(1)}=r_{\pi/3}(k^{(3)}-b^{(1)})$ and $u_n(k^{(3)}-b^{(1)};x)=u_n(k^{(3)};x)$ for all $n\in \N$. Finally $\gamma_2:=I_{2,2,1,1}=I_{3,3,2,2}=\overline{I_{1,1,2,2}}=\overline{I_{2,2,3,3}}$ due to \eqref{E:I_sym_rot} and \eqref{E:I_sym2}, and $\gamma_2=I_{2,2,1,1}=I_{1,1,3,3}$ using \eqref{E:I_sym_rot} together with $k^{(1)}=r_{\pi/3}(k^{(3)}-b^{(1)})$ and $u_n(k^{(3)}-b^{(1)};x)=u_n(k^{(3)};x)$ for all $n\in \N$.
All the nonlinear coefficients are real: $\gamma_0,\gamma_1\in \R$ due to \eqref{E:I_sym2} and $\gamma_2\in \R$ since $\gamma_2=I_{2,2,1,1}=\overline{I_{1,1,2,2}}$ by \eqref{E:I_sym2} and at the same time $\gamma_2=I_{2,2,1,1}=I_{1,1,2,2}$ by \eqref{E:I_sym_refl}, where we are using the facts that $k^{(2)}=(k^{(1)}_1,-k^{(1)}_2)^T$ and that $k^{(2)}\doteq k^{(1)}$ does not hold. \section{Numerical Examples of Gap Soliton Approximations}\label{S:numerics} Here we numerically compute localized solutions of the CMEs for the examples $s_2$ and $s_5$ from Section \ref{S:CME_hex}. Then, using the leading order term in \eqref{E:ansatz_phys}, we generate and plot an approximation of a gap soliton of the nonlinear Maxwell problem \eqref{E:NL_Maxw}. In the evaluation of \eqref{E:ansatz_phys} we position the photonic crystal so that the center of one of the annuli lies at the origin $x=0$. \subsection{Gap Soliton near the Edge $s_2$}\label{S:numerics_s2} Figure \ref{F:s2_envel_GS} plots in (a) the unique positive localized solution of \eqref{E:CME_1}, the so-called Townes soliton, for the case $\omega_*=s_2$ and in (b) the intensity $I=|E_1|^2+|E_2|^2+|E_3|^2$ of the leading order term in \eqref{E:ansatz_phys}. In Figure \ref{F:s2_E1-E3} we show the absolute value of the individual components $E_1, E_2, E_3$. As the Townes soliton is radially symmetric, it was computed using the shooting method on \eqref{E:CME_1} in polar coordinates. The explicit fourth/fifth-order Runge--Kutta method ODE45 of Matlab was used for the integration in the shooting procedure. \begin{figure}[h!] \begin{center} \epsfig{figure = FIGURES/envelope_s2_GS_approx.eps,scale=0.65} \caption{\label{F:s2_envel_GS} \small (a) CME solution, (b) intensity of the gap soliton approximation for the case $\omega_*=s_2$. See Section \ref{S:numerics_s2}.} \end{center} \end{figure} \begin{figure}[h!]
\begin{center} \hspace{-0.5cm} \epsfig{figure = FIGURES/GS_approx_s2_E1-E3.eps,scale=0.65} \caption{\label{F:s2_E1-E3}\small Absolute value of the components $E_1,E_2,E_3$ of the gap soliton approximation for $\omega_*=s_2$. See Section \ref{S:numerics_s2}.} \end{center} \end{figure} \subsection{Gap Soliton near the Edge $s_5$}\label{S:numerics_s5} Here we restrict ourselves to solutions of \eqref{E:CME_s5} with the symmetry \[ A_1=A_4,\ A_2=A_5,\ A_3=A_6, \] which reduces the problem to a system of three equations for $A_1,A_2,A_3$. To find a localized solution, we first replace $\mu$ by $0$, and $\alpha_1,\alpha_2,\beta_1,\beta_2$ by the average of these four numbers. Also the coefficients in each $\CN_j$, $j\in \{1,2,3\}$, are replaced by their average. For this modified system the Townes soliton with $A_1=A_2=A_3$ is computed via the shooting method as in Section \ref{S:numerics_s2}. Then a numerical homotopy in the coefficients is used to obtain a solution of \eqref{E:CME_s5}. The homotopy is applied to a fourth-order centered finite difference discretization of \eqref{E:CME_s5}. Our homotopy always results in $A_1=0$, so that in the end we produce a solution of \eqref{E:CME_s5} with $A_1=A_4=0$ and $A_2=A_5\neq 0$, $A_3=A_6\neq 0$. The two components $A_2,A_3$ are plotted in Figure \ref{F:s5_envel_GS} together with the intensity of the corresponding leading order term in \eqref{E:ansatz_phys}. In Figure \ref{F:s5_E1-E3} we plot the individual components of $E$ in absolute value. \begin{figure}[h!] \begin{center} \epsfig{figure = FIGURES/envelopes_s5_GS_approx,scale=0.9} \end{center} \caption{\label{F:s5_envel_GS}\small (a) CME solutions $A_2, A_3$, (b) intensity of the gap soliton approximation for the case $\omega_*=s_5$. See Section \ref{S:numerics_s5}.} \end{figure} \begin{figure}[h!]
\begin{center} \hspace{-0.5cm} \epsfig{figure = FIGURES/GS_approx_s5_E1-E3.eps,scale=0.9} \caption{\label{F:s5_E1-E3}\small Absolute value of the components $E_1,E_2,E_3$ of the gap soliton approximation for $\omega_*=s_5$. See Section \ref{S:numerics_s5}.} \end{center} \end{figure} \section{Conclusions} We have considered monochromatic out-of-plane gap solitons in Kerr nonlinear 2D photonic crystals as described by the full vector Maxwell system. Using a model of the nonlinear polarization which does not produce higher harmonics, we arrive at a cubically nonlinear curl-curl problem for the fundamental harmonic. For gap solitons with frequencies in spectral gaps but in an asymptotic vicinity of a gap edge we assume a standard slowly varying envelope approximation based on the gap edge Bloch waves modulated by slowly varying envelopes of small amplitude. These envelopes are then shown to satisfy a system of coupled mode equations (CMEs) of the same structure as in the case of gap solitons of the 2D periodic nonlinear Schr\"odinger equation \cite{DU09,DU_err11}. In particular, the system generally involves mixed derivatives. Being a constant-coefficient system depending only on the slow variables, the CMEs are a simple effective model for near-edge gap solitons. Similarly to \cite{DU09}, the derivation of the CMEs needs to be carried out in Bloch variables due to the possible quasi-periodicity of the gap edge Bloch waves. Symmetries among the coefficients of the CMEs are determined using symmetries of the band structure and among the Bloch waves. We provide an example of a photonic crystal with a hexagonal periodicity lattice and a circular material structure in the periodicity cell. For this crystal three gaps are numerically observed (for $\omega>0$). CMEs are then derived for several gap edges, including a case where a system of six CMEs arises.
Numerical computations of localized solutions of these CMEs and of the corresponding gap soliton approximations are then performed. For the CME system with six components only solutions with four nonzero components were numerically constructed and it is unclear whether a solution with all six nonzero components exists. A rigorous justification of the CMEs, which states that for a certain class of CME solutions the full Maxwell system has gap soliton solutions which are indeed approximated by the slowly varying envelope asymptotic expansion, is expected to hold by similar arguments to those in \cite{DPS09,DU09,DU_err11}. It will be the subject of future work. \section*{Acknowledgments} We thank Stefan Findeisen, Karlsruhe Institute of Technology, for carrying out the finite element computations in Section \ref{S:hex_struct}. T. Dohnal was partially supported by DFG Research Training Group 1924: Analysis, Simulation and Design of Nanotechnological Processes. \bibliographystyle{plain}
\section{Introduction} Interaction effects are expected to be important for the physics of bilayer graphene and may cause the formation of correlated many-body phases.\cite{Castro,Kotov} This needs to be contrasted with intrinsic monolayer graphene, in which a vanishing density of states at the Dirac points suppresses the influence of electronic correlations.\cite{Kotov,Goerbig} Recent experiments on suspended bilayer graphene, \cite{Martin, Weitz,Freitag,Lau} which is free of substrate effects, reveal a gapped state at and around the charge neutrality point. The state may be of topological origin\cite{Qi} due to the observed\cite{Martin,Freitag} conductance of the order of $e^2/h$ and may exhibit an anomalous quantum Hall effect, i.e. a quantum Hall effect at zero magnetic field. In the most recent experiment on high mobility samples from Ref.~\onlinecite{Lau}, a completely insulating behavior was found. From the theory point of view, several proposals have been made \cite{Min, Nandkishore,Zhang,Vafek,FZhang,Jung,Lemonik,ZhangMacD,Vafek_three,Kharitonov,Scherer} for the existence of gapped (and gapless) phases at the charge neutrality point, including those that break the time-reversal symmetry. Most of them are based on particle-hole (excitonic) binding, which is the most natural mechanism for a gapped phase at the charge neutrality point. These theories assume a quadratic dispersion of the electrons in the low-energy effective description \cite{McFa}, and the direct hopping between the two sublattices in different layers that leads to a linear dispersion (``trigonal warping'') is neglected. This assumption is justified if the chemical potential is not situated exactly at the charge-neutrality point.
To explore additional possibilities for gapped phases in the presence of a finite chemical potential, we discuss here superconducting instabilities, especially with an eye on the possibility of topological (fully gapped) superconductivity on the honeycomb bilayer. Bilayer graphene may potentially also be viewed as a strongly correlated system with the possibility of supporting a layered antiferromagnetic state,\cite{FZhang,Jung} similar to the Mott physics of high-$T_c$ superconductors. The existence of a layered antiferromagnetic state is supported by the most recent experiment with high quality samples, \cite{Lau} which features completely insulating behavior at the charge neutrality point. There is, so far, no systematic study of superconducting instabilities in the presence of electron-electron and electron-phonon interactions on the honeycomb bilayer at finite doping (see, however, Ref.~\onlinecite{Vafek_two} for fermions in the presence of weak electron-electron interactions only at zero chemical potential). To address this question, we study in the present paper a microscopic model of a single effective honeycomb monolayer with reduced nearest-neighbor hopping and third-neighbor hopping, in addition to inter-site attractive interactions. The kinetic term of the effective model is obtained by integrating out the ``high-energy'' degrees of freedom from the direct interlayer hopping (i.e. assuming strong interlayer hopping in the honeycomb bilayer), and the inter-site superexchange interaction originates from the Hubbard on-site repulsion. This model is to a certain degree biased toward antiferromagnetism and $d$-wave superconductivity, but preserves the usual low-energy description of bilayer graphene.\cite{McFa} Moreover, in contrast to the usual low-energy model of bilayer graphene, the present model accounts for the lattice symmetry of the original model (the honeycomb bilayer), which may be relevant for the symmetry of the superconducting order parameters.
Our primary interest here is to find the most probable symmetry of a superconducting instability on the honeycomb bilayer together with an understanding of its nature, i.e. whether this instability is topological. We also aim at an understanding of the change in the superconducting order parameter and correlations as we go from a monolayer to a few-layer honeycomb lattice. The mean-field solution of the introduced model yields a time-reversal symmetry breaking $d+id$-wave superconducting state at weak coupling, which continuously transforms into $d_{x^2-y^2}$-wave with increasing interaction. Near 3/8 and 5/8 filling of the $\pi$-bands, i.e.~near the van-Hove singularity in the density of states, the Cooper pairing becomes much stronger. Our conclusion is that the $d + i d$ instability is the leading superconducting instability of the honeycomb bilayer with strong interlayer hopping at finite doping, and the same instability may be present in bilayer graphene at finite doping. However, due to the presumed smallness of the coupling constant and order parameter, as well as strong quantum fluctuations in two dimensions, it may be difficult to detect this order experimentally in today's graphene samples. The remaining part of the paper is organized as follows. In Sec. II we define our effective two-band model on an effective honeycomb lattice with third-nearest-neighbor hopping. The model is then, in Sec. III, solved by a Bogoliubov--de Gennes (BdG) transformation for a singlet bond-pairing order parameter, and we discuss the relevant symmetries. Section IV presents the phase diagram obtained from a numerical solution of the BdG equations. In Sec. V, the relevance for the physics of bilayer graphene is discussed, and our main conclusions are presented in Sec. VI. Two Appendices summarize analytically obtained solutions in the weak-coupling BCS limit.
\section{Model} The honeycomb bilayer lattice consists of two Bernal-stacked honeycomb lattices, each consisting of two triangular sublattices as illustrated in Fig.~1, such that the unit cell contains four lattice sites. The Hamiltonian of free electrons on such a lattice is given by \begin{eqnarray} H_{0}& = & - t \sum_{\vec{j},\sigma} \sum_{\vec{u}} \left( a_{1,\vec{j},\sigma}^{\dagger} b_{1, \vec{j} + \vec{u},\sigma} + a_{2,\vec{j},\sigma}^{\dagger} b_{2, \vec{j} -\vec{u},\sigma} + \mbox{H.c}\right) \nonumber \\ && - t_{\bot} \sum_{\vec{j},\sigma} \left( a_{1,\vec{j},\sigma}^{\dagger} a_{2, \vec{j},\sigma} + \mbox{H.c}\right) \nonumber \\ && - \mu \sum_{i,\vec{j},\sigma} \left( a_{i,\vec{j},\sigma}^{\dagger}a_{i,\vec{j},\sigma} + b_{i,\vec{j},\sigma}^{\dagger} b_{i, \vec{j},\sigma}\right). \label{freeham} \end{eqnarray} Here, the index $i = 1,2$ denotes the layer and $\vec{j}$ enumerates primitive cells. The sum runs over $\vec{u}=\vec{u}_0,\vec{u}_1,\vec{u}_2$, where $\vec{u}_1=a(\frac{3}{2},\frac{\sqrt{3}}{2})$ and $\vec{u}_2=a(\frac{3}{2},-\frac{\sqrt{3}}{2})$ are the primitive vectors of the lattice, and $\vec{u}_0=(0,0)$ is an auxiliary vector denoting the hopping between sites in the same primitive cell. The norm of the primitive vectors is $|\vec{u}|=\sqrt{3}a$, in terms of the distance, $a$, between neighboring sites in each layer, and $t$ is the associated hopping energy, whereas $t_{\bot}$ denotes the interlayer hopping energy between A sites in the two different layers. The finite chemical potential $\mu$ takes into account doping, either due to the electric-field effect or to chemically active adatoms. The operators $a_{i,\vec{j},\sigma}^{\dagger}$ ($a_{i, \vec{j},\sigma}$) represent electron creation (annihilation) on the sublattice site $A_i$ of layer $i$ with spin $\sigma = \uparrow, \downarrow$, and $b_{i,\vec{j},\sigma}^{\dagger}$ ($b_{i, \vec{j},\sigma}$) those for electrons on the sublattice site $B_i$.
We use units such that $\hbar = 1$. \begin{figure} \centering \includegraphics[width=7cm]{Fig1.pdf} \caption{(Color online) (a) A view of Bernal stacked honeycomb lattices 1 and 2 with corresponding sublattice sites A1, B1, and A2, B2 respectively. (b) The model reduces to a monolayer model with the third neighbor hopping $\tilde t \equiv t^2 / t_\bot$ and the nearest neighbor hopping $2\tilde t$ (see the text). } \label{fig:01} \end{figure} By introducing the Fourier transforms $ a_{i, \vec{k}, \sigma} = \sum_{\vec{j}} a_{i, \vec{j}, \sigma} \exp( i \vec{k} \cdot \vec{j})$ and $ b_{i, \vec{k}, \sigma} = \sum_{\vec{j}} b_{i, \vec{j}, \sigma} \exp( i \vec{k} \cdot \vec{j})$, and diagonalizing the Hamiltonian, one obtains the spectrum \begin{equation} E^{\pm}_{\alpha} (\vec{k}) = \pm \left[(-1)^{\alpha} \frac{t_{\bot}}{2} + \sqrt{\frac{t_{\bot}^{2}}{4} + t^2 |\gamma_{\vec{k}}|^{2}}\right]\;, \label{dispersion_gb} \end{equation} where $\alpha = 1,2$ and $\pm$ denote the four different branches of the dispersion (written here for $\mu=0$; a finite chemical potential rigidly shifts all branches by $-\mu$) and \begin{equation}\label{gamma} \gamma_{\vec{k}} = \sum_{\vec u} e^{i\vec{k}\cdot\vec{u}} = 1 + e^{i\vec{k}\cdot\vec{u}_1}+e^{i\vec{k}\cdot\vec{u}_2}. \end{equation} In their original work,\cite{McFa} McCann and Fal'ko showed that the four-band model may be simplified to an effective two-band model if one considers energies much smaller than $t_{\bot}$. In momentum space, the Hamiltonian in Eq.~(\ref{freeham}) becomes \begin{eqnarray} H_0 &=& \sum_\sigma \int_{BZ} \frac{d^2 \vec{k}}{(2\pi)^2} \\ && \left\{ - t \left( \gamma_{\vec{k}} a_{1,\sigma,\vec{k}}^{\dagger} b_{1,\sigma,\vec{k}} \right.\right. + \left. \gamma^*_{\vec{k}} a_{2,\sigma,\vec{k}}^{\dagger} b_{2,\sigma,\vec{k}} + \text{H.c.}\right)\nonumber \\ && - t_\bot \left(a_{1,\sigma,\vec{k}}^{\dagger} a_{2,\sigma,\vec{k}} + \text{H.c.}\right)\nonumber \\ && - \mu \left(a_{1,\sigma,\vec{k}}^{\dagger} a_{1,\sigma,\vec{k}} + a_{2,\sigma,\vec{k}}^{\dagger} a_{2,\sigma,\vec{k}} \right. \nonumber \\ &&\left.\left.
+ b_{1,\sigma,\vec{k}}^{\dagger}b_{1,\sigma,\vec{k}} + b_{2,\sigma,\vec{k}}^{\dagger} b_{2,\sigma,\vec{k}}\right) \right\}. \end{eqnarray} If we introduce the spinor \begin{equation} \Psi_{\sigma}(\vec{k}) = (a_{1,\sigma, \vec{k}}, a_{2,\sigma, \vec{k}}, b_{2,\sigma, \vec{k}}, b_{1,\sigma, \vec{k}})^{T}, \end{equation} the Hamiltonian can be expressed as a $4\times 4$ matrix, \begin{equation} H_0(\vec{k}) = \sum_\sigma \Psi^\dagger_{\sigma}(\vec{k}) \left[\begin{array}{cccc} - \mu & -t_\bot & 0 & - t \gamma_{\vec{k}} \\ -t_\bot & - \mu & - t \gamma^*_{\vec{k}} & 0\\ 0 & - t \gamma_{\vec{k}} & - \mu & 0 \\ -t \gamma^*_{\vec{k}}& 0 & 0& - \mu\\ \end{array} \right] \Psi_{\sigma}(\vec{k}). \end{equation} One may further define $2 \times 2$ matrices $ H_{11} = - \mu I - t_\bot \sigma_x, \, H_{22} = - \mu I, \, H_{12} = - t (\mathrm{Re} \gamma_{\vec{k}} \sigma_x - \mathrm{Im} \gamma_{\vec{k}} \sigma_y) = H_{21}$, such that the eigenvalue equation can be written in the following form ($\vec{k}$ indices are implied) \begin{equation} \left[\begin{array}{cc} H_{11} & H_{12} \\ H_{21} & H_{22} \\ \end{array} \right] \left[\begin{array}{c} \Psi_1 \\ \Psi_2 \\ \end{array} \right]= E \left[\begin{array}{c} \Psi_1 \\ \Psi_2 \\ \end{array} \right], \end{equation} from which we obtain \begin{equation} \{H_{22} - H_{21} (H_{11} - E)^{-1} H_{12}\} \Psi_2 = E \Psi_2.\label{jed} \end{equation} If we assume $t_\bot$ to be the largest energy scale and consider the low-energy limit ($E \ll t_\bot$), Eq.~(\ref{jed}) becomes \begin{equation}\label{ham:eff} H_{\mathrm{eff}} \Psi_2 \equiv \left[\begin{array}{cc} - \mu & \frac{t^2}{t_\bot} \gamma_{\vec{k}}^2 \\ \frac{t^2}{t_\bot} \gamma_{\vec{k}}^{*2} & - \mu \\ \end{array} \right] \Psi_2 = E \Psi_2, \end{equation} with $ \Psi_2 (\vec {k})= (b_{2,\sigma,\vec{k}}, b_{1,\sigma,\vec{k}})^T.$ The two-band model described by the Hamiltonian in Eq. (\ref{ham:eff}) is also valid in the limit \cite{McFa} where $E\ll t_{\bot}\ll t$.
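The reduction can be checked numerically by diagonalizing the full $4\times 4$ Bloch matrix and comparing its two bands closest to zero energy with the effective eigenvalues $\pm\frac{t^2}{t_\bot}|\gamma_{\vec{k}}|^2$ (here at $\mu=0$). A sketch with assumed sample parameters $t=1$, $t_\bot=50\,t$, $a=1$:

```python
import numpy as np

a = 1.0
u1 = a * np.array([1.5,  np.sqrt(3) / 2])
u2 = a * np.array([1.5, -np.sqrt(3) / 2])

def gamma(k):
    return 1 + np.exp(1j * k @ u1) + np.exp(1j * k @ u2)

def four_band(k, t, t_perp, mu=0.0):
    """Bloch matrix of the free Hamiltonian in the basis (a1, a2, b2, b1)."""
    g = gamma(k)
    H = np.array([[-mu, -t_perp, 0, -t * g],
                  [-t_perp, -mu, -t * np.conj(g), 0],
                  [0, -t * g, -mu, 0],
                  [-t * np.conj(g), 0, 0, -mu]], dtype=complex)
    return np.linalg.eigvalsh(H)

t, t_perp = 1.0, 50.0          # sample values with t_perp >> t
k = np.array([0.4, 0.9])
E = four_band(k, t, t_perp)

# analytic four-band spectrum, cf. the dispersion formula above (mu = 0)
s = np.sqrt(t_perp**2 / 4 + t**2 * np.abs(gamma(k))**2)
assert np.allclose(np.sort(E),
                   [-(t_perp / 2 + s), -(s - t_perp / 2),
                    s - t_perp / 2, t_perp / 2 + s])

# the two bands closest to zero match the effective model +-(t^2/t_perp)|gamma|^2
E_eff = (t**2 / t_perp) * np.abs(gamma(k))**2
E_low = np.sort(E[np.argsort(np.abs(E))[:2]])
assert np.allclose(E_low, [-E_eff, E_eff], rtol=1e-2)
```

The relative deviation of the low bands from the effective model is of order $t^2|\gamma_{\vec k}|^2/t_\bot^2$, which is why a fairly loose tolerance suffices here.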
For energies larger than $t_{\bot}$, one needs to take into account the other two bands which overlap in energy with those considered in Eq. (\ref{ham:eff}). In the following sections we use the simplified two-band model at even larger energies, up to the van-Hove singularity. Formally, this amounts to increasing artificially (with respect to the graphene bilayer) the interlayer hopping $t_{\bot}$ such that it becomes the largest energy scale, $t_{\bot}\gg t$. In that limit Eq. (\ref{ham:eff}) becomes the exact description of the honeycomb bilayer for $E, t \ll t_\bot$ and for wavevectors in the whole Brillouin zone. We will adopt that model in the following. The Hamiltonian in Eq. (\ref{ham:eff}) corresponds, in real space, to a single-layer honeycomb lattice with nearest-neighbor and third-neighbor hoppings. Whereas the effective hopping amplitude of the latter is given by $t^2/t_{\bot}$, the effective nearest-neighbor hopping is twice as large.\cite{Bena} This means that, due to the strong interlayer hopping, the complete low-energy physics is projected onto the B1 and B2 sublattices, which themselves form a honeycomb lattice (see Fig.~\ref{fig:01}). As mentioned above, the model is equivalent to the graphene bilayer in the small-momentum limit, i.e. for $t^2/t_\bot |ka|^2 \sim \mu \ll t^2/t_\bot$, and reproduces correctly the finite density of states (DOS) at $E=0$ of bilayer graphene [Fig.~\ref{fig:02}]. Finally, the Hamiltonian (\ref{ham:eff}) does not take into account direct hopping between the B1 and B2 sublattices, which may, however, easily be accounted for by adding $-t'\gamma_{\vec{k}}^*$ to the off-diagonal matrix elements, where $t'\simeq 0.3$ eV is the associated hopping amplitude.
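The factor of two between the effective nearest-neighbor and third-neighbor hoppings can be made explicit by expanding $\gamma_{\vec{k}}^2=\big(\sum_{\vec u}e^{i\vec{k}\cdot\vec{u}}\big)^2=\sum_{\vec u,\vec u'}e^{i\vec{k}\cdot(\vec u+\vec u')}$ and collecting equal displacements $\vec u+\vec u'$: of the six distinct B1--B2 hopping vectors, three occur twice (the effective nearest-neighbor bonds) and three occur once (the third-neighbor bonds). A minimal bookkeeping check (Python sketch, displacements in lattice coordinates):

```python
from itertools import product
from collections import Counter

# u0 = (0,0), u1 = (1,0), u2 = (0,1) in lattice coordinates
shifts = [(0, 0), (1, 0), (0, 1)]
# collect the phase factors of gamma_k^2 = sum_{u,u'} exp(i k.(u+u'))
hops = Counter(tuple(a + b for a, b in zip(s1, s2))
               for s1, s2 in product(shifts, repeat=2))

# six distinct B1->B2 hopping vectors: three with amplitude 2 (effective
# nearest neighbors) and three with amplitude 1 (third neighbors)
assert len(hops) == 6
assert sorted(hops.values()) == [1, 1, 1, 2, 2, 2]
```

For example, the displacement $\vec u_1$ arises from both $(\vec u_0,\vec u_1)$ and $(\vec u_1,\vec u_0)$ and therefore carries amplitude $2\tilde t$, while $2\vec u_1$ arises only from $(\vec u_1,\vec u_1)$.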
This term yields the so-called trigonal warping close to the charge-neutrality point, which consists of a splitting of the parabolic band-contact point into four linear Dirac points.\cite{McFa} However, these Dirac points are present only at very low energies, for chemical potentials $|\mu|$ in the meV range, such that the parabolic-band approximation becomes valid even at low dopings. Since we are interested, here, in moderate doping, we neglect this additional term and use the effective band model (\ref{ham:eff}) in the following sections. \begin{figure} \includegraphics[width = 8cm]{Fig2.png} \caption{(Color online) Non-interacting dispersion (a) and density of states (b) of the projected monolayer model. Linear dispersion in the vicinity of the K-points in the graphene monolayer (c) in comparison to the quadratic dispersion in our model (d). We use $\tilde t = t^2/t_{\bot}$ for the unit of energy.} \label{fig:02} \end{figure} Since we consider the effective hopping $t^2/t_{\bot}$ to be small, spin-singlet bonds between B1 and B2 sites are expected to form due to superexchange processes if there is a significant on-site repulsion $U$. Therefore, we apply the $t$-$J$ model but relax its constraint that double occupancy of sites is excluded. We justify this by our primary aim: to find the most probable symmetry of the superconducting instability. As we will be working in the mean-field approximation, we simply assume an effective nearest-neighbor attractive interaction between electrons on the B1 and B2 sublattices, and in doing so we favor spin-singlet bond formation. The spin-singlet formation directly follows from the mean-field approach to the $t$-$J$ model \cite{Black}.
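The DOS contrast in Fig.~\ref{fig:02} between the monolayer and the projected model can be reproduced with a quick Brillouin-zone histogram: for the quadratic bands $E=\pm\tilde t\,|\gamma_{\vec{k}}|^2$ the DOS is approximately constant near $E=0$, whereas for the monolayer dispersion $\pm t|\gamma_{\vec{k}}|$ it vanishes linearly, so the state count in $[\delta,2\delta)$ is roughly equal to (resp. roughly three times) that in $[0,\delta)$. A Python sketch in reduced coordinates (the grid size and window widths $\delta$ are arbitrary choices):

```python
import numpy as np

nk = 600
frac = np.arange(nk) / nk
k1, k2 = np.meshgrid(frac, frac, indexing="ij")     # k = k1 b1 + k2 b2
g = 1 + np.exp(2j * np.pi * k1) + np.exp(2j * np.pi * k2)

E_bi, E_mono = np.abs(g)**2, np.abs(g)              # units of t~ resp. t

def bin_ratio(E, delta):
    """Count states in [delta, 2*delta) relative to [0, delta)."""
    return ((E >= delta) & (E < 2 * delta)).sum() / (E < delta).sum()

r_bi = bin_ratio(E_bi, 0.05)      # quadratic band touching: DOS ~ const
r_mono = bin_ratio(E_mono, 0.1)   # Dirac cones: DOS ~ |E|, ratio ~ 3

assert 0.5 < r_bi < 1.6
assert r_mono > 2.0 and r_mono > r_bi + 1.0
```

The tolerances are deliberately loose: the quadratic approximation of $|\gamma_{\vec k}|$ around the K points acquires corrections linear in $|k-K|a$ already at these window widths.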
If the attractive interaction is not too strong, it can be simply added to Hamiltonian (\ref{ham:eff}), with the help of the term \begin{equation} H_{I} = - J \sum_{\vec{j},\vec{u}} \sum_{\sigma} b_{1,\vec{j},\sigma}^{\dagger} b_{1, \vec{j},\sigma} b_{2,\vec{j}+\vec{u},-\sigma}^{\dagger} b_{2, \vec{j}+\vec{u},-\sigma}, \label{interaction} \end{equation} where $J > 0$. Now we apply the BCS ansatz by introducing the superconducting order parameter as a three-component complex vector $$\bold{\Delta}\equiv(\Delta_{\vec{u}_0},\Delta_{\vec{u}_1},\Delta_{\vec{u}_2})$$ where the components are defined by \begin{equation} \Delta_{\vec{u}} = \frac{1}{\sqrt{2}}\langle b_{1,\vec{j},\uparrow} b_{2, \vec{j} + \vec{u},\downarrow} - b_{1,\vec{j},\downarrow} b_{2, \vec{j} + \vec{u},\uparrow} \rangle, \end{equation} and correspond to the spin-singlet pairing amplitudes of three inequivalent pairs of nearest neighbors. The interaction part $H_I$ in the mean-field approximation becomes \begin{eqnarray} H_{BCS}& =& \sqrt{2} J \sum_{\vec{j}, \vec{u}} \Delta_{\vec{u}} \left( b_{1,\vec{j}, \uparrow}^{\dagger} b_{2, \vec{j} + \vec{u}, \downarrow}^{\dagger} - b_{1,\vec{j}, \downarrow}^{\dagger} b_{2, \vec{j} + \vec{u}, \uparrow}^{\dagger} \right) + \mbox{H.c.} \nonumber \\ && + 2 N \sum_{\vec{u}} J |\Delta_{\vec{u}}|^{2}, \end{eqnarray} where $N$ is the number of unit cells.
\section{Bogoliubov--de Gennes analysis and pairing symmetries} \begin{figure} \centering \caption{ Different pairing instabilities in real space: (a) s-wave, (b) $d_{x^2-y^2}$ wave, (c) $d_{xy}$ wave, and (d) $d_{x^2-y^2} + i d_{xy}$ time reversal breaking $d$-wave.} \label{fig:03} \end{figure} The complete BCS Hamiltonian in momentum space is given by \begin{eqnarray} && H = - \frac{t^2}{t_\bot} \sum_{\vec{k},\sigma} \left(\gamma_{\vec{k}}^2 b_{2, \vec{k} \sigma}^\dagger b_{1, \vec{k} \sigma} + \mbox{H.c.}\right) \nonumber \\ && + \sqrt{2}J \sum_{\vec{k}} \left[ \sum_{\vec{u}} \Delta_{\vec{u}} e^{i \vec{k} \cdot \vec{u}} \left( b_{2, \vec{k} \uparrow}^\dagger b_{1, -\vec{k} \downarrow}^\dagger - b_{2, \vec{k} \downarrow}^\dagger b_{1, -\vec{k} \uparrow}^\dagger \right) + \mbox{H.c.} \right]\nonumber \\ &&- \mu \sum_{\vec{k},\sigma} \left (b_{1, \vec{k} \sigma}^\dagger b_{1, \vec{k} \sigma} + b_{2, \vec{k} \sigma}^\dagger b_{2, \vec{k} \sigma}\right). \label{BCS_HAM} \end{eqnarray} Similarly to the case of the honeycomb monolayer,\cite{Black} we can make our description much more transparent by applying the following transformation, which diagonalizes the kinetic part of the above Hamiltonian, \begin{equation} \left[ \begin{array}{c} b_{2, \vec{k} \sigma}\\b_{1, \vec{k} \sigma}\end{array} \right] = \frac{1}{\sqrt{2}} \left[ \begin{array}{c} d_{\vec{k} \sigma} + c_{\vec{k} \sigma}\\ e^{-i 2 \varphi_{\vec{k}}} (d_{\vec{k} \sigma} - c_{\vec{k} \sigma}) \end{array}\right], \end{equation} where $\varphi_{\vec{k}} = \arg(\gamma_{\vec{k}})$.
In this basis, where $c_{\vec{k} \sigma}$ and $d_{\vec{k} \sigma}$ represent the electron states in the upper and lower band, respectively, the Hamiltonian transforms into \begin{eqnarray} &&H = \nonumber \\ && \sum_{\vec{k}} \left\{ \sum_{\sigma}(\tilde{t} \epsilon_{\vec{k}} - \mu) c_{\vec{k} \sigma}^{\dagger} c_{\vec{k} \sigma} + \sum_{\sigma}(- \tilde{t} \epsilon_{\vec{k}} - \mu) d_{\vec{k} \sigma}^{\dagger} d_{\vec{k} \sigma} \right. \nonumber \\ &&+ \sqrt{2} J \left[ \sum_{\vec{u}} \Delta_{\vec{u}} \cos(\vec{k}\cdot \vec{u} - 2 \varphi_{\vec{k}}) (d_{\vec{k} \uparrow}^{\dagger} d_{-\vec{k} \downarrow}^{\dagger} - c_{\vec{k} \uparrow}^{\dagger} c_{-\vec{k} \downarrow}^{\dagger}) \right. \nonumber \\ && \left.\left. +\sum_{\vec{u}} i \Delta_{\vec{u}} \sin(\vec{k}\cdot \vec{u} - 2 \varphi_{\vec{k}}) (c_{\vec{k} \uparrow}^{\dagger} d_{-\vec{k} \downarrow}^{\dagger} - d_{\vec{k} \uparrow}^{\dagger} c_{-\vec{k} \downarrow}^{\dagger})\right] + \mbox{H.c.} \right\}. \nonumber \\\label{ham} \end{eqnarray} Here $\tilde{t} \equiv t^2/t_\bot$ and $\epsilon_{\vec{k}} \equiv |\gamma_{\vec{k}}|^2$. The eigenvalues are given by \begin{equation} E_{\vec{k}} = \pm \sqrt{(\tilde{t} \epsilon_{\vec{k}})^2 + \mu^2 + 2 J^2 \left( |S_{\vec{k}}|^2 + |C_{\vec{k}}|^2 \right) \pm 2 \sqrt{A}},\label{valuesc} \end{equation} where $C_{\vec{k}} = \sum_{\vec{u}} \Delta_{\vec{u}} \cos(\vec{k}\cdot \vec{u} - 2 \varphi_{\vec{k}})$, $S_{\vec{k}} = \sum_{\vec{u}} \Delta_{\vec{u}} \sin(\vec{k} \cdot\vec{u} - 2 \varphi_{\vec{k}})$ and \begin{equation} A = (\mu^2 + 2J^2 |S_{\vec{k}}|^2) \tilde{t}^2 \epsilon_{\vec{k}}^2 + 4J^4 (\mathrm{Re} C_{\vec{k}} \mathrm{Im} S_{\vec{k}} - \mathrm{Im} C_{\vec{k}} \mathrm{Re} S_{\vec{k}})^2. \label{A} \end{equation} If all $\Delta_{\vec{u}}$ are purely real, i.e. 
there is no time-reversal symmetry breaking, then the second term in $A$ is zero and the expression for the dispersion simplifies to \begin{equation} E_{\vec{k}} = \pm \sqrt{\left(\tilde{t} \epsilon_{\vec{k}} \pm \sqrt{\mu^2 + 2J^2S_{\vec{k}}^2 }\right)^2 + 2J^2 C_{\vec{k}}^2 }. \label{values} \end{equation} In this case $S_{\vec{k}}$ only renormalizes the chemical potential, whereas $C_{\vec{k}}$ plays the main role in the description of the superconducting order parameter. A comparison between the Bogoliubov energy dispersion in Eq. (\ref{values}) and the usual BCS expression shows that $C_{\vec{k}}$ can be identified with the gap function. However, this name may be misleading because $C_{\vec{k}}$ does not always coincide with the actual gap, as illustrated by Eq.~(\ref{s_wave_E}) below. The symmetry analysis of the order parameter on a honeycomb lattice\cite{Black} yields the basis vectors which correspond to $s$, $d_{x^2-y^2}$ and $d_{xy}$ waves, respectively: \begin{eqnarray} \bold\Delta = \left\{ \begin{array}{ccc} \Delta \; (1,&1,&1) \\ \Delta \; (2,&-1,&-1) \\ \Delta \; (0,&1,&-1) \end{array} \right. . \label{possibilities} \end{eqnarray} The gap function $C_{\vec{k}}$ corresponding to these symmetries is shown in Fig.~\ref{fig:04}, in comparison with the monolayer case. The last two possibilities belong to a two-dimensional irreducible representation of the permutation group ${\cal{S}}_3$.\cite{Polleti} This means that any superposition of these two order parameters, which we may identify with the $d_{x^2-y^2}$ [$(2,-1,-1)$ of Eq. (\ref{possibilities}) and permutations] and $d_{xy}$ [$(0,1,-1)$ of Eq. (\ref{possibilities}) and permutations] solutions of $d$-wave superconductivity, is possible from a symmetry point of view. In spite of this possibility in principle, the precise realization of a particular order parameter must be settled by energy calculations.
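The reduction of Eq. (\ref{valuesc}) to Eq. (\ref{values}) for purely real $\Delta_{\vec{u}}$ can also be verified numerically. The following sketch (with our own assumed nearest-neighbor convention for the vectors $\vec{u}$) evaluates both expressions at random wave vectors and random real order parameters:

```python
import cmath
import math
import random

# assumed nearest-neighbor vectors (our convention, with a = 1)
DELTAS = [(0.0, 1.0), (-math.sqrt(3)/2, -0.5), (math.sqrt(3)/2, -0.5)]

def C_and_S(kx, ky, Dvec):
    """eps_k, C_k and S_k from their definitions in the text."""
    g = sum(cmath.exp(1j*(kx*dx + ky*dy)) for dx, dy in DELTAS)
    phi, eps = cmath.phase(g), abs(g)**2
    C = sum(D*cmath.cos(kx*dx + ky*dy - 2*phi) for D, (dx, dy) in zip(Dvec, DELTAS))
    S = sum(D*cmath.sin(kx*dx + ky*dy - 2*phi) for D, (dx, dy) in zip(Dvec, DELTAS))
    return eps, C, S

def E_general(kx, ky, Dvec, mu, J, tt):
    """Positive branches of the general dispersion, Eq. (valuesc)."""
    eps, C, S = C_and_S(kx, ky, Dvec)
    A = (mu**2 + 2*J**2*abs(S)**2)*(tt*eps)**2 \
        + 4*J**4*(C.real*S.imag - C.imag*S.real)**2
    base = (tt*eps)**2 + mu**2 + 2*J**2*(abs(S)**2 + abs(C)**2)
    return [math.sqrt(max(base + s*2*math.sqrt(A), 0.0)) for s in (1, -1)]

def E_real(kx, ky, Dvec, mu, J, tt):
    """Positive branches of the simplified dispersion, Eq. (values)."""
    eps, C, S = C_and_S(kx, ky, Dvec)
    r = math.sqrt(mu**2 + 2*J**2*S.real**2)
    return [math.sqrt((tt*eps + s*r)**2 + 2*J**2*C.real**2) for s in (1, -1)]

random.seed(1)
for _ in range(100):
    kx, ky = random.uniform(-2, 2), random.uniform(-2, 2)
    D = [random.uniform(-1, 1) for _ in range(3)]   # purely real Delta_u
    Ea = E_general(kx, ky, D, 0.3, 0.4, 1.0)
    Eb = E_real(kx, ky, D, 0.3, 0.4, 1.0)
    assert all(abs(x - y) < 1e-9 for x, y in zip(Ea, Eb))
```

The agreement follows from $\sqrt{A} = \tilde{t}\epsilon_{\vec{k}}\sqrt{\mu^2+2J^2S_{\vec{k}}^2}$ when $C_{\vec{k}}$ and $S_{\vec{k}}$ are real.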
One notices that the spatial point symmetry of the underlying honeycomb lattice is $C_{3v}$, which includes $2\pi/3$ rotations, whereas a transformation from $d_{x^2-y^2}$ to $d_{xy}$ involves $\pi/4$ rotations. The order parameters thus have a different symmetry than the underlying lattice, as one may also see in Fig.~\ref{fig:04}, such that the two order parameters do not represent degenerate ground states. Indeed, we find, within the BCS mean-field theory, that the $d_{x^2-y^2}$ solution has a lower energy than the $d_{xy}$ solution. This finding must be contrasted with the case of $p$-wave superconductivity on the square lattice.\cite{cg} In the latter case, superpositions of the $p_x$ and $p_y$ solutions are also permitted by the symmetry of the order parameter, but both solutions are related to each other by $\pi/2$ rotations that respect the point symmetry of the underlying (square) lattice. The $p_x$ and $p_y$ solutions are therefore degenerate. The above arguments indicate that the $C_{3v}$ symmetry of the honeycomb lattice is dynamically broken, only through interactions, via the formation of a $d_{x^2-y^2}$ order parameter. This is similar to the findings of Poletti \textit{et al.} in the context of superfluidity of spinless fermions with nearest-neighbor attraction.\cite{Polleti} Also in this case, the $C_{3v}$ symmetry is dynamically broken. Notice finally that in the small-$J$ limit, i.e. at weak coupling or in the low-energy limit, the BdG system recovers the symmetry of the $C_{3v}$ group but also has an (emergent) continuous rotational symmetry that will lead to a $d_{x^2-y^2} \pm \;i \;\sqrt{3}\; d_{xy}$ instability (see Appendix \ref{appA}). \begin{figure} \includegraphics[width = 9cm]{Fig4.png} \caption{(Color online) $C_k$ in the first Brillouin zone calculated for three possible symmetries on monolayer and projected bilayer lattices.
} \label{fig:04} \end{figure} In the case of an $s$-wave order parameter with $\bold{\Delta} = \Delta \; (1,1,1)$, a small-wave-vector expansion ($|\vec{q}|a\ll 1$) around the $K$-points yields \begin{equation} C_{\vec{K}_{\pm} + \vec{q}} \approx \mp \frac{\sqrt{3}}{2} q_y a \Delta \, ,\qquad S_{\vec{K}_{\pm} + \vec{q}} \approx +\frac{\sqrt{3}}{2} q_x a \Delta. \end{equation} Thus both couplings are non-zero and no simple effective picture emerges by looking at the Hamiltonian in Eq.~(\ref{ham}). The lower excitation energy branch can be approximated in the small-momentum limit as \begin{eqnarray} \nonumber E_{\vec{q}} &\simeq& \sqrt{\mu^2 - 2 \mu \tilde{t} \epsilon_{\vec{K}_{\pm}+\vec{q}} + \frac{3 }{2} J^2 (|\vec{q}| a)^2 \Delta^2}\\ &\simeq& \sqrt{\mu^2 - \frac{3}{2}[3\mu \tilde{t} - (J\Delta)^2] (|\vec{q}| a)^2}, \label{s_wave_E} \end{eqnarray} where we have used $\epsilon_{\vec{K}_{\pm}+\vec{q}}\simeq 9 (|\vec{q}|a)^2/4$. If the coupling strengths are such that $E_{\vec{q}}$ has a minimum at $q = 0$, that is for $(J\Delta)^2>3\mu\tilde{t}$, a special superconducting instability may be realized (provided that other order parameters have a higher free energy).\cite{Note1} In the absence of trigonal warping at very low doping, we obtain a time-reversal invariant superconducting instability with two kinds of Cooper pairs with $p_x + i p_y$ and $p_x - i p_y$ pairings. Due to the forms of $C_{\vec{k}}$ and $S_{\vec{k}}$ in the above Hamiltonian in the small momentum limit, $p$-wave Cooper pairings are expected. For a sufficiently large chemical potential, one can neglect $S_{\vec{k}}$ in Eq. (\ref{values}) and the system may be unstable towards a $p_y$ gapless superconductor, with gap minima on the Fermi surface, i.e. on a circle.
For $\bold{\Delta} = \Delta (2,-1,-1)$, the small-momentum expansion around the $K$-points yields \begin{eqnarray} \nonumber C_{\vec{K}_{\pm} + \vec{q}}(d_{x^2-y^2}) &\approx& -3 \frac{(q_x^2 - q_y^2)}{|\vec{q}|^2} \Delta\, , \\ S_{\vec{K}_{\pm} + \vec{q}}(d_{x^2-y^2}) &\approx& \mp 6 \frac{q_x q_y}{|\vec{q}|^2} \Delta \label{eq:dx2y2} \end{eqnarray} and for $\bold{\Delta} = \Delta (0,1,-1)$ \begin{eqnarray} \nonumber C_{\vec{K}_{\pm} + \vec{q}}(d_{xy}) &\approx& 2 \sqrt{3} \frac{q_x q_y}{|\vec{q}|^2} \Delta \, ,\\ S_{\vec{K}_{\pm} + \vec{q}}(d_{xy}) &\approx& \mp \sqrt{3} \frac{(q_x^2 - q_y^2)}{|\vec{q}|^2} \Delta. \label{eq:dxy} \end{eqnarray} The gap function $C_{\vec{k}}$ thus clearly shows the $d_{x^2-y^2}$ and the $d_{xy}$ symmetry in Eq. (\ref{eq:dx2y2}) and (\ref{eq:dxy}), respectively. Notice that one may superpose two waves in the manner \begin{equation} C_{\vec{k}}(d\pm id)=C_{\vec{k}}(d_{x^2-y^2}) \pm i \sqrt{3} C_{\vec{k}}(d_{xy}), \end{equation} and \begin{equation} S_{\vec{k}}(d\pm id)=S_{\vec{k}}(d_{x^2-y^2}) \pm i \sqrt{3} S_{\vec{k}}(d_{xy}) , \end{equation} which is identified with the $d + i d$-wave superconducting phase in the following. In the small-wave-vector limit, the combined forms of $C_{\vec{k}}$, \begin{equation} C_{\vec{K}_{\pm} + \vec{q}} (d+id)\approx \mp i S_{\vec{K}_{\pm} + \vec{q}} \approx 3 (q_x + i q_y)^2/|\vec{q}|^2 \label{ex_plus} \end{equation} and \begin{equation} C_{\vec{K}_{\pm} + \vec{q}} (d-id)\approx \pm i S_{\vec{K}_{\pm} + \vec{q}} \approx 3 (q_x - i q_y)^2/|\vec{q}|^2, \label{ex_minus} \end{equation} restore the rotational symmetry -- they are indeed eigenstates of rotation in two dimensions with the value of angular momentum equal to two. Thus a fixed complex combination in real space, either $d_{x^2-y^2} + i \sqrt{3}\; d_{xy}$ or $d_{x^2-y^2} - i \sqrt{3}\; d_{xy}$, leads to the same form of the expansion in small momenta at both valley points, either (\ref{ex_plus}) or (\ref{ex_minus}). 
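The qualitative difference between the pure $d_{x^2-y^2}$ state and the $d \pm i\sqrt{3}\, d_{xy}$ combination can be seen directly from $C_{\vec{k}}$: on a small circle around a $K$-point the former has nodes, while the modulus of the latter is nearly constant. A minimal numerical sketch (the lattice conventions are again our own assumption):

```python
import cmath
import math

# assumed nearest-neighbor vectors (our convention, with a = 1)
DELTAS = [(0.0, 1.0), (-math.sqrt(3)/2, -0.5), (math.sqrt(3)/2, -0.5)]
K = (4*math.pi/(3*math.sqrt(3)), 0.0)

def C_of(kx, ky, Dvec):
    """Gap function C_k = sum_u Delta_u cos(k.u - 2 phi_k)."""
    g = sum(cmath.exp(1j*(kx*dx + ky*dy)) for dx, dy in DELTAS)
    phi = cmath.phase(g)
    return sum(D*cmath.cos(kx*dx + ky*dy - 2*phi)
               for D, (dx, dy) in zip(Dvec, DELTAS))

D_d   = [2, -1, -1]                                      # pure d_{x^2-y^2}
D_did = [2, -1 + 1j*math.sqrt(3), -1 - 1j*math.sqrt(3)]  # d + i sqrt(3) d_xy

q = 1e-2                       # radius of a small circle around K
mods_d, mods_did = [], []
for n in range(64):
    th = 2*math.pi*n/64
    kx, ky = K[0] + q*math.cos(th), K[1] + q*math.sin(th)
    mods_d.append(abs(C_of(kx, ky, D_d)))
    mods_did.append(abs(C_of(kx, ky, D_did)))

# pure d_{x^2-y^2}: |C| ~ 3|cos(2 theta)| -> nodes on the circle
assert min(mods_d) < 0.1 and max(mods_d) > 2.5
# d + i sqrt(3) d_xy: |C| ~ 3, constant on the circle -> fully gapped
assert all(abs(m - 3) < 0.1 for m in mods_did)
```

The constant modulus around both valleys is the numerical counterpart of the $(q_x + i q_y)^2/|\vec{q}|^2$ form of Eqs. (\ref{ex_plus}) and (\ref{ex_minus}).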
Because the expansion is the same irrespective of the valley, $K$ or $K'$, one obtains a solution that spontaneously breaks time-reversal symmetry. Thus we can identify the solution with the broken time-reversal symmetry $d + i d$ state. Something similar happens in the monolayer case, but there the $d$-wave symmetry is recognized as a global dependence of the order parameter on the $\vec{k}$ vector in the Brillouin zone around the central $\Gamma$-point (see Ref. \onlinecite{Linder}) and $p$-wave behavior around the $\vec{K}_{\pm}$ points.\cite{uchoa} In the bilayer case the time-reversal symmetry breaking $d$-wave order parameter emerges as a property of the low-energy small-momentum effective description around the $K$ points, as shown above. \section {Phase Diagram} We have found the ground state of our model Hamiltonian for a broad range of $J$ and $\mu$ by minimizing the free energy. At zero temperature, as a function of the order parameter, it is given by \begin{equation} F = -\sum_{{\vec k}\in \mathrm{IBZ}} \sum_{\alpha=\pm 1} E_{{\vec k},\alpha} + 2NJ\sum_{{\vec u}}|\Delta_{\vec u}|^2, \end{equation} where the first sum runs over all wave vectors $\vec{k}$ in the first Brillouin zone and over the two Bogoliubov bands with positive energies. The ground state is defined as a global minimum of the free energy in the order parameter space. In the present study, we concentrate on superconducting order parameters in a variational approach, and thus we cannot exclude that other correlated (non-superconducting) phases may have an even lower energy. In the mean-field approach, superconducting ground states are expected even for infinitesimal positive values of $J$. The order parameter space is 6-dimensional, because it is defined by 3 complex numbers. However, multiplying all three complex parameters by a common phase factor does not modify the physical state, so one can always make one of the parameters purely real (we set $\Delta_{d_{x^2-y^2}}$ real) and reduce the order parameter space dimensionality to 5.
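As a simple consistency check of this free energy, note that at $\bold{\Delta}=0$ the Bogoliubov bands reduce to $|\tilde{t}\epsilon_{\vec{k}} \pm \mu|$, so $F$ must coincide with the normal-state energy. The sketch below only evaluates $F$ on a coarse grid; it is not the amoeba minimization used in the paper, and the lattice conventions and grid size are our own assumptions:

```python
import cmath
import math

# assumed nearest-neighbor vectors (a = 1) and reciprocal primitive vectors
DELTAS = [(0.0, 1.0), (-math.sqrt(3)/2, -0.5), (math.sqrt(3)/2, -0.5)]
B1 = (2*math.pi/math.sqrt(3), -2*math.pi/3)
B2 = (0.0, 4*math.pi/3)

def bogoliubov_bands(kx, ky, Dvec, mu, J, tt):
    """Positive eigenvalues from the general dispersion formula."""
    g = sum(cmath.exp(1j*(kx*dx + ky*dy)) for dx, dy in DELTAS)
    phi, eps = cmath.phase(g), abs(g)**2
    C = sum(D*cmath.cos(kx*dx + ky*dy - 2*phi) for D, (dx, dy) in zip(Dvec, DELTAS))
    S = sum(D*cmath.sin(kx*dx + ky*dy - 2*phi) for D, (dx, dy) in zip(Dvec, DELTAS))
    A = (mu**2 + 2*J**2*abs(S)**2)*(tt*eps)**2 \
        + 4*J**4*(C.real*S.imag - C.imag*S.real)**2
    base = (tt*eps)**2 + mu**2 + 2*J**2*(abs(S)**2 + abs(C)**2)
    return [math.sqrt(max(base + s*2*math.sqrt(A), 0.0)) for s in (1, -1)]

def free_energy(Dvec, mu, J, tt, n=40):
    """F = -sum_k sum_alpha E_{k,alpha} + 2 N J sum_u |Delta_u|^2 on an n x n grid."""
    F = 2*(n*n)*J*sum(abs(D)**2 for D in Dvec)
    for i in range(n):
        for j in range(n):
            kx = i/n*B1[0] + j/n*B2[0]
            ky = i/n*B1[1] + j/n*B2[1]
            F -= sum(bogoliubov_bands(kx, ky, Dvec, mu, J, tt))
    return F

mu, J, tt, n = 1.0, 1.0, 1.0, 40
F0 = free_energy([0.0, 0.0, 0.0], mu, J, tt, n)

# normal-state energy: filled bands |tt*eps -/+ mu| at Delta = 0
Fn = 0.0
for i in range(n):
    for j in range(n):
        kx = i/n*B1[0] + j/n*B2[0]
        ky = i/n*B1[1] + j/n*B2[1]
        g = sum(cmath.exp(1j*(kx*dx + ky*dy)) for dx, dy in DELTAS)
        eps = abs(g)**2
        Fn -= abs(tt*eps - mu) + abs(tt*eps + mu)

assert abs(F0 - Fn) < 1e-6
```

Replacing the direct amplitude scan by a five-dimensional simplex search over the order parameter then reproduces the minimization strategy described next.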
We used the amoeba numerical method\cite{Numrec} to directly minimize the free energy. Five-dimensional minimization often reveals more than one local minimum, but we were always able to identify the lowest-lying state to a satisfactory level of certainty. However, for small values of $J$, the local free-energy minima are extremely shallow, with energies only slightly lower than the free energy of the normal state. Such features in the free-energy landscape are completely clouded by numerical noise due to the discretization of the first Brillouin zone. Our numerical calculations are therefore limited to higher values of $J$, which give a solution with the amplitude of the order parameter larger than $10^{-4}$. This is marked by the dashed lines in Fig.~\ref{fig:05}. \begin{figure} \includegraphics[width = 9cm]{Fig5.png} \caption{(Color online) (a) The order parameter amplitude, $\Delta$, in the $(\mu,J)$ parameter space, obtained by a minimization of the free energy, (b) the single-particle excitation gap, (c) the contribution of the $id_{xy}$ and (d) of the $s$-wave component in the ground state order parameter. The green dashed line marks where $\Delta$ drops below $10^{-4}$. Below this line, our numerics are not reliable. We use $\tilde t = t^2/t_{\bot}$ for the unit of energy.} \label{fig:05} \end{figure} Our results are shown in Fig.~\ref{fig:05}, where the relevant quantities are represented by color in the $(\mu,J)$ plane. The amplitude of the order parameter is shown in Fig.~\ref{fig:05}(a). Upon small to moderate doping, the SC instability increases and becomes particularly favorable at the filling $5/8$, which corresponds to the chemical potential $\mu / \tilde t =1$ and to the van-Hove singularity in the non-interacting DOS. For further doping the SC instability decreases. This gives Fig.~\ref{fig:05}(a) roughly the appearance of the inverse DOS of Fig.~\ref{fig:02}(b). The gap in the single-particle excitations is shown in Fig.~\ref{fig:05}(b).
It is particularly pronounced in the case of strong mixing of $d_{x^2-y^2}$ and $i d_{xy}$ symmetry components, as we can see from Fig.~\ref{fig:05}(c). The contribution of different pairing symmetries is defined by the ratio $w$ of different components of $\Delta$, where \begin{eqnarray} {\bf \Delta} &=& \Delta_s \hat{e}_s + i \Delta_{is} \hat{e}_s + \Delta_{d_{xy}} \hat{e}_{d_{xy}} + i \Delta_{id_{xy}} \hat{e}_{d_{xy}} \nonumber \\ &+& \Delta_{d_{x^2-y^2}} \hat{e}_{d_{x^2-y^2}}, \end{eqnarray} with $\hat{e}_s = (1,1,1)/\sqrt{3}$, $\hat{e}_{d_{xy}} = (0,1,-1)/\sqrt{2}$, and $\hat{e}_{d_{x^2-y^2}} = (2,-1,-1)/\sqrt{6}$. Fig.~\ref{fig:05}(c) shows the ratio $w(id_{xy}) = | \Delta_{id_{xy}} | / |\Delta|$, and Fig.~\ref{fig:05}(d) the ratio $w(s) = | \Delta_{s} | / |\Delta|$. The contributions of $is$ and $d_{xy}$ components are negligible in all cases, and $d_{x^2-y^2}$ is the dominant component. The numerical results are, for clarity, also shown in Fig.~\ref{fig:06} for three chosen values of the chemical potential, $\mu/\tilde{t} =0.04, 0.55, 1$. Fig.~\ref{fig:06}(a) shows a sudden increase in the pairing amplitude with increasing interaction $J$ (note the logarithmic scale on the $y$-axis). For small $J$, the pairing amplitude is much larger for $\mu/\tilde{t}=1$, i.e. at the van-Hove singularity, and in this case the single-particle excitation gap is also larger due to the strong mixing of $d_{x^2-y^2}$ and $i d_{xy}$ symmetries. Contributions of relevant components are compared in Figs.~\ref{fig:06}(c)-(e). At higher values of $J$ one has a pure $d_{x^2-y^2}$ symmetry, whereas a mixture of $d_{x^2-y^2}$ and $id_{xy}$ symmetries is found at lower values of $J$. The contribution of $id_{xy}$ symmetry increases with decreasing $J$, and an almost pure $d+id$ symmetry is usually found at the lowest accessible values of $J$.
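The weights $w$ amount to projecting $\bold{\Delta}$ onto the orthonormal basis $\hat{e}_s$, $\hat{e}_{d_{xy}}$, $\hat{e}_{d_{x^2-y^2}}$ after fixing the global phase such that $\Delta_{d_{x^2-y^2}}$ is real. An illustrative sketch of this decomposition (function and variable names are our own):

```python
import math

def dot(u, v):
    """Hermitian inner product of two 3-vectors."""
    return sum(a.conjugate()*b for a, b in zip(u, v))

s3, s2, s6 = math.sqrt(3), math.sqrt(2), math.sqrt(6)
e_s     = [1/s3, 1/s3, 1/s3]
e_dxy   = [0.0, 1/s2, -1/s2]
e_dx2y2 = [2/s6, -1/s6, -1/s6]

def weights(Delta):
    """Decompose a complex 3-vector Delta over the symmetry basis.

    The global phase is fixed so that the d_{x^2-y^2} coefficient is real;
    then w(X) = |Delta_X| / |Delta| as in the text."""
    c_d = dot(e_dx2y2, Delta)
    if abs(c_d) > 1e-12:                   # remove the overall phase
        ph = c_d / abs(c_d)
        Delta = [x / ph for x in Delta]
    norm = math.sqrt(sum(abs(x)**2 for x in Delta))
    c_s, c_xy, c_d = (dot(e, Delta) for e in (e_s, e_dxy, e_dx2y2))
    return {"s": abs(c_s.real)/norm, "is": abs(c_s.imag)/norm,
            "dxy": abs(c_xy.real)/norm, "idxy": abs(c_xy.imag)/norm,
            "dx2y2": abs(c_d.real)/norm}

# pure d + i sqrt(3) d_xy combination: equal d_{x^2-y^2} and id_{xy} weights
w = weights([2+0j, -1+1j*math.sqrt(3), -1-1j*math.sqrt(3)])
assert abs(w["dx2y2"] - w["idxy"]) < 1e-9
assert w["s"] < 1e-12 and w["is"] < 1e-12 and w["dxy"] < 1e-12
```

For this maximally mixed state both weights equal $1/\sqrt{2}$, the situation labeled "pure $d+id$" in Figs.~\ref{fig:05} and \ref{fig:06}.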
\begin{figure} \includegraphics[width = 9cm]{Fig6.pdf} \caption{(Color online) (a) The order parameter amplitude $\Delta$ and (b) the single-particle excitation gap as a function of $J$, for $\mu/\tilde t=0.04, 0.55, 1$. (c)-(e) The contributions of the three relevant symmetry components. The $d_{x^2-y^2}$ component is the dominant one for large $J$. The contribution of $id_{xy}$ increases with decreasing $J$ until the two contributions are equal and we find a pure $d+id$-wave symmetry. We use $\tilde t = t^2/t_{\bot}$ for the unit of energy. The data are plotted only above the value of the coupling $J$ which is numerically significant, as mentioned in the text (see also the dashed green line in Fig. \ref{fig:05}). } \label{fig:06} \end{figure} Our numerical calculations were performed on processors with 8 GB of RAM, which limited the number of $\vec{k}$-points in the first Brillouin zone to $4000\times 4000$, but we checked that the results do not differ qualitatively even with a much sparser $2000\times 2000$ $\vec{k}$-grid. A much denser and probably non-uniform discretization of the first Brillouin zone would be needed to probe the weak-coupling behavior of our model, that is, for values of $J$ below the dashed lines in Fig. \ref{fig:05}. Notice, however, that the system in the small-$J$ limit may be treated analytically in the weak-coupling approach; the results are presented in Appendices A and B, for the cases of finite and zero chemical potential, respectively. In this weak-coupling regime and at finite chemical potential, we find that the $d+id$ superconducting order parameter yields the lowest mean-field energy, when compared to order parameters that respect time-reversal symmetry (Appendix A), in agreement with our numerical results for larger values of $J$.
In the weak-coupling limit, in the symmetry-protected subspace of $d_{x^2-y^2}$ and $d_{xy}$ order parameters, the complex combination $d_{x^2-y^2}+ i \sqrt{3} d_{xy}$ leads to a fully gapped system with no nodes at the Fermi surface. This means that the gap is proportional to $|C_{\vec{k}}| = const$, and the maximum energy gain for this superconducting instability is obtained. Notice that this topological instability is in line with a theorem for the BCS description, according to which a time-reversal symmetry broken 2D superconducting state has a lower free energy than time-reversal symmetric ones for two-dimensional representations of the superconducting order parameter.\cite{cg} Indeed, as mentioned after Eq. (\ref{possibilities}), the $d_{x^2-y^2}$ and $d_{xy}$ components of the order parameter $\bf{\Delta}$ form a two-dimensional irreducible representation of the symmetry group of the honeycomb lattice. Although the theorem of Ref.~\onlinecite{cg} was derived for a single band, it is expected to apply also to the present case at finite doping, when the higher Bogoliubov band is irrelevant for the superconducting instability. This instability occurs at any strength of the attractive interaction at finite doping since the gap opens as \begin{equation}\label{eq:gap} J\Delta \propto \exp\left[-\frac{8\pi}{\sqrt{3}}\frac{1}{\rho(\mu)J}\right]\end{equation} (see Appendix A), in terms of the DOS $\rho(\mu)$ at the Fermi level. This is simply the BCS expression with the pairing potential equal to $J$. Finally, we notice that the weak-coupling analysis yields a different picture at zero doping (Appendix B), where a time-reversal-symmetric superconducting order parameter (with any real combination of $d_{x^2-y^2}$ and $d_{xy}$) is energetically favored. \section{Possible relevance for bilayer graphene} In the following we discuss the possible relevance of our model for the physics of bilayer graphene.
With an estimate \cite{Castro,Wehling} for the Coulomb on-site repulsion, $U \sim 10$ eV, intralayer nearest-neighbor hopping,\cite{data} $t \sim 3$ eV, and interlayer hopping,\cite{data} $t_\bot \sim 0.4$ eV, bilayer graphene may have a tendency to develop strongly-correlated electron phases. Notice that, although similar energy scales are found in monolayer graphene, the latter is described to great accuracy in terms of (quasi-)free electrons because of a vanishing DOS at the Fermi level, in the absence of heavy doping.\cite{Castro,Kotov,Goerbig} On the contrary, electronic correlations are much more efficient in bilayer graphene as a consequence of the finite DOS even at the band-contact points. This finite DOS may also be invoked when considering screening. Whereas screening is highly inefficient in monolayer graphene, and one then needs to take into account the long-range nature of the electronic interaction potential, the screening properties in bilayer graphene are similar to those in usual 2D electron systems with a parabolic band dispersion, albeit with a rather small band mass ($\sim 0.05m_0$, in terms of the bare electron mass). In this sense, an approach based on the Hubbard model, as used here, which excludes nearest- and further-neighbor interactions, is better justified in bilayer than in monolayer graphene. However, this remains a strong approximation, as in the case of 2D electrons in GaAs heterostructures, and numerical calculations indicate that longer-range terms remain relevant also in bilayer graphene.\cite{Wehling} Generally, the interplay between a strong on-site repulsion $U$ and the hopping terms $t$ and $t_{\perp}$ leads to antiferromagnetic Heisenberg-type exchange interactions, $J \sim t^2/U \sim 1\,\text{eV}$ between nearest neighbors in the same layer and $J_\bot \sim t_\bot^2/U \sim 16\,\text{meV}$ between nearest neighbors in opposite layers.
Although clear evidence for antiferromagnetism is lacking in bilayer graphene, the quadratic dispersion of juxtaposed conduction and valence bands (together with the non-zero density of states) favors antiferromagnetic fluctuations.\cite{MacDonald} Because the low-energy electrons move preferentially on the B1 and B2 sublattice sites, one needs to estimate an effective exchange interaction between them, which may be obtained from a perturbative expansion, $J_\text{eff} \sim J^2 J_\bot/t_\bot^2 \sim t^4/U^3 \sim 100\,\text{meV}$. Remember that the effective hopping parameter in the projected honeycomb lattice (between the B1 and B2 sites) is a more subtle issue because it is derived in the limit where $t_\bot\gg t$, in contrast to the natural order in bilayer graphene. In order to make a comparison between our effective model and that of bilayer graphene, in view of the correlated phases we consider, it is therefore more appropriate to define the effective hopping indirectly from the values of $J_\text{eff}$ and $U$, $J_\text{eff} \sim t_\text{eff}^2/U$, which yields a value of $t_\text{eff} \sim 1\,\text{eV}$ that should replace the value $\tilde{t}$ in the previous sections. Modeled in this way with two effective parameters, $J_\text{eff}$ and $t_\text{eff}$, bilayer graphene may be compared with the effective honeycomb lattice considered in our paper and the corresponding $t-J$ model. The main feature of bilayer graphene appears to be that $J_\text{eff}\sim 0.1 t_\text{eff} \ll t_\text{eff}$, so in considering the relevance of our model we should confine ourselves to weak couplings and small or moderate dopings; moreover, because we simplified the high-momentum physics of the bilayer (by considering the large-$t_\bot$ limit), we should restrict ourselves to lower dopings. First, one sees from Fig. \ref{fig:05} that the gaps are in the meV range (2 to 5 meV for the maximal gaps) if one considers the energy scale $t_\text{eff}\sim J\sim 1$ eV.
Thus our results indicate very small energy scales that are unlikely to be resolved in today's graphene samples. Furthermore, we should use $t_\text{eff}$ and $J_\text{eff}$ for $t$ and $J$ in the exponent of the weak-coupling analysis in Appendix A. Because we estimate $ t_\text{eff}/J_\text{eff} \sim 10$, the weak-coupling analysis yields an exponential suppression and gaps below 1 meV, in agreement with our numerical findings shown in Fig. \ref{fig:05}. \section{Conclusions} We presented an analysis of a model of the honeycomb bilayer with attractive interactions that (1) supports $d + i\; d$ superconductivity with the canonical effective (low-momentum) description $\sim (k_x + i k_y)^2$ at both valley points, and (2) at moderate and strong couplings transforms into $d_{x^2-y^2}$ superconductivity. The implied $t-J$ model may be relevant for future investigations of such a complex and intriguing system as the graphene bilayer. We discussed the possibility of a superconducting instability in this framework and concluded that $d + i d$ is the leading superconducting instability in the case of the graphene bilayer at moderate dopings and low energy scales. We would also like to point out the difference between the monolayer and bilayer cases that follows from the symmetry analysis of the simple model with attractive interactions and the ensuing short-range order parameter on both lattices. In the effective description around the $\vec{K}$ points, $s$-wave and $p$-wave are found \cite{Black,Linder} in the monolayer case, and $p$-wave and $d$-wave in the bilayer case. The bilayer honeycomb lattice appears at moderate dopings as yet another stage on which time-reversal symmetry breaking $d$-wave superconductivity may appear (see \cite{Baskaran,Black,Honerkamp,Pathak,Pellegrino,Gu} for the moderately doped monolayer) and may be driven by similar physics as in the case of predicted instabilities at special (very high) dopings of the honeycomb monolayer \cite{Ch,Ab}.
In the case we presented, the canonical \cite{rg} low-momentum description, $\sim (k_x + i k_y)^2$, holds due to the quadratically dispersing Dirac electrons. \begin{acknowledgments} We thank A.M. Black-Schaffer, M. Civelli, M. Franz, and Y. Hatsugai for useful discussions. Furthermore, we thank D. Tanaskovi\'c for support and his involvement at the early stage of this project. J.V. and M.V.M. are supported by the Serbian Ministry of Education and Science under project No. ON171017, and M.O.G. by the ANR (Agence Nationale de la Recherche) project NANOSIM GRAPHENE under Grant No. ANR-09-NANO-016. The authors acknowledge financial support from the bilateral MES-CNRS 2011/12 program. This research was funded in part by the National Science Foundation under Grant No. NSF PHY05-51164; M.V.M. and M.O.G. acknowledge the hospitality of KITP, Santa Barbara. Numerical simulations were run on the AEGIS e-Infrastructure, supported in part by FP7 projects EGI-InSPIRE, PRACE-1IP and HP-SEE. \end{acknowledgments}
\section{Introduction} This paper is a companion technical report to the article ``Continuation-Passing C: from threads to events through continuations'' \cite{cpc2012}. It contains the complete version of the proofs presented in the article. It does not, however, give any background or motivation for our work: please refer to the original article. \section{Lambda-lifting in an imperative language} \label{sec:lifting} To prove the correctness of lambda-lifting in an imperative, call-by-value language when functions are called in tail position, we do not reason directly on CPC programs, because the semantics of C is too broad and complex for our purposes. The CPC translator leaves most parts of converted programs intact, transforming only control structures and function calls. Therefore, we define a simple language with restricted values, expressions and terms that captures the features we are most interested in (Section~\ref{sec:definitions}). The reduction rules for this language (Section~\ref{sec:naive-def}) use a simplified memory model without pointers and enforce that local variables are not accessed outside of their scope, as ensured by our boxing pass. This is necessary since lambda-lifting is not correct in general in the presence of extruded variables. It turns out that the ``naive'' reduction rules defined in Section~\ref{sec:naive-def} do not provide strong enough invariants to prove this correctness theorem by induction, mostly because we represent memory with a store that is not invariant with respect to lambda-lifting. Therefore, in Section~\ref{sec:semopt}, we define an equivalent, ``optimised'' set of reduction rules which enforces more regular stores and closures. The proof of correctness is then carried out in Section~\ref{sec:correction-ll} using these optimised rules.
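As a reminder of the transformation whose correctness is at stake: lambda-lifting turns an inner function into a top-level one by passing its free variables as extra arguments at every call site. A schematic illustration, in Python rather than C and with hypothetical names; note that all calls to the inner function are in tail position, which is the situation covered by the correctness theorem:

```python
# Before lifting: `step` is an inner function with free variable `n`.
def count_down(n):
    def step(i):
        if i == 0:
            return n          # `n` is captured from the enclosing scope
        return step(i - 1)    # tail call
    return step(n)

# After lifting: `step_lifted` is top-level; its former free variable `n`
# becomes an extra parameter, threaded through every call site.
def step_lifted(i, n):
    if i == 0:
        return n
    return step_lifted(i - 1, n)

def count_down_lifted(n):
    return step_lifted(n, n)

# The observable behavior is preserved by the transformation.
assert count_down(5) == count_down_lifted(5)
```

The subtle cases, addressed by the proof, arise when the lifted parameter is mutated: the lifted copy lives in a fresh location, which is only safe under the conditions on tail calls and extruded variables stated below.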
We first define the invariants needed for the proof and formulate a strengthened version of the correctness theorem (Theorem~\ref{thm:correction-ll}, Section~\ref{sec:strong-invariants}). A comprehensive overview of the proof is then given in Section~\ref{sec:overview}. The proof is fully detailed in Section~\ref{sec:proof-correctness}, with the help of a number of lemmas to keep the main proof shorter (Sections~\ref{sec:rewriting-lemmas} and~\ref{sec:aliasing-lemmas}). The main limitation of this proof is that Theorems~\ref{thm:lambda-lifting-correctness} and~\ref{thm:correction-ll} are implications, not equivalences: we do not prove that if a term does not reduce, it will not reduce once lifted. For instance, this proof does not ensure that lambda-lifting does not break infinite loops. \subsection{Definitions} \label{sec:definitions} In this section, we define the terms (Definition~\ref{def:full-language}), the reduction rules (Section~\ref{sec:naive-def}) and the lambda-lifting transformation itself (Section~\ref{sec:lifting-def}) for our small imperative language. With these preliminary definitions, we are then able to characterise \emph{liftable parameters} (Definition~\ref{dfn:var-liftable-simple}) and state the main correctness theorem (Theorem~\ref{thm:lambda-lifting-correctness}, Section~\ref{sec:correctness}). \begin{definition}[Values, expressions and terms]\label{def:full-language} Values are either boolean or integer constants, or $\ensuremath{\mathbf{1}}$, a special value for functions returning \texttt{void}. \[v \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \quad\ensuremath{\mathbf{1}} \;|\; \ensuremath{\mathop{\mathbf{true}}\nolimits} \;|\; \ensuremath{\mathop{\mathbf{false}}\nolimits} \;|\; n \in \mathbf{N}\] Expressions are either values or variables. We deliberately omit arithmetic and boolean operators, with the sole concern of avoiding boring cases in the proofs.
\[e \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \quad v \;|\; x\;|\; \dotsc\] Terms consist of assignments, conditionals, sequences, recursive function definitions and calls. \begin{align*} T \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \quad e \;|\; x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= T \;|\; \ite{T}{T}{T} \;|\; T\ ;\ T \\ \;|\; & \letrec{f(\range{x}{1}{n})}{T}{T} \;|\; f(T,\dotsc,T) \end{align*} \qedhere \end{definition} Our language focuses on the essential details affected by the transformations: recursive functions, conditionals and memory accesses. Loops, for instance, are ignored because they can be expressed in terms of recursive calls and conditional jumps --- and that is, in fact, how the splitting pass translates them. Since lambda-lifting happens after the splitting pass, our language needs to include inner functions (although they are not part of the C language), but it can safely exclude \texttt{goto} statements. \subsubsection{Naive reduction rules\label{sec:naive-def}} \paragraph{Environments and stores} Handling inner functions requires explicit closures in the reduction rules. We need environments, written $\rho$, to bind variables to locations, and a store, written $s$, to bind locations to values. \emph{Environments} and \emph{stores} are partial functions, equipped with a single operator which extends and modifies a partial function: \ajout{\cdot}{\cdot}{\cdot}.
\begin{definition} The modification (or extension) $f'$ of a partial function $f$, written $f' = \ajout{f}{x}{y}$, is defined as follows: \begin{align*} f'(t) =& \begin{cases} y&\text{when $t$ = $x$}\\ f(t)&\text{otherwise} \end{cases}\\ \mathop{\mathrm{dom}}\nolimits(f') =& \mathop{\mathrm{dom}}\nolimits(f)\cup\{x\} \end{align*} \qedhere \end{definition} \begin{definition}[Environments of variables and functions] Environments of variables are defined inductively by \[\rho \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \varepsilon \;|\; (x,l)\cdot\rho,\] i.e.\ the empty domain function and $\ajout{\rho}{x}{l}$ (respectively). Environments of functions associate function names to closures: \[\ensuremath{\mathcal{F}} : \{f, g, h, \dotsc\} \rightarrow \{\fun{\range{x}{1}{n}}{T}{\rho,\ensuremath{\mathcal{F}}}\}.\] \qedhere \end{definition} Note that although we have a notion of locations, which correspond roughly to memory addresses in C, there is no way to copy, change or otherwise manipulate a location directly in the syntax of our language. This is on purpose, since adding this possibility would make lambda-lifting incorrect: it reflects the fact, ensured by the boxing pass in the CPC translator, that there are no extruded variables in the lifted terms. \paragraph{Reduction rules} We use classical big-step reduction rules for our language (Figure~\ref{sem-proof:naive}, p.~\pageref{sem-proof:naive}).
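As a concrete reading of these definitions, the syntax and the extension operator $\ajout{f}{x}{y}$ might be sketched in Python as follows. This is only an illustration with names of our own choosing, not part of the CPC implementation; partial functions are represented as dictionaries.

```python
# Terms of the small imperative language, encoded as tagged tuples:
#   ("val", v) | ("var", x) | ("assign", x, t) | ("if", t, t, t)
#   ("seq", t, t) | ("letrec", f, [x1..xn], body, t) | ("call", f, [t, ...])
# Values are 1 (the unit value), booleans and integers.

def extend(f, x, y):
    """The operator f[x -> y]: extend or modify a partial function.
    Returns a new dict; dom(f') = dom(f) U {x}, and f'(x) = y."""
    g = dict(f)          # copy, so the original partial function is unchanged
    g[x] = y
    return g

# Environments bind variables to locations; stores bind locations to values.
rho = extend({}, "x", 0)      # rho = (x, l0) . epsilon
s = extend({}, 0, 1)          # s = [l0 -> 1]
```

Representing both $\rho$ and $s$ with the same dictionary type mirrors the fact that the formal development treats them uniformly as partial functions with a single extension operator.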
\begin{figure} \begin{gather*} \inferrule*[Left=(val)]{ }{\reductionN{v}{s}{v}{s}} \qquad\qquad \inferrule*[Left=(var)]{\rho\ x = l \in \mathop{\mathrm{dom}}\nolimits\ s}{\reductionN{x}{s}{s\ l}{s}}\\ \inferrule*[Left=(assign)]{\reductionN{a}{s}{v}{s'} \\ \rho\ x = l \in \mathop{\mathrm{dom}}\nolimits\ s'}{\ \reductionN{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\subst{s'}{l}{v}}}\qquad\qquad \inferrule*[Left=(seq)]{\reductionN{a}{s}{v}{s'} \\ \reductionN{b}{s'}{v'}{s''}}{\reductionN{a\ ;\ b}{s}{v'}{s''}}\\ \inferrule*[Left=(if-t.)]{\reductionN{a}{s}{\ensuremath{\mathop{\mathbf{true}}\nolimits}}{s'} \\ \reductionN{b}{s'}{v}{s''}}{\ \reductionN{\ite{a}{b}{c}}{s}{v}{s''}}\qquad\qquad \inferrule*[Left=(if-f.)]{\reductionN{a}{s}{\ensuremath{\mathop{\mathbf{false}}\nolimits}}{s'} \\ \reductionN{c}{s'}{v}{s''}}{\ \reductionN{\ite{a}{b}{c}}{s}{v}{s''}}\\ \inferrule*[Left=(letrec)]{\ \reductionNF{b}{s}{v}{s'}{\mathcal{F'}} \\\\ \mathcal{F'}=\ajout{\ensuremath{\mathcal{F}}}{f}{\fun{\range{x}{1}{n}}{a}{\rho,\ensuremath{\mathcal{F}}}} }{\ \reductionN{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'}}\\ \inferrule*[Left=(call)]{\ \ensuremath{\mathcal{F}}\,f = \fun{\range{x}{1}{n}}{b}{\rho',\mathcal{F'}}\\ \rho''= \drange{x}{l}{1}{n}\\ \text{$l_{i}$ fresh and distinct}\\\\ \forall i,\reductionN{a_i}{s_i}{v_i}{s_{i+1}} \\ \reductionNF[\rho''\cdot\rho']{b}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'}{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}} }{\ \reductionN{f(\range{a}{1}{n})}{s_{1}}{v}{s'}} \end{gather*} \caption{``Naive'' reduction rules\label{sem-proof:naive}} \end{figure} In the (call) rule, we need to introduce \emph{fresh} locations for the parameters of the called function. This means that we must choose locations that are not already in use, in particular in the environments $\rho'$ and $\ensuremath{\mathcal{F}}$. 
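To make the flavour of these rules concrete, the (val), (var), (assign) and (seq) cases might be transcribed as a big-step evaluator in Python. This is a sketch under simplifying assumptions (dictionaries for $\rho$ and $s$, exceptions where a side condition such as $l \in \mathop{\mathrm{dom}} s$ fails); the function and variable names are ours.

```python
def reduce_term(rho, term, s):
    """Big-step reduction  term, s  -->_rho  v, s'  for a fragment of the
    naive rules.  rho maps variables to locations, s maps locations to
    values; returns the pair (v, s')."""
    tag = term[0]
    if tag == "val":                 # (val): a value reduces to itself
        return term[1], s
    if tag == "var":                 # (var): l = rho(x) must be in dom s
        l = rho[term[1]]
        return s[l], s
    if tag == "assign":              # (assign): reduce a, then update s'(l)
        _, x, a = term
        v, s1 = reduce_term(rho, a, s)
        l = rho[x]
        if l not in s1:
            raise KeyError(l)        # side condition: l in dom s'
        s2 = dict(s1)
        s2[l] = v
        return 1, s2                 # assignments return the unit value 1
    if tag == "seq":                 # (seq): thread the store left to right
        _, a, b = term
        _, s1 = reduce_term(rho, a, s)
        return reduce_term(rho, b, s1)
    raise ValueError("unsupported term: " + tag)
```

The (letrec) and (call) rules would additionally thread the environment of functions $\ensuremath{\mathcal{F}}$ and pick fresh locations for parameters; they are omitted here to keep the sketch short.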
To express this choice, we define two ancillary functions, $\mathop{\mathrm{Env}}\nolimits$ and $\mathop{\mathrm{Loc}}\nolimits$, to extract the environments and locations contained in the closures of a given environment of functions $\ensuremath{\mathcal{F}}$. \begin{definition}[Set of environments, set of locations] \[\mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}}) = \bigcup\left\{ \rho, \rho' \ |\ \fun{\range{x}{1}{n}}{M}{\rho,\mathcal{F'}} \in \mathop{\mathrm{Im}}\nolimits(\ensuremath{\mathcal{F}}), \rho' \in \mathop{\mathrm{Env}}\nolimits(\mathcal{F'})\right\}\] \[\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) = \bigcup\left\{ \mathop{\mathrm{Im}}\nolimits(\rho) \ |\ \rho \in \mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}})\right\}\] \[\text{A location $l$ is said to \emph{appear} in } \ensuremath{\mathcal{F}} \ensuremath{\quad\text{iff}\quad} l \in \mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}).\] \qedhere \end{definition} These functions allow us to define fresh locations. \begin{definition}[Fresh location] In the (call) rule, a location is \emph{fresh} when: \begin{itemize} \item $l \notin \mathop{\mathrm{dom}}\nolimits(s_{n+1})$, i.e.\ $l$ is not already used in the store before the body of $f$ is evaluated, and \item $l$ does not appear in $\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}$, i.e.\ $l$ will not interfere with locations captured in the environment of functions. \end{itemize} \qedhere \end{definition} Note that the second condition implies in particular that $l$ does not appear in either $\ensuremath{\mathcal{F}}$ or $\rho'$. \subsubsection{Lambda-lifting}\label{sec:lifting-def} Lambda-lifting can be split into two parts: parameter lifting and block floating~\cite{danvy}. We will focus only on the first part here, since the second one is trivial. Parameter lifting consists in adding a free variable as a parameter of every inner function where it appears free.
This step is repeated until every variable is bound in every function, and closed functions can safely be floated to top-level. Note that although the transformation is called lambda-lifting, we do not focus on a single function and try to lift all of its free variables; on the contrary, we define the lifting of a single free parameter $x$ in every possible function. Smart lambda-lifting algorithms strive to minimize the number of lifted variables. Such is not our concern in this proof: parameters are lifted in every function where they might potentially be free. \begin{definition}[Parameter lifting in a term]\label{dfn:lifted-term} Assume that $x$ is defined as a parameter of a given function $g$, and that every inner function in $g$ is called $h_i$ (for some $i\in\mathbf{N}$). Also assume that function parameters are unique before lambda-lifting. \noindent Then the \emph{lifted form} $\lift{M}$ of the term $M$ with respect to $x$ is defined inductively as follows: {\allowdisplaybreaks \begin{gather*} \lift{\ensuremath{\mathbf{1}}} = \ensuremath{\mathbf{1}} \qquad \lift{n} = n \\ \lift{\ensuremath{\mathop{\mathbf{true}}\nolimits}} = \ensuremath{\mathop{\mathbf{true}}\nolimits} \qquad \lift{\ensuremath{\mathop{\mathbf{false}}\nolimits}} = \ensuremath{\mathop{\mathbf{false}}\nolimits} \\ \lift{y} = y \quad \text{ and } \quad \lift{y \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}= y \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \lift{a} \quad \text{(even if $y=x$)} \\ \lift{a\ ;\ b} = \lift{a}\ ;\ \lift{b} \\ \lift{\ite{a}{b}{c}} = \ite{\lift{a}}{\lift{b}}{\lift{c}} \\ \lift{ \letrec{f(\range{x}{1}{n})}{a}{b} } = \begin{cases} \letrec{f(\range{x}{1}{n},x)}{\lift{a}}{\lift{b}} &\text{if $f = h_i$}\\ \letrec{f(\range{x}{1}{n})}{\lift{a}}{\lift{b}} &\text{otherwise} \end{cases}\\ \lift{f(\range{a}{1}{n})} = \begin{cases} f(\lift{a_{1}},\dotsc,\lift{a_{n}},x)&\text{if $f = h_i$ for some $i$}\\ f(\lift{a_{1}},\dotsc,\lift{a_{n}})&\text{otherwise} \end{cases} \end{gather*} } \qedhere \end{definition} \subsubsection{Correctness condition}\label{sec:correctness} We show that parameter lifting is correct for variables defined in
functions whose inner functions are called exclusively in \emph{tail position}. We call these variables \emph{liftable parameters}. We first define tail positions as usual \cite{clinger}: \begin{definition}[Tail position] \emph{Tail positions} are defined inductively as follows: \begin{enumerate} \item $M$ and $N$ are in tail position in \ite{P}{M}{N}. \item $N$ is in tail position in $N$ and $M \ ;\ N$ and \letrec{f(\range{x}{1}{n})}{M}{N}. \end{enumerate} \qedhere \end{definition} A parameter $x$ defined in a function $g$ is liftable if every inner function in $g$ is called exclusively in tail position. \begin{definition}[Liftable parameter] \label{dfn:var-liftable-simple} A parameter $x$ is \emph{liftable} in $M$ when: \begin{itemize} \item $x$ is defined as the parameter of a function $g$, \item inner functions in $g$, named $h_i$, are called exclusively in tail position in $g$ or in one of the $h_i$. \end{itemize} \qedhere \end{definition} Our main theorem states that performing parameter-lifting on a liftable parameter preserves the reduction: \begin{theorem}[Correctness of lambda-lifting] \label{thm:lambda-lifting-correctness} If $x$ is a liftable parameter in $M$, then \[\exists t, \reductionNF[\varepsilon]{M}{\varepsilon}{v}{t}{\varepsilon} \text{ implies } \exists t', \reductionNF[\varepsilon]{\lift{M}}{\varepsilon}{v}{t'}{\varepsilon}.\] \end{theorem} Note that the resulting store $t'$ changes because lambda-lifting introduces new variables, hence new locations in the store, and changes the values associated with lifted variables; Section~\ref{sec:correction-ll} is devoted to the proof of this theorem. To maintain invariants during the proof, we need to use an equivalent, ``optimised'' set of reduction rules; it is introduced in the next section. \subsection{Optimised reduction rules\label{sec:semopt}} The naive reduction rules (Section~\ref{sec:naive-def}) are not well-suited to prove the correctness of lambda-lifting. 
Indeed, the proof is by induction and requires a number of invariants on the structure of stores and environments. Rather than having a dozen lemmas to ensure these invariants during the proof of correctness, we translate them as constraints in the reduction rules. To this end, we introduce two optimisations --- minimal stores (Section~\ref{sec:mini-store}) and compact closures (Section~\ref{sec:compact-closures}) --- which lead to the definition of an optimised set of reduction rules (Figure~\ref{sem-proof:opt}, Section~\ref{sec:opt-rules}). The equivalence between optimised and naive reduction rules is shown in Section~\ref{sec:sem-equiv}. \subsubsection{Minimal stores} \label{sec:mini-store} In the naive reduction rules, the store grows faster when reducing lifted terms, because each function call adds to the store as many locations as it has function parameters. This yields stores of different sizes when reducing the original and the lifted term, and that difference cannot be accounted for locally, at the rule level. Consider for instance the simplest possible case of lambda-lifting: \begin{gather} \letrec{g(x)}{(\letrec{h()}{x}{h()})}{g(\ensuremath{\mathbf{1}})}\tag{original} \\ \letrec{g(x)}{(\letrec{h(y)}{y}{h(x)})}{g(\ensuremath{\mathbf{1}})}\tag{lifted} \end{gather} At the end of the reduction, the store for the original term is $\{l_x \mapsto \ensuremath{\mathbf{1}} \}$ whereas the store for the lifted term is $\{l_x \mapsto \ensuremath{\mathbf{1}} ; l_y \mapsto \ensuremath{\mathbf{1}} \}$. More complex terms would yield even larger stores, with many out-of-date copies of lifted variables. To keep the store under control, we need to get rid of useless variables as soon as possible during the reduction. It is safe to remove a variable $x$ from the store once we are certain that it will never be used again, i.e.\ as soon as the term in tail position in the function which defines $x$ has been evaluated.
This mechanism is analogous to the deallocation of a stack frame when a function returns. To track the variables whose location can be safely reclaimed after the reduction of some term $M$, we introduce \emph{split environments}. Split environments are written $\env{\ensuremath{{\rho_{T}}}}{\rho}$, where $\ensuremath{{\rho_{T}}}$ is called the \emph{tail environment} and $\rho$ the non-tail one; only the variables belonging to the tail environment may be safely reclaimed. The reduction rules build environments so that a variable $x$ belongs to $\ensuremath{{\rho_{T}}}$ if and only if the term $M$ is in tail position in the current function $f$ and $x$ is a parameter of $f$. In that case, it is safe to discard the locations associated with all of the parameters of $f$, including $x$, after $M$ has been reduced because we are sure that the evaluation of $f$ is completed (and there are no first-class functions in the language to keep references to variables beyond their scope of definition). We also define a \emph{cleaning} operator, $\gc{\cdot}{\cdot}$, to remove a set of variables from the store. \begin{definition}[Cleaning of a store] The store $s$ cleaned with respect to the variables in $\rho$, written $\gc{\rho}{s}$, is defined as $\gc{\rho}{s} = s |_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}$. \qedhere \end{definition} \subsubsection{Compact closures} \label{sec:compact-closures} Another source of complexity with the naive reduction rules is the inclusion of useless variables in closures. It is safe to remove, from the variable environments captured in closures, those variables that are also parameters of the function: when the function is called, and the environment restored, these variables will be hidden by the freshly instantiated parameters.
This is typically what happens to lifted parameters: they are free variables, captured in the closure when the function is defined, but these captured values will never be used since calling the function adds fresh parameters with the same names. We introduce \emph{compact closures} in the optimised reduction rules to avoid dealing with this hiding mechanism in the proof of lambda-lifting. A compact closure is a closure that does not capture any variable which would be hidden when the closure is called because of function parameters having the same name. \begin{definition}[Compact closure and environment] A closure $\fun{\range{x}{1}{n}}{M}{\rho,\ensuremath{\mathcal{F}}}$ is \emph{compact} if $\forall i, x_i\notin\mathop{\mathrm{dom}}\nolimits(\rho)$ and \ensuremath{\mathcal{F}}\/ is compact. An environment is \emph{compact} if it contains only compact closures. \qedhere \end{definition} We define a canonical mapping from any environment $\ensuremath{\mathcal{F}}$ to a compact environment $\close{\ensuremath{\mathcal{F}}}$, restricting the domain of every closure in $\ensuremath{\mathcal{F}}$. \begin{definition}[Canonical compact environment] The \emph{canonical compact environment} $\close{\ensuremath{\mathcal{F}}}$ is the unique environment with the same domain as $\ensuremath{\mathcal{F}}$ such that \begin{align*} \forall f \in \mathop{\mathrm{dom}}\nolimits(\ensuremath{\mathcal{F}}), \ensuremath{\mathcal{F}}\,f &= \fun{\range{x}{1}{n}}{M}{\rho,\mathcal{F'}}\\ \text{implies }\close{\ensuremath{\mathcal{F}}}\ f &= \fun{\range{x}{1}{n}}{M}{\rho|_{\mathop{\mathrm{dom}}\nolimits(\rho)\setminus\{\range{x}{1}{n}\}},\close{\mathcal{F'}}}. \end{align*} \qedhere \end{definition} \subsubsection{Optimised reduction rules} \label{sec:opt-rules} Combining both optimisations yields the \emph{optimised} reduction rules (Figure~\ref{sem-proof:opt}, p.~\pageref{sem-proof:opt}), used in Section~\ref{sec:correction-ll} for the proof of lambda-lifting.
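The canonical compaction $\close{\ensuremath{\mathcal{F}}}$ can be pictured as a recursive restriction of every captured environment. The following Python sketch (with names of our own choosing; closures are tuples $(\mathit{params}, \mathit{body}, \rho, \ensuremath{\mathcal{F}})$ and environments are dictionaries) is an illustration of the definition, not of the CPC implementation.

```python
def close_env(F):
    """Canonical compact environment: for every closure in F, drop from the
    captured variable environment rho the bindings that would be hidden by
    the closure's own parameters, and recurse into the captured function
    environment."""
    compact = {}
    for name, (params, body, rho, inner) in F.items():
        rho2 = {x: l for x, l in rho.items() if x not in params}
        compact[name] = (params, body, rho2, close_env(inner))
    return compact
```

As expected from the definition, compaction is idempotent: applying `close_env` to an already compact environment leaves it unchanged.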
We ensure minimal stores by cleaning them in the (val), (var) and (assign) rules, which correspond to tail positions; split environments are introduced in the (call) rule to distinguish fresh parameters, to be cleaned, from captured variables, which are preserved. Tail positions are tracked in every rule through split environments, to avoid cleaning variables too early, in a non-tail branch. We also build compact closures in the (letrec) rule by removing the parameters of $f$ from the captured environment $\rho'$. \begin{figure}[hbt] \begin{gather*} \inferrule*[Left=(val)]{ }{\reduction{v}{s}{v}{\gc{\ensuremath{{\rho_{T}}}}{s}}}\qquad\qquad \inferrule*[Left=(var)]{\ensuremath{{\rho_{T}}}\cdot\rho\ x = l \in \mathop{\mathrm{dom}}\nolimits\ s}{\reduction{x}{s}{s\ l}{\gc{\ensuremath{{\rho_{T}}}}{s}}}\\ \inferrule*[Left=(assign)]{\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'} \\ \ensuremath{{\rho_{T}}}\cdot\rho\ x = l \in \mathop{\mathrm{dom}}\nolimits\ s'}{\ \reduction{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l}{v}}}}\qquad\qquad \inferrule*[Left=(seq)]{\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'} \\ \reduction{b}{s'}{v'}{s''}}{\reduction{a\ ;\ b}{s}{v'}{s''}}\\ \inferrule*[Left=(if-t.)]{\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{\ensuremath{\mathop{\mathbf{true}}\nolimits}}{s'} \\ \reduction{b}{s'}{v}{s''}}{\ \reduction{\ite{a}{b}{c}}{s}{v}{s''}}\qquad\qquad \inferrule*[Left=(if-f.)]{\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{\ensuremath{\mathop{\mathbf{false}}\nolimits}}{s'} \\ \reduction{c}{s'}{v}{s''}}{\ \reduction{\ite{a}{b}{c}}{s}{v}{s''}}\\ \inferrule*[Left=(letrec)]{\ \reductionF{b}{s}{v}{s'}{\mathcal{F'}} \\\\ \rho' = \ensuremath{{\rho_{T}}}\cdot\rho|_{\mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\setminus\{\range{x}{1}{n}\}} \\ 
\mathcal{F'}=\ajout{\ensuremath{\mathcal{F}}}{f}{\fun{\range{x}{1}{n}}{a}{\rho',\ensuremath{\mathcal{F}}}} }{\ \reduction{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'}}\\ \inferrule*[Left=(call)]{\ \ensuremath{\mathcal{F}}\,f = \fun{\range{x}{1}{n}}{b}{\rho',\mathcal{F'}}\\ \rho''= \drange{x}{l}{1}{n}\\ \text{$l_{i}$ fresh and distinct}\\\\ \forall i,\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_i}{s_i}{v_i}{s_{i+1}} \\ \reductionF[\env{\rho''}{\rho'}]{b}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'}{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}} }{\ \reduction{f(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}}{s'}}} \end{gather*} \caption{Optimised reduction rules\label{sem-proof:opt}} \end{figure} \begin{theorem}[Equivalence between naive and optimised reduction rules]\label{thm:sem-equiv} Optimised and naive reduction rules are equivalent: every reduction in one set of rules yields the same result in the other. It is necessary, however, to take care of locations left in the store by the naive reduction: \[ \reductionF[\env{\varepsilon}{\varepsilon}]{M}{\varepsilon}{v}{\varepsilon}{\varepsilon} \ensuremath{\quad\text{iff}\quad} \exists s,\reductionNF[\varepsilon]{M}{\varepsilon}{v}{s}{\varepsilon} \] \end{theorem} We prove this theorem in Section~\ref{sec:sem-equiv}. \subsection{Equivalence of optimised and naive reduction rules\label{sec:sem-equiv}} This section is devoted to the proof of equivalence between the optimised and naive reduction rules (Theorem~\ref{thm:sem-equiv}). To clarify the proof, we introduce intermediate reduction rules (Figure~\ref{sem-proof:inter}, p.~\pageref{sem-proof:inter}), with only one of the two optimisations: minimal stores, but not compact closures.
The proof then consists in proving that optimised and intermediate rules are equivalent (Lemma~\ref{lem:IimpliesO} and Lemma~\ref{lem:OimpliesI}, Section~\ref{subsec:first-step}), then that naive and intermediate rules are equivalent (Lemma~\ref{lem:IimpliesN} and Lemma~\ref{lem:NimpliesI}, Section~\ref{subsec:second-step}). \[ \text{Naive rules} \xtofrom[\text{Lemma~\ref{lem:IimpliesN}}]{\text{Lemma~\ref{lem:NimpliesI}}} \text{Intermediate rules} \xtofrom[\text{Lemma~\ref{lem:OimpliesI}}]{\text{Lemma~\ref{lem:IimpliesO}}} \text{Optimised rules} \] \begin{figure}[hbt] \begin{gather*} \inferrule*[Left=(val)]{ }{\reductionI{v}{s}{v}{\gc{\ensuremath{{\rho_{T}}}}{s}}}\qquad\qquad \inferrule*[Left=(var)]{\ensuremath{{\rho_{T}}}\cdot\rho\ x = l \in \mathop{\mathrm{dom}}\nolimits\ s}{\reductionI{x}{s}{s\ l}{\gc{\ensuremath{{\rho_{T}}}}{s}}}\\ \inferrule*[Left=(assign)]{\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'} \\ \ensuremath{{\rho_{T}}}\cdot\rho\ x = l \in \mathop{\mathrm{dom}}\nolimits\ s'}{\ \reductionI{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l}{v}}}}\qquad\qquad \inferrule*[Left=(seq)]{\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'} \\ \reductionI{b}{s'}{v'}{s''}}{\reductionI{a\ ;\ b}{s}{v'}{s''}}\\ \inferrule*[Left=(if-t.)]{\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{\ensuremath{\mathop{\mathbf{true}}\nolimits}}{s'} \\ \reductionI{b}{s'}{v}{s''}}{\ \reductionI{\ite{a}{b}{c}}{s}{v}{s''}}\qquad\qquad \inferrule*[Left=(if-f.)]{\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{\ensuremath{\mathop{\mathbf{false}}\nolimits}}{s'} \\ \reductionI{c}{s'}{v}{s''}}{\ \reductionI{\ite{a}{b}{c}}{s}{v}{s''}}\\ \inferrule*[Left=(letrec)]{\ \reductionIF{b}{s}{v}{s'}{\mathcal{F'}} \\\\ \rho' = \ensuremath{{\rho_{T}}}\cdot\rho \\ \mathcal{F'}=\ajout{\ensuremath{\mathcal{F}}}{f}{\fun{\range{x}{1}{n}}{a}{\rho,\ensuremath{\mathcal{F}}}} }{\ 
\reductionI{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'}}\\ \inferrule*[Left=(call)]{\ \ensuremath{\mathcal{F}}\,f = \fun{\range{x}{1}{n}}{b}{\rho',\mathcal{F'}}\\ \rho''= \drange{x}{l}{1}{n}\\ \text{$l_{i}$ fresh and distinct}\\\\ \forall i,\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_i}{s_i}{v_i}{s_{i+1}} \\ \reductionIF[\env{\rho''}{\rho'}]{b}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'}{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}} }{\ \reductionI{f(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}}{s'}}} \end{gather*} \caption{Intermediate reduction rules\label{sem-proof:inter}} \end{figure} \subsubsection{Optimised and intermediate reduction rules equivalence\label{subsec:first-step}} In this section, we show that optimised and intermediate reduction rules are equivalent: \[ \text{Intermediate rules} \xtofrom[\text{Lemma~\ref{lem:OimpliesI}}]{\text{Lemma~\ref{lem:IimpliesO}}} \text{Optimised rules} \] We must therefore show that it is correct to use compact closures in the optimised reduction rules. Compact closures carry the implicit idea that some variables can be safely discarded from the environments when we know for sure that they will be hidden. The following lemma formalises this intuition. \begin{lemma}[Hidden variables elimination]\label{lem:intro-in-env2} \begin{align*} \forall l,l', \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{M}{s}{v}{s'} \ensuremath{\quad\text{iff}\quad}& \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{M}{s}{v}{s'} \\ \forall l,l', \reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{M}{s}{v}{s'} \ensuremath{\quad\text{iff}\quad}& \reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{M}{s}{v}{s'} \end{align*} Moreover, both derivations have the same height. \end{lemma} \begin{proof} The exact same proof holds for both intermediate and optimised reduction rules. By induction on the structure of the derivation. 
The proof relies solely on the fact that $\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho$. \paragraph{(seq)} $\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho$. So, \[\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho}]{a}{s}{v}{s'} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho}]{a}{s}{v}{s'}\] Moreover, by the induction hypotheses, \[\reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{b}{s'}{v'}{s''} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{b}{s'}{v'}{s''}\] Hence, \[\reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{a\ ;\ b}{s}{v'}{s''} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{a\ ;\ b}{s}{v'}{s''}\] The other cases are similar. \paragraph{(val)} $\reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{v}{s}{v}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s}} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{v}{s}{v}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s}}$ \paragraph{(var)} $\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho$ so, with $l'' = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho\ y$, \[\reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{y}{s}{s\ l''}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s}} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{y}{s}{s\ l''}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s}}\] \paragraph{(assign)} $\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho$. 
So, \[\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho}]{a}{s}{v}{s'} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho}]{a}{s}{v}{s'}\] Hence, with $l'' = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho\ y$, \[\reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{y \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\subst{s'}{l''}{v}}} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{y \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\subst{s'}{l''}{v}}}\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq). \paragraph{(letrec)} $\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho = \rho'$. Moreover, by the induction hypotheses, \[\reductionIF[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{b}{s}{v}{s'}{\mathcal{F'}} \ensuremath{\quad\text{iff}\quad} \reductionIF[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{b}{s}{v}{s'}{\mathcal{F'}}\] Hence, \begin{gather*} \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'} \ensuremath{\quad\text{iff}\quad}\\ \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'} \end{gather*} \paragraph{(call)} $\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho$. 
So, \[ \forall i, \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot(x,l')\cdot\rho}]{a_i}{s_i}{v_i}{s_{i+1}} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho}]{a_i}{s_i}{v_i}{s_{i+1}}\] Hence, \[\reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{f(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s'}} \ensuremath{\quad\text{iff}\quad} \reductionI[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{f(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s'}}. \qedhere \] \end{proof} Now we can show the required lemmas and prove the equivalence between the intermediate and optimised reduction rules. \begin{lemma}[Intermediate implies optimised]\label{lem:IimpliesO} \[\text{If }\reductionI{M}{s}{v}{s'} \text{ then }\reductionR{M}{s}{v}{s'}. \] \end{lemma} \begin{proof} By induction on the structure of the derivation. The interesting cases are (letrec) and (call), where compact environments are respectively built and used. 
\paragraph{(letrec)} By the induction hypotheses, \[\reductionF{b}{s}{v}{s'}{\close{\mathcal{F'}}}\] Since we defined canonical compact environments so as to match exactly the way compact environments are built in the optimised reduction rules, the constraints of the (letrec) rule are fulfilled: \[\close{\mathcal{F'}}=\ajout{\close{\ensuremath{\mathcal{F}}}}{f}{\fun{\range{x}{1}{n}}{a}{\rho',\close{\ensuremath{\mathcal{F}}}}},\] hence: \[\reductionR{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'}\] \paragraph{(call)} By the induction hypotheses, \[\forall i, \reductionR[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_i}{s_i}{v_i}{s_{i+1}}\] and \[\reductionF[\env{\rho''}{\rho'}]{b}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'}%
{\close{(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f})}}\] Lemma~\ref{lem:intro-in-env2} allows us to remove hidden variables, which leads to \[\reductionF[\env{\rho''}{\rho'_{|\mathop{\mathrm{dom}}\nolimits(\rho')\setminus\{\range{x}{1}{n}\}}}]{b}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'}%
{\close{(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f})}}\] Besides, \[\close{\ensuremath{\mathcal{F}}}\ f = \fun{\range{x}{1}{n}}{b}{\rho'_{|\mathop{\mathrm{dom}}\nolimits(\rho')\setminus\{\range{x}{1}{n}\}},\close{\mathcal{F'}}}\] and \[\close{(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f})} = \ajout{\close{\mathcal{F'}}}{f}{\close{\ensuremath{\mathcal{F}}}\ f}\] Hence \[\reductionR{f(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}}{s'}}. \] \paragraph{(val)} \reductionR{v}{s}{v}{\gc{\ensuremath{{\rho_{T}}}}{s}} \paragraph{(var)} \reductionR{x}{s}{s\ l}{\gc{\ensuremath{{\rho_{T}}}}{s}} \paragraph{(assign)} By the induction hypotheses, \reductionR[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'}.
Hence, \[\reductionR{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l}{v}}}\] \paragraph{(seq)} By the induction hypotheses, \[\reductionR[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'} \qquad\qquad \reductionR{b}{s'}{v'}{s''}\] Hence, \[\reductionR{a\ ;\ b}{s}{v'}{s''}\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq). \qedhere \end{proof} \begin{lemma}[Optimised implies intermediate]\label{lem:OimpliesI} \[ \text{If }\reduction{M}{s}{v}{s'} \text{ then }\forall \mathcal{G} \text{ such that } \close{\mathcal{G}}=\ensuremath{\mathcal{F}}, \reductionIF{M}{s}{v}{s'}{\mathcal{G}}.\] \end{lemma} \begin{proof} First note that, since $\close{\mathcal{G}}=\ensuremath{\mathcal{F}}$, $\ensuremath{\mathcal{F}}$ is necessarily compact. By induction on the structure of the derivation. The interesting cases are (letrec) and (call), where non-compact environments are respectively built and used. \paragraph{(letrec)} Let $\mathcal{G}$ be such that $\close{\mathcal{G}}=\ensuremath{\mathcal{F}}$. Remember that $\rho' = \ensuremath{{\rho_{T}}}\cdot\rho|_{\mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\setminus\{\range{x}{1}{n}\}}$. Let \[\mathcal{G'}=\ajout{\mathcal{G}}{f}{\fun{\range{x}{1}{n}}{a}{\ensuremath{{\rho_{T}}}\cdot\rho,\ensuremath{\mathcal{F}}}}\] which leads, since $\ensuremath{\mathcal{F}}$ is compact ($\close{\ensuremath{\mathcal{F}}} = \ensuremath{\mathcal{F}}$), to \begin{align*} \close{\mathcal{G'}}&=\ajout{\ensuremath{\mathcal{F}}}{f}{\fun{\range{x}{1}{n}}{a}{\rho',\ensuremath{\mathcal{F}}}}\\ &=\mathcal{F'} \end{align*} By the induction hypotheses, \[\reductionIF{b}{s}{v}{s'}{\mathcal{G'}}\] Hence, \[\reductionIF{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'}{\mathcal{G}}\] \paragraph{(call)} Let $\mathcal{G}$ be such that $\close{\mathcal{G}}=\ensuremath{\mathcal{F}}$.
By the induction hypotheses, \[\forall i, \reductionIF[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_i}{s_i}{v_i}{s_{i+1}}{\mathcal{G}}\] Moreover, since $\close{\mathcal{G}}\ f=\ensuremath{\mathcal{F}}\,f$, \[\mathcal{G}\ f = \fun{\range{x}{1}{n}}{b}{\drange{x}{l}{i}{j}\rho',\mathcal{G'}}\] where $\close{\mathcal{G'}} = \mathcal{F'}$, and the $l_i$ are some locations stripped out when compacting $\mathcal{G}$ to get $\ensuremath{\mathcal{F}}$. By the induction hypotheses, \[\reductionIF[\env{\rho''}{\rho'}]{b}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'}% {\ajout{\mathcal{G'}}{f}{\mathcal{G}\ f}}\] Lemma~\ref{lem:intro-in-env2} leads to \[\reductionIF[\env{\rho''}{\drange{x}{l}{i}{j}\rho'}]{b}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'}% {\ajout{\mathcal{G'}}{f}{\mathcal{G}\ f}}\] Hence, \[\reductionIF{f(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}}{s'}}{\mathcal{G}}. \] \paragraph{(val)} $\forall\mathcal{G} \text{ such that } \close{\mathcal{G}}=\ensuremath{\mathcal{F}}, \reductionIF{v}{s}{v}{s'}{\mathcal{G}}$ \paragraph{(var)} $\forall\mathcal{G} \text{ such that } \close{\mathcal{G}}=\ensuremath{\mathcal{F}}, \reductionIF{x}{s}{s\ l}{\gc{\ensuremath{{\rho_{T}}}}{s}}{\mathcal{G}}$ \paragraph{(assign)} Let $\mathcal{G}$ be such that $\close{\mathcal{G}}=\ensuremath{\mathcal{F}}$. By the induction hypotheses, $\reductionIF[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'}{\mathcal{G}}$. Hence, \[\reductionIF{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l}{v}}}{\mathcal{G}}\] \paragraph{(seq)} Let $\mathcal{G}$ be such that $\close{\mathcal{G}}=\ensuremath{\mathcal{F}}$. By the induction hypotheses, \[\reductionIF[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'}{\mathcal{G}} \qquad\qquad \reductionIF{b}{s'}{v'}{s''}{\mathcal{G}}\] Hence \[\reductionIF{a\ ;\ b}{s}{v'}{s''}{\mathcal{G}}\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq).
\qedhere \end{proof} \subsubsection{Intermediate and naive reduction rules equivalence\label{subsec:second-step}} In this section, we show that the naive and intermediate reduction rules are equivalent: \[ \text{Naive rules} \xtofrom[\text{Lemma~\ref{lem:IimpliesN}}]{\text{Lemma~\ref{lem:NimpliesI}}} \text{Intermediate rules} \] We must therefore show that it is correct to use minimal stores in the intermediate reduction rules. We first define a partial order on stores: \begin{definition}[Store extension] \[ s \sqsubseteq s' \ensuremath{\quad\text{iff}\quad} s'|_{\mathop{\mathrm{dom}}\nolimits(s)} = s \qedhere\] \end{definition} \begin{property}\label{prop:ext-order} Store extension ($\sqsubseteq$) is a partial order over stores. The following operations preserve this order: $\gc{\rho}{\cdot}$ and $\subst{\cdot}{l}{v}$, for some given $\rho$, $l$ and $v$. \end{property} \begin{proof} Immediate when considering the stores as function graphs: $\sqsubseteq$ is the inclusion, $\gc{\rho}{\cdot}$ a relative complement, and $\subst{\cdot}{l}{v}$ a disjoint union (preceded by $\gc{(l,v')}{\cdot}$ when $l$ is already bound to some $v'$).\qedhere \end{proof} Before we prove that using minimal stores is equivalent to using full stores, we need an alpha-conversion lemma, which allows us to rename locations in the store, provided the new location does not already appear in the store or the environments. It is used when choosing a fresh location for the (call) rule in proofs by induction. \begin{lemma}[Alpha-conversion]\label{lem:rename-loc} If \reductionI{M}{s}{v}{s'} then, for all $l$, for all $l'$ appearing neither in $s$ nor in $\ensuremath{\mathcal{F}}$ nor in $\rho\cdot\ensuremath{{\rho_{T}}}$, \[\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{M}{s[l'/l]}{v}{s'[l'/l]}{\ensuremath{\mathcal{F}}[l'/l]}.\] Moreover, both derivations have the same height. \end{lemma} \begin{proof} By induction on the height of the derivation. 
For the (call) case, we must ensure that the fresh locations $l_i$ do not clash with $l'$. In case they do, we conclude by applying the induction hypotheses twice: first to rename the clashing $l_i$ into a fresh $l'_i$, then to rename $l$ into $l'$. We begin with two elementary preliminary remarks. First, provided $l'$ appears neither in $\rho$ nor in $\ensuremath{{\rho_{T}}}$, nor in $s$, \[(\gc{\rho}{s})[l'/l] = \gc{(\rho[l'/l])}{(s[l'/l])}\] and \[(\ensuremath{{\rho_{T}}}\cdot\rho)[l'/l] = \ensuremath{{\rho_{T}}}[l'/l]\cdot\rho[l'/l].\] Moreover, if $\reductionI{M}{s}{v}{s'}$, then $\mathop{\mathrm{dom}}\nolimits(s') = \mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})$ (straightforward by induction). This leads to: $\ensuremath{{\rho_{T}}}=\varepsilon \Rightarrow \mathop{\mathrm{dom}}\nolimits(s') = \mathop{\mathrm{dom}}\nolimits(s)$. We now proceed case by case. \paragraph{(call)} $\forall i, \mathop{\mathrm{dom}}\nolimits(s_i) = \mathop{\mathrm{dom}}\nolimits(s_{i+1})$. Thus, $\forall i, l' \notin \mathop{\mathrm{dom}}\nolimits(s_i)$. This leads, by the induction hypotheses, to \[\forall i,\reductionIF[\env{}{(\ensuremath{{\rho_{T}}}\cdot\rho)[l'/l]}]{a_i}{s_i[l'/l]}{v_i}{s_{i+1}[l'/l]}{\ensuremath{\mathcal{F}}[l'/l]}\] Moreover, $\mathcal{F'}$ is part of $\ensuremath{\mathcal{F}}$. As a result, since $l'$ does not appear in $\ensuremath{\mathcal{F}}$, it does not appear in $\mathcal{F'}$, nor in $\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}$. It does not appear in $\rho'$ either (since $\rho'$ is part of $\mathcal{F'}$). On the other hand, there might be some $j$ such that $l_j = l'$, so $l'$ might appear in $\rho''$. In that case, we apply the induction hypotheses a first time to rename $l_j$ into some $l_j' \neq l'$.
One can choose $l_j'$ such that it appears neither in $s_{n+1}$, nor in $\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}$, nor in $\rho''\cdot\rho$. As a result, $l_j'$ is fresh. Since $l_j$ is fresh too, and does not appear in $\mathop{\mathrm{dom}}\nolimits(s')$ (because of our preliminary remarks), this leads to a mere substitution in $\rho''$: \[\reductionIF[\env{\rho''[l'_j/l_j]}{\rho'}]{b}{\ajout{s_{n+1}}{l_i[l'_j/l_j]}{v_i}}{v}{s'}{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}\] Once this (potentially) disturbing $l_j$ has been renamed (we ignore it in the rest of the proof), we apply the induction hypotheses a second time to rename $l$ to $l'$: \[\reductionIF[\env{\rho''[l'/l]}{\rho'[l'/l]}]{b}{(\ajout{s_{n+1}}{l_i}{v_i})[l'/l]}{v}{s'[l'/l]}{(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f})[l'/l]}\] Now, $(\ajout{s_{n+1}}{l_i}{v_i})[l'/l] = \ajout{s_{n+1}[l'/l]}{l_i}{v_i}$. Moreover, \[\ensuremath{\mathcal{F}}[l'/l]\ f = \fun{\range{x}{1}{n}}{b}{\rho'[l'/l],\mathcal{F'}[l'/l]}\] and \[(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f})[l'/l] = \ajout{\mathcal{F'}[l'/l]}{f}{\ensuremath{\mathcal{F}}[l'/l]\ f}\] Finally, $\rho''[l'/l] = \rho''$. Hence: \[\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{f(\range{a}{1}{n})}{s_{1}[l'/l]}{v}{\gc{\ensuremath{{\rho_{T}}}[l'/l]}{s'[l'/l]}}{\ensuremath{\mathcal{F}}[l'/l]}.
\] \paragraph{(val)} $\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{v}{s[l'/l]}{v}{\gc{\ensuremath{{\rho_{T}}}[l'/l]}{s[l'/l]}}{\ensuremath{\mathcal{F}}[l'/l]}$ \paragraph{(var)} $s[l'/l](\ensuremath{{\rho_{T}}}[l'/l]\cdot\rho[l'/l]\ x) = s(\ensuremath{{\rho_{T}}}\cdot\rho\ x) = v$ implies \[\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{x}{s[l'/l]}{v}{\gc{\ensuremath{{\rho_{T}}}[l'/l]}{s[l'/l]}}{\ensuremath{\mathcal{F}}[l'/l]}\] \paragraph{(assign)} By the induction hypotheses, \[\reductionIF[\env{}{(\ensuremath{{\rho_{T}}}\cdot\rho)[l'/l]}]{a}{s[l'/l]}{v}{s'[l'/l]}{\ensuremath{\mathcal{F}}[l'/l]}\] Let $s''=\subst{s'}{\ensuremath{{\rho_{T}}}\cdot\rho\ x}{v}$. Then, \[\subst{s'[l'/l]}{(\ensuremath{{\rho_{T}}}\cdot\rho)[l'/l]\ x}{v} = s''[l'/l]\] Hence \[\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s[l'/l]}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}[l'/l]}{s''[l'/l]}}{\ensuremath{\mathcal{F}}[l'/l]}\] \paragraph{(seq)} By the induction hypotheses, \[\reductionIF[\env{}{(\ensuremath{{\rho_{T}}}\cdot\rho)[l'/l]}]{a}{s[l'/l]}{v}{s'[l'/l]}{\ensuremath{\mathcal{F}}[l'/l]}\] Besides, $\mathop{\mathrm{dom}}\nolimits(s')=\mathop{\mathrm{dom}}\nolimits(s)$, therefore $l' \notin \mathop{\mathrm{dom}}\nolimits(s')$. Then, by the induction hypotheses, \[\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{b}{s'[l'/l]}{v'}{s''[l'/l]}{\ensuremath{\mathcal{F}}[l'/l]}\] Hence \[\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{a\ ;\ b}{s[l'/l]}{v'}{s''[l'/l]}{\ensuremath{\mathcal{F}}[l'/l]}\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq). \paragraph{(letrec)} Since $l'$ appears neither in $\rho'$ nor in $\ensuremath{\mathcal{F}}$, it does not appear in $\mathcal{F'}$ either.
By the induction hypotheses, \[\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{b}{s[l'/l]}{v}{s'[l'/l]}{\mathcal{F'}[l'/l]}\] Moreover, \[\mathcal{F'}[l'/l]=\ajout{\ensuremath{\mathcal{F}}[l'/l]}{f}{\fun{\range{x}{1}{n}}{a}{\rho'[l'/l],\ensuremath{\mathcal{F}}[l'/l]}}\] Hence \[\reductionIF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{\letrec{f(\range{x}{1}{n})}{a}{b}}{s[l'/l]}{v}{s'[l'/l]}{\ensuremath{\mathcal{F}}[l'/l]} \qedhere \] \end{proof} To prove that using minimal stores is correct, we need to extend them so as to recover the full stores of naive reduction. The following lemma shows that extending a store before an (intermediate) reduction extends the resulting store too: \begin{lemma}[Extending a store in a derivation] \label{lem:extend-store} \[ \text{If }\reductionI{M}{s}{v}{s'} \text{ then } \forall t \sqsupseteq s, \exists t'\sqsupseteq s', \reductionI{M}{t}{v}{t'}. \] Moreover, both derivations have the same height. \end{lemma} \begin{proof} By induction on the height of the derivation. The most interesting case is (call), which requires alpha-converting a location (hence the induction on the height rather than the structure of the derivation). (var), (val) and (assign) are straightforward by the induction hypotheses and Property~\ref{prop:ext-order}; (seq), (if-true), (if-false) and (letrec) are straightforward by the induction hypotheses. \paragraph{(call)} Let $t_1 \sqsupseteq s_1$. By the induction hypotheses, \begin{align*} \exists t_2 \sqsupseteq s_2&, \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_1}{t_1}{v_1}{t_2}\\ \exists t_{i+1} \sqsupseteq s_{i+1}&, \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_i}{t_i}{v_i}{t_{i+1}}\\ \exists t_{n+1} \sqsupseteq s_{n+1}&, \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_n}{t_n}{v_n}{t_{n+1}} \end{align*} The locations $l_i$ might belong to $\mathop{\mathrm{dom}}\nolimits(t_{n+1})$ and thus not be fresh.
By alpha-conversion (Lemma~\ref{lem:rename-loc}), we choose fresh $l'_i$ (neither in $\mathop{\mathrm{Im}}\nolimits(\rho')$ nor in $\mathop{\mathrm{dom}}\nolimits(s')$) such that \[\reductionIF[\env{(l'_i,v_i)}{\rho'}]{b}{\ajout{s_{n+1}}{l'_i}{v_i}}{v}{s'}% {\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}\] By Property~\ref{prop:ext-order}, $\ajout{t_{n+1}}{l'_i}{v_i} \sqsupseteq \ajout{s_{n+1}}{l'_i}{v_i}$. By the induction hypotheses, \[\exists t' \sqsupseteq s', \reductionIF[\env{(l'_i,v_i)}{\rho'}]{b}{\ajout{t_{n+1}}{l'_i}{v_i}}{v}{t'}% {\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}\] Moreover, $\gc{\ensuremath{{\rho_{T}}}}{t'} \sqsupseteq \gc{\ensuremath{{\rho_{T}}}}{s'}$. Hence, \[\reductionI{f(\range{a}{1}{n})}{t_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}}{t'}}. \] \paragraph{(val)} Let $t \sqsupseteq s$. $\reductionI{v}{t}{v}{\gc{\ensuremath{{\rho_{T}}}}{t}}$ and $\exists t' = \gc{\ensuremath{{\rho_{T}}}}{t} \sqsupseteq \gc{\ensuremath{{\rho_{T}}}}{s} = s'$ (Property~\ref{prop:ext-order}). \paragraph{(var)} Let $t \sqsupseteq s$. $\reductionI{x}{t}{t\ l}{\gc{\ensuremath{{\rho_{T}}}}{t}}$ and $\exists t' = \gc{\ensuremath{{\rho_{T}}}}{t} \sqsupseteq \gc{\ensuremath{{\rho_{T}}}}{s} = s'$ (Property~\ref{prop:ext-order}). Moreover, $t\ l = s\ l$ because $l \in \mathop{\mathrm{dom}}\nolimits(s)$ and $t|_{\mathop{\mathrm{dom}}\nolimits(s)} = s$. \paragraph{(assign)} Let $t \sqsupseteq s$. By the induction hypotheses, \[\exists t' \sqsupseteq s',\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{t}{v}{t'}\] Hence, \[\reductionI{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{t}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}}{\subst{t'}{l}{v}}}\] concludes, since $\gc{\ensuremath{{\rho_{T}}}}{\subst{t'}{l}{v}} \sqsupseteq \gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l}{v}}$ (Property~\ref{prop:ext-order}). \paragraph{(seq)} Let $t \sqsupseteq s$.
By the induction hypotheses, \begin{align*} \exists t' \sqsupseteq s'&, \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{t}{v}{t'}\\ \exists t''\sqsupseteq s''&, \reductionI{b}{t'}{v'}{t''} \end{align*} Hence, \[\exists t''\sqsupseteq s'',\reductionI{a\ ;\ b}{t}{v'}{t''}\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq). \paragraph{(letrec)} Let $t \sqsupseteq s$. By the induction hypotheses, \[\exists t'\sqsupseteq s',\reductionIF{b}{t}{v}{t'}{\mathcal{F'}}\] Hence, \[\exists t'\sqsupseteq s',\reductionI{\letrec{f(\range{x}{1}{n})}{a}{b}}{t}{v}{t'} \qedhere \] \end{proof} Now we can show the required lemmas and prove the equivalence between the intermediate and naive reduction rules. \begin{lemma}[Intermediate implies naive]\label{lem:IimpliesN} \[\text{If }\reductionI{M}{s}{v}{s'} \text{ then } \exists t'\sqsupseteq s', \reductionN[\ensuremath{{\rho_{T}}}\cdot\rho]{M}{s}{v}{t'}. \] \end{lemma} \begin{proof} By induction on the height of the derivation, because some stores are modified during the proof. The interesting cases are (seq) and (call), where Lemma~\ref{lem:extend-store} is used to extend intermediate stores. Other cases are straightforward by Property~\ref{prop:ext-order} and the induction hypotheses. \paragraph{(seq)} By the induction hypotheses, \[\exists t' \sqsupseteq s', \reductionN{a}{s}{v}{t'}.\] Moreover, \[\reductionI{b}{s'}{v'}{s''}.\] Since $t' \sqsupseteq s'$, Lemma~\ref{lem:extend-store} leads to: \[\exists t\sqsupseteq s'', \reductionI{b}{t'}{v'}{t}\] and the height of the derivation is preserved.
By the induction hypotheses, \[\exists t''\sqsupseteq t, \reductionN{b}{t'}{v'}{t''}\] Hence, since $\sqsubseteq$ is transitive (Property~\ref{prop:ext-order}), \[\exists t''\sqsupseteq s'',\reductionN{a\ ;\ b}{s}{v'}{t''}.\] \paragraph{(call)} Similarly to the (seq) case, we apply the induction hypotheses and Lemma~\ref{lem:extend-store}: \begin{align*} \exists t_2 \sqsupseteq s_2&, \reductionN{a_1}{s_1}{v_1}{t_2}&\text{(Induction)}\\ \exists t'_{i+1} \sqsupseteq s_{i+1}&, \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_i}{t_i}{v_i}{t'_{i+1}}&\text{(Lemma~\ref{lem:extend-store})}\\ \exists t_{i+1} \sqsupseteq t'_{i+1} \sqsupseteq s_{i+1}&, \reductionN{a_i}{t_i}{v_i}{t_{i+1}}&\text{(Induction)}\\ \exists t'_{n+1} \sqsupseteq s_{n+1}&, \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_n}{t_n}{v_n}{t'_{n+1}}&\text{(Lemma~\ref{lem:extend-store})}\\ \exists t_{n+1} \sqsupseteq t'_{n+1} \sqsupseteq s_{n+1}&, \reductionN{a_n}{t_n}{v_n}{t_{n+1}}&\text{(Induction)} \end{align*} The locations $l_i$ might belong to $\mathop{\mathrm{dom}}\nolimits(t_{n+1})$ and thus not be fresh. By alpha-conversion (Lemma~\ref{lem:rename-loc}), we choose a set of fresh $l'_i$ (neither in $\mathop{\mathrm{Im}}\nolimits(\rho')$ nor in $\mathop{\mathrm{dom}}\nolimits(s')$) such that \[\reductionIF[\env{(l'_i,v_i)}{\rho'}]{b}{\ajout{s_{n+1}}{l'_i}{v_i}}{v}{s'}% {\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}.\] By Property~\ref{prop:ext-order}, $\ajout{t_{n+1}}{l'_i}{v_i} \sqsupseteq \ajout{s_{n+1}}{l'_i}{v_i}$.
Lemma~\ref{lem:extend-store} leads to \[\exists t \sqsupseteq s', \reductionIF[\env{(l'_i,v_i)}{\rho'}]{b}{\ajout{t_{n+1}}{l'_i}{v_i}}{v}{t}% {\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}.\] By the induction hypotheses, \[\exists t' \sqsupseteq t \sqsupseteq s', \reductionNF[(l'_i,v_i)\cdot\rho']{b}{\ajout{t_{n+1}}{l'_i}{v_i}}{v}{t'}% {\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}.\] Moreover, $\gc{\ensuremath{{\rho_{T}}}}{t'} \sqsupseteq \gc{\ensuremath{{\rho_{T}}}}{s'}$. Hence, \[\reductionN{f(\range{a}{1}{n})}{s_1}{v}{\gc{\ensuremath{{\rho_{T}}}}{t'}}. \] \paragraph{(val)} $\reductionN{v}{s}{v}{t'}$ with $t' = s \sqsupseteq \gc{\ensuremath{{\rho_{T}}}}{s} = s'$. \paragraph{(var)} $\reductionN{x}{s}{s\ l}{t'}$ with $t' = s \sqsupseteq \gc{\ensuremath{{\rho_{T}}}}{s} = s'$. \paragraph{(assign)} By the induction hypotheses, \[\exists t'\sqsupseteq s',\reductionN{a}{s}{v}{t'}\] Hence, \[\reductionN{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\subst{t'}{l}{v}}\] concludes since $\subst{t'}{l}{v}\sqsupseteq\subst{s'}{l}{v}$ (Property~\ref{prop:ext-order}). \paragraph{(if-true) and (if-false)} are proved similarly to (seq). \paragraph{(letrec)} By the induction hypotheses, \[\exists t'\sqsupseteq s',\reductionNF{b}{s}{v}{t'}{\mathcal{F'}}.\] Hence, \[\exists t'\sqsupseteq s',\reductionN{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{t'}. \qedhere \] \end{proof} The proof of the converse property --- i.e.\ if a term reduces in the naive reduction rules, it reduces in the intermediate reduction rules too --- is more complex because the naive reduction rules provide very weak invariants about stores and environments.
For that reason, we add a hypothesis to ensure that every location appearing in the environments $\rho$, $\ensuremath{{\rho_{T}}}$ and $\ensuremath{\mathcal{F}}$ also appears in the store $s$: \[\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(s).\] Moreover, since stores are often larger in the naive reduction rules than in the intermediate ones, we need to generalise the induction hypothesis. \begin{lemma}[Naive implies intermediate]\label{lem:NimpliesI} Assume $\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(s)$. Then, $\reductionN[\ensuremath{{\rho_{T}}}\cdot\rho]{M}{s}{v}{s'}$ implies \[ \forall t \sqsubseteq s \text{ such that } \mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(t), \quad \reductionI{M}{t}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})}}. \] \end{lemma} \begin{proof} By induction on the structure of the derivation. \paragraph{(val)} Let $t \sqsubseteq s$. Then \begin{align*} \gc{\ensuremath{{\rho_{T}}}}{t} &= s|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})} && \text{because $s|_{\mathop{\mathrm{dom}}\nolimits(t)} = t$}\\ &= s'|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})} && \text{because $s' = s$} \end{align*} Hence, \[\reductionI{v}{t}{v}{\gc{\ensuremath{{\rho_{T}}}}{t}}.\] \paragraph{(var)} Let $t \sqsubseteq s$ such that $\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(t)$.
Note that $l \in \mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho) \subset \mathop{\mathrm{dom}}\nolimits(t)$ implies $t\ l = s\ l$. Then, \begin{align*} \gc{\ensuremath{{\rho_{T}}}}{t} &= s|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})} && \text{because $s|_{\mathop{\mathrm{dom}}\nolimits(t)} = t$}\\ &= s'|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})} && \text{because $s' = s$} \end{align*} Hence, \[\reductionI{x}{t}{t\ l}{\gc{\ensuremath{{\rho_{T}}}}{t}}.\] \paragraph{(assign)} Let $t \sqsubseteq s$ such that $\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(t)$. By the induction hypotheses, since $\mathop{\mathrm{Im}}\nolimits(\varepsilon) = \emptyset$, \[\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{t}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(t)}}\] Note that $l \in \mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho) \subset \mathop{\mathrm{dom}}\nolimits(t)$ implies $l \in \mathop{\mathrm{dom}}\nolimits(s'|_{\mathop{\mathrm{dom}}\nolimits(t)})$. Then \begin{align*} \gc{\ensuremath{{\rho_{T}}}}{(\subst{s'|_{\mathop{\mathrm{dom}}\nolimits(t)}}{l}{v})} &=\gc{\ensuremath{{\rho_{T}}}}{(\subst{s'}{l}{v})|_{\mathop{\mathrm{dom}}\nolimits(t)}} &\text{because $l \in \mathop{\mathrm{dom}}\nolimits(s'|_{\mathop{\mathrm{dom}}\nolimits(t)})$}\\ &=(\subst{s'}{l}{v})|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})} \end{align*} Hence, \[ \reductionI{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{t}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}}% {(\subst{s'|_{\mathop{\mathrm{dom}}\nolimits(t)}}{l}{v})}}.
\] \paragraph{(seq)} Let $t \sqsubseteq s$ such that $\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(t)$. By the induction hypotheses, since $\mathop{\mathrm{Im}}\nolimits(\varepsilon) = \emptyset$, \[\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{t}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(t)}}\] Moreover, $s'|_{\mathop{\mathrm{dom}}\nolimits(t)} \sqsubseteq s'$ and $\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(s'|_{\mathop{\mathrm{dom}}\nolimits(t)})=\mathop{\mathrm{dom}}\nolimits(t)$. By the induction hypotheses, this leads to: \[\reductionI{b}{s'|_{\mathop{\mathrm{dom}}\nolimits(t)}}{v'}{s''|_{\mathop{\mathrm{dom}}\nolimits(s'|_{\mathop{\mathrm{dom}}\nolimits(t)})\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})}}.\] Hence, with $\mathop{\mathrm{dom}}\nolimits(s'|_{\mathop{\mathrm{dom}}\nolimits(t)})=\mathop{\mathrm{dom}}\nolimits(t)$, \[\reductionI{a\ ;\ b}{t}{v'}{s''|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})}}.\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq). \paragraph{(letrec)} Let $t \sqsubseteq s$ such that $\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(t)$. 
\[\mathop{\mathrm{Loc}}\nolimits(\mathcal{F'}) = \mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}})\cup\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho) \text{ implies } \mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\mathcal{F'}) \subset \mathop{\mathrm{dom}}\nolimits(t).\] Then, by the induction hypotheses, \[\reductionIF{b}{t}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})}}{\mathcal{F'}}.\] Hence, \[\reductionI{\letrec{f(\range{x}{1}{n})}{a}{b}}{t}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})}}.\] \paragraph{(call)} Let $t \sqsubseteq s_1$ such that $\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(t)$. Note the following facts: \begin{align*} s_1|_{\mathop{\mathrm{dom}}\nolimits(t)} &= t\\ s_2|_{\mathop{\mathrm{dom}}\nolimits(t)} &\sqsubseteq s_2\\ \mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\cup\mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}}) \subset \mathop{\mathrm{dom}}\nolimits(s_2|_{\mathop{\mathrm{dom}}\nolimits(t)}) & = \mathop{\mathrm{dom}}\nolimits(t)\\ s_3|_{\mathop{\mathrm{dom}}\nolimits(s_2|_{\mathop{\mathrm{dom}}\nolimits(t)})} &= s_3|_{\mathop{\mathrm{dom}}\nolimits(t)} \end{align*} By the induction hypotheses, they yield: \begin{align*} \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_1}{t}{v_1}{s_2|_{\mathop{\mathrm{dom}}\nolimits(t)}}\\ \reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_2}{s_2|_{\mathop{\mathrm{dom}}\nolimits(t)}}{v_2}{s_3|_{\mathop{\mathrm{dom}}\nolimits(t)}}\\ \forall i,\reductionI[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_i}{s_i|_{\mathop{\mathrm{dom}}\nolimits(t)}}{v_i}{s_{i+1}|_{\mathop{\mathrm{dom}}\nolimits(t)}} \end{align*} Moreover,
$s_{n+1}|_{\mathop{\mathrm{dom}}\nolimits(t)} \sqsubseteq s_{n+1}$ implies $\ajout{s_{n+1}|_{\mathop{\mathrm{dom}}\nolimits(t)}}{l_i}{v_i} \sqsubseteq \ajout{s_{n+1}}{l_i}{v_i}$ (Property~\ref{prop:ext-order}) and: \begin{align*} \mathop{\mathrm{Im}}\nolimits(\rho''\cdot\rho')\cup\mathop{\mathrm{Loc}}\nolimits(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}) &= \mathop{\mathrm{Im}}\nolimits(\rho'') \cup (\mathop{\mathrm{Im}}\nolimits(\rho')\cup\mathop{\mathrm{Loc}}\nolimits(\mathcal{F'}))\\ &\subset \{l_i\} \cup \mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}})\\ &\subset \{l_i\} \cup \mathop{\mathrm{dom}}\nolimits(t)\\ &\subset \mathop{\mathrm{dom}}\nolimits(\ajout{s_{n+1}|_{\mathop{\mathrm{dom}}\nolimits(t)}}{l_i}{v_i}) \end{align*} Then, by the induction hypotheses, \[\reductionIF[\env{\rho''}{\rho'}]{b}{\ajout{s_{n+1}|_{\mathop{\mathrm{dom}}\nolimits(t)}}{l_i}{v_i}}{v}% {s'|_{\mathop{\mathrm{dom}}\nolimits(\ajout{s_{n+1}|_{\mathop{\mathrm{dom}}\nolimits(t)}}{l_i}{v_i})\setminus\mathop{\mathrm{Im}}\nolimits(\rho'')}}% {\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}\] Finally, \begin{align*} \gc{\ensuremath{{\rho_{T}}}}{s'|_{\mathop{\mathrm{dom}}\nolimits(\ajout{s_{n+1}|_{\mathop{\mathrm{dom}}\nolimits(t)}}{l_i}{v_i})\setminus\mathop{\mathrm{Im}}\nolimits(\rho'')}} &= \gc{\ensuremath{{\rho_{T}}}}{s'|_{\mathop{\mathrm{dom}}\nolimits(t)\cup\{l_i\}\setminus\{l_i\}}} = \gc{\ensuremath{{\rho_{T}}}}{s'|_{\mathop{\mathrm{dom}}\nolimits(t)}}\\ &= (\gc{\ensuremath{{\rho_{T}}}}{s'})|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})} \quad\text{(by definition of $\gc{\cdot}{\cdot}$)} \end{align*} Hence, \[\reductionI{f(\range{a}{1}{n})}{t}{v}% {(\gc{\ensuremath{{\rho_{T}}}}{s'})|_{\mathop{\mathrm{dom}}\nolimits(t)\setminus\mathop{\mathrm{Im}}\nolimits(\ensuremath{{\rho_{T}}})}}. 
\qedhere \] \end{proof} \subsection{Correctness of lambda-lifting\label{sec:correction-ll}} In this section, we prove the correctness of lambda-lifting (Theorem~\ref{thm:lambda-lifting-correctness}, p.~\pageref{thm:lambda-lifting-correctness}) by induction on the height of the optimised reduction. Section~\ref{sec:strong-invariants} defines stronger invariants and rewords the correctness theorem with them. Section~\ref{sec:overview} gives an overview of the proof. Sections~\ref{sec:rewriting-lemmas} and~\ref{sec:aliasing-lemmas} prove a few lemmas needed for the proof. Section~\ref{sec:proof-correctness} contains the actual proof of correctness. \subsubsection{Strengthened hypotheses} \label{sec:strong-invariants} We need strong induction hypotheses to ensure that key invariants about stores and environments hold at every step. For that purpose, we define \emph{aliasing-free environments}, in which locations may not be referenced by more than one variable, and \emph{local positions}. They yield a strengthened version of liftable parameters (Definition~\ref{dfn:var-liftable}). We then define lifted environments (Definition~\ref{dfn:lifted-env}) to mirror the effect of lambda-lifting in lifted terms captured in closures, and finally reformulate the correctness of lambda-lifting in Theorem~\ref{thm:correction-ll} with hypotheses strong enough to be provable directly by induction. \begin{definition}[Aliasing]\label{dfn:aliasing} A set of environments $\mathcal{E}$ is \emph{aliasing-free} when: \[\forall \rho,\rho' \in \mathcal{E}, \forall x \in \mathop{\mathrm{dom}}\nolimits(\rho), \forall y \in \mathop{\mathrm{dom}}\nolimits(\rho'),\ \rho\ x = \rho'\ y \Rightarrow x = y. \] By extension, an environment of functions $\ensuremath{\mathcal{F}}$ is aliasing-free when $\mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}})$ is aliasing-free. 
\end{definition} The notion of aliasing-free environments is not an artifact of our small language, but reflects a fundamental property of the C semantics: distinct function parameters or local variables are always bound to distinct memory locations (Section~6.2.2, paragraph~6 in ISO/IEC 9899 \cite{iso9899}). A local position is any position in a term except inside inner functions. Local positions are used to distinguish functions defined directly in a term from more deeply nested functions, because we need to enforce Invariant~\ref{case:loc} (Definition~\ref{dfn:var-liftable}) on the former only. \begin{definition}[Local position] \emph{Local positions} are defined inductively as follows: \begin{enumerate} \item $M$ is in local position in $M$, $x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= M$, $M \ ;\ M$, \ite{M}{M}{M} and $f(M,\dotsc,M)$. \item $N$ is in local position in \letrec{f(\range{x}{1}{n})}{M}{N}. \qedhere \end{enumerate} \end{definition} We extend the notion of liftable parameter (Definition~\ref{dfn:var-liftable-simple}, p.~\pageref{dfn:var-liftable-simple}) to enforce invariants on stores and environments.
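To illustrate aliasing and local positions, consider the toy term (introduced here purely for exposition) \[\letrec{g(y)}{h(y)}{g(1)\ ;\ z}\] The subterms $g(1)\ ;\ z$, $g(1)$ and $z$ are in local position, whereas the body $h(y)$ of $g$ is not. As for aliasing, a set $\{\rho, \rho'\}$ with $\rho\ y = l$ and $\rho'\ z = l'$, where $l \neq l'$, is aliasing-free, but it is not if $\rho\ y = \rho'\ z$: the distinct variables $y$ and $z$ would then share the same location, which Definition~\ref{dfn:aliasing} forbids.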
\begin{definition}[Extended liftability]\label{dfn:var-liftable} The parameter $x$ is \emph{liftable} in $(M,\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$ when: \begin{enumerate} \item $x$ is defined as the parameter of a function $g$, either in $M$ or in $\ensuremath{\mathcal{F}}$, \label{case:def} \item in both $M$ and $\ensuremath{\mathcal{F}}$, inner functions in $g$, named $h_i$, are defined and called exclusively: \begin{enumerate} \item in tail position in $g$, or \item in tail position in some $h_j$ (with possibly $i=j$), or \item in tail position in $M$, \end{enumerate} \label{case:pos} \item for all $f$ defined in local position in $M$, $x \in \mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho) \Leftrightarrow \exists i, f = h_i$, \label{case:loc} \item moreover, if $h_i$ is called in tail position in $M$, then $x \in \mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}})$, \label{case:term} \item in \ensuremath{\mathcal{F}}, $x$ appears necessarily and exclusively in the environments of the $h_i$'s closures, \label{case:exclu} \item $\ensuremath{\mathcal{F}}$ contains only compact closures and $\mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}})\cup\{\rho,\ensuremath{{\rho_{T}}}\}$ is aliasing-free. \label{case:share} \qedhere \end{enumerate} \end{definition} We also extend the definition of lambda-lifting (Definition~\ref{dfn:lifted-term}, p.~\pageref{dfn:lifted-term}) to environments, in order to reflect changes in lambda-lifted parameters captured in closures. 
\begin{definition}[Lifted form of an environment]\label{dfn:lifted-env} \begin{align*} \text{If } \ensuremath{\mathcal{F}}\,f =& \fun{\range{x}{1}{n}}{b}{\rho',\mathcal{F'}}\qquad\text{then}\\ \lift{\ensuremath{\mathcal{F}}}\ f=& \begin{cases} \fun{\range{x}{1}{n}x}{\lift{b}}{\rho'|_{\mathop{\mathrm{dom}}\nolimits(\rho')\setminus\{x\}},\lift{\mathcal{F'}}}&\text{when $f = h_i$ for some $i$}\\ \fun{\range{x}{1}{n}}{\lift{b}}{\rho',\lift{\mathcal{F'}}}&\text{otherwise} \qedhere \end{cases} \end{align*} \end{definition} Lifted environments are defined such that a liftable parameter never appears in them. This property will be useful during the proof of correctness. \begin{lemma}\label{lem:Fstarclean} If $x$ is a liftable parameter in $(M,\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$, then $x$ does not appear in \lift{\ensuremath{\mathcal{F}}}. \end{lemma} \begin{proof} Since $x$ is liftable in $(M, \ensuremath{\mathcal{F}}, \ensuremath{{\rho_{T}}}, \rho)$, it appears exclusively in the environments of the $h_i$'s closures. By definition, it is removed when building \lift{\ensuremath{\mathcal{F}}}. \qedhere \end{proof} These invariants and definitions lead to a correctness theorem with stronger hypotheses. \begin{theorem}[Correctness of lambda-lifting]\label{thm:correction-ll} If $x$ is a liftable parameter in $(M,\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$, then \[\reduction{M}{s}{v}{s'} \text{ implies } \reductionF{\lift{M}}{s}{v}{s'}{\lift{\ensuremath{\mathcal{F}}}}\] \end{theorem} Since the naive and optimised reduction rules are equivalent (Theorem~\ref{thm:sem-equiv}, p.~\pageref{thm:sem-equiv}), Theorem~\ref{thm:lambda-lifting-correctness} (p.~\pageref{thm:lambda-lifting-correctness}) follows as a direct corollary of this theorem.
\begin{corollary} If $x$ is a liftable parameter in $M$, then \[\exists t, \reductionNF[\varepsilon]{M}{\varepsilon}{v}{t}{\varepsilon} \text{ implies } \exists t', \reductionNF[\varepsilon]{\lift{M}}{\varepsilon}{v}{t'}{\varepsilon}.\] \end{corollary} \subsubsection{Overview of the proof} \label{sec:overview} With the enhanced liftability definition, we have invariants strong enough to perform a proof by induction of the correctness theorem. This proof is detailed in Section~\ref{sec:proof-correctness}. The proof is not by structural induction but by induction on the height of the derivation. This is necessary because, even with the stronger invariants, we cannot apply the induction hypotheses directly to the premises in the case of the (call) rule: we have to change the stores and environments, which means rewriting the whole derivation tree, before using the induction hypotheses. To deal with this most difficult case, we distinguish between calling one of the lifted functions ($f = h_i$) and calling another function (either $g$, where $x$ is defined, or any other function outside of $g$). Only the former requires rewriting; the latter follows directly from the induction hypotheses. In the (call) rule with $f = h_i$, issues arise when reducing the body $b$ of the lifted function. Indeed, during this reduction, the store contains both a new location $l'$, bound by the environment to the lifted variable $x$, and the location $l$, which holds the original value of $x$. Our goal is to show that the reduction of $b$ implies the reduction of $\lift{b}$, with store and environments fulfilling the constraints of the (call) rule.
To obtain the reduction of the lifted body $\lift{b}$, we modify the reduction of $b$ in a series of steps, using several lemmas: \begin{itemize} \item the location $l$ of the free variable $x$ is moved to the tail environment (Lemma~\ref{lem:switch-x}); \item the resulting reduction meets the induction hypotheses, which we apply to obtain the reduction of the lifted body $\lift{b}$; \item however, this reduction does not meet the constraints of the optimised reduction rules because the location $l$ is not fresh: we rename it to a fresh location $l'$ to hold the lifted variable (Lemma~\ref{lem:rename-loc-opt}); \item finally, since we renamed $l$ to $l'$, we need to reintroduce a location $l$ to hold the original value of $x$ (Lemmas~\ref{lem:intro-in-store} and~\ref{lem:intro-in-env}). \end{itemize} The rewriting lemmas used in the (call) case are shown in Section~\ref{sec:rewriting-lemmas}. For every other case, the proof consists in checking thoroughly that the induction hypotheses apply, in particular that $x$ is liftable in the premises. These verifications consist in checking Invariants~\ref{case:loc} to~\ref{case:share} of the extended liftability definition (Definition~\ref{dfn:var-liftable}) --- Invariants~\ref{case:def} and~\ref{case:pos} are obvious enough not to be detailed. To keep the main proof as compact as possible, the most difficult cases of liftability, related to aliasing, are proven in some preliminary lemmas (Section~\ref{sec:aliasing-lemmas}). One last issue arises during the induction when one of the premises does not contain the lifted variable $x$. In that case, the invariants do not hold, since they assume the presence of $x$. But it turns out that in this very case, the lifting function is the identity (since there is no variable to lift) and lambda-lifting is trivially correct. 
\subsubsection{Rewriting lemmas} \label{sec:rewriting-lemmas} Calling a lifted function has an impact on the resulting store: new locations are introduced for the lifted parameters, and the earlier locations, which are no longer modified, are hidden. Because of these changes, the induction hypotheses do not apply directly in the case of the (call) rule for a lifted function $h_i$. We use the following four lemmas to obtain, through several rewriting steps, a reduction of lifted terms meeting the induction hypotheses. \begin{itemize} \item Lemma~\ref{lem:switch-x} shows that moving a variable from the non-tail environment $\rho$ to the tail environment $\ensuremath{{\rho_{T}}}$ does not change the result, but restricts the domain of the store. It is used to transform the original free variable $x$ (in the non-tail environment) into its lifted copy (which is a parameter of $h_i$, hence in the tail environment). \item Lemma~\ref{lem:rename-loc-opt} handles alpha-conversion in stores and is used when choosing a fresh location. \item Lemmas~\ref{lem:intro-in-store} and~\ref{lem:intro-in-env} finally add a fresh location, bound to an arbitrary value, into the store and the environment. They are used to reintroduce the location containing the original value of $x$, after it has been alpha-converted to $l'$. \end{itemize} \begin{lemma}[Switching to tail environment]\label{lem:switch-x} If $\reduction[\env{\ensuremath{{\rho_{T}}}}{(x,l)\cdot\rho}]{M}{s}{v}{s'}$ and $x \notin \mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}})$ then $\reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{M}{s}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}}$. Moreover, both derivations have the same height. \end{lemma} \begin{proof} By induction on the structure of the derivation.
For the (val), (var), (assign) and (call) cases, we use the fact that $\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s} = s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}$ when $s' = \gc{\ensuremath{{\rho_{T}}}}{s}$. \paragraph{(val)} $\reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{v}{s}{v}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s}}$ and $\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s} = s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}$ with $s' = \gc{\ensuremath{{\rho_{T}}}}{s}$. \paragraph{(var)} $\reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{y}{s}{s\ l'}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s}}$ and $\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s} = s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}$, with $l' = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho\ y$ and $s' = \gc{\ensuremath{{\rho_{T}}}}{s}$. \paragraph{(assign)} By hypothesis, $\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho}]{a}{s}{v}{s'}$ hence $\reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{y \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\subst{s'}{l'}{v}}}$ and $\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\subst{s'}{l'}{v}} = s''|_{\mathop{\mathrm{dom}}\nolimits(s'')\setminus\{l\}}$ with $l' = \ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho\ y$ and $s'' = \gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l'}{v}}$. \paragraph{(seq)} By hypothesis, $\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot(x,l)\cdot\rho}]{a}{s}{v'}{s'}$ and, by the induction hypotheses, $\reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{b}{s'}{v}{s''|_{\mathop{\mathrm{dom}}\nolimits(s'')\setminus\{l\}}}$ hence \[\reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{a\ ;\ b}{s}{v}{s''|_{\mathop{\mathrm{dom}}\nolimits(s'')\setminus\{l\}}}.\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq).
\paragraph{(letrec)} By the induction hypotheses, \[\reductionF[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{b}{s}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}}{\mathcal{F'}}\] hence \[\reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}}\] \paragraph{(call)} The hypotheses do not change, and the conclusion becomes: \[\reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{f(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s'}}\] as expected, since $\gc{\ensuremath{{\rho_{T}}}\cdot(x,l)}{s'} = s''|_{\mathop{\mathrm{dom}}\nolimits(s'')\setminus\{l\}}$ with $s'' = \gc{\ensuremath{{\rho_{T}}}}{s'}$. \qedhere \end{proof} \begin{lemma}[Alpha-conversion]\label{lem:rename-loc-opt} If \reduction{M}{s}{v}{s'} then, for all $l$, for all $l'$ appearing neither in $s$ nor in $\ensuremath{\mathcal{F}}$ nor in $\rho\cdot\ensuremath{{\rho_{T}}}$, \[\reductionF[\env{\ensuremath{{\rho_{T}}}[l'/l]}{\rho[l'/l]}]{M}{s[l'/l]}{v}{s'[l'/l]}{\ensuremath{\mathcal{F}}[l'/l]}\] Moreover, both derivations have the same height. \end{lemma} \begin{proof} See Lemma~\ref{lem:rename-loc}, p.~\pageref{lem:rename-loc}. \qedhere \end{proof} \begin{lemma}[Spurious location in store]\label{lem:intro-in-store} If $\reduction{M}{s}{v}{s'}$ and $k$ appears neither in $s$, nor in $\ensuremath{\mathcal{F}}$, nor in $\ensuremath{{\rho_{T}}}\cdot\rho$, then, for every value $u$, $\reduction{M}{\ajout{s}{k}{u}}{v}{\ajout{s'}{k}{u}}$. Moreover, both derivations have the same height. \end{lemma} \begin{proof} By induction on the height of the derivation. The key idea is to add $(k,u)$ to every store in the derivation tree. A collision might occur in the (call) rule, if there is some $j$ such that $l_j = k$. In that case, we need to rename $l_j$ to some fresh location $l'_j \neq k$ (by alpha-conversion) before applying the induction hypotheses.
\paragraph{(call)} By the induction hypotheses, \[\forall i, \reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_i}{\ajout{s_i}{k}{u}}{v_i}{\ajout{s_{i+1}}{k}{u}}\] Because $k$ does not appear in $\ensuremath{\mathcal{F}}$, \[k \notin \mathop{\mathrm{Loc}}\nolimits(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}) \subset \mathop{\mathrm{Loc}}\nolimits(\ensuremath{\mathcal{F}})\] For the same reason, it does not appear in $\rho'$. On the other hand, there might be a $j$ such that $l_j = k$, so $k$ might appear in $\rho''$. In that case, we rename $l_j$ to some fresh $l'_j \neq k$, appearing neither in $s_{n+1}$, nor in $\mathcal{F'}$, nor in $\rho''\cdot\rho'$ (Lemma~\ref{lem:rename-loc-opt}). After this alpha-conversion, $k$ appears in none of $\rho''\cdot\rho'$, $\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}$ and $\ajout{s_{n+1}}{l_i}{v_i}$. By the induction hypotheses, \[\reductionF[\env{\rho''}{\rho'}]{b}{\ajout{\ajout{s_{n+1}}{l_i}{v_i}}{k}{u}}{v}{\ajout{s'}{k}{u}}{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}\] Moreover, $\gc{\ensuremath{{\rho_{T}}}}{\ajout{s'}{k}{u}} = \ajout{\gc{\ensuremath{{\rho_{T}}}}{s'}}{k}{u}$ (since $k$ does not appear in $\ensuremath{{\rho_{T}}}$). Hence \[\reduction{f(\range{a}{1}{n})}{\ajout{s_{1}}{k}{u}}{v}{\gc{\ensuremath{{\rho_{T}}}}{\ajout{s'}{k}{u}}}. \] \paragraph{(val)} $\reduction{v}{\ajout{s}{k}{u}}{v}{\gc{\ensuremath{{\rho_{T}}}}{\ajout{s}{k}{u}}}$ and $\gc{\ensuremath{{\rho_{T}}}}{\ajout{s}{k}{u}} = \ajout{\gc{\ensuremath{{\rho_{T}}}}{s}}{k}{u}$ since $k$ does not appear in $\ensuremath{{\rho_{T}}}$. \paragraph{(var)} $\reduction{x}{\ajout{s}{k}{u}}{(\ajout{s}{k}{u})\ l}{\gc{\ensuremath{{\rho_{T}}}}{\ajout{s}{k}{u}}}$, with $\gc{\ensuremath{{\rho_{T}}}}{\ajout{s}{k}{u}} = \ajout{\gc{\ensuremath{{\rho_{T}}}}{s}}{k}{u}$ since $k$ does not appear in $\ensuremath{{\rho_{T}}}$, and $(\ajout{s}{k}{u})\ l = s\ l$ since $k \neq l$ ($k$ does not appear in $s$).
\paragraph{(assign)} By the induction hypotheses, $\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{\ajout{s}{k}{u}}{v}{\ajout{s'}{k}{u}}$. Since $k$ does not appear in $s$, $k \neq l$, hence $\subst{\ajout{s'}{k}{u}}{l}{v} = \ajout{\subst{s'}{l}{v}}{k}{u}$. Moreover, $k$ does not appear in $\ensuremath{{\rho_{T}}}$, so $\gc{\ensuremath{{\rho_{T}}}}{\ajout{\subst{s'}{l}{v}}{k}{u}} = \ajout{\gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l}{v}}}{k}{u}$. Hence \[\reduction{x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{\ajout{s}{k}{u}}{\ensuremath{\mathbf{1}}}{\ajout{\gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l}{v}}}{k}{u}}\] \paragraph{(seq)} By the induction hypotheses, \[\reduction[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{\ajout{s}{k}{u}}{v}{\ajout{s'}{k}{u}}\] \[\reduction{b}{\ajout{s'}{k}{u}}{v'}{\ajout{s''}{k}{u}}\] Hence \[\reduction{a\ ;\ b}{\ajout{s}{k}{u}}{v'}{\ajout{s''}{k}{u}}\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq). \paragraph{(letrec)} The location $k$ does not appear in $\mathcal{F'}$, because it does not appear in either $\ensuremath{\mathcal{F}}$ or $\rho'\subset\ensuremath{{\rho_{T}}}\cdot\rho$ ($\mathcal{F'}=\ajout{\ensuremath{\mathcal{F}}}{f}{\fun{\range{x}{1}{n}}{a}{\rho',\ensuremath{\mathcal{F}}}}$). Then, by the induction hypotheses, \[\reductionF{b}{\ajout{s}{k}{u}}{v}{\ajout{s'}{k}{u}}{\mathcal{F'}}\] Hence \[\reduction{\letrec{f(\range{x}{1}{n})}{a}{b}}{\ajout{s}{k}{u}}{v}{\ajout{s'}{k}{u}}. \qedhere \] \end{proof} \begin{lemma}[Spurious variable in environments]\label{lem:intro-in-env} \begin{align*} \forall l,l', \reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{\rho}]{M}{s}{v}{s'} \ensuremath{\quad\text{iff}\quad}& \reduction[\env{\ensuremath{{\rho_{T}}}\cdot(x,l)}{(x,l')\cdot\rho}]{M}{s}{v}{s'} \end{align*} Moreover, both derivations have the same height. \end{lemma} \begin{proof} See Lemma~\ref{lem:intro-in-env2}, p.~\pageref{lem:intro-in-env2}.
\qedhere \end{proof} \subsubsection{Aliasing lemmas} \label{sec:aliasing-lemmas} We need three lemmas to show that environments remain aliasing-free during the proof by induction in Section~\ref{sec:proof-correctness}. The first lemma states that concatenating two environments in an aliasing-free set yields an aliasing-free set. The other two prove that the aliasing invariant (Invariant~\ref{case:share}, Definition~\ref{dfn:var-liftable}) holds in the context of the (call) and (letrec) rules, respectively. \begin{lemma}[Concatenation]\label{lem:alias-concat} If $\mathcal{E}\cup\{\rho,\rho'\}$ is aliasing-free then $\mathcal{E}\cup\{\rho\cdot\rho'\}$ is aliasing-free. \end{lemma} \begin{proof} By exhaustive case analysis. We want to prove \begin{align*} \forall \rho_1,\rho_2 \in \mathcal{E}\cup\{\rho\cdot\rho'\}, \forall x \in \mathop{\mathrm{dom}}\nolimits(\rho_1), \forall y \in \mathop{\mathrm{dom}}\nolimits(\rho_2),\ \rho_1\ x = \rho_2\ y \Rightarrow x = y.\\ \intertext{given that} \forall \rho_1,\rho_2 \in \mathcal{E}\cup\{\rho,\rho'\}, \forall x \in \mathop{\mathrm{dom}}\nolimits(\rho_1), \forall y \in \mathop{\mathrm{dom}}\nolimits(\rho_2),\ \rho_1\ x = \rho_2\ y \Rightarrow x = y. \end{align*} If $\rho_1\in \mathcal{E}$ and $\rho_2\in \mathcal{E}$, immediate. If $\rho_1 = \rho\cdot\rho'$, then $\rho_1\ x$ is either $\rho\ x$ or $\rho'\ x$. The same holds for $\rho_2$. Then $\rho_1\ x = \rho_2\ y$ amounts to $\rho\ x = \rho'\ y$ (or some other combination, depending on $x$, $y$, $\rho_1$ and $\rho_2$), which leads to the expected result. \qedhere \end{proof} \begin{lemma}[Aliasing in (call) rule]\label{lem:aliasing-call} Assume that, in a (call) rule, \begin{itemize} \item $\ensuremath{\mathcal{F}}\,f = \fun{\range{x}{1}{n}}{b}{\rho',\mathcal{F'}}$, \item $\mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}})$ is aliasing-free, and \item $\rho''= \drange{x}{l}{1}{n}$, with fresh and distinct locations $l_{i}$.
\end{itemize} Then $\mathop{\mathrm{Env}}\nolimits(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f})\cup\{\rho',\rho''\}$ is also aliasing-free. \end{lemma} \begin{proof} Let $\mathcal{E} = \mathop{\mathrm{Env}}\nolimits(\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f})\cup\{\rho'\}$. We know that $\mathcal{E}\subset\mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}})$ so $\mathcal{E}$ is aliasing-free. We want to show that adding the fresh and distinct locations of $\rho''$ preserves this absence of aliasing. More precisely, we want to show that \begin{align*} \forall \rho_1,\rho_2 \in \mathcal{E}\cup\{\rho''\}, \forall x \in \mathop{\mathrm{dom}}\nolimits(\rho_1), \forall y \in \mathop{\mathrm{dom}}\nolimits(\rho_2),\ \rho_1\ x = \rho_2\ y \Rightarrow x = y\\ \intertext{given that} \forall \rho_1,\rho_2 \in \mathcal{E}, \forall x \in \mathop{\mathrm{dom}}\nolimits(\rho_1), \forall y \in \mathop{\mathrm{dom}}\nolimits(\rho_2),\ \rho_1\ x = \rho_2\ y \Rightarrow x = y. \end{align*} We reason by checking all cases. If $\rho_1\in \mathcal{E}$ and $\rho_2\in \mathcal{E}$, immediate. If $\rho_1 = \rho_2 = \rho''$ then $\rho''\ x = \rho''\ y \Rightarrow x = y$ holds because the locations of $\rho''$ are distinct. If $\rho_1 = \rho''$ and $\rho_2 \in \mathcal{E}$ then $\rho_1\ x = \rho_2\ y \Rightarrow x = y$ holds because $\rho_1\ x \neq \rho_2\ y$ (by freshness hypothesis). \qedhere \end{proof} \begin{lemma}[Aliasing in (letrec) rule]\label{lem:aliasing-letrec} If\/ $\mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}})\cup\{\rho,\ensuremath{{\rho_{T}}}\}$ is aliasing-free, then, for all $x_i$, \[\mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}}) \cup \{\rho,\ensuremath{{\rho_{T}}}\} \cup \{\ensuremath{{\rho_{T}}}\cdot\rho\ |_{\mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\setminus\{\range{x}{1}{n}\}}\}\] is aliasing-free.
\end{lemma} \begin{proof} Let $\mathcal{E} = \mathop{\mathrm{Env}}\nolimits(\ensuremath{\mathcal{F}})\cup\{\rho,\ensuremath{{\rho_{T}}}\}$ and $\rho'' = \ensuremath{{\rho_{T}}}\cdot\rho|_{\mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\setminus\{\range{x}{1}{n}\}}$. Adding $\rho''$, a restricted concatenation of $\ensuremath{{\rho_{T}}}$ and $\rho$, to $\mathcal{E}$ preserves aliasing freedom, as in the proof of Lemma~\ref{lem:alias-concat}. If $\rho_1\in \mathcal{E}$ and $\rho_2\in \mathcal{E}$, immediate. If $\rho_1 = \rho''$, then $\rho_1\ x$ is either $\ensuremath{{\rho_{T}}}\ x$ or $\rho\ x$. The same holds for $\rho_2$. Then $\rho_1\ x = \rho_2\ y$ amounts to $\ensuremath{{\rho_{T}}}\ x = \rho\ y$ (or some other combination, depending on $x$, $y$, $\rho_1$ and $\rho_2$), which leads to the expected result. \qedhere \end{proof} \subsubsection{Proof of correctness} \label{sec:proof-correctness} We finally show Theorem~\ref{thm:correction-ll}. \begin{xrefthm}{thm:correction-ll} If $x$ is a liftable parameter in $(M,\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$, then \[\reduction{M}{s}{v}{s'} \text{ implies } \reductionF{\lift{M}}{s}{v}{s'}{\lift{\ensuremath{\mathcal{F}}}}\] \end{xrefthm} Assume that $x$ is a liftable parameter in $(M,\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$. The proof is by induction on the height of the reduction of $\reduction{M}{s}{v}{s'}$. To keep the proof readable, we detail only the non-trivial cases when checking the invariants of Definition~\ref{dfn:var-liftable} to ensure that the induction hypotheses hold. \paragraph{(call) --- first case} First, we consider the most interesting case where there exists $i$ such that $f = h_i$. The variable $x$ is a liftable parameter in $(h_i(\range{a}{1}{n}),\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$ hence in $(a_i,\ensuremath{\mathcal{F}},\varepsilon,\ensuremath{{\rho_{T}}}\cdot\rho)$ too.
Indeed, the invariants of Definition~\ref{dfn:var-liftable} hold: \begin{itemize} \item Invariant~\ref{case:loc}: By definition of a local position, every $f$ defined in local position in $a_i$ is in local position in $h_i(\range{a}{1}{n})$, hence the expected property by the induction hypotheses. \item Invariant~\ref{case:term}: Immediate since the premise does not hold: the $a_i$ are not in tail position in $h_i(\range{a}{1}{n})$, so they cannot feature calls to $h_i$ (by Invariant~\ref{case:pos}). \item Invariant~\ref{case:share}: Lemma~\ref{lem:alias-concat}, p.~\pageref{lem:alias-concat}. \end{itemize} The other invariants hold trivially. By the induction hypotheses, we get \[\reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_{i}}{s_{i}}{v_{i}}{s_{i+1}}.\] By definition of lifting, $\lift{h_i(\range{a}{1}{n})} = h_i(\lift{a_{1}},\dotsc,\lift{a_{n}},x)$. But $x$ is not a liftable parameter in $(b,\mathcal{F'},\rho'',\rho')$ since Invariant~\ref{case:term} might be broken: $x \notin \mathop{\mathrm{dom}}\nolimits(\rho'')$ ($x$ is not a parameter of $h_i$) but $h_j$ might appear in tail position in $b$. On the other hand, we have $x \in \mathop{\mathrm{dom}}\nolimits(\rho')$: since, by hypothesis, $x$ is a liftable parameter in $(h_i(\range{a}{1}{n}),\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$, it necessarily appears in the environments of the closures of the $h_i$, such as $\rho'$. This allows us to split $\rho'$ into two parts: $\rho' = (x,l)\cdot\rho'''$. It is then possible to move $(x,l)$ to the tail environment, according to Lemma~\ref{lem:switch-x}: \[\reductionF[\env{\rho''(x,l)}{\rho'''}]{b}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}}{\ \ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}\] This rewriting ensures that $x$ is a liftable parameter in $(b,\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f},\rho''\cdot(x,l),\rho''')$.
Indeed, the invariants of Definition~\ref{dfn:var-liftable} hold: \begin{itemize} \item Invariant~\ref{case:loc}: Every function defined in local position in $b$ is an inner function in $h_i$ so, by Invariant~\ref{case:pos}, it is one of the $h_i$ and $x \in \mathop{\mathrm{dom}}\nolimits(\rho''\cdot(x,l)\cdot\rho''')$. \item Invariant~\ref{case:term}: Immediate since $x \in \mathop{\mathrm{dom}}\nolimits(\rho''\cdot(x,l)\cdot\rho''')$. \item Invariant~\ref{case:exclu}: Immediate since $\mathcal{F'}$ is included in $\ensuremath{\mathcal{F}}$. \item Invariant~\ref{case:share}: Immediate for the compact closures. Aliasing freedom is guaranteed by Lemma~\ref{lem:aliasing-call} (p.~\pageref{lem:aliasing-call}). \end{itemize} The other invariants hold trivially. By the induction hypotheses, \[\reductionF[\env{\rho''(x,l)}{\rho'''}]{\lift{b}}{\ajout{s_{n+1}}{l_i}{v_i}}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}}{\ \lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}}\] The location $l$ is not fresh: it must be rewritten into a fresh location, since $x$ is now a parameter of $h_i$. Let $l'$ be a location appearing neither in $\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}$, nor in $\ajout{s_{n+1}}{l_i}{v_i}$, nor in $\rho''\cdot(x,l)\cdot\rho'''$. Then $l'$ is a fresh location, which will play the role of $l$ in the reduction of $\lift{b}$. We will show that, after the reduction, $l'$ is not in the store (just like $l$ before the lambda-lifting). Meanwhile, the value associated with $l$ does not change (since $l'$ is modified instead of $l$). Lemma~\ref{lem:Fstarclean} implies that $x$ does not appear in the environments of \lift{\ensuremath{\mathcal{F}}}, so it does not appear in the environments of $\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}\subset\lift{\ensuremath{\mathcal{F}}}$ either.
As a consequence, lack of aliasing implies by Definition~\ref{dfn:aliasing} that the location $l$, associated with $x$, does not appear in $\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}$ either, so \[\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}[l'/l] = \lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}.\] Moreover, $l$ does not appear in $s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}$. By alpha-conversion (Lemma~\ref{lem:rename-loc-opt}), since $l'$ appears neither in the store nor in the environments of the reduction, we rename $l$ to $l'$: \[\reductionF[\env{\rho''(x,l')}{\rho'''}]{\lift{b}}{\ajout{s_{n+1}[l'/l]}{l_i}{v_i}}{v}{s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}}{\ \lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}}.\] We now want to reintroduce $l$. Let $v_x = s_{n+1}\ l$. The location $l$ does not appear in $\ajout{s_{n+1}[l'/l]}{l_i}{v_i}$, $\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}$, or $\rho''(x,l')\cdot\rho'''$.
Thus, by Lemma~\ref{lem:intro-in-store}, \[\reductionF[\env{\rho''(x,l')}{\rho'''}]{\lift{b}}{\ajout{\ajout{s_{n+1}[l'/l]}{l_i}{v_i}}{l}{v_x}}{v}{\ajout{s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}}{l}{v_x}}{\ \lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}}.\] Since \begin{align*} \ajout{\ajout{s_{n+1}[l'/l]}{l_i}{v_i}}{l}{v_x} &= \ajout{\ajout{s_{n+1}[l'/l]}{l}{v_x}}{l_i}{v_i} && \text{because $\forall i, l \neq l_i$}\\ &= \ajout{\ajout{s_{n+1}}{l'}{v_x}}{l_i}{v_i} && \text{because $v_x = s_{n+1} l$}\\ &= \ajout{\ajout{s_{n+1}}{l_i}{v_i}}{l'}{v_x} && \text{because $\forall i, l' \neq l_i$} \end{align*} and $\ajout{s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\{l\}}}{l}{v_x} = \ajout{s'}{l}{v_x}$, we finish the rewriting by Lemma~\ref{lem:intro-in-env}, \[\reductionF[\env{\rho''(x,l')}{(x,l)\cdot\rho'''}]{\lift{b}}{\ajout{\ajout{s_{n+1}}{l_i}{v_i}}{l'}{v_x}}{v}{\ \ajout{s'}{l}{v_x}}{\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}}.\] Hence the result: \[\inferrule*[Left=(call)]{\ \lift{\ensuremath{\mathcal{F}}}\ h_i = \fun{\range{x}{1}{n}x}{\lift{b}}{\rho',\lift{\mathcal{F'}}}\\ \rho''= \drange{x}{l}{1}{n}(x,\ensuremath{{\rho_{T}}}\ x)\\ \text{$l'$ and $l_{i}$ fresh and distinct}\\\\ \forall i,\reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_{i}}{s_{i}}{v_{i}}{s_{i+1}} \\ \reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{x}{s_{n+1}}{v_x}{s_{n+1}} \\ \reductionF[\env{\rho''(x,l')}{\rho'}]{\lift{b}}{\ajout{\ajout{s_{n+1}}{l_i}{v_i}}{l'}{v_x}}{v}{\ \ajout{s'}{l}{v_x}}{\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}} }{\ \reduclift{h_i(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}}{\ajout{s'}{l}{v_x}}}}\] Since $l \in \mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}})$ (because $x$ is a liftable parameter in $(h_i(\range{a}{1}{n}),\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$), the extraneous location is reclaimed as expected: $\gc{\ensuremath{{\rho_{T}}}}{\ajout{s'}{l}{v_x}} = 
\gc{\ensuremath{{\rho_{T}}}}{s'}$. \paragraph{(call) --- second case} We now consider the case where $f$ is not one of the $h_i$. The variable $x$ is a liftable parameter in $(f(\range{a}{1}{n}),\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$ hence in $(a_i,\ensuremath{\mathcal{F}},\varepsilon,\ensuremath{{\rho_{T}}}\cdot\rho)$ too. Indeed, the invariants of Definition~\ref{dfn:var-liftable} hold: \begin{itemize} \item Invariant~\ref{case:loc}: By definition of a local position, every $f$ defined in local position in $a_i$ is in local position in $f(\range{a}{1}{n})$, hence the expected property by the induction hypotheses. \item Invariant~\ref{case:term}: Immediate since the premise does not hold: the $a_i$ are not in tail position in $f(\range{a}{1}{n})$, so they cannot feature calls to $h_i$ (by Invariant~\ref{case:pos}). \item Invariant~\ref{case:share}: Lemma~\ref{lem:alias-concat}, p.~\pageref{lem:alias-concat}. \end{itemize} The other invariants hold trivially. By the induction hypotheses, we get \[\reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_{i}}{s_{i}}{v_{i}}{s_{i+1}},\] and, by Definition~\ref{dfn:lifted-term}, \[\lift{f(\range{a}{1}{n})} = f(\lift{a_{1}},\dotsc,\lift{a_{n}}).\] If $x$ is not defined in $b$ or $\ensuremath{\mathcal{F}}$, then $\lift{}$ is the identity function and can trivially be applied to the reduction of $b$. Otherwise, $x$ is a liftable parameter in $(b,\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f},\rho'',\rho')$. Indeed, the invariants of Definition~\ref{dfn:var-liftable} hold. Assume that $x$ is defined as a parameter of some function $g$, in either $b$ or $\ensuremath{\mathcal{F}}$: \begin{itemize} \item Invariant~\ref{case:loc}: We have to distinguish the cases where $f = g$ (with $x \in \mathop{\mathrm{dom}}\nolimits(\rho'')$) and $f \neq g$ (with $x \notin \mathop{\mathrm{dom}}\nolimits(\rho'')$ and $x \notin \mathop{\mathrm{dom}}\nolimits(\rho')$).
In both cases, the result is immediate by the induction hypotheses. \item Invariant~\ref{case:term}: If $f \neq g$, the premise cannot hold (by the induction hypotheses, Invariant~\ref{case:pos}). If $f = g$, $x \in \mathop{\mathrm{dom}}\nolimits(\rho'')$ (by the induction hypotheses, Invariant~\ref{case:pos}). \item Invariant~\ref{case:exclu}: Immediate since $\mathcal{F'}$ is included in $\ensuremath{\mathcal{F}}$. \item Invariant~\ref{case:share}: Immediate for the compact closures. Aliasing freedom is guaranteed by Lemma~\ref{lem:aliasing-call} (p.~\pageref{lem:aliasing-call}). \end{itemize} The other invariants hold trivially. By the induction hypotheses, \[\reductionF[\env{\rho''}{\rho'}]{\lift{b}}{\ajout{s_{n+1}}{l_{i}}{v_{i}}}{v}{s'}{\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}}\] hence: \[\inferrule*[Left=(call)]{\ \lift{\ensuremath{\mathcal{F}}}\ f = \fun{\range{x}{1}{n}}{\lift{b}}{\rho',\lift{\mathcal{F'}}}\\ \rho''= \drange{x}{l}{1}{n}\\ \text{$l_{i}$ fresh and distinct}\\\\ \forall i,\reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a_{i}}{s_{i}}{v_{i}}{s_{i+1}} \\ \reductionF[\env{\rho''}{\rho'}]{\lift{b}}{\ajout{s_{n+1}}{l_{i}}{v_{i}}}{v}{s'}{\lift{\ajout{\mathcal{F'}}{f}{\ensuremath{\mathcal{F}}\,f}}} }{\ \reduclift{f(\range{a}{1}{n})}{s_{1}}{v}{\gc{\ensuremath{{\rho_{T}}}}{s'}}}\] \paragraph{(letrec)} The parameter $x$ is liftable in $(\letrec{f(\range{x}{1}{n})}{a}{b},\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$ so $x$ is a liftable parameter in $(b,\mathcal{F'},\ensuremath{{\rho_{T}}},\rho)$ too. Indeed, the invariants of Definition~\ref{dfn:var-liftable} hold: \begin{itemize} \item Invariants~\ref{case:loc} and~\ref{case:term}: Immediate by the induction hypotheses and definition of tail and local positions. \item Invariant~\ref{case:exclu}: By the induction hypotheses, Invariant~\ref{case:loc} ($x$ appears in the new closure if and only if $f = h_i$).
\item Invariant~\ref{case:share}: Lemma~\ref{lem:aliasing-letrec} (p.~\pageref{lem:aliasing-letrec}). \end{itemize} The other invariants hold trivially. By the induction hypotheses, we get \[\reductionF{\lift{b}}{s}{v}{s'}{\lift{\mathcal{F'}}}.\] If $f \neq h_i$, \[\lift{ \letrec{f(\range{x}{1}{n})}{a}{b} } = \letrec{f(\range{x}{1}{n})}{\lift{a}}{\lift{b}}\] hence, by definition of $\lift{\mathcal{F'}}$, \[\inferrule*[Left=(letrec)]{\ \reductionF{\lift{b}}{s}{v}{s'}{\lift{\mathcal{F'}}} \\\\ \rho' = \ensuremath{{\rho_{T}}}\cdot\rho|_{\mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\setminus\{\range{x}{1}{n}\}} \\ \lift{\mathcal{F'}}= \ajout{\lift{\ensuremath{\mathcal{F}}}}{f}{\fun{\range{x}{1}{n}}{\lift{a}}{\rho',F}} }{\ \reduclift{\letrec{f(\range{x}{1}{n})}{a}{b}}{s}{v}{s'}}\] On the other hand, if $f = h_i$, \[\lift{ \letrec{f(\range{x}{1}{n})}{a}{b} } = \letrec{f(\range{x}{1}{n}x)}{\lift{a}}{\lift{b}}\] hence, by definition of $\lift{\mathcal{F'}}$, \[\inferrule*[Left=(letrec)]{\ \reductionF{\lift{b}}{s}{v}{s'}{\lift{\mathcal{F'}}} \\\\ \rho' = \ensuremath{{\rho_{T}}}\cdot\rho|_{\mathop{\mathrm{dom}}\nolimits(\ensuremath{{\rho_{T}}}\cdot\rho)\setminus\{\range{x}{1}{n}x\}} \\ \lift{\mathcal{F'}}= \ajout{\lift{\ensuremath{\mathcal{F}}}}{h_i}{\fun{\range{x}{1}{n}x}{\lift{a}}{\rho',F}} }{\ \reduclift{\letrec{h_i(\range{x}{1}{n})}{a}{b}}{s}{v}{s'}}\] \paragraph{(val)} $\lift{v}=v$ so \[\inferrule*[Left=(val)]{ }{\reduclift{v}{s}{v}{\gc{\ensuremath{{\rho_{T}}}}{s}}}\] \paragraph{(var)} $\lift{y}=y$ so \[\inferrule*[Left=(var)]{\ensuremath{{\rho_{T}}}\cdot\rho\ y = l \in \mathop{\mathrm{dom}}\nolimits\ s}{\reduclift{y}{s}{s\ l}{\gc{\ensuremath{{\rho_{T}}}}{s}}}\] \paragraph{(assign)} The parameter $x$ is liftable in $(y \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a,\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$ so in $(a,\ensuremath{\mathcal{F}},\varepsilon,\ensuremath{{\rho_{T}}}\cdot\rho)$ too. 
Indeed, the invariants of Definition~\ref{dfn:var-liftable} hold: \begin{itemize} \item Invariant~\ref{case:share}: Lemma~\ref{lem:alias-concat}, p.~\pageref{lem:alias-concat}. \end{itemize} The other invariants hold trivially. By the induction hypotheses, we get \[\reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'}.\] Moreover \[\lift{y \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}=y\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \lift{a},\] so: \[\inferrule*[Left=(assign)]{\reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'} \\ \ensuremath{{\rho_{T}}}\cdot\rho\ y = l \in \mathop{\mathrm{dom}}\nolimits\ s'}{\ \reduclift{y \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= a}{s}{\ensuremath{\mathbf{1}}}{\gc{\ensuremath{{\rho_{T}}}}{\subst{s'}{l}{v}}}}\] \paragraph{(seq)} The parameter $x$ is liftable in $(a\ ;\ b,\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$. If $x$ is not defined in $a$ or $\ensuremath{\mathcal{F}}$, then $\lift{}$ is the identity function and can trivially be applied to the reduction of $a$. Otherwise, $x$ is a liftable parameter in $(a,\ensuremath{\mathcal{F}},\varepsilon,\ensuremath{{\rho_{T}}}\cdot\rho)$. Indeed, the invariants of Definition~\ref{dfn:var-liftable} hold: \begin{itemize} \item Invariant~\ref{case:share}: Lemma~\ref{lem:alias-concat}, p.~\pageref{lem:alias-concat}. \end{itemize} The other invariants hold trivially. If $x$ is not defined in $b$ or $\ensuremath{\mathcal{F}}$, then $\lift{}$ is the identity function and can trivially be applied to the reduction of $b$. Otherwise, $x$ is a liftable parameter in $(b,\ensuremath{\mathcal{F}},\ensuremath{{\rho_{T}}},\rho)$. Indeed, the invariants of Definition~\ref{dfn:var-liftable} hold trivially.
By the induction hypotheses, we get $\reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'}$ and $\reduclift{b}{s'}{v'}{s''}$.\\ Moreover, \[\lift{a\ ;\ b} = \lift{a}\ ;\ \lift{b},\] hence: \[\inferrule*[Left=(seq)]{\reduclift[\env{}{\ensuremath{{\rho_{T}}}\cdot\rho}]{a}{s}{v}{s'} \\ \reduclift{b}{s'}{v'}{s''}}{\ \reduclift{a\ ;\ b}{s}{v'}{s''}}\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq).\qedhere \newpage \section{CPS conversion}\label{sec:cps-conversion} In this section, we prove the correctness of the CPS-conversion performed by the CPC translator. This conversion is defined only on a subset of C programs that we call \emph{CPS-convertible terms} (Section~\ref{sec:cps-convertible}). We first show that the \emph{early evaluation} of function parameters in CPS-convertible terms is correct (Section~\ref{sec:early-eval}). To simplify the proof of correctness of CPS-conversion, we then introduce small-step reduction rules featuring contexts and early evaluation (Section~\ref{sec:ss-reduction}). In Section~\ref{sec:cps-terms}, we define \emph{CPS terms}, with the \texttt{push} and \texttt{invoke} operators to build and execute continuations, and the associated reduction rules. Since the syntax of CPS terms does not ensure a correct reduction, we also define \emph{well-formed} CPS terms, which are the image of CPS-convertible terms by CPS-conversion. The proof of correctness of CPS-conversion is finally carried out in Section~\ref{sec:translation}. It consists merely in checking that the reduction rules for CPS-convertible terms and well-formed CPS terms execute in lock-step. \subsection{CPS-convertible form}\label{sec:cps-convertible} CPS conversion is not defined for every C function; instead, we restrict ourselves to a subset of functions, which we call the {\em CPS-convertible\/} subset. The CPS-convertible form restricts the calls to cps functions to make it straightforward to capture their continuation.
In CPS-convertible form, a call to a cps function \texttt{f} is either in tail position, or followed by a tail call to another cps function whose parameters are \emph{non-shared} variables that cannot be modified by \texttt{f}. In the C language, we define the CPS-convertible form as follows: \begin{definition}[CPS-convertible form]\label{def:cps-c} A function {\tt h} is in \emph{CPS-convertible form} if every call to a cps function that it contains matches one of the following patterns, where both \texttt{f} and \texttt{g} are cps functions, \texttt{e$_\text{\tt 1}$, ..., e$_\text{\tt n}$} are any C expressions and \texttt{x, y$_\text{\tt 1}$, ..., y$_\text{\tt n}$} are distinct, non-shared variables: \begin{eqnarray} \text{\tt return f(e$_\text{\tt 1}$, ..., e$_\text{\tt n}$);}\\ \text{\tt x = f(e$_\text{\tt 1}$, ..., e$_\text{\tt n}$); return g(x, y$_\text{\tt 1}$, ..., y$_\text{\tt n}$);}\\ \text{\tt f(e$_\text{\tt 1}$, ..., e$_\text{\tt n}$); return g(x, y$_\text{\tt 1}$, ..., y$_\text{\tt n}$);}\label{useless1}\\ \text{\tt f(e$_\text{\tt 1}$, ..., e$_\text{\tt n}$); return;}\\ \text{\tt f(e$_\text{\tt 1}$, ..., e$_\text{\tt n}$); g(x, y$_\text{\tt 1}$, ..., y$_\text{\tt n}$); return;}\\ \text{\tt x = f(e$_\text{\tt 1}$, ..., e$_\text{\tt n}$); g(x, y$_\text{\tt 1}$, ..., y$_\text{\tt n}$); return;}\label{useless2} \end{eqnarray} \qedhere \end{definition} Note the use of \texttt{return} to explicitly mark calls in tail position. The forms (\ref{useless1}) to (\ref{useless2}) are only necessary to handle the cases where \texttt{f} and \texttt{g} return \texttt{void}; in the rest of the proof, we ignore these cases that are a syntactical detail of the C language, and focus on the essential cases (1) and (2). To prove the correctness of CPS-conversion, we need to express this definition in our small imperative language. 
This is done by defining CPS-convertible terms, which are a subset of the terms introduced in Definition~\ref{def:full-language} (Section~\ref{sec:definitions}). A program in CPS-convertible form consists of a set of mutually-recursive functions with no free variables, the body of each of which is a CPS-convertible term. A CPS\hyp{}convertible term has two parts: the head and the tail. The head is a (possibly empty) sequence of assignments, possibly embedded within conditional statements. The tail is a (possibly empty) sequence of function calls in a highly restricted form: their parameters are (side-effect free) expressions, except possibly for the last one, which can be another function call of the same form. Values and expressions are left unchanged. \begin{definition}[CPS-convertible terms]\label{def:cps-language} \begin{align*} v \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \quad\ensuremath{\mathbf{1}} \;|\; \ensuremath{\mathop{\mathbf{true}}\nolimits} \;|\; \ensuremath{\mathop{\mathbf{false}}\nolimits} \;|\; n \in \mathbf{N}\tag{values}\\ \ensuremath{\mathop{expr}\nolimits\/} \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \quad v \;|\; x\;|\; \ldots\tag{expressions}\\ F \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}) \;|\; f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, F) \tag{nested function calls}\\ Q \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \epsilon \;|\; Q\ ;\ F \tag{tail}\\ T \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \ensuremath{\mathop{expr}\nolimits\/} \;|\; x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T \;|\; \ite{e}{T}{T} \;|\; Q\tag{head} \end{align*} 
\end{definition} The essential property of CPS\hyp{}convertible terms, which makes their CPS conversion immediate to perform, is the guarantee that there is no cps call outside of the tails. It makes continuations easy to represent as a series of function calls (tails) and separates them clearly from imperative blocks (heads), which are not modified by the CPC translator. The tails are a generalisation of Definition~\ref{def:cps-c}, which will be useful for the proof of correctness of CPS-conversion. Note that {\tt x = f(e$_\text{\tt 1}$, ..., e$_\text{\tt n}$); return g(x, y$_\text{\tt 1}$, ..., y$_\text{\tt n}$)} is represented by $g(f(\range{e}{1}{n}), \range{y}{1}{n})$: this translation is correct because, contrary to C, our language guarantees a left-to-right evaluation of function parameters. Also noteworthy are the facts that: \begin{itemize} \item there is no letrec construct any more since every function is defined at top-level, \item assignments, conditions and function parameters of $f$ are restricted to expressions, to ensure that function calls only appear in tail position, \item there is no need to forbid shared variables in the parameters of $g$ because they are ruled out of our language by design. \end{itemize} \subsection{Early evaluation} \label{sec:early-eval} In this section, we prove the correctness of \emph{early evaluation}, i.e.\ evaluating the expressions $\ensuremath{\mathop{expr}\nolimits\/}$ before $F$ when reducing $f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, F)$ in a tail. This result is necessary to show the correctness of the CPS-conversion, because function parameters are evaluated before any function call when building continuations. The reduction rules may be simplified somewhat for CPS-convertible terms. We do not need to keep an explicit environment of functions since there are no inner functions any more; for the same reason, the (letrec) rule disappears.
Instead, we use a constant environment $\ensuremath{\mathcal{F}}$ holding every function used in the reduced term $M$. To account for the absence of free variables, the closures in $\ensuremath{\mathcal{F}}$ need not carry an environment. As a result, in the (call) rule, $\rho'=\varepsilon$ and $\mathcal{F'}=\ensuremath{\mathcal{F}}$. Early evaluation is correct for lifted terms because a lifted term can never modify the variables that are not in its environment, since it cannot access them through closures. \begin{lemma}\label{lem:lift-store-invariant} Let $M$ be a lambda-lifted term. Then, \[\reductionN{M}{s}{v}{s'}\] implies \[s|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=s'|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}.\] \end{lemma} \begin{proof} By induction on the structure of the reduction. The key points are the use of $\rho'=\varepsilon$ in the (call) case, and the absence of (letrec) rules. \paragraph{(val) and (var)} Trivial ($s = s'$). 
\paragraph{(assign)} By the induction hypotheses, \[s|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=s'|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)} \text{ and } l \in \mathop{\mathrm{Im}}\nolimits(\rho),\] hence \[s|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=(\subst{s'}{l}{v})|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}.\] \paragraph{(seq)} By the induction hypotheses, \[s|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=s'|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)} \text{ and } s'|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=s''|_{\mathop{\mathrm{dom}}\nolimits(s')\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}.\] Since, $\mathop{\mathrm{dom}}\nolimits(s)\subset\mathop{\mathrm{dom}}\nolimits(s')$, the second equality can be restricted to \[s'|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=s''|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}.\] Hence, \[s|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=s''|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}.\] \paragraph{(if-true) and (if-false)} are proved similarly to (seq). \paragraph{(letrec)} doesn't occur since $M$ is lambda-lifted. 
\paragraph{(call)} By the induction hypotheses, \[ (\subst{s_{n+1}}{l_i}{v_i})|_{\mathop{\mathrm{dom}}\nolimits(\subst{s_{n+1}}{l_i}{v_i})\setminus\mathop{\mathrm{Im}}\nolimits(\rho''\cdot\rho')}= s'|_{\mathop{\mathrm{dom}}\nolimits(\subst{s_{n+1}}{l_i}{v_i})\setminus\mathop{\mathrm{Im}}\nolimits(\rho''\cdot\rho')}\] Since $\rho' = \varepsilon$, $\mathop{\mathrm{Im}}\nolimits(\rho'') = \{l_i\}$ and $\mathop{\mathrm{dom}}\nolimits(s_{n+1})\cap\{l_i\} = \emptyset$ (by freshness), \[ (\subst{s_{n+1}}{l_i}{v_i})|_{\mathop{\mathrm{dom}}\nolimits(s_{n+1})}= s'|_{\mathop{\mathrm{dom}}\nolimits(s_{n+1})}\] so $s_{n+1} = s'|_{\mathop{\mathrm{dom}}\nolimits(s_{n+1})}$.\\ Since $\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)\subset\mathop{\mathrm{dom}}\nolimits(s)\subset\mathop{\mathrm{dom}}\nolimits(s_{n+1})$, \[s_{n+1}|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)} = s'|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}.\] Finally, we can prove similarly to the (seq) case that \[s|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=s_{n+1}|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}.\] Hence, \[s|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}=s'|_{\mathop{\mathrm{dom}}\nolimits(s)\setminus\mathop{\mathrm{Im}}\nolimits(\rho)}. \qedhere\] \end{proof} As a consequence, a tail of function calls cannot modify the current store, only extend it with the parameters of the called functions. \begin{corollary}\label{cor:lift-store-invariant} For every tail $Q$, \[\reductionN{Q}{s}{v}{s'} \text{ implies } s \sqsubseteq s'.\] \end{corollary} \begin{proof} We prove the corollary by induction on the structure of a tail. 
First remember that \emph{store extension} (written $\sqsubseteq$) is a partial order over stores (Property~\ref{prop:ext-order}), defined in Section~\ref{subsec:second-step} as follows: $s \sqsubseteq s' \ensuremath{\quad\text{iff}\quad} s'|_{\mathop{\mathrm{dom}}\nolimits(s)} = s$. The case $\epsilon$ is trivial. The case $Q\ ;\ F$ is immediate by induction ((seq) rule), since $\sqsubseteq$ is transitive. Similarly, the case $f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, F)$ follows by induction and transitivity from the case $f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/})$ ((call) rule). We focus on this last case. Lemma~\ref{lem:lift-store-invariant} implies: \[ (\subst{s_{n+1}}{l_i}{v_i})|_{\mathop{\mathrm{dom}}\nolimits(\subst{s_{n+1}}{l_i}{v_i})\setminus\mathop{\mathrm{Im}}\nolimits(\rho''\cdot\rho')}= s'|_{\mathop{\mathrm{dom}}\nolimits(\subst{s_{n+1}}{l_i}{v_i})\setminus\mathop{\mathrm{Im}}\nolimits(\rho''\cdot\rho')}.\] Since $\rho' = \varepsilon$, $\mathop{\mathrm{Im}}\nolimits(\rho'') = \{l_i\}$ and $\mathop{\mathrm{dom}}\nolimits(s_{n+1})\cap\{l_i\} = \emptyset$ (by freshness), \[ (\subst{s_{n+1}}{l_i}{v_i})|_{\mathop{\mathrm{dom}}\nolimits(s_{n+1})}= s'|_{\mathop{\mathrm{dom}}\nolimits(s_{n+1})}\] so $s_{n+1} = s'|_{\mathop{\mathrm{dom}}\nolimits(s_{n+1})}$. The evaluation of the $\ensuremath{\mathop{expr}\nolimits\/}$ parameters does not change the store: $s_{n+1}=s$. The expected result follows: $s = s'|_{\mathop{\mathrm{dom}}\nolimits(s)}$, hence $s \sqsubseteq s'$. \end{proof} This leads to the correctness of early evaluation. \begin{theorem}[Early evaluation] \label{thm:preevaluation} For every tail $Q$, $\reductionN{Q}{s}{v}{s'}$ implies $\reductionN{Q[x\setminus s(\rho\ x)]}{s}{v}{s'}$ (provided $x\in\mathop{\mathrm{dom}}\nolimits(\rho)$ and $\rho\ x\in\mathop{\mathrm{dom}}\nolimits(s)$).
\end{theorem} \begin{proof} Immediate induction on the structure of tails and expressions: Corollary~\ref{cor:lift-store-invariant} implies that $s \sqsubseteq s''$ and $\rho\ x\in\mathop{\mathrm{dom}}\nolimits(s)$ ensures that $s(\rho\ x) = s''(\rho\ x)$ in the relevant cases (namely the (seq) rule for $Q\ ;\ F$ and the (call) rule for $f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, F)$). \end{proof} \subsection{Small-step reduction} \label{sec:ss-reduction} We define the semantics of CPS\hyp{}convertible terms through a set of small-step reduction rules. We distinguish three kinds of reductions: $\rightarrow_T$ to reduce the head of terms, $\rightarrow_Q$ to reduce the tail, and $\rightarrow_e$ to evaluate expressions. These rules describe a stack machine with a store $\sigma$ to hold the values of variables. Since free and shared variables have been eliminated in earlier passes, there is a direct correspondence at any point in the program between variable names and locations, with no need to dynamically maintain an extra environment. We use contexts as a compact representation for stacks. The head rules $\rightarrow_T$ reduce triples made of a term, a context and a store: $\langle T, C[\ ], \sigma \rangle$. The tail rules $\rightarrow_Q$, which merely unfold tails with no need of a store, reduce pairs of a tail and a context: $\langle Q, C[\ ] \rangle$. The expression rules do not need a context to reduce; they operate on pairs made of an expression and a store: $\langle e, \sigma \rangle$. \paragraph{Contexts} Contexts are sequences of function calls. In those sequences, function parameters must already be evaluated: constant expressions are allowed, but not variables. As a special case, the last parameter might be a ``hole'' instead, written $\circleddash$, to be filled with the return value of the next, nested function.
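For instance (an illustration of ours, not part of the original development), while the inner call of the tail $\epsilon\ ;\ g(3, f(1, 2))$ is being reduced, the pending call to $g$ is suspended in the context
\[ [\ ]\ ;\ g(3, \circleddash), \]
whose hole $\circleddash$ will be filled with the return value of $f(1, 2)$. The context thus plays the role of a call stack whose frames contain only evaluated parameters.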
\begin{definition}[Contexts] Contexts are defined inductively: \[C \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= [\ ] \;|\; C[[\ ]\ ;\ f(v, \ldots, v)] \;|\; C[[\ ]\ ;\ f(v, \ldots, v, \circleddash)] \] \end{definition} \begin{definition}[CPS\hyp{}convertible reduction rules] \begin{align} \langle x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T, C[\ ], \sigma \rangle &\rightarrow_T \langle T, C[\ ], \sigma[x\mapsto v] \rangle \\&\quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2}, C[\ ], \sigma \rangle &\rightarrow_T \langle T_1, C[\ ], \sigma \rangle \\&\quad\text{when }\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star \ensuremath{\mathop{\mathbf{true}}\nolimits}\notag\\ \langle \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2}, C[\ ], \sigma \rangle &\rightarrow_T \langle T_2, C[\ ], \sigma \rangle \\&\quad\text{when }\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star \ensuremath{\mathop{\mathbf{false}}\nolimits}\notag\\ \langle \ensuremath{\mathop{expr}\nolimits\/}, C[[\ ]\ ;\ f(v_1, \ldots, v_n)], \sigma \rangle &\rightarrow_T \langle \epsilon, C[[\ ]\ ;\ f(v_1, \ldots, v_n)] \rangle\\ \langle \ensuremath{\mathop{expr}\nolimits\/}, C[[\ ]\ ;\ f(v_1, \ldots, v_n, \circleddash)], \sigma \rangle &\rightarrow_T \langle \epsilon, C[[\ ]\ ;\ f(v_1, \ldots, v_n, v)] \rangle \\&\quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle \ensuremath{\mathop{expr}\nolimits\/}, [\ ], \sigma \rangle &\rightarrow_T v \quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle Q, C[\ ], \sigma \rangle &\rightarrow_T \langle Q[x_i\setminus \sigma\ x_i], C[\ ] \rangle \label{rule:preevaluation} 
\\&\quad\text{for every $x_i$ in $\mathop{\mathrm{dom}}\nolimits(\sigma)$}\notag\\ \notag\\ \langle Q\ ;\ f(v_1, \ldots, v_n), C[\ ] \rangle &\rightarrow_Q \langle Q, C[[\ ]\ ;\ f(v_1, \ldots, v_n)] \rangle \label{rule:context} \\ \langle Q\ ;\ f(v_1, \ldots, v_n, F), C[\ ] \rangle &\rightarrow_Q \langle Q\ ;\ F, C[[\ ]\ ;\ f(v_1, \ldots, v_n, \circleddash)] \rangle \label{rule:context-hole} \\ \langle \epsilon, C[[\ ]\ ;\ f(v_1, \ldots, v_n)] \rangle &\rightarrow_Q \langle T, C[\ ], \sigma \rangle \label{rule:exec} \\&\quad\text{when } f(x_1,\ldots,x_n) = T \text{ and }\sigma = \{x_i\mapsto v_i\}\notag \end{align} We do not detail the rules for $\rightarrow_e$, which simply look up variables in $\sigma$ and evaluate arithmetic and boolean operators. \end{definition} \paragraph{Early evaluation} Note that Rule~\ref{rule:preevaluation} evaluates every function parameter in a tail before the evaluation of the tail itself. This is precisely the early evaluation process described above, which is correct by Theorem~\ref{thm:preevaluation}. We introduce early evaluation directly in the reduction rules, rather than using it as a lemma, to simplify the proof of correctness of the CPS-conversion. \subsection{CPS terms} \label{sec:cps-terms} Unlike classical CPS conversion techniques \cite{plotkin}, our CPS terms are not continuations, but procedures that build and execute the continuation of a term. Construction is performed by $\ensuremath{\mathop{\mathbf{push}}\nolimits}$, which adds a function to the current continuation, and execution by $\ensuremath{\mathop{\mathbf{invoke}}\nolimits}$, which calls the first function of the continuation, optionally passing it the return value of the current function.
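Before the formal definition, the intended behaviour of $\ensuremath{\mathop{\mathbf{push}}\nolimits}$ and $\ensuremath{\mathop{\mathbf{invoke}}\nolimits}$ can be simulated by a small stack machine. The following Python sketch is ours (CPC itself generates C; all names below are purely illustrative): a continuation is a list of pending calls with already-evaluated parameters, and \texttt{HOLE} plays the role of $\boxdot$.

```python
# Minimal simulation (our own naming, not CPC's actual runtime) of the
# push/invoke discipline.  A continuation is a stack of pending calls
# whose parameters are already evaluated; HOLE stands for the "box"
# parameter, to be filled with the return value of the next function.

HOLE = object()

def push(cont, f, *args):
    """Prepend a pending call to the continuation."""
    return [(f, list(args))] + cont

def invoke(cont, result=None):
    """Execute the continuation, threading return values through holes."""
    while cont:
        (fn, args), cont = cont[0], cont[1:]
        args = [result if a is HOLE else a for a in args]
        result = fn(*args)
    return result

def f(a, b):          # plays the role of an inner cps call
    return a + b

def g(x, y):          # receives f's return value through the hole
    return x * y

# Building the continuation for the tail  g(3, f(1, 2)):
# push g with a hole for its second parameter, then push f.
cont = push(push([], g, 3, HOLE), f, 1, 2)
print(invoke(cont))   # 3 * (1 + 2) = 9
```

In CPC proper the converted functions build the continuation themselves; here an external loop threads the return values, which is enough to see how the result of $f$ reaches the hole in the pending call to $g$.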
\begin{definition}[CPS terms] \begin{align*} v \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \quad\ensuremath{\mathbf{1}} \;|\; \ensuremath{\mathop{\mathbf{true}}\nolimits} \;|\; \ensuremath{\mathop{\mathbf{false}}\nolimits} \;|\; n \in \mathbf{N}\tag{values}\\ \ensuremath{\mathop{expr}\nolimits\/} \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \quad v \;|\; x\;|\; \ldots\tag{expressions}\\ Q \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \ensuremath{\mathop{\mathbf{invoke}}\nolimits} \;|\; \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/})\ ;\ Q \;|\; \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, \boxdot)\ ;\ Q\tag{tail}\\ T \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= & \invoke\ \ensuremath{\mathop{expr}\nolimits\/} \;|\; x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T \;|\; \ite{e}{T}{T} \;|\; Q\tag{head} \end{align*} \end{definition} \paragraph{Continuations and reduction rules} A continuation is a sequence of function calls to be performed, with already evaluated parameters. We write $\cdot$ for appending a function to a continuation, and $\boxdot$ for a ``hole'', i.e.\ an unknown parameter. \begin{definition}[Continuations] \[\mathcal{C} \mathrel{\mathop\ordinarycolon}\mkern-.9mu\mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \varepsilon \;|\; \ f(v, \ldots, v) \cdot \mathcal{C} \;|\; \ f(v, \ldots, v, \boxdot) \cdot \mathcal{C} \] \end{definition} The reduction rules for CPS terms are isomorphic to the rules for CPS\hyp{}convertible terms, except that they use continuations instead of contexts. 
\begin{definition}[CPS reduction rules] {\allowdisplaybreaks \begin{align} \langle x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T, \mathcal{C}, \sigma \rangle &\rightarrow_T \langle T, \mathcal{C}, \sigma[x\mapsto v] \rangle \\&\quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2}, \mathcal{C}, \sigma \rangle &\rightarrow_T \langle T_1, \mathcal{C}, \sigma \rangle \\&\quad\text{if }\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star \ensuremath{\mathop{\mathbf{true}}\nolimits}\notag\\ \langle \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2}, \mathcal{C}, \sigma \rangle &\rightarrow_T \langle T_2, \mathcal{C}, \sigma \rangle \\&\quad\text{if }\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star \ensuremath{\mathop{\mathbf{false}}\nolimits}\notag\\ \langle \invoke\ \ensuremath{\mathop{expr}\nolimits\/}, f(v_1, \ldots, v_n) \cdot \mathcal{C}, \sigma \rangle &\rightarrow_T \langle \ensuremath{\mathop{\mathbf{invoke}}\nolimits}, f(v_1, \ldots, v_n) \cdot \mathcal{C} \rangle\\ \langle \invoke\ \ensuremath{\mathop{expr}\nolimits\/}, f(v_1, \ldots, v_n, \boxdot) \cdot \mathcal{C}, \sigma \rangle &\rightarrow_T \langle \ensuremath{\mathop{\mathbf{invoke}}\nolimits}, f(v_1, \ldots, v_n, v) \cdot \mathcal{C} \rangle \\&\quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle \invoke\ \ensuremath{\mathop{expr}\nolimits\/}, \varepsilon, \sigma \rangle &\rightarrow_T v \quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle Q, \mathcal{C}, \sigma \rangle &\rightarrow_T \langle Q[x_i\setminus \sigma\ x_i], \mathcal{C} \rangle \\&\quad\text{for every $x_i$ in $\mathop{\mathrm{dom}}\nolimits(\sigma)$}\notag\\ \notag\\ \langle 
\ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(v_1, \ldots, v_n)\ ;\ Q, \mathcal{C} \rangle &\rightarrow_Q \langle Q, f(v_1, \ldots, v_n) \cdot \mathcal{C} \rangle\\ \langle \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(v_1, \ldots, v_n, \boxdot)\ ;\ Q, \mathcal{C} \rangle &\rightarrow_Q \langle Q, f(v_1, \ldots, v_n, \boxdot) \cdot \mathcal{C} \rangle\\ \langle \ensuremath{\mathop{\mathbf{invoke}}\nolimits}, f(v_1, \ldots, v_n) \cdot \mathcal{C} \rangle &\rightarrow_Q \langle T, \mathcal{C}, \sigma \rangle \\&\quad\text{when } f(x_1,\ldots,x_n) = T \text{ and }\sigma = \{x_i\mapsto v_i\}\notag \end{align} } \end{definition} \paragraph{Well-formed terms} Not all CPS terms lead to a correct reduction. If we $\ensuremath{\mathop{\mathbf{push}}\nolimits}$ a function expecting the result of another function and $\ensuremath{\mathop{\mathbf{invoke}}\nolimits}$ it immediately, the reduction blocks: \[\langle \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(v_1, \ldots, v_n, \boxdot)\ ;\ \ensuremath{\mathop{\mathbf{invoke}}\nolimits}, \mathcal{C}, \sigma \rangle \rightarrow \langle \ensuremath{\mathop{\mathbf{invoke}}\nolimits}, f(v_1, \ldots, v_n, \boxdot) \cdot \mathcal{C}, \sigma \rangle \not\rightarrow\] \emph{Well-formed terms} avoid this behaviour. \begin{definition}[Well-formed term] A continuation queue is \emph{well-formed} if it does not end with: \[\ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/},\ldots, \ensuremath{\mathop{expr}\nolimits\/}, \boxdot)\ ;\ \ensuremath{\mathop{\mathbf{invoke}}\nolimits}.\] A term is \emph{well-formed} if every continuation queue in this term is well-formed. \end{definition} \subsection{Correctness of the CPS-conversion}\label{sec:translation} We define the CPS conversion as a mapping from CPS\hyp{}convertible terms to CPS terms.
\begin{definition}[CPS conversion] \begin{align*} (Q\ ;\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}))^\blacktriangle &= \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/})\ ;\ Q^\blacktriangle\\ (Q\ ;\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, F))^\blacktriangle &= \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, \boxdot)\ ;\ (Q\ ;\ F)^\blacktriangle\\ \epsilon^\blacktriangle &= \ensuremath{\mathop{\mathbf{invoke}}\nolimits}\\ (x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T)^\blacktriangle &= x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T^\blacktriangle\\ (\ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2})^\blacktriangle &= \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1^\blacktriangle}{T_2^\blacktriangle}\\ \ensuremath{\mathop{expr}\nolimits\/}^\blacktriangle &= \invoke\ \ensuremath{\mathop{expr}\nolimits\/} \end{align*} \end{definition} In the rest of this section, we prove that this mapping yields an isomorphism between the reduction rules of CPS\hyp{}convertible terms and well-formed CPS terms, whence the correctness of our CPS conversion (Theorem~\ref{thm:cps-correct}). We first prove two lemmas to show that $^\blacktriangle$ yields only well-formed CPS terms. This leads to a third lemma to show that $^\blacktriangle$ is a bijection between CPS\hyp{}convertible terms and well-formed CPS terms. CPS\hyp{}convertible terms have been carefully designed to make CPS conversion as simple as possible. Accordingly, the following three proofs, while long and tedious, are fairly trivial. \begin{lemma} Let $Q$ be a continuation queue. Then $Q^\blacktriangle$ is well-formed. 
\end{lemma} \begin{proof} By induction on the structure of a tail. \[ \epsilon^\blacktriangle = \ensuremath{\mathop{\mathbf{invoke}}\nolimits} \] and \[ (\epsilon\ ;\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}))^\blacktriangle = \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/})\ ;\ \ensuremath{\mathop{\mathbf{invoke}}\nolimits} \] are well-formed by definition. \[ ((Q\ ;\ F)\ ;\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}))^\blacktriangle = \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/})\ ;\ (Q\ ;\ F)^\blacktriangle \] and \[ (Q\ ;\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, F))^\blacktriangle = \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, \boxdot)\ ;\ (Q\ ;\ F)^\blacktriangle \] are well-formed by induction. \end{proof} \begin{lemma} Let $T$ be a CPS\hyp{}convertible term. Then $T^\blacktriangle$ is well-formed. \end{lemma} \begin{proof} Induction on the structure of $T$, using the above lemma. \end{proof} \begin{lemma}\label{lemma:cps-iso} The $^\blacktriangle$ relation is a bijection between CPS\hyp{}convertible terms and well-formed CPS terms. 
\end{lemma} \begin{proof} Consider the following mapping from well-formed CPS terms to CPS\hyp{}convertible terms: \begin{align*} (\ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/})\ ;\ Q)^\blacktriangledown &= Q^\blacktriangledown\ ;\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/})\\ (\ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, \boxdot)\ ;\ Q)^\blacktriangledown &= Q'\ ;\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, F)\\ &\quad\text{with $Q^\blacktriangledown = Q'\ ;\ F$}\tag{*}\\ \ensuremath{\mathop{\mathbf{invoke}}\nolimits}^\blacktriangledown &= \epsilon\\ (x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T)^\blacktriangledown &= x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T^\blacktriangledown\\ (\ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2})^\blacktriangledown &= \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1^\blacktriangledown}{T_2^\blacktriangledown}\\ (\invoke\ \ensuremath{\mathop{expr}\nolimits\/})^\blacktriangledown &= \ensuremath{\mathop{expr}\nolimits\/} \end{align*} (*) The existence of $Q'$ is guaranteed by well-formedness: \begin{itemize} \item $\forall T,\ T^\blacktriangledown=\epsilon\ \Rightarrow\ T=\ensuremath{\mathop{\mathbf{invoke}}\nolimits}$ (by disjunction on the definition of~$^\blacktriangledown$), \item here, $Q\neq\ensuremath{\mathop{\mathbf{invoke}}\nolimits}$ because $(\ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(\ensuremath{\mathop{expr}\nolimits\/}, \ldots, \ensuremath{\mathop{expr}\nolimits\/}, \boxdot)\ ;\ Q)$ is well-formed, \item hence $Q^\blacktriangledown\neq\epsilon$.
\end{itemize} One checks easily that $(T^\blacktriangledown)^\blacktriangle=T$ and $(T^\blacktriangle)^\blacktriangledown=T$. \end{proof} To conclude the proof of isomorphism, we also need an (obviously bijective) mapping from contexts to continuations: \begin{definition}[Conversion of contexts] \begin{align*} ([\ ])^\vartriangle &= \varepsilon\\ (C[[\ ]\ ;\ f(v_1, \ldots, v_n)])^\vartriangle &= f(v_1, \ldots, v_n) \cdot \mathcal{C} \\&\quad\text{with $(C[\ ])^\vartriangle = \mathcal{C}$}\\ (C[[\ ]\ ;\ f(v_1, \ldots, v_n, \circleddash)])^\vartriangle &= f(v_1,\ldots, v_n, \boxdot) \cdot \mathcal{C} \\&\quad\text{with $(C[\ ])^\vartriangle = \mathcal{C}$} \end{align*} \end{definition} The correctness theorem follows: \begin{theorem}[Correctness of CPS conversion]\label{thm:cps-correct} The $^\blacktriangle$ and $^\vartriangle$ mappings are two bijections, the inverses of which are written $^\blacktriangledown$ and $^\triangledown$. They yield an isomorphism between reduction rules of CPS\hyp{}convertible terms and CPS terms. \end{theorem} \begin{proof} Lemma~\ref{lemma:cps-iso} ensures that $^\blacktriangle$ is a bijection between CPS\hyp{}convertible terms and well-formed CPS terms. Moreover, $^\vartriangle$ is an obvious bijection between contexts and continuations. To complete the proof, we only need to apply $^\blacktriangle$, $^\vartriangle$, $^\blacktriangledown$ and $^\triangledown$ to CPS\hyp{}convertible terms, contexts, well-formed CPS terms and continuations (respectively) in every reduction rule and check that we get a valid rule in the dual reduction system. The result is summarized in Figure~\ref{fig:iso}. 
\end{proof} \begin{landscape} \begin{figure} \begin{align*} \langle x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T, C[\ ], \sigma \rangle &\rightarrow_T \langle T, C[\ ], \sigma[x\mapsto v] \rangle &\Leftrightarrow&& \langle x \mathrel{\mathop\ordinarycolon}\mkern-1.2mu= \ensuremath{\mathop{expr}\nolimits\/}\ ;\ T, \mathcal{C}, \sigma \rangle &\rightarrow_T \langle T, \mathcal{C}, \sigma[x\mapsto v] \rangle \\&&&&&\quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2}, C[\ ], \sigma \rangle &\rightarrow_T \langle T_1, C[\ ], \sigma \rangle &\Leftrightarrow&& \langle \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2}, \mathcal{C}, \sigma \rangle &\rightarrow_T \langle T_1, \mathcal{C}, \sigma \rangle \\&&&&&\quad\text{if }\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star \ensuremath{\mathop{\mathbf{true}}\nolimits}\notag\\ \langle \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2}, C[\ ], \sigma \rangle &\rightarrow_T \langle T_2, C[\ ], \sigma \rangle &\Leftrightarrow&& \langle \ite{\ensuremath{\mathop{expr}\nolimits\/}}{T_1}{T_2}, \mathcal{C}, \sigma \rangle &\rightarrow_T \langle T_2, \mathcal{C}, \sigma \rangle \\&&&&&\quad\text{if }\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star \ensuremath{\mathop{\mathbf{false}}\nolimits}\notag\\ \langle \ensuremath{\mathop{expr}\nolimits\/}, C[[\ ]\ ;\ f(v_1, \ldots, v_n)], \sigma \rangle &\rightarrow_T \langle \epsilon, C[[\ ]\ ;\ f(v_1, \ldots, v_n)] \rangle &\Leftrightarrow&& \langle \invoke\ \ensuremath{\mathop{expr}\nolimits\/}, f(v_1, \ldots, v_n) \cdot \mathcal{C}, \sigma \rangle &\rightarrow_T \langle \ensuremath{\mathop{\mathbf{invoke}}\nolimits}, f(v_1, \ldots, v_n) \cdot \mathcal{C} \rangle\\ \langle \ensuremath{\mathop{expr}\nolimits\/}, C[[\ ]\ ;\ f(v_1, \ldots, v_n, \circleddash)], 
\sigma \rangle &\rightarrow_T \langle \epsilon, C[[\ ]\ ;\ f(v_1, \ldots, v_n, v)] \rangle &\Leftrightarrow&& \langle \invoke\ \ensuremath{\mathop{expr}\nolimits\/}, f(v_1, \ldots, v_n, \boxdot) \cdot \mathcal{C}, \sigma \rangle &\rightarrow_T \langle \ensuremath{\mathop{\mathbf{invoke}}\nolimits}, f(v_1, \ldots, v_n, v) \cdot \mathcal{C} \rangle \\&&&&&\quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle \ensuremath{\mathop{expr}\nolimits\/}, [\ ], \sigma \rangle &\rightarrow_T v &\Leftrightarrow&& \langle \invoke\ \ensuremath{\mathop{expr}\nolimits\/}, \varepsilon, \sigma \rangle &\rightarrow_T v \\&&&&&\quad\text{when $\langle \ensuremath{\mathop{expr}\nolimits\/},\sigma \rangle \rightarrow_e^\star v$}\notag\\ \langle Q, C[\ ], \sigma \rangle &\rightarrow_T \langle Q[x_i\setminus \sigma\ x_i], C[\ ] \rangle &\Leftrightarrow&& \langle Q, \mathcal{C}, \sigma \rangle &\rightarrow_T \langle Q[x_i\setminus \sigma\ x_i], \mathcal{C} \rangle \\&&&&&\quad\text{for every $x_i$ in $\mathop{\mathrm{dom}}\nolimits(\sigma)$}\notag\\ \notag\\ \langle Q\ ;\ f(v_1, \ldots, v_n), C[\ ] \rangle &\rightarrow_Q \langle Q, C[[\ ]\ ;\ f(v_1, \ldots, v_n)] \rangle &\Leftrightarrow&& \langle \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(v_1, \ldots, v_n)\ ;\ Q, \mathcal{C} \rangle &\rightarrow_Q \langle Q, f(v_1, \ldots, v_n) \cdot \mathcal{C} \rangle\\ \langle Q\ ;\ f(v_1, \ldots, v_n, F), C[\ ] \rangle &\rightarrow_Q \langle Q\ ;\ F, C[[\ ]\ ;\ f(v_1, \ldots, v_n, \circleddash)] \rangle &\Leftrightarrow&& \langle \ensuremath{\mathop{\mathbf{push}}\nolimits}\ f(v_1, \ldots, v_n, \boxdot)\ ;\ Q', \mathcal{C} \rangle &\rightarrow_Q \langle Q', f(v_1, \ldots, v_n, \boxdot) \cdot \mathcal{C} \rangle \\&&&&&\quad\text{when $Q' = (Q\ ;\ F)^\blacktriangle$}\\ \langle \epsilon, C[[\ ]\ ;\ f(v_1, \ldots, v_n)] \rangle &\rightarrow_Q \langle T, C[\ ], \sigma \rangle &\Leftrightarrow&& \langle 
\ensuremath{\mathop{\mathbf{invoke}}\nolimits}, f(v_1, \ldots, v_n) \cdot \mathcal{C} \rangle &\rightarrow_Q \langle T, \mathcal{C}, \sigma \rangle \\&&&&&\quad\text{when } f(x_1,\ldots,x_n) = T \text{ and }\sigma = \{x_i\mapsto v_i\}\notag \end{align*} \caption{Isomorphism between reduction rules}\label{fig:iso} \end{figure} \end{landscape}
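As an informal illustration of the $^\blacktriangledown$ mapping defined in the proof of Lemma~\ref{lemma:cps-iso}, it can be transcribed into a short Python sketch. The tuple encoding of terms below is hypothetical (the formal development is notation-only); only the recursive structure of the mapping, including the splitting $Q^\blacktriangledown = Q'\ ;\ F$ guaranteed by well-formedness, is taken from the definition above.

```python
# Toy transcription of the "down-triangle" mapping from well-formed CPS
# terms to CPS-convertible terms.  The tuple encoding is hypothetical.

def convert(t):
    """Map a CPS term (nested tuples) to a CPS-convertible term."""
    tag = t[0]
    if tag == "push":                      # push f(e1, ..., en[, hole]) ; Q
        _, (f, args, partial), q = t
        qc = convert(q)
        if not partial:                    # (push f(...) ; Q)v = Qv ; f(...)
            return ("seq", qc, ("call", f, args))
        # (push f(..., hole) ; Q)v = Q' ; f(..., F)  with  Qv = Q' ; F.
        # Well-formedness guarantees Q != invoke, hence Qv != empty:
        assert qc[0] == "seq"
        _, q_prefix, last = qc
        return ("seq", q_prefix, ("call", f, args + [last]))
    if tag == "invoke":                    # invoke^v = empty sequence
        return ("empty",)
    if tag == "assign":                    # (x := e ; T)v = x := e ; Tv
        _, x, e, body = t
        return ("assign", x, e, convert(body))
    if tag == "ite":                       # if-then-else converts pointwise
        _, e, t1, t2 = t
        return ("ite", e, convert(t1), convert(t2))
    if tag == "invoke_expr":               # (invoke e)v = e
        return ("tail", t[1])
    raise ValueError(f"not a well-formed CPS term: {t!r}")

# Example: push f(a, hole) ; push g(b) ; invoke  -->  (empty ; f(a, g(b)))
example = ("push", ("f", ["a"], True),
           ("push", ("g", ["b"], False), ("invoke",)))
```

The `assert` marks exactly the point where the (*) side condition of the proof is used: a partial application must be followed by a non-`invoke` continuation of the sequence.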
\section{Introduction} \label{sec:intro} Modeling galaxy formation in the cosmological context is one of the greatest challenges in astrophysics and cosmology today. In the past few decades, the broad contours of galaxy formation physics have been investigated and established \citep{2010gfe..book.....M,2012RAA....12..917S,2015ARA&A..53...51S}. In theory, cosmic structures grow by gravitational instability from the tiny initial quantum fluctuations generated during the inflationary epoch. Dark matter (DM) halos, defined as dark matter objects with densities hundreds of times the mean density of the Universe, form around the density peaks of the Universe. Halos grow by accreting surrounding smaller DM halos and diffuse matter, including DM and gas \citep{2009ApJ...707..354Z}. The gas is heated by shocks when accreted into a halo \citep{2006MNRAS.368....2D}. Through radiation, the hot gas loses energy and cools, and the cold gas spirals into the halo center to form a galaxy, which is typically a spiral. Through the hierarchical growth of the DM halo, surrounding galaxies can also be accreted into the halo and become its satellites. Some satellite galaxies, especially relatively massive ones, spiral into the center and coalesce with the central galaxy due to dynamical friction \citep{1994MNRAS.271..676L,2008ApJ...675.1095J}. A merger of a central disk galaxy with a satellite of comparable mass (say, a mass ratio $\geq 0.2$) may significantly change its internal structure, producing either an elliptical galaxy or a disk galaxy with a significant bulge \citep{2006ApJ...636L..81N, 2006ApJ...650..791C, 2011MNRAS.415.1783B}. Studies find that black hole (BH) mass is highly correlated with bulge mass \citep{2013ARA&A..51..511K}, so supermassive black holes (SMBHs) are expected to reside within elliptical galaxies and bulges.
The strong energy output and/or material outflow produced by an SMBH can heat and/or blow out the surrounding cold gas, thus suppressing the gas reservoir from which stars form and making the host galaxy look red \citep{2012ARA&A..50..455F}. Therefore, the bimodal distribution of galaxy color is highly correlated with the galaxy morphology distribution, with ellipticals being red and spirals being blue. To fully understand galaxy properties and distributions, one needs to consider all the complicated physical processes involved in galaxy formation and evolution mentioned above. Although the broad contours of the galaxy formation process are known, a fully predictive framework from first principles has yet to be established \citep{2017ARA&A..55...59N}. Physical models such as semi-analytic models \citep{1991ApJ...379...52W,1993MNRAS.264..201K,2005ApJ...631...21K,2013ApJ...767..122G} and hydrodynamic simulations \citep{2015MNRAS.446..521S,2018MNRAS.473.4077P,2019MNRAS.486.2827D} approximate the physics below their respective resolution scales to simulate the effects of supernovae, radiation pressure, multi-phase gas, black hole accretion, active galactic nucleus (AGN) feedback and metallicity evolution in galaxy formation. However, different approximations lead to different galaxy properties \citep{2008MNRAS.391..481S,2014ApJ...795..123L}, and considerable uncertainty remains. Empirical modeling such as the halo occupation distribution (HOD, \citet{1998ApJ...494....1J,2000MNRAS.318.1144P,2000ApJ...543..503M,2000MNRAS.318..203S,2002ApJ...575..587B,2003MNRAS.339.1057Y,2005ApJ...633..791Z,2015MNRAS.454.1161Z}) and abundance matching (AM, \citet{1998ApJ...506...19W,2010MNRAS.402.1796W,2013MNRAS.428.3121M,2019MNRAS.488.3143B}) uses significantly weaker priors, and the physical constraints come almost entirely from observations.
These models connect average galaxy properties such as occupation numbers and stellar mass to halos as a function of halo mass or halo circular velocity, and determine the parameters by fitting the observed properties of galaxies. Although more complex models have been attempted recently to incorporate properties of gas \citep{2015MNRAS.449..477P}, metallicity \citep{2016MNRAS.462..893R} and dust \citep{2018ApJ...854...36I} to compare with observations, the stellar-halo mass relation (SHMR) is still one of the most commonly used relations to model the galaxy-halo connection \citep{2010MNRAS.404.1111G,2010MNRAS.402.1796W,2013MNRAS.428.3121M}, in which larger halos host larger galaxies with a relatively tight scatter. Recent studies have found that galaxies with different properties, such as color, may have different SHMRs, which may indicate the existence of so-called galaxy assembly bias that causes the scatter in the average SHMR \citep{2010MNRAS.402.1942C,2013MNRAS.433..515W,2014MNRAS.443.3044Z,2015MNRAS.452.1958H, 2016MNRAS.457.3200M, 2021NatAs.tmp..138C}. Similarly, for the HOD, the galaxy occupation number may also depend on halo properties other than mass, such as halo concentration and environment \citep{2020MNRAS.493.5506H}. The best secondary parameter for halo characterization is still being sought in the development of HOD and AM. The stellar mass function (SMF) and galaxy clustering (GC) are the two most commonly used properties to constrain the parameters in HOD and AM. The measurement of these quantities usually relies on spectroscopic surveys with redshift information. In the past two decades, there has been significant progress in large spectroscopic surveys \citep{2000AJ....120.1579Y,2001MNRAS.328.1039C, 2003ApJ...592..728S, 2005A&A...439..845L, 2012ApJS..203...21A, 2012AJ....144..144B, 2014A&A...562A..23G, 2014PASJ...66R...1T, 2016arXiv161100036D, 2020ApJS..249....3A}.
In the local universe ($z_s\sim0$), large spectroscopic surveys, in particular the Two Degree Field Galaxy Redshift Survey (2dFGRS; \citet{2001MNRAS.328.1039C}) and the Sloan Digital Sky Survey (SDSS; \citet{2000AJ....120.1579Y}), have been used to measure the SMF and GC down to $10^{9.0}{\rm M_\odot}$ \citep{2001MNRAS.326..255C,2002MNRAS.332..827N,2006MNRAS.368...21L,2009MNRAS.398.2177L,2010ApJ...721..193P}, although the accuracy of the measurements, especially of the GC, is still limited by the survey volume for faint galaxies. At higher redshift ($z_s=0.5\sim1.0$), the DEEP2 Galaxy Redshift Survey \citep{2003SPIE.4834..161D}, the VIMOS-VLT Deep Survey (VVDS; \citet{2005A&A...439..845L}) and the VIMOS Public Extragalactic Redshift Survey (VIPERS; \citet{2014A&A...562A..23G}) have been used to successfully measure the SMF and GC for galaxies with $M_{*}>10^{10.0}{\rm M_\odot}$ \citep{2007A&A...474..443P,2008A&A...478..299M,2013ApJ...767...89M,2013A&A...557A..17M,2013A&A...558A..23D}. However, despite these huge efforts, measurements for fainter objects are still very difficult, and stellar mass limited samples are usually very small at even higher redshift. Fortunately, huge next-generation spectroscopic surveys are being constructed for cosmological studies at intermediate to high redshift \citep{2011arXiv1110.3193L,2014PASJ...66R...1T,2016arXiv161100036D}. Due to the limited wavelength coverage and sensitivity of the spectrographs, different populations of galaxies, such as emission line galaxies (ELGs), QSOs and Lyman break galaxies (LBGs), are targeted at different redshifts. These populations are all expected to trace the large-scale structure, or the cosmic web, so they can be used to extract information for cosmological studies. However, it is difficult to construct stellar mass limited samples from these surveys because of the target selections they adopt.
Compared to spectroscopic surveys, photometric surveys, which take images and measure broad-band photometry, are usually deeper and more complete in terms of stellar mass. However, without precise redshift measurements, the usefulness of photometric surveys is limited. One can attempt to infer photometric redshifts (photo-z) from the broad-band magnitudes in photometric surveys \citep{2006A&A...457..841I, 2009ApJ...690.1236I, 2014MNRAS.445.1482S, 2020arXiv200301511N}. Photo-z has been used to measure the SMF \citep{2006A&A...459..745F,2008ApJ...675..234P,2010ApJ...709..644I,2012A&A...545A..23B,2021MNRAS.503.4413M} and GC \citep{2016MNRAS.455.4301C,2020ApJ...904..128I, 2021SCPMA..6489811W}, though one has to be very careful about the errors and systematics of photo-z, especially for faint galaxies. Next-generation large and deep multi-band photometric surveys, such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST; \citet{2019ApJ...873..111I}) and Euclid \citep{2011arXiv1110.3193L}, are expected to fairly sample galaxies to an unprecedented faint limit. As mentioned above, photometric and spectroscopic surveys both have their advantages and disadvantages. Future large spectroscopic surveys will provide precise redshift information, but only for bright objects and/or selected (or biased) populations, while photometric surveys are deeper and more complete but lack accurate redshift measurements. So far, the measurements of the SMF and GC have come mainly from spectroscopic surveys. Thus, studies based on the SMF and GC, such as HOD and AM, have focused on the local universe or on the massive end at higher redshift, resulting in a poor understanding of the faint end and of the redshift evolution. To study small and faint objects, quite a few studies have attempted to combine a spectroscopic survey with a photometric survey, which can reach several magnitudes fainter than pure spectroscopic surveys.
With the cosmic web traced by objects in spectroscopic surveys, the properties and distributions of photometric objects around it can be studied using photometric surveys. Results can be achieved by stacking satellite and neighbor counts around a large sample of central galaxies, with foreground and background sources subtracted statistically \citep{1987MNRAS.229..621P,1994MNRAS.269..696L,2011ApJ...734...88W,2012MNRAS.427..428G,2013arXiv1303.4722M,2015APh....63...81N,2016MNRAS.459.3998L}. However, most previous studies only use the luminosity and color of the photometric sample, which is not sufficient for quantitative HOD and AM studies, since these require physical properties of galaxies such as stellar mass, rest-frame color and star formation rate (SFR). In this paper, we provide a method to measure the distributions and properties (luminosity, mass, color, SFR, morphology, etc.) of Photometric objects Around Cosmic webs (PAC) traced by objects in a spectroscopic survey. The basic idea is that, for a spectroscopic source $i$ at redshift $z_{s,i}$, only those objects in the photometric sample at around $z_{s,i}$ are correlated with the source $i$ and share a similar redshift. Thus, for the source $i$, we calculate the physical properties of all sources in the whole photometric sample by assuming they are all at the redshift $z_{s,i}$. Through the cross-correlation of the photometric and spectroscopic samples, foreground and background galaxies with wrong redshift information can be canceled out with the help of random samples, and the true distribution of photometric sources with specified properties around the spectroscopic sources can be obtained. Because both the spectroscopic and photometric samples are huge in the next-generation surveys, we will develop a method to speed up the computation of the physical properties. We introduce the details of PAC in Section \ref{sec:method}.
In Section \ref{sec:appli}, we apply PAC to observations. The measurement is modeled in an N-body simulation using AM in Section \ref{sec:simu}, and a brief conclusion is given in Section \ref{sec:con}. We adopt a cosmology with $\Omega_m = 0.268$, $\Omega_{\Lambda} = 0.732$ and $H_0 = 71{\rm \ km/s/Mpc}$ throughout the paper. \section{METHODOLOGY}\label{sec:method} In this section, we introduce a method for estimating $\bar{n}_2w_p(r_p)$ from $w_{12}(\vartheta)$, where $w_p(r_p)$ and $w_{12}(\vartheta)$ are the projected cross-correlation function (PCCF) and the angular cross-correlation function (ACCF) between a given set of spectroscopically identified galaxies and a large sample of photometric galaxies, and $\bar{n}_2$ is the mean number density of the photometric galaxies. The quantity $\bar{n}_2w_p(r_p)$ has a clear physical meaning: it measures the {\it true} excess of photometric objects around the spectroscopic objects in projection on the sky. If we choose photometric galaxies at a given stellar mass, the measurement over a range of stellar mass gives information on the stellar mass function and on the clustering as a function of stellar mass, which are the key ingredients for understanding the connection of galaxies to dark matter halos. Therefore, we extend this method to statistically measuring the distribution of photometric galaxies with specified physical properties (e.g. stellar mass, SFR, color) around spectroscopically identified galaxies. \subsection{Estimating $\bar{n}_2w_p(r_p)$ from $w_{12}(\vartheta)$} Throughout this section, we call the spectroscopic sample population 1 and the photometric sample population 2. Assuming an object in population 1 is at distance $r_1$, the number of objects $dN_2$ in population 2 within a solid angle element $d\Omega_2$ in the direction $\bm{r}_2$ is: \begin{equation} dN_2 = \int n_2(\bm{r}_2)r_2^2dr_2d\Omega_2.
\end{equation} where $n_2(\bm{r}_2)$ is the number density of population 2 at position $\bm{r}_2$. The expected number of population 2 objects around a population 1 object is: \begin{align} \langle dN_2\rangle &= \int \langle n_2(\bm{r}_2)\rangle_{1}r_2^2dr_2d\Omega_2 \notag\\ &= d\Omega_2\int \bar{n}_2[1+\xi_{12}(r_{12})]r_2^2dr_2 \notag\\ &\approx d\Omega_2[\bar{S}_2+\bar{n}_2w_p(r_1\theta)r_1^2] \,\,. \end{align} Here $\bar{S}_2$ is the mean angular surface density of population 2, $\xi_{12}$ is the cross-correlation function (CCF) between the two populations, and $w_p(r_p=r_1\theta)\equiv\int \xi_{12}(\sqrt{r_p^2+\pi^2})d\pi$ is the projected cross-correlation function. The approximation holds if $\theta$ is small. Then we have: \begin{align} \bar{n}_2w_p(r_1\theta)r_1^2 &= \frac{\langle dN_2\rangle}{d\Omega_2}-\bar{S}_2 \notag\\ &= \bar{S}_2\frac{\langle dN_2\rangle-\langle dR_2\rangle}{\langle dR_2\rangle} \notag\\ &\equiv \bar{S}_2\frac{\langle D_1D_2\rangle-\langle D_1R\rangle}{\langle D_1R\rangle} \notag\\ &= \bar{S}_2w_{12}(\vartheta)\,\,.\label{eq:3} \end{align} Here $\langle D_1D_2\rangle$ and $\langle D_1R\rangle$ are the cross pair counts between population 1 and population 2, and between population 1 and the random sample of population 2. Hence, with our method, $\bar{n}_2w_p(r_1\theta)r_1^2$ can be estimated from $\bar{S}_2w_{12}(\vartheta)$ with only the redshift of population 1. The quantity $\bar{n}_2w_p(r_1\theta)$ measures the excess surface density of population 2 objects around population 1 objects. \subsection{Physical properties of photometric sources around spectroscopically identified sources} To obtain the physical properties of galaxies, distance or redshift information is needed. Unfortunately, wide and deep photometric surveys do not have measured redshifts for most galaxies. The photometric redshift $z_p$ is often used to approximate the distance of galaxies, but the errors of $z_p$ are very difficult to estimate, especially for faint galaxies.
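For concreteness, the estimator of Equation \ref{eq:3} can be sketched in a few lines of Python; the pair counts $\langle D_1D_2\rangle$, $\langle D_1R\rangle$ and the surface density $\bar{S}_2$ are assumed to have been measured elsewhere (e.g. with a standard pair-counting code):

```python
import numpy as np

def nbar2_wp_r1sq(d1d2, d1r, s2_bar):
    """Eq. (3): nbar_2 * w_p(r_1*theta) * r_1^2 = S2_bar * w_12(theta).

    d1d2, d1r : cross pair counts <D1D2> and <D1R> per angular bin
    s2_bar    : mean angular surface density S2_bar of population 2
    """
    d1d2 = np.asarray(d1d2, dtype=float)
    d1r = np.asarray(d1r, dtype=float)
    w12 = (d1d2 - d1r) / d1r          # angular cross-correlation w_12
    return s2_bar * w12
```

For instance, with twice as many cross pairs as expected from a random distribution, $w_{12}=1$ and the excess equals $\bar{S}_2$.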
In our method, we do not use any information from $z_p$. Instead, since those photometric sources that are correlated with the spectroscopically identified galaxies must share the same redshift, we can use the spectroscopic redshift to compute the physical properties of the photometric sources around each spectroscopic object. As $\bar{n}_2w_p(r_1\theta)$ is just the number excess of neighbors around population 1, we can extend our method to estimating the distributions and properties of Photometric objects Around Cosmic webs (PAC) delineated by a spectroscopic survey. Suppose we have only a spectroscopic sample (population 1) and a large photometric sample, and we want to calculate $\bar{n}_2w_p(r_1\theta)$ between the sources in population 1 at a {\it single} redshift $z_{s}$ and the sources with physical property X in the photometric sample. To select population 2, we assume the whole photometric sample is at redshift $z_{s}$ and calculate the physical properties using SED fitting. As one may note, the calculation is correct only for sources around population 1; the physical properties calculated are incorrect for foreground and background sources. However, since foreground and background sources along the line-of-sight (LOS) to population 1 are distributed {\it statistically in the same way} as those along a random LOS, the foreground and background will be canceled out when we calculate $w_{12}(\vartheta)$ (see \citet[Sec 3.3.1]{2011ApJ...734...88W} for a more rigorous derivation). Therefore, population 2 is simply selected as the sources with physical property X in the photometric sample, even though we assume the whole sample is at redshift $z_s$ in the calculation. With this method, we can study the distribution of satellites and neighbors with specified physical properties around spectroscopically identified galaxies. \subsection{Spectroscopic sample with a redshift distribution} So far, we have assumed that all the sources in population 1 have the same redshift.
However, in reality, spectroscopic samples all have a redshift distribution. If the redshift range is relatively narrow and the evolution of the universe can be neglected, $\bar{n}_2w_p(r_p)$ will not vary much, while the change of $r_1$ and $\theta=r_p/r_1$ from one spectroscopic source to another may not be negligible. Thus, $\bar{n}_2w_p(r_p)$ is a better statistic than $\bar{n}_2w_p(r_p)r_1^2$. We re-derive Equation \ref{eq:3}: \begin{align} \bar{n}_2w_p(r_1\theta) &= \frac{\langle dN_2\rangle/r_1^2}{d\Omega_2}-\frac{\bar{S}_2}{r_1^2} \notag\\ &= \frac{\langle dN_2\rangle/r_1^2-\langle dR_2\rangle/r_1^2}{\langle dR_2\rangle/r_1^2}\frac{\bar{S}_2}{r_1^2} \notag\\ &\equiv \frac{\langle D_1D_2\rangle_w-\langle D_1R\rangle_w}{\langle D_1R\rangle_w}\frac{\bar{S}_2}{r_1^2} \notag\\ &= \frac{\bar{S}_2}{r_1^2}w_{12,weight}(\vartheta)\,\,. \end{align} Here $\langle D_1D_2\rangle_w$ and $\langle D_1R\rangle_w$ are the weighted cross pair counts between population 1 and population 2, and between population 1 and the random sample. Given a set of sources in population 1 with positions $\{\bm{r}_{1,i}\}_{i=1,N_1}$, in order to measure the PCCF at the projected separation $r_p$, we measure the number $dN_{2,i}$ of population 2 neighbors within the angular separation $\theta \pm d\theta/2$ around object $i$ in population 1. Then we measure the weighted count: \begin{equation} \langle D_1D_2\rangle_w = \sum_{i=1,N_1}dN_{2,i}/r_{1,i}^2. \end{equation} Similarly, we can get the weighted count for the random sample: \begin{equation} \langle D_1R\rangle_w = \sum_{i=1,N_1}dR_{2,i}/r_{1,i}^2. \end{equation} Alternatively, we can change the angular separation used to measure the number counts so that $r_p=r_{1,i}\theta_{i}$ is the same for each galaxy in population 1, and then sum the counts around each galaxy without weighting. These two methods are equivalent.
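A minimal sketch of the weighted counts, assuming the neighbor counts $dN_{2,i}$ in a given angular bin have already been tallied for every population 1 object:

```python
import numpy as np

def weighted_count(dn2, r1):
    """<D1D2>_w = sum_i dN_{2,i} / r_{1,i}^2 for one angular bin.

    dn2 : neighbor counts dN_{2,i} around each population-1 object i
    r1  : comoving distances r_{1,i} of the population-1 objects
    """
    dn2 = np.asarray(dn2, dtype=float)
    r1 = np.asarray(r1, dtype=float)
    return np.sum(dn2 / r1**2)
```

The same function applies verbatim to the counts $dR_{2,i}$ around random points, giving $\langle D_1R\rangle_w$.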
To minimize the effects of complex survey geometries, we adopt an estimator analogous to the Landy-Szalay estimator for the two-point auto-correlation function \citep{1993ApJ...412...64L}: \begin{equation} w_{12,weight}(\theta) = \frac{\langle D_1D_2\rangle_w-\langle D_1R_2\rangle_w-\langle D_2R_1\rangle_w+\langle R_1R_2\rangle_w}{\langle R_1R_2\rangle_w}, \end{equation} where $R_1$ and $R_2$ are the random points for the spectroscopic and photometric samples respectively. The above method accounts for the variation of $\theta$ with redshift for sources in population 1, while $r_1$ in $\bar{S}_2w_{12,weight}/r_1^2$ still varies with redshift. Therefore, we divide population 1 into narrower redshift bins to reduce the error from the change of $r_1$. For a population 1 with a redshift distribution, we can divide it into $m$ redshift bins; $m$ can be as large as desired, provided there are sufficient galaxies in each bin. If the mean redshifts of these bins are $\{z_{s,j}\}_{j=1,m}$, we calculate the physical properties for the whole photometric catalog and select a population 2 for each $z_{s,j}$. Then, we calculate $\bar{S}_{2,j}w_{12,weight,j}(\vartheta)/r_{1,j}^2$ for each redshift bin, and the mean $\bar{n}_2w_p(r_1\theta)$ of population 1 can be obtained by averaging over these redshift bins: \begin{equation} \bar{n}_2w_p(r_1\theta) = \frac{1}{m}\sum_{j=1}^{m}\frac{\bar{S}_{2,j}}{r_{1,j}^2}w_{12,weight,j}(\vartheta).\label{equ:nw} \end{equation} \begin{figure*} \plotone{complete.pdf} \caption{The stellar mass--$z$-band magnitude relation for HSC deep field galaxies with photo-z between 0.5 and 0.7. The blue dots are galaxies in the sample and the orange line shows the $95\%$ completeness limit $C_{95}(M_{*})$ of the $z$-band magnitude.
\label{fig:f1}} \end{figure*} \section{APPLICATIONS TO THE CMASS AND HSC SAMPLES}\label{sec:appli} In this section, we apply PAC to the CMASS spectroscopic sample from the Baryon Oscillation Spectroscopic Survey (BOSS; \citet{2012ApJS..203...21A}; \citet{2012AJ....144..144B}) and the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP; \citet{2019PASJ...71..114A}) PDR2 wide field photometric sample. \begin{figure*} \plotone{ACCF.pdf} \caption{Our measurement of $\bar{n}_2w_p(r_p)$ for eleven mass bins. Each panel shows the results for one mass bin of the photometric sample. Colored dots with error bars are the results from observation. Lines with shadows are the median results and $1\sigma$ errors of our modeling from the abundance matching (AM) using the MCMC sampling method. \label{fig:f2}} \end{figure*} \begin{figure} \plotone{redshift_bins.pdf} \caption{Comparison of the $\bar{n}_2w_p(r_p)$ measured with different divisions of redshift bins (as indicated in the figure) for the spectroscopic sample. The result is insensitive to the number of redshift bins used.} \label{fig:bins} \end{figure} \subsection{Observational data} We use the HSC-SSP PDR2 wide field photometric catalog \citep{2019PASJ...71..114A} as the photometric sample. To obtain more accurate physical properties, we choose sources in the footprints observed in all five bands ($grizy$) to ensure that there are enough bands for SED fitting. Sources around bright objects are masked using the {\texttt{\{grizy\}\_mask\_pdr2\_bright\_objectcenter}} flag provided by the HSC collaboration \citep{2018PASJ...70S...7C}, and we use the {\texttt{\{grizy\}\_extendedness\_value}} flag to exclude stars from the sample. Finally, there are around $2\times10^8$ galaxies in our photometric sample. We also construct a random point catalog ($100\ {\rm arcmin^{-2}}$) with the same selection criteria from the HSC database for the ACCF analysis. The effective area calculated from the random point number is $501\ {\rm deg^2}$.
To ensure that the distribution of foreground and background galaxies is the same for all LOS directions, the easiest way is to make sure that the sample is complete for galaxies with the specified physical properties at the required redshift. Since the survey depth is not uniform across the HSC survey, for a low stellar mass limit some patches remain complete while others may not. Therefore, we use the HSC-SSP PDR2 deep field catalog and the {\texttt{DEmP}} photo-z \citep{2020arXiv200301511N} to study the completeness of galaxies at different stellar masses. We select galaxies with photo-z between $0.5$ and $0.7$ and calculate their physical properties from the five bands $grizy$ using the SED fitting code {\texttt{CIGALE}} \citep{2019A&A...622A.103B}. The stellar population synthesis models of \citet{2003MNRAS.344.1000B} are used to compute the physical properties of galaxies. In these calculations, the \citet{2003PASP..115..763C} initial mass function is adopted. We assume a delayed star formation history $\phi(t)\propto t\exp(-t/\tau)$, where $\tau$ is taken from $10^7$ to $1.258\times 10^{10}$ yr with an equal logarithmic interval $\Delta\lg\tau=0.1$. Three metallicities, $Z/Z_{\odot}=0.4$, 1, and 2.5, are considered, where $Z_{\odot}$ is the metallicity of the Sun. We use the extinction law of \citet{2000ApJ...533..682C} with dust reddening in the range $0<E(B-V)<0.5$. As shown in Figure \ref{fig:f1}, stellar mass shows a clear correlation with $z$-band magnitude at $0.5<z_p<0.7$, so a magnitude limit can be used to construct a complete sample for a specified stellar mass. In particular, we define the $z$-band completeness limit $C_{95}(M_{*})$ such that $95\%$ of the galaxies with stellar mass $M_{*}$ are brighter than $C_{95}(M_{*})$ in the $z$ band (orange line in Figure \ref{fig:f1}).
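The completeness limit can be computed, for instance, as the 95th percentile of the $z$-band magnitudes in bins of stellar mass ($95\%$ of the galaxies are brighter, i.e. have a smaller magnitude). The sketch below uses a hypothetical binning and variable names of our own; it is not the exact procedure of the pipeline:

```python
import numpy as np

def c95_limit(log_mass, z_mag, mass_bin_edges):
    """Sketch of the z-band completeness limit C95(M*): in each
    stellar-mass bin, the magnitude below which 95% of galaxies lie
    (the 95th percentile of the z-band magnitude distribution)."""
    log_mass = np.asarray(log_mass)
    z_mag = np.asarray(z_mag)
    limits = []
    for lo, hi in zip(mass_bin_edges[:-1], mass_bin_edges[1:]):
        sel = (log_mass >= lo) & (log_mass < hi)
        limits.append(np.percentile(z_mag[sel], 95.0))
    return np.array(limits)
```

A footprint patch whose limiting magnitude is deeper than $C_{95}(m_l)$ can then be kept for the mass bin $[m_l,m_h]$, as described above.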
Therefore, for a stellar mass bin $[m_l,m_h]$ of interest, we only use the data in the survey footprints deeper than $C_{95}(m_l)$, where the depth is defined as the $z$-band limiting magnitude for a $10\sigma$ detection of a point source. We stress again that the photo-z is used only here to find the completeness limit; it is not used in the following clustering analysis. We use the CMASS sample in BOSS, which consists of massive galaxies with $i<19.9$ mag, as the spectroscopic sample (population 1). We first select galaxies in the redshift range $0.5<z_s<0.7$, and then cross match them with the HSC photometric sample constructed above and the DESI Legacy Imaging Surveys DR9 catalog \citep{2019AJ....157..168D}. After that, we obtain the magnitudes in seven bands $grizyW1W2$ for each CMASS galaxy in the footprint of HSC. We calculate the physical properties of these galaxies using the same SED templates but with the seven bands $grizyW1W2$. As noted in previous studies \citep{2013MNRAS.435.2764M,2016MNRAS.457.4021L,2018ApJ...858...30G}, the CMASS sample is complete down to stellar mass $M_{*}\approx 10^{11.3}{\rm M_\odot}$. Therefore, we adopt a stellar mass cut at $10^{11.3} {\rm M_\odot}$ in this study. Moreover, we only consider central galaxies in the spectroscopic sample, so we select galaxies that do not have more massive neighbors within a projected distance of $1\ {\rm Mpc}\ h^{-1}$. Finally, there are $8028$ massive ($>10^{11.3}{\rm M_\odot}$) central galaxies left in the spectroscopic sample. \begin{figure} \plotone{subhalo.pdf} \caption{Subhalo mass ($m_{peak}$) function in halos with $M_{200}=10^{14.0}{\rm M_\odot}\ h^{-1}$. Dots show the results from the CosmicGrowth simulation after correction for subhalos with fewer than $20$ particles.
Solid line shows the result from \citet{2016MNRAS.457.1208H} based on high resolution zoom-in simulations.} \label{fig:f3} \end{figure} \begin{figure*} \plotone{mcmc.pdf} \caption{Constraints on the parameters of the SHMR model using MCMC sampling. The central values are the medians and the errors denote the $16$th--$84$th percentiles after the other parameters are marginalized over. \label{fig:f4}} \end{figure*} \begin{deluxetable*}{cccccc} \label{tab:t1} \tablenum{1} \tablecaption{Posterior PDFs of the parameters from MCMC for the SHMR model.} \tablewidth{0pt} \tablehead{ \colhead{model}&\colhead{$M_0$}&\colhead{$\alpha$}&\colhead{$\beta$}&\colhead{$k$}&\colhead{$\sigma$}\\ \colhead{}&\colhead{(${\rm M_\odot}\ h^{-1}$)}&\colhead{}&\colhead{}&\colhead{(${\rm M_\odot}$)}&\colhead{} } \startdata Unified&$10^{11.65^{+0.24}_{-0.19}}$&$0.33^{+0.11}_{-0.11}$&$2.39^{+0.49}_{-0.34}$&$10^{10.20^{+0.17}_{-0.15}}$&$0.22^{+0.04}_{-0.05}$\\ \enddata \end{deluxetable*} \subsection{PAC for different mass bins} We first divide the spectroscopic sample into three mass bins $[10^{11.3},10^{11.5},10^{11.7},10^{11.9}]\ {\rm M_\odot}$. In each mass bin, galaxies are divided into four redshift bins $[0.5,0.55,0.6,0.65,0.7]$. Then, we perform SED fitting for the whole photometric sample at redshifts 0.525, 0.575, 0.625 and 0.675, respectively. After that, the photometric sample is also divided into four mass bins $[10^{9.0},10^{9.5},10^{10.0},10^{10.5},10^{11.0}]\ {\rm M_\odot}$ at each redshift. Using Equation \ref{equ:nw}, $\bar{n}_2w_p(r_p)$ for twelve mass bins (three spectroscopic $\times$ four photometric) in the redshift range $0.5<z_s<0.7$ can be obtained. During the calculation, only footprints deeper than $C_{95}(m_l)$ are used for each mass bin. \begin{figure*} \plottwo{SHMR.pdf}{SMF.pdf} \caption{Left: The mean SHMR and its error from our work ($0.5<z_s<0.7$) compared with the results from \citet{2010MNRAS.402.1796W} ($z_s\sim0.8$) and \citet{2013MNRAS.428.3121M} ($z_s=0.6$).
Blue line with shadow shows the SHMR from our work. Green and red lines show the results from \citet{2010MNRAS.402.1796W} and \citet{2013MNRAS.428.3121M}. Right: SMF from AM compared to observations. Blue dashed line with shadow shows the results from our work. Orange dots show the measurement at the high mass end from our CMASS spectroscopic sample. Green triangles and red diamonds show the measurements from VIPERS ($0.50<z_s<0.60$; \citet{2013A&A...558A..23D}) and PRIMUS ($0.50<z_s<0.65$; \citet{2013ApJ...767...50M}) with spectroscopic redshift, and purple squares show the results from \citet{2021MNRAS.503.4413M} ($0.25<z_p<0.75$) measured in various deep photometric surveys with photo-z ($z_p$).\label{fig:f5}} \end{figure*} To evaluate the statistical error, we use the Jackknife resampling technique \citep{1982jbor.book.....E}. The mean value and the error of the mean of $\bar{n}_2w_p(r_p)$ for each mass bin can be calculated as: \begin{equation} \bar{n}_2w_p(r_p) = \frac{1}{N_{\rm sub}}\sum_{k=1}^{N_{\rm sub}}\bar{n}_{2,k}w_{p,k}(r_p) \end{equation} \begin{equation} \sigma^2 = \frac{N_{\rm sub}-1}{N_{\rm sub}}\sum_{k=1}^{N_{\rm sub}}(\bar{n}_{2,k}w_{p,k}(r_p)-\bar{n}_2w_p(r_p))^2, \end{equation} where $N_{\rm sub}$ is the number of jackknife realizations and $\bar{n}_{2,k}w_{p,k}(r_p)$ is the excess of projected density of the $k$th realization. In this work, we adopt $N_{\rm sub}=50$. The results are shown in Figure \ref{fig:f2} with colored dots. Each panel shows the results for the same mass bin of the photometric sample. Since $\bar{n}_2$ is the same within a panel, the difference in $\bar{n}_2w_p(r_p)$ reflects the difference in $w_p(r_p)$ between the spectroscopic mass bins. Though the footprint becomes smaller for the lowest mass bin $10^{9.0}{\rm M_\odot}$, the clustering signal is still quite good for all three spectroscopic subsamples, so we can study properties such as the SMF and SHMR down to the low mass end.
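As a minimal illustration (not the authors' pipeline), the jackknife estimator above can be sketched in Python, with \texttt{samples} holding the $N_{\rm sub}$ leave-one-region-out measurements of $\bar{n}_2w_p$ on a grid of $r_p$:

```python
import numpy as np

def jackknife_mean_error(samples):
    """Jackknife mean and error of the mean (the two equations above).

    `samples` has shape (N_sub, N_rp): one row per leave-one-region-out
    realization of n2*wp(rp), one column per r_p bin.
    """
    samples = np.asarray(samples, dtype=float)
    n_sub = samples.shape[0]
    mean = samples.mean(axis=0)
    # sigma^2 = (N_sub - 1)/N_sub * sum_k (x_k - mean)^2
    var = (n_sub - 1.0) / n_sub * ((samples - mean) ** 2).sum(axis=0)
    return mean, np.sqrt(var)
```

The prefactor $(N_{\rm sub}-1)/N_{\rm sub}$ accounts for the strong correlation between jackknife realizations, which differ only by one excluded region.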
To test the robustness of PAC with respect to the redshift binning, we compare the results when different numbers of redshift bins are used to divide the spectroscopic subsample. As we can see from Figure \ref{fig:bins}, for a narrow redshift range as in our study ($0.5<z<0.7$), the measurement is nearly independent of the number of redshift bins used, indicating that our algorithm is robust. \section{Abundance matching with N-body simulation}\label{sec:simu} With the density and clustering information of galaxies, we can study the galaxy–halo connection using N-body simulations. HOD and AM are the two most commonly used methods for populating dark matter halos with galaxies. We follow \citet{2010MNRAS.402.1796W} and use AM to study the galaxy–halo relation based on $\bar{n}_{2}w_{p}(r_p)$ for the different mass bins obtained in the observation. As we will show, the stellar-halo mass relation (SHMR), the stellar mass function (SMF) and the conditional stellar mass function (CSMF) for satellites can be inferred from the AM results for galaxies in a wide range of stellar mass ($10^{9.0}-10^{12.0}{\rm M_\odot}$). \subsection{CosmicGrowth Simulation} We use the CosmicGrowth Simulation \citep{2019SCPMA..6219511J} for our studies. CosmicGrowth is a grid of high resolution N-body simulations run in different cosmologies using an adaptive parallel P$^3$M code \citep{2002ApJ...574..538J}. We use the $\Lambda$CDM simulation with cosmological parameters $\Omega_m = 0.268$, $\Omega_{\Lambda} = 0.732$ and $\sigma_8 = 0.831$. The box size is $600\ Mpc\ h^{-1}$ with $3072^3$ dark matter particles and softening length $\eta = 0.01\ Mpc\ h^{-1}$. Groups are identified with the Friends-of-Friends algorithm with a linking length $0.2$ times the mean particle separation. The halos are then processed with HBT+ \citep{2012MNRAS.427.2437H, 2018MNRAS.474..604H} to obtain subhalos and their evolution histories.
We use the catalog of Snapshot 83 at redshift about 0.57 to compare with the observation. We also use the fitting formula in \citet{2008ApJ...675.1095J} to evaluate the merger timescale for subhalos with fewer than $20$ particles (including orphans), which may be unresolved, and abandon those that have already merged into central subhalos. In Figure \ref{fig:f3}, we compare our subhalo mass function in halos with $M_{200}=10^{14.0}{\rm M_\odot} h^{-1}$ to the universal subhalo mass function of \citet{2016MNRAS.457.1208H}, who used high resolution zoom-in simulations (mass resolution $\sim10^{3}{\rm M_\odot} h^{-1}$ for the highest one) and carefully corrected for resolution effects. In this comparison, halos are defined as objects with radius $R_{200}$ within which the average density equals 200 times the critical density of the universe, and the subhalo mass is defined as the peak halo mass $m_{peak}$ over its history, to be consistent with \citet{2016MNRAS.457.1208H}. Our subhalo mass function is in good agreement with \citet{2016MNRAS.457.1208H} down to $10^{10.5}{\rm M_\odot} h^{-1}$, which is sufficient for this study. \subsection{Abundance matching} The SHMR can be described by a double power-law formula: \begin{equation} M_{*} = \left[\frac{2}{(\frac{M_{acc}}{M_0})^{-\alpha}+(\frac{M_{acc}}{M_0})^{-\beta}}\right]k\,, \end{equation} where $M_{acc}$ is defined as the virial mass $M_{vir}$ of the halo at the time when the galaxy was last the central dominant object. We use the fitting formula in \citet{1998ApJ...495...80B} to find $M_{vir}$. The scatter in $\log(M_*)$ at a given $M_{acc}$ is described by a Gaussian function of width $\sigma$. We use the same set of parameters for centrals and satellites (unified model) as in many studies \citep{2010MNRAS.402.1796W, 2019MNRAS.488.3143B}. Once the parameters $\{M_0,\alpha,\beta,k,\sigma\}$ are fixed, galaxies can be assigned to each dark matter halo.
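As an illustrative sketch (not the authors' code), the double power-law SHMR with lognormal scatter can be written as follows; the default parameter values are the posterior medians from Table \ref{tab:t1} and are used here purely as defaults (note that $M_0$ is tabulated in ${\rm M_\odot}\ h^{-1}$ while $k$ is in ${\rm M_\odot}$):

```python
import numpy as np

def shmr_mstar(m_acc, m0=10**11.65, alpha=0.33, beta=2.39, k=10**10.20,
               sigma=0.22, rng=None):
    """Stellar mass from the double power-law SHMR.

    Returns the median M* = 2k / [(M/M0)^-alpha + (M/M0)^-beta]; if an
    np.random.Generator `rng` is supplied, Gaussian scatter of width
    `sigma` dex is added to log10(M*) at fixed M_acc.
    """
    m_acc = np.asarray(m_acc, dtype=float)
    ratio = m_acc / m0
    med = 2.0 * k / (ratio ** (-alpha) + ratio ** (-beta))
    if rng is None:
        return med
    return 10 ** (np.log10(med) + rng.normal(0.0, sigma, med.shape))
```

By construction $M_*=k$ at $M_{acc}=M_0$, and the relation steepens to the $\beta$ power law below $M_0$ and flattens to the $\alpha$ power law above it.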
To compare $\bar{n}_2w_p(r_p)$ with the observation, we define $\chi^2$ as: \begin{equation} \chi^2 = \frac{1}{N_p}\sum_{N_p}\left[\frac{\log(\bar{n}_2w_p(r_p))_{sim}-\log(\bar{n}_2w_p(r_p))_{ob}}{\sigma(\log(\bar{n}_2w_p(r_p))_{ob})}\right]^2\,, \end{equation} where $N_p$ is the total number of points over which $\bar{n}_2w_p(r_p)$ is compared. We only consider the radial range $0.1<r_p<10\ Mpc\ h^{-1}$, in order to avoid the deblending problem of the HSC catalog at smaller $r_p$ \citep{2021ApJ...919...25W} and the large errors at $r_p>10\ Mpc\ h^{-1}$. To perform a maximum likelihood analysis, we use the Markov chain Monte Carlo (MCMC) sampler {\texttt{emcee}} \citep{2013PASP..125..306F}. Posterior PDFs of the parameters of the SHMR model from MCMC are shown in Figure \ref{fig:f4} and Table \ref{tab:t1}, and the corresponding $\bar{n}_2w_p(r_p)$ and errors for each mass bin are shown by solid lines with shadows in Figure \ref{fig:f2}. The fit is overall good for all mass bins in our samples and all the parameters are well constrained. \begin{figure} \plotone{CSMF.pdf} \caption{CSMF of satellites for centrals of different masses.} \label{fig:csmf} \end{figure} \subsection{SHMR and SMF} Having derived the parameters, we can calculate the SHMR and SMF at redshift $0.5\sim0.7$ for the stellar mass range $10^{9.0}\sim10^{12.0}{\rm M_\odot}$ covered by our observational samples. In the left panel of Figure \ref{fig:f5}, we compare the SHMR from our results with other works. Blue line with shadow shows the SHMR from our work. Green line shows the unified SHMR model from \citet{2010MNRAS.402.1796W}, obtained by fitting the SMF and GC at $z_s\sim0.8$ from VVDS \citep{2007A&A...474..443P,2008A&A...478..299M}.
\citet{2013MNRAS.428.3121M} use a redshift-dependent parametrization of the SHMR and constrain the parameters by fitting the SMFs from SDSS \citep{2009MNRAS.398.2177L}, Spitzer \citep{2008ApJ...675..234P} and Wide Field Camera 3 (WFC3) \citep{2012A&A...538A..33S} varying from $z_s\sim4$ to the present. We show their results at $z_s=0.6$ with the red line in the figure. The SHMRs from our work and from other works are in good agreement with each other. The SMF from our AM model and other observational measurements are shown in the right panel of Figure \ref{fig:f5}. Blue dashed line with shadow shows the results from the AM of our work. Orange dots show our own measurement from the CMASS spectroscopic sample at the high mass end. Green triangles and red diamonds show the measurements from VIPERS ($0.50<z_s<0.60$; \citet{2013A&A...558A..23D}) and the PRism MUlti-object Survey (PRIMUS; $0.50<z_s<0.65$; \citet{2013ApJ...767...50M}) with spectroscopic redshift, and purple squares show the results from \citet{2021MNRAS.503.4413M} ($0.25<z_p<0.75$), who measured the SMF using various deep photometric surveys (UKIDSS Ultra Deep Survey (UDS), COSMOS and CFHTLS-D1) with photo-z. At the high mass end, the results from AM and the various observational measurements are consistent. At the low mass end, however, the discrepancy between observations is still significant (nearly a factor of two between \citet{2013A&A...558A..23D} and \citet{2013ApJ...767...50M}), which has been exhaustively discussed in the literature \citep{2006MNRAS.368...21L,2007A&A...474..443P,2013A&A...558A..23D}. Since faint objects are hard to detect and the survey areas of high redshift deep surveys are usually small, completeness, selection effects and cosmic variance should be carefully considered. Different weighting methods to compensate for incompleteness, and the different densities of the regions that the surveys observed, can produce large differences in the SMF.
Therefore, the faint end of the SMF at higher redshift is still very hard to measure, for which PAC is a very promising method that combines the advantages of both photometric and spectroscopic surveys. Interestingly, the SMF from our work using PAC and AM is in good agreement with the very recent measurement of \citet{2021MNRAS.503.4413M} down to $10^{9.0}{\rm M_\odot}$. \subsection{CSMF of satellites} The CSMF of satellites around a central galaxy of given stellar mass can be derived from the PAC measurement and the AM modeling. We define satellites as the galaxies within $R_{vir}$ of the dark matter halos of the central galaxies. To get the number of satellites, the most straightforward way is to sum up the excess surface density $\bar{n}_2w_p(r_p)$ weighted by area within $R_{vir}$. However, two effects should be corrected for in our study: the surface density includes all the excess of galaxies along the line-of-sight direction rather than only within $R_{vir}$ (see \citet{2012ApJ...760...16J}), and the measurement within $0.1R_{vir}$ is unreliable for the HSC PDR2 photometric catalog due to the deblending problem \citep{2021ApJ...919...25W}. Therefore, we use the results from AM to compensate for these two effects. After populating the halos with galaxies, we measure the average number of satellites within the projected radius range $0.1R_{vir}<r_p<R_{vir}$ and within the virial radius $R_{vir}$ for each central and satellite mass bin. Then, we calculate the average number of satellites within $0.1R_{vir}<r_p<R_{vir}$ in the observation and infer the number within $R_{vir}$ using the ratio calculated from the simulation. We show the CSMF from $10^{9.0}{\rm M_\odot}$ to $10^{11.6}{\rm M_\odot}$ for central galaxies in different mass bins in Figure \ref{fig:csmf}. Colored dots show the results from the observation and the solid lines are the results from AM. The observational and simulation results are consistent with each other in all mass bins.
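The correction described above reduces, per central and satellite mass bin, to rescaling the observed projected count by a ratio measured in the mock. A schematic sketch (with hypothetical variable names, assuming the annulus and 3D counts have already been measured):

```python
def satellites_within_rvir(n_obs_annulus, n_sim_annulus, n_sim_vir):
    """Infer the mean 3D satellite count within R_vir per central.

    n_obs_annulus : observed mean projected count in 0.1*R_vir < r_p < R_vir
    n_sim_annulus : the same projected count measured in the AM mock
    n_sim_vir     : mean 3D count within R_vir in the AM mock

    The mock ratio corrects both for line-of-sight projection and for
    the excluded (deblending-limited) region r_p < 0.1*R_vir.
    """
    return n_obs_annulus * n_sim_vir / n_sim_annulus
```

For example, if the mock predicts 10 satellites within $R_{vir}$ but only 5 in the projected annulus, an observed annulus count of 4 is corrected to 8 satellites within $R_{vir}$.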
\section{Conclusion}\label{sec:con} In this paper, we provide a method for estimating the projected density distribution $\bar{n}_2w_p(r_p)$ from $w_{12}(\vartheta)$ and extend it to measure the distributions and properties of Photometric objects Around Cosmic web (PAC) traced by spectroscopic surveys. Basically, by assuming that the whole photometric sample is at the same redshift as the spectroscopic sources, we can calculate the physical properties of the photometric sample. Through cross-correlation, foreground and background galaxies with incorrectly estimated properties cancel out, and the true distribution of photometric sources with specified physical properties around the spectroscopic sources can be obtained. We apply PAC to massive ($>10^{11.3}{\rm M_\odot}$) central galaxies in the BOSS CMASS sample ($0.5<z<0.7$) and the HSC-SSP PDR2 wide field photometric sample. We calculate $\bar{n}_2w_p(r_p)$ for several stellar mass bins (three for CMASS $\times$ four for HSC) from $10^{9.0}{\rm M_\odot}$ to $10^{12.0}{\rm M_\odot}$, and the measurement is overall good at $0.1\ Mpc\ h^{-1}<r_p<10\ Mpc\ h^{-1}$ for all the mass bins. Then, we use abundance matching with MCMC sampling to model $\bar{n}_2w_p(r_p)$ in an N-body simulation. We use the same set of parameters for central and satellite galaxies to model the observation. All the parameters are well constrained and the fit to the observation is overall good for all mass bins. Our AM model accurately reproduces the observed SMF. The SHMR from our results is also in good agreement with previous works. Using PAC and AM, we also calculate the CSMF of satellites for centrals of different masses. We expect that PAC will have many applications with ongoing and upcoming photometric and spectroscopic surveys.
However, since wide spectroscopic surveys at higher redshifts only target specific populations of galaxies such as ELGs and QSOs, the galaxy-halo connection of these populations, which is also one of the key challenges for galaxy formation and cosmological studies, should be well established to make full use of PAC. Recently, there has been some progress in ELG HOD and AM modeling with spectroscopic or narrow band data \citep[H, Gao et al. 2021, in preparation]{2019ApJ...871..147G,2021PASJ...73.1186O}. PAC can provide more information for studying the galaxy-halo connection of ELGs. As in this work, the SHMR of normal galaxies can be obtained using PAC with a stellar mass limited spectroscopic sample such as Luminous Red Galaxies (LRGs). If the redshift range of ELGs overlaps with that of LRGs, by applying PAC to ELGs and the same photometric sample, we can obtain the galaxy bias of ELGs with respect to the underlying matter distribution. With a better understanding of the galaxy-halo connection for ELGs, we can extend PAC to higher redshifts that LRGs cannot reach. It is even worth trying to simultaneously study the connections of the ELGs and the normal galaxy population to dark matter halos, since the projected density distribution $\bar{n}_2w_p(r_p)$ is expected to be precisely measured with next generation galaxy surveys. With PAC and future surveys, we can study the galaxy-halo connection for galaxies with physical properties other than mass, such as SFR, color and morphology \citep{2021arXiv211005760X}. We can also push the understanding of the SHMR, SMF and other properties to higher redshift and a fainter luminosity end. With the properties and distribution of satellites, we can also study galaxy evolution processes such as the galaxy merger rate, merger timescale and environment quenching. Moreover, since PAC has a very strong signal at small scales, we can also quantify the fiber collision effect in spectroscopic samples.
We may also apply the method to photo-z samples to quantify photo-z errors. We will explore these applications in our future studies. \begin{acknowledgments} The work is supported by NSFC (12133006, 11890691, 11621303) and by 111 project No. B20019. This work made use of the Gravity Supercomputer at the Department of Astronomy, Shanghai Jiao Tong University. The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This publication has made use of data products from the Sloan Digital Sky Survey (SDSS). Funding for SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID \#2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID \#2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID \#2016A-0453; PI: Arjun Dey). 
DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF’s NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation. NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l’Espai (IEEC/CSIC), the Institut de Fisica d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF’s NOIRLab, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A\&M University. BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program “The Emergence of Cosmological Structures” Grant \# XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant \# 114A11KYSB20160057), and Chinese National Natural Science Foundation (Grant \# 11433005). The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration. The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. 
DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO. \end{acknowledgments}
\section{Introduction} Let $\zeta(s)$ be the Riemann zeta-function and $s=\sigma+\ie t$, where $\sigma$ and $t$ are real numbers. One of the great problems in zeta-function theory is to determine the true order of $|\zeta(s)|$ in the critical strip. Due to the functional equation we can assume $1/2 \leq\sigma\leq 1$. Writing $\zeta(s)\ll_{\varepsilon} |t|^{\mu(\sigma)+\varepsilon}$ for $\varepsilon>0$, where \[ \mu(\sigma) \de \limsup_{t\to\infty}\frac{\log{\left|\zeta\left(\sigma+\ie t\right)\right|}}{\log{t}}, \] it is well known that $0\leq\mu(\sigma)\leq (1-\sigma)/2$. This estimate can be improved for certain values of $\sigma$, see~\cite[Chapter 5]{Titchmarsh} and also~\cite{TrudgianANewUpper,TrudgianExplLogDer,Platt,HiaryAnExplicit,BourgainDecoupling,Patel,PatelPhD} to mention only recent results. Note that the Lindel\"{o}f Hypothesis states that $\mu(\sigma)\equiv0$. On the other hand, the Riemann Hypothesis (RH) drastically improves these estimates since it implies that \begin{equation} \label{eq:someEstimateForZeta} \log{\zeta(s)} \ll_{\varepsilon,\sigma_0} \left(\log{t}\right)^{2(1-\sigma)+\varepsilon} \end{equation} for $\varepsilon>0$, $1/2<\sigma_0\leq\sigma\leq 1$ and $t$ large, see~\cite[Theorem 14.2]{Titchmarsh}. In particular, the bound~\eqref{eq:someEstimateForZeta} implies Lindel\"{o}f-type estimates for $\zeta(s)$ and $1/\zeta(s)$, see~\eqref{eq:someEstimateForRecZeta}. Titchmarsh~\cite{TitchConseq} provided quantitative versions of~\eqref{eq:someEstimateForZeta}, namely that RH guarantees \begin{gather} \log{\left|\frac{1}{\zeta(s)}\right|} \ll \frac{\log{t}}{\log\log{t}}\log{\frac{2}{\left(\sigma-\frac{1}{2}\right)\log{\log{t}}}}, \label{eq:TitchZetaBound} \\ \log{\left|\zeta(s)\right|} \ll \frac{\log{t}}{\log\log{t}} \label{eq:TitchZetaBound2} \end{gather} for $0<\sigma-1/2\ll 1/\log{\log{t}}$ and $t$ large.
Estimates~\eqref{eq:TitchZetaBound} and~\eqref{eq:TitchZetaBound2} are of main interest near the critical line since \begin{equation} \label{eq:someEstimateForZeta2} \log{\zeta(s)} \ll \frac{\left(\log{t}\right)^{2(1-\sigma)}}{\log{\log{t}}} \end{equation} for $1/\log{\log{t}}\ll\sigma-1/2\leq 1/2-\varepsilon$, $\varepsilon>0$ and $t$ large, see~\cite[Equation 14.14.5]{Titchmarsh}. It should also be noted that in the case $\sigma=1$ one can replace the right-hand side of~\eqref{eq:TitchZetaBound} and~\eqref{eq:TitchZetaBound2} with $\log{\log{\log{t}}}$, see~\cite[Theorem 14.9]{Titchmarsh}, and~\cite{Lamzouri,LanguascoTrudgian} for explicit results of similar kind. The main purpose of this paper is to provide conditional effective versions of~\eqref{eq:TitchZetaBound} and~\eqref{eq:TitchZetaBound2}, see the following theorem. \begin{theorem} \label{thm:main} Assume the Riemann Hypothesis. For $1/2<\sigma\leq 3/2$ and $2\exp{\left(e^2\right)}\leq t\leq T$ we have \begin{gather} \log{\left|\frac{1}{\zeta(s)}\right|} \leq \omega_1\left(\sigma,t_0;\log{\log{\frac{3T}{2}}}\right)\log{\frac{3T}{2}}, \label{eq:corMain} \\ \log{\left|\zeta(s)\right|} \leq \omega_2\left(t_0;\log{\log{\frac{3T}{2}}}\right)\log{\frac{3T}{2}}. \label{eq:corMain2} \end{gather} Here $2\gamma_1\leq t_0\leq 50$ with $\gamma_1$ being the positive ordinate of the first nontrivial zero, and the functions $\omega_1$ and $\omega_2$, which, as $T\to\infty$ are asymptotic to $1/\log{\log{(3T/2)}}$, see~\eqref{eq:limits} and~\eqref{eq:limits2}, are given by~\eqref{eq:omega1} and~\eqref{eq:omega2}, respectively. 
In particular, \begin{gather} \log{\left|\frac{1}{\zeta(s)}\right|} \leq \frac{1.756\log{(2T)}}{\log{\log{(2T)}}}\log{\left(1+\frac{37.345}{\left(\sigma-\frac{1}{2}\right)\log{\log{T}}}\right)} + \frac{2.51\log{(2T)}}{\log{\log{(2T)}}}, \label{eq:corMainConrete} \\ \log{\left|\zeta(s)\right|} \leq \frac{8.45\log{(2T)}}{\log{\log{(2T)}}} \label{eq:corMain2Conrete} \end{gather} for $T\geq (2/3)\exp{\left(e^{10}\right)}$. \end{theorem} Theorem~\ref{thm:main} is included in Corollary~\ref{cor:main}, the proof of which will be provided in Section~\ref{sec:MainBound}. Observe that~\eqref{eq:TitchZetaBound2} implies \[ \left|\zeta\left(\frac{1}{2}+\ie t\right)\right| \leq \exp{\left(C\frac{\log{t}}{\log\log{t}}\right)} \] for some $C>0$ and $t$ large, and~\eqref{eq:corMain2} or~\eqref{eq:corMain2Conrete} make this effective. This was recently made explicit by the author using different techniques, and the present work can be viewed as an application of the results from~\cite{SimonicSonRH}; we are using Titchmarsh's approach~\cite{TitchConseq} which depends on conditional bounds for $S(t)$ and $S_1(t)$, see Section~\ref{sec:MainBound}. However, inequality~\eqref{eq:corMain2Conrete} provides a worse estimate for $\left|\zeta\left(1/2+\ie t\right)\right|$ than~\cite[Corollary 1]{SimonicSonRH}. In addition to Theorem~\ref{thm:main}, we also give two applications of such bounds, namely effective conditional estimates for the Mertens function $M(x)=\sum_{n\leq x}\mu(n)$ and for the number of $k$-free numbers $Q_k(x)$. As usual, $x\geq1$, $k\geq 2$ is an integer, and $\mu(n)$ is the M\"{o}bius function. Our main result in this direction is summarized by the next theorem. \begin{theorem} \label{thm:FinalApp} Assume the Riemann Hypothesis. 
Then \begin{equation} \label{eq:thmMertensGen} \left|M(x)\right| \leq 0.6\sqrt{x}\exp{\left(\frac{5.2251\log{x}}{\log{\log{\left(\frac{3e}{2}\sqrt{x}\right)}}}+\log{\log{x}}\right)} \quad \textrm{for} \quad x\geq 10^{10^{4.487}} \end{equation} and \begin{multline} \label{eq:thmkFree} \left|Q_{k}(x)-\frac{x}{\zeta(k)}\right| \leq 0.1x^{\frac{1}{k+1}}\exp\left(\frac{7.525\log{x}}{2(k+1)\log{\log{\left(\frac{3e}{2}x^{\frac{1}{2(k+1)}}\right)}}}\right. \\ \left.+\frac{\left(7.492k+8.25\right)\log{x}}{(k+1)\log{\log{\left(\frac{3e}{2}x^{\frac{1}{k+1}}\right)}}}+2\log{\log{x}}\right) \quad \textrm{for} \quad x\geq 10^{(k+1)10^{23.147}}. \end{multline} Additionally, \begin{equation} \label{eq:thmMertens} \left|M(x)\right| \leq Ax^{\alpha}\log{x} \quad \textrm{for} \quad x\geq 10^{10^{X}} \end{equation} and \begin{equation} \label{eq:thmSquarefree} \left|Q_2(x)-\frac{x}{\zeta(2)}\right| \leq Bx^{\beta}\log^{2}{x} \quad \textrm{for} \quad x\geq 10^{3\cdot10^{Y}}, \end{equation} where values for the constants are given by Table~\ref{tab:FinalApp}. 
\end{theorem} \begin{table} \centering \begin{footnotesize} \begin{tabular}{cllllllllll} \toprule $\alpha$ & $0.999$ & $0.99$ & $0.9$ & $0.85$ & $0.8$ & $0.75$ & $0.7$ & $0.65$ & $0.6$ & $0.55$ \\ $X$ & $4.487$ & $4.543$ & $5.255$ & $5.837$ & $6.654$ & $7.865$ & $9.798$ & $13.003$ & $19.53$ & $39.121$ \\ $A$ & $0.517$ & $0.505$ & $0.408$ & $0.362$ & $0.322$ & $0.286$ & $0.254$ & $0.227$ & $0.202$ & $0.179$ \\ \midrule $\beta$ & $0.4999$ & $0.499$ & $0.49$ & $0.48$ & $0.47$ & $0.46$ & $0.45$ & $0.44$ & $0.4$ & $0.35$ \\ $Y$ & $23.147$ & $23.273$ & $24.628$ & $26.326$ & $28.276$ & $30.534$ & $33.179$ & $36.314$ & $58.29$ & $234.21$ \\ $B$ & $0.0852$ & $0.0850$ & $0.0826$ & $0.0799$ & $0.0774$ & $0.0750$ & $0.0726$ & $0.0704$ & $0.0622$ & $0.0533$ \\ \bottomrule \end{tabular} \end{footnotesize} \caption{Values for the constants from Theorem~\ref{thm:FinalApp}.} \label{tab:FinalApp} \end{table} One function which is commonly associated with $M(x)$ is $m(x)=\sum_{n\leq x}\mu(n)/n$. It is well known that $m(x)=o(1)$. The following corollary to inequality~\eqref{eq:thmMertens} can be viewed as a conditional and effective version of this relation. Later we will give a quantitative formulation of the assertion~\cite[Theorem 14.25 (A)]{Titchmarsh} that RH implies that $\sum_{n=1}^{\infty}\mu(n)n^{-s}$ converges to $1/\zeta(s)$ for every $\sigma>1/2$, see Theorem~\ref{thm:m}. \begin{corollary} \label{cor:m} Assume the Riemann Hypothesis. Then \begin{equation} \label{eq:m} \left|m(x)\right| \leq \frac{51.005\log{x}+5050}{x^{0.01}} \end{equation} for $x\geq 10^{10^{4.543}}$. \end{corollary} Our proof of Theorem~\ref{thm:FinalApp} strongly relies on the ideas from Titchmarsh~\cite{TitchConseq}, and from Montgomery and Vaughan~\cite{MontyVaughanSquarefree}.
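For orientation, the explicit bound~\eqref{eq:corMain2Conrete} can be evaluated numerically at its threshold $T=(2/3)\exp{\left(e^{10}\right)}$. Since $T$ itself overflows floating-point arithmetic, we work with $\log(2T)=\log(4/3)+e^{10}$ directly (an illustration only, not part of any proof):

```python
import math

# log(2T) at the threshold T = (2/3)*exp(e^10); T itself is far too
# large to represent as a float, so keep everything in log form.
log_2t = math.log(4.0 / 3.0) + math.exp(10.0)

# Eq. (corMain2Conrete): log|zeta(s)| <= 8.45 * log(2T) / loglog(2T)
bound = 8.45 * log_2t / math.log(log_2t)
```

At this threshold, $\log(2T)\approx 2.2\times10^{4}$ and $\log\log(2T)\approx 10$, so the bound on $\log\left|\zeta(s)\right|$ is of order $2\times10^{4}$, a substantial saving over the factor $\log\log(2T)$ absent from convexity-type bounds.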
It should be mentioned here that~\eqref{eq:thmMertens} and~\eqref{eq:thmSquarefree} do not simply follow from~\eqref{eq:thmMertensGen} and~\eqref{eq:thmkFree}, and special consideration is needed to provide values for $X$, $A$, and for $Y$, $B$ for selected $\alpha$ and $\beta$, respectively. Moreover, we identify some irregularities in Titchmarsh's later proof~\cite[Chapter 14]{Titchmarsh} and in Montgomery--Vaughan's proof, see Sections~\ref{sec:kfree} and~\ref{sec:MainBound}. While comparing Theorem~\ref{thm:FinalApp} with other (unconditional) explicit results, it is not hard to verify that even constants from the first column in Table~\ref{tab:FinalApp} will produce better bounds than~\eqref{eq:Ramare},~\eqref{eq:Chalker}, and~\eqref{eq:Cohen}. The same is true also for~\eqref{eq:m}. This is mainly because of our large admissible values for $x$, which is a consequence of the double logarithm from Theorem~\ref{thm:main} and also of relatively large constants in estimates for $S(t)$ and $S_1(t)$, see~\eqref{eq:SExpl} and~\eqref{eq:S1Expl}. Possible improvement upon~\eqref{eq:thmMertens} by using~\eqref{eq:someEstimateForZeta} or~\eqref{eq:someEstimateForZeta2} is briefly described in Remark~\ref{rem:RHB}. The outline of this paper is as follows. In Section~\ref{sec:app} we list some results related to Theorem~\ref{thm:FinalApp} and sketch the original proofs while Section~\ref{sec:MainBound} is devoted to the proof of Theorem~\ref{thm:main}. General bounds for $M(x)$ and $Q_k(x)$ are formulated and proved in Section~\ref{sec:Perron}, where also an effective truncated Perron's summation formula (Theorem~\ref{thm:truncatedPerron}) is given. The proofs of Theorem~\ref{thm:FinalApp} and Corollary~\ref{cor:m} are provided in Section~\ref{sec:proofApp}. 
\section{Two applications} \label{sec:app} The aim of this section is to discuss some results and relevant techniques concerning upper bounds for the partial sums of the M\"{o}bius function $M(x)$ and for the number of $k$-free numbers $Q_k(x)$. \subsection{On the Mertens function $M(x)$} It is well known that the assertion $M(x)=o(x)$ is equivalent to the Prime Number Theorem. The most recent explicit version of this relation is provided by Ramar\'{e}~\cite{Ramare2013}, namely \begin{equation} \label{eq:Ramare} \left|M(x)\right| \leq \frac{0.013\log{x}-0.118}{\log^{2}{x}}x \end{equation} is true for $x\geq 1.08\cdot10^6$. Applying zero-free regions for $\zeta(s)$ improves the bounds for $M(x)$; e.g., the Vinogradov--Korobov region~\cite[Chapter 6]{Ivic} implies the strongest unconditional estimate \[ M(x) \ll x\exp{\left(-C\left(\log{x}\right)^{\frac{3}{5}}\left(\log{\log{x}}\right)^{-\frac{1}{5}}\right)} \] for some absolute constant $C>0$, see~\cite[p.~191]{WalfiszWeyl}. Effective results which are based on the classical zero-free region, and are thus of the form \begin{equation} \label{eq:Chalker} \left|M(x)\right| \leq C_{W}x\exp{\left(-c_{W,\varepsilon}\sqrt{\log{x}}\right)}, \quad x > x_0\left(W,\varepsilon\right) \end{equation} for some known positive constants $C_{W}$, $c_{W,\varepsilon}$ and $x_0\left(W,\varepsilon\right)$, e.g., $C_{W}=6.1\cdot10^{8}$, $c_{W,\varepsilon}=0.297$ and $x_0\left(W,\varepsilon\right)=\exp{\left(14305.32\right)}$, are given in~\cite{Chalker}. Both~\cite{Ramare2013} and~\cite{Chalker} also provide explicit bounds for $m(x)$ which are based on~\eqref{eq:Ramare} and~\eqref{eq:Chalker}. Littlewood~\cite{LittlewoodMobius} proved that RH is equivalent to \begin{equation} \label{eq:mobiusDef} M(x) \ll_{\varepsilon}x^{\frac{1}{2}+\varepsilon} \end{equation} for every $\varepsilon>0$.
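For small $x$ the partial sums $M(x)$ can of course be computed exactly; a linear sieve for $\mu(n)$ gives a quick way to observe the square-root-scale oscillation of~\eqref{eq:mobiusDef} numerically (an illustration only):

```python
def mertens_values(n):
    """Compute mu(1..n) with a linear sieve and return the list M with
    M[x] = sum_{k<=x} mu(k), the Mertens function for 0 <= x <= n."""
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1          # i is prime: mu(p) = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0   # p^2 divides i*p
                break
            mu[i * p] = -mu[i]  # one extra distinct prime factor
    M = [0] * (n + 1)
    for k in range(1, n + 1):
        M[k] = M[k - 1] + mu[k]
    return M
```

For instance, $M(10)=-1$ and $M(100)=1$, well within the $\sqrt{x}$ scale predicted under RH.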
The proof can be divided into three steps: \begin{enumerate} \item Partial summation gives \begin{equation} \label{eq:PartSumMobius} \frac{1}{s\zeta(s)} = \int_{0}^{\infty} \frac{M(x)}{x^{s+1}}\dif{x} \end{equation} for $\sigma>1$. \item Truncated Perron's formula implies \begin{equation} \label{eq:PerronNonexplicit} M(x) = \frac{1}{2\pi\ie}\int_{c-\ie T}^{c+\ie T}\frac{x^{s}\dif{s}}{s\zeta(s)} + O\left(\frac{x^{c}}{(c-1)T}\right) + O\left(\frac{x\log{x}}{T}\right) + O(1) \end{equation} for $c>1$ and $T\geq 1$. \item RH guarantees \begin{equation} \label{eq:someEstimateForRecZeta} \frac{1}{\zeta(s)} \ll_{\varepsilon,\sigma_0} |t|^{\varepsilon} \end{equation} for $1/2<\sigma_0\leq\sigma\leq 1$. \end{enumerate} If~\eqref{eq:mobiusDef} is true, then the integral in~\eqref{eq:PartSumMobius} is a holomorphic function on the half-plane $\sigma>1/2+\varepsilon$ and thus so too is $1/\zeta(s)$ by analytic continuation. Conversely, RH allows us to move the line of integration in~\eqref{eq:PerronNonexplicit} arbitrarily close to the critical line, while~\eqref{eq:someEstimateForRecZeta} and the choice $T=x^c$ for $c=1+1/\log{x}$ then imply~\eqref{eq:mobiusDef}, see~\cite[Theorem 14.25 (C)]{Titchmarsh} for details. Since Littlewood's paper, some progress has been made in determining $\varepsilon$ in~\eqref{eq:mobiusDef} as a function of $x$. Landau~\cite{LandauMobius} showed that $\varepsilon \ll \log{\log{\log{x}}}/\log{\log{x}}$, while Titchmarsh~\cite{TitchConseq} improved this to \begin{equation} \label{eq:TitchMobius} M(x) \ll \sqrt{x}\exp{\left(\frac{C\log{x}}{\log{\log{x}}}\right)} \end{equation} for some $C>0$ by using~\eqref{eq:TitchZetaBound}. For a long time Titchmarsh's result was not improved, until Maier and Montgomery~\cite{MaierMontgomery} proved that $\varepsilon\ll \left(\log{x}\right)^{-22/61}$.
Shortly after their proof was published, Soundararajan~\cite{SoundMobius} obtained $\varepsilon\ll\left(\log{\log{x}}\right)^{14}\left(\log{x}\right)^{-1/2}$. According to~\cite{Balazard}, it is possible to refine his proof and show that \[ M(x) \ll_{\varepsilon} \sqrt{x}\exp{\left(C\sqrt{\log{x}}\left(\log{\log{x}}\right)^{\frac{5}{2}+\varepsilon}\right)}, \] which is currently the strongest known result assuming only RH. For better estimates under the assumption of other plausible conjectures in zeta-function theory, see~\cite{Ng2004,Saha}. For a generalisation of Soundararajan's estimate to partial sums of $\mu(n)$ in arithmetic progressions, see~\cite{HalupczokSuger}. Having Theorem~\ref{thm:main} at our disposal, it is not hard to obtain an effective version of~\eqref{eq:TitchMobius} by following the procedure of steps (2) and (3) from Littlewood's proof. We choose $c=1+1/\log{x}$ and $T=e\sqrt{x}$, see Corollary~\ref{cor:Perron}. Then we move the line of integration to some $\sigma_0\in(1/2,1)$ and derive bounds for the resulting integrals by using~\eqref{eq:corMain} and Lemma~\ref{lem:zetabound}, see the proof of Theorem~\ref{thm:MainMoebius} for all details. Estimate~\eqref{eq:thmMertensGen} will follow after taking $\sigma_0-1/2\ll 1/\log{\log{x}}$ for suitably chosen constants, while the proof of~\eqref{eq:thmMertens} now consists of optimizing $\sigma_0$ and $t_0$ in order to obtain the desired values for $\alpha$. \begin{remark} \label{rem:RHB} It might be possible to improve $X$ from~\eqref{eq:thmMertens} for $\alpha$ close to $1$ by using effective versions of~\eqref{eq:someEstimateForZeta} or~\eqref{eq:someEstimateForZeta2} in combination with \begin{equation} \label{eq:Mhat} \widehat{M}(x+h) - \widehat{M}(x) = \frac{1}{2\pi\ie}\int_{\sigma_0-\ie\infty}^{\sigma_0+\ie\infty} \frac{(x+h)^{s+1}-x^{s+1}}{s(s+1)\zeta(s)}\dif{s}, \end{equation} where $\widehat{M}(x)\de\sum_{n\leq x}(x-n)\mu(n)$, $0<h\leq x$ and $\sigma_0\in\left(1/2,1\right)$.
Formula~\eqref{eq:Mhat} follows after moving the line of integration in the corresponding Perron's formula, a process which is justified under the assumption of RH. Observe that \begin{equation} \label{eq:Mhat2} \left|\left(\widehat{M}(x+h) - \widehat{M}(x)\right)h^{-1} - M(x)\right| \leq h+1. \end{equation} Take $\sigma_0\in\left(1/2,1\right)$. By~\eqref{eq:someEstimateForZeta2} there exist $t_0>0$ and $C>0$ such that \[ \left|\frac{1}{\zeta\left(\sigma_0+\ie t\right)}\right| \leq t^{\varepsilon_0}, \quad \varepsilon_0 = \frac{C}{\left(\log{t_0}\right)^{2\sigma_0-1}\log{\log{t_0}}} < 1 \] for $t\geq t_0$. Take $h=x^{\kappa}$ for $0<\kappa\leq 1$. By using the reflection principle, splitting the range of integration in~\eqref{eq:Mhat} into three parts from $\sigma_0$ to $\sigma_0+\ie t_0$ to $\sigma_0+\ie xh^{-1}$ to $\sigma_0+\ie \infty$, and then using~\eqref{eq:Mhat2} together with \[ \left|\frac{(x+h)^{s+1}-x^{s+1}}{s+1}\right| \leq h\left(x+h\right)^{\sigma_0} \ll hx^{\sigma_0} \] for the first two integrals and $\left|(x+h)^{s+1}-x^{s+1}\right|\leq 2\left(x+h\right)^{\sigma_0+1}\ll x^{\sigma_0+1}$ for the last, we obtain \[ M(x) \ll h + x^{\sigma_0} + \left(\frac{x}{h}\right)^{\varepsilon_0} x^{\sigma_0} \ll x^{\kappa} + x^{\sigma_0+\left(1-\kappa\right)\varepsilon_0}. \] The problem is now to optimize $\kappa$. However, we must emphasize that the implied constants in the latter inequality also depend on knowing the precise behaviour of $\left|\zeta\left(\sigma_0+\ie t\right)\right|$ for all $t\in\left[0,t_0\right]$, where $t_0$ could be large. \end{remark} \subsection{On the number of $k$-free numbers} \label{sec:kfree} Let $k\geq 2$ be an integer. We say that $n\in\N$ is a $k$-free number if it has no nontrivial divisor which is a perfect $k$th power. Denote by $Q_k(x)$ the number of $k$-free numbers not exceeding $x\geq 1$.
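For $k=2$ the counting function and its classical M\"{o}bius-sum representation $Q_2(x)=\sum_{n\leq\sqrt{x}}\mu(n)\lfloor x/n^2\rfloor$ are easy to check numerically against the density $1/\zeta(2)=6/\pi^2$. The sketch below is illustrative only (the helper names are ours) and counts squarefree numbers both directly and via the M\"{o}bius sum.

```python
from math import isqrt, pi

def mobius(n):
    """mu(n) via trial division (adequate for small n)."""
    res = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # p^2 divides the original n
            res = -res
        p += 1
    if n > 1:
        res = -res
    return res

def Q2_direct(x):
    """Count squarefree n <= x by testing divisibility by squares."""
    def squarefree(n):
        return all(n % (d * d) != 0 for d in range(2, isqrt(n) + 1))
    return sum(1 for n in range(1, x + 1) if squarefree(n))

def Q2_mobius(x):
    """Q_2(x) = sum_{n <= sqrt(x)} mu(n) * floor(x / n^2)."""
    return sum(mobius(n) * (x // (n * n)) for n in range(1, isqrt(x) + 1))

print(Q2_mobius(100))   # 61 squarefree numbers up to 100
print(100 * 6 / pi**2)  # main term x / zeta(2) ~ 60.79
```

The two counting methods agree exactly, and the deviation from $x/\zeta(2)$ stays on the square-root scale discussed below.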
It is not hard to prove by elementary methods that \begin{equation} \label{eq:kfree} Q_k(x) = \sum_{n\leq x}\sum_{d^{k}|n}\mu(d) = \frac{x}{\zeta(k)} + O\left(x^{\frac{1}{k}}\right), \end{equation} where the implied constants depend on $k$, see~\cite[Theorem 2.2]{MontgomeryVaughan} in the case $k=2$. Cohen et al.~\cite{CohenDressMarraki} provide an explicit bound \begin{equation} \label{eq:Cohen} \left|Q_2(x)-\frac{x}{\zeta(2)}\right| \leq 0.02767\sqrt{x} \end{equation} for $x\geq 4.4\cdot10^5$, while the author is not aware of any published effective estimates on $Q_k(x)$ for larger $k$. As before, the Vinogradov--Korobov zero-free region implies the strongest unconditional estimate \[ Q_k(x) = \frac{x}{\zeta(k)} + O\left(x^{\frac{1}{k}}\exp{\left(-Ck^{-\frac{8}{5}}\left(\log{x}\right)^{\frac{3}{5}}\left(\log{\log{x}}\right)^{-\frac{1}{5}}\right)}\right) \] for some $C>0$, where the implied constants may depend on $k$, see~\cite[p.~192]{WalfiszWeyl}. Assuming RH, several authors have made improvements upon~\eqref{eq:kfree}, e.g., Montgomery and Vaughan~\cite{MontyVaughanSquarefree} proved that \begin{equation} \label{eq:kfreeMV} Q_k(x) = \frac{x}{\zeta(k)} + O\left(x^{\frac{1}{k+1}+\varepsilon}\right). \end{equation} It is conjectured that~\eqref{eq:kfreeMV} is true with $2k$ instead of $k+1$, see~\cite{MontyVaughanSquarefree} for results prior to~\eqref{eq:kfreeMV}, and~\cite{MossinghoffOliveiraTrudgian} for an overview of recent advances. We are going to state the main ideas of Montgomery and Vaughan's method with our explicit intentions in mind. Their proof starts with the observation that one can use~\eqref{eq:kfree} to write \[ Q_{k}(x) = \sum_{\substack{n^{k}m\leq x \\ n\leq y}} \mu(n) + \sum_{\substack{n^{k}m\leq x \\ n>y}} \mu(n) \] for $1\leq y\leq x^{1/k}$.
Denoting by $Q_{k,1}(x)$ and $Q_{k,2}(x)$ the above two sums, written in that order, it is not hard to show by elementary methods that \begin{equation} \label{eq:Q1} Q_{k,1}(x) = \sum_{n\leq y} \mu(n)\left\lfloor\frac{x}{n^k}\right\rfloor = \frac{x}{\zeta(k)} - \left(xf_{y}(k)+\frac{1}{2}M(y)+S_k(x,y)\right), \end{equation} where $M(x)$ is the Mertens function, \begin{equation} \label{eq:fy} f_{y}(s) \de \sum_{y<n}\frac{\mu(n)}{n^s} \end{equation} for $\sigma>1$, and \[ S_k(x,y)\de \sum_{n\leq y} \left(\frac{x}{n^k}-\left\lfloor{\frac{x}{n^k}}\right\rfloor-\frac{1}{2}\right)\mu(n). \] Trivially, $\left|S_k(x,y)\right|\leq y/2$. On the other hand, partial summation and~\eqref{eq:mobiusDef} imply the conditional estimate $\left|f_y(s)\right|\ll_{\varepsilon} y^{1/2-\sigma+\varepsilon}$, see Lemma~\ref{lem:fy} for the precise explicit result. Concerning the second term $Q_{k,2}(x)$, the truncated Perron formula is used together with \begin{equation} \label{eq:zetafy} \zeta(s)f_{y}(ks) = \sum_{m=1}^{\infty}\sum_{y<n}\frac{\mu(n)}{\left(n^{k}m\right)^s}, \end{equation} which is valid for $\sigma>1$, to obtain \begin{equation} \label{eq:PerronNonexplicit2} Q_{k,2}(x) = \frac{1}{2\pi\ie}\int_{c-\ie T}^{c+\ie T}\frac{\zeta(s)f_{y}(ks)x^s\dif{s}}{s} + O\left(\frac{yx^{c}}{(c-1)T}\right) + O\left(\frac{yx\log{x}}{T}\right) + O(y) \end{equation} for $c>1$ and $T\geq 1$. In~\cite{MontyVaughanSquarefree} the choice $c=1+1/\log{x}$ and $T=x$ is proposed, so the error term in~\eqref{eq:PerronNonexplicit2} becomes $\ll y\log{x}$. RH allows one to move the line of integration to the left, and taking $y=x^{1/(k+1)}$ will then produce~\eqref{eq:kfreeMV}. We should remark here that it appears to be assumed in~\cite{MontyVaughanSquarefree} that the coefficients of the Dirichlet series for~\eqref{eq:zetafy} are bounded by some absolute constant, resulting in $x^{\varepsilon}$, $\varepsilon>0$, for the error terms in~\eqref{eq:PerronNonexplicit2}.
Although this claim does not change the final result~\eqref{eq:kfreeMV}, we would like to demonstrate that it is not correct. Note also that a sketch of the proof in~\cite[pp.~446--447]{MontgomeryVaughan} is different and avoids~\eqref{eq:PerronNonexplicit2}. \begin{lemma} \label{lem:zetafy} Let $\sigma>1$, $y\geq 1$, and $k\geq 2$ be an integer. Assume that $\sum_{d=1}^{\infty}a_{d}d^{-s}$ is the Dirichlet series for $\zeta(s)f_{y}(ks)$. Then $\left|a_d\right|\leq y$. \end{lemma} \begin{proof} Observe that \[ f_{y}(ks) = \sum_{n^{1/k}\in\N_{>y}} \frac{\mu\left(n^{1/k}\right)}{n^s}. \] Then one can use Dirichlet convolution to obtain \[ a_d = \sum_{\substack{q|d \\ q^{1/k}\in\N_{>y}}} \mu\left(q^{1/k}\right). \] Trivially, $a_d=0$ if $d$ is a $k$-free number. Take $d=\left(p_1^{\nu_1}\cdots p_{l}^{\nu_l}\right)^{k} m$, where $p_1,\ldots,p_l$ are distinct prime numbers and $m$ is a $k$-free number. Then \begin{flalign*} a_d &= -\sum_{\substack{y<p_{j_1} \\ 1\leq j_1\leq l}} 1 + \sum_{\substack{y<p_{j_1}p_{j_2} \\ 1\leq j_1<j_2\leq l}} 1 - \sum_{\substack{y<p_{j_1}p_{j_2}p_{j_3} \\ 1\leq j_1<j_2<j_3\leq l}} 1 + \cdots + (-1)^{l}\sum_{\substack{y<p_{j_1}\cdots p_{j_l} \\ 1\leq j_1<\cdots<j_l\leq l}} 1 \\ &= -1 + \sum_{\substack{p_{j_1}\leq y \\ 1\leq j_1\leq l}} 1 - \sum_{\substack{p_{j_1}p_{j_2}\leq y \\ 1\leq j_1<j_2\leq l}} 1 + \sum_{\substack{p_{j_1}p_{j_2}p_{j_3}\leq y \\ 1\leq j_1<j_2<j_3\leq l}} 1 - \cdots - (-1)^{l}\sum_{\substack{p_{j_1}\cdots p_{j_l}\leq y \\ 1\leq j_1<\cdots<j_l\leq l}} 1. \end{flalign*} It follows that $\left|a_d\right|\leq 1+(y-1)=y$. \end{proof} Let $y$ be large enough and let $p_1,\ldots,p_l$ be all primes in the interval $\left(\sqrt{y},y\right]$. If $d=\left(p_1\cdots p_l\right)^{k}m$ for $m\in\N$, then the proof of Lemma~\ref{lem:zetafy} implies \[ a_d = -1+l = -1+\pi\left(y\right)-\pi\left(\sqrt{y}\right) \gg \frac{y}{\log{y}}.
\] This shows that the coefficients $a_d$ from the Dirichlet series for~\eqref{eq:zetafy} are not bounded by some absolute constant. The outline of our proof of~\eqref{eq:kfreeMV} is now very similar to the proof of~\eqref{eq:TitchMobius}. We choose $c=1+1/\log{x}$ and $T=ex$, see Corollary~\ref{cor:Perron2}. Then we move the line of integration to an arbitrary $\sigma'\in(1/2,3/4]$ and derive bounds for the resulting integrals by using~\eqref{eq:corMain2} and Lemmas~\ref{lem:zetabound2} and~\ref{lem:fy}. Theorem~\ref{thm:MainkFree} will follow after the observation that one can take $\sigma'\to 1/2$ in the final inequalities. Because we also use Theorem~\ref{thm:MainMoebius}, the proof of~\eqref{eq:thmSquarefree} consists of optimizing $\sigma$ and $t_0$ from~\eqref{eq:corMain} while taking $t_0=2\gamma_1$ in~\eqref{eq:corMain2} in order to obtain the desired values for $\beta$. \section{Proof of Theorem~\ref{thm:main}} \label{sec:MainBound} Let $N(T)$ be the number of nontrivial zeros $\rho=\beta+\ie\gamma$ of $\zeta(s)$ with $0<\gamma\leq T$. The Riemann--von Mangoldt formula asserts that \begin{equation} \label{eq:RvM} N(T) = \frac{T}{2\pi}\log{\frac{T}{2\pi e}} + \frac{7}{8} + Q(T), \end{equation} where $Q(T)\de S(T)+R(T)$ with $S(T)\ll \log{T}$ and $R(T)\ll 1/T$, see~\cite[Section 9.3]{Titchmarsh} for details. By~\cite[Lemma 2]{BrentPlattTrudgian} it is known that $\left|R(T)\right|\leq \frac{1}{150T}$ for $T\geq 2\pi$. As usual, $\pi S(t)$ is the argument of $\zeta(s)$ on the critical line. A closely related function is \[ S_1(t)\de \int_{0}^{t}S(u)\dif{u} \] for $t\geq0$. Unconditionally we also have $S_1(t)\ll\log{t}$. However, on RH better estimates are known, and effective bounds have recently been provided.
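The quality of the smooth part of~\eqref{eq:RvM} is easy to observe numerically: already at moderate heights it predicts the classical zero counts $N(100)=29$ and $N(1000)=649$ to within $|Q(T)|<1$. A small sketch (illustrative only, not part of the argument):

```python
import math

def rvm_main_term(T):
    """Smooth part of the Riemann-von Mangoldt formula:
    (T / (2 pi)) * log(T / (2 pi e)) + 7/8."""
    return T / (2 * math.pi) * math.log(T / (2 * math.pi * math.e)) + 7 / 8

# Classical counts of zeros with 0 < gamma <= T: N(100) = 29, N(1000) = 649;
# the first ordinate is gamma_1 = 14.134725...
for T in (100, 1000):
    print(T, rvm_main_term(T))
```

Rounding the smooth part to the nearest integer recovers both counts, consistent with the smallness of $S(T)$ and $R(T)$.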
Let \[ \mathcal{M}(a,b,c;t)\de a + \frac{b}{\left(\log{t}\right)^{c}\log\log{t}} \] for some positive real numbers $a$, $b$ and $c$, and define \begin{gather} \mathcal{M}_1(t)\de \mathcal{M}(0.759282,20.1911,0.285;t), \label{eq:M1} \\ \mathcal{M}_2(t)\de \mathcal{M}(0.653,60.12,0.2705;t). \label{eq:M2} \end{gather} By~\cite{SimonicSonRH} we know that RH implies \[ \left|S(t)\right|\leq \phi_1(t)\frac{\log{t}}{\log{\log{t}}} \] with \begin{equation} \label{eq:SExpl} \phi_1(t) \de \left\{ \begin{array}{ll} 0.96, & 2\pi\leq t < 10^{2465}, \\ \mathcal{M}_1(t)+0.96-\mathcal{M}_1\left(10^{2465}\right), & t\geq 10^{2465}, \end{array} \right. \end{equation} and \[ \left|S_1(t)\right|\leq \phi_2(t)\frac{\log{t}}{\left(\log{\log{t}}\right)^2}, \] with \begin{equation} \label{eq:S1Expl} \phi_2(t) \de \left\{ \begin{array}{ll} 2.491, & 2\pi\leq t < 10^{208}, \\ \mathcal{M}_2(t)+2.491-\mathcal{M}_2\left(10^{208}\right), & t\geq 10^{208}. \end{array} \right. \end{equation} Observe that $\phi_1(t)$ and $\phi_2(t)$ are continuous and decreasing functions for $t\geq 2\pi$. Here some improvements may be possible. However, we should emphasize that for our proof to work we need to know analytic properties of some functions involving $\phi_1(t)$ and $\phi_2(t)$, e.g., that $\phi_2(t)\left(\log{\log{t}}\right)^{-2}\log{t}$ is an increasing function for $t\geq\exp{\left(e^2\right)}$. The main result of this section is Theorem~\ref{thm:zetaMain}, whose Corollary~\ref{cor:main} immediately implies Theorem~\ref{thm:main}. We are following~\cite{TitchConseq}. In the literature one can find two similar proofs of~\eqref{eq:TitchZetaBound} and~\eqref{eq:TitchZetaBound2}, namely~\cite[Theorem 14.14 (B)]{Titchmarsh} and~\cite[Theorem 13.23]{MontgomeryVaughan}.
The former proof is closer to~\cite{TitchConseq} in the sense that it also relies on conditional estimates for $S(t)$ and $S_1(t)$, while in the latter proof these functions implicitly appear through properties of $\zeta'(s)/\zeta(s)$. We would also like to note that there is an error in~\cite[Section 14.10, p.~347]{Titchmarsh}; it is true that RH implies \begin{equation} \label{eq:problem} \int_{\alpha+\ie T}^{2+\ie T}\frac{\log{\zeta(z)}}{z-s}\dif{z} = O\left(\frac{\log{T}}{t-T}\right), \end{equation} uniformly for $1/2<\alpha<2$ and $T<t$, but~\eqref{eq:problem} does not follow from~\eqref{eq:someEstimateForZeta} because this estimate is not uniform in $\sigma$. Instead one should consider the (unconditional) estimate \[ \log{\zeta(s)} = \sum_{\left|t-\gamma\right|\leq 1} \log{\left(s-\rho\right)} + O\left(\log{t}\right), \] which is uniform in $\sigma\in[-1,2]$, see~\cite[Theorem 9.6 (B)]{Titchmarsh}. On RH we then have \[ \int_{\alpha}^{2}\left|\log{\zeta\left(u+\ie T\right)}\right|\dif{u} \leq \sum_{\left|T-\gamma\right|\leq 1} \int_{\alpha}^{2} \left|\log{\left(u-\frac{1}{2}+\ie\left(T-\gamma\right)\right)}\right|\dif{u} + O\left(\log{T}\right), \] and the right-hand side of the above inequality can easily be seen to be $\ll \log{T}$. The main idea in Titchmarsh's older proof is to write $\log{\zeta(s)}$ as the integral of $Q(u)$. This is achieved by using Hadamard's factorization theorem for $\xi\left(1/2+\ie z\right)$, the Riemann--von Mangoldt formula~\eqref{eq:RvM}, and also Stirling's formula, see the following lemma. \begin{lemma} \label{lem:logzeta} Assume the Riemann Hypothesis. Let $s=\sigma+\ie t$ with $1/2<\sigma\leq 3/2$ and $t\geq t_0>\gamma_1$.
Then \begin{equation} \label{eq:logzeta} \log{\left|\zeta(s)\right|} = \Re\left\{2\left(s-\frac{1}{2}\right)^2\int_{\gamma_1}^{\infty}\frac{Q(u)}{u\left(u^2+\left(s-\frac{1}{2}\right)^2\right)}\dif{u}\right\} + R_1, \end{equation} where \begin{flalign} \label{eq:R1} \left|R_1\right| \leq \mathcal{R}_1\left(t_0\right) &\de \left|\log{\left|\xi\left(\frac{1}{2}\right)\right|}\right| + \frac{1}{4}\log{\frac{2e}{\pi}} + \frac{\delta\left(t_0\right)}{\pi}\int_{0}^{\gamma_1}\frac{\left|\log{\left(2\pi e/u\right)}\right|}{1-\left(u/t_0\right)^2}\dif{u} \nonumber \\ &+ \frac{7}{8}\left|\log{\left(\frac{1}{\gamma_1^2}-\frac{1}{t_0^2}\right)}\right| + \frac{7+2t_0}{4}\log{\frac{2t_0}{2t_0-1}} + \frac{1}{6t_0} + \frac{4}{45t_0^3} \end{flalign} and \begin{equation} \label{eq:delta} \delta\left(t_0\right)\de \left(1+\frac{1}{t_0}\right)^2. \end{equation} \end{lemma} \begin{proof} Define $z\de -\ie\left(s-1/2\right)$ and take \begin{equation} \label{eq:Xi} \Xi(z)\de\xi\left(\frac{1}{2}+\ie z\right), \quad \xi(s) \de \frac{1}{2}s(s-1)\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)\zeta(s). \end{equation} Let $0<\gamma_1\leq\gamma_2\leq\cdots\leq\gamma_n\leq\cdots$ denote the ordinates of the nontrivial zeros in the upper half-plane. By Hadamard's factorization theorem it follows that \[ \Xi(z) = \xi\left(\frac{1}{2}\right)\prod_{n=1}^{\infty}\left(1-\frac{z^2}{\gamma_n^2}\right). \] Therefore, \begin{equation} \label{eq:logXi1} \log{\left|\Xi(z)\right|} = \log{\left|\xi\left(\frac{1}{2}\right)\right|} + \Re\left\{I_1\right\}, \quad I_1 \de \int_{\gamma_1}^{\infty}\frac{2z^2N(u)}{u\left(z^2-u^2\right)}\dif{u}, \end{equation} see~\cite[p.~249]{TitchConseq} for details. 
Writing \begin{gather*} O_1 \de -\frac{z^2}{\pi}\int_{0}^{\gamma_1}\frac{\log{u}}{z^2-u^2}\dif{u} + \frac{z^2\log{\left(2\pi e\right)}}{\pi}\int_{0}^{\gamma_1}\frac{\dif{u}}{z^2-u^2}, \\ O_2 \de - \frac{7}{4}\log{z} + \frac{7}{4}\int_{\gamma_1}^{\infty}\frac{z^2\dif{u}}{u\left(z^2-u^2\right)}, \end{gather*} by~\eqref{eq:RvM} it follows that \begin{equation} \label{eq:I1} I_1 = \int_{\gamma_1}^{\infty}\frac{2z^2 Q(u)}{u\left(z^2-u^2\right)}\dif{u} + \frac{\ie}{2}z\log{z} - \frac{\pi}{4}z - \frac{\ie \log{\left(2\pi e\right)}}{2}z + \frac{7}{4}\log{z} + O_1 + O_2, \end{equation} see~\cite[p.~249]{TitchConseq} for details. Because \begin{equation} \label{eq:boundsz} t\leq |z|\leq t\left(1+\frac{1}{t_0}\right), \quad \left|z^2-u^2\right|\geq \left|u^2-t^2\right|, \end{equation} we have \[ \left|\Re\left\{O_1\right\}\right| \leq \left|O_1\right| \leq \frac{1}{\pi}\delta\left(t_0\right)\int_{0}^{\gamma_1}\frac{\left|\log{\left(2\pi e/u\right)}\right|}{1-\left(u/t_0\right)^2}\dif{u}. \] Also, \[ \int_{\gamma_1}^{\infty}\frac{z^2\dif{u}}{u\left(z^2-u^2\right)} = \frac{1}{2}\log{\left(1-\frac{z^2}{\gamma_1^2}\right)}, \] which implies \[ \left|\Re\left\{O_2\right\}\right| = \frac{7}{8}\left|\log{\left|\frac{1}{z^2}-\frac{1}{\gamma_1^2}\right|}\right| \leq \frac{7}{8}\left|\log{\left(\frac{1}{\gamma_1^2}-\frac{1}{t_0^2}\right)}\right|. \] On the other hand, by \eqref{eq:Xi} we have \begin{multline} \label{eq:logXi12} \log{\left|\Xi(z)\right|} = \log{\frac{1}{2}} - \frac{\log{\pi}}{4} + \log{\left|\frac{1}{2}+\ie z\right|} + \log{\left|-\frac{1}{2}+\ie z\right|} \\ - \Re\left\{\ie\frac{\log{\pi}}{2}z\right\} + \log{\left|\Gamma\left(\frac{1}{4}+\ie\frac{z}{2}\right)\right|} + \log{\left|\zeta(s)\right|}. \end{multline} We can write \[ \log{\left(\frac{1}{2}+\ie z\right)} = \log{z} + \frac{\pi}{2}\ie + O_3(z), \quad O_3(z)\de \log{\left(1+\frac{1}{2\ie z}\right)}. 
\] Taking $w=1/\left(2\ie z\right)$, we obtain \begin{gather*} \left|O_3(z)\right| \leq \log{\frac{1}{1-|w|}} \leq \log{\frac{2t_0}{2t_0-1}}, \\ \left|zO_3(z)\right|\leq \frac{1}{2|w|}\log{\frac{1}{1-|w|}}\leq t_0\log{\frac{2t_0}{2t_0-1}} \end{gather*} since $|w|\leq 1/\left(2t_0\right)$, and the former bound is also true for $\left|O_3(-z)\right|$. By Stirling's formula for $\log{\Gamma(z)}$ with an explicit error term, see~\cite[p.~294]{Olver}, equality \eqref{eq:logXi12} implies \begin{flalign} \label{eq:logXi2} \log{\left|\Xi(z)\right|} &= \log{\left|\zeta(s)\right|} + \Re\left\{\frac{\ie}{2}z\log{z} - \frac{\pi}{4}z - \frac{\ie \log{\left(2\pi e\right)}}{2}z + \frac{7}{4}\log{z}\right\} - \frac{1}{4}\log{\frac{2e}{\pi}} \nonumber \\ &+ \Re\left\{\frac{3}{4}O_3(z) + O_3(-z) + \frac{\ie}{2}zO_3(z) + O_4\left(\frac{1}{4}+\ie \frac{z}{2}\right)\right\}, \end{flalign} where \[ \left|O_4\left(\frac{1}{4}+\ie \frac{z}{2}\right)\right| \leq \frac{1}{6t_0} + \frac{4}{45t_0^3}. \] Comparing \eqref{eq:logXi1} with \eqref{eq:logXi2} while using \eqref{eq:I1} gives \eqref{eq:logzeta} with \begin{multline*} R_1 = \log{\left|\xi\left(\frac{1}{2}\right)\right|} + \frac{1}{4}\log{\frac{2e}{\pi}} \\ + \Re\left\{O_1 + O_2 - \frac{3}{4}O_3(z) - O_3(-z) - \frac{\ie}{2}zO_3(z) - O_4\left(\frac{1}{4}+\ie \frac{z}{2}\right)\right\}. \end{multline*} After collecting all bounds for the error terms, the final result easily follows. \end{proof} The next step in the proof is to restrict the range of integration in~\eqref{eq:logzeta} to some neighbourhood of $t$. \begin{lemma} \label{lem:logzeta2} Assume the Riemann Hypothesis. Let $s=\sigma+\ie t$ with $1/2<\sigma\leq 3/2$, $t\geq\max\left\{2\exp{\left(e^2\right)},t_0\right\}$, $t_0\geq2\gamma_1$, and $0<\xi\leq t_0/2$. 
Then \begin{equation} \label{eq:logzeta2} \log{\left|\zeta(s)\right|} = \Re\left\{2\left(s-\frac{1}{2}\right)^2\int_{t-\xi}^{t+\xi}\frac{Q(u)}{u\left(u^2+\left(s-\frac{1}{2}\right)^2\right)}\dif{u}\right\} + R_2, \end{equation} where \begin{flalign} \label{eq:R2} \left|R_2\right| &\leq \delta\left(t_0\right)\left(\frac{85}{4}+3\delta\left(t_0\right)\right)\frac{\phi_2(3t/2)\log{(3t/2)}}{\xi\left(\log{\log{(3t/2)}}\right)^2} \nonumber \\ &+ 2\delta\left(t_0\right)\left(4+\left(3+\delta\left(t_0\right)\right)\left(\frac{t_0}{2e^{e^2}}+\frac{3+e}{2e}\right)\right)\frac{\phi_2(t)\log{t}}{\xi\left(\log{\log{t}}\right)^2} \nonumber \\ &+ \frac{\delta\left(t_0\right)}{75}\left(1+\frac{1}{e}+\frac{t_0}{2\gamma_1}\right)\frac{1}{\xi} + \frac{2\left(1+t_0\right)^{2}\left|S_1\left(\gamma_1\right)\right|}{\gamma_1\left(t_0^2-\gamma_1^2\right)} + 0.81\delta\left(t_0\right)\left(3+\delta\left(t_0\right)\right) \nonumber \\ &+ \mathcal{R}_1\left(t_0\right) + \frac{4\delta\left(t_0\right)}{5e}\left(3+\frac{\delta\left(t_0\right)}{4}\right)\frac{\phi_2(3t/2)}{\left(\log{\log{(3t/2)}}\right)^2}, \end{flalign} $\delta\left(t_0\right)$ is defined by~\eqref{eq:delta} and $\mathcal{R}_1\left(t_0\right)$ is from~\eqref{eq:R1}. \end{lemma} \begin{proof} Remembering that $Q(u)=S(u)+R(u)$, we can write \begin{multline} \label{eq:settingLemLogzeta2} \int_{\gamma_1}^{\infty}\frac{Q(u)\dif{u}}{u\left(u^2+\left(s-\frac{1}{2}\right)^2\right)} = \int_{t-\xi}^{t+\xi}\frac{Q(u)\dif{u}}{u\left(u^2+\left(s-\frac{1}{2}\right)^2\right)} \\ + \left(\int_{\gamma_1}^{t-\xi}+\int_{t+\xi}^{\infty}\right)\frac{S(u)+R(u)}{u\left(u^2+\left(s-\frac{1}{2}\right)^2\right)}\dif{u}. \end{multline} The idea is to apply Lemma~\ref{lem:logzeta} on~\eqref{eq:settingLemLogzeta2}. We will estimate the modulus of the last two integrals by separating two cases which correspond to functions $S(u)$ and $R(u)$. Note that $t-\xi\geq t/2$ and $t+\xi\leq 3t/2$. We are also using estimates~\eqref{eq:boundsz}. 
In the case of $R(u)$, we obtain \begin{equation} \label{eq:intR} \left|\left(\int_{\gamma_1}^{t-\xi}+\int_{t+\xi}^{\infty}\right)\frac{R(u)\dif{u}}{u\left(u^2+\left(s-\frac{1}{2}\right)^2\right)}\right| \leq \frac{1}{150}\left(1+\frac{1}{e}+\frac{t_0}{2\gamma_1}\right)\frac{1}{t^2\xi}, \end{equation} because \[ 0 < \int_{\gamma_1}^{t-\xi}\frac{\dif{u}}{u^2\left(1-\left(\frac{u}{t}\right)^2\right)} + \int_{t+\xi}^{\infty}\frac{\dif{u}}{u^2\left(\left(\frac{u}{t}\right)^2-1\right)} \leq \left(1+\frac{1}{e}+\frac{\xi}{\gamma_1}\right)\frac{1}{\xi}. \] The last inequality follows by exact integration, and by the inequalities $\log{x}\leq x/e$ and $\log{(1+x)}\leq x$ which are valid for $x>0$. Such simple global inequalities are good enough for our purpose. In the case of $S(u)$, we will consider each integral in brackets in~\eqref{eq:settingLemLogzeta2} separately. Integration by parts implies \begin{flalign} \label{eq:int1} \int_{\gamma_1}^{t-\xi}\frac{S(u)\dif{u}}{u\left(u^2+\left(s-\frac{1}{2}\right)^2\right)} &= \frac{S_1\left(t-\xi\right)}{\left(t-\xi\right)\left(\left(t-\xi\right)^2+\left(s-\frac{1}{2}\right)^2\right)} - \frac{S_1\left(\gamma_1\right)}{\gamma_1\left(\gamma_1^2+\left(s-\frac{1}{2}\right)^2\right)} \nonumber \\ &+\int_{\gamma_1}^{t-\xi}\frac{3u^2+\left(s-\frac{1}{2}\right)^2}{u^2\left(u^2+\left(s-\frac{1}{2}\right)^2\right)^2}S_1(u)\dif{u}. \end{flalign} Note that for $j\in\{1,2\}$ the function $\phi_{j}(u)\left(\log{\log{u}}\right)^{-j}\log{u}$ is increasing for $u\geq e^{e^2}$.
We are going to split the range of integration in the second integral in~\eqref{eq:int1} into two parts: \begin{multline} \label{eq:int111} \left|\int_{\gamma_1}^{e^{e^2}}\frac{3u^2+\left(s-\frac{1}{2}\right)^2}{u^2\left(u^2+\left(s-\frac{1}{2}\right)^2\right)^2}S_1(u)\dif{u}\right| \leq \frac{1}{t^2}\left(3+\delta\left(t_0\right)\right)\times \\ \times\int_{\gamma_1}^{e^{e^2}}\frac{2.491\log{u}\dif{u}}{u^2\left(1-\left(\frac{u}{2e^{e^2}}\right)^2\right)^2\left(\log{\log{u}}\right)^2} \leq \frac{0.405\left(3+\delta\left(t_0\right)\right)}{t^2} \end{multline} and \begin{multline} \label{eq:int11} \left|\int_{e^{e^2}}^{t-\xi}\frac{3u^2+\left(s-\frac{1}{2}\right)^2}{u^2\left(u^2+\left(s-\frac{1}{2}\right)^2\right)^2}S_1(u)\dif{u}\right| \leq \frac{\left(3+\delta\left(t_0\right)\right)\phi_2(t)\log{t}}{t^2\left(\log{\log{t}}\right)^2}\times \\ \times\int_{e^{e^2}}^{t-\xi}\frac{\dif{u}}{u^2\left(1-\left(\frac{u}{t}\right)^2\right)^2} \leq \left(3+\delta\left(t_0\right)\right)\left(\frac{t_0}{2e^{e^{2}}}+\frac{1}{2}+\frac{3}{2e}\right)\frac{\phi_2(t)\log{t}}{t^2\xi\left(\log{\log{t}}\right)^2}. \end{multline} Observe that we used~\eqref{eq:S1Expl} in estimation~\eqref{eq:int111}. Also, \begin{gather} \frac{\left|S_1\left(t-\xi\right)\right|}{\left(t-\xi\right)\left|\left(t-\xi\right)^2+\left(s-\frac{1}{2}\right)^2\right|} \leq \left(\frac{t}{t-\xi}\right)^{2}\frac{\phi_2(t)\log{t}}{t^{2}\xi\left(\log{\log{t}}\right)^2} \leq \frac{4\phi_2(t)\log{t}}{t^{2}\xi\left(\log{\log{t}}\right)^2}, \label{eq:int12} \\ \frac{\left|S_1\left(\gamma_1\right)\right|}{\gamma_1\left|\gamma_1^2+\left(s-\frac{1}{2}\right)^2\right|} \leq \frac{t_0^2\left|S_1\left(\gamma_1\right)\right|}{t^2\gamma_1\left(t_0^2-\gamma_1^2\right)}. 
\label{eq:int13} \end{gather} Considering the second integral in brackets in~\eqref{eq:settingLemLogzeta2}, integration by parts implies \begin{flalign} \label{eq:int2} \int_{t+\xi}^{\infty}\frac{S(u)\dif{u}}{u\left(u^2+\left(s-\frac{1}{2}\right)^2\right)} &= -\frac{S_1\left(t+\xi\right)}{\left(t+\xi\right)\left(\left(t+\xi\right)^2+\left(s-\frac{1}{2}\right)^{2}\right)} \nonumber \\ &+ \left(\int_{t+\xi}^{3t/2}+\int_{3t/2}^{\infty}\right)\frac{3u^2+\left(s-\frac{1}{2}\right)^2}{u^2\left(u^2+\left(s-\frac{1}{2}\right)^2\right)^2}S_1(u)\dif{u}. \end{flalign} Similarly as before, bounds on the moduli of the last two integrals in~\eqref{eq:int2} are \begin{multline} \label{eq:int21} \left|\int_{t+\xi}^{3t/2}\frac{3u^2+\left(s-\frac{1}{2}\right)^2}{u^2\left(u^2+\left(s-\frac{1}{2}\right)^2\right)^2}S_1(u)\dif{u}\right| \leq \frac{\left(\frac{27}{4}+\delta\left(t_0\right)\right)\phi_2(3t/2)\log{(3t/2)}}{t^2\left(\log{\log{(3t/2)}}\right)^2}\times \\ \times\int_{t+\xi}^{3t/2}\frac{\dif{u}}{u^2\left(\left(\frac{u}{t}\right)^2-1\right)^2} \leq \frac{3}{2}\left(\frac{27}{4}+\delta\left(t_0\right)\right)\frac{\phi_2(3t/2)\log{(3t/2)}}{t^2\xi\left(\log{\log{(3t/2)}}\right)^2} \end{multline} and, because $\phi_2(u)\left(\log{\log{u}}\right)^{-2}$ is for $u\geq2\pi$ a decreasing function, \begin{multline} \label{eq:int22} \left|\int_{3t/2}^{\infty}\frac{3u^2+\left(s-\frac{1}{2}\right)^2}{u^2\left(u^2+\left(s-\frac{1}{2}\right)^2\right)^2}S_1(u)\dif{u}\right| \leq \frac{1}{e}\left(3+\frac{\delta\left(t_0\right)}{4}\right)\frac{\phi_2(3t/2)}{\left(\log{\log{(3t/2)}}\right)^2}\times \\ \times\int_{3t/2}^{\infty}\frac{u\dif{u}}{\left(u^2-t^2\right)^2} \leq \frac{2}{5e}\left(3+\frac{\delta\left(t_0\right)}{4}\right)\frac{\phi_2(3t/2)}{t^2\left(\log{\log{(3t/2)}}\right)^2}. \end{multline} In~\eqref{eq:int22} we used $\log{u}\leq u/e$. 
Also, \begin{equation} \label{eq:int23} \frac{\left|S_1\left(t+\xi\right)\right|}{\left(t+\xi\right)\left|\left(t+\xi\right)^2+\left(s-\frac{1}{2}\right)^{2}\right|} \leq \frac{\phi_2(3t/2)\log{(3t/2)}}{2t^2\xi\left(\log{\log{(3t/2)}}\right)^2}. \end{equation} Multiplying~\eqref{eq:settingLemLogzeta2} by $2\left(s-1/2\right)^2$, applying Lemma~\ref{lem:logzeta}, and using bounds~\eqref{eq:int111}, \eqref{eq:int11}, \eqref{eq:int12}, \eqref{eq:int13} in~\eqref{eq:int1}, and~\eqref{eq:int21}, \eqref{eq:int22}, \eqref{eq:int23} in~\eqref{eq:int2}, and~\eqref{eq:intR}, finally gives~\eqref{eq:logzeta2} with~\eqref{eq:R2}. The proof of Lemma~\ref{lem:logzeta2} is thus complete. \end{proof} \begin{theorem} \label{thm:zetaMain} Assume the Riemann Hypothesis. Let $s=\sigma+\ie t$ with $1/2<\sigma\leq 3/2$, $t\geq\max\left\{2\exp{\left(e^2\right)},t_0\right\}$, $t_0\geq2\gamma_1$, and $0<\lambda\leq \left(t_0/2\right)\log{\log{\left(3t_0/2\right)}}$. Then \begin{flalign} \label{eq:abslogzeta} \left|\log{\left|\zeta(s)\right|}\right| &\leq 2\left(\frac{\phi_1(3t/2)\log{(3t/2)}}{\log{\log{(3t/2)}}}+\frac{1}{75t}\right)\log{\left(1+\frac{2\lambda}{\left(\sigma-\frac{1}{2}\right)\log{\log{(3t/2)}}}\right)} \nonumber \\ &+ \Omega\left(t_0,\lambda;t\right) \end{flalign} and \begin{equation} \label{eq:abszeta} \log{\left|\zeta(s)\right|} \leq \frac{\lambda\log{(3t/2)}}{\pi\log{\log{(3t/2)}}} + \Omega\left(t_0,\lambda;t\right), \end{equation} where \begin{flalign*} \Omega\left(t_0,\lambda;t\right) &\de \frac{a_1\left(t_0\right)\phi_2(3t/2)\log{(3t/2)}}{\lambda\log{\log{(3t/2)}}} + \frac{a_2\left(t_0\right)\log{\log{(3t/2)}}}{\lambda} + a_3\left(t_0\right)\\ &+ 12\left(\frac{\phi_1\left(t/2\right)\log{\left(t/2\right)}}{t\log{\log{\left(t/2\right)}}}+\frac{1}{75t^2}\right) \frac{\lambda}{\log{\log{(3t/2)}}} + \frac{a_4\left(t_0\right)\phi_2(3t/2)}{\left(\log{\log{(3t/2)}}\right)^2}, \end{flalign*} and $\phi_1$ and $\phi_2$ are defined by~\eqref{eq:SExpl} and~\eqref{eq:S1Expl},
respectively, \begin{equation} \label{eq:a1} a_1\left(t_0\right)\de \delta\left(t_0\right)\left(\frac{117}{4}+3\delta\left(t_0\right) +\left(3+\delta\left(t_0\right)\right)\left(\frac{t_0}{e^{e^2}}+\frac{3+e}{e}\right)\right), \end{equation} \begin{equation} \label{eq:a2} a_2\left(t_0\right)\de \frac{\delta\left(t_0\right)}{75}\left(\frac{t_0}{2\gamma_1}+1+\frac{1}{e}\right), \end{equation} \begin{equation} \label{eq:a3} a_3\left(t_0\right)\de \frac{3\left(1+t_0\right)^{2}}{\gamma_1\left(t_0^2-\gamma_1^2\right)} + 0.81\delta\left(t_0\right)\left(3+\delta\left(t_0\right)\right) + \mathcal{R}_1\left(t_0\right) , \end{equation} \begin{equation} \label{eq:a4} a_4\left(t_0\right)\de \frac{4\delta\left(t_0\right)}{5e}\left(3+\frac{\delta\left(t_0\right)}{4}\right), \end{equation} $\delta\left(t_0\right)$ is defined by~\eqref{eq:delta} and $\mathcal{R}_1\left(t_0\right)$ is from~\eqref{eq:R1}. \end{theorem} \begin{proof} Take $\xi\de\lambda/\log{\log{(3t/2)}}$. Then $0<\xi\leq t_0/2$, and also $t-\xi\geq t/2$ and $t+\xi\leq 3t/2$. Firstly we are going to prove~\eqref{eq:abslogzeta}. By Lemma~\ref{lem:logzeta2} we thus have \begin{equation} \label{eq:R331} \log{\left|\zeta(s)\right|} = -\Re\left\{\int_{t-\xi}^{t+\xi}\frac{Q(u)\dif{u}}{u+\ie\left(s-\frac{1}{2}\right)}\right\} + R_3, \end{equation} where \[ R_3 \de R_2 - \Re\left\{\int_{t-\xi}^{t+\xi}\frac{Q(u)\dif{u}}{u-\ie\left(s-\frac{1}{2}\right)}\right\} + 2\int_{t-\xi}^{t+\xi}\frac{Q(u)}{u}\dif{u} \] and the real term $R_2$ is from~\eqref{eq:logzeta2}. 
Because \[ \left|\Re\left\{\frac{Q(u)}{u-\ie\left(s-\frac{1}{2}\right)}\right\}\right| = \frac{(u+t)\left|Q(u)\right|}{(u+t)^2+\left(\sigma-\frac{1}{2}\right)^2} \leq \frac{\left|Q(u)\right|}{u+t} \leq \frac{\left|Q(u)\right|}{u} \] for $u\in\left[t-\xi,t+\xi\right]$, and $\left|Q(u)\right|\leq \left|S(u)\right| + \left|R(u)\right|$, it follows that \begin{equation} \label{eq:R3} \left|R_3\right| \leq \left|R_2\right| + 3\int_{t-\xi}^{t+\xi}\frac{\left|Q(u)\right|}{u}\dif{u} \leq \left|R_2\right| + 12\left(\frac{\phi_1\left(t/2\right)\log{\left(t/2\right)}}{t\log{\log{\left(t/2\right)}}}+\frac{1}{75t^2}\right)\xi \end{equation} since $\phi_1(u)\left(u\log{\log{u}}\right)^{-1}\log{u}$ is a decreasing function for $u\geq 2\pi$. Therefore, inequality~\eqref{eq:R3} asserts that $\left|R_3\right| \leq \Omega\left(t_0,\lambda;t\right)$. Also, \begin{flalign} \label{eq:R31} \left|\Re\left\{\int_{t-\xi}^{t+\xi}\frac{Q(u)\dif{u}}{u+\ie\left(s-\frac{1}{2}\right)}\right\}\right| &\leq \int_{t-\xi}^{t+\xi}\frac{\left|Q(u)\right|\dif{u}}{\sqrt{\left(\sigma-\frac{1}{2}\right)^2+(u-t)^2}} \nonumber \\ &\leq 2\left(\frac{\phi_1(3t/2)\log{(3t/2)}}{\log{\log{(3t/2)}}}+\frac{1}{75t}\right)\int_{0}^{\xi}\frac{\dif{u}}{\sqrt{\left(\sigma-\frac{1}{2}\right)^2+u^2}} \nonumber \\ &\leq 2\left(\frac{\phi_1(3t/2)\log{(3t/2)}}{\log{\log{(3t/2)}}}+\frac{1}{75t}\right)\log\left(1+\frac{2\xi}{\sigma-\frac{1}{2}}\right). \end{flalign} Taking~\eqref{eq:R3}, together with~\eqref{eq:R2} and $\left|S_1\left(\gamma_1\right)\right|\leq 1.5$, and~\eqref{eq:R31} as an upper bound for the integral in~\eqref{eq:R331}, we obtain~\eqref{eq:abslogzeta}.
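For completeness, the last inequality in~\eqref{eq:R31} rests on the elementary evaluation (writing $a\de\sigma-\frac{1}{2}$):

```latex
\[
\int_{0}^{\xi}\frac{\dif{u}}{\sqrt{a^2+u^2}}
= \log{\frac{\xi+\sqrt{a^2+\xi^2}}{a}}
\leq \log{\frac{a+2\xi}{a}}
= \log{\left(1+\frac{2\xi}{a}\right)},
\]
since $\sqrt{a^2+\xi^2}\leq a+\xi$.
```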
In order to prove~\eqref{eq:abszeta}, observe that \begin{flalign*} -\Re\left\{\int_{t-\xi}^{t+\xi}\frac{Q(u)\dif{u}}{u+\ie\left(s-\frac{1}{2}\right)}\right\} &= \int_{0}^{\xi}\frac{u\left(Q(t-u)-Q(t+u)\right)}{\left(\sigma-\frac{1}{2}\right)^2+u^2}\dif{u} \\ &\leq \frac{1}{\pi}\log{\frac{3t}{2}} \int_{0}^{\xi}\frac{u^2 \dif{u}}{\left(\sigma-\frac{1}{2}\right)^2+u^2} \leq \frac{\xi}{\pi}\log{\frac{3t}{2}} \end{flalign*} since \begin{flalign*} Q(t-u)-Q(t+u) &= N(t-u) - N(t+u) + \frac{t+u}{2\pi}\log{\frac{t+u}{2\pi e}} - \frac{t-u}{2\pi}\log{\frac{t-u}{2\pi e}} \\ &\leq 0 + \frac{u}{\pi}\log{\frac{t+u}{2\pi}} \leq \frac{u}{\pi}\log{\frac{3t}{2}} \end{flalign*} by~\eqref{eq:RvM} and the mean-value theorem. Inequality~\eqref{eq:abszeta} now follows from~\eqref{eq:R331}. The proof of Theorem~\ref{thm:zetaMain} is thus complete. \end{proof} \begin{corollary} \label{cor:main} Assume the Riemann Hypothesis. Let $s=\sigma+\ie t$ with $1/2<\sigma\leq 3/2$, and $2\gamma_1\leq t_0\leq 50$, $\lambda\left(t_0\right)\de \left(t_0/2\right)\log{\log{\left(3t_0/2\right)}}$ and \begin{multline*} \omega\left(t_0;u\right) \de \frac{a_1\left(t_0\right)\widehat{\phi}_2(u)}{\lambda\left(t_0\right)u} + \frac{a_2\left(t_0\right)u}{\lambda\left(t_0\right)e^{u}} + \frac{15.83}{e^{u}} \\ + \frac{12\lambda\left(t_0\right)}{ue^{e^{u}}}\left(\frac{3\widehat{\phi}_1(u)}{2u}+\frac{3}{100e^{u+e^u}}\right) + \frac{a_4\left(t_0\right)\widehat{\phi}_2(u)}{u^2e^{u}}, \end{multline*} \begin{gather} \omega_1\left(\sigma,t_0;u\right) \de 2\left(\frac{\widehat{\phi}_1(u)}{u}+\frac{1}{50e^{u+e^{u}}}\right)\log{\left(1+\frac{2\lambda\left(t_0\right)}{\left(\sigma-\frac{1}{2}\right)u}\right)} + \omega\left(t_0;u\right), \label{eq:omega1} \\ \omega_2\left(t_0;u\right) \de \frac{\lambda\left(t_0\right)}{\pi u} + \omega\left(t_0;u\right), \label{eq:omega2} \end{gather} where $a_1\left(t_0\right)$, $a_2\left(t_0\right)$ and $a_4\left(t_0\right)$ are defined by~\eqref{eq:a1}, \eqref{eq:a2} and~\eqref{eq:a4}, 
respectively, \[ \widehat{\phi}_1(u) \de \left\{ \begin{array}{ll} 0.96, & \log\log{2\pi}\leq u < \log\log{10^{2465}}, \\ 1.719282-\mathcal{M}_1\left(10^{2465}\right)+\frac{20.1911}{ue^{0.285u}}, & u\geq \log\log{10^{2465}}, \end{array} \right. \] and \[ \widehat{\phi}_2(u) \de \left\{ \begin{array}{ll} 2.491, & \log\log{2\pi}\leq u < \log\log{10^{208}}, \\ 3.144-\mathcal{M}_2\left(10^{208}\right)+\frac{60.12}{ue^{0.2705u}}, & u\geq \log\log{10^{208}}, \end{array} \right. \] with $\mathcal{M}_1$ and $\mathcal{M}_2$ defined by~\eqref{eq:M1} and~\eqref{eq:M2}, respectively. Then $\omega_1\left(\sigma,t_0;u\right)$ and $\omega_2\left(t_0;u\right)$ are decreasing positive continuous functions for $u\geq1$ with \begin{gather} \lim_{u\to\infty}u\cdot\omega_1\left(\sigma,t_0;u\right) = \left(3.144-\mathcal{M}_2\left(10^{208}\right)\right)\frac{a_1\left(t_0\right)}{\lambda\left(t_0\right)}, \label{eq:limits} \\ \lim_{u\to\infty}u\cdot \omega_2\left(t_0;u\right) = \frac{1}{\pi}\lambda\left(t_0\right) + \left(3.144-\mathcal{M}_2\left(10^{208}\right)\right)\frac{a_1\left(t_0\right)}{\lambda\left(t_0\right)}, \label{eq:limits2} \end{gather} and $\omega_1\left(\sigma,t_0;u\right)$ is a decreasing function for $\sigma>1/2$. Moreover, for $2\exp{\left(e^2\right)}\leq t\leq T$ the inequalities~\eqref{eq:corMain} and~\eqref{eq:corMain2} are true. \end{corollary} \begin{proof} It is clear that $\omega_1\left(\sigma,t_0;u\right)$ and $\omega_2\left(t_0;u\right)$ are decreasing positive continuous functions in the variable $u\geq 1$ since $\widehat{\phi}_1(u)$, $\widehat{\phi}_2(u)$, and $ue^{-u}$ are decreasing positive functions for $u\geq 1$, and that $\omega_1\left(\sigma,t_0;u\right)$ is a decreasing function in $\sigma>1/2$. It is also clear that~\eqref{eq:limits} and~\eqref{eq:limits2} hold. Take $u=\log{\log{\left(3t/2\right)}}$. 
Then $\widehat{\phi}_1(u)=\phi_1(3t/2)$ and $\widehat{\phi}_2(u)=\phi_2(3t/2)$, where $\phi_1$ and $\phi_2$ are defined by~\eqref{eq:SExpl} and~\eqref{eq:S1Expl}, respectively. Because the function $a_3\left(t_0\right)$, given by~\eqref{eq:a3}, is decreasing, it follows that $a_3\left(t_0\right)\leq 15.83$. Inequalities~\eqref{eq:corMain} and~\eqref{eq:corMain2} now easily follow from Theorem~\ref{thm:zetaMain}, if we are able to show that \[ \widehat{\omega}_1(u) \de e^{u}\omega_1\left(\sigma,t_0;u\right), \quad \widehat{\omega}_2(u) \de e^{u}\omega_2\left(t_0;u\right) \] are increasing functions for $u\geq \log{\log{\left(3\exp{\left(e^2\right)}\right)}}$. Straightforward calculation confirms that \begin{flalign*} \widehat{\omega}_1'(u) &= \frac{2e^{u}\widehat{\phi}_1}{u}\left(1-\frac{1}{u}+\frac{\widehat{\phi}_1'}{\widehat{\phi}_1}-\frac{u}{50\widehat{\phi}_1e^{e^{u}}}\right) \log{\left(1+\frac{2\lambda}{\left(\sigma-\frac{1}{2}\right)u}\right)} \\ &- \frac{e^{u}\widehat{\phi}_1}{u} \left(1+\frac{u}{50\widehat{\phi}_1e^{u+e^{u}}}\right)\frac{4\lambda}{u\left(\left(\sigma-\frac{1}{2}\right)u+2\lambda\right)} + \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right), \\ \widehat{\omega}_2'(u) &= \frac{\lambda e^{u}}{\pi u}\left(1-\frac{1}{u}\right) + \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right) \end{flalign*} with \begin{multline*} \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right) = \frac{a_2}{\lambda} + \frac{a_1e^{u}\widehat{\phi}_2}{u\lambda}\left(1-\frac{1}{u}+\frac{\widehat{\phi}_2'}{\widehat{\phi}_2}\right) - \frac{2a_4\widehat{\phi}_2}{u^2}\left(\frac{1}{u}-\frac{1}{2}\frac{\widehat{\phi}_2'}{\widehat{\phi}_2}\right) \\ -12\lambda e^{2u-e^{u}}\left(\frac{3}{100u^2 e^{2u+e^{u}}}+\frac{3\widehat{\phi}_1}{u^3e^{u}}+\frac{3}{50ue^{u+e^{u}}}+\frac{3\left(1-e^{-u}\right)\widehat{\phi}_1}{2u^2}-\frac{3\widehat{\phi}_1'}{2u^2e^{u}}\right) \end{multline*} for $u\in\mathcal{D}$, where \[ \mathcal{D}\de 
\left(1,\infty\right)\setminus\left\{\log\log{10^{208}},\log\log{10^{2465}}\right\}. \] We are going to demonstrate that $\widehat{\omega}_1'(u)>0$ and $\widehat{\omega}_2'(u)>0$ for \[ u\in\mathcal{D}_0\de\mathcal{D}\cap\left[\log{\log{\left(3\exp{\left(e^2\right)}\right)}},\infty\right). \] Then it will follow that $\widehat{\omega}_1(u)$ and $\widehat{\omega}_2(u)$ increase for all $u\geq \log{\log{\left(3\exp{\left(e^2\right)}\right)}}$ since $\widehat{\omega}_1$ and $\widehat{\omega}_2$ are continuous functions and $\left[\log{\log{\left(3\exp{\left(e^2\right)}\right)}},\infty\right)\setminus\mathcal{D}_0$ is finite. Because $2\gamma_1\leq t_0\leq 50$, we obtain $18.67\leq\lambda\left(t_0\right)\leq 36.6$, $42.65\leq a_1\left(t_0\right)\leq 44.1$, $0.033\leq a_2\left(t_0\right)\leq 0.044$ and $0.998\leq a_4\left(t_0\right)\leq 1.032$. In order to derive bounds for $\widehat{\phi}_1(u)$, $\widehat{\phi}_2(u)$ and the corresponding derivatives, we divide the set $\mathcal{D}_0$ into three parts, namely: \begin{enumerate} \item The case of $\log{\log{\left(3\exp{\left(e^2\right)}\right)}}\leq u < \log{\log{10^{208}}}$. Here we have \[ \widehat{\phi}_1=0.96, \quad \widehat{\phi}_2=2.491, \quad \widehat{\phi}_1'=\widehat{\phi}_2'=0. \] \item The case of $\log{\log{10^{208}}}< u < \log{\log{10^{2465}}}$. Here we have \[ \widehat{\phi}_1=0.96, \quad \widehat{\phi}_2 \geq 1.3273, \quad \widehat{\phi}_1'=0, \quad \frac{|\widehat{\phi}_2'|}{\widehat{\phi}_2}\leq 0.32. \] \item The case of $u>\log{\log{10^{2465}}}$. Here we have \[ 0.761\leq \widehat{\phi}_1 \leq 0.96, \quad \widehat{\phi}_2 \geq 0.656, \quad \frac{|\widehat{\phi}_1'|}{\widehat{\phi}_1}\leq 0.1, \quad \frac{|\widehat{\phi}_2'|}{\widehat{\phi}_2}\leq 0.2, \quad |\widehat{\phi}_1'|\leq 0.08. \] \end{enumerate} All these estimates follow directly from the definitions of $\widehat{\phi}_1$ and $\widehat{\phi}_2$.
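The positivity claims in the three cases below all reduce to showing that an expression of the form $c_1e^{u}/u-c_2e^{2u-e^{u}}$ is positive on a given interval. Such claims are easily double-checked numerically; the following standard-library Python sketch (the function names and grid resolution are ours and purely illustrative, not part of the proof) samples the expression on a uniform grid:

```python
import math

def lower_bound(u, c1, c2):
    # Evaluate c1*e^u/u - c2*e^(2u - e^u); the first term dominates
    # once e^u is much larger than 2u, so positivity is expected.
    return c1 * math.exp(u) / u - c2 * math.exp(2 * u - math.exp(u))

def positive_on_grid(c1, c2, u_lo, u_hi, steps=10000):
    # Sample the expression at steps+1 equally spaced points of [u_lo, u_hi].
    return all(lower_bound(u_lo + k * (u_hi - u_lo) / steps, c1, c2) > 0
               for k in range(steps + 1))
```

For instance, the first case corresponds to `positive_on_grid(1.413, 153.52, math.log(math.log(3) + math.e**2), math.log(208 * math.log(10)))`. A grid check is of course only a sanity test, not a proof.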
It is not hard to see that we can now write \begin{gather} \widehat{\omega}_1'(u) \geq \frac{74.68e^{u}\widehat{\phi}_1}{u\left(u+73.2\right)}\left(1-\frac{2}{u}-\frac{|\widehat{\phi}_1'|}{\widehat{\phi}_1}-\frac{u}{25\widehat{\phi}_1 e^{e^{u}}}\right) + \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right), \label{eq:boundderiv} \\ \widehat{\omega}_2'(u) \geq \frac{18.67 e^{u}}{\pi u}\left(1-\frac{1}{u}\right) + \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right) \label{eq:boundderiv3} \end{gather} with \begin{multline} \label{eq:boundderiv2} \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right) \geq \frac{e^{u}\widehat{\phi}_2}{u}\left(\frac{42.65}{36.6}\left(1-\frac{1}{u}\right)-\frac{2.064}{u^2e^{u}}-\left(\frac{44.1}{18.67}+\frac{1.032}{ue^{u}}\right) \frac{|\widehat{\phi}_2'|}{\widehat{\phi}_2}\right) \\ - 439.2e^{2u-e^u}\left(\frac{3}{50ue^{u+e^{u}}}\left(1+\frac{1}{2ue^u}\right)+\frac{3\widehat{\phi}_1}{2u^2}\left(1+\frac{2}{ue^u}\right)+ \frac{3|\widehat{\phi}_1'|}{2u^2e^u}\right), \end{multline} where we also used $\log{\left(1+u\right)}\geq u/(1+u)$ in the derivation of~\eqref{eq:boundderiv}. By using bounds from each of the above cases (1)--(3) in the inequality~\eqref{eq:boundderiv2}, we obtain the following: \begin{enumerate} \item The case of $\log{\log{\left(3\exp{\left(e^2\right)}\right)}}\leq u < \log{\log{10^{208}}}$. Here we have \[ \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right) \geq 1.413\frac{e^u}{u} - 153.52e^{2u-e^u} > 0. \] \item The case of $\log{\log{10^{208}}}< u < \log{\log{10^{2465}}}$. Here we have \[ \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right) \geq 0.292\frac{e^u}{u} - 16.62e^{2u-e^u} > 0. \] \item The case of $u>\log{\log{10^{2465}}}$. Here we have \[ \frac{\dif{}}{\dif{u}}e^{u}\omega\left(t_0;u\right) \geq 0.366\frac{e^u}{u} - 8.47e^{2u-e^u} > 0.
\] \end{enumerate} Because the expressions in the brackets in~\eqref{eq:boundderiv} and~\eqref{eq:boundderiv3} are positive in all of the above cases, it follows that $\widehat{\omega}_1'(u)>0$ and $\widehat{\omega}_2'(u)>0$ for all $u\in\mathcal{D}_0$. The proof is thus complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main}.] The first part of Theorem~\ref{thm:main} follows immediately from Corollary~\ref{cor:main}. Estimates~\eqref{eq:corMainConrete} and~\eqref{eq:corMain2Conrete} follow by taking $t_0=2\gamma_1$ and $u\geq 10$ in Corollary~\ref{cor:main} while observing that $u\cdot\omega\left(2\gamma_1;u\right)$ is then a decreasing function. \end{proof} In addition to Corollary~\ref{cor:main}, for our applications it is crucial to estimate $\left|1/\zeta(s)\right|$ and $\left|\zeta(s)\right|$ also for $0\leq t\leq 2\exp{\left(e^2\right)}$. We do this by using numerical methods. \begin{lemma} \label{lem:zetabound} Let $s=\sigma+\ie t$ with $1/2<\sigma\leq 3/2$ and $0\leq t\leq 2\exp{\left(e^2\right)}$. Then \[ \left|\frac{1}{\zeta(s)}\right| \leq \frac{4}{\sigma-\frac{1}{2}}. \] \end{lemma} \begin{proof} Let \[ F_n(s) \de \frac{s-\frac{1}{2}-\ie\gamma_n}{\zeta(s)} \] for $n\in\N$, and define sets \begin{gather*} \mathcal{S}_n \de \left\{z\in\C\colon \frac{1}{2}\leq \Re\{z\}\leq \frac{3}{2}, \frac{\gamma_{n-1}+\gamma_{n}}{2}\leq \Im\{z\}\leq \frac{\gamma_{n}+\gamma_{n+1}}{2}\right\}, \\ \mathcal{S}_{0} \de \left\{z\in\C\colon \frac{1}{2}\leq \Re\{z\}\leq \frac{3}{2}, 0\leq \Im\{z\}\leq 11\right\}, \end{gather*} where $\gamma_0\de 22-\gamma_1$. Observe that \[ \mathcal{S}\de \left\{z\in\C\colon \frac{1}{2}\leq \Re\{z\}\leq \frac{3}{2}, 0\leq \Im\{z\}\leq 2e^{e^{2}}\right\} \subset \mathcal{S}_{0}\cup \bigcup_{n=1}^{2703}\mathcal{S}_n.
\] Because $F_n(s)$ is a holomorphic function in a neighborhood of $\mathcal{S}_n$ for each $n\in\N$, and $1/\zeta(s)$ is a holomorphic function in a neighborhood of $\mathcal{S}_{0}$, the maximum modulus principle asserts that \begin{equation} \label{eq:maxprinc} \left|\frac{1}{\zeta(s)}\right| \leq \frac{1}{\sigma-\frac{1}{2}} \cdot \max\left\{\max_{1\leq n\leq 2703}\left\{\max_{z\in\partial{\mathcal{S}_n}}\left\{\left|F_n(z)\right|\right\}\right\}, \max_{z\in\partial{\mathcal{S}_0}}\left\{\left|\frac{1}{\zeta(z)}\right|\right\}\right\} \end{equation} for $s\in\mathcal{S}$. For each $n\in\{0,1,\ldots,2703\}$ we calculated (using \emph{Mathematica}) the values of $\left|F_n(z)\right|$ for $n\neq 0$, and of $\left|1/\zeta(z)\right|$ for $n=0$, at $101$ equidistantly spaced points on each edge of $\partial{\mathcal{S}_n}$. After further inspection, performing calculations at more points for those $n$ for which this procedure returned values greater than $2$, we obtain \[ \max_{z\in\partial{\mathcal{S}_0}}\left\{\left|\frac{1}{\zeta(z)}\right|\right\} \leq 2, \quad \max_{1\leq n\leq 2703}\left\{\max_{z\in\partial{\mathcal{S}_n}}\left\{\left|F_n(z)\right|\right\}\right\} \leq 3.3, \] and the latter maximum is attained for $n=922$ and $n=923$. The lemma now follows by~\eqref{eq:maxprinc} after rounding the constants up to integers. \end{proof} \begin{lemma} \label{lem:zetabound2} Let $s=\sigma+\ie t$ with $1/2\leq\sigma\leq 3/4$ and $0\leq t\leq 2\exp{\left(e^2\right)}$. Then $\left|\zeta(s)\right|\leq 14$. \end{lemma} \begin{proof} Let \[ \mathcal{T} \de \left\{z\in\C\colon \frac{1}{2}\leq \Re\{z\}\leq \frac{3}{4}, 0\leq \Im\{z\}\leq \gamma_{2703}\right\}. \] The maximum modulus principle asserts that $\left|\zeta(s)\right|\leq \max_{z\in\partial{\mathcal{T}}}\left|\zeta(z)\right|$ for $s\in\mathcal{T}$.
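The edge-sampling computation in these two lemmas can be imitated in a few lines of standard-library Python. The series approximation of $\zeta$ below (via the alternating Dirichlet $\eta$ series, valid for $\Re\{s\}>0$) and the grid sizes are illustrative stand-ins for the \emph{Mathematica} computation, not a certified bound:

```python
import cmath
import math

def zeta(s, terms=4000):
    # Dirichlet eta series: zeta(s) = (1 - 2^(1-s))^(-1) * sum (-1)^(n-1) n^(-s),
    # convergent for Re(s) > 0; averaging two consecutive partial sums
    # damps the alternating error.
    partial = 0j
    previous = 0j
    for n in range(1, terms + 1):
        previous = partial
        partial += (-1) ** (n - 1) * cmath.exp(-s * math.log(n))
    eta = (partial + previous) / 2
    return eta / (1 - cmath.exp((1 - s) * math.log(2)))

def max_abs_on_edge(f, z0, z1, samples=101):
    # Sample |f| at `samples` equidistant points on the segment [z0, z1],
    # mimicking the 101-point grids used above.
    return max(abs(f(z0 + k * (z1 - z0) / (samples - 1))) for k in range(samples))
```

For example, `max_abs_on_edge(zeta, 0.75 + 1j, 0.75 + 15j)` samples $\left|\zeta\right|$ on a piece of the line $\sigma=3/4$; certifying constants such as $3.3$ or $13.5$ would additionally require an error bound between grid points.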
Using \emph{Mathematica} we obtained \[ \max_{\gamma_1\leq t\leq \gamma_{2703}}\left|\zeta\left(\frac{1}{2}+\ie t\right)\right| \leq 13.5, \quad \max_{\gamma_1\leq t\leq \gamma_{2703}}\left|\zeta\left(\frac{3}{4}+\ie t\right)\right| \leq 6.91 \] by calculating values of $\left|\zeta(s)\right|$ at $101$ equidistantly distributed points for each gap between consecutive zeros. Similarly we can verify that $\left|\zeta(s)\right|$ is less than $3.5$ on the rest of $\partial{\mathcal{T}}$. The lemma now follows after rounding the constants up to integers. \end{proof} \section{Explicit truncated Perron's formula with applications} \label{sec:Perron} There exist several variants of the truncated Perron formula, e.g., the classical version~\cite[Lemma 3.12]{Titchmarsh}, which implies~\eqref{eq:PerronNonexplicit} and~\eqref{eq:PerronNonexplicit2}, and the version with a smooth truncation~\cite{RamanaRamare}, which produces an error term without the log-factor for suitably chosen test functions. The following theorem is an explicit version of the classical variant from~\cite{RamareEigen} with an error term that is good enough for our purposes. In Section~\ref{sec:general} we derive general estimates for the Mertens function $M(x)$ and the number of $k$-free numbers $Q_{k}(x)$. \begin{theorem} \label{thm:truncatedPerron} Let $f(s)=\sum_{n=1}^{\infty}a_n n^{-s}$ be a Dirichlet series with abscissa of absolute convergence $\bar{\sigma}$, and let $g(\sigma)=\sum_{n=1}^{\infty}\left|a_n\right|n^{-\sigma}$ for $\sigma>\bar{\sigma}$. If $x\geq 1$, $T\geq 1$, $\sigma>\max\{0,\bar{\sigma}\}$ and $\left|a_n\right|\leq \psi(n)$ for an increasing positive function $\psi$, then \[ \left|\sum_{n\leq x}a_n - \frac{1}{2\pi\ie}\int_{\sigma-\ie T}^{\sigma+\ie T}f(z)\frac{x^z}{z}\dif{z}\right| \leq 2g(\sigma)\frac{x^{\sigma}}{T} + 4e^{\sigma}\left(\frac{e\psi(ex)x\log{T}}{T} + \psi(ex)\right).
\] \end{theorem} \begin{proof} By~\cite[Theorem 7.1]{RamareEigen} we know that the left-hand side of the latter inequality is not greater than \[ \frac{2x^{\sigma}}{T}\int_{1/T}^{\infty}\frac{1}{u^2}\sum_{\left|\log{\frac{x}{n}}\right|\leq u}\frac{\left|a_n\right|}{n^{\sigma}}\dif{u} = \frac{2x^{\sigma}}{T}\left(\int_{1/T}^{1}+\int_{1}^{\infty}\right)\frac{1}{u^2}\sum_{x/e^{u}\leq n\leq xe^{u}}\frac{\left|a_n\right|}{n^{\sigma}}\dif{u}. \] We need to bound the last two integrals. Trivially, \[ \int_{1}^{\infty}\frac{1}{u^2}\sum_{x/e^{u}\leq n\leq xe^{u}}\frac{\left|a_n\right|}{n^{\sigma}}\dif{u} \leq g(\sigma). \] Let $\sigma\neq 1$. By partial summation we have \[ \sum_{a\leq n\leq b}\frac{1}{n^{\sigma}} \leq a^{-\sigma} + \lfloor{b}\rfloor b^{-\sigma} - \lfloor{a}\rfloor a^{-\sigma} + \sigma\int_{a}^{b}\frac{\lfloor{y}\rfloor}{y^{1+\sigma}}\dif{y} \leq 2a^{-\sigma} + (b-a)a^{-\sigma} \] for $0<a\leq b$ since $\left|a^{1-\sigma}-b^{1-\sigma}\right|\leq (b-a)\left|\sigma-1\right|a^{-\sigma}$. It is not hard to see that the same inequality holds also for $\sigma=1$ since $\log{(b/a)}\leq b/a-1$. Therefore, \[ \int_{1/T}^{1}\frac{1}{u^2}\sum_{x/e^{u}\leq n\leq xe^{u}}\frac{\left|a_n\right|}{n^{\sigma}}\dif{u} \leq 2e^{\sigma}\psi(ex)\frac{T}{x^{\sigma}}\left(1+\frac{ex\log{T}}{T}\right), \] where we used $e^{u}-e^{-u}\leq 2eu$, which is valid for $u\in[0,1]$. The final estimate from Theorem~\ref{thm:truncatedPerron} now easily follows. \end{proof} Let $\sigma>1$. By partial summation one can easily prove that \begin{equation} \label{eq:simpleboundzeta} \zeta(\sigma) \leq \frac{\sigma}{\sigma-1}. \end{equation} Although not needed here, note that a better estimate exists, see~\cite[Lemma 5.4]{Ramare}. We are going to use~\eqref{eq:simpleboundzeta} in the following corollaries to Theorem~\ref{thm:truncatedPerron}. \begin{corollary} \label{cor:Perron} Let $x\geq e^{2}$.
Then \[ M(x) = \frac{1}{2\pi\ie}\int_{1+\frac{1}{\log{x}}-\ie e\sqrt{x}}^{1+\frac{1}{\log{x}}+\ie e\sqrt{x}}\frac{x^{z}}{z\zeta(z)}\dif{z} + P_1, \] where $\left|P_1\right| \leq 22.2\sqrt{x}\log{(ex)}$. \end{corollary} \begin{proof} Take $a_n=\mu(n)$, $\psi\equiv1$, $\sigma=1+1/\log{x}$ and $T=e\sqrt{x}$ in Theorem~\ref{thm:truncatedPerron}; the conditions of the theorem are clearly satisfied by this choice of parameters. Using~\eqref{eq:simpleboundzeta} together with $g(\sigma)\leq \zeta(\sigma)$ then gives the stated estimate. \end{proof} \begin{corollary} \label{cor:Perron2} Let $x\geq e^{2}$, $y\geq 1$ and $k\geq 2$ be an integer. Then \[ Q_{k,2}(x) = \frac{1}{2\pi\ie} \int_{1+\frac{1}{\log{x}}-\ie ex}^{1+\frac{1}{\log{x}}+\ie ex}\zeta(z)f_{y}(kz)\frac{x^z}{z}\dif{z} + P_2, \] where $f_y(s)$ is defined by~\eqref{eq:fy}, and $\left|P_2\right|\leq 26y\log{(ex)}$. \end{corollary} \begin{proof} Let $\zeta(s)f_{y}(ks)=\sum_{n=1}^{\infty}a_n n^{-s}$ be the Dirichlet series. By Lemma~\ref{lem:zetafy} we have $\left|a_n\right|\leq y$. Therefore, taking $\psi\equiv y$, $\sigma=1+1/\log{x}$ and $T=ex$ in Theorem~\ref{thm:truncatedPerron}, and then using~\eqref{eq:simpleboundzeta} together with $g(\sigma)\leq y\zeta(\sigma)$, furnishes the proof. \end{proof} The constants in Corollaries~\ref{cor:Perron} and~\ref{cor:Perron2} can be improved for larger values of $x_0\leq x$, but such an improvement is negligible for our applications. \subsection{General form of Theorem~\ref{thm:FinalApp}} \label{sec:general} We are going to formulate and prove general bounds for $M(x)$ and $Q_{k}(x)$, where the constants are expressed in terms of the functions developed in the previous sections. \begin{theorem} \label{thm:MainMoebius} Assume the Riemann Hypothesis. Let $1/2<\sigma_0<1$, $2\gamma_1\leq t_0\leq 50$ and $x\geq x_0\geq 4\exp{\left(2e^2\right)}$.
Then \[ \left|M(x)\right| \leq \mathcal{N}_1\left(\sigma_0,t_0,x_0\right)x^{\sigma_0+\frac{1}{2}\omega_{0,1}}\log{x}, \] where \begin{equation} \label{eq:omega01} \omega_{0,1} = \omega_{0,1}\left(\sigma_0,t_0,x_0\right) \de \omega_1\left(\sigma_0,t_0;\log{\log{\left(\frac{3e\sqrt{x_0}}{2}\right)}}\right) \end{equation} with $\omega_1\left(\sigma_0,t_0;u\right)$ defined by~\eqref{eq:omega1}, and \begin{multline} \label{eq:N1} \mathcal{N}_1\left(\sigma_0,t_0,x_0\right) \de \frac{1}{\pi}\left(\frac{3e}{2}\right)^{\omega_{0,1}}\left(\frac{1}{2}+\frac{1}{\log{x_0}}+\frac{1.1-\sigma_0}{x_0^{\sigma_0-\frac{1}{2}}\log{x_0}}\right) \\ + \frac{1}{x_0^{\frac{1}{2}\omega_{0,1}}}\left(\frac{8e^{e^2}}{\pi\sigma_0\left(\sigma_0-\frac{1}{2}\right)\log{x_0}}+ \frac{23.6}{x_0^{\sigma_0-\frac{1}{2}}}\right). \end{multline} \end{theorem} \begin{proof} Take $T=e\sqrt{x}$ and observe that $T\geq 2\exp{\left(e^2\right)}$. By Cauchy's formula we have \[ \int_{1+\frac{1}{\log{x}}-\ie T}^{1+\frac{1}{\log{x}}+\ie T}\frac{x^{z}}{z\zeta(z)}\dif{z} = \left(\int_{1+\frac{1}{\log{x}}-\ie T}^{\sigma_0-\ie T}+\int_{\sigma_0-\ie T}^{\sigma_0+\ie T}+\int_{\sigma_0+\ie T}^{1+\frac{1}{\log{x}}+\ie T}\right)\frac{x^{z}}{z\zeta(z)}\dif{z} \] since under RH the integrand is a holomorphic function for $\Re\{z\}>1/2$. Denote by $\mathcal{I}_1$, $\mathcal{I}_2$ and $\mathcal{I}_3$ the latter integrals, written in the same order. By Corollary~\ref{cor:Perron} we then have \begin{equation} \label{eq:boundMmain} \left|M(x)\right| \leq \frac{1}{2\pi}\left(\left|\mathcal{I}_1\right| + \left|\mathcal{I}_2\right| + \left|\mathcal{I}_3\right|\right) + 22.2\sqrt{x}\log{(ex)}. \end{equation} We need to estimate each of the integrals. Corollary~\ref{cor:main} guarantees that \begin{equation} \label{eq:I1I3} \left|\mathcal{I}_1\right| + \left|\mathcal{I}_3\right| \leq 2\left(1+\frac{1}{\log{x}}-\sigma_0\right)\left(\frac{3e}{2}\right)^{\omega_{0,1}}x^{\frac{1}{2}+\frac{1}{2}\omega_{0,1}}. 
\end{equation} By Corollary~\ref{cor:main} and Lemma~\ref{lem:zetabound} we also have \begin{equation} \label{eq:I2} \left|\mathcal{I}_2\right| \leq \frac{16e^{e^2}}{\sigma_0\left(\sigma_0-\frac{1}{2}\right)}x^{\sigma_0} + 2\left(\frac{1}{2}+\frac{1}{\log{x}}\right)\left(\frac{3e}{2}\right)^{\omega_{0,1}}x^{\sigma_0+\frac{1}{2}\omega_{0,1}}\log{x}. \end{equation} Taking~\eqref{eq:I1I3} and~\eqref{eq:I2} into~\eqref{eq:boundMmain} gives the main estimate. \end{proof} Before proceeding to the formulation and proof of a similar result for $Q_{k}(x)$, we need to obtain a conditional estimate for $\left|f_y(s)\right|$. We do this entirely with the help of Theorem~\ref{thm:MainMoebius}. \begin{lemma} \label{lem:fy} Assume the Riemann Hypothesis. Let $s=\sigma+\ie t$ with $\sigma>1$, $1/2<\sigma_0<1$, $2\gamma_1\leq t_0\leq 50$, $x\geq x_0\geq 4\exp{\left(2e^2\right)}$, and $y\geq x_0$. If \begin{equation} \label{eq:alpha} \alpha=\alpha\left(\sigma_0,t_0,x_0\right) \de \sigma_0 + \frac{1}{2}\omega_{0,1}\left(\sigma_0,t_0,x_0\right) < 1, \end{equation} where $\omega_{0,1}\left(\sigma_0,t_0,x_0\right)$ is defined by~\eqref{eq:omega01}, then \[ \left|f_{y}(s)\right| \leq \mathcal{N}_1\left(\sigma_0,t_0,x_0\right)\Phi\left(\sigma,\alpha,x_0\right)y^{\alpha-\sigma}\log{y}, \] where $f_{y}(s)$ is defined by~\eqref{eq:fy} and \begin{equation} \label{eq:Phi} \Phi\left(\sigma,\alpha,x_0\right) \de 1+\frac{\sigma}{\sigma-\alpha}\left(1+\frac{1}{\left(\sigma-\alpha\right)\log{x_0}}\right). \end{equation} \end{lemma} \begin{proof} By partial summation we have \[ \sum_{y<n\leq Y} \frac{\mu(n)}{n^s} = \frac{M(Y)}{Y^s} - \frac{M(y)}{y^s} + \sigma\int_{y}^{Y}\frac{M(u)}{u^{1+s}}\dif{u} \] for $Y>y$. Taking absolute values in the latter equality and then using Theorem~\ref{thm:MainMoebius} to estimate the right-hand side, we see that we can take $Y\to\infty$ since $\alpha-\sigma<0$. The stated inequality now easily follows.
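In more detail (a sketch of the omitted computation): writing $\beta\de\sigma-\alpha>0$ and letting $Y\to\infty$, Theorem~\ref{thm:MainMoebius} with $y\geq x_0$ gives

```latex
\begin{flalign*}
\left|f_{y}(s)\right|
&\leq \mathcal{N}_1\left(y^{\alpha-\sigma}\log{y}
+ \sigma\int_{y}^{\infty}u^{\alpha-\sigma-1}\log{u}\dif{u}\right) \\
&= \mathcal{N}_1\left(y^{\alpha-\sigma}\log{y}
+ \sigma y^{-\beta}\left(\frac{\log{y}}{\beta}+\frac{1}{\beta^{2}}\right)\right)
\leq \mathcal{N}_1\Phi\left(\sigma,\alpha,x_0\right)y^{\alpha-\sigma}\log{y},
\end{flalign*}
```

where the last inequality uses $\log{y}\geq\log{x_0}$, in accordance with~\eqref{eq:Phi}.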
\end{proof} \begin{theorem} \label{thm:MainkFree} Assume the Riemann Hypothesis. Let $k\geq 2$ be an integer, $1/2<\sigma_0<1$, $2\gamma_1\leq t_1\leq 50$, $2\gamma_1\leq t_2\leq 50$, $x\geq x_0^{k+1}$, $x_0\geq 4\exp{\left(2e^2\right)}$ and $\alpha<1$, where $\alpha=\alpha\left(\sigma_0,t_1,x_0\right)$ is defined by~\eqref{eq:alpha}. Then \[ \left|Q_k(x) - \frac{x}{\zeta(k)}\right| \leq \mathcal{N}_2\left(\sigma_0,t_1,t_2,x_0,k,\alpha\right) x^{\frac{1/2+\alpha}{k+1}+\omega_{0,2}}\log^{2}{x}, \] where \[ \omega_{0,2} = \omega_{0,2}\left(t_2,x_0\right) \de \omega_2\left(t_2;\log{\log{\left(\frac{3e x_0}{2}\right)}}\right) \] with $\omega_2\left(t_2;u\right)$ defined by~\eqref{eq:omega2}, and \begin{flalign*} \mathcal{N}_2\left(\sigma_0,t_1,t_2,x_0,k,\alpha\right) &\de \frac{\mathcal{N}_1\Phi}{\pi(k+1)}\left(\frac{3e}{2}\right)^{\omega_{0,2}}\left(1+\frac{1}{\log{x_0}} +\frac{0.57}{\sqrt{x_0}\log^{2}{x_0}}\right) \\ &+ \frac{1}{x_0^{\omega_{0,2}}\log{x_0}} \left(\frac{\mathcal{N}_1}{k+1}\left(\frac{56e^{e^{2}}\Phi}{\pi}+\frac{1}{2x_0^{\frac{1}{2(k+1)}}}\right) +\frac{27.7}{x_0^{\frac{\alpha-1/2}{k+1}}}\right) \end{flalign*} with $\mathcal{N}_1=\mathcal{N}_1\left(\sigma_0,t_1,x_0\right)$ and $\Phi=\Phi\left(k/2,\alpha,x_0\right)$ defined by~\eqref{eq:N1} and~\eqref{eq:Phi}, respectively. \end{theorem} \begin{proof} Take $T=ex$, $y=x^{1/(k+1)}$ and $1/2<\sigma'\leq 3/4$. Observe that $x\geq x_0$, $T\geq 2\exp{\left(e^2\right)}$ and $y\geq x_0$. By Cauchy's formula we have \begin{multline*} \int_{1+\frac{1}{\log{x}}-\ie T}^{1+\frac{1}{\log{x}}+\ie T}\zeta(z)f_{y}(kz)\frac{x^z}{z}\dif{z} = \\ \left( \int_{1+\frac{1}{\log{x}}-\ie T}^{\sigma'-\ie T} + \int_{\sigma'-\ie T}^{\sigma'+\ie T} + \int_{\sigma'+\ie T}^{1+\frac{1}{\log{x}}+\ie T}\right)\zeta(z)f_{y}(kz)\frac{x^z}{z}\dif{z} + \left(2\pi\ie\right)xf_y(k) \end{multline*} since under RH the integrand is a holomorphic function for $\Re\{z\}>1/2$ and $z\neq 1$, having a simple pole at $z=1$.
Denote by $\mathcal{I}_1$, $\mathcal{I}_2$ and $\mathcal{I}_3$ the latter integrals, written in the same order. By~\eqref{eq:Q1} and Corollary~\ref{cor:Perron2} we then have \begin{equation*} Q_k(x) - \frac{x}{\zeta(k)} = \frac{1}{2\pi\ie}\left(\mathcal{I}_1 + \mathcal{I}_2 + \mathcal{I}_3\right) + P_2 - \frac{1}{2}M(y) - S_k(x,y), \end{equation*} which by Corollary~\ref{cor:Perron2} and Theorem~\ref{thm:MainMoebius} immediately implies \begin{multline} \label{eq:mainQbound} \left|Q_k(x) - \frac{x}{\zeta(k)}\right| \leq \frac{1}{2\pi}\left(\left|\mathcal{I}_1\right| + \left|\mathcal{I}_2\right| + \left|\mathcal{I}_3\right|\right) \\ + 26x^{\frac{1}{k+1}}\log{(ex)} + \frac{\mathcal{N}_1}{2(k+1)}x^{\frac{\alpha}{k+1}}\log{x} + \frac{1}{2}x^{\frac{1}{k+1}}. \end{multline} Corollary~\ref{cor:main} and Lemma~\ref{lem:fy} guarantee that \begin{equation} \label{eq:I11I33} \left|\mathcal{I}_1\right| + \left|\mathcal{I}_3\right| \leq \frac{2\left(1+\frac{1}{\log{x}}-\sigma'\right)\left(\frac{3e}{2}\right)^{\omega_{0,2}}\mathcal{N}_1}{k+1} \Phi\left(k\sigma',\alpha,x_0\right)x^{\frac{\alpha-k\sigma'}{k+1}+\omega_{0,2}} \end{equation} since $\Phi\left(\sigma,\alpha,x_0\right)y^{\alpha-\sigma}$ is a decreasing function in $\sigma$. By Corollary~\ref{cor:main} and Lemma~\ref{lem:zetabound2} we also have \begin{multline} \label{eq:I22} \left|\mathcal{I}_2\right| \leq \frac{56e^{e^2}\mathcal{N}_1}{\sigma'(k+1)}\Phi\left(k\sigma',\alpha,x_0\right) x^{\frac{\sigma'+\alpha}{k+1}}\log{x} \\ + \frac{2\left(\frac{3e}{2}\right)^{\omega_{0,2}}\left(1+\frac{1}{\log{x}}\right)\mathcal{N}_1}{k+1}\Phi\left(k\sigma',\alpha,x_0\right) x^{\frac{\sigma'+\alpha}{k+1}+\omega_{0,2}}\log^{2}{x}. \end{multline} Taking~\eqref{eq:I11I33} and~\eqref{eq:I22} into~\eqref{eq:mainQbound}, and then letting $\sigma'\to 1/2$, gives the final estimate. 
\end{proof} \section{Proof of Theorem~\ref{thm:FinalApp} and Corollary~\ref{cor:m}} \label{sec:proofApp} We are now in a position to prove estimates~\eqref{eq:thmMertensGen}, \eqref{eq:thmkFree}, \eqref{eq:thmMertens} and~\eqref{eq:thmSquarefree} from Theorem~\ref{thm:FinalApp}, and estimate~\eqref{eq:m} from Corollary~\ref{cor:m}. All numerical computations were performed with \emph{Mathematica}. \begin{proof}[Proof of~\eqref{eq:thmMertensGen}] We use Theorem~\ref{thm:MainMoebius}. Take $t_0=38.0820263$, \[ x_0\geq 10^{10^{4.487}}, \quad u=\log{\log{\left(\frac{3e\sqrt{x_0}}{2}\right)}}, \quad \sigma_0=\frac{1}{2}+\frac{0.842996}{u}. \] Our choice of the constants will be clear from the proof of~\eqref{eq:thmMertens}; see the first row in Table~\ref{tab:Proof1}. By the definition of the function $\omega_1$, see~\eqref{eq:omega1}, we have \[ u\cdot\omega_1\left(\sigma_0,t_0;u\right) = 2\left(\widehat{\phi}_1\left(u\right)+\frac{u}{50e^{u+e^{u}}}\right) \log{\left(1+\frac{2\lambda\left(t_0\right)}{0.842996}\right)} + u\cdot\omega\left(t_0;u\right). \] This function is decreasing for $u\geq10$, which implies \[ \frac{7.3}{u} \leq \omega_{0,1}\left(\sigma_0,t_0,x_0\right) \leq \frac{8.764095}{u}. \] With this we can show that $\sigma_0+\frac{1}{2}\omega_{0,1} \leq 1/2 + 5.2251u^{-1}$ and $\mathcal{N}_1\left(\sigma_0,t_0,x_0\right)\leq 0.6$. By taking $x_0=x$, inequality~\eqref{eq:thmMertensGen} now easily follows. \end{proof} \begin{proof}[Proof of~\eqref{eq:thmkFree}] We use Theorem~\ref{thm:MainkFree}. Take $t_1=30.3977424$, $t_2=2\gamma_1$, \[ x_0 \geq 10^{10^{23.147}}, \quad u_1=\log{\log{\left(\frac{3e\sqrt{x_0}}{2}\right)}}, \quad u_2=\log{\log{\left(\frac{3e x_0}{2}\right)}}, \quad \sigma_0 = \frac{1}{2} + \frac{0.75782}{u_2}. \] Our choice of the constants will be clear from the proof of~\eqref{eq:thmSquarefree}; see the first row in Table~\ref{tab:Proof2}.
By the definition of the function $\omega_1$, see~\eqref{eq:omega1}, we have \[ u_1\omega_1\left(\sigma_0,t_1;u_1\right) = 2\left(\widehat{\phi}_1\left(u_1\right)+\frac{u_1}{50e^{u_1+e^{u_1}}}\right)\log{\left(1+\frac{2\lambda\left(t_1\right)u_2}{0.75782u_1}\right)} + u_1\omega\left(t_1;u_1\right). \] We have $1<u_2/u_1\leq 1.013$. As in the previous proof, we can deduce from this that \begin{equation} \label{eq:2ndproofOmega01} \frac{7.5}{u_1} \leq \omega_{0,1}\left(\sigma_0,t_1,x_0\right) \leq \frac{7.525}{u_1}. \end{equation} By the definition of the function $\omega_2$, see~\eqref{eq:omega2}, we also have \[ u_2\omega_2\left(t_2;u_2\right) = \frac{1}{\pi}\lambda\left(2\gamma_1\right) + u_2\omega\left(2\gamma_1;u_2\right), \] which implies $\omega_{0,2}\left(t_2,x_0\right) \leq 7.492/u_2$. Therefore, \begin{multline} \label{eq:2ndproofMain} \frac{1}{k+1}\left(\frac{1}{2}+\sigma_0+\frac{1}{2}\omega_{0,1}\left(\sigma_0,t_1,x_0\right)\right) + \omega_{0,2}\left(t_2,x_0\right) \\ \leq \frac{1}{k+1} + \frac{7.525}{2(k+1)u_1} + \left(7.492+\frac{0.75782}{k+1}\right)\frac{1}{u_2}. \end{multline} In particular, $\alpha=\alpha\left(\sigma_0,t_1,x_0\right)\leq 0.585$. This and~\eqref{eq:2ndproofOmega01} then imply $\mathcal{N}_1\left(\sigma_0,t_1,x_0\right)\leq 0.2$ and $\Phi\left(k/2,\alpha,x_0\right)\leq 3.41$, which further provides $\mathcal{N}_2\leq 0.1$. The proof is thus furnished by taking $x_0=x^{1/(k+1)}$ in~\eqref{eq:2ndproofMain}. \end{proof} \begin{proof}[Proof of~\eqref{eq:thmMertens}] We use Theorem~\ref{thm:MainMoebius}. Take $u_0\de \log{\log{\left(3e\sqrt{x_0}/2\right)}}$ and $\log{x_0}=10^{X}\log{10}$. For each $\alpha$ from Table~\ref{tab:FinalApp} we search for $\sigma_0\in(1/2,1)$ and $t_0\in\left[30,50\right]$ such that \begin{equation} \label{eq:alpha0} \alpha_0\left(\sigma_0,t_0;u_0\right)\de \sigma_0+\frac{1}{2}\omega_{1}\left(\sigma_0,t_0;u_0\right) < \alpha \end{equation} for the smallest possible $u_0$.
We do this in the following way: for a given $u_0$ we find the minimum of $\alpha_0\left(\sigma_0,t_0;u_0\right)$ by calculating this function for each \[ \sigma_0\in\left\{\frac{1}{2}+\frac{n}{2000} \colon 1\leq n\leq 1000\right\} \] while using \texttt{FindMinimum} to determine $t_0$ for each such $\sigma_0$. With this process we obtain values for $u_0$, $\sigma_0$ and $t_0$, and then we also calculate values for $\mathcal{N}_1$ and $X$; see Table~\ref{tab:Proof1}. Values for $X$ and $A$ from Table~\ref{tab:FinalApp} now simply follow. \end{proof} \begin{table} \centering \begin{footnotesize} \begin{tabular}{cccccc} \toprule $u_0$ & $\sigma_0$ & $t_0$ & $\alpha_0$ & $\mathcal{N}_1$ & $X$ \\ \midrule $10.472$ & $0.5805$ & $38.0820263$ & $0.998969...$ & $0.516044...$ & $4.486728...$ \\ $10.600$ & $0.5795$ & $37.7819602$ & $0.989916...$ & $0.504493...$ & $4.542320...$ \\ $12.240$ & $0.5650$ & $34.7754417$ & $0.899710...$ & $0.407781...$ & $5.254575...$ \\ $13.580$ & $0.5575$ & $33.2402484$ & $0.849801...$ & $0.361954...$ & $5.836532...$ \\ $15.460$ & $0.5495$ & $31.9460694$ & $0.799913...$ & $0.321749...$ & $6.653006...$ \\ $18.250$ & $0.5415$ & $31.0325517$ & $0.749867...$ & $0.285883...$ & $7.864688...$ \\ $22.700$ & $0.5330$ & $30.5540678$ & $0.699211...$ & $0.253936...$ & $9.797299...$ \\ $30.080$ & $0.5250$ & $30.4162800$ & $0.649986...$ & $0.226151...$ & $13.00239...$ \\ $45.110$ & $0.5165$ & $30.3958431$ & $0.599987...$ & $0.201251...$ & $19.52983...$ \\ $90.220$ & $0.5085$ & $30.4079746$ & $0.549996...$ & $0.178845...$ & $39.12086...$ \\ \bottomrule \end{tabular} \end{footnotesize} \caption{Values for the parameters from the proof of~\eqref{eq:thmMertens}.} \label{tab:Proof1} \end{table} \begin{proof}[Proof of~\eqref{eq:thmSquarefree}] We use Theorem~\ref{thm:MainkFree} for $k=2$. Take $u_0=\log{\log{\left(3e x_0/2\right)}}$ and $\log{x_0}=10^{Y}\log{10}$.
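These two relations determine each other: $\log{x_0}=e^{u_0}-\log(3e/2)$, so $Y=\log_{10}\left(\left(e^{u_0}-\log(3e/2)\right)/\log{10}\right)$. As a quick numerical sanity check (ours, not part of the proof), the following Python snippet recovers the value of $Y$ in the first row of Table~\ref{tab:Proof2} from $u_0=54.13$:

```python
import math

def Y_from_u0(u0):
    # Invert u0 = log log(3e*x0/2), where log(x0) = 10^Y * log(10).
    log_x0 = math.exp(u0) - math.log(3 * math.e / 2)
    return math.log10(log_x0 / math.log(10))

print(Y_from_u0(54.13))   # approximately 23.1461, matching Table tab:Proof2
```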
For each $\beta$ from Table~\ref{tab:FinalApp} we are searching for $\sigma_0\in(1/2,1)$ and $t_1\in\left[30,50\right]$ such that \[ \beta_0\left(\sigma_0,t_1;u_0\right) \de \frac{1}{3}\left(\frac{1}{2}+\alpha_1\left(\sigma_0,t_1;u_0\right)\right) + \omega_{2}\left(2\gamma_1;u_0\right) < \beta \] for the smallest possible $u_0$, where \[ \alpha_1\left(\sigma_0,t_1;u_0\right)\de \sigma_0+\frac{1}{2}\omega_1\left(\sigma_0,t_1;u_0-0.6932\right). \] Note that \[ u_0-0.6932 \leq u_0+\log{\frac{1+e^{-u_0}\log{\frac{3e}{2}}}{2}} = \log{\log{\left(\frac{3e\sqrt{x_0}}{2}\right)}}, \] which implies $\alpha\left(\sigma_0,t_1,x_0\right)\leq \alpha_1$. Now the method is the same as in the proof of inequality~\eqref{eq:thmMertens}, and the values for the parameters are listed in Table~\ref{tab:Proof2}. Values for $Y$ and $B$ from Table~\ref{tab:FinalApp} now simply follow. \end{proof} \begin{table} \centering \begin{footnotesize} \begin{tabular}{rcccccc} \toprule $u_0$ & $\sigma_0$ & $t_1$ & $\alpha_1$ & $\beta_0$ & $\mathcal{N}_2$ & $Y$ \\ \midrule $54.13$ & $0.5140$ & $30.3977424$ & $0.584406...$ & $0.499874...$ & $0.085162...$ & $23.14614...$ \\ $54.42$ & $0.5140$ & $30.3999606$ & $0.583951...$ & $0.498984...$ & $0.084900...$ & $23.27209...$ \\ $57.54$ & $0.5130$ & $30.3927510$ & $0.579344...$ & $0.489984...$ & $0.082523...$ & $24.62708...$ \\ $61.45$ & $0.5125$ & $30.4039281$ & $0.574239...$ & $0.479997...$ & $0.079839...$ & $26.32518...$ \\ $65.94$ & $0.5115$ & $30.3989406$ & $0.569128...$ & $0.469992...$ & $0.077357...$ & $28.27516...$ \\ $71.14$ & $0.5105$ & $30.3931181$ & $0.564026...$ & $0.459987...$ & $0.074965...$ & $30.53349...$ \\ $77.23$ & $0.5100$ & $30.4071541$ & $0.558934...$ & $0.449985...$ & $0.072556...$ & $33.17834...$ \\ $84.45$ & $0.5090$ & $30.4008385$ & $0.553851...$ & $0.439997...$ & $0.070337...$ & $36.31395...$ \\ $135.05$ & $0.5055$ & $30.3927192$ & $0.533570...$ & $0.399998...$ & $0.062108...$ & $58.28925...$ \\ $540.10$ & $0.5015$ & $30.4310282$ & 
$0.508366...$ & $0.349993...$ & $0.053262...$ & $234.2002...$ \\ \bottomrule \end{tabular} \end{footnotesize} \caption{Values for the parameters from the proof of~\eqref{eq:thmSquarefree}.} \label{tab:Proof2} \end{table} With the help of Theorem~\ref{thm:MainMoebius} we are able to prove the following simple generalization of Corollary~\ref{cor:m}. \begin{theorem} \label{thm:m} Assume the Riemann Hypothesis. Let $s=\sigma+\ie t$, $1/2<\sigma_0<1$, $2\gamma_1\leq t_0\leq 50$ and $x\geq x_0\geq 4\exp{\left(2e^2\right)}$. Then \[ \left|\sum_{n\leq x}\frac{\mu(n)}{n^s} - \frac{1}{\zeta(s)}\right| \leq \mathcal{N}_1\left(1+\frac{|s|}{\sigma-\alpha}\right)\frac{\log{x}}{x^{\sigma-\alpha}} + \frac{\mathcal{N}_1|s|}{\left(\sigma-\alpha\right)^2 x^{\sigma-\alpha}} \] for $\sigma>1/2$ and $\alpha<\sigma$, where $\mathcal{N}_1\left(\sigma_0,t_0,x_0\right)$ and $\alpha\left(\sigma_0,t_0,x_0\right)$ are defined by~\eqref{eq:N1} and~\eqref{eq:alpha}, respectively. \end{theorem} \begin{proof} By partial summation and~\eqref{eq:PartSumMobius} we have \[ \sum_{n\leq x}\frac{\mu(n)}{n^s} = \frac{1}{\zeta(s)} + \frac{M(x)}{x^s} - s\int_{x}^{\infty}\frac{M(u)}{u^{1+s}}\dif{u}. \] By Theorem~\ref{thm:MainMoebius} we also have $\left|M(u)\right|\leq \mathcal{N}_1 u^{\alpha}\log{u}$ for $u\geq x$. The final inequality now easily follows after the exact evaluation of the corresponding integral. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:m}] Take $s=1$, $\sigma_0=0.5795$, $t_0=37.7819602$, and \[ x_0=\left(\frac{2}{3e}\right)^2 e^{2e^{10.6}} \] in Theorem~\ref{thm:m} while using Table~\ref{tab:Proof1} and the fact that $\alpha=\alpha_0$. \end{proof} \subsection{Acknowledgements} The author thanks Roger Heath-Brown for having a discussion about Remark~\ref{rem:RHB} and~\eqref{eq:problem}, as well as Richard Brent and Harald Helfgott for useful remarks. Finally, the author is grateful to his supervisor Tim Trudgian for continual guidance and support while writing this manuscript. 
\section{Introduction} \subsection{Quantum nondeterminism} Classical nondeterminism, although an unrealistic model of computation, is a fundamental concept in computational complexity with practical applications, as shown, for example, by the importance of the theory of $NP$-completeness. There are two different views of classical nondeterminism. A nondeterministic process computing a Boolean function $f(x)$ can be seen as a deterministic process $B$ receiving, besides the input $x$, a guess, or proof, $z$ and satisfying the following conditions: If $f(x)=1$, there should exist a proof $z$ such that $B(x,z)=1$; if $f(x)=0$, then $B(x,z)=0$ for all proofs $z$. Another view of nondeterminism is to consider $B$ receiving no proof, but being probabilistic. Then $B$ should output $1$ with positive probability if and only if $f(x)=1$. It is easy to see that the two models are perfectly equivalent in the classical setting. These two views of nondeterminism have been extended to obtain two alternative definitions of quantum nondeterminism. The first one, which we call in this paper quantum strong nondeterminism, is the quantum version of the probabilistic view of nondeterminism: the quantum process $B$ should output $1$ with positive probability if and only if $f(x)=1$. The second one, which we call quantum weak nondeterminism, is the extension of the first view of nondeterminism: If $f(x)=1$, there should exist a classical proof $z$ such that $B(x,z)=1$ with probability 1; if $f(x)=0$, then $B(x,z)=0$ with probability $1$ for all classical proofs $z$. In this case, $B$ is thus an exact quantum checking procedure.
The point is that, contrary to the classical case, in the quantum setting these two definitions do not seem equivalent and, in the query complexity framework, strong nondeterminism has been shown to be indeed stronger than weak nondeterminism: de Wolf \cite{deWolfSIAMJC03} has provided a total function for which the quantum strongly nondeterministic query complexity is $O(1)$, while its quantum weakly nondeterministic query complexity is $\Omega(\sqrt{n})$, where $n$ is the input length. The main advantages of the strong version of quantum nondeterminism are that the definition is mathematically very convenient and that it leads to many interesting results. For quantum Turing machines, this gives a complexity class known as quantum-$NP$, which has been shown to be equal to the classical complexity class $co-C_{=}P$ \cite{Yamakami+99}. For communication protocols, de Wolf \cite{deWolfSIAMJC03} has presented an algebraic characterization of quantum strongly nondeterministic communication complexity. Moreover, unbounded ($O(1)$ vs.~$\Omega(\log n)$) and exponential ($O(\log n)$ vs.~$\Omega(n)$) gaps are known between quantum strongly nondeterministic and classical nondeterministic communication complexity of some total functions. The latter results show the power of quantum strong nondeterminism but, in our opinion, this concept is in a way too powerful to be directly compared with classical nondeterminism. Above all, it lacks the view of nondeterminism as a proof that can be efficiently checked, a view that has been fundamental in complexity theory, for example leading to concepts such as probabilistically checkable proofs (PCP). We refer to \cite{deWolfSIAMJC03} for another discussion about these two definitions and a third natural definition where the proof is allowed to be a quantum state, which we will not consider in this paper.
We only mention that, although quantum proofs can be extremely useful in some cases (see in particular the works \cite{AaronsonCCC03,Raz+CCC04} studying the power of quantum proofs in certificate complexity and communication complexity, but in the setting where proofs have to be checked only with high probability), as far as quantum weak nondeterminism is concerned, the proof has to be checked without error and, in this case, the advantage of quantum proofs over classical proofs is not obvious at all. \subsection{Our contributions} In this paper, we focus on quantum weak nondeterminism and particularly quantum weakly nondeterministic communication complexity, which has, to our knowledge, never been studied before this work. We show a quadratic gap between classical nondeterministic and quantum weakly nondeterministic communication complexity for a total function. We believe that this separation of classical nondeterministic communication complexity and the weakest model of quantum nondeterministic communication complexity, although only quadratic, is another indication of the power of quantum computation. Indeed, the proof being classical, such a separation reveals that, if quantum exact checking procedures are allowed, the process of guessing proofs is more powerful than with classical deterministic checking procedures. Many separations of quantum and classical communication complexity are known in the usual two-player model \cite{Aaronson+05,Bar-Yossef+STOC04,Buhrman+STOC98,Hoyer+STACS02,KlauckSTOC00,RazSTOC99,deWolfSIAMJC03}. In particular, an exponential separation of quantum exact communication complexity and classical nondeterministic communication complexity has been shown for a partial function (i.~e.~a function where the inputs satisfy a promise) by Buhrman, Cleve and Wigderson \cite{Buhrman+STOC98}.
But, except for de Wolf's result \cite{deWolfSIAMJC03}, no gap larger than quadratic between classical and quantum complexity, for any mode of computation, is known for total functions. Moreover, before the present work, the polynomial separations already found for total functions \cite{Aaronson+05,Hoyer+STACS02,Buhrman+STOC98,KlauckSTOC00} were based on database search-like problems, which are trivial if classical nondeterminism is allowed, and thus cannot be used to show a gap between quantum weak nondeterminism and classical nondeterminism. The total function we consider in order to show the separation is new, based on the concept of Hadamard codes, and is inspired by a function considered by Buhrman, Fortnow, Newman and R\"ohrig \cite{Buhrman+SODA03} in the slightly different framework of query complexity and property testing. We present an efficient quantum weakly nondeterministic protocol computing our function, which generalizes the protocol in \cite{Buhrman+SODA03}, based on the local testability property of Hadamard codes and the fact that, with the promise that a string is in the Hadamard code, the string can be decoded efficiently using the Bernstein-Vazirani algorithm \cite{Bernstein+SIAMJC97}. The main contribution of our work is the proof of a classical lower bound on the number of bits of communication necessary for a classical nondeterministic protocol, obtained by showing an upper bound on the number of inputs for which each message can be used, which is basically a problem of extremal combinatorics. Proving this upper bound is indeed the hard part of the proof. This gives a separation $O(\log n)$ vs.~$\Omega(\log^2 n)$, where $n$ is the input length, of respectively quantum weakly nondeterministic and classical nondeterministic communication complexity, for our total function. The paper is structured as follows. We present definitions in Section \ref{section:notations}.
We then show the quantum upper bound in Section \ref{section:upperbound} and the classical lower bound in Section \ref{section:lowerbound}. Finally, in Section \ref{section:open}, we discuss open problems. \section{Notations and Definitions}\label{section:notations} \subsection{Notations} In this paper, we will mainly work in vector spaces of the form $\{0,1\}^n$ with the usual addition between vectors $\mathbf{x}=(x_0,\ldots,x_{n-1})$ and $\mathbf{y}=(y_0,\ldots,y_{n-1})$ defined as $\mathbf{x}\oplus\mathbf{y}=(x_0\oplus y_0,\ldots,x_{n-1}\oplus y_{n-1})$, where $x_i\oplus y_i$ denotes the sum of $x_i$ and $y_i$ modulo $2$, and the inner product defined as $\mathbf{x}\cdot \mathbf{y}=\bigoplus_{i=0}^{n-1} x_iy_i$. We will on several occasions consider integers in $\Set{0}{2^n-1}$ as vectors of $\{0,1\}^n$ through their binary encoding. We define the function $\delta$ over $\Integers\times\Integers$ as follows. $$\delta(a,b)=\left\{\begin{array}{ll} 0&\textrm{ if } a=b\\ 1& \textrm{ if } a\neq b \end{array} \right., \textrm{ for any integers } a \textrm{ and } b.$$ For $k\ge 1$, we denote by $S_k$ the set $\Set{1}{2^k-1}\backslash\{2^j\:\vert\:0\le j\le k-1\}$, i.~e.~the set of integers in $\Set{1}{2^k-1}$ that are not a power of $2$. Finally, for any $i\in\Set{1}{2^k-1}$, we denote by $[i]$ the largest power of $2$ smaller than or equal to $i$. In other words, $[i]=2^{\floor{\log_2 i}}$. We now recall the definition of Hadamard codes. \begin{definition} For any integer $k\ge 1$, the Hadamard code of length $2^k$, denoted $\mathscr{H}_k$, is the set $$\big \{h(\mathbf{w})\:\vert\:\mathbf{w}\in\{0,1\}^k\big\}, $$ where $h(\mathbf{w})$ is the binary vector of length $2^k$ with $i$-th coordinate $\mathbf{w}\cdot \mathbf{i}$ $($for $\:0\le i\le 2^k-1)$. \end{definition} Notice that $\mathscr{H}_k$ is a linear code containing $2^{k}$ codewords of length $2^k$.
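These definitions are easy to check computationally. The following Python sketch (illustrative only; the function names are ours) builds $\mathscr{H}_k$ from the inner products $\mathbf{w}\cdot\mathbf{i}$, confirms that it is a linear code with $2^k$ codewords, and implements $[i]$:

```python
def dot(w, i):
    # Inner product over GF(2) of the binary encodings of w and i.
    return bin(w & i).count('1') % 2

def hadamard_code(k):
    # H_k: all codewords h(w) for w in {0,1}^k, with h(w)_i = w . i.
    return {tuple(dot(w, i) for i in range(2**k)) for w in range(2**k)}

def bracket(i):
    # [i]: the largest power of 2 that is <= i.
    return 1 << (i.bit_length() - 1)

Hk = hadamard_code(3)
print(len(Hk))                                   # 8 = 2^3 codewords, each of length 8
x, y = sorted(Hk)[1], sorted(Hk)[2]
print(tuple(u ^ v for u, v in zip(x, y)) in Hk)  # True: H_k is linear
print(bracket(5), bracket(8))                    # 4 8
```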
\subsection{Nondeterministic communication complexity}\label{section:comcom} \subsubsection{Classical nondeterministic protocols} We first recall the definition of classical nondeterministic communication complexity. We refer to the textbook by Hromkovi{\v c} \cite{Hromkovic97} for further details. Given a set of pairs of strings $X\times Y$, where $X\subseteq\{0,1\}^\ast$ and $Y\subseteq\{0,1\}^\ast$, and a function $f:X\times Y \to \{0,1\}$, the communication problem associated with $f$ is the following: Alice has an input $x\in X$, Bob an input $y\in Y$ and their goal is to compute the value $f(x,y)$. We suppose that Alice and Bob have unlimited computation power. Moreover, a proof is given to the protocol: Alice and Bob each receive a string which is private, i.~e.~neither player can see the other's part of the proof. We say that a protocol $P$ is a nondeterministic protocol for $f$ if, for each $(x,y)\in X\times Y$, the following holds: \begin{enumerate} \item[(i)] if $f(x,y)=1$ then there is a proof such that the protocol outputs $1$, \item[(ii)] if $f(x,y)=0$ then, for all proofs, the protocol outputs $0$. \end{enumerate} The communication complexity of a nondeterministic protocol $P$ that correctly computes $f$, denoted $N(P,f)$, is the maximum, over all the inputs $(x,y)$ and the proofs, of the number of bits exchanged between Alice and Bob on this input. The nondeterministic communication complexity of the function $f$, denoted $N(f)$, is the minimum, over all the nondeterministic protocols $P$ that compute $f$, of $N(P,f)$. We now recall the notions of rectangles and coverings and their relation with classical nondeterministic complexity. A rectangle of $X\times Y$ is a subset $R\subseteq X\times Y$ such that $R$ can be written as $A\times B$ for some $A\subseteq X$ and $B\subseteq Y$. The rectangle $R$ is said to be 1-monochromatic for $f$ if, for all $(x,y)\in R$, $f(x,y)=1$.
A 1-covering of size $t$ for $f$ is a set of $t$ rectangles $R_1,\cdots,R_t$ of $X\times Y$ that are 1-monochromatic for $f$ and such that $R_1\cup\cdots\cup R_t=\{(x,y)\in X\times Y\:\vert\: f(x,y)=1\}$. Let $C^1(f)$ be the minimum, over all 1-coverings for $f$, of the size of the covering. Then the following fact holds (we refer to \cite{Hromkovic97} for the proof). \begin{fact}\label{fact1} $N(f)=\ceil{\log_2{C^1(f)}}$. \end{fact} \subsubsection{Quantum weakly nondeterministic protocols} Let us now consider quantum communication complexity. We refer to Nielsen and Chuang \cite{Nielsen+00} for details about quantum computation and to \cite{Buhrman00,Klauck00,deWolfTCS02} for good surveys of quantum communication complexity. We define quantum weakly nondeterministic protocols as in the classical case, the only modification being that the messages are now allowed to be quantum: Alice and Bob receive inputs $x$, $y$ and two classical strings corresponding to a classical proof, communicate through a quantum channel and their goal is to compute $f(x,y)$. Notice that in this model there is no prior entanglement between the two players. \begin{definition}{\bf{(Quantum weak nondeterminism)}}\label{weak} We say that such a quantum protocol is a weakly nondeterministic protocol for $f$ if, for each $(x,y)\in X\times Y$, the following holds:\vspace{-1mm} \begin{enumerate} \item[(i)] if $f(x,y)=1$ then there is a classical proof such that the protocol outputs $1$ with probability 1, \vspace{-3mm} \item[(ii)] if $f(x,y)=0$ then, for all classical proofs, the protocol outputs $0$ with probability 1. \end{enumerate} \end{definition}\vspace{-1mm} \noindent Similarly to the classical case, the quantum weakly nondeterministic communication complexity of $f$ is the minimum, over all the quantum weakly nondeterministic protocols computing $f$, of the number of qubits exchanged between Alice and Bob on the worst-case instance and the worst proof.
We are thus considering the worst-case complexity of exact quantum protocols receiving classical proofs. As explained in the introduction of this paper, a stronger definition of quantum nondeterministic protocols can be given \cite{Massar+01,deWolfSIAMJC03}, corresponding to probabilistic protocols using quantum messages that output 1 with positive probability if and only if $f(x,y)=1$. The main reasons why we think studying the power of quantum protocols resulting from Definition \ref{weak} is meaningful are that, first, this definition corresponds to the original version of classical nondeterminism, based on the notion of proof, and, second, we believe quantum strongly nondeterministic protocols are in a way too powerful to be ``fairly'' compared with classical nondeterministic protocols. Let us give a simple example that illustrates the latter point. The non-equality function is the function $NEQ_n:\{0,1\}^n\times\{0,1\}^n\to \{0,1\}$ such that $NEQ_n(x,y)=1$ if $x\neq y$ and $NEQ_n(x,y)=0$ if $x=y$. Massar, Bacon, Cerf and Cleve \cite{Massar+01} have shown a quantum strongly nondeterministic protocol for $NEQ_n$ using exactly one quantum bit (qubit) of communication. In comparison, it is well known that $N(NEQ_n)=\Theta(\log n)$ (see for example \cite{Hromkovic97}). We explain their simple protocol, which shows the power of quantum strong nondeterminism. Alice views her input $x$ as an integer in $\Set{0}{2^n-1}$, prepares the state $$\cos\left(\frac{x\pi}{2^n}\right)\ket{0}+\sin\left(\frac{x\pi}{2^n}\right)\ket{1}$$ and sends it to Bob. Bob rotates it by the angle $-y\pi/2^n$, obtaining the state $$\cos\left(\frac{(x-y)\pi}{2^n}\right)\ket{0}+\sin\left(\frac{(x-y)\pi}{2^n}\right)\ket{1}.$$ Measuring this state gives $1$ with positive probability if $x\neq y$. In the case $x=y$, the probability of measuring $1$ is 0.
The quantum communication protocol that does the above state manipulations, measures the final state and outputs the outcome of the measurement is thus a quantum strongly nondeterministic communication protocol for $NEQ_n$ using only one qubit of communication, in a way incomparable with classical nondeterminism. \subsection{Our total function} We now define the communication problem $HEQ_{k,k'}$ (Hadamard Equality) that is used to show the separation of quantum weakly nondeterministic and classical nondeterministic communication complexity.\vspace{4mm} $\phantom{aa}${\bf Hadamard Equality} $(\mathbf{HEQ_{k,k'}},\textrm{ for } k,k'\ge 1)$\\ \indent $\phantom{aa}$Alice's input: a vector $\mathbf{a}=(a_1,\ldots,a_{2^k-1})$ in $\Set{0}{2^{k'}-1}^{2^k-1}$\\ \indent $\phantom{aa}$Bob's input: $\phantom{e}$a vector $\mathbf{b}=(b_1,\ldots,b_{2^k-1})$ in $\Set{0}{2^{k'}-1}^{2^k-1}$\\ \indent $\phantom{aa}$output: $\phantom{lalala}$$0$ if $(0,\delta(a_1, b_1),\ldots,\delta(a_{2^k-1}, b_{2^k-1}))\in \mathscr{H}_k\backslash\{(0,\ldots,0)\}$\\ \indent $\phantom{aa}$$\phantom{lalalalallaal}\:\:$ $1$ else\vspace{5mm} \noindent Notice that, for any $a$ and $b\in\Set{0}{2^{k'}-1}$, we have $\delta(a,b)=0$ if and only if $a=b$. Thus the problem $HEQ_{k,k'}$ can be seen as a two-leveled string equality problem: Intuitively, the hard case is for Alice and Bob to check whether $$(0,\delta(a_1, b_1),\ldots,\delta(a_{2^k-1}, b_{2^k-1}))=(0,\ldots,0)$$ and, to do this, they have to check whether $\delta(a_i, b_i)=0$ for sufficiently many values of $i$ (actually, at least $k$ different values). The point is that the nondeterministic communication complexity of testing the equality of two integers of $k'$ bits is $\Theta(k')$. Thus, intuitively, the classical nondeterministic communication complexity of $HEQ_{k,k'}$ is $\Omega(kk')$. We will, in Section \ref{section:lowerbound}, prove that, when $k'$ is sufficiently large, this intuition is correct.
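As an illustration, the following Python sketch (our own; the helper names are hypothetical) evaluates $HEQ_{k,k'}$ literally from the definition, and exhibits one 1-instance and one 0-instance:

```python
def dot(w, i):
    # Inner product over GF(2) of the binary encodings of w and i.
    return bin(w & i).count('1') % 2

def hadamard_code(k):
    return {tuple(dot(w, i) for i in range(2**k)) for w in range(2**k)}

def heq(k, a, b):
    # a and b are tuples (a_1, ..., a_{2^k - 1}); prepend the fixed 0 coordinate.
    d = (0,) + tuple(0 if ai == bi else 1 for ai, bi in zip(a, b))
    return 0 if (any(d) and d in hadamard_code(k)) else 1

k = 3
a = tuple(range(1, 2**k))    # some input for Alice
print(heq(k, a, a))          # 1: equal inputs give the all-zero vector, not in H_k\{0}
w = 3                        # build b so that the delta-vector equals h(w), w != 0
b = tuple(a[i - 1] if dot(w, i) == 0 else 0 for i in range(1, 2**k))
print(heq(k, a, b))          # 0: the delta-vector is a nonzero codeword
```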
To our knowledge, the function $HEQ_{k,k'}$ has never been considered before, but the case $k'=1$ is similar to a property testing problem considered by Buhrman, Fortnow, Newman and R\"ohrig \cite{Buhrman+SODA03} in the framework of query complexity. The original (promise) problem in \cite{Buhrman+SODA03} is, for a fixed subset $A_k$ of $\mathscr{H}_k$, to decide whether a string $x$ is in $A_k$ or the Hamming distance between $x$ and any string of $A_k$ is sufficiently large, by querying as few bits of $x$ as possible. By setting $A_k=\mathscr{H}_k\backslash\{(0,\ldots,0)\}$, and replacing ``sufficiently large'' by ``positive'', we obtain a definition similar to $HEQ_{k,1}$. However, as far as communication complexity is concerned, the results in \cite{Buhrman+SODA03} do not imply any separation of classical nondeterminism and quantum weak nondeterminism. \section{Quantum Upper Bound}\label{section:upperbound} In this section, we present an efficient quantum weakly nondeterministic protocol for $HEQ_{k,k'}$. We first prove the following lemma, which restates, in our notations, a well-known property of the Hadamard code. \begin{lemma}\label{lemma:condition} Let $\mathbf{x}=(x_0,x_1,\ldots,x_{2^k-1})$ be a vector in $\{0,1\}^{2^k}$ such that $x_0=0$. Then the following two assertions are equivalent. \begin{enumerate} \item[1.] $\mathbf{x}\in \mathscr{H}_k$; \item[2.] For all the indexes $i$ in $S_k$, the following holds: $x_i=x_{[i]} \oplus x_{i-[i]}$. \end{enumerate} \end{lemma} \begin{proof} Take a vector $\mathbf{x}\in \mathscr{H}_k$ and an integer $i$ in $S_k$. From the definition of the Hadamard code, there exists a vector $\mathbf{w}\in\{0,1\}^k$ such that $x_i=\mathbf{w}\cdot\mathbf{i}$, $x_{[i]}=\mathbf{w}\cdot\mathbf{i'}$ and $x_{i-[i]}=\mathbf{w}\cdot\mathbf{i''}$, with $i'=[i]$ and $i''=i-[i]$. Then $x_{[i]} \oplus x_{i-[i]}=\mathbf{w}\cdot(\mathbf{i'}\oplus\mathbf{i''}) =\mathbf{w}\cdot\mathbf{i}$ from the definition of $[i]$. Thus assertion 2 holds. 
Now we prove that there are at most $2^{k}$ vectors in $\{0,1\}^{2^k}$ satisfying assertion 2. Since $\abs{\mathscr{H}_k}=2^{k}$, this will prove the lemma. Take two vectors $\mathbf{x}$ and $\mathbf{x'}$ such that $x_0=x'_0=0$ and $x_{2^l}=x'_{2^l}$ for all $l\in\{0,\ldots,k-1\}$. If $\mathbf{x}$ and $\mathbf{x'}$ both satisfy assertion 2 then the other bits are uniquely determined and thus, necessarily, $\mathbf{x}=\mathbf{x'}$. This implies that we can construct at most $2^{k}$ different vectors satisfying assertion 2. \end{proof} We then present the main result of this section. \noindent\begin{theorem}\label{theorem:comupperbound} For any positive integers $k$ and $k'$, there exists a quantum weakly nondeterministic protocol using less than $3(k+k')$ qubits of communication that computes the function $HEQ_{k,k'}$. \end{theorem} \begin{proof} We describe our quantum protocol, which is actually a generalization of (a modified version of) the quantum query protocol in \cite{Buhrman+SODA03}. Suppose that the inputs are $\mathbf{a}=(a_1,\ldots,a_{2^k-1})$, $\mathbf{b}=(b_1,\ldots,b_{2^k-1})$ and that $(\mathbf{a},\mathbf{b})$ is a 1-instance of $HEQ_{k,k'}$. This means that one of the two following cases holds: \begin{enumerate} \item[(i)] $(0,\delta(a_1,b_1),\ldots,\delta(a_{2^k-1},b_{2^k-1}))\notin \mathscr{H}_k$; or \item[(ii)] $(0,\delta(a_1,b_1),\ldots,\delta(a_{2^k-1},b_{2^k-1}))=(0,\ldots,0)$. \end{enumerate} Alice first guesses which case holds. If (i) really holds then, from Lemma \ref{lemma:condition}, there exists an integer $j\in S_k$ such that $\delta(a_j,b_j)\neq\delta(a_{[j]},b_{[j]}) \oplus \delta(a_{j-[j]},b_{j-[j]})$. Alice guesses this index $j$ and sends her guess $j$ together with the three integers $a_j$, $a_{[j]}$ and $a_{j-[j]}$ (using a classical message). Bob then checks whether $\delta(a_j,b_j)\neq\delta(a_{[j]},b_{[j]}) \oplus \delta(a_{j-[j]},b_{j-[j]})$, outputs $1$ if it holds, and $0$ else.
Now suppose that Alice guessed that (ii) holds. Alice then creates and sends Bob the following state. \begin{displaymath} \frac{1}{\sqrt{2^k}}\sum_{m=0}^{2^k-1}\ket{m}\ket{a_m}, \end{displaymath} where the first register consists of $k$ qubits and the second register of $k'$ qubits. Here, we use the convention $a_0=0$. Bob applies the following unitary transform on the state he received: \begin{displaymath} \ket{m}\ket{r}\mapsto (-1)^{\delta(r,b_m)}\ket{m}\ket{r}, \end{displaymath} for all $m\in\Set{0}{2^{k}-1}$ and $r\in\Set{0}{2^{k'}-1}$, with the convention $b_0=0$. He then sends back the resulting state to Alice. Alice now performs the unitary transform \begin{displaymath} \ket{m}\ket{r}\mapsto \ket{m}\ket{r\oplus a_m} \end{displaymath} for any $m\in\Set{0}{2^k-1}$ and $r\in\Set{0}{2^{k'}-1}$ (here $r\oplus a_m$ denotes the bitwise parity of the binary encodings of $r$ and $a_m$). The resulting state is $$ \frac{1}{\sqrt{2^k}}\sum_{m=0}^{2^k-1}(-1)^{\delta(a_m, b_m)}\ket{m}\ket{0}. $$ From this point on, the protocol is simply the Bernstein-Vazirani algorithm \cite{Bernstein+SIAMJC97} (or the Deutsch-Jozsa algorithm \cite{Deutsch+92}). Alice applies a Hadamard transform on each of the $k$ qubits of the first register and measures the first register of the resulting state in the computational basis, outputs 1 if the result is $0$ and outputs 0 else. If (ii) really holds, the state before the measurement being $\ket{0}\ket{0}$, her measurement result is necessarily $0$. She then outputs $1$ without error. For any 1-instance of $HEQ_{k,k'}$, there is thus a guess that can be verified with probability $1$ by this protocol. Now consider the behavior of this protocol on a 0-instance, i.~e.~an instance such that $(0,\delta(a_1,b_1),\ldots,\delta(a_{2^k-1},b_{2^k-1}))\in \mathscr{H}_k\backslash\{(0,\ldots,0)\}$. If Alice guesses that case (i) holds, then, from Lemma \ref{lemma:condition}, the checking procedure always outputs $0$.
If Alice guesses that the case (ii) holds, then at the end of the checking procedure, before doing the measurement, the state will be $\ket{c}\ket{0}$ for some $c\in\Set{1}{2^{k}-1}$. Measuring this state will give $c$ which is different from $0$. Thus the checking procedure outputs 0 with probability 1, whatever Alice's guesses are. We conclude that the above protocol is correct on 0-instances as well. \end{proof} \section{Classical Lower Bound}\label{section:lowerbound} First, notice that there exists a nondeterministic classical protocol for $HEQ_{k,k'}$ using $O(kk')$ communication bits. The protocol is similar to the quantum protocol of Theorem \ref{theorem:comupperbound}, but, when Alice guesses that $(0,\delta(a_1,b_1),\ldots,\delta(a_{2^k-1},b_{2^k-1}))=(0,\ldots,0)$, she sends the $k$ integers $a_{2^s}$, for all $s\in\Set{0}{k-1}$, instead of sending the state $\frac{1}{\sqrt{2^k}}\sum_{m=0}^{2^k-1}\ket{m}\ket{a_m}$. Bob then outputs 1 if and only if $\delta(a_{2^s},b_{2^s})=0$ for all these integers $s$. The objective of this section is to show that this protocol is basically optimal. The proof of the lower bound is based on the following strong result. \begin{theorem}\label{mainproposition} Let $k$ and $k'$ be two positive integers such that $k\ge 3$ and $k'\ge k$. Consider any subset $A\subseteq \Set{0}{2^{k'}-1}^{2^k-1}$ such that, for any two elements $\mathbf{a}=(a_1,\ldots,a_{2^k-1})$ and $\mathbf{b}=(b_1,\ldots,b_{2^k-1})$ of $A$, the following condition holds. \begin{equation}\label{condition} \left\{ \begin{array}{ll} (0,\delta(a_1,b_1),\ldots,\delta(a_{2^k-1},b_{2^k-1}))= (0,\dots,0)&\:\: if \:\:\mathbf{a}=\mathbf{b}\\ (0,\delta(a_1,b_1),\ldots,\delta(a_{2^k-1},b_{2^k-1}))\notin\mathscr{H}_k&\:\: if \:\:\mathbf{a}\neq\mathbf{b} \end{array}\:\:\:\right. 
\end{equation} Then $A$ necessarily satisfies $$\abs{A}\le 2^{k'2^k-k(k'-k-1)}.$$ \end{theorem} \begin{proof} Our proof is inspired by a new proof by Babai, Snevily and Wilson \cite{Babai+95} of a result by Frankl~\cite{Frankl86}, itself generalizing a result by Delsarte~\cite{Delsarte73,Delsarte74}, that gives an upper bound on the size of any code as a function of the cardinality of the set of Hamming distances that occur between two distinct codewords (but these results are fundamentally different from what we need to prove our upper bound). Denote $\mathscr{A}=\Set{0}{2^{k'}-1}^{2^k-1}$, and consider any subset $A\subseteq\mathscr{A}$ such that any two elements $\mathbf{a}$ and $\mathbf{b}$ satisfy the condition (\ref{condition}). For each $a\in\Set{0}{2^{k'}-1}$, consider the polynomial $\varepsilon_a$ over the field of rational numbers defined as follows. $$\varepsilon_a(X)=1- \frac{X}{a}\:\frac{X-1}{a-1}\:\cdots\:\frac{X-(a-1)}{1}\:\frac{X-(a+1)}{-1}\: \frac{X-(a+2)}{-2}\:\cdots\:\frac{X-(2^{k'}-1)}{a-(2^{k'}-1)}\:.$$ Notice that $\varepsilon_a(b)=\delta(a,b)$ for any $a$ and $b$ in $\Set{0}{2^{k'}-1}$. Now, given a vector $\mathbf{a}=(a_1,\ldots,a_{2^k-1})$ in $\mathscr{A}$, we define the multivariate polynomial \begin{displaymath} f_\mathbf{a}(\mathbf{X})=f_{\mathbf{a}}(X_1,\ldots,X_{2^k-1})= \prod_{i\in S_k} \big(1-\varepsilon_{a_i}(X_i)-\varepsilon_{a_{[i]}}(X_{[i]})-\varepsilon_{a_{i-[i]}}(X_{i-[i]})\big)\:. \end{displaymath} The polynomial $f_\mathbf{a}$ has the property that any monomial it contains has at most $\abs{S_k}=2^k-k-1$ distinct indeterminates $X_j$ in it. For each $f_{\mathbf{a}}$, we construct a new polynomial as follows: for each variable $X_j$ appearing in $f_{\mathbf{a}}$ with an exponent $e>2^{k'}-1$, we replace $X_j^e$ by $X_j^e$ reduced modulo $X_j(X_j-1)\ldots(X_j-(2^{k'}-1))$. Call $f'_{\mathbf{a}}$ the new polynomial.
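Note that $\varepsilon_a$ is exactly $1-\prod_{c\neq a}\frac{X-c}{a-c}$: the product equals $1$ at $X=a$ and contains the factor $X-b$ for every $b\neq a$, which gives $\varepsilon_a(b)=\delta(a,b)$. A quick Python check over exact rationals (our own, not part of the proof):

```python
from fractions import Fraction

def eps(a, b, N):
    # epsilon_a evaluated at b, for a, b in {0, ..., N-1}:
    # 1 - prod_{c != a} (b - c)/(a - c).
    prod = Fraction(1)
    for c in range(N):
        if c != a:
            prod *= Fraction(b - c, a - c)
    return 1 - prod

N = 2**2  # the case k' = 2
print(all(eps(a, b, N) == (0 if a == b else 1)
          for a in range(N) for b in range(N)))   # True: eps_a(b) = delta(a, b)
```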
Notice that, as functions over the rationals, $f_{\mathbf{a}}$ and $f'_{\mathbf{a}}$ have the same values over $\mathscr{A}$. As a function, each $f'_\mathbf{a}$ is in the span of all the $\sum_{i=0}^{2^k-k-1}(2^{k'}-1)^i\binom{2^k-1}{i}$ monomial functions in which at most $2^k-k-1$ distinct variables enter and such that the exponent of each variable is at most $2^{k'}-1$. From the hypothesis on $A$, Lemma \ref{lemma:condition} implies that the following holds for all $\mathbf{a}$ and $\mathbf{b}$ in $A$. $$f'_{\mathbf{a}}(\mathbf{b})=f_{\mathbf{a}}(\mathbf{b})\equiv \left\{\begin{array}{ll} 1\:\:\textrm{mod } 2& \textrm{ if } \mathbf{a} = \mathbf{b} \\ 0\:\:\textrm{mod } 2 & \textrm{ if } \mathbf{a}\neq\mathbf{b} \end{array}\right. $$ We now show that this implies that the $\abs{A}$ functions $f'_{\mathbf{a}}$ for $\mathbf{a}\in A$ are linearly independent over the rationals. Take $\abs{A}$ rationals $\lambda_{\mathbf{a}}$ such that $\sum_{\mathbf{a}\in A}\lambda_{\mathbf{a}}f'_{\mathbf{a}}=\mathbf{0}$. Without loss of generality, we can actually consider that the $\lambda_{\mathbf{a}}$ are integers. The evaluation of the two sides of this expression at the point $\mathbf{b}$ gives $\lambda_{\mathbf{b}}\equiv 0\:\textrm{mod } 2$. Thus, necessarily, $\lambda_{\mathbf{a}}\equiv 0\:\textrm{mod } 2$ for all $\mathbf{a}\in A$. Suppose that the $\lambda_{\mathbf{a}}$ are not all zero and denote $\Lambda_i=\{\mathbf{a}\in A\textrm{ such that }\lambda_{\mathbf{a}}\neq 0 \textrm{ and } 2^i\vert\lambda_{\mathbf{a}}\}$ for $i$ ranging from 1 to $r$, where $r$ is the greatest integer such that $2^r$ appears in the prime power decomposition of some $\lambda_{\mathbf{a}}$. Evaluating, for increasing $i$ from 1 to $r$, the functions $\sum_{\mathbf{a}\in\Lambda_i}(\lambda_{\mathbf{a}}/2^i)f'_{\mathbf{a}}$ and repeating the argument above shows that every nonzero $\lambda_{\mathbf{a}}$ is divisible by $2^{i+1}$; at $i=r$ this contradicts the maximality of $r$, so $\Lambda_{1}=\emptyset$. Thus $\lambda_{\mathbf{a}}=0$ for all $\mathbf{a}\in A$.
The fact that the $\abs{A}$ functions $f'_{\mathbf{a}}$ are linearly independent over the rationals implies that \begin{eqnarray} \abs{A}&\le& \sum_{i=0}^{2^k-k-1}(2^{k'}-1)^i\binom{2^k-1}{i}\label{eq4.1}\\ &\le& \sum_{i=0}^{2^k-k}(2^{k'})^i\binom{2^k}{i}\:\:.\label{eq4.2} \end{eqnarray} We now show an upper bound for this expression.\vspace{2mm} \begin{lemma}\label{lemma:upperbound} Let $k$ and $k'$ be positive integers such that $k\ge 3$ and $k'\ge k$. Then \begin{displaymath}\label{equation:ineq} \sum_{i=0}^{2^k-k}(2^{k'})^i\binom{2^k}{i}\le 2^{k'2^k-k(k'-k-1)}. \end{displaymath} \end{lemma} {\bf Proof of Lemma \ref{lemma:upperbound}.} First notice that, in the case $k'\ge k$, the function $$h:j\mapsto (2^{k'})^j\binom{2^k}{j}$$ is an increasing function over $\Set{0}{2^k}$: For any $i\in\Set{0}{2^k-1}$, we have $h(i+1)/h(i)=2^{k'}(2^k-i)/(i+1)\ge 2^{k'}2^{-k}\ge 1$. We can now give the following upper bound. \begin{eqnarray*} \sum_{i=0}^{2^k-k}(2^{k'})^i\binom{2^k}{i}&\le& 2^k\max_{i\in\Set{0}{2^k-k}}\left((2^{k'})^i\binom{2^k}{i}\right)\\ &=& 2^k(2^{k'})^{2^k-k}\binom{2^k}{2^k-k}=2^k(2^{k'})^{2^k-k}\binom{2^k}{k}. \end{eqnarray*} Using the standard fact $\binom{2^k}{k}\le (e2^k/k)^k$, where $e$ is the base of the natural logarithm, and noting that $e/k\le 1$ for $k\ge 3$, we obtain \begin{eqnarray*} \sum_{i=0}^{2^k-k}(2^{k'})^i\binom{2^k}{i}&\le& 2^k(2^{k'})^{2^k-k}\left(2^k\right)^k\\ &=& 2^{k'2^k-k(k'-k-1)}.\:\:\:\:\: \Box \end{eqnarray*}\vspace{2mm} Using Lemma \ref{lemma:upperbound}, we obtain the claimed upper bound on the size of $A$. This concludes the proof of Theorem \ref{mainproposition}. \end{proof} We are now ready to prove the lower bound on the classical nondeterministic complexity of $HEQ_{k,k'}$. \begin{theorem}\label{theorem:comlowerbound} Let $k$ and $k'$ be two positive integers such that $k\ge 3$ and $k'\ge k$. Then $$N(HEQ_{k,k'})\ge k(k'-k)-(k+k').$$ \end{theorem} \begin{proof} Denote again $\mathscr{A}=\Set{0}{2^{k'}-1}^{2^k-1}$.
Notice that for any $\mathbf{a}\in \mathscr{A}$, $(\mathbf{a},\mathbf{a})$ is a 1-instance of $HEQ_{k,k'}$. We will show a lower bound on the number of 1-monochromatic (for $HEQ_{k,k'}$) rectangles of $\mathscr{A}\times\mathscr{A}$ necessary to cover $\{(\mathbf{a},\mathbf{a})\:\vert\:\mathbf{a}\in\mathscr{A}\}$. Here covering means that the union of the rectangles need only contain $\{(\mathbf{a},\mathbf{a})\:\vert\:\mathbf{a}\in\mathscr{A}\}$. Such a lower bound obviously implies a lower bound on the number of 1-monochromatic rectangles necessary to cover all the 1-instances of $HEQ_{k,k'}$. Any 1-monochromatic rectangle of a covering of $\{(\mathbf{a},\mathbf{a})\:\vert\:\mathbf{a}\in\mathscr{A}\}$ can be considered, without loss of generality, to be of the form $A\times A$ for some subset $A\subseteq\mathscr{A}$. By the definition of a 1-monochromatic rectangle, for each $\mathbf{a}=(a_1,\ldots,a_{2^k-1})$ and $\mathbf{b}=(b_1,\ldots,b_{2^k-1})$ in $A$ the following must hold: \begin{enumerate} \item[1.] $(0,\delta(a_1,b_1),\ldots,\delta(a_{2^k-1},b_{2^k-1}))=(0,\dots,0)\:\:$ if $\:\:\mathbf{a}=\mathbf{b}$; \item[2.] $(0,\delta(a_1,b_1),\ldots,\delta(a_{2^k-1},b_{2^k-1}))\notin\mathscr{H}_k\:\:$ if $\:\:\mathbf{a}\neq\mathbf{b}$. \end{enumerate} Then, even for the largest 1-monochromatic rectangle of the form $A\times A$, from Theorem \ref{mainproposition} we have $\abs{A}\le 2^{k'2^k-k(k'-k-1)}$. This implies that at least $$\frac{(2^{k'})^{2^k-1}}{\abs{A}}\ge 2^{kk'-k^2-k-k'}\:$$ 1-monochromatic rectangles are necessary to cover $\{(\mathbf{a},\mathbf{a})\:\vert\:\mathbf{a}\in\mathscr{A}\}$. The nondeterministic complexity of $HEQ_{k,k'}$ is thus, using Fact \ref{fact1}, at least $kk'-k^2-k-k'$. \end{proof} This theorem implies the quadratic separation, as stated in the next corollary. \begin{corollary} There is a quadratic separation of quantum weakly nondeterministic and classical nondeterministic communication complexity.
\end{corollary} \begin{proof} Consider for example $HEQ_{k,2k}$: its quantum weakly nondeterministic communication complexity is, from Theorem \ref{theorem:comupperbound}, $O(k)$, while its classical nondeterministic communication complexity is, from Theorem \ref{theorem:comlowerbound}, $\Omega(k^2)$. \end{proof} \section{Discussion and Open Problems}\label{section:open} Although we conjecture that even for arbitrary $k'$, the classical nondeterministic communication complexity of $HEQ_{k,k'}$ is $\Omega(kk')$, it is not possible to prove this fact using the same technique. Indeed, equation (\ref{eq4.2}) is a relatively tight approximation of (\ref{eq4.1}) and $$ \sum_{i=0}^{2^k-k}(2^{k'})^i\binom{2^k}{i}\ge (2^{k'})^{2^k-k}\binom{2^k}{k} \ge 2^{k'2^k-kk'+k^2-k\log_2 k}, $$ which cannot be $2^{k'2^k-\Omega(kk')}$ when $k'$ is small with respect to $k$. The main open problem is whether a separation larger than quadratic can be found between classical nondeterministic and quantum weakly nondeterministic communication complexity for a total function. Is an exponential gap achievable? It may indeed be the case that, for total functions, the largest gap achievable is polynomial and, possibly, quadratic.
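The sandwich used in this discussion, $(2^{k'})^{2^k-k}\binom{2^k}{k} \le \sum_{i=0}^{2^k-k}(2^{k'})^i\binom{2^k}{i} \le 2^{k'2^k-k(k'-k-1)}$, is easy to spot-check with exact integer arithmetic for small parameters (a sketch; the function names are ours):

```python
from math import comb

def middle(k, kprime):
    # the exact sum (4.2): sum_{i=0}^{2^k - k} (2^{k'})^i * binom(2^k, i)
    return sum((2 ** kprime) ** i * comb(2 ** k, i)
               for i in range(2 ** k - k + 1))

def upper(k, kprime):
    # the upper bound of the Lemma (valid for k >= 3, k' >= k)
    return 2 ** (kprime * 2 ** k - k * (kprime - k - 1))

def lower(k, kprime):
    # the single-term lower bound: (2^{k'})^{2^k - k} * binom(2^k, k)
    return (2 ** kprime) ** (2 ** k - k) * comb(2 ** k, k)
```

The lower bound is simply the $i=2^k-k$ term of the sum (using $\binom{2^k}{2^k-k}=\binom{2^k}{k}$), so it trivially holds; the upper bound is what the Lemma provides.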
\section{Introduction} In 1812, Massachusetts governor Elbridge Gerry got help from his political party by crafting a district and won his own election. At the time, someone produced an illustration of the districting and emphasized its similarity to a salamander. The term {\it {Gerrymander}} was then coined by combining Gerry and (sala)mander. Nowadays, {\it{Gerrymandering}} refers to the practice of drawing district lines to maximize the advantage of a political party. For example, a bipartisan gerrymander is one in which the districting protects incumbents, while a racial or ethnic gerrymander dilutes or preserves the strength of minorities. Political Districting has since become an issue that is always political, controversial and sometimes even ugly. In the US, for example, the results of a population census every ten years may require voter redistricting in order to redistribute the House seats among the states. Politicians have always fought over district boundaries, while the courts might consider the problem just too political even to enter. Several constitutional amendments were actually passed in the nineteenth century to prevent {\it{Gerrymandering}}. In nearly half of the states that underwent voter redistricting in the 1990s, federal or state courts played an essential role in the redistricting debate, and judges actually issued new lines in ten states. In this process of redistricting, the courts never used quantitative methods to justify the plans. One would wonder if there are more objective methods to perform the redistricting process. Mathematical and numerical approaches do in fact exist in the literature. Such methods can in principle eliminate {\it{Gerrymandering}} by providing well defined steps and constraints. Local search methods include those used in Kaiser [1] and in Nagel [2]. An implicit enumeration technique was also developed by Garfinkel and Nemhauser [3]. George et al.
[4] studied the problem of determining New Zealand's electoral districts, using a location-allocation based iterative method in conjunction with a geographic information system (GIS). From a mathematical point of view, the Political Districting Problem belongs to what is known as the Districting (or zone design) Problem, in which $n$ units are grouped into $k$ zones such that some cost function is optimized, subject to constraints on the topology of the zones, etc., and has been shown to be NP-Complete [5]. Thus, it is best treated by optimization methods. The Districting Problem is a geographical problem which is present in a number of geographical tasks such as school districting, design of sales territories, etc. The constraints of the Districting Problem are very similar to those of the Clustering Problem in optimization. Let the set of $n$ initial units be $X = \{x_1, x_2, ..., x_n\}$, where $x_i$ is the $i$-th unit, and let the number of districts be $k$. Let $Z_i$ be the set of all the units that belong to district $i$. Then $$ Z_i \neq \emptyset \,\,\, , i = 1, ..., k \,\,\, , $$ $$ Z_i \cap Z_j = \emptyset \,\,\, , i \neq j \,\,\, , $$ $$ \cup^k_{i=1} Z_i = X \,\,\, . \eqno(1.1) $$ There is an additional constraint in the Districting Problem, namely, the constraint of contiguity, which makes the problem somewhat more complicated. It constrains the set of possible solutions to those that assure contiguity between the units within each designed district. Contiguity here means that every unit in a district is connected to every other unit through units that are also in the district. An important optimization criterion in the Political Districting Problem is to avoid Gerrymandering. It is generally accepted that there are three essential characteristics that the districts should have [6]: population equality, contiguity and geographical compactness.
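The contiguity constraint just described is a plain graph-connectivity condition and can be checked with a breadth-first search. A minimal sketch (the data layout, a district as a set of unit indices plus an adjacency dictionary, is our own illustration):

```python
from collections import deque

def is_contiguous(district, neighbors):
    # district: set of unit indices; neighbors: dict mapping each unit to
    # its adjacent units.  BFS from any member, staying inside the district;
    # the district is contiguous iff the search reaches every member.
    units = set(district)
    if not units:
        return False
    start = next(iter(units))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v in units and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == units
```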
The task here is therefore to devise a method that is able to produce solutions which satisfy these characteristics. In this paper, we map the Districting Problem onto a $q$-state Potts model. This allows us to study the problem by using statistical physics methods. Most of the constraints that we mentioned above can be represented as interaction terms among various sites of the $q$-state Potts model or as an external field added to the system. By doing so, we can also understand the corresponding physical nature of such a social problem. Using a physics model to study a social science problem is not new. Many papers and books have been written on various subjects in social science. People have employed concepts such as scaling to study the social behavior in financial markets [7-9]. Statistical models have also been applied to NP-complete problems in combinatorial optimization [10]. We here demonstrate how a socio-economic problem can be transformed into a physics model and carry out an optimization study to look for the optimal solution of the problem. This paper is organized as follows. Section II is a description of the model for the redistricting problem. Section III contains the results of our numerical simulation and Section IV is the summary and discussion. \section{The Model} The $q$-state Potts model was first proposed as an appropriate generalization of the Ising model, to consider a system of spins confined in a plane with each spin pointing to one of the $q$ equally spaced directions specified by the angles $$ \Theta_n = \frac{2\pi n}{q} \,\,\, , n = 0, 1, ..., q-1 \,\,\, . \eqno(2.1) $$ \noindent In the most general form, the nearest-neighbor interaction would only depend on the relative angle between the two vectors.
The Hamiltonian will then take the form $$ H = - \sum_{ij} J(\Theta_{ij}) \,\,\, , \eqno(2.2) $$ \noindent where the function $J(\Theta)$ is $2\pi$ periodic and $\Theta_{ij} = \Theta_{n_i} - \Theta_{n_j}$ is the angle between the two spins of neighboring sites $i$ and $j$. In his seminal work, Potts [11] chose $$ J(\Theta) = \epsilon_1 \cos\Theta \,\,\, , \eqno(2.3) $$ \noindent and was able to determine the critical point of this (now known as planar Potts) model on the square lattice for $q = 2, 3, 4$. As a remark, he also gave the critical point for all $q$ of the following (now known as standard Potts) model $$ J(\Theta_{ij}) = \epsilon_2 \delta_{S_i, S_j} \,\,\, , \eqno(2.4) $$ \noindent where $\delta_{ij}$ is the Kronecker delta and is equal to 1 when $i=j$ and 0 otherwise. It is also the model with interaction energy of the form in Eq. (2.4) that has attracted the most attention to date. In our study here, we shall use this standard Potts model as the starting point. To begin with, we let each site, or $q$-state spin, represent a unit in the Districting Problem, with a total of $N$ units. $q$ is the total number of districts in the plan. For a spin to be in one of the $q$ states means that the unit belongs to that particular district. Define $n_j$ to be the number of sites in a particular state $j$. It is clear that $$ \sum_{j=1}^q n_j = N \,\,\, . \eqno(2.5) $$ \noindent The objective here is to include the constraints such as population equality, contiguity and geographical compactness as interaction energy terms in the Hamiltonian so that the ground state energy configuration corresponds to the optimal solution of the problem. Achieving equal voter population size districts is central to any Political Districting Problem. A measure of this can be obtained, e.g., by calculating the sum of the differences between the population of each district and the average population over all districts.
To include this in our model, we can view it as an external random field acting on site $i$ with a field strength $p_i$ representing the voter population of this site. Therefore, the total voter population $P_j$ of a district $j$ is the sum of the field strengths of all the sites within this district and is given by $$ P_j = \sum_{i=1}^{N} p_i \delta_{S_i, j} \,\,\, , \eqno(2.6) $$ \noindent and the total voter population of the plan is given by $$ P_0 = \sum_{j=1}^q P_j \,\,\, . \eqno(2.7) $$ \noindent The average voter population $\langle P \rangle$ for each district is therefore equal to $P_0/q$. The difference between the voter population of district $j$ and the average voter population will contribute to the Hamiltonian of the system. Let us define its total contribution to be $H_P$. It is then given by $$ H_P = \sum_{j=1}^q \biggl| \frac{P_j}{\langle P \rangle} - 1 \biggr| \,\,\, . \eqno(2.8) $$ Population equality alone will sometimes lead to problems of contiguity and compactness in districting, resulting in districts of unnatural shapes. Hence compactness is usually an important factor in any political districting solution. There are many ways to define the compactness of a district, but there is as yet no universally accepted definition of compactness. Young [12] studied eight different measures of compactness and showed that each measure fails to give satisfactory results on certain geographical configurations. In short, any good measure of compactness must apply both to the districting as a whole and to each district individually. It should also be conceptually simple and should use easily collected and verifiable data. Our strategy here is to take compactness as the smallest total sum of all boundaries between different districts. In this way, we can view it as the interaction energy between domains, or the domain wall energy.
Thus, the contribution between sites $i$ and $j$ to the domain wall energy is $$ ( 1 - \delta_{S_i, S_j} ) C_{ij} \,\,\, , \eqno(2.9) $$ \noindent where $C_{ij} = 1$ if $i$ and $j$ are neighboring sites and zero otherwise. Defining the Hamiltonian energy from this interaction to be $H_D$, we have $$ H_D = \sum_{i,j} ( 1 - \delta_{S_i, S_j} ) C_{ij} \,\,\, . \eqno(2.10) $$ $H_P$ and $H_D$ are the two energy terms that we include in our Hamiltonian for the study of the system, which we now define as $$ H = \lambda_P H_P + \lambda_D H_D \,\,\, , \eqno(2.11) $$ \noindent where $\lambda_P$ and $\lambda_D$ are constant coefficients. Notice that the way we define $H_D$ would in most cases guarantee the constraint of contiguity, depending on the ratio between $\lambda_P$ and $\lambda_D$. Other constraints can also be included as interactions among various sites or external fields acting on the system and will be discussed in Sec. IV. \section{Numerical Simulation Results} In the above section, we have shown how to map the Districting Problem to a $q$-state Potts Model and rewrite the constraints as interactions between different sites and external fields acting on the system. In this section, we will use the problem of determining the districting for the Taiwan Legislature seats as an example, though the method is equally applicable to any districting problem. We will also use the Monte Carlo method to perform our simulation. Starting from the 2008 Legislature election, the political districts of Taiwan will be restructured, resulting in 73 voter districts, where each district will elect its own legislator for the Legislature. Taipei city, for example, will have 8 voter districts. On the other hand, Taipei city has at this moment a total of 449 precincts (a precinct corresponds to a site in our model) and a voter population of about 2 million.
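Putting Eqs. (2.8), (2.10) and (2.11) together, the Hamiltonian of a candidate districting can be evaluated directly. A sketch with our own data layout (district labels per site, populations per site, and a list of neighboring pairs; here each neighboring pair contributes once to $H_D$):

```python
def h_P(spins, pop, q):
    # Eq. (2.8): sum over districts of |P_j / <P> - 1|,
    # with P_j accumulated as in Eq. (2.6)
    P = [0.0] * q
    for s, p in zip(spins, pop):
        P[s] += p
    avg = sum(P) / q
    return sum(abs(Pj / avg - 1) for Pj in P)

def h_D(spins, edges):
    # Eq. (2.10): domain wall energy, counting each neighboring pair once
    return sum(1 for i, j in edges if spins[i] != spins[j])

def h_total(spins, pop, edges, q, lam_P=50.0, lam_D=1.0):
    # Eq. (2.11); the defaults are the paper's example weights
    # lambda_P = 50, lambda_D = 1
    return lam_P * h_P(spins, pop, q) + lam_D * h_D(spins, edges)
```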
Redistricting into 8 voter districts will imply that we need to regroup and put the precincts into different voter districts, which is a typical Political Districting Problem. In order to satisfy the population equality constraint, a voter population of about 250,000 voters in each voter district is preferred. Figure 1 shows the distribution of the number of precincts vs. precinct voter population in Taipei city. The y-axis is the number of precincts while the x-axis is the voter population in each precinct, with each bin representing 500 voters. It is interesting to see that this can be approximated by a Gaussian distribution with a mean of about 4,100 and a variance of about 1,450. This can in turn be interpreted as a Gaussian-distributed random field term in the $q$-state Potts model. We should mention here that the $q$-state Potts model with Gaussian-distributed random fields is of tremendous interest in physics. It has been studied for a long time and will not be discussed here. \begin{figure} \includegraphics[width=10cm]{gerrymander_fig1} \caption{The distribution of the number of precincts vs. precinct voter population in Taipei city.} \end{figure} \begin{figure} \includegraphics[width=12cm]{gerrymander_fig2} \caption{(a) A district of 10 precincts in our model; (b) each red dot represents one precinct; (c) a network of precincts connected by green arcs; (d) the network extracted from (c). } \end{figure} \begin{figure} \includegraphics[width=10cm]{gerrymander_fig3} \caption{The distribution of the number of neighbors that each of the current 449 precincts in Taipei city has.} \end{figure} In order to map this problem onto the $q$-state Potts model, we need to define what is meant by the neighbors of a site (precinct). Figure 2 is an illustration of our definition. It represents a district with 10 precincts, as shown in Figure 2(a). Each of the precincts is represented by a red dot, which is shown in Figure 2(b).
The red dots correspond to the sites in our $q$-state Potts model. These red dots are then connected by green arcs, forming a network as shown in Figure 2(c). Figure 2(d) is the network extracted from Figure 2(c). We can further exploit the interesting properties of this example. Figure 3 is the distribution of the number of neighbors that each of the current 449 precincts in Taipei city has. We can call this the {\it connectivity} of a precinct. We can also approximate this by a Gaussian distribution with a mean of about 4.6 and a standard deviation of about 1. This further supports the introduction of an interaction term among different sites in our discussion above. The preferred voter district boundaries here will cut through those precincts with fewer connections to avoid long and thin precincts (units) within a voter district in order to satisfy compactness. In fact, precincts with more connections will lie closer to the {\it center} of a voter district while precincts with few connections will stay near or at the district boundaries. \begin{figure} \includegraphics[width=10cm]{gerrymander_fig4} \caption{Energy of the system vs. temperature.} \end{figure} \begin{figure} \includegraphics[width=10cm]{gerrymander_fig5} \caption{Specific heat capacity $C_V$ of our system vs. temperature.} \end{figure} The competition between the two terms in the Hamiltonian in Eq. (2.11) defines the statistical properties of the model. Figure 4 shows the energy of the system vs. temperature. As we can see, the system undergoes a phase transition around $T = 2.0$ with an energy $E \approx 308$, when $\lambda_P$ and $\lambda_D$ are set to 50 and 1, respectively. The phase transition here corresponds to the formation of domains, or the aggregation of precincts into compact voter districts. As mentioned above, the total number of sites in this system is 449 while $q$ is equal to 8, the number of precincts and voter districts in Taipei respectively.
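The paper does not spell out its Monte Carlo update rule; a generic single-site Metropolis sweep of the kind commonly used for Potts models might look like the following (entirely our sketch, with a callable `energy_fn` standing in for the Hamiltonian of Eq. (2.11)):

```python
import math
import random

def metropolis_sweep(spins, energy_fn, q, T, rng):
    # One Metropolis sweep: for each site in turn, propose a new district
    # label and accept with probability min(1, exp(-dE / T)).
    E = energy_fn(spins)
    for i in range(len(spins)):
        old = spins[i]
        new = rng.randrange(q)
        if new == old:
            continue
        spins[i] = new
        E_new = energy_fn(spins)
        if E_new <= E or rng.random() < math.exp(-(E_new - E) / T):
            E = E_new          # accept the move
        else:
            spins[i] = old     # reject and restore
    return E
```

Annealing then amounts to repeating such sweeps while lowering $T$, e.g. from a very high value down to $T = 0.1$ as described below.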
To make sure there is no peculiar behavior in our system, we have actually started from $T = 10^7$ and gradually lowered the temperature, obtaining a smooth curve all the way down to $T = 0.1$, where there is no further change in the ground state energy. Figure 5 is a plot of the specific heat capacity $C_V$ vs. temperature. One can see that there is a peak around $T = 2.0$, with a peak value of about 273. This is an indication of a phase transition, where large domains are formed and condensed; the temperature at which the peak appears corresponds to the critical temperature. Since our example here is a finite system (with only 449 sites), finite size effects prevent the peak from blowing up. In order to see that this model has a phase transition, we construct an artificial system with interaction terms similar to our example but with a varying number of sites, to demonstrate the critical properties in the thermodynamic limit. The artificial model is our $q$-state Potts model on a periodic two dimensional triangular lattice with a Hamiltonian similar in form to Eq. (2.11). We again set $q = 8$ in this system and assume Gaussian distributions in both the voter population and the number of connections for the sites. Again, we normalize the total voter population to 1, independent of the number of sites. For the connections, since each site on the triangular lattice can connect to at most six nearest neighbors, we normalize to have the peak of the Gaussian distribution at 3, with the cutoff at 6. Figure 6 is a plot of the energy of the artificial system vs. temperature. In the figure, we have included the energy of lattice sizes from $8 \times 8$ to $64 \times 64$. One can clearly see the sharpening of the transition around $T \approx 1.6 $, with $E$ about 990, again with $\lambda_P$ and $\lambda_D$ equal to 50 and 1, respectively.
\begin{figure} \includegraphics[width=10cm]{gerrymander_fig6} \caption{Energy of the artificial system vs. temperature.} \end{figure} \begin{figure} \includegraphics[width=10cm]{gerrymander_fig7} \caption{Specific heat capacity $C_V$ of the artificial system vs. temperature.} \end{figure} Figure 7 is a plot of the specific heat capacity $C_V$ vs. temperature of the system. We can here see clearly that $C_V$ will diverge as the size of the lattice grows, which confirms a phase transition of the system. The peak value of $C_V$ for the lattice $64 \times 64$ is about 2470. Figure 8(a) is a map of Taipei city and its 449 precincts. As one lowers the temperature, the system will eventually reach its ground state. In reality, one cannot have absolute voter population equality for each of the voter districts. The number of near optimal solutions will increase if one allows the percentage difference of the voter population of a district from the average voter population per district to increase. Figure 8(b) is an illustration of the voter districting with the lowest ground state energy from our simulation with $\lambda_P$ and $\lambda_D$ equal to 50 and 1 respectively. There are a total of 8 voter districts, drawn in different colors. \begin{figure} \includegraphics[width=12cm]{gerrymander_fig8} \caption{ (a) A map of Taipei city and its 449 precincts; (b) an illustration of the voter districting in the lowest energy state from our simulation with $\lambda_P$ and $\lambda_D$ equal to 50 and 1 respectively. The 8 voter districts are drawn in different colors. } \end{figure} \hskip 1 cm {Table 1. Voter districting in Taipei city with different $\lambda_P$ and $\lambda_D$.
} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\lambda_P$ & $\lambda_D$ & $E_{min}$ & $E_P$ & $E_D$ & $\Delta P_{max}$ & $\Delta P_{max}/ \langle P \rangle$ \\ \hline 1 & 1 & 86.715 & 4.715 & 82 & 390925 & 1.5693 \\ \hline 5 & 1 & 97.073 & 1.615 & 89 & 143919 & 0.5778 \\ \hline 10 & 1 & 104.075 & 0.507 & 99 & 47145 & 0.1893 \\ \hline 50 & 1 & 110.537 & 0.031 & 109 & 1765 & 0.0071 \\ \hline 100 & 1 & 111.998 & 0.020 & 110 & 1846 & 0.0074 \\ \hline 500 & 1 & 145.778 & 0.014 & 139 & 801 & 0.0032 \\ \hline \end{tabular} \end{center} Table 1 lists the optimal energy ($E_{min}$) that we obtain with different values of $\lambda_P$ and $\lambda_D$. $E_P$ and $E_D$ are the contributions to $E_{min}$ from the voter population ($\lambda_P H_P$) and district boundary ($\lambda_D H_D$) in Eq. (2.11). The voter population of each precinct is taken from the 2004 Legislature Election. Also included is the largest deviation ($\Delta P_{max}$) of the voter population of a district from the average voter population $\langle P \rangle$ and the ratio ($\Delta P_{max}/ \langle P \rangle$). In the 2008 Legislature Election, the Central Election Commission (CEC) of Taiwan constrains $\Delta P_{max}/ \langle P \rangle$ of Taipei city to be less than 15\%. On the other hand, one also wants to have a near minimal $E_D$ in order to guarantee contiguity and compactness. Taking these into consideration, $\lambda_P$ somewhere between 10 and 100 is preferred for the districting. \section{Summary and Discussion} In this paper, we show how to use a statistical physics model to study a socio-economic problem. We have mapped the Political Districting Problem to a $q$-state Potts model in which the constraints can be written as interactions between sites or external fields acting on the system. Districting into $q$ voter districts is equivalent to finding the ground state of the system.
Searching for an optimal solution for the ground state becomes an optimization problem and standard optimization algorithms such as the Monte Carlo method or simulated annealing method can be employed here. The system undergoes a phase transition as one lowers the temperature. This transition can be understood as follows. At high temperature, only small domains are formed and the whole system is in a random state. As the temperature decreases, large domains will begin to form in order to lower the energy of the system. At the critical temperature, the system will form large domains and thus will approach the ground state configuration. In the example above, we studied the 2008 Taiwan Legislature Election with two constraints, viz. voter population equality and compactness. With a suitable choice of the ratio between $\lambda_D$ and $\lambda_P$, the near optimal solutions also satisfy the contiguity condition. One can also add other interaction terms for extra constraints. In our example here, Taipei city itself has currently 12 administrative zones. The CEC of Taiwan prefers to have no more than 2 administrative zones in each voter district. Hence the districting here corresponds to adding another constraint to the Hamiltonian. One can, for example, add another term to the Hamiltonian which takes the following form $$ H_A = \lambda_A \sum_{i,j,k} \delta_{S_i,S_j} \delta_{S_j,S_k} \delta_{S_k,S_i} (1 - \delta_{A_i,A_j}) (1 - \delta_{A_j,A_k}) (1 - \delta_{A_k,A_i}) \,\,\, , \eqno(4.1) $$ \noindent where $A_i$ here refers to the administrative zone that site $i$ belongs to and $i, j, k$ all belong to the same voter district. One can see that in (4.1), the right hand side will give a finite contribution if the sites within a voter district belong to 3 or more administrative zones. A large $\lambda_A$ practically eliminates such a possibility. 
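A direct transcription of the penalty in Eq. (4.1), counting each unordered triple of sites once (a sketch with our own data layout; `zones[i]` plays the role of $A_i$):

```python
from itertools import combinations

def h_A(spins, zones, lam_A=1.0):
    # Eq. (4.1): a triple of sites contributes iff all three lie in the
    # same voter district but in three pairwise distinct administrative
    # zones; a large lam_A practically forbids such configurations.
    count = 0
    for i, j, k in combinations(range(len(spins)), 3):
        if spins[i] == spins[j] == spins[k] and \
           zones[i] != zones[j] and zones[j] != zones[k] and zones[i] != zones[k]:
            count += 1
    return lam_A * count
```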
In general, one can include additional terms in the total Hamiltonian to take care of new constraints, and the methodology we give here should be equally applicable to other districting problems. We have thus shown how one can use a statistical physics approach to study \lq\lq socio-econophysics\rq\rq \, problems and demonstrated this with the example of districting Taipei city in the 2008 Taiwan Legislature Election. This work was supported in part by the National Science Council, Taiwan, R.O.C. (grant no. NSC-94-2112-M-001-019).
\section{Introduction} Every topological space in this article is assumed to be metrizable and separable. Erd\H{o}s space is defined to be the space $$ \mathfrak{E} = \{(x_n)_{n\in \omega} \in \ell^2 : x_n \in \mathbb{Q} \text{ for all } n\in \omega \},$$ where $\ell^2$ is the Hilbert space of square-summable sequences of real numbers. Erd\H{o}s space was introduced by Erd\H{o}s in 1940 in \cite{er} as an example of a totally disconnected space that is not zero-dimensional. For a space $X$, $\mathcal{K}(X)$ denotes the hyperspace of non-empty compact subsets of $X$ with the Vietoris topology. For any $n\in \ensuremath{\mathbb{N}}$, $\mathcal{F}_n(X)$ is the subspace of $\mathcal{K}(X)$ consisting of all non-empty subsets that have cardinality less than or equal to $n$, and $\mathcal{F}(X)$ is the subspace of $\mathcal{K}(X)$ of finite subsets of $X$. In \cite{zaragoza}, the author proved that $\mathcal{F}_n(\mathfrak{E})$ is homeomorphic to $\mathfrak{E}$ for all $n\in\mathbb{N}$. Soon after that, David S. Lipham proved in \cite{lipham} that if $X$ is an Erd\H{o}s space factor then $\mathcal{F}(X)$ is an Erd\H{o}s space factor. The objective of this note is to round off this topic with the following result. \begin{theorem}\label{FE} $\mathcal{F}(\mathfrak{E})$ is homeomorphic to $\mathfrak{E}$. \end{theorem} We will also give some indirect consequences of the theorem. \section{Definitions and preliminaries} In this paper, $\ensuremath{\mathbb{N}}$ is the set of positive integers and $\omega=\{0\}\cup\ensuremath{\mathbb{N}}$ is the set of natural numbers. Given $n\in \ensuremath{\mathbb{N}}$ and subsets $U_1,\ldots, U_n$ of a topological space $X$, $\langle U_{1},\ldots ,U_{n}\rangle$ denotes the collection $\left\lbrace F \in \mathcal{K}(X):F\subset \bigcup_{k=1}^n U_k,\, F\cap U_{k}\neq \emptyset \textit{ for } k \leq n \right\rbrace $.
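For finite examples, membership in a canonical set $\langle U_1,\ldots,U_n\rangle$ is a simple set-theoretic test; a sketch (the function name and finite-set encoding are ours):

```python
def in_basic_open(F, Us):
    # F belongs to <U_1, ..., U_n> iff F is contained in the union of the
    # U_k and F meets every U_k.
    F = set(F)
    Us = [set(U) for U in Us]
    return F <= set().union(*Us) and all(F & U for U in Us)
```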
Recall that the Vietoris topology on $\mathcal{K}(X)$ has as its canonical base all the sets of the form $\langle U_{1},\ldots ,U_{n}\rangle$, where $U_k$ is a non-empty open subset of $X$ for each $k\leq n$. In \cite{ME}, Dijkstra and van Mill gave an intrinsic characterization of Erd\H{o}s space. We will use this characterization, so we first recall some definitions. \begin{defi}[{\cite[Remark 2.4]{ME}}]\label{eazd} A topological space $(X,\mathcal{T})$ is almost zero-dimensional if there is a zero-dimensional topology $\mathcal{W}$ on $X$ such that $\mathcal{W}$ is coarser than $\mathcal{T}$ and every point in $X$ has a local neighborhood base consisting of sets that are closed with respect to $\mathcal{W}$. \end{defi} \begin{defi}[{\cite[Definition 5.1]{ME}}] Let $X$ be a space and let $\mathcal{A}$ be a collection of subsets of $X$. The space $X$ is called $\mathcal{A}$-cohesive if every point of the space has a neighborhood that does not contain non-empty proper clopen subsets of any element of $\mathcal{A}$. \end{defi} We refer the reader to \cite{Ke} for the basic theory of trees. \begin{theorem}[{\cite[Theorem 8.13, p. 46]{ME}}]\label{EE} A nonempty space $E$ is homeomorphic to $\mathfrak{E}$ if and only if there exist a topology $\mathcal{W}$ on $E$ that witnesses the almost zero-dimensionality of $E$, a nonempty tree $T$ over a countable alphabet and subspaces $E_s$ of $E$ that are closed with respect to $\mathcal{W}$ for each $s\in T$ such that: \begin{enumerate} \item $E_{\emptyset} = E$ and $E_s =\bigcup \{E_t : t \in succ(s)\}$ whenever $s \in T$, \item each $x\in E$ has an open neighborhood $U$ that is an anchor in $(E,\mathcal{W})$, that is, for every $t\in [T ]$ we either have $E_{t\restriction k} \cap U = \emptyset$ for some $k \in \omega$ or the sequence $(E_{t\restriction k})_{k\in\omega}$ converges in $(E,\mathcal{W})$ to a point.
\item for each $s \in T$ and $t \in succ(s)$, $E_t$ is nowhere dense in $E_s$, and \item $E$ is $\{E_s : s \in T\}$-cohesive. \end{enumerate} \end{theorem} For each $n\in\ensuremath{\mathbb{N}}$, let $\varphi_n:X^n\to \mathcal{F}_n(X)$ be the function defined by $\varphi_n(x_1,\ldots,x_n)=\{x_1,\ldots, x_n\}$. It is known that this function is continuous and finite-to-one, and in fact it is a quotient map \cite{Sub}. \begin{lemma}\label{Coh} Let $X$ be a space that is $\{A_s: s\in S\}$-cohesive, witnessed by a base $\mathcal{B}$ of open sets. Consider the following collection of subsets of $\mathcal{F}(X)$: $$ \mathcal{A}=\left\{\varphi_n[A_{s_1}\times\cdots\times A_{s_n}]:n\in\mathbb{N},\, \forall i\in\{1,\ldots,n\}\, (s_i\in S)\right\}. $$ Then $\mathcal{F}(X)$ is $\mathcal{A}$-cohesive, and the open sets that witness this may be taken from the collection $\mathcal{C}=\left\{\langle U_1,\ldots, U_n\rangle:\forall i\in\{1,\ldots,n\}\, (U_i\in\mathcal{B})\right\}$. \end{lemma} \begin{proof} Let $F\in \mathcal{F}(X)$ and suppose that $F=\{x_1, \ldots ,x_k\}$ with $x_j\neq x_i$ if $i\neq j$. For each $j\in\{1,\ldots,k\}$, let $V_j\in \mathcal{B}$ with $x_j\in V_j$. We can assume that $V_i\cap V_j=\emptyset$ if $i\neq j$. Let $V=\langle V_1,\ldots, V_k\rangle$ and note that $F\in V$. We claim that $V$ does not contain non-empty proper clopen subsets of any element of $\mathcal{A}$. Suppose there are $s_1,\ldots, s_m \in S$ such that $V$ contains a non-empty proper clopen subset $O$ of $\varphi_m[A_{s_1}\times\cdots\times A_{s_m}]$. As $V\cap\mathcal{F}_{k-1}(X)=\emptyset$, it follows that $m\geq k$. If $i\in(k,m]$, we define $V_i=V_k$. In this way, $V=\langle V_1,\ldots, V_m\rangle$. Thus $O\cap \mathcal{F}_m(X)$ is a clopen subset of $ \mathcal{F}_m(X)$. Let $x=(x_1, \ldots,x_{k}, x_{k+1},\ldots, x_m)$ where $x_k=x_{k+1}=\ldots =x_m$. Note that $x\in V_1\times \ldots \times V_m $.
Let $g_m=\varphi_m\restriction A_{s_{1}}\times \ldots \times A_{s_{{m}}}$ and $C=g_m^{\leftarrow}[O\cap \mathcal{F}_m(X)]\cap (V_1\times \ldots \times V_m)$. By Proposition 2.6 of \cite{zaragoza}, $C$ is a clopen subset of $A_{s_{1}}\times \ldots \times A_{s_{{m}}}$ such that $C\subset V_1\times \ldots \times V_m $; this is a contradiction (see \cite[Remark 5.2]{van}). \end{proof} \section{Proof of the theorem} By Theorem \ref{EE} there are a topology $\mathcal{W}$ for $\mathfrak{E}$ which witnesses the almost zero-dimensionality of $\mathfrak{E}$, a countable tree $T$, a family of sets $\mathcal{E}=\{ E_s :s\in T\}$ which are closed with respect to $\mathcal{W}$, and for each $x\in\mathfrak{E}$ an open neighborhood $U_x$ that satisfy the conditions of Theorem \ref{EE} for $\mathfrak{E}$. Let $\mathcal{W}_n$ be the topology of $\mathcal{F}_n((\mathfrak{E},\mathcal{W}))$, let $T^n=\{s_1*\ldots *s_n: s_1, \ldots, s_n\in T \textit{ and } \vert s_1\vert =\ldots =\vert s_n\vert\}$, and for each $s_1*\ldots *s_n\in T^n$ let $H_{s_1*\ldots* s_n}$ be the subset $\varphi_n[E_{s_1}\times\cdots\times E_{s_n}]$ of $\mathcal{F}_n(\mathfrak{E})$. We need to define the neighborhoods that will work as anchors. However, in Theorem 1.2 of \cite{zaragoza}, the neighborhoods that we proposed as anchors depend on the branch. In the following lemma we correct that mistake. Let $F=\{x_1,\ldots,x_k\}\in\mathcal{F}(\mathfrak{E})$. For each $j\leq k$, let $U_{x_j} $ be a neighborhood of $ x_j $ which is an anchor in $(\mathfrak{E},\mathcal{W})$. Let $\mathcal{U}_F=\langle U_{x_1},\ldots, U_{x_k}\rangle$. \begin{lemma}\label{ancla} If $F\in\mathcal{F}_n(\mathfrak{E})$, the set $\mathcal{U}_F\cap \mathcal{F}_n(\mathfrak{E})$ is an anchor for $F$ in $(\mathcal{F}_n(\mathfrak{E}),\mathcal{W}_n)$. \end{lemma} \begin{proof} Let $ F=\{x_1,\ldots,x_k\}\in\mathcal{F}_n(\mathfrak{E})$, and let $\mathcal{U}_F$ be defined as above. Consider $ t \in [T^n] $.
If there exists $i\in \omega$ such that $\mathcal{U}_F\cap\varphi_n[E_{t_1\restriction i}\times \ldots \times E_{t_n\restriction i}]=\emptyset$ then we are finished. Now suppose that for each $ i \in \omega $ $$ (*)\textit{ }\mathcal{U}_F \cap \varphi_n[E_{t_1\restriction i} \times \ldots \times E_{t_n\restriction i}] \neq\emptyset.$$ We claim that for all $ i \in \omega $ and $ j \leq n $, there exists $l(i,j)\leq k$ such that $U_{x_{l(i,j)}}\cap E_{t_j\restriction i} \neq \emptyset$. By $(*)$ there exists $(y_1,\ldots, y_n)\in E_{t_1\restriction i} \times \ldots \times E_{t_n\restriction i}$ such that $$\{y_1,\ldots, y_n\}=\varphi_n(y_1,\ldots, y_n)\in \mathcal{U}_F=\langle U_{x_1},\ldots, U_{x_k}\rangle.$$ So there is $l\leq k$ such that $y_j\in U_{x_l}$; this proves the claim. If we fix $j$, this defines a function $i \mapsto l(i,j) $ with domain $\omega $ and codomain $\{1,\ldots, k\}$. This implies that there exist an infinite set $ A \subset \omega$ and a fixed $l_j\in \{1,\ldots,k\} $ such that $E_{t_j\restriction i}\cap U_{x_{l_j}}\neq\emptyset$ for each $i\in A$. As $E_{t_j\restriction t} \subset E_{t_j\restriction s}$ if $s<t$, it follows that $E_{t_j\restriction i}\cap U_{x_{l_j}}\neq\emptyset$ for all $i\in \omega$. Since $U_{x_{l_j}}$ is an anchor, $\{E_{t_j\restriction i}:i\in \omega\}$ converges in $(\mathfrak{E},\mathcal{W})$ to a point $p_j\in \mathfrak{E}$; this holds for all $j\in\{1,\ldots,n\}$. Therefore $\{E_{t_1\restriction i} \times \ldots\times E_{t_n\restriction i}: i \in \omega\}$ converges in $(\mathfrak{E}^n,\mathcal{W}^n)$ to $(p_1,\ldots,p_n)$. So $\{\varphi_n[E_{t_1\restriction i} \times \ldots\times E_{t_n\restriction i}]: i \in \omega\}$ converges in $\mathcal{F}_n((\mathfrak{E},\mathcal{W}))$ to $\varphi_n(p_1,\ldots,p_n)$. \end{proof} We are ready to prove the theorem.
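Before the proof, the behaviour of the maps $\varphi_n$ used throughout can be made concrete on a toy example (a sketch for illustration only; the three-point set below stands in for an arbitrary space): $\varphi_n$ sends each tuple to the set of its coordinates, maps $X^n$ onto $\mathcal{F}_n(X)$, and has finite fibers.

```python
from itertools import combinations, product

def phi(point):
    # phi_n maps an n-tuple to the finite set of its coordinates.
    return frozenset(point)

X = {0, 1, 2}
n = 2
tuples = list(product(X, repeat=n))  # the product X^n

# F_n(X): non-empty subsets of X of cardinality at most n.
F_n = {frozenset(c) for k in range(1, n + 1) for c in combinations(X, k)}

image = {phi(t) for t in tuples}  # phi_n is onto F_n(X)

fibers = {}
for t in tuples:
    fibers.setdefault(phi(t), []).append(t)  # each fiber is finite
```

Here every fiber has at most two elements (the two orderings of a pair), matching the fact that $\varphi_2$ is finite-to-one.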
\begin{proof}[Proof of Theorem \ref{FE}] In Theorem 1.2 of \cite{zaragoza} it is proved that for each $n\in \mathbb{N}$ the topology $\mathcal{W}_n$, the tree $T^n$ and the collection $\mathcal{E}_n=\{\varphi_n[E _{s_1}\times \ldots \times E_{s_n }]: s_1*\ldots *s_n\in T^n \}$ satisfy conditions (1), (3) and (4) of Theorem \ref{EE} for $\mathcal{F}_n(\mathfrak{E})$. Also, by Lemma \ref{ancla}, the sets $\{\mathcal{U}_F:F\in\mathcal{F}_n(\mathfrak{E})\}$ are anchors, so condition (2) of Theorem \ref{EE} for $\mathcal{F}_n(\mathfrak{E})$ is also satisfied. Consider the tree $$T=\{\emptyset\}\cup \{\langle n\rangle^{\frown} s_1*\ldots *s_n :n\in\ensuremath{\mathbb{N}},\, s_1*\ldots *s_n\in T^n\}.$$ Let $X_{\emptyset}=\mathcal{F}(\mathfrak{E})$ and $X_{\langle n\rangle^{\frown} s_1*\ldots *s_n }= \varphi_n[E _{s_1}\times \ldots \times E_{s_n }]$ for each $n\in \ensuremath{\mathbb{N}}$ and $s_1*\ldots *s_n \in T^n$. Let $\mathcal{W}^\prime$ be the Vietoris topology on $\mathcal{F}(\mathfrak{E}, \mathcal{W})$. We will prove that $\mathcal{W}^\prime$, $T$, $\mathcal{S}=\{X_s: s\in T\}$ and $\{\mathcal{U}_F: F\in \mathcal{F}(\mathfrak{E})\}$ satisfy the conditions required in Theorem \ref{EE} for $\mathcal{F}(\mathfrak{E})$. Indeed, the fact that the Vietoris topology on $\mathcal{F}(\mathfrak{E}, \mathcal{W})$ witnesses that $\mathcal{F}(\mathfrak{E})$ is almost zero-dimensional follows from the proof of Proposition 2.2 in \cite{zaragoza}. On the other hand, for each $s_1*\ldots * s_n \in T^n$, $\varphi_n[E_{s_1}\times \ldots \times E_{s_n}]$ is a closed subset of $\mathcal{F}_n(\mathfrak{E}, \mathcal{W})$, hence $\varphi_n[E_{s_1}\times \ldots\times E_{s_n}]$ is closed in $\mathcal{F}(\mathfrak{E}, \mathcal{W})$, because $\mathcal{F}_n(\mathfrak{E}, \mathcal{W})$ is closed in $\mathcal{F}(\mathfrak{E}, \mathcal{W})$.
For $\emptyset\in T$, we have that $succ_T(\emptyset)= \{\langle n \rangle^\frown \emptyset_1*\ldots * \emptyset_n: n\in \ensuremath{\mathbb{N}},\, \emptyset_1*\ldots * \emptyset_n\in T^n \}$ and $X_{\langle n \rangle^\frown \emptyset_1*\ldots * \emptyset_n}=\mathcal{F}_n(\mathfrak{E})$. Hence $X_{\langle n \rangle^\frown \emptyset_1*\ldots * \emptyset_n}$ is nowhere dense in $X_{\emptyset}$ and $X_{\emptyset}=\bigcup_{t\in succ_T(\emptyset)} X_{t}$. On the other hand, if $\langle n\rangle^{\frown} s_1*\ldots *s_n\in T\setminus \{\emptyset\}$, then $succ_T(\langle n\rangle^{\frown} s_1*\ldots *s_n )=\{ \langle n\rangle^{\frown} t: t\in succ_{T^n}(s_1*\ldots *s_n )\}$. Then for each $\langle n\rangle^{\frown} s_1*\ldots *s_n \in T\setminus \{\emptyset\}$, we have $X_{\langle n\rangle^{\frown} s_1*\ldots *s_n }=\bigcup\{X_{\langle n\rangle^{\frown} t}: t\in succ_{T^n}(s_1*\ldots *s_n)\}$ and $X_{\langle n\rangle^{\frown} t}$ is nowhere dense in $X_{\langle n\rangle^{\frown} s_1*\ldots *s_n}$ if $t\in succ_{T^n}(s_1*\ldots *s_n)$. Thus, conditions (1) and (3) of Theorem \ref{EE} for $\mathcal{F}(\mathfrak{E})$ are satisfied. Now we prove condition (2). Let $F\in\mathcal{F}(\mathfrak{E})$. Suppose that $F=\{x_1,\ldots, x_k\}$ with $x_j\neq x_i$ if $i\neq j$, so $\mathcal{U}_F=\langle U_{x_1},\ldots, U_{x_k}\rangle$. We claim that $\mathcal{U}_F$ is an anchor in $(\mathcal{F}(\mathfrak{E}),\mathcal{W}^\prime)$. Let $\widehat{t}\in[T]$; then there exists $n\in \ensuremath{\mathbb{N}}$ such that $\widehat{t}=\langle n\rangle^\frown \widehat{t}_n$ and $\widehat{t}_n\in [T^n]$. Also, there exist $t_1,\ldots,t_n\in [T]$ such that $\widehat{t}_n=(t_1,\ldots, t_n)$.\\\\ \noindent\textbf{Case 1}: $k\leq n$. By Lemma \ref{ancla}, $\mathcal{U}_F\cap\mathcal{F}_n(\mathfrak{E})$ is an anchor of $F$ in $(\mathcal{F}_n(\mathfrak{E}),\mathcal{W}_n)$. We will show that $\mathcal{U}_F\cap X_{\widehat{t}\restriction i}=\emptyset$ for some $i\in \omega$, or that $(X_{\widehat{t}\restriction i})_{i\in \omega}$ converges in $(\mathcal{F}(\mathfrak{E}),\mathcal{W}^\prime)$.
If $\mathcal{U}_F\cap X_{\widehat{t}\restriction i}=\emptyset$ for some $i\in \omega$, we are finished. If $\mathcal{U}_F\cap X_{\widehat{t}\restriction i}\neq\emptyset$ for each $i\in \omega$, then, as $X_{\widehat{t}\restriction {i+1}}=X_{\langle n\rangle^\frown \widehat{t}_n \restriction i} =\varphi_n[E_{t_1\restriction i}\times \ldots \times E_{t_n\restriction i}]\subset \mathcal{F}_n(\mathfrak{E})$, we have $(\mathcal{U}_F\cap \mathcal{F}_n(\mathfrak{E}))\cap X_{\widehat{t}\restriction i}\neq\emptyset$ for each $i\in \omega$. Hence $(X_{\widehat{t}\restriction i})_{i\in \omega}$ converges in $(\mathcal{F}(\mathfrak{E}),\mathcal{W}^\prime)$ because $\mathcal{U}_F\cap\mathcal{F}_n(\mathfrak{E})$ is an anchor of $F$ in $(\mathcal{F}_n(\mathfrak{E}),\mathcal{W}_n)$. So $\mathcal{U}_F$ is an anchor of $F$ in $(\mathcal{F}(\mathfrak{E}),\mathcal{W}^\prime)$. \\\\ \noindent \textbf{Case 2}: $k> n$. For each $j\in (n,k]$, let us set $t_j=t_n$, that is, $t_{n+1}=\ldots =t_k=t_n$. Let $\widehat{s}=(t_1,\ldots, t_n, t_{n+1}, \ldots, t_k)$; then $\widehat{s}\in [T^k]$. From Case 1, we have that $X_{\langle k\rangle^\frown\widehat{s}\restriction j} \cap \mathcal{U}_F=\emptyset$ for some $j\in\omega$ or $(X_{\langle k\rangle^\frown\widehat{s}\restriction j} )_{j\in \omega}$ converges. We claim that either $X_{\widehat{t}\restriction j} \cap \mathcal{U}_F=\emptyset$ for some $j\in\omega$ or $(X_{\widehat{t}\restriction j} )_{j\in \omega}$ converges. Note that for all $j\in\omega$ $$(*)\textit{ }\varphi_n[ E_{t_1\restriction j}\times \cdots\times E_{t_n\restriction j}]\subset \varphi_k[ E_{t_1\restriction j}\times \ldots\times E_{t_n\restriction j}\times E_{t_{n+1}\restriction j}\times\cdots\times E_{t_k\restriction j}].$$ Let us suppose that $X_{\widehat{t}\restriction j} \cap \mathcal{U}_F\neq\emptyset$ for each $j\in \omega$.
By $(*)$ we have $X_{\langle k\rangle^\frown\widehat{s}\restriction j} \cap \mathcal{U}_F\neq\emptyset$ for each $j\in\omega$, so $(X_{\langle k\rangle^\frown\widehat{s}\restriction j})_{j\in \omega}$ converges to some $A\in\mathcal{F}_k(\mathfrak{E})$. Hence $(\varphi_n[ E_{t_1\restriction j}\times \cdots\times E_{t_n\restriction j}])_{j\in \omega}$ converges to $A$, and therefore $(X_{\widehat{t}\restriction j} )_{j\in \omega}$ converges to $A$.\\\\ Thus, condition (2) of Theorem \ref{EE} for $\mathcal{F}(\mathfrak{E})$ is satisfied. By Lemma \ref{Coh}, $\mathcal{F}(\mathfrak{E})$ is $\{X_{\langle n\rangle^{\frown} s} : n\in \ensuremath{\mathbb{N}}, s\in T^n\}$-cohesive. Let $F\in \mathcal{F}(\mathfrak{E})$ and let $\mathcal{V}\in \mathcal{C}$ be an open set containing $F$, given by Lemma \ref{Coh}, that does not contain non-empty proper clopen subsets of any element of $\{X_{\langle n\rangle^{\frown} s} : n\in \ensuremath{\mathbb{N}}, s\in T^n\}$. We claim that $\mathcal{V}$ does not contain non-empty proper clopen subsets of $\mathcal{F}(\mathfrak{E})$. Suppose otherwise; then there is a non-empty clopen subset $\mathcal{O}$ of $\mathcal{F}(\mathfrak{E})$ such that $\mathcal{O}\subset \mathcal{V}$. Note that for each $n\in \mathbb{N}$ and $s\in T^n$ we have $\mathcal{O}\cap X_{\langle n\rangle^{\frown} s}=\emptyset$, because $\mathcal{V}$ does not contain non-empty proper clopen subsets of any element of $\{X_{\langle n\rangle^{\frown} s} : n\in \ensuremath{\mathbb{N}}, s\in T^n\}$. Thus, $\mathcal{O}$ is empty, and this is a contradiction. This implies that in fact $\mathcal{F}(\mathfrak{E})$ is $\{X_t:t\in T\}$-cohesive. Thus, condition (4) of Theorem \ref{EE} for $\mathcal{F}(\mathfrak{E})$ is satisfied. Then by Theorem \ref{EE} we have that $\mathcal{F}(\mathfrak{E})$ is homeomorphic to $\mathfrak{E}$. \end{proof} \section{An application to Sierpiński stratifications} A system of sets $(X_s)_{s\in T}$ is a Sierpiński stratification (\cite[Definition 7.1, p.
31]{ME}) of a space $X$ if: \begin{enumerate} \item $T$ is a non-empty tree over a countable alphabet, \item each $X_s$ is a closed subset of $X$, \item $X_{\emptyset}=X$ and $X_s=\bigcup\{X_t:t\in succ(s)\}$ for each $s\in T$, \item if $\sigma\in [T]$ then the sequence $X_{\sigma\restriction 1},\ldots, X_{\sigma\restriction n},\ldots$ converges to a point in $X$. \end{enumerate} Note that in Theorem 1.1 of \cite{zaragoza} it is implicitly proved that if $(X_s)_{s\in T}$ is a Sierpiński stratification of a space $X$, then $\mathcal{C}_n=\{\varphi_n[X_{s_1}\times \ldots\times X_{s_n}]: s_1*\ldots *s_n\in T^n \}$ is a Sierpiński stratification of $\mathcal{F}_n(X)$. Define $X_\emptyset=X$ and $ X_{\langle n\rangle^{\frown} s_1*\ldots *s_n }=\varphi_n[X_{s_1}\times \ldots\times X_{s_n}]$ for $n\in\ensuremath{\mathbb{N}}$ and $s_1*\ldots *s_n\in T^n$. Then by an argument similar to the proof of Theorem \ref{FE} we obtain that $\mathcal{C}=\{X_\emptyset\}\cup\{X_{\langle n\rangle^{\frown} s_1*\ldots *s_n }:n\in \ensuremath{\mathbb{N}}, s_1*\ldots *s_n\in T^n\}$ is a Sierpiński stratification of $\mathcal{F}(X)$. So we have the following corollary. \begin{cor}\label{se} Let $(X_s)_{s\in T}$ be a Sierpiński stratification of a space $X$. Then \begin{enumerate} \item for each $n\in \mathbb{N}$, $\mathcal{C}_n$ is a Sierpiński stratification of $\mathcal{F}_n(X)$, and \item $\mathcal{C}$ is a Sierpiński stratification of $\mathcal{F}(X)$. \end{enumerate} \end{cor} Van Engelen proved in \cite[Theorem A.1.6]{van} that a zero-dimensional space $X$ is homeomorphic to $\mathbb{Q}^\omega$ if $X$ has a Sierpiński stratification $(X_s)_{s\in T}$ such that $X_t$ is nowhere dense in $X_s$ whenever $t\in succ(s)$. Using this fact, the following corollary can be proved. \begin{cor} The spaces $\mathcal{F}_n(\mathbb{Q}^\omega)$ and $\mathcal{F}(\mathbb{Q}^\omega)$ are homeomorphic to $\mathbb{Q}^\omega$.
\end{cor} \begin{proof} Let $(X_s)_{s\in T}$ be a Sierpiński stratification of $\mathbb{Q}^\omega$ such that $X_t$ is nowhere dense in $X_s$ whenever $t\in succ(s)$. By Corollary \ref{se}, $\mathcal{C}_n$ (for each $n\in \mathbb{N}$) and $\mathcal{C}$ are Sierpiński stratifications of $\mathcal{F}_n(\mathbb{Q}^\omega)$ and $\mathcal{F}(\mathbb{Q}^\omega)$, respectively. Notice that $\varphi_n[X_{t_1}\times \ldots\times X_{t_n}]$ is nowhere dense in $\varphi_n[X_{s_1}\times \ldots\times X_{s_n}]$ if $t_1*\ldots* t_n\in succ(s_1*\ldots* s_n)$, and that $ X_{\langle n\rangle^{\frown} t_1*\ldots *t_n}$ is nowhere dense in $X_{\langle n\rangle^{\frown} s_1*\ldots *s_n}$ if $\langle n\rangle^{\frown} t_1*\ldots *t_n\in succ(\langle n\rangle^{\frown} s_1*\ldots *s_n)$. \end{proof} David Lipham proved in \cite{lipham} that a space $X$ is an $\mathfrak{E}$-factor if and only if it admits a Sierpi\'nski stratification $(B_s)_{s \in T}$. We can consider the sets $B_s$ as subsets of $(X, \mathcal{W})$, where $\mathcal{W}$ is a topology witnessing the almost zero-dimensionality of $X$. A natural question is the following. \begin{ques} If $X$ is an $\mathfrak{E}$-factor, under what conditions is $(X, \mathcal{W})$ homeomorphic to $\mathbb{Q}^\omega$? \end{ques} The next natural question in this direction is whether the hyperspace $\mathcal{K}(\mathfrak{E})$ is homeomorphic to $\mathfrak{E}$. However, in a personal communication Professor Jan van Mill explained that $\mathcal{K}(\mathfrak{E})$ is not Borel. The argument goes as follows. By Theorem 9.2 in \cite{ME} there is a closed copy of $\mathbb{Q}$ in $\mathfrak{E}$. Thus, there is a closed copy of $\mathcal{K}(\mathbb{Q})$ inside $\mathcal{K}(\mathfrak{E})$. However, $\mathcal{K}(\mathbb{Q})$ is not Borel by a result of Hurewicz (see \cite[33.5]{Ke}). This implies that $\mathcal{K}(\mathfrak{E})$ is not Borel. However, there is another direction that is worth exploring.
Michalewski proved in \cite{mich} that $\mathcal{K}(\mathbb{Q})$ is a topological group. Also, since $\mathfrak{E}$ is homogeneous, the following is a direct consequence of Theorem \ref{FE}. \begin{cor} $\mathcal{F}(\mathfrak{E})$ is homogeneous. \end{cor} Thus, the following is a natural question. \begin{ques} Is $\mathcal{K}(\mathfrak{E})$ homogeneous? \end{ques}
1305.7364
\section{Introduction} Bursting is an important signaling component of neurons, characterized by a periodic {alternation} of bursts and quiescent periods. Bursts are {transient} but high-frequency trains of spikes, contrasting with the absence of spikes during the quiescent periods. Bursting activity has been recorded in many neurons, both in vitro and in vivo, and electrophysiological recordings show a great variety of bursting time {series}. All neuronal bursters nevertheless share a sharp separation between three different time scales: a fast time scale for spike generation, a slow time scale for the intraburst spike frequency, and an ultra-slow time scale for the interburst frequency. Many neuronal models exhibit bursting in some parameter range and many bursting models have been analyzed through bifurcation theory, but the exact {mechanisms modulating neuronal} bursting are still poorly understood, both mathematically and physiologically. In particular, modeling the route to bursting, that is, the {physiologically observed modulation} from a regular pacemaking activity to a bursting activity, has remained elusive to date. {Also, many efforts have been devoted to classifying different types of bursters} \cite{Rinzel1987b,Bertram1995,Golubitsky2001,Izhikevich2000}. {But the mathematical mechanisms that allow the same neuron to be modulated across different types are rarely studied, despite their physiological role in homeostatic cell regulation and development} \cite{Liu1998}. As an attempt to advance the mathematical understanding of neuronal bursting, the present paper exploits the particular structure of conductance-based neuronal models to {address}, with a local analysis tool, the global structure of {bursting} attractors.
Rooted in the seminal work of Hodgkin and Huxley \cite{Hodgkin1952}, conductance-based models are nonlinear RC circuits consisting of one capacitance (modeling the cell membrane) in parallel with possibly many voltage sources with voltage-dependent conductances (each modeling a specific ionic current). The variables of the model are the membrane potential ($V$) and the gating (activation and inactivation) variables that model the kinetics of each ion channel. The vast diversity of ion channels involved in a particular neuron type leads to high-dimensional models, but all conductance-based models share two central structural assumptions: \begin{itemize} \item[] {\it (i)} a classification of gating variables into three well separated time scales (fast variables, in the range of the membrane potential time scale $\sim 1\,ms$; slow variables, $5$ to $10$ times slower; and ultra-slow variables, $10$ to hundreds of times slower), which roughly correspond to the three time scales of neuronal bursting; \item[] {\it (ii)} each voltage-regulated gating variable $x$ obeys the first-order monotone dynamics $\tau_x(V)\dot x=-x+x_\infty(V)$, which implies that, at steady state, every voltage-regulated gating variable is an explicit monotone function of the membrane potential, that is, $x=x_\infty(V)$. \end{itemize} Our analysis of neuronal bursting rests on these two structural assumptions. Assumption {\it (i)} suggests a three-time-scale singularly perturbed bursting model, whose singular limit provides the skeleton of the bursting attractor. Assumption {\it (ii)} implies that the equilibria of arbitrary conductance-based models are determined by Kirchhoff's law (currents sum to zero in the circuit), which provides a single algebraic equation in the sole scalar variable $V$. This remarkable feature calls for singularity theory \cite{Golubitsky1985} to understand the equilibrium structure of the model.
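Assumption {\it (ii)} can be checked numerically: at a clamped voltage, a gating variable governed by $\tau_x(V)\dot x=-x+x_\infty(V)$ relaxes to $x_\infty(V)$. The sketch below uses a generic Boltzmann curve for $x_\infty$ and a constant time constant; both are illustrative placeholders rather than parameters of any specific channel model.

```python
import math

def x_inf(V, V_half=-40.0, k=5.0):
    # Generic monotone (Boltzmann) steady-state activation curve.
    return 1.0 / (1.0 + math.exp(-(V - V_half) / k))

def relax_gate(V, x0=0.0, tau=10.0, dt=0.01, t_end=100.0):
    # Forward-Euler integration of tau * dx/dt = -x + x_inf(V)
    # at a clamped membrane potential V.
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (-x + x_inf(V)) / tau
    return x

# After ~10 time constants the gate has converged to x_inf(V), which is
# exactly the steady-state relation x = x_inf(V) of assumption (ii).
```

Monotonicity of $x_\infty$ is what later allows every gating variable to be eliminated in favor of $V$ at steady state.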
{The results of jointly exploiting timescale separation and singularity theory for neuronal bursting modeling provide the following specific contributions: $-$ The universal unfolding of the winged-cusp singularity is shown to organize a three-time-scale burster. The three-level hierarchy of singularity theory dictates the hierarchy of timescales: the state variable of the bifurcation problem is the fast variable, the bifurcation parameter is the slow variable, and the unfolding parameter(s) are the ultra-slow variable(s). Because the geometric construction is grounded in the algebraic and timescale structure of conductance-based models, the proposed model can be related to detailed conductance-based models through mathematical reduction. We provide general conditions for this mathematical model to be a normal form reduction of an arbitrary conductance-based model. Both the bifurcation parameter and the unfolding parameters have a clear physiological interpretation. $-$ The bifurcation parameter is directly linked to the balance between restorative and regenerative slow ion channels, the importance of which was recently studied by the authors in} \cite{Franci2013}. {The modulation of the bifurcation parameter in the proposed three-time-scale model provides a geometrically and physiologically meaningful transition from slow tonic spiking to bursting. This ``route to bursting'' is known to play a significant role in central nervous system activity} \cite{Sherman2001,Viemari2006,Beurrier1999}. {Its mathematical modeling appears to be novel. $-$ The three unfolding parameters modulate, on an even slower time scale, the fast-slow phase portrait of the three-time-scale burster. The affine parameter plays the classical role of an adaptation current that hysteretically modulates the slow-fast phase portrait across a parameter range where a stable resting state and a stable spiking limit cycle coexist, thereby creating the bursting attractor.
The two remaining unfolding parameters can modulate the bursting attractor across a continuum of bursting types. As a result, transitions between different bursting waveforms, observed for instance in developing neurons} \cite{Liu1998}, {are geometrically captured as paths in the unfolding space of the winged cusp. The physiological interpretation of this modulation is a straightforward consequence of the clear physiological interpretation of each unfolding parameter.} The existence of three-time-scale bursters in the abstract unfolding of a winged cusp is presented in Section 2. Section 3 focuses on a minimal reduced model of neuronal bursting and uses the insight of singularity theory to describe a physiological route to bursting in this model. Section 4 shows how to trace the same geometry in arbitrary conductance-based models. Section \ref{SEC: why the winged cusp} discusses in a less technical way the relevance of the winged-cusp singularity for the modeling of bursting modulation. The technical details of the mathematical proofs are presented in an appendix. \section{Universal unfolding and multi-time scale attractors} \subsection{A primer on singularity theory} We introduce here some notation and terminology that will be used extensively in the paper. The interested reader is referred to the main results of Chapters I-IV in \cite{Golubitsky1985} for a comprehensive exposition of the singularity theory used in this paper. Singularity theory studies scalar bifurcation problems of the form \begin{equation}\label{EQ: generic bif problem} g(x,\lambda)=0,\quad x,\lambda\in\mathbb R, \end{equation} where $g$ is a smooth function. The variable $x$ denotes the state and $\lambda$ is the bifurcation parameter. The set of pairs $(x,\lambda)$ satisfying (\ref{EQ: generic bif problem}) is called the {\it bifurcation diagram}. {\it Singular points} satisfy $g(x^\star,\lambda^\star)=\frac{\partial g}{\partial x}(x^\star,\lambda^\star)=0$.
Indeed, if $\frac{\partial g}{\partial x}(x^\star,\lambda^\star)\neq0$, then the implicit function theorem applies and the bifurcation diagram is necessarily regular at $(x^\star,\lambda^\star)$. Except for the fold $x^2\pm\lambda=0$, bifurcations are not generic, that is, they do not persist under small perturbations. Singularity theory is a robust bifurcation theory: it aims at classifying all possible persistent bifurcation diagrams that can be obtained by small perturbations of a given singularity. A {\it universal unfolding} of $g(x,\lambda)$ is a parametrized family of functions $G(x,\lambda; \alpha)$, where $\alpha$ lies in the unfolding parameter space $\mathbb R^k$, such that \begin{itemize} \item[] 1) $G(x,\lambda;0)=g(x,\lambda)$, \item[] 2) given any $p(x)$ and any small $\mu>0$, one can find an $\alpha$ near the origin such that the two bifurcation problems $G(x,\lambda;\alpha)=0$ and $g(x,\lambda)+\mu p(x)=0$ are qualitatively equivalent, \item[] 3) $k$ is the minimum number of unfolding parameters needed to reproduce all perturbed bifurcation diagrams of $g(x,\lambda)$; $k$ is called the codimension of $g(x,\lambda)$. \end{itemize} Unfolding parameters are not bifurcation parameters. Instead, they change the qualitative bifurcation diagram of the perturbed bifurcation problem $G(x,\lambda;\alpha)=0$. That is why $\lambda$ is a distinguished parameter in the theory. Historically, this parameter was associated with a slow time, whose evolution lets the dynamics visit the bifurcation diagram in a quasi-steady-state manner.
It will play the same role in the present paper, where we only consider two singularities and their universal unfoldings:\\ the codimension-1 hysteresis \begin{equation}\label{EQ: stable hy} g_{hy}^s(x,\lambda)=-x^3-\lambda, \end{equation} whose universal unfolding is shown to be \cite[Chapter IV]{Golubitsky1985} \begin{equation}\label{EQ: stable hy unfold} G_{hy}^s(x,\lambda;\ \beta)=-x^3-\lambda+\beta x, \end{equation} and the codimension-3 winged cusp \begin{equation}\label{EQ: stable cusp} g_{wcusp}^s(x,\lambda)=-x^3-\lambda^2, \end{equation} whose universal unfolding is shown to be \cite[Section III.8 and Chapter IV]{Golubitsky1985} \begin{equation}\label{EQ: stable cusp unfold} G_{wcusp}^s(x,\lambda;\ \alpha,\beta,\gamma)=-x^3-\lambda^2+\beta x -\gamma\lambda x -\alpha. \end{equation} The universal unfolding of a singularity of codimension $\geq 1$ contains codimension-1 bifurcations. For instance, the universal unfolding of the winged cusp possesses hysteresis bifurcations on the unfolding-parameter hypersurface defined by $\alpha\gamma^2+\beta=0$, $\alpha\leq0$. Even though such bifurcation diagrams are not persistent, they define {\it transition varieties} that separate equivalence classes of persistent bifurcation diagrams, hence providing a complete classification of persistent bifurcation diagrams. An unperturbed bifurcation problem assumes the suggestive role of an {\it organizing center}: all the perturbed bifurcation diagrams are determined and organized by the unperturbed bifurcation diagram, {which constitutes the most singular situation}. Via the inspection of local algebraic conditions at the singularity, an organizing center provides a {quasi-global} description of all possible perturbed bifurcation diagrams.
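The defining conditions of these organizing centers are easy to verify numerically (a sketch with finite-difference derivatives; the value $\beta=0.5$ is an arbitrary nonzero choice, not one used in the paper): the winged cusp satisfies $g=g_x=g_\lambda=0$ at the origin, while switching on the unfolding term $\beta x$ makes $G_x\neq 0$ there, removing the degeneracy.

```python
def g_wcusp(x, lam):
    # The winged-cusp singularity g(x, lambda) = -x^3 - lambda^2.
    return -x**3 - lam**2

def G_wcusp(x, lam, alpha, beta, gamma):
    # Its universal unfolding -x^3 - lambda^2 + beta*x - gamma*lambda*x - alpha.
    return -x**3 - lam**2 + beta * x - gamma * lam * x - alpha

def d_dx(f, x, lam, h=1e-6):
    # Central finite difference with respect to x.
    return (f(x + h, lam) - f(x - h, lam)) / (2 * h)

def d_dlam(f, x, lam, h=1e-6):
    # Central finite difference with respect to lambda.
    return (f(x, lam + h) - f(x, lam - h)) / (2 * h)

# At the origin, g, g_x and g_lam all vanish: a singular point.
# With beta = 0.5 and alpha = gamma = 0, G_x(0,0) = beta != 0, so the
# implicit function theorem applies and the diagram is regular there.
G_b = lambda x, lam: G_wcusp(x, lam, 0.0, 0.5, 0.0)
```

The same finite-difference probes can be used to locate the transition varieties numerically.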
\subsection{The hysteresis singularity and spiking oscillations} \label{SSEC: rel osci in hyst} The hysteresis singularity has a universal unfolding $-x^3-\lambda+\beta x$, with the persistent bifurcation diagram plotted in Figure \ref{FIG 1}A for $\beta>0$. We use this algebraic curve to generate the phase portrait in Fig. \ref{FIG 1}B of the two-time-scale model \begin{IEEEeqnarray}{rCl}\label{EQ: hy dynamics} \dot x&=&G_{hy}^s(x,\lambda+y;\ \beta)\IEEEyessubnumber\\ &=&-x^3+\beta x-\lambda-y\nonumber\\ \dot y&=&\varepsilon(x-y)\IEEEyessubnumber \end{IEEEeqnarray} \begin{figure}[h!] \center \includegraphics[width=0.9\textwidth]{Fig1} \caption{{\bf Relaxation oscillations in the universal unfolding of the hysteresis bifurcation.} {\bf A.} A persistent bifurcation diagram of the hysteresis singularity. Branches of stable (resp. unstable) fixed points are depicted as full (resp. dashed) lines. {\bf B.} Through a slow adaptation of the bifurcation parameter, the bifurcation diagram in A. is transformed into the phase plane of a two-dimensional dynamical system, which still defines a universal unfolding of the hysteresis singularity. The thick full line is the fast subsystem nullcline. The thin full line is the slow subsystem nullcline. The circle denotes an unstable fixed point. For small $\beta>0$, the model exhibits exponentially stable relaxation oscillations (depicted in light blue).}\label{FIG 1} \end{figure} \noindent Because $y$ is a slow variable, it acts as a slowly varying modulation of the bifurcation parameter in the fast dynamics (\ref{EQ: hy dynamics}a). As a consequence, the global analysis of system (\ref{EQ: hy dynamics}) reduces to a quasi-steady-state bifurcation analysis of (\ref{EQ: hy dynamics}a), hence the relationship between Fig. \ref{FIG 1}A and Fig. \ref{FIG 1}B.
{The following (well-known) theorem characterizes a global attractor of} (\ref{EQ: hy dynamics}){, that is, the existence of Van der Pol-type relaxation oscillations in the universal unfolding of the hysteresis.} \begin{theorem}\label{THM: hy rel oscil}{\bf \cite{Grasman1987},\cite{Mishchenko1980},\cite{Krupa2001c}} For $\lambda=0$ and for all $0<\beta<1$, there exists $\bar\varepsilon>0$ such that, for all $\varepsilon\in(0,\bar\varepsilon]$, the dynamical system (\ref{EQ: hy dynamics}) possesses an exponentially stable relaxation limit cycle, which attracts all solutions except the equilibrium at $(0,0)$. \end{theorem} The familiar reader will recognize in (\ref{EQ: hy dynamics}) a famous model of neurodynamics introduced by FitzHugh \cite{FitzHugh1961}. It is the prototypical planar reduction of spiking oscillations. There is therefore a close relationship between the hysteresis singularity and spike generation. {It is worth emphasizing that the relationship between singularity theory} (Fig. \ref{FIG 1}A) {and the two-time-scale phase portrait} (Fig. \ref{FIG 1}B) {imposes choosing the bifurcation parameter, not an unfolding parameter, as the slow variable. It should also be observed that the slow variable enters as a deviation from the bifurcation parameter $\lambda$ rather than replacing the bifurcation parameter itself. Keeping $\lambda$ as the bifurcation parameter of the two-dimensional dynamics} (\ref{EQ: hy dynamics}) {allows one to shape its equilibrium structure according to the universal unfolding of the organizing singularity, in this case the hysteresis, and will play an important role in the next section.} \subsection{The winged cusp singularity and rest-spike bistability} \label{SSEC: rest-spike in wcusp} We repeat the elementary construction of Section \ref{SSEC: rel osci in hyst} for the codimension-3 winged cusp singularity $-x^3-\lambda^2$.
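The relaxation oscillations of Theorem \ref{THM: hy rel oscil} can be explored with a direct simulation (a forward-Euler sketch; $\beta=0.5$, $\varepsilon=0.05$ and the step size are illustrative choices, not values taken from the theorem's proof). After a transient, $x$ sweeps back and forth between the two outer branches of the cubic nullcline, the signature of a relaxation cycle.

```python
def simulate_hysteresis(beta=0.5, eps=0.05, lam=0.0, dt=0.01, t_end=400.0):
    # Forward-Euler integration of the two-time-scale model:
    #   x' = -x^3 + beta*x - lam - y,   y' = eps*(x - y).
    x, y = 0.1, 0.0  # start near the unstable equilibrium at the origin
    xs = []
    for _ in range(int(t_end / dt)):
        dx = -x**3 + beta * x - lam - y
        dy = eps * (x - y)
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
    return xs

xs = simulate_hysteresis()
tail = xs[len(xs) // 2:]  # discard the transient
# On the attractor, x alternately visits both outer branches of the cubic,
# i.e. the fast jumps of a relaxation limit cycle.
```

For $\beta$ outside $(0,1)$ or $\varepsilon$ too large, the same code lets one observe the loss of the relaxation cycle.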
It differs from the hysteresis singularity in the \emph{non-monotonicity} of $g(x,\lambda)$ in the bifurcation parameter, that is, $\frac{\partial (-x^3-\lambda^2)}{\partial\lambda}=-2\lambda$ changes sign at the singularity. Figure \ref{FIG 2}A illustrates an important persistent bifurcation diagram in the unfolding of the winged cusp, obtained for $\gamma=0$, $\beta>0$, and $\alpha<-2\left(\frac{\beta}{3}\right)^{3/2}$. We call it the {\it mirrored hysteresis} bifurcation diagram. The right part ($\lambda>0$) of this bifurcation diagram is essentially the persistent bifurcation diagram of the hysteresis singularity in Figure \ref{FIG 1}A. In that region, $\frac{\partial G_{wcusp}^s}{\partial \lambda}<0$. The left part ($\lambda<0$) is the mirror of the hysteresis and, in that region, $\frac{\partial G_{wcusp}^s}{\partial \lambda}>0$. For $\gamma\neq 0$, the mirroring effect is not perfect, but the qualitative analysis does not change. The hysteresis and its mirror collide in a transcritical singularity for $\alpha=-2\left(\frac{\beta}{3}\right)^{3/2}$. This singularity belongs to the transcritical bifurcation transition variety in the winged cusp unfolding (see Appendix \ref{SEC: trans variety}). The transcritical bifurcation variety plays an important role in the forthcoming analysis. \begin{figure}[h!] \center \includegraphics[width=0.9\textwidth]{Fig2} \caption{{\bf Singularly perturbed rest-spike bistability in the universal unfolding of the winged cusp.} {\bf A.} {\it Mirrored hysteresis} persistent bifurcation diagram of the winged cusp for $\beta>0$, $\alpha<-2\left(\frac{\beta}{3}\right)^{3/2}$, and $\gamma=0$.
{\bf B.} A phase plane of (\ref{EQ: cusp dynamics}).}\label{FIG 2} \end{figure} We use the algebraic curve in Figure \ref{FIG 2}A to generate the phase portrait in Figure \ref{FIG 2}B of the two-dimensional model \begin{IEEEeqnarray}{rCl}\label{EQ: cusp dynamics} \dot x&=&G_{wcusp}^s(x,\lambda+y;\ \alpha,\beta,\gamma)\IEEEyessubnumber\\ &=&-x^3+\beta x-(\lambda+y)^2-\gamma(\lambda+y)x -\alpha\nonumber\\ \dot y&=&\varepsilon(x-y).\IEEEyessubnumber \end{IEEEeqnarray} Its fixed point equation \begin{equation}\label{EQ: cusp dyn fp} F(x,\lambda,\alpha,\beta,\gamma):=-x^3+\beta x-(\lambda+x)^2-\gamma(\lambda+x)x-\alpha \end{equation} is easily shown to be again a universal unfolding of the winged cusp around $x_{wcusp}:=\frac{1}{3}$, $\lambda_{wcusp}:=0$, $\alpha_{wcusp}:=-\frac{1}{27}$, $\beta_{wcusp}:=-\frac{1}{3}$, $\gamma_{wcusp}:=-2$. The phase portrait in Fig. \ref{FIG 2}B is a prototype phase portrait of rest-spike bistability: a stable fixed point coexists with a stable relaxation limit cycle. \begin{figure}[h!] \center \includegraphics[width=0.75\textwidth]{Fig3} \caption{{\bf An unfolding of the pitchfork bifurcation variety in (\ref{EQ: cusp dynamics}).} The phase portraits in Figs. \ref{FIG 1} and \ref{FIG 2} both belong to the unfolding of the pitchfork singularity in the center. A smooth deformation of the phase portrait of Fig. \ref{FIG 1} into the phase portrait of Fig. \ref{FIG 2} involves a transcritical bifurcation, which degenerates into a pitchfork for a particular value of the unfolding parameter $\gamma$}.\label{FIG 3} \end{figure} Similarly to the previous section, the analysis of the singularly perturbed model (\ref{EQ: cusp dynamics}) is completely characterized by the bifurcation diagram of Figure \ref{FIG 2}A. This bifurcation diagram provides a skeleton for the rest-spike bistable phase portrait in Figure \ref{FIG 2}B, as stated in the following theorem. Its proof is provided in Section \ref{SSEC: cusp bist proof}.
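The universal-unfolding claim can be verified directly: at the stated point, $F$ must satisfy the winged-cusp defining conditions $F=F_x=F_\lambda=F_{xx}=F_{x\lambda}=0$ together with the nondegeneracy conditions $F_{xxx}\neq 0\neq F_{\lambda\lambda}$. The following quick exact-arithmetic check, with the partial derivatives computed by hand, confirms this (a sketch for the reader, not part of the original argument):

```python
from fractions import Fraction as Fr

# Winged-cusp defining conditions for
# F(x, lam) = -x^3 + beta*x - (lam+x)^2 - gamma*(lam+x)*x - alpha
# at the point stated in the text.
x, lam = Fr(1, 3), Fr(0)
alpha, beta, gamma = Fr(-1, 27), Fr(-1, 3), Fr(-2)

F    = -x**3 + beta * x - (lam + x)**2 - gamma * (lam + x) * x - alpha
F_x  = -3 * x**2 + beta - 2 * (lam + x) - gamma * (lam + 2 * x)
F_l  = -2 * (lam + x) - gamma * x
F_xx = -6 * x - 2 - 2 * gamma
F_xl = Fr(-2) - gamma
F_xxx, F_ll = Fr(-6), Fr(-2)   # constant third and second partials
```

All five defining quantities vanish exactly, while $F_{xxx}=-6$ and $F_{\lambda\lambda}=-2$ are nonzero, as required.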
\begin{theorem}\label{THM: cusp bist} For all $\beta>\beta_{wcusp}$, there exist open sets of bifurcation ($\lambda$) and unfolding ($\alpha,\gamma$) parameters near the pitchfork singularity at $(\lambda,\alpha,\gamma)=(\lambda_{PF}(\beta),\alpha_{PF}(\beta),\gamma_{PF}(\beta))$, in which, for sufficiently small $\varepsilon>0$, model (\ref{EQ: cusp dynamics}) exhibits the coexistence of an exponentially stable fixed point $N_s$ and an exponentially stable spiking limit cycle $\ell^\varepsilon$. Their basins of attraction are separated by the stable manifold $W_s^\varepsilon$ of a hyperbolic saddle $S$ {\rm (see Fig. \ref{FIG 2}B)}. \end{theorem} Figure \ref{FIG 3} shows the transition in (\ref{EQ: cusp dynamics}) from the hysteresis phase portrait in Figure \ref{FIG 1}B to the bistable phase portrait in Fig. \ref{FIG 2}B through a transcritical bifurcation. Both phase portraits are generated by unfolding the degenerate portrait in Fig. \ref{FIG 3}, center, which belongs to the pitchfork bifurcation variety $(\alpha,\gamma)=(\alpha_{PF}(\beta),\gamma_{PF}(\beta))$, $\beta>\beta_{wcusp}$ (see Appendix \ref{SEC: trans variety}). The transcritical bifurcation variety $\alpha=\alpha_{TC}(\beta,\gamma)$ is obtained through variations of the unfolding parameter $\gamma$ away from the pitchfork variety. It provides the two phase portraits in Fig. \ref{FIG 3}, center top and bottom. By increasing or decreasing the bifurcation parameter $\lambda$ and decreasing the unfolding parameter $\alpha$ out of the transcritical bifurcation variety, these phase portraits perturb to the generic phase portraits in the corners, corresponding to the qualitative phase portraits in Figs. \ref{FIG 1}B and \ref{FIG 2}B, respectively. The reader of \cite{Franci2012} will recognize the same organizing role of the pitchfork in a planar model of neuronal excitability.
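The coexistence asserted by the theorem can also be observed in a direct simulation. The sketch below uses illustrative, hand-tuned parameter values obtained by inspecting the nullcline geometry of (\ref{EQ: cusp dynamics}) with $\gamma=0$ (they are {\it not} the near-pitchfork values of the theorem): for $\beta=1/3$, $\lambda\approx 0.217$, $\alpha=-0.1$, a stable node, a saddle, and an unstable focus surrounded by a relaxation cycle coexist. Two initial conditions, one in each basin, are integrated:

```python
# Numerical sketch of rest-spike bistability in the planar winged-cusp
# model: dx/dt = -x^3 + beta*x - (lam+y)^2 - gamma*(lam+y)*x - alpha,
# dy/dt = eps*(x - y).  Parameter values are illustrative and hand-tuned
# (NOT the near-pitchfork values of the theorem): they place a stable
# node near x = -0.78, a saddle near x = -0.39, and an unstable focus
# inside a relaxation limit cycle.

BETA, GAMMA, LAM, ALPHA, EPS = 1 / 3, 0.0, 0.2172, -0.1, 0.002

def vf(x, y):
    dx = -x**3 + BETA * x - (LAM + y)**2 - GAMMA * (LAM + y) * x - ALPHA
    return dx, EPS * (x - y)

def simulate(x, y, dt=0.02, n=150_000):
    xs = []
    for _ in range(n):
        k1 = vf(x, y)
        k2 = vf(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
        k3 = vf(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
        k4 = vf(x + dt * k3[0], y + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xs.append(x)
    return xs

rest  = simulate(-0.9, -0.9)   # basin of the stable node N_s
spike = simulate(0.5, 0.0)     # basin of the relaxation limit cycle
tail_r, tail_s = rest[-30_000:], spike[-30_000:]
```

The first trajectory settles onto the resting point while the second keeps oscillating between the two attracting branches of the mirrored-hysteresis nullcline, illustrating singularly perturbed rest-spike bistability.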
\subsection{A three-time scale bursting attractor in the winged cusp unfolding} \label{SSEC: cusp burst} The coexistence of a stable resting state and stable spiking oscillation, or {\it singularly perturbed rest-spike bistability}, makes (\ref{EQ: cusp dynamics}) a good candidate as the slow-fast subsystem of a three-time scale minimal bursting model: \begin{IEEEeqnarray}{rCl}\label{EQ: 3D cusp dynamics} \dot x&=&G_{wcusp}^s(x,\lambda+y;\ \alpha+z,\beta,\gamma)\IEEEyessubnumber\\ &=&-x^3+\beta x-(\lambda+y)^2-\gamma(\lambda+y)x -\alpha-z\nonumber\\ \dot y&=&\varepsilon_1(x-y)\IEEEyessubnumber\\ \dot z&=&\varepsilon_2 (-z + a x+b y + c ),\IEEEyessubnumber \end{IEEEeqnarray} where {$0<\varepsilon_2\ll\varepsilon_1\ll 1$ and $a,b,c\in\mathbb R$}. The {$z$-dynamics} models the ultra-slow adaptation of the affine unfolding parameter $\alpha$, in such a way that the global attractor of (\ref{EQ: 3D cusp dynamics}) will be determined by {a quasi-static modulation} of (\ref{EQ: 3D cusp dynamics}a) through different persistent bifurcation diagrams. {Here, again, the role of singularity theory in distinguishing bifurcation and unfolding parameters is crucial. The hierarchy between these parameters and the state variable, formalized in the theory in} \cite[Definition III.1.1]{Golubitsky1985}, {is reflected here in the hierarchy of timescales.} The time scale separation between (\ref{EQ: 3D cusp dynamics}a-\ref{EQ: 3D cusp dynamics}b) and (\ref{EQ: 3D cusp dynamics}c) makes it possible once again to derive a global analysis of model (\ref{EQ: 3D cusp dynamics}) from the analysis of the steady state behavior of (\ref{EQ: cusp dynamics}) as $\alpha$ is varied. Such analysis can easily be derived geometrically in the singular limit $\varepsilon_1=0$. It is sketched in Figure \ref{FIG 4}.
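Before turning to the singular analysis, a direct simulation already exhibits the bursting attractor. The sketch below extends the hand-tuned bistable parameter set used above for (\ref{EQ: cusp dynamics}) (all numerical values are illustrative choices of this sketch, not values from the text); $(a,b,c)$ are chosen so that the quasi-steady value of $z$ during spiking pulls the effective parameter $\alpha+z$ out of the bistable range on one side, and during rest pulls it out on the other side, producing an ultra-slow hysteretic sweep:

```python
# Sketch of the three-time scale bursting model.  All values are
# illustrative and hand-tuned (not from the text): the (x, y) pair uses
# rest-spike bistable parameters, and (a, b, c) place the z-nullcline so
# that the ultra-slow drift of z sweeps alpha + z back and forth across
# the range where resting point and relaxation cycle coexist.

BETA, GAMMA, LAM, ALPHA = 1 / 3, 0.0, 0.2172, -0.1
EPS1, EPS2 = 0.005, 0.0005
A, B, C = 0.15, 0.0, 0.05

def vf(x, y, z):
    dx = (-x**3 + BETA * x - (LAM + y)**2
          - GAMMA * (LAM + y) * x - ALPHA - z)
    return dx, EPS1 * (x - y), EPS2 * (-z + A * x + B * y + C)

def step(x, y, z, dt):
    k1 = vf(x, y, z)
    k2 = vf(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], z + 0.5 * dt * k1[2])
    k3 = vf(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], z + 0.5 * dt * k2[2])
    k4 = vf(x + dt * k3[0], y + dt * k3[1], z + dt * k3[2])
    return (x + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]),
            z + dt / 6 * (k1[2] + 2 * k2[2] + 2 * k3[2] + k4[2]))

dt, n = 0.04, 300_000
x, y, z = 0.5, 0.0, 0.0
xs = []
for _ in range(n):
    x, y, z = step(x, y, z, dt)
    xs.append(x)

# classify windows of 400 time units as spiking (some excursion above
# x = 0.3) or quiescent; bursting = repeated alternation between the two
win = 10_000
active = [max(xs[i:i + win]) > 0.3 for i in range(0, n, win)]
transitions = sum(a != b for a, b in zip(active, active[1:]))
```

With these values the trajectory alternates between quiescent epochs near the resting branch and epochs of relaxation spiking, i.e., bursting, with several alternations over the simulated horizon.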
{For $\alpha\in(\alpha_{SN},\alpha_{SH}^0)$, the singularly perturbed model} (\ref{EQ: cusp dynamics}) {exhibits rest-spike bistability, that is, the coexistence of a stable node $N_s$, a singular stable periodic orbit $\ell^0$, and a singular saddle separatrix $W_s^0$. At $\alpha=\alpha_{SH}^0=-2\left(\frac{\beta}{3}\right)^{3/2}$ the left and right branches of the mirrored hysteresis bifurcation collide in a transcritical singularity that serves as a connecting point for a singular homoclinic trajectory $SH^0$. For $\alpha>\alpha_{SH}^0$, the only (singular) attractor is the stable node $N_s$. At $\alpha=\alpha_{SN}$, the saddle and the stable node merge in a saddle-node bifurcation $SN$. For $\alpha<\alpha_{SN}$, the only attractor is the singular periodic orbit $\ell^0$.} The different singular invariant sets in Figure \ref{FIG 4}A can be glued together to construct the three-dimensional singular invariant set $\mathcal M^0$ in Figure \ref{FIG 4}B-left. \begin{figure}[h!] \center \includegraphics[width=0.8\textwidth]{Fig4} \caption{{\bf Singular steady-state behavior of (\ref{EQ: cusp dynamics}) through a variation of the unfolding parameter $\alpha$.} {\bf A.} Singular phase portraits of (\ref{EQ: cusp dynamics}) for $\gamma=0$, $\beta=\frac{1}{3}$, and small negative $\lambda$. {\bf B.} Gluing the different invariant sets in {\bf A} leads to the three-dimensional singular invariant set $\mathcal M^0$ (left), which provides a skeleton for a three-time scale bursting attractor (right) in the singularly perturbed system (\ref{EQ: 3D cusp dynamics}). The branch of stable fixed points (resp. saddle points) for $\alpha>\alpha_{SN}$ is drawn as the black solid curve $\mathcal L$ (resp. the black dashed curve $\mathcal S$). The saddle-node bifurcation connecting them is denoted by $\mathcal F$. The branch of unstable fixed points is drawn as the black dashed line $\mathcal U$.
The branch of stable singular periodic orbits for $\alpha<\alpha_{SH}^0$ is drawn as the blue cylindrical surface $P^0$. The singular saddle homoclinic trajectory is drawn as the orange oriented curve $SH^0$.}\label{FIG 4} \end{figure} The singular invariant set $\mathcal M^0$ provides a skeleton for a three-time scale bursting attractor that shadows the branch $\mathcal L$ of stable fixed points in alternation with the branch $P^0$ of (singular) stable periodic orbits, as depicted in Figure \ref{FIG 4}B-right. To prove the existence of such an attractor, we only need to understand how $\mathcal M^0$ perturbs for $\varepsilon_1>0$. \begin{figure}[h!] \center \includegraphics[width=0.5\textwidth]{Fig5} \caption{{\bf Bifurcation diagram of (\ref{EQ: cusp dynamics}) with respect to the unfolding parameter $\alpha$ for sufficiently small $\varepsilon$.} The branch of stable fixed points is depicted as the full thin line $\mathcal{L}$, the branch of saddle points as the dashed thin line $\mathcal S$, and the branch of unstable fixed points as the dashed thin line $\mathcal U$. The branch of stable periodic orbits is depicted by the thick full lines $P^\varepsilon$ and the branch of unstable periodic orbits by the thick dashed lines $Q^\varepsilon$. $SH^\varepsilon$: saddle-homoclinic bifurcation. $\mathcal F_{LC}$: fold limit cycle bifurcation. $\mathcal F$: fold (saddle-node) bifurcation. The yellow strip between the saddle-node and fold limit cycle bifurcation denotes the rest-spike bistable range.}\label{FIG 5} \end{figure} Near the singular limit, the branch of singular periodic orbits $P^0$ perturbs to a nearby branch of exponentially stable periodic orbits $P^\varepsilon$ (see Fig. \ref{FIG 5}), whereas the singular homoclinic trajectory $SH^0$ perturbs to an unstable homoclinic trajectory $SH^\varepsilon$ (at $\alpha=\alpha_{SH}^\varepsilon$).
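The connection value $\alpha_{SH}^0=-2\left(\frac{\beta}{3}\right)^{3/2}$ quoted above can be checked by exact arithmetic: for $\beta=1/3$ it equals $-2/27$, and the $\lambda=0$ section of the fast nullcline then factors as $-(x+1/3)^2(x-2/3)$, the double root at $x=-\sqrt{\beta/3}=-1/3$ being the transcritical connection point of $SH^0$. A short sketch using only standard-library rationals:

```python
from fractions import Fraction as Fr

# At alpha = -2*(beta/3)**(3/2) with beta = 1/3 (so alpha = -2/27), the
# lambda = 0 section of the fast nullcline, -x^3 + beta*x - alpha, must
# have a double root at x = -sqrt(beta/3) = -1/3: the connection point
# of the singular saddle-homoclinic trajectory SH^0.
beta, alpha = Fr(1, 3), Fr(-2, 27)

# coefficients (degree 3 down to 0) of -x^3 + 0*x^2 + beta*x - alpha
section = [Fr(-1), Fr(0), beta, -alpha]

# exact expansion of the claimed factorization -(x + 1/3)^2 (x - 2/3)
sq  = [Fr(1), Fr(2, 3), Fr(1, 9)]        # (x + 1/3)^2
lin = [Fr(1), Fr(-2, 3)]                 # (x - 2/3)
prod = [Fr(0)] * 4
for i, a in enumerate(sq):
    for j, b in enumerate(lin):
        prod[i + j] += a * b
factored = [-c for c in prod]
```

The two coefficient lists agree exactly, confirming the double-root structure at the collision.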
The branch of unstable periodic orbits $Q^\varepsilon$ generated at $SH^\varepsilon$ eventually merges with $P^\varepsilon$ at a fold limit cycle bifurcation $\mathcal F_{LC}$ for some $\alpha_{FLC}^\varepsilon\in(\alpha_{SH}^\varepsilon,\alpha_{SH}^0)$. In the whole range $(\alpha_{SN},\alpha_{FLC}^\varepsilon)$, model (\ref{EQ: cusp dynamics}) exhibits the coexistence of a stable fixed point and a stable spiking limit cycle. The details of this analysis are contained in Lemma \ref{LEM: cusp SH} in Section \ref{SSUB: burst att proof}. We follow \cite{Terman1991,Su2004} to derive conditions on the bifurcation and unfolding parameters in (\ref{EQ: 3D cusp dynamics}a-\ref{EQ: 3D cusp dynamics}b) and {to place the hyperplane $\dot z=0$ (through a suitable choice of the parameters $a,b,c\in\mathbb R$) such that an ultra-slow variation of} $z$ can hysteretically modulate the slow-fast subsystem (\ref{EQ: 3D cusp dynamics}a-\ref{EQ: 3D cusp dynamics}b) across its bistable range $(\alpha_{SN},\alpha_{FLC}^\varepsilon)$ to obtain stable bursting oscillations. The existence of such bursting oscillations is stated in the following theorem. Its proof is provided in Section \ref{SSUB: burst att proof}. \begin{theorem}\label{THM: cusp burst} For all $\beta>\beta_{wcusp}$, there exists an open set of bifurcation ($\lambda$) and unfolding ($\alpha,\gamma$) parameters near the pitchfork singularity at $(\lambda,\alpha,\gamma)=(\lambda_{PF}(\beta),\alpha_{PF}(\beta),\gamma_{PF}(\beta))$ {such that, for all $\lambda,\alpha,\gamma$ in this set, there exist $a,b,c\in\mathbb R$ such that,} for sufficiently small $\varepsilon_1\gg\varepsilon_2>0$, model (\ref{EQ: 3D cusp dynamics}) has a hyperbolic bursting attractor. \end{theorem} {Theorem} \ref{THM: cusp burst} {uses the two regenerative phase portraits in Fig.} \ref{FIG 3} {left to construct a} {\it bursting} {attractor by modulating the unfolding parameter $\alpha$.
The bursting attractor directly rests upon the bistability of those phase portraits. It should be noted that the same construction can be repeated on the restorative phase portraits in Fig.} \ref{FIG 3} {right. However, those phase portraits are monostable and their ultra-slow modulation leads to} {\it slow tonic spiking} ({\it i.e.,} {a single spike necessarily followed by a rest period). This attractor differs from a bursting attractor by the absence of a bistable range in the bifurcation diagrams of Fig.} \ref{FIG 4}. {It can be shown that the persistence of (rest-spike) bistability in the singular limit is a hallmark of regenerative excitability} (Fig. \ref{FIG 3} left) {and that it cannot exist in restorative excitability} (Fig. \ref{FIG 3} right). See \cite{Franci2013} for a more detailed discussion. {Modulation in} (\ref{EQ: 3D cusp dynamics}) {of the bifurcation parameter across the transcritical bifurcation of Fig.} \ref{FIG 3} {therefore provides a geometric transition from the slow tonic spiking attractor to the bursting attractor. This transition organizes the geometric route into bursting discussed in the next section}. \section{{A physiological route to bursting}} \label{SEC: the routes} \subsection{A minimal three-time scale bursting model} The recent paper \cite{Franci2012} introduces the planar neuron model \begin{IEEEeqnarray}{rCl}\label{EQ: SIADS} \dot V&=&V-\frac{V^3}{3}-n^2+I\IEEEyessubnumber\\ \dot n&=&\varepsilon(n_\infty(V-V_0)+n_0-n)\IEEEyessubnumber \end{IEEEeqnarray} Its phase portrait was shown to contain the pitchfork of Figure \ref{FIG 3} as an organizing center, leading to distinct types of excitability for distinct values of the unfolding parameters.
The analysis of the previous section suggests that a bursting model is naturally obtained by augmenting the planar model (\ref{EQ: SIADS}) with ultra-slow adaptation: \begin{IEEEeqnarray}{rCl}\label{EQ: SIAM burst quantitative} \dot V&=&kV-\frac{V^3}{3}-(n+n_0)^2+I-z\IEEEyessubnumber\\ \dot n&=&\varepsilon_n(V)\left(n_\infty(V-V_0) -n \right) \IEEEyessubnumber\\ \dot z&=&\varepsilon_z(V)(z_\infty(V-V_1)-z)\IEEEyessubnumber \end{IEEEeqnarray} Model (\ref{EQ: SIADS}) is essentially model (\ref{EQ: SIAM burst quantitative}) for $k=1$ and $z=0$, modulo a translation $n\gets n+n_0$. The dynamics (\ref{EQ: SIAM burst quantitative}b-\ref{EQ: SIAM burst quantitative}c) mimic the kinetics of gating variables in conductance-based models, where the steady-state characteristics $n_\infty(\cdot)$ and $z_\infty(\cdot)$ are monotone increasing (typically sigmoidal) and the time scalings $\varepsilon_n(\cdot)$ and $\varepsilon_z(\cdot)$ are Gaussian-like strictly positive functions. {Details of model} (\ref{EQ: SIAM burst quantitative}) {for the numerical simulations of the paper are provided in Appendix} \ref{SEC: parameters}. {The slow-fast subsystem} (\ref{EQ: SIAM burst quantitative}a-\ref{EQ: SIAM burst quantitative}b) {shares the same geometric structure as} (\ref{EQ: cusp dynamics}). {After a translation $V\leftarrow V+V_0$, the right-hand side of} (\ref{EQ: SIAM burst quantitative}a) {can easily be shown to be a universal unfolding of the winged cusp and the slow dynamics} (\ref{EQ: SIAM burst quantitative}b) {modulates its bifurcation parameter. Plugging in the ultra-slow dynamics} (\ref{EQ: SIAM burst quantitative}c){, one recovers the same structure as} (\ref{EQ: 3D cusp dynamics}). {Therefore, the conclusions of Theorems} \ref{THM: cusp bist} and \ref{THM: cusp burst} {apply to} (\ref{EQ: SIAM burst quantitative}).
The difference between (\ref{EQ: SIAM burst quantitative}) and (\ref{EQ: 3D cusp dynamics}) is that the model (\ref{EQ: SIAM burst quantitative}) has the physiological interpretation of a reduced conductance-based model, with $V$ a fast variable that aggregates the membrane potential with all fast gating variables, $n$ a slow recovery variable that aggregates all the slow gating variables regulating neuronal excitability, and $z$ an ultra-slow adaptation variable that aggregates the ultra-slow gating variables that modulate the cellular rhythm over the course of many action potentials. Finally, $I$ models an external applied current. \subsection{Model parameters and their physiological interpretation} \label{SSEC: parameter interpretation} \subsubsection*{The bifurcation parameter $n_0$ models the balance between restorative and regenerative ion channels} The central role of the bifurcation parameter $n_0$ in (\ref{EQ: SIAM burst quantitative}) was analyzed in \cite{Franci2012,Franci2013} and is illustrated in Fig. \ref{FIG 6}. The transcritical bifurcation variety in Fig. \ref{FIG 3} corresponds to the physiologically relevant transition from restorative excitability (large $n_0$) to regenerative excitability (small $n_0$). When the excitability is restorative, the recovery variable $n$ provides negative feedback on membrane potential variations near the resting equilibrium, a physiological situation well captured by FitzHugh-Nagumo model (or the hysteresis singularity). In contrast, when excitability is regenerative, the recovery variable $n$ provides positive feedback on membrane potential variations near the resting potential, a physiological situation that requires the quadratic term in (\ref{EQ: SIAM burst quantitative}a) (or the winged cusp singularity). 
\begin{figure} \center \includegraphics[width=0.9\textwidth]{Fig6} \caption{{\bf Transition from restorative excitability (tonic firing) to regenerative excitability (bursting) in model (\ref{EQ: SIAM burst quantitative}) by the sole variation of the bifurcation parameter $n_0$.} The analytical expression of the steady state functions $n_\infty(\cdot)$ and $z_\infty(\cdot)$ and numerical parameter values are provided in Appendix \ref{SEC: parameters}. The time scale is the same in the left and right {time series}.}\label{FIG 6} \end{figure} The value of $n_0$ in a conductance-based model reflects the balance between restorative and regenerative ion channels that regulate neuronal excitability. How to determine the balance in an arbitrary conductance-based model is discussed in \cite{Franci2013}. Note that the restorative or regenerative nature of a particular ion channel on the slow timescale is an intrinsic property of the channel. A prominent example of a restorative channel is the slow potassium activation shared by (almost) all spiking neurons. A prominent example of a regenerative channel is the slow calcium activation encountered in most bursting neurons. The presence of regenerative channels in neuronal bursters is well established in neurophysiology. See e.g. \cite{Krahe2004,Astori2011}. \subsubsection*{The affine unfolding parameter provides bursting by ultra-slow modulation of the current across the membrane} For small $n_0$, the modulation of the ultra-slow variable $z$ creates a hyperbolic bursting attractor through the hysteretic loop described in Fig. \ref{FIG 4}. The burster becomes a single-spike limit cycle (tonic firing) for large $n_0$ (restorative excitability), that is, in the absence of rest-spike bistability in the planar model. The presence of ultra-slow currents in neuronal bursters is well established in neurophysiology (see e.g. \cite{Astori2011}). A prominent example is provided by ultra-slow calcium-activated potassium channels.
\subsubsection*{The half-activation potential affects the route to bursting} The role of the unfolding parameter $\gamma$ in (\ref{EQ: 3D cusp dynamics}) is illustrated in Fig. \ref{FIG 3}: it provides two qualitatively distinct paths connecting the restorative and regenerative phase portraits. This role is played by the parameter $V_0$ in the planar model (\ref{EQ: SIADS}) studied in \cite{Franci2012}, which has the physiological interpretation of a half-activation potential. The role of half-activation potentials in neuronal excitability is well documented in neurophysiology (see e.g. \cite{Putzier2009}). The role of this unfolding parameter in the route to bursting is discussed in the next subsection. \subsubsection*{No spike without fast autocatalytic feedback} The role of the unfolding parameter $k$ in (\ref{EQ: SIAM burst quantitative}) is to provide positive (autocatalytic) feedback in the fast dynamics. The prominent source of this feedback in conductance-based models is the fast sodium activation. This is well acknowledged in neurodynamics \cite{Izhikevich2007}. The reduced model (\ref{EQ: SIAM burst quantitative}) makes clear predictions about its dynamical behavior in the absence of this feedback ({\it i.e.} $k=0$). Those predictions {are further discussed in Section} \ref{SSEC: burst across bursting types} and are in close agreement with the experimental observation of ``small oscillatory potentials'' when sodium channels are {shut down} with pharmacological blockers \cite{Guzman2009,Zhan1999} {or are poorly expressed during neuronal cell development} \cite{Liu1998}.
\center \includegraphics[width=\textwidth]{Fig7} \caption{{\bf Route from tonic firing to bursting in model (\ref{EQ: SIAM burst quantitative}) via a smooth variation of the bifurcation parameter $n_0$.} Other parameters as in Figure \ref{FIG 6}.}\label{FIG 7} \end{figure} Smooth and reversible transitions between those two rhythms have been observed in many experimental recordings \cite{Sherman2001,Viemari2006}, making the route to bursting an important signaling mechanism. The fact that the modulation is achieved simply through the bifurcation parameter $n_0$, {\it i.e.} the balance between restorative and regenerative channels, is of physiological importance because it is consistent with experimental observations of physiological routes into bursting \cite{Sherman2001,Viemari2006,Beurrier1999}. The analysis in the above sections shows that the transition from tonic spiking to bursting occurs through the transcritical bifurcation variety in model (\ref{EQ: cusp dynamics}). Looking at the singular limit $\varepsilon=0$ of (\ref{EQ: cusp dynamics}) near this transition variety provides further insight into the geometry of the route that leads to the appearance of the saddle-homoclinic bifurcation organizing the bistable phase portrait. This route is organized by the path through the pitchfork bifurcation, which provides the most symmetric path across the transcritical variety. The generic transitions are understood by perturbing the degenerate path. Fig. \ref{FIG 8}A shows the qualitative projection of those paths onto the $(V_0,n_0)$ parameter chart obtained in model (\ref{EQ: SIADS}) for $I=\frac{2}{3}$. The chart is reproduced from \cite{Franci2012}. The same qualitative picture is obtained for the $(\gamma,\lambda)$ parameter chart of the abstract model (\ref{EQ: cusp dynamics}) at $\alpha=\alpha_{TC}(\beta,\gamma)$ (see Appendix \ref{SEC: trans variety}).
The chart associates different excitability types (as well as their restorative or regenerative nature, see \cite{Franci2013}) with distinct bifurcation mechanisms. Unfolding those paths along the $I$ (or $\alpha$) direction leads to the bifurcation diagrams in Fig. \ref{FIG 8}B. They reveal (in the singular limit) the onset of the bistable range organized by the singular saddle-homoclinic loop $SH^0$ as paths cross the transcritical bifurcation variety. \begin{figure}[h!] \center \includegraphics[width=\textwidth]{Fig8} \caption{{\bf Routes into bursting in the universal unfolding of the pitchfork bifurcation.} {\bf A.} Qualitative projection of routes into bursting onto the $(V_0,n_0)$ (resp. $(\gamma,\lambda)$) parameter chart of model (\ref{EQ: SIADS}) (resp. (\ref{EQ: cusp dynamics})) for $I=\frac{2}{3}$ (resp. $\alpha=\alpha_{TC}(\beta,\gamma)$, see Appendix \ref{SEC: trans variety}). Excitability is restorative in subregions $I$ and $II$, mixed in subregion $V$, and regenerative in subregion $IV$. See \cite{Franci2012} and \cite{Franci2013} for details concerning the underlying bifurcation mechanisms. The transition path labeled with a star depicts the degenerate path across the pitchfork. The generic paths {\it i)} and {\it ii)} are distinguished by different half-activation potentials $V_0$ (resp. unfolding parameter $\gamma$). {\bf B.} Unfolding of transition paths in A along the $I$ (resp. $\alpha$) direction. Black thick lines denote branches of saddle-node (SN) bifurcation. In paths {\it i)} and {\it ii)}, the model undergoes a transcritical bifurcation (TC) as the path tangentially touches a branch of SN bifurcations. In the degenerate path, the model undergoes a pitchfork (PF) bifurcation as the path enters the cusp tangentially to both branches of SN bifurcations. The singular saddle-homoclinic loop, geometrically constructed in Figs.
\ref{FIG 4} and \ref{FIG 9}, is denoted by $SH^0$ and determines the appearance of a singular bistable range persisting away from the singular limit. }\label{FIG 8} \end{figure} The same qualitative picture persists for $\varepsilon>0$. Fig. \ref{FIG 9} illustrates how the appearance of the singular saddle-homoclinic loop is accompanied, for $\varepsilon>0$, by a smooth transition from a monostable (SNIC -- route {\it i)}) or barely bistable (subcritical Hopf -- route {\it ii)}) bifurcation diagram to the robustly bistable bifurcation diagram constructed in the sections above (Fig. \ref{FIG 5}). Through ultra-slow modulation of the unfolding parameter $\alpha$, this transition geometrically captures the transition from tonic spiking to bursting via the sole variation of the bifurcation parameter. \begin{figure}[h!] \center \includegraphics[width=0.9\textwidth]{Fig9} \caption{{\bf Geometry of the two generic routes into bursting in the unfolding of the pitchfork bifurcation in model (\ref{EQ: 3D cusp dynamics}) and model (\ref{EQ: SIAM burst quantitative}).}}\label{FIG 9} \end{figure} {The strong agreement between the mathematical insight provided by singularity theory and the known electrophysiology of bursting is a distinctive feature of the proposed approach. There is a direct correspondence between the bifurcation and unfolding parameters of the winged cusp and the minimal physiological ingredients of a neuronal burster. In particular, our analysis predicts that any bursting neuron must possess at least one physiologically regulated slow regenerative channel.
This prediction needs to be tested systematically but we have found no counter-example in the bursting neurons we have analyzed to date.} \section{{Normal form reduction of conductance-based models}} \label{SEC: w cusp in cb modes} \subsection{A two dimensional reduction} \label{SSEC: cb models two d red} The winged cusp singularity emerges as an organizing center of rhythmicity in the reduced neuronal model (\ref{EQ: SIAM burst quantitative}), but a legitimate question is whether this singularity can be traced in arbitrary (high-dimensional) conductance-based models. Our recent paper \cite{Franci2013} addresses a closely related question for the transcritical variety. It provides an analog of the bifurcation parameter $n_0$ in arbitrary conductance-based models of the form \begin{IEEEeqnarray}{rCll}\label{EQ: generic cb model} C_m\dot V&=&-\sum_{\iota}\bar g_\iota m_\iota^{a_\iota} h_\iota^{b_\iota} (V-E_\iota)+I_{app},\nonumber\\ &=:&I_{ion}(V,x^f,x^s,x^{us})+I_{app}&\IEEEyessubnumber\\ \tau_{x^{f}_j}(V)\dot x^f_j&=&-x^f_j+x^f_{j,\infty}(V),&\quad j=1,\ldots,n_f\IEEEyessubnumber\\ \tau_{x^s_j}(V)\dot x^s_j&=&-x^s_j+x^s_{j,\infty}(V),&\quad j=1,\ldots,n_s\IEEEyessubnumber\\ \tau_{x^{us}_j}(V)\dot x^{us}_j&=&-x^{us}_j+x^{us}_{j,\infty}(V),&\quad j=1,\ldots,n_{us}\IEEEyessubnumber \end{IEEEeqnarray} where $\iota$ {runs through} all ionic currents, $x^f:=[x^f_j]_{j=1,\ldots,n_f}$ denotes the $n_f$-dimensional column vector of fast gating variables, $x^s:=[x^s_j]_{j=1,\ldots,n_s}$ denotes the $n_s$-dimensional column vector of slow gating variables, and $x^{us}:=[x^{us}_j]_{j=1,\ldots,n_{us}}$ denotes the $n_{us}$-dimensional column vector of ultra-slow variables (see also \cite{Franci2013} for more details on the adopted notation). 
Following common analysis methods in neurodynamics, we want to reduce the (possibly) high-dimensional model (\ref{EQ: generic cb model}) to a two-dimensional model of the form \begin{IEEEeqnarray}{rCl}\label{EQ: CB model 2d red} \dot V &=&F(V,n)+I\IEEEyessubnumber\\ \tau(V)\dot n&=&-n+n_\infty(V)\IEEEyessubnumber \end{IEEEeqnarray} where $V$ is the fast voltage and $n$ is a slow aggregate variable. We achieve this reduction by first considering the singular limit of three time scales leading to a quasi-steady state approximation for fast gating variables, that is, \begin{equation}\label{EQ: cb inst fast} x^f_j\equiv x^f_{j,\infty}(V), \end{equation} for all $j=1,\ldots,n_f$, and freezing ultra-slow variables, that is, setting \begin{equation*} x^{us}_j\equiv \bar x^{us}_j, \end{equation*} for all $j=1,\ldots,n_{us}$, where the values $\bar x^{us}_j$ belong to the physiological range of the different variables. The remaining dynamics read as \begin{IEEEeqnarray*}{rCl} \dot V&=&I_{ion}(V,x^f_\infty(V),x^s,\bar x^{us})+I_{app},\\ \tau(V)\dot x^s_j&=&(-x^s_j+x^s_{j,\infty}(V)),\quad j=1,\ldots,n_s \end{IEEEeqnarray*} which is a fast-slow system with $V$ as fast variable and $x^s$ as slow variables. The planar reduction proceeds from the change of variables \begin{IEEEeqnarray*}{rCl} n&=&x_1^s,\\ n_i^\perp&=&x_i^s-x_{i,\infty}^s(n_\infty^{-1}(n)),\quad i=2,\ldots,n_s. \end{IEEEeqnarray*} This change of variables is globally invertible by monotonicity of the (in)activation functions $x_{i,\infty}^s$.
Under the additional simplifying assumption of identical time constants \begin{equation}\label{EQ: cb homo time slow} \tau_{x^s_j}(V)= \tau(V)\geq\epsilon^{-1}\gg 1, \end{equation} for all $V\in\mathbb R$ and all $j=1,\ldots,n_s$, it is an easy calculation to show that \begin{IEEEeqnarray*}{rCl} \tau(V^\star)\dot n_i^\perp&=&- n_i^\perp +\mathcal O((n-n^\star)^2,(n-n^\star)(V-V^\star),(V-V^\star)^2) \end{IEEEeqnarray*} around any equilibrium $(V^\star,(x_i^s)^\star):=(V^\star,x^s_{i,\infty}(V^\star))$. It follows that, locally around {\it any} equilibrium, the two-dimensional manifold \begin{IEEEeqnarray*}{rCl} \mathcal M_{red}&:=&\left\{(V,x^s)\in\mathbb R\times[0,1]^{n_s}:\ n_i^\perp=0,\ i=2,\ldots,n_s \right\}\\ &=&\left\{(V,x^s)\in\mathbb R\times[0,1]^{n_s}:\ x_i^s=x_{i,\infty}^s(n_\infty^{-1}(n)),\ i=2,\ldots,n_s \right\} \end{IEEEeqnarray*} is exponentially attractive. It should be stressed that the (harsh) simplifying assumption (\ref{EQ: cb homo time slow}) is necessary only around the steady-state value $V^\star$ and that the hyperbolic decomposition is robust to small perturbations \cite{Hirsch1977}. It should also be observed that the proposed two-dimensional reduction is a straightforward generalization of the classical two-dimensional reduction of the Hodgkin-Huxley model \cite{FitzHugh1961,Rinzel1985} that rests on setting sodium activation to steady state ($m_{Na}\equiv m_{Na,\infty}(V)$) and using an algebraic relationship between the sodium inactivation and the potassium activation (usually in the form $h\simeq1-n$).
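A toy numerical illustration of this contraction onto $\mathcal M_{red}$ is given below. All functional forms are assumptions of the sketch (sigmoidal steady states, a frozen membrane potential, a common time constant, forward-Euler integration), none of which come from the text; the point is only that the transverse coordinate $n_2^\perp$ decays like $e^{-t/\tau}$, collapsing the slow dynamics onto the one-dimensional slow manifold:

```python
import math

# Toy illustration of the two-dimensional reduction: two slow gating
# variables with a common time constant tau relax toward sigmoidal
# steady states (assumed shapes).  The transverse coordinate
# n_perp = x2 - x2_inf(n_inf^{-1}(n)) contracts like exp(-t/tau).

def s(v):                          # assumed sigmoidal (in)activation shape
    return 1.0 / (1.0 + math.exp(-v))

def s_inv(p):
    return math.log(p / (1.0 - p))

def n_inf(v):                      # steady state of the aggregate variable n
    return s(v)

def x2_inf(v):                     # steady state of the second slow gate
    return s(v - 1.0)

def n_perp(x1, x2):                # deviation from the slow manifold
    return x2 - x2_inf(s_inv(x1))

TAU, V = 20.0, 0.0                 # common time constant, frozen potential
x1, x2 = n_inf(V) + 0.1, x2_inf(V) - 0.1       # start off the manifold
dev0 = abs(n_perp(x1, x2))

dt = 0.01
for _ in range(int(5 * TAU / dt)):             # integrate for 5*tau (Euler)
    x1 += dt * (-x1 + n_inf(V)) / TAU
    x2 += dt * (-x2 + x2_inf(V)) / TAU
dev1 = abs(n_perp(x1, x2))
```

After five time constants the transverse deviation has shrunk by roughly $e^{-5}$, consistent with the exponential attractivity of $\mathcal M_{red}$ near an equilibrium.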
\subsection{The winged cusp planar model (\ref{EQ: cusp dynamics}) is a local normal form of slow-fast conductance based models} \label{SSEC: wcusp in cb models} Given an equilibrium $(V^\star,n_\infty(V^\star))$ of (\ref{EQ: CB model 2d red}), consider the (linear) change of variables \begin{IEEEeqnarray*}{rCl}\label{EQ: CB model 2d red varia variables} x&=&V-V^\star,\\ y&=&\frac{n-n_\infty(V^\star)}{\frac{\partial n_\infty}{\partial V}(V^\star)}. \end{IEEEeqnarray*} The $y$ dynamics is particularly simple. Indeed, by a simple Taylor expansion, \begin{IEEEeqnarray*}{rCl} \dot y&=& \varepsilon(x-y)+\mathcal O(x^2),\\ \varepsilon&:=&\frac{1}{\tau(V^\star)}\ll 1. \end{IEEEeqnarray*} In the new coordinates, (\ref{EQ: CB model 2d red}) reads \begin{IEEEeqnarray}{rCl}\label{EQ: CB model 2d red varia dyn} \dot x&=&F\left(x+V^\star,n_\infty(V^\star)+\frac{\partial n_\infty}{\partial V}(V^\star)y\right)+I\IEEEyessubnumber\\ \dot y&=&\varepsilon(x-y)+\mathcal O(x^2).\IEEEyessubnumber \end{IEEEeqnarray} Simple computations show that (\ref{EQ: CB model 2d red varia dyn}a) satisfies \begin{IEEEeqnarray*}{rCl} \frac{\partial\dot x}{\partial x}(0,0)&=&\frac{\partial I_{ion}}{\partial V}+\sum_{i=1}^{n_f}\frac{\partial I_{ion}}{\partial x_i^f}\frac{\partial x^f_{i,\infty}}{\partial V},\\ \frac{\partial\dot x}{\partial y}(0,0)&=&\sum_{i=1}^{n_s}\frac{\partial I_{ion}}{\partial x_i^s}\frac{\partial x^s_{i,\infty}}{\partial V}, \end{IEEEeqnarray*} where the right-hand sides are evaluated at $V=V^\star$, $x^f=x^f_\infty(V^\star)$, $x^s=x^s_\infty(V^\star)$, and $x^{us}=\bar x^{us}$.
We claim that the critical manifold $\dot x=0$ of (\ref{EQ: CB model 2d red}) has a degenerate singularity provided that \begin{itemize} \item[] {\it (i)} the full slow-fast subsystem has a degenerate equilibrium, that is, the Jacobian of the slow-fast subsystem (\ref{EQ: generic cb model}a-\ref{EQ: generic cb model}c) is singular; \item[] {\it (ii)} at such an equilibrium, the contributions of slow restorative and slow regenerative channels \cite{Franci2013} are perfectly balanced, that is, \begin{IEEEeqnarray*}{rCl} \sum_{i=1}^{n_s}\frac{\partial I_{ion}}{\partial x_i^s}\frac{\partial x^s_{i,\infty}}{\partial V}&=&0. \end{IEEEeqnarray*} \end{itemize} To prove our claim, we note, by computations similar to those in \cite{Franci2013}, that conditions {\it (i)} and {\it (ii)} imply $$\frac{\partial I_{ion}}{\partial V}+\sum_{i=1}^{n_f}\frac{\partial I_{ion}}{\partial x_i^f}\frac{\partial x^f_{i,\infty}}{\partial V}=0,$$ which is equivalent to the Jacobian of the fast subsystem (\ref{EQ: generic cb model}a-\ref{EQ: generic cb model}b) being singular. Hence, when conditions {\it (i)} and {\it (ii)} are fulfilled, \begin{equation}\label{EQ: varia dyn deg cond} \frac{\partial\dot x}{\partial x}(0,0)=\frac{\partial\dot x}{\partial y}(0,0)=0. \end{equation} Property (\ref{EQ: varia dyn deg cond}) ensures that the critical manifold of (\ref{EQ: CB model 2d red varia dyn}) has a codimension$>0$ singularity at the origin (where, as usual, the slow variable $y$ plays the role of the bifurcation parameter). This singularity corresponds to the transcritical bifurcation detected in arbitrary conductance based models in \cite{Franci2013}. It is indeed proved in \cite{Franci2013} that conditions {\it (i)} and {\it (ii)} enforce a transcritical bifurcation in the associated conductance based model.
Algebraically, (\ref{EQ: varia dyn deg cond}) ensures that, similarly to the bifurcation parameter in the winged cusp universal unfolding (see Section \ref{SSEC: rest-spike in wcusp}), $y$ modulates the fast $x$ dynamics non-monotonically. Physiologically, it captures in the reduced model the non-monotone modulation of membrane potential dynamics by slow restorative (providing negative feedback) and slow regenerative (providing positive feedback) ion channels. We use the algorithm in \cite{Franci2013} to detect the degenerate dynamics of (\ref{EQ: CB model 2d red varia dyn}) in arbitrary conductance based models. This construction reveals that the transcritical bifurcation is part of the transcritical transition variety in the universal unfolding of the winged cusp. The result is sketched in Figure \ref{FIG XX1} left and verified numerically in the Hodgkin-Huxley model augmented with a calcium current in Figure \ref{FIG XX1} right. The model and its reduction are presented and further discussed in Section \ref{SSSE: winged-cusp phase plane in HHCa} below. The obtained phase plane is organized by the mirrored hysteresis bifurcation diagram of the normal form (\ref{EQ: cusp dynamics}) in Fig. \ref{FIG 2}, in the limiting case in which the two hysteresis branches merge at the transcritical bifurcation. This provides an indirect proof that the global phase plane is organized by the winged cusp. This singularity is indeed the only (codimension$\leq3$) singularity exhibiting the mirrored hysteresis in its universal unfolding (see \cite[Section IV.4]{Golubitsky1985}). \begin{figure}[h!] \center \includegraphics[width=0.9\textwidth]{FigXX1} \caption{A transcritical bifurcation in the universal unfolding of the winged cusp organizes the dynamics of the two-dimensional reduction of generic conductance based models. Left: Sketch of the dynamics on the two-dimensional invariant manifold $\mathcal M_{red}$.
Right: construction of the two-dimensional reduction (\ref{EQ: CB model 2d red varia dyn}) at the transcritical bifurcation in the Hodgkin-Huxley model augmented with a calcium current (\ref{EQ: HH model})-(\ref{EQ: HH CA dynamics}).}\label{FIG XX1} \end{figure} One can push forward the singularity analysis and derive an algorithm to enforce the degeneracy conditions of the winged cusp rather than of the transcritical bifurcation, by using additional model parameters as auxiliary parameters \cite[Section III.4]{Golubitsky1985}. This would lead to the conclusion that the critical manifold of the reduced dynamics (\ref{EQ: CB model 2d red varia dyn}) is actually a versal unfolding of the winged cusp. Alternatively, one can modulate model parameters and show that their variations recover all persistent bifurcation diagrams of the winged cusp. Such computations are, however, lengthy and bring no new information to the picture presented here. \subsection{Application to the Hodgkin-Huxley model augmented with a regenerative channel} \label{SSSE: winged-cusp phase plane in HHCa} The first conductance-based model appeared in the seminal paper of Hodgkin and Huxley \cite{Hodgkin1952}: \begin{IEEEeqnarray}{rCl}\label{EQ: HH model} C \dot V &=& - \bar g_Kn^4(V-V_K) - \bar g_{Na}m^3h(V-V_{Na})- g_l(V-V_l)+I\IEEEyessubnumber\\ \tau_m(V)\dot m&=&-m+m_\infty(V)\IEEEyessubnumber\\ \tau_n(V)\dot n&=&-n+n_\infty(V)\IEEEyessubnumber \\ \tau_h(V)\dot h&=&-h+h_\infty(V),\IEEEyessubnumber \end{IEEEeqnarray} where the time constants $\tau_x$ and the steady-state characteristics $x_\infty$, $x=m,n,h$, are chosen in accordance with the original model (see Appendix \ref{SEC: parameters HH}). The model only accounts for two ionic currents: sodium, with its fast activation variable $m$ and slow inactivation $h$, and potassium, with slow activation $n$.
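The gating equations (\ref{EQ: HH model}b-d) all have the same first-order relaxation structure, which can be illustrated under voltage clamp: each gate relaxes exponentially to its steady-state value at the clamped voltage. The sketch below uses illustrative sigmoidal curves and time constants, not the fits of the original 1952 model.

```python
import math

def sigmoid(V, V0, k):
    return 1.0 / (1.0 + math.exp((V0 - V) / k))

# Illustrative steady-state curves and time constants (ms); these are
# placeholders, not the fits of the original Hodgkin-Huxley model.
gates = {
    "m": (lambda V: sigmoid(V, -40.0, 9.0), 0.5),        # fast Na activation
    "h": (lambda V: 1.0 - sigmoid(V, -60.0, 7.0), 8.0),  # slow Na inactivation
    "n": (lambda V: sigmoid(V, -55.0, 15.0), 6.0),       # slow K activation
}

def relax(V_clamp, x0, x_inf, tau, dt=0.01, T=80.0):
    """Euler-integrate tau * dx/dt = -x + x_inf(V) at clamped V."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-x + x_inf(V_clamp)) / tau
    return x

# Under voltage clamp, each gate relaxes to its steady-state value.
for name, (x_inf, tau) in gates.items():
    assert abs(relax(-20.0, 0.1, x_inf, tau) - x_inf(-20.0)) < 1e-3, name
```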
The classical phase portrait reduction \cite{FitzHugh1961,Rinzel1985} is obtained with the quasi-steady state approximation $m\simeq m_\infty(V)$ and the empirical fit $h\simeq 1-n$. It is well known that in its physiological part ($0<n<1$) this phase portrait is qualitatively the FitzHugh phase portrait in Fig. \ref{FIG 1}. But we showed in \cite[Figure 5]{Drion2012} that the entire phase portrait ($n\in\mathbb R$) also contains the ``mirrored" phase portrait of Fig. \ref{FIG 2}. This observation suggests that a winged cusp organizes the fast subsystem (\ref{EQ: HH model}a-\ref{EQ: HH model}b) of the Hodgkin-Huxley dynamics. The singularity is found in a non-physiological range of the phase space ($n<0$), which is consistent with the absence of slow regenerative currents in the model. The element missing in the Hodgkin-Huxley model to make the winged cusp physiological is a slow regenerative ion channel. Following \cite{Drion2012}, we add the calcium current \begin{IEEEeqnarray}{rCl}\label{EQ: HH CA dynamics} I_{Ca,L}&=&-\bar g_{Ca}d(V-V_{Ca}) \IEEEyessubnumber\\ \tau_d(V) \dot d&=&-d + d_\infty(V). \IEEEyessubnumber \end{IEEEeqnarray} The algorithm in \cite{Franci2013} detects a transcritical bifurcation for $$V^\star\simeq-61.2730,\quad g_{Ca}^\star\simeq0.2520,\quad I^\star\simeq-30.7694. $$ Following the construction in Section \ref{SSEC: wcusp in cb models}, in particular, Eq.
(\ref{EQ: CB model 2d red varia dyn}), the associated reduced variational dynamics at the transcritical bifurcation reads \begin{IEEEeqnarray*}{rCl} \dot x &=& - \bar g_K\left(n_\infty(V^\star)+y\frac{\partial n_\infty}{\partial V}(V^\star)\right)^4(V^\star+x-V_K)\nonumber\\ &&- \bar g_{Na}m_\infty(V^\star+x)^3\left(h_\infty(V^\star)+y\frac{\partial h_\infty}{\partial V}(V^\star)+\mathcal O(y^2)\right)(V^\star+x-V_{Na})\\ &&-\bar g_{Ca}^\star\left(d_\infty(V^\star)+y\frac{\partial d_\infty}{\partial V}(V^\star)+\mathcal O(y^2)\right)(V^\star+x-V_{Ca})\nonumber\\ &&- g_l(V^\star+x-V_l)+I^\star,\\ \dot y&=&\varepsilon(x-y)+\mathcal O(x^2). \end{IEEEeqnarray*} Its phase plane is drawn in Figure \ref{FIG XX1} right. We now apply the global two-dimensional reduction described in Section \ref{SSEC: cb models two d red}, in particular, Eq. (\ref{EQ: CB model 2d red}), to model (\ref{EQ: HH model}-\ref{EQ: HH CA dynamics}). To this aim, we express all variables in terms of the potassium activation $n$. Since in the original model its activation function cannot be explicitly inverted, we use the exponential fitting $$n_\infty(V)=\frac{1}{1+e^{0.06(11.6-V)}},\quad n_\infty^{-1}(n)=11.6-\frac{1}{0.06}\ln\left(\frac{1}{n}-1\right).$$ Figure \ref{FIG XX2} provides a comparison of the behavior of the original and reduced models. Despite quantitative differences (in particular, as in the reduction of the original Hodgkin-Huxley model, treating fast variables as instantaneous increases the spiking frequency), the reduced model faithfully captures the qualitative behavior of its high-dimensional counterpart, for instance, rest-spike bistability. Phase plane analysis of the associated normal form (\ref{EQ: cusp dynamics}) provides a clear geometrical interpretation of such dynamical behavior (Fig. \ref{FIG 2}). \begin{figure}[h!]
\center \includegraphics[width=0.8\textwidth]{FigXX2} \caption{Comparison of the full Hodgkin-Huxley model augmented with a calcium current (\ref{EQ: HH model})-(\ref{EQ: HH CA dynamics}) and its two-dimensional reduction, obtained by applying the reduction procedure of Section \ref{SSEC: cb models two d red}.}\label{FIG XX2} \end{figure} \subsection{The role of ultra-slow variables} \label{SSEC: ultra slow unfolding} Ultra-slow variables appear in a variety of forms: ultra-slow gating variables (e.g. inactivation of calcium channels), intracellular calcium (e.g. SK channels), metabotropic regulation of channel expression (e.g. regulation of calcium channel expression by serotonin receptors), homeostatic regulation of channel expression (e.g. calcium dependent expression of ion channels), etc. As such, they do not allow a systematic analysis as for slow gating variables. However, their effect on the model reduction (\ref{EQ: CB model 2d red}) can be understood in terms of modulation of the unfolding parameters of the associated normal form. The observation that the many (auxiliary) parameters of conductance based models might naturally provide a versal unfolding of the winged cusp organizing their fast critical manifold suggests that variations in ultra-slow variables act as ultra-slow modulations of the unfolding parameters in the associated normal form. The effect of ultra-slow variables is thus constrained to reshaping the geometry of the slow-fast phase portrait. This might lead to ultra-slow adaptation mechanisms (similarly to the action of $\alpha$ in Fig. \ref{FIG 4}) or to even slower modulation mechanisms (similarly to the action of $k$ and $V_0$ in Fig. \ref{FIG 11} below). Clearly, this does not permit precise conclusions on the global dynamics of a multi-timescale model, but it suggests that the low-dimensional bursting modulation mechanism described here has a strong relevance for generic conductance-based models.
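As a consistency check of the reduction in Section \ref{SSSE: winged-cusp phase plane in HHCa}, the exponential fit of the potassium activation and its closed-form inverse satisfy $n_\infty^{-1}(n_\infty(V))=V$; this can be verified directly from the formulas given above:

```python
import math

def n_inf(V):
    """Exponential fit of the potassium activation used in the reduction."""
    return 1.0 / (1.0 + math.exp(0.06 * (11.6 - V)))

def n_inf_inv(n):
    """Closed-form inverse of n_inf."""
    return 11.6 - math.log(1.0 / n - 1.0) / 0.06

# round trips in both directions, over physiological voltages and 0 < n < 1
for V in (-80.0, -61.273, -40.0, 0.0, 40.0):
    assert abs(n_inf_inv(n_inf(V)) - V) < 1e-9
for n in (0.01, 0.3, 0.5, 0.9):
    assert abs(n_inf(n_inf_inv(n)) - n) < 1e-12
```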
\section{{Modulation of bursting by unfolding parameters and its physiological interpretation}} \label{SEC: why the winged cusp} \subsection{Bursting modeling and unfolding theory} The rich literature on mathematical modeling of bursting calls for a few comparisons with the model proposed in the present paper. The geometry of our bursting attractor is the most classical one of a saddle-homoclinic burster (one of the 16 bursting attractors in the recent classification of Izhikevich, see \cite[page 376]{Izhikevich2007}). Such an attractor is for instance found in the early bursting model of Hindmarsh and Rose \cite{Hindmarsh1984}. The two models exhibit an analogous geometry: the mirror of the classical FitzHugh phase portrait, obtained here by mirroring the cubic nullcline of the fast variable, is obtained there by mirroring the monotone activation function of the recovery variable. But the Hindmarsh-Rose model lacks the organization by a high-codimension singularity, making it impractical for modulation studies (see, e.g., \cite{Shilnikov2008}) and for physiological interpretability. The more recent literature on bursting has certainly exploited unfolding theory around high-codimension bifurcations to construct different types of bursting attractors. A non-exhaustive list is \cite{Rinzel1987b,Bertram1995,Golubitsky2001,Izhikevich2000} and the references discussed in \cite[page 376]{Izhikevich2007}. The outcome of those studies is a useful mathematical classification of different bursting attractors organized by different bifurcations, but it is not clear how to use this classification for modulation studies. A possible reason is that most of those references construct bursting models from restorative phase portraits that retain the qualitative organization of the FitzHugh model by a hysteresis singularity. Such models lack the transcritical bifurcation that organizes the normal form reduction of general bursting conductance based models.
The approach of the present paper differs from earlier studies in starting from the cusp singularity, inspired by our original observation that the mirrored hysteresis phase portrait organizes the reduced Hodgkin-Huxley dynamics \cite[Fig. 5]{Drion2012}. The direct link between the mathematical unfolding of the cusp singularity and the local normal form of conductance-based models in the vicinity of their transcritical bifurcation is probably crucial in using unfolding theory to understand the modulation of bursting in neuronal models. \subsection{A geometrical and physiological modulation of a burster across bursting types} \label{SSEC: burst across bursting types} The single geometric attractor of (\ref{EQ: SIAM burst quantitative}) contains a continuum of different bursting wave forms modulated by the bifurcation and the unfolding parameters. Beyond the route to bursting studied in Section \ref{SEC: the routes}, Figure \ref{FIG 11} illustrates a situation where the bifurcation parameter and the affine unfolding parameters are fixed but where the two remaining unfolding parameters are modulated in a quasi-static manner. The figure displays a variety of waveforms that nevertheless share the same geometry of the bursting attractor as hysteretic paths in the universal unfolding of the winged cusp. For a small autocatalytic feedback gain $k$, corresponding to a low expression of fast sodium channels, the model emits small oscillatory potentials (SOP), on the left. Increasing this gain, the waveform smoothly evolves toward a classical ``square-wave" oscillation, on the right, after a transient ``tapered" bursting activity, shown in the inset (see \cite[page 376]{Izhikevich2007} and references therein for a discussion of the different bursting types). As in the case of the route from tonic spiking to bursting, the transition shown in Fig. \ref{FIG 11} has physiological relevance.
For instance, a similar transition has been observed during the development of neuronal cells \cite{Liu1998}. \begin{figure}[h!] \center \includegraphics[width=\textwidth]{Fig11} \caption{{\bf Modulation of model (\ref{EQ: SIAM burst quantitative}) across different bursting wave forms.} Increasing the fast positive feedback gain $k$ and decreasing the half-activation potential $V_0$, the model smoothly evolves from (calcium driven) small oscillatory potentials SOP (on the left) to ``square-wave"-like bursting (on the right) across ``tapered"-like bursting (shown in the inset). Parameter values are provided in Section \ref{SEC: parameters}.} \label{FIG 11} \end{figure} The geometry of the ``tapered"-like bursting waveform in Figure \ref{FIG 11} reveals another subtlety of the winged cusp unfolding. In addition to broad regions of restorative and regenerative excitability, Fig. \ref{FIG 8}A shows a small parametric region of mixed excitability (type V in the terminology of \cite{Franci2012}). Like regenerative phase portraits, phase portraits in this region have a persistent bistable range, but it is of fold/fold type, with a down-state that is a regenerative fixed point and an up-state that is either a restorative fixed point or a limit cycle (emerging from a Hopf bifurcation within or outside the bistable range). The bursting attractor observed in this region can be considered as a variant of the bursting attractor associated to regenerative excitability. Both bursting attractors share the same geometry of hysteretic paths in the unfolding of the winged cusp singularity, but the fold/fold variant exhibits the peculiar wave form illustrated in Fig. \ref{FIG 12}, usually studied under the name of ``tapered" bursting in the literature, see e.g. \cite[page 376]{Izhikevich2007}. \begin{figure}[h!]
\center \includegraphics[width=0.9\textwidth]{Fig12} \caption{{\bf Variant of the saddle-homoclinic bursting attractor in model (\ref{EQ: SIAM burst quantitative}).} {\bf A.} When the fast-slow subsystem (\ref{EQ: SIAM burst quantitative}a-\ref{EQ: SIAM burst quantitative}b) exhibits Type V excitability \cite{Franci2012}, the bistable range is of fold/fold type, leading to a ``tapered" bursting waveform. Parameter values are provided in Section \ref{SEC: parameters}. {\bf B.} The hysteretic path associated to this type appears along path {\it ii)} of Fig. \ref{FIG 8}. {At the two ends of the bistable range, the up and down attractors are stable equilibria losing stability in a saddle-node bifurcation}. Depending on the excitability subtype, the burst onset can either exhibit damped spiking oscillations ending in a Hopf bifurcation within the bistable range (a situation captured by the bifurcation diagram in \cite[Fig. 5.2]{Franci2012}) or a single action potential (a situation captured by the bifurcation diagram in \cite[Fig. 5.3]{Franci2012}). }\label{FIG 12} \end{figure} It is remarkable that the four different wave forms shown in Figs. \ref{FIG 11} and \ref{FIG 12} can be modeled by the same geometric attractor. A companion paper in preparation further investigates the physiological mechanisms that modulate the bursting waveform within the unfolding of the winged cusp singularity. \section{Conclusions} The paper proposes that conductance based models exhibiting bursting attractors are organized by a winged cusp singularity. The geometry of the resulting attractor is classical (a hysteretic modulation of a slow-fast portrait over a rest-spike bistable range), but singularity theory is used to identify key parameters for the modulation of the bursting attractor.
The cusp singularity organizes the slow-fast phase portrait around the mirrored hysteresis of Section \ref{SSEC: rest-spike in wcusp}, in contrast to the standard hysteresis of classical phase portrait reductions of the Hodgkin-Huxley model. The bifurcation parameter has the convenient physiological interpretation of an ionic balance recently studied in \cite{Franci2013}. Its modulation through the transcritical variety of the cusp unfolding governs a geometric transition from tonic spiking to bursting in the three-timescale normal form (\ref{EQ: 3D cusp dynamics}): it provides a physiologically relevant route to bursting. The affine unfolding parameter has the physiological interpretation of an ultra-slow ionic current, typically driven by the intracellular calcium concentration. Its modulation provides the classical adaptation variable of the three-timescale bursting attractor. The two remaining unfolding parameters have the physiological interpretation of a fast autocatalytic gain (the maximal sodium conductance) and of an average half-activation potential, respectively. Their quasi-static modulation evolves the bursting attractor across different bursting wave forms, consistently with what is observed experimentally in neuronal development, for instance. In spite of the vast diversity of ion channels encountered in different neurons and the resulting vast diversity of regulation pathways, singularity theory and time-scale separation suggest an apparent simplicity and universality in the underlying modulation mechanisms, as paths in the universal unfolding of the winged cusp. Those features are appealing for addressing system-theoretic questions such as sensitivity, robustness, and homeostasis. \section{Acknowledgments} Prof. M. Golubitsky is gratefully acknowledged for insightful comments and suggestions during the visit of the first author at the Mathematical Bioscience Institute (Ohio State University). \clearpage
1305.6907
\section*{Acknowledgements} VB thanks KITP-UCSB for hospitality during the course of this work and F. Halzen, A. Karle, J. Learned, D. Marfatia, S. Pakvasa and {W.P. Pan} for email communications. This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915 and in part by the U.S. Department of Energy under grants No. DE-FG02-95ER40896 and DE-FG02-12ER41811.
2206.12895
\section{Introduction} Clustering is an important problem in unsupervised learning that has been widely studied in statistics, data mining, network analysis, etc.~\citep{punj1983cluster,Article:Dhillon_01,Article:Banerjee_05,berkhin2006survey,Article:Abbasi07}. The goal of clustering is to partition a set of data points into clusters such that items in the same cluster are expected to be similar, while items in different clusters should be different. This is concretely measured by the sum of distances (or squared distances) between each point and its nearest cluster center. One conventional criterion to evaluate a clustering algorithm is: with high probability, $$cost(C,D)\leq \gamma OPT_k(D)+\xi,$$ where $C$ is the set of centers output by the algorithm and $cost(C,D)$ is a cost function defined for $C$ on dataset $D$. $OPT_k(D)$ is the cost of the optimal (oracle) clustering solution on $D$. When everything is clear from context, we will use $OPT$ for short. Here, $\gamma$ is called the \textit{multiplicative error} and $\xi$ is called the \textit{additive error}. Alternatively, we may also use the notion of expected cost. Two popularly studied clustering problems are 1) the $k$-median problem, and 2) the $k$-means problem. The origin of $k$-median dates back to the 1970s (e.g.,~\citet{kaufman1977plant}), where one tries to find the best location of facilities that minimizes the cost measured by the distance between clients and facilities. Formally, given a set of points $D$ and a distance measure, the goal is to find $k$ center points minimizing the sum of absolute distances of each sample point to its nearest center. In $k$-means, the objective is to minimize the sum of squared distances instead. In particular, $k$-median is usually the one used for clustering on graph/network data. In general, there are two popular frameworks for clustering. The first is Lloyd’s algorithm~\citep{1056489}, a heuristic built upon an iterative distortion minimization approach.
In most cases, this method can only be applied to numerical data, typically in the (continuous) Euclidean space. Clustering in general metric spaces (discrete spaces) is also important and useful when dealing with, for example, graph data, where Lloyd's method is no longer applicable. A more broadly applicable approach, the local search method~\citep{DBLP:conf/compgeom/KanungoMNPSW02,DBLP:journals/siamcomp/AryaGKMMP04}, has also been widely studied. It iteratively finds the optimal swap between the center set and non-center data points to keep lowering the cost. Local search can achieve a constant approximation ratio ($\gamma=O(1)$) to the optimal solution for $k$-median~\citep{DBLP:journals/siamcomp/AryaGKMMP04}. In this paper, we will focus on clustering under the general metric space setting. \vspace{0.1in} \noindent\textbf{Initialization of cluster centers.}\ It is well-known that the performance of clustering can be highly sensitive to initialization. If clustering starts with good initial centers (i.e., with small approximation error), the algorithm may use fewer iterations to find a better solution. The $k$-median++ algorithm~\citep{DBLP:conf/soda/ArthurV07} iteratively selects $k$ data points as initial centers, favoring distant points in a probabilistic way. Intuitively, the initial centers tend to be well spread over the data points (i.e., over different clusters). The produced initial centers are proved to have $O(\log k)$ multiplicative error. Follow-up works on $k$-means++ further improved its efficiency and scalability, e.g.,~\citet{DBLP:journals/pvldb/BahmaniMVKV12,DBLP:conf/aaai/BachemLHK16,DBLP:conf/icml/LattanziS19}. In this work, we propose a new initialization framework, called Hierarchically Well-Separated Tree (HST) initialization, based on metric embedding techniques. Our method is built upon a novel search algorithm on metric embedding trees, with approximation error and running time comparable to $k$-median++.
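The distance-proportional sampling just described can be sketched in a few lines; the code below is illustrative (toy one-dimensional metric, arbitrary helper names), not the evaluated implementation.

```python
import random

def kmedianpp_init(points, k, rho, rng):
    """Sketch of k-median++ initialization: pick each new center with
    probability proportional to its distance to the current center set."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        # distance from each point to its nearest chosen center
        d = [min(rho(u, c) for c in centers) for u in points]
        # points already chosen have distance 0 and cannot be re-picked
        idx = rng.choices(range(len(points)), weights=d)[0]
        centers.append(points[idx])
    return centers

# toy 1-D metric example with three well-separated groups
pts = [0.0, 0.1, 0.2, 5.0, 5.1, 10.0]
C = kmedianpp_init(pts, 3, lambda a, b: abs(a - b), random.Random(0))
assert len(C) == 3 and len(set(C)) == 3
```

Because already-selected centers have zero sampling weight, the $k$ returned centers are always distinct, and distant groups of points are likely to each receive a center.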
Moreover, importantly, our initialization scheme can be conveniently combined with the notion of differential privacy (DP). \newpage \noindent\textbf{Clustering with Differential Privacy.}\ The concept of differential privacy~\citep{DBLP:conf/icalp/Dwork06,DBLP:conf/focs/McSherryT07} has become popular as a rigorous way to retain information useful for model learning while protecting the privacy of each individual. The private $k$-means problem has been widely studied, e.g.,~\citet{DBLP:conf/stoc/FeldmanFKN09,DBLP:conf/icml/NockCBN16,7944775}, mostly in the continuous Euclidean space. The paper~\citep{DBLP:conf/icml/BalcanDLMZ17} considered identifying a good candidate set (in a private manner) of centers before applying private local search, which yields $O(\log^3 n)$ multiplicative error and $O((k^2+d)\log^5 n)$ additive error. Later on, the Euclidean $k$-means errors were further improved to $\gamma=O(1)$ and $\xi=O(k^{1.01} \cdot d^{0.51} + k^{1.5})$ by~\cite{DBLP:conf/nips/StemmerK18}, with more advanced candidate set selection. \cite{DBLP:conf/pods/HuangL18} gave an optimal algorithm in terms of minimizing the Wasserstein distance under some data separability condition. For private $k$-median clustering, \cite{DBLP:conf/stoc/FeldmanFKN09} considered the problem in high-dimensional Euclidean space. The strategy of~\citet{DBLP:conf/icml/BalcanDLMZ17} to form a candidate center set could as well be adopted for $k$-median, which leads to $O(\log^{3/2} n)$ multiplicative error and $O((k^2+d)\log^3 n)$ additive error in high-dimensional Euclidean space. However, one main limitation of these methods is that they cannot be applied to general metric spaces (e.g., graphs). In discrete spaces, \cite{DBLP:conf/soda/GuptaLMRT10} proposed a private method for the classical local search heuristic, which applies to both $k$-median and $k$-means.
To cast privacy on each swapping step, the authors applied the exponential mechanism of~\citet{DBLP:conf/focs/McSherryT07}. Their method produces an $\epsilon$-differentially private solution with cost $6OPT+ O(\triangle k^2\log^2n/\epsilon)$, where $\triangle$ is the diameter of the point set. In this work, we will show that our HST initialization can improve DP local search for $k$-median~\citep{DBLP:conf/soda/GuptaLMRT10} in terms of both approximation error and efficiency. \vspace{0.1in} \noindent\textbf{The main contributions} of this work include: \begin{itemize}\vspace{-0.05in} \item We introduce the HST~\citep{DBLP:journals/jcss/FakcharoenpholRT04} to the $k$-median clustering problem for initialization. We design an efficient sampling strategy to select the initial center set from the tree, with an approximation factor $O(\log \min\{k,\triangle\})$ in the non-private setting, which is $O(\log \min\{k,d\})$ when $\triangle=O(d)$ (e.g., bounded data). This improves the $O(\log k)$ error of $k$-means/median++ in, e.g., lower-dimensional Euclidean space. \item We propose a differentially private version of HST initialization under the setting of~\cite{DBLP:conf/soda/GuptaLMRT10} in discrete metric space. The so-called DP-HST algorithm finds initial centers with $O(\log n)$ multiplicative error and $O(\epsilon^{-1}\triangle k^2 \log^2 n)$ additive error. Moreover, running DP local search starting from this initialization gives $O(1)$ multiplicative error and $O(\epsilon^{-1}\triangle k^2 (\log\log n) \log n)$ additive error, which improves previous results towards the well-known lower bound $O(\epsilon^{-1}\triangle k\log (n/k))$ on the additive error of DP $k$-median~\citep{DBLP:conf/soda/GuptaLMRT10} within a small $O(k\log\log n)$ factor. To our knowledge, this is the first initialization method with a differential privacy guarantee and an improved error rate in general metric space.
\item We conduct experiments on simulated and real-world datasets to demonstrate the effectiveness of our methods. In both non-private and private settings, our proposed HST-based initialization approach achieves smaller initial cost than $k$-median++ (i.e., it finds better initial centers), which may also lead to improvements in the final clustering quality. \end{itemize} \section{Background and Setup} \label{sec:pre} \subsection{Differential Privacy (DP)} \begin{definition}[Differential Privacy (DP)~\citep{DBLP:conf/icalp/Dwork06}] If for any two adjacent data sets $D$ and $D'$ with symmetric difference of size one, and for any $O\subset Range(\mathbbm{A})$, an algorithm $\mathbbm{A}$ satisfies $$ Pr[\mathbbm{A}(D)\in O] \leq e^\epsilon Pr[\mathbbm{A}(D')\in O],$$ then the algorithm $\mathbbm{A}$ is said to be $\epsilon$-differentially private. \end{definition} Intuitively, differential privacy requires that after removing any data point (a graph node in our case), the output on $D'$ should not be too different from that on the original dataset $D$. A smaller $\epsilon$ indicates stronger privacy, which, however, usually sacrifices utility. Thus, one of the central topics in the differential privacy literature is to balance the utility-privacy trade-off. To achieve DP, one approach is to add noise to the algorithm output. For a function $f$, let $\eta(f)=\sup_{|D-D'|=1} |f(D)-f(D')|$ denote its sensitivity. The \textit{Laplace mechanism} adds Laplace($\eta(f)/\epsilon$) noise to the output, which is known to achieve $\epsilon$-DP. The \textit{exponential mechanism} is also a tool underlying many DP algorithms. Let $O$ be the set of feasible outputs, and let $q:D \times O \rightarrow \mathbb R$ be the utility function that we aim to maximize. The exponential mechanism outputs an element $o \in O$ with probability $P[\mathbbm{A}(D)=o]\propto \exp(\frac{\epsilon q(D,o)}{2 \eta(q)})$, where $D$ is the input dataset. Both mechanisms will be used in our paper.
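The exponential mechanism described above can be sketched directly from its definition; the code below is an illustrative sketch (all function names are ours), showing how the selection probabilities are computed and how higher-utility outputs become exponentially more likely.

```python
import math
import random

def exp_mechanism_probs(utilities, eps, sensitivity):
    """Selection probabilities P[o] proportional to
    exp(eps * q(D, o) / (2 * eta(q)))."""
    w = [math.exp(eps * q / (2.0 * sensitivity)) for q in utilities]
    total = sum(w)
    return [x / total for x in w]

def exp_mechanism(outputs, utilities, eps, sensitivity, rng):
    """Sample one output according to the exponential-mechanism weights."""
    probs = exp_mechanism_probs(utilities, eps, sensitivity)
    return rng.choices(outputs, weights=probs)[0]

# Toy example: three candidate outputs with utilities 0, 1, 3
probs = exp_mechanism_probs([0.0, 1.0, 3.0], eps=1.0, sensitivity=1.0)
assert abs(sum(probs) - 1.0) < 1e-12
assert probs[2] > probs[1] > probs[0]   # higher utility => more likely
choice = exp_mechanism(["a", "b", "c"], [0.0, 1.0, 3.0], 1.0, 1.0,
                       random.Random(0))
assert choice in {"a", "b", "c"}
```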
\subsection{Metric $k$-Median Clustering} \label{sec:pre private k median} Following~\cite{DBLP:journals/siamcomp/AryaGKMMP04,DBLP:conf/soda/GuptaLMRT10}, the problem of metric $k$-median clustering (DP and non-DP) studied in this paper is stated as follows. \begin{definition}[$k$-median] Given a universe point set $U$ and a metric $\rho : U \times U \rightarrow \mathbb R$, the goal of $k$-median is to pick $F \subseteq U$ with $|F|= k$ to minimize \begin{align} \label{def:cost rho} \text{\textbf{$k$-median:}}\hspace{0.2in}cost_k(F,U) = \sum _{v\in U} \min_{f\in F}\rho(v,f). \end{align} Let $D \subseteq U $ be a set of demand points. The goal of DP $k$-median is to minimize \begin{align} \label{def:cost rho DP} \text{\textbf{DP $k$-median:}}\hspace{0.2in}cost_k(F,D) = \sum _{v\in D} \min_{f\in F}\rho(v,f). \end{align} At the same time, the output $F$ is required to be $\epsilon$-differentially private with respect to $D$. We may drop ``$F$'' and use ``$cost_k(U)$'' or ``$cost_k(D)$'' if there is no risk of ambiguity. \end{definition} To better understand the motivation of DP clustering, we provide a real-world example as follows. \begin{example} Consider $U$ to be the universe of all users in a social network (e.g., Twitter). Each user (account) is public, but also has some private information that can only be seen by the data holder. Let $D$ be the users grouped by some feature that might be set as private. Suppose a third party plans to collaborate with the most influential users in $D$ for, e.g., commercial purposes, thus requesting the cluster centers of $D$. In this case, we need a strategy to safely release the centers, while protecting the individuals in $D$ from being identified (since the membership of $D$ is private). \end{example} The local search procedure for $k$-median proposed by \cite{DBLP:journals/siamcomp/AryaGKMMP04} is summarized in Algorithm~\ref{alg:rand local search}. First, we randomly pick $k$ points in $U$ as the initial centers.
In each iteration, we search over all $x\in F$ and $y\in U$, and perform the swap $F \leftarrow F -\{x\}+\{y\}$ that improves the cost of $F$ the most, provided it reduces the cost by at least a factor of $(1-\alpha/k)$, where $\alpha>0$ is a hyper-parameter. We repeat this procedure until no such swap exists. \cite{DBLP:journals/siamcomp/AryaGKMMP04} showed that the output center set $F$ is a 5-approximation to the optimal solution, i.e., $cost(F)\leq 5 OPT$. \begin{algorithm2e}[t] \SetKwInput{Input}{Input} \SetKwInput{Output}{Output} \SetKwInput{Initialization}{Initialization} \Input{Data points $U$, parameter $k$, constant $\alpha$} \Initialization{Randomly select $k$ points from $U$ as initial center set $F$} \DontPrintSemicolon \While{$\exists\ x\in F,y\in U$ s.t. $cost(F-\{x\}+\{y\}) \leq (1-\alpha/k)cost(F)$}{ Select $(x, y) = \arg\min_{x\in F,\, y\in U \setminus F} \ cost(F - \{x\} + \{y\})$\; Swap operation: $ F \leftarrow F -\{x\}+\{y\}$ } \Output{Center set $F$ } \caption{Local search for $k$-median clustering~\citep{DBLP:journals/siamcomp/AryaGKMMP04}} \label{alg:rand local search} \end{algorithm2e} \subsection{$k$-median++ Initialization} Although local search is able to find a solution with constant error, it takes $O(n^2)$ time per iteration~\cite{Article:Resende_2007} over an expected $O(k\log n)$ steps (in total $O(kn^2\log n)$) when started from a random center set, which would be slow for large datasets. Indeed, such a costly procedure is not needed to reduce the cost at the beginning, i.e., when the cost is still large. To accelerate the process, efficient initialization methods find a ``roughly'' good center set as the starting point for local search. In this paper, we compare our new initialization scheme mainly with a popular (and perhaps the most well-known) initialization method, $k$-median++~\citep{DBLP:conf/soda/ArthurV07} (see Algorithm~\ref{alg:k-median++}).
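As a concrete illustration of the distance-proportional sampling in $k$-median++, a minimal Python sketch might look as follows (the function name and the cumulative-sum sampling are our own choices; any metric `dist` can be plugged in):

```python
import random

def kmedianpp_init(points, k, dist, seed=0):
    """k-median++: pick the first center uniformly at random, then sample
    each subsequent center with probability proportional to its distance
    rho(u, F) to the current center set F."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    d = [dist(p, centers[0]) for p in points]   # d[i] = rho(points[i], F)
    for _ in range(1, k):
        total = sum(d)
        if total == 0:                          # all points already covered
            centers.append(rng.choice(points))
            continue
        # sample index i with probability d[i] / total
        r, acc, idx = rng.uniform(0, total), 0.0, 0
        for i, w in enumerate(d):
            acc += w
            if acc >= r:
                idx = i
                break
        centers.append(points[idx])
        # update distances to the enlarged center set
        d = [min(d[i], dist(p, points[idx])) for i, p in enumerate(points)]
    return centers
```

Maintaining the distance array `d` incrementally is what gives the $O(nk)$ total running time.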
\begin{algorithm2e}[h] \SetKwInput{Input}{Input} \SetKwInput{Output}{Output} \Input{Data points $U$, number of centers $k$} \DontPrintSemicolon Randomly pick a point $c_1\in U$ and set $F=\{c_1\}$\; \For{$i=2$ to $k$}{ Select $c_i=u\in U$ with probability $\frac{\rho(u,F)}{\sum_{u'\in U} \rho(u',F)}$\; $F = F\cup \{c_i\}$ } \Output{$k$-median++ initial center set $F$} \caption{$k$-median++ initialization~\citep{DBLP:conf/soda/ArthurV07}} \label{alg:k-median++} \end{algorithm2e} Here, $\rho(u,F)$ denotes the shortest distance from a data point $u$ to its closest center in the current set $F$. \cite{DBLP:conf/soda/ArthurV07} showed that the center set $F$ output by $k$-median++ achieves an $O(\log k)$ approximation error with time complexity $O(nk)$. Starting from this initialization, we only need to run $O(k\log \log k)$ steps of the computationally heavy local search to reach a constant-error solution. Thus, initialization may greatly improve the clustering efficiency. \newpage \section{Initialization via Hierarchically Well-Separated Tree (HST)} \label{sec:NDP} In this section, we propose our novel initialization scheme for $k$-median clustering and provide our analysis in the non-private case, solving \eqref{def:cost rho}. The idea is based on metric embedding theory. We start with an introduction to the main tool used in our approach. \subsection{Hierarchically Well-Separated Tree (HST)} In this paper, for an $L$-level tree, we count levels in descending order down the tree. We use $h_v$ to denote the level of node $v$, and $n_i$ to denote the number of nodes at level $i$. The Hierarchically Well-Separated Tree (HST) is built from padded decompositions of a general metric space, applied in a hierarchical manner~\citep{DBLP:journals/jcss/FakcharoenpholRT04}. Let $(U, \rho)$ be a metric space with $|U|=n$; we refer to this metric space throughout without further clarification.
A $\beta$-padded decomposition of $U$ is a probability distribution over partitions of $U$ such that the diameter of each cluster $U_i\subseteq U$ is at most $\beta$, i.e., $\rho(u,v)\leq \beta$, $\forall u,v\in U_i$, $i=1,...,k$. The formal definition of an HST is given below. \begin{definition} \label{def:2-HST} Assume $\min_{u,v\in U}\rho(u,v)=1$ and denote $\triangle=\max_{u,v\in U}\rho(u,v)$. An $\alpha$-Hierarchically Well-Separated Tree ($\alpha$-HST) with depth $L$ is an edge-weighted rooted tree $T$ such that the edge between any pair of nodes at levels $i-1$ and $i$ has length at most $\triangle/\alpha^{L-i}$. \end{definition} In this paper, we consider the 2-HST ($\alpha=2$) for simplicity, as $\alpha$ only affects the constants in our theoretical analysis. As presented in Algorithm~\ref{alg:new_hst}, the construction starts by applying a permutation $\pi$ on $U$, so that in the following steps the points are picked in a random sequence. We first find a padded decomposition $P_L=\{P_{L,1},...,P_{L,n_L}\}$ of $U$ with parameter $\beta=\triangle/2$. The center of each partition $P_{L,j}$ serves as a root node at level $L$. Then, we recompute a padded decomposition of each partition $P_{L,j}$ to find sub-partitions with diameter $\beta=\triangle/4$, and set the corresponding centers as the nodes at level $L-1$, and so on. Each partition at level $i$ is obtained with $\beta=\triangle /2^{L-i}$. This process proceeds until a node contains a single point, or a pre-specified tree depth is reached. \begin{figure}[h] \centering \mbox{ \includegraphics[width=2.3in]{arxiv-fig/padded_color.pdf}\hspace{0.2in} \includegraphics[width=3.6in]{arxiv-fig/tree.pdf} } \vspace{-0.05in} \caption{An illustrative example of a 3-level padded decomposition and its corresponding 2-HST. \textbf{Left:} The thickness of each ball represents its level. The colors correspond to the levels of the HST in the right panel.
``$\triangle$'''s are the center nodes of the partitions (balls), and ``$\times$'''s are non-center data points. \textbf{Right:} The resulting 2-HST generated from the padded decomposition. } \label{fig:padded}\vspace{0.15in} \end{figure} In Figure~\ref{fig:padded}, we provide an example of a 3-level padded decomposition (left panel), along with the corresponding $L=3$-level 2-HST (right panel). Beyond this basic construction, which we present for illustration, \cite{Proc:Guy_ICALP2017} proposed an efficient HST construction that runs in $O(m\log n)$ time, where $n$ and $m$ are the number of nodes and the number of edges in a graph, respectively. \begin{algorithm2e}[t] \SetKwInput{Input}{Input} \SetKwInput{Output}{Output} \Input{Data points $U$ with diameter $\triangle$, $L$} \DontPrintSemicolon Randomly pick a point in $U$ as the root node of $T$\; Let $r=\triangle/2$\; Apply a permutation $\pi$ on $U$ \tcp*{so points will be chosen in a random sequence} \For{each $v\in U$}{ Set $C_v=[v]$\; \For{each $u\in U$}{ Add $u\in U$ to $C_v$ if $\rho(v,u)\leq r$ and $u\notin \bigcup_{v'\neq v} C_{v'}$ } } Set the non-empty clusters $C_v$ as the children nodes of $T$\; \For{each non-empty cluster $C_v$}{ Run 2-HST$(C_v,L-1)$ to extend the tree $T$; stop after $L$ levels or upon reaching a leaf node } \Output{2-HST $T$} \caption{Build 2-HST($U,L$)} \label{alg:new_hst} \end{algorithm2e} \begin{algorithm2e}[t] \SetKwInput{Input}{Input} \SetKwInput{Output}{Output} \SetKwInput{Initialization}{Initialization} \Input{$U$, $\triangle$, $k$} \Initialization{$L=\log \triangle$, $C_0=\emptyset, C_1=\emptyset$} \DontPrintSemicolon Call Algorithm~\ref{alg:new_hst} to build a level-$L$ 2-HST $T$ using $U$\; \For{each node $v$ in $T$}{ $N_v\gets |U\cap T(v)| $\; $score(v)\gets N_v\cdot 2^{h_v}$ } \While{$|C_1|<k$ }{ Add the top $(k-|C_1|)$ nodes with the highest scores to $C_1$\; \For{each $v\in C_1$}{ $C_1=C_1\setminus \{v\}$, if $\exists\ v'\in C_1$ such that $v'$ is a descendant of $v$ } } $C_0=\textrm{FIND-LEAF}(T,C_1)$
\Output{Initial center set $C_0 \subseteq U$} \caption{NDP-HST initialization} \label{alg:new_initial-NDP} \end{algorithm2e} The first step of our method is to embed the data points into an HST (see Algorithm~\ref{alg:new_initial-NDP}). Next, we describe our proposed new strategy to search for the initial centers on the tree (w.r.t. the tree metric). Before moving on, it is worth mentioning that there are polynomial-time algorithms for computing an \textit{exact} $k$-median solution in the tree metric (\cite{DBLP:journals/orl/Tamir96,Rahul2003}). However, these dynamic programming algorithms have high complexity (e.g., $O(kn^2)$), making them unsuitable for the purpose of fast initialization. Moreover, it is unknown how to apply them effectively to the private case. As will be shown, our new algorithm 1) is very efficient, 2) gives $O(1)$ approximation error in the tree metric, and 3) can be easily extended to the DP setting (Section~\ref{sec:DP}). \subsection{HST Initialization Algorithm} Let $L = \log \triangle$ and suppose $T$ is a level-$L$ 2-HST in $(U,\rho)$, where we assume $L$ is an integer. For a node $v$ at level $i$, we use $T(v)$ to denote the subtree rooted at $v$. Let $N_v=|T(v)|$ be the number of data points in $T(v)$. The search strategy for the initial centers, NDP-HST initialization (``NDP'' stands for ``Non-Differentially Private''), is presented in Algorithm~\ref{alg:new_initial-NDP} and consists of two phases. \newpage \noindent\textbf{Subtree search.} The first step is to identify the subtrees that contain the $k$ centers. To begin with, the $k$ nodes of $T$ with the largest $score(v)=N_v\cdot 2^{h_v}$ are picked as the initial set $C_1$. This is intuitive: to get a good clustering, we typically want the ball surrounding each center to include many data points. Next, we do a screening over $C_1$: if there is any ancestor-descendant pair of nodes, we remove the ancestor from $C_1$.
If the current size of $C_1$ is smaller than $k$, we repeat the process until $k$ centers are chosen (we do not re-select nodes already in $C_1$ or their ancestors). This way, $C_1$ contains the root nodes of $k$ disjoint subtrees. \begin{algorithm2e}[h] \SetKwInput{Input}{Input} \SetKwInput{Output}{Output} \SetKwInput{Initialization}{Initialization} \Input{$T$, $C_1$} \Initialization{$C_0=\emptyset$} \DontPrintSemicolon \For{each node $v$ in $C_1$}{ \While{$v$ is not a leaf node}{ $v \gets \arg\max_{w\in ch(v)} N_w$, where $ch(v)$ denotes the children nodes of $v$\; } Add $v$ to $C_0$ } \Output{Initial center set $C_0 \subseteq U$} \caption{FIND-LEAF ($T,C_1$)} \label{alg:new_initial_prune} \end{algorithm2e} \noindent\textbf{Leaf search.} After we find $C_1$, the set of roots of the $k$ subtrees, the next step is to find the center within each subtree using Algorithm~\ref{alg:new_initial_prune} (``FIND-LEAF''). We employ a greedy search strategy: level by level, we move to the child node containing the most data points, until a leaf is reached. This is intuitive, since the diameter of the partition ball decays exponentially with the level; we are thus focusing more and more on the region with higher density (i.e., with more data points). The complexity of our search algorithm is given as follows. \begin{proposition}[Complexity] Algorithm~\ref{alg:new_initial-NDP} takes $O(dn\log n)$ time in the Euclidean space. \label{prop:time} \end{proposition} \begin{remark} The complexity of HST initialization is in general comparable to the $O(dnk)$ of $k$-median++. Our algorithm would be faster when $k>\log n$, i.e., when the number of centers is large. A similar comparison also holds for general metrics. \end{remark} \subsection{Approximation Error of HST Initialization} First, we show that the initial center set produced by NDP-HST is already a good approximation to the optimal $k$-median solution.
Let $\rho^{T}(x,y)=d_T(x,y)$ denote the ``2-HST metric'' between $x$ and $y$ in the 2-HST $T$, where $d_T(x,y)$ is the tree distance between nodes $x$ and $y$ in $T$. By Definition \ref{def:2-HST} and since $\triangle=2^L$, in the analysis we equivalently assume that the edge weight at the $i$-th level is $2^{i-1}$. The crucial step of our analysis is to examine the approximation error in terms of the 2-HST metric, after which the error can be transferred to general metrics by the following lemma~\citep{DBLP:conf/focs/Bartal96}. \begin{lemma} \label{lemma:metrics} In a metric space $(U,\rho)$ with $|U|=n$ and diameter $\triangle$, it holds that $E[\rho^T(x,y)]=O(\min\{\log n,\log \triangle\})\rho(x,y)$. In the Euclidean space $\mathbb R^d$, $E[\rho^T(x,y)]=O(d)\rho(x,y)$. \end{lemma} Recall $C_0, C_1$ from Algorithm~\ref{alg:new_initial-NDP}. We define \begin{align} cost_k^T(U) &= \sum_{y\in U} \min_{x\in C_0} \rho^T(x,y), \label{eqn:cost}\\ {cost_k^T}'(U,C_1) &= \min_{\substack{|F\cap T(v)|=1,\\ \forall v\in C_1}} \sum_{y\in U} \min_{x\in F} \rho^T(x,y), \label{eqn:cost'}\\ OPT_{k}^T(U) &= \min_{F\subset U,|F|=k} \sum_{y\in U} \min_{x\in F} \rho^{T}(x,y) \equiv \min_{C_1'}\ {cost_k^T}'(U,C_1'). \label{eqn:OPT} \end{align} For simplicity, we will use ${cost_k^T}'(U)$ to denote ${cost_k^T}'(U,C_1)$. Here, $OPT_k^T$ \eqref{eqn:OPT} is the cost of the globally optimal solution under the 2-HST metric. The last equivalence in \eqref{eqn:OPT} holds because the optimal center set can always be located in $k$ disjoint subtrees, as each leaf contains only one point. \eqref{eqn:cost} is the $k$-median cost under the 2-HST metric of the output $C_0$ of Algorithm~\ref{alg:new_initial-NDP}. \eqref{eqn:cost'} is the oracle cost after the subtrees are chosen; that is, it represents the optimal cost of picking one center from each subtree in $C_1$. We first bound the approximation errors of the subtree search and the leaf search, respectively.
\begin{lemma}[Subtree search]\label{lem:good_center_nonprivate} ${cost_k^T}'(U) \leq 5 OPT^T_{k}(U)$. \end{lemma} \begin{lemma}[Leaf search] \label{lem:greedy_first NDP} $cost^{T}_k(U)\leq 2 {cost_k^T}'(U)$. \end{lemma} Combining Lemma~\ref{lem:good_center_nonprivate} and Lemma~\ref{lem:greedy_first NDP}, we obtain \begin{theorem}[2-HST error]\label{thm:NDP 2-HST metric} Running Algorithm~\ref{alg:new_initial-NDP}, we have $cost^{T}_k(U) \leq 10OPT^T_k(U) $. \end{theorem} Thus, HST initialization produces an $O(1)$ approximation to $OPT$ in the 2-HST metric. Define $cost_k(U)$ as \eqref{def:cost rho} for our HST centers, and the optimal cost w.r.t. $\rho$ as \begin{align} OPT_{k}(U) = \min_{|F|=k} \sum_{y\in U} \min_{x\in F} \rho(x,y). \end{align} We have the following result based on Lemma~\ref{lemma:metrics}. \begin{theorem}\label{theo:NDP rho metric} In a general metric space, the expected $k$-median cost of Algorithm~\ref{alg:new_initial-NDP} is $E[cost_k(U)]=O(\min \{\log n, \log \triangle \}) OPT_k(U)$. \end{theorem} \begin{remark} In the Euclidean space, \cite{MakarychevMR19} proved that $O(\log k)$ random projections suffice for $k$-median to achieve $O(1)$ error. Thus, if $\triangle=O(d)$ (e.g., bounded data), by Lemma~\ref{lemma:metrics}, HST initialization is able to achieve $O(\log (\min\{d,k\}))$ error, which is better than the $O(\log k)$ of $k$-median++ when $d$ is small. \end{remark} \vspace{0.1in} \noindent\textbf{NDP-HST Local Search.} We are interested in the approximation quality of standard local search (Algorithm~\ref{alg:rand local search}) when initialized by our NDP-HST. \begin{theorem} \label{theo:NDP} NDP-HST local search achieves $O(1)$ approximation error within an expected $O(k\log \log\min\{ n, \triangle\})$ iterations for inputs in a general metric space. \end{theorem} Before ending this section, we remark that the initial centers found by NDP-HST can analogously be used for $k$-means clustering.
For general metrics, $E[cost_{km}(U)]=O(\min \{\log n, \log \triangle \})^2 OPT_{km}(U)$, where $cost_{km}(U)$ is the $k$-means cost of our solution and $OPT_{km}(U)$ the optimal $k$-means cost. See Appendix~\ref{sec:append k-means} for the detailed (and similar) analysis. \section{HST Initialization with Differential Privacy} \label{sec:DP} In this section, we consider initialization methods with differential privacy (DP). Recall from (\ref{def:cost rho DP}) that $U$ is the universe of data points, and $D\subset U$ is a demand set that needs to be clustered with privacy. Since $U$ is public, simply running initialization algorithms on $U$ would preserve the privacy of $D$. However, 1) this might be too expensive; and 2) in many cases one would want to incorporate some information about $D$ in the initialization, since $D$ could be a very imbalanced subset of $U$. For example, $D$ may only contain data points from one cluster, out of tens of clusters in $U$. In this case, initialization on $U$ is likely to pick initial centers in multiple clusters, which would not be helpful for clustering on $D$. Next, we show how our proposed HST initialization can be easily combined with differential privacy while at the same time incorporating information about the demand set $D$, leading to improved approximation error (Theorem~\ref{thm:DP-HST}). Again, suppose $T$ is an $L=\log\triangle$-level 2-HST of the universe $U$ in a general metric space. Denote $N_v=|T(v)\cap D|$ for a node $v$. Our private HST initialization (DP-HST) is similar to the non-private Algorithm~\ref{alg:new_initial-NDP}. To gain privacy, we perturb $N_v$ by adding i.i.d. Laplace noise: $$\hat{N_v}=N_v+Lap(2^{(L-h_v)}/\epsilon),$$ where $Lap(2^{(L-h_v)}/\epsilon)$ is a Laplace random variable with scale parameter $2^{(L-h_v)}/\epsilon$. We use the perturbed $\hat N_v$ for node selection instead of the true value $N_v$, as described in Algorithm~\ref{alg:new_initial DP}.
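The per-level perturbation can be sketched as follows (a minimal illustration assuming node levels and true counts are given as parallel lists; the function names are ours):

```python
import numpy as np

def perturb_counts(levels, counts, L, epsilon, rng=None):
    """Add Lap(2^(L - h_v)/epsilon) noise to each subtree count N_v.
    `levels[i]` is the level h_v of node i; `counts[i]` is its true N_v."""
    rng = rng or np.random.default_rng()
    noisy = []
    for h, n in zip(levels, counts):
        scale = 2.0 ** (L - h) / epsilon   # noise scale grows toward the leaves
        noisy.append(n + rng.laplace(0.0, scale))
    return noisy

def scores(levels, noisy_counts):
    """score(v) = N_v_hat * 2^{h_v}, used to rank candidate subtrees."""
    return [n * 2.0 ** h for h, n in zip(levels, noisy_counts)]
```

Note that the noise scale $2^{L-h_v}/\epsilon$ is smaller near the root, where the counts carry more weight in the score.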
The DP guarantee of this initialization scheme follows directly from the composition theorem~\citep{DBLP:conf/icalp/Dwork06}. \begin{algorithm2e}[h] \SetKwInput{Input}{Input} \SetKwInput{Output}{Output} \SetKwInput{Initialization}{Initialization} \Input{$U,D$, $\triangle$, $k$, $\epsilon$} \DontPrintSemicolon Build a level-$L$ 2-HST $T$ based on input $U$\; \For{each node $v$ in $T$}{ $N_v\gets |D\cap T(v)| $\; $\hat{N_v}\gets N_v+Lap(2^{(L-h_v)}/\epsilon)$\; $score(v)\gets \hat{N}_v\cdot 2^{h_v}$ } Based on $\hat N_v$, apply the same strategy as Algorithm~\ref{alg:new_initial-NDP}: find $C_1$; $C_0=\text{FIND-LEAF}(T,C_1)$ \Output{Private initial center set $C_0 \subseteq U$ } \caption{DP-HST initialization} \label{alg:new_initial DP} \end{algorithm2e} \begin{theorem} Algorithm~\ref{alg:new_initial DP} is $\epsilon$-differentially private. \end{theorem} \begin{proof} At each level $i$, the subtrees $T(v,i)$ are disjoint from each other. The privacy budget used at the $i$-th level is $\epsilon/2^{(L-i)}$, and the total privacy is $\sum _{i} \epsilon/2^{(L-i)}<\epsilon$. \end{proof} We now consider the approximation error. As the structure of the analysis is similar to the non-DP case, we present the main result here and defer the detailed proofs to Appendix~\ref{sec:append proof}. \begin{theorem}\label{theo:DP rho metric} Algorithm~\ref{alg:new_initial DP} finds initial centers such that $$E[cost_k(D)]=O(\log n) (OPT_k(D)+k\epsilon^{-1}\triangle\log n).$$ \end{theorem} \textbf{DP-HST Local Search.} Similarly, we can use private HST initialization to improve the performance of private $k$-median local search, as presented in Algorithm~\ref{alg:DP-HST}. After initialization, the DP local search procedure follows~\cite{DBLP:conf/soda/GuptaLMRT10}, using the exponential mechanism.
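A single private swap step of the exponential-mechanism local search can be sketched as follows (a simplified, hypothetical implementation; we reproduce the $\exp(-\epsilon' \cdot cost)$ weighting up to a constant shift, which leaves the sampling distribution unchanged):

```python
import numpy as np

def private_swap(F, candidates, cost_fn, eps_prime, rng=None):
    """One DP local-search step: sample a swap (x, y) with probability
    proportional to exp(-eps' * cost(F - {x} + {y}))."""
    rng = rng or np.random.default_rng()
    swaps, costs = [], []
    for x in F:
        for y in candidates:
            if y in F:
                continue
            newF = [c for c in F if c != x] + [y]
            swaps.append((x, y))
            costs.append(cost_fn(newF))
    c = np.asarray(costs)
    logits = -eps_prime * (c - c.min())   # shift by the min cost for stability
    p = np.exp(logits)
    p /= p.sum()
    return swaps[rng.choice(len(swaps), p=p)]
```

Lower-cost swaps receive exponentially more probability mass, so with a moderate $\epsilon'$ the sampled swap is close to the greedy choice while remaining private.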
\begin{algorithm2e}[h] \SetKwInput{Input}{Input} \SetKwInput{Output}{Output} \SetKwInput{Initialization}{Initialization} \Input{$U$, demand points $D \subseteq U $, parameter $k, \epsilon$, $T$} \Initialization{$F_1$ the private initial centers generated by Algorithm~\ref{alg:new_initial DP} with privacy $\epsilon/2$} \DontPrintSemicolon Set parameter $\epsilon'=\frac{\epsilon}{4\triangle (T+1)}$ \; \For{$i=1$ to $T$}{ Select $(x, y) \in F_i \times (U \setminus F_i)$ with probability proportional to $\exp (-\epsilon' \cdot cost(F_i - \{x\} + \{y\}))$\; Let $ F_{i+1} \leftarrow F_i-\{x\}+\{y\} $ } Select $j$ from $\{1, 2, ... , T+1 \}$ with probability proportional to $\exp(-\epsilon'\cdot cost(F_j ))$ \Output{$F=F_j$ the private center set} \caption{DP-HST local search} \label{alg:DP-HST} \end{algorithm2e} \begin{theorem} \label{thm:DP-HST} Algorithm~\ref{alg:DP-HST} achieves $\epsilon$-differential privacy. With probability $(1-\frac{1}{poly(n)})$, the output centers admit \begin{align*} cost_k(D)\leq 6OPT_k(D)+O(\epsilon^{-1}k^2\triangle(\log\log n)\log n) \end{align*} in $T=O(k\log\log n)$ iterations. \end{theorem} The DP local search with random initialization~\citep{DBLP:conf/soda/GuptaLMRT10} has multiplicative error 6 and additive error $O(\epsilon^{-1}\triangle k^2\log^2 n)$. Our result improves the $\log n$ term to $\log\log n$ in the additive error. Meanwhile, the number of iterations needed is improved from $T=O(k\log n)$ to $O(k\log\log n)$ (see Section~\ref{sec:iteration-cost} for an empirical justification). Notably, it has been shown in \cite{DBLP:conf/soda/GuptaLMRT10} that for the $k$-median problem, the lower bounds on the multiplicative and additive errors of any $\epsilon$-DP algorithm are $\Omega(1)$ and $\Omega(\epsilon^{-1}\triangle k\log (n/k))$, respectively. Our result matches the lower bound on the multiplicative error, and our additive error exceeds the bound only by a factor of $O(k\log\log n)$, which would be small in many cases.
To our knowledge, Theorem~\ref{thm:DP-HST} is the first result in the literature to improve the error of DP local search in general metric spaces. \section{Experiments} \label{sec:experiment} \subsection{Datasets and Algorithms} \begin{figure}[b!] \vspace{-0.3in} \centering \mbox{\hspace{-0.2in} \includegraphics[width=3.4in]{arxiv-fig/dist_matrix_r1.pdf} \includegraphics[width=3.4in]{arxiv-fig/dist_matrix.pdf} } \vspace{-0.25in} \caption{Examples of synthetic graphs: subgraphs of 50 nodes. \textbf{Left:} $r=1$. \textbf{Right:} $r=100$. Darker and thicker edges indicate smaller distances. When $r=100$, the graph is more separable.} \label{fig:graph example}\vspace{-0.25in} \end{figure} \noindent\textbf{Discrete Euclidean space.}\hspace{0.1in}Following previous work, we test $k$-median clustering on the MNIST hand-written digit dataset~\citep{lecun1998gradient} with 10 natural clusters (digits 0 to 9). We set $U$ as 10000 randomly chosen data points. We choose the demand set $D$ using two strategies: 1) ``balanced'', where we randomly choose 500 samples from $U$; 2) ``imbalanced'', where $D$ contains 500 random samples from $U$ drawn only from digits ``0'' and ``8'' (two clusters). We note that the imbalanced $D$ is a very practical setting in real-world scenarios, where data are typically not uniformly distributed. On this dataset, we test clustering with both the $l_1$ and $l_2$ distance as the underlying metric. \vspace{0.1in} \noindent\textbf{Metric space induced by graphs.}\hspace{0.1in}Random graphs have been widely used for testing $k$-median methods~\citep{DBLP:conf/nips/BalcanEL13,todo2019fast}. The construction of the graphs follows an approach similar to the synthetic \textit{pmedinfo} graphs provided by the popular OR-Library~\citep{beasley1990_OR}. The metric $\rho$ for this experiment is the shortest (weighted) path distance. To generate a graph of size $n$, we first randomly split the nodes into $10$ clusters.
Within each cluster, each pair of nodes is connected with probability $0.2$, with edge weight drawn from the standard uniform distribution. For each pair of clusters, we randomly connect some nodes from each cluster, with weights drawn uniformly from $[0.5,r]$. A larger $r$ makes the graph more separable, i.e., the clusters are farther from each other. In Figure~\ref{fig:graph example}, we plot two example graphs (subgraphs of 50 nodes) for the two cases we consider, $r=1$ and $r=100$. For this task, $U$ has 3000 nodes, and the private set $D$ (500 nodes) is chosen using the same ``balanced'' and ``imbalanced'' schemes as described above. In the imbalanced case, we choose $D$ randomly from only two clusters. \vspace{0.1in} \noindent\textbf{Algorithms.} We compare the following clustering algorithms in both the non-DP and DP settings: (1) \textbf{NDP-rand:} Local search with random initialization; (2) \textbf{NDP-kmedian++:} Local search with $k$-median++ initialization (Algorithm~\ref{alg:k-median++}); (3) \textbf{NDP-HST:} Local search with NDP-HST initialization (Algorithm~\ref{alg:new_initial-NDP}), as described in Section~\ref{sec:NDP}; (4) \textbf{DP-rand:} Standard DP local search algorithm~\citep{DBLP:conf/soda/GuptaLMRT10}, which is Algorithm~\ref{alg:DP-HST} with initial centers randomly chosen from $U$; (5) \textbf{DP-kmedian++:} DP local search with $k$-median++ initialization run on $U$; (6) \textbf{DP-HST:} DP local search with HST initialization (Algorithm~\ref{alg:DP-HST}). For non-DP tasks, we set $L=6$; for DP clustering, we use $L=8$. For non-DP methods, we set $\alpha=10^{-3}$ in Algorithm~\ref{alg:rand local search} and cap the number of iterations at 20. To examine the quality of the initialization as well as the final centers, we report both the cost at initialization and the cost of the final output. For DP methods, we run the algorithms for $T=20$ steps and report the results with $\epsilon=1$. We test $k\in\{2,5,10,15,20\}$.
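The synthetic graph construction described above can be sketched as follows (the number of cross-cluster edges per pair of clusters is our own choice for illustration; the paper only says ``some nodes''):

```python
import random

def make_clustered_graph(n, n_clusters=10, p_in=0.2, r=100.0, seed=0):
    """Synthetic graph: nodes are split into clusters; intra-cluster edges
    appear with probability p_in and Uniform(0,1) weights; each pair of
    clusters is linked by a few random edges with Uniform(0.5, r) weights."""
    rng = random.Random(seed)
    nodes = list(range(n))
    rng.shuffle(nodes)
    clusters = [nodes[i::n_clusters] for i in range(n_clusters)]
    edges = {}
    for cl in clusters:                       # intra-cluster edges
        for i in range(len(cl)):
            for j in range(i + 1, len(cl)):
                if rng.random() < p_in:
                    edges[(cl[i], cl[j])] = rng.random()
    for a in range(n_clusters):               # cross-cluster edges
        for b in range(a + 1, n_clusters):
            for _ in range(3):                # "a few" edges per cluster pair
                u, v = rng.choice(clusters[a]), rng.choice(clusters[b])
                edges[(u, v)] = rng.uniform(0.5, r)
    return clusters, edges
```

With a large $r$, cross-cluster edges are long on average, so shortest-path distances between clusters dominate intra-cluster distances and the graph becomes more separable.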
The average cost over $T$ iterations is reported for robustness. All results are averaged over 10 independent repetitions. \subsection{Results} The results on the MNIST dataset are given in Figure~\ref{fig:mnist}. The comparisons are similar for both $l_1$ and $l_2$: \begin{itemize} \item From the left column, the initial centers found by HST have lower cost than those of $k$-median++ and random initialization, in both the non-DP and DP settings, and for both balanced and imbalanced demand sets $D$. This confirms that the proposed HST initialization is more powerful than $k$-median++ at finding good initial centers. \item From the right column, we also observe a lower final cost of HST followed by local search in DP clustering. In the non-DP case, the final cost curves overlap, which means that although HST offers better initial centers, local search can eventually find a good solution either way. \item The advantage of DP-HST, in terms of both the initial and the final cost, is more significant when $D$ is an imbalanced subset of $U$. As mentioned before, this is because our DP-HST initialization approach also privately incorporates the information of $D$. \end{itemize} The results on graphs are reported in Figure~\ref{fig:graph} and lead to similar conclusions. In all cases, our proposed HST scheme finds better initial centers with smaller cost than $k$-median++. Moreover, HST again considerably outperforms $k$-median++ in the private and imbalanced-$D$ setting, for both $r=100$ (highly separable) and $r=1$ (less separable). The advantages of HST over $k$-median++ are especially significant in the harder tasks with $r=1$, i.e., when the clusters are nearly mixed up.
\begin{figure}[h] \centering \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_init_bal1_eps1.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_init_bal1_eps1_l2.pdf} } \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_final_bal1_eps1.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_final_bal1_eps1_l2.pdf} } \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_init_bal0_eps1.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_init_bal0_eps1_l2.pdf} } \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_final_bal0_eps1.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_final_bal0_eps1_l2.pdf} } \vspace{-0.1in} \caption{Initial and final $k$-median cost on MNIST dataset. \textbf{1st column:} $l_1$ distance. \textbf{2nd column:} $l_2$ distance. } \label{fig:mnist} \end{figure} \begin{figure}[h] \centering \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_init_bal1_eps1.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_init_bal1_eps1_r1.pdf} } \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_final_bal1_eps1.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_final_bal1_eps1_r1.pdf} } \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_init_bal0_eps1.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_init_bal0_eps1_r1.pdf}} \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_final_bal0_eps1.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_final_bal0_eps1_r1.pdf} } \vspace{-0.1in} \caption{Initial and final $k$-median cost on graph dataset. \textbf{1st column:} $r=100$. \textbf{2nd column:} $r=1$.
} \label{fig:graph} \end{figure} \newpage\clearpage \subsection{Improved Iteration Cost of DP-HST} \label{sec:iteration-cost} In Theorem~\ref{thm:DP-HST}, we show that under differential privacy constraints, the proposed DP-HST (Algorithm~\ref{alg:DP-HST}) improves both the approximation error and the number of iterations required to find a good solution, compared with classical DP local search~\citep{DBLP:conf/soda/GuptaLMRT10}. In this section, we provide numerical results to support the theory. First, we need to properly measure the iteration cost of DP local search. Unlike in non-private clustering, the $k$-median cost after each iteration of DP local search does not decrease monotonically, due to the probabilistic exponential mechanism. To this end, for the cost sequence of length $T=20$, we compute its moving-average sequence with window size $5$. Attaining the minimal value of the moving average indicates that the algorithm has found a ``local optimum'', i.e., it has reached a ``neighborhood'' of solutions with small clustering cost. Thus, we use the number of iterations needed to reach this local optimum as the measure of iteration cost. The results are provided in Figure~\ref{fig:iter cost}. On all tasks (MNIST with $l_1$ and $l_2$ distance, and the graph dataset with $r=1$ and $r=100$), DP-HST has significantly smaller iteration cost. In Figure~\ref{fig:min cost}, we further report the $k$-median cost of the best solution found in $T$ iterations by each DP algorithm. DP-HST again provides the smallest cost. This additional set of experiments validates the claims of Theorem~\ref{thm:DP-HST}: DP-HST is able to find better centers in fewer iterations.
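The moving-average criterion for measuring iteration cost can be sketched as follows (window size 5, as in the text; the function name is ours):

```python
def iterations_to_local_opt(costs, window=5):
    """Return the 1-based index at which the moving average of the DP
    local-search cost sequence attains its minimum, i.e., the number of
    iterations needed to reach a 'local optimum'."""
    ma = [sum(costs[i:i + window]) / window
          for i in range(len(costs) - window + 1)]
    return ma.index(min(ma)) + 1
```

Averaging over a window smooths out the non-monotonic fluctuations introduced by the exponential mechanism, so the minimum of the smoothed sequence marks a stable low-cost neighborhood rather than a single lucky iteration.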
\begin{figure}[h] \vspace{0.2in} \centering \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_DP_iter_l1.pdf} \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_DP_iter_l2.pdf} } \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_DP_iter_r100.pdf} \includegraphics[width=2.7in]{arxiv-fig/experiment/graph_DP_iter_r1.pdf} } \vspace{-0.1in} \caption{Iteration cost to reach a locally optimal solution, on MNIST and graph datasets with different $k$. The demand set is an imbalanced subset of the universe.} \label{fig:iter cost} \end{figure} \begin{figure}[t] \centering \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_DP_min_cost_l1.pdf} \includegraphics[width=2.7in]{arxiv-fig/experiment/MNIST_DP_min_cost_l2.pdf} } \vspace{-0.1in} \caption{The $k$-median cost of the best solution found by each differentially private algorithm. The demand set is an imbalanced subset of the universe. The same comparison holds on the graph data.} \label{fig:min cost} \end{figure} \newpage \subsection{Running Time Comparison with $k$-median++} In Proposition~\ref{prop:time}, we show that our HST initialization algorithm admits $O(dn\log n)$ complexity in the Euclidean space. With a careful implementation of Algorithm~\ref{alg:k-median++}, where each data point tracks its distance to the current closest candidate center in $C$, $k$-median++ has $O(dnk)$ running time. Therefore, the running time of our algorithm is in general comparable to that of $k$-median++; our method runs faster when $k=\Omega (\log n)$. \begin{figure}[h] \centering \mbox{ \includegraphics[width=2.7in]{arxiv-fig/experiment/time_compare_k.pdf}\hspace{0.2in} \includegraphics[width=2.7in]{arxiv-fig/experiment/time_compare_n.pdf} } \caption{Empirical running time of HST initialization vs. $k$-median++, on the MNIST dataset with $l_2$ distance. \textbf{Left:} The running time against $k$, on a subset of $n=2000$ data points.
\textbf{Right:} The running time against $n$, with $k=20$ centers.} \label{fig:compare time} \end{figure} In Figure~\ref{fig:compare time}, we plot the empirical running time of HST initialization against $k$-median++, on the MNIST dataset with $l_2$ distance (a similar comparison holds for $l_1$). From the left subfigure, we see that $k$-median++ becomes slower with increasing $k$, and our method is more efficient when $k>20$. In the right panel, we observe that the running time of both methods increases with larger sample size $n$. Our HST algorithm grows slightly faster, as predicted by the complexity comparison ($n\log n$ vs. $n$). However, this $\log n$ factor should not be too significant unless the sample size is extremely large. Overall, our results suggest that the proposed HST initialization has efficiency similar to that of $k$-median++ in common practical scenarios. \section{Conclusion} In this paper, we propose a new initialization framework for the metric $k$-median problem in general (discrete) metric spaces. Our approach, called HST initialization, leverages tools from metric embedding theory. Our novel tree search approach has efficiency and approximation error comparable to the popular $k$-median++ initialization. Moreover, we propose a differentially private (DP) HST initialization algorithm, which adapts to the private demand point set, leading to better clustering performance. When combined with the subsequent DP local search heuristic, our algorithm improves the additive error of DP local search, and our result is within a small factor of the theoretical lower bound. Experiments with Euclidean metrics and graph metrics verify the effectiveness of our method, which improves the cost of both the initial centers and the final $k$-median output.
2010.13676
\section{Introduction} \label{sec:introduction} \input{introduction} \section{Related Work} \label{sec:related-work} \input{related-work} \section{Robust Face Frontalization} \label{sec:frontalization-landmarks} \subsection{Outline of the Proposed Method} \label{sec:outline} \input{outline} \subsection{Frontalization via Alignment of 3D Point Sets} \input{face_frontalization} \section{Algorithm Analysis and Implementation Details} \label{sec:implementation} \input{implementation} \section{Experiments} \label{sec:experiments} \input{experiments} \section{Conclusions} \label{sec:conclusions} In this paper we introduced a robust face frontalization (RFF) method that is based on the simultaneous estimation of the rigid transformation between two 3D point sets and of the non-rigid deformation of a 3D face model. This is combined with pixel-to-pixel warping between an input image of an arbitrarily-viewed face and a synthesized frontal view of the face. The proposed method yields state-of-the-art results, both quantitatively and qualitatively, when compared with two recently proposed methods. Up to now, the performance of face frontalization has been evaluated using face recognition benchmarks. We advocate that direct image-to-image comparison between the predicted output and the associated ground truth yields a better assessment criterion, one that is not biased by another task, e.g. face recognition. It is worth noting that the performance of face recognition can be boosted by combining face frontalization with the use of facial symmetry. However, the latter is likely to introduce undesired artefacts and unrealistic facial deformations, which are quite damaging for other face analysis tasks, such as expression recognition or lip reading. In the future, we plan to extend the proposed method to image sequences in order to allow robust temporal analysis of faces. In particular, we are interested in combining face frontalization with audio-visual speech enhancement.
As already outlined, face frontalization may well be viewed as a process of discriminating between rigid head movements and non-rigid facial deformations, which can be used to eliminate head motions that naturally accompany speech production, and hence improve the performance of visual speech reading and of audio-visual speech enhancement. \appendices \section{Robust Estimation of the Rigid Alignment Between Two 3D Point Sets} \label{app:robust-alignment} \input{robust-alignment} \section{Deformable Shape Model} \label{app:statistical-shape} \input{statistical-shape} \section{Robust Estimation of the Alignment Between a Deformable Shape and a 3D Point Set} \label{app:robust-shape-fit} \input{robust-shape-fit} \bibliographystyle{IEEEtran} \subsection{3D Morphable Model} As already mentioned, face frontalization has essentially been used as a pre-processing step for face recognition, and the evaluation metrics proposed so far are generally based on the recognition rate \cite{yim2015rotating,wang2016facial,banerjee2018frontalize,zhao2018towards}. This kind of evaluation lacks direct significance for standalone face frontalization methods. Moreover, existing evaluation pipelines include facial-expression normalization \cite{zhu2015high}, which biases the results and does not allow one to evaluate the extent to which face frontalization preserves non-rigid facial deformations.
2010.13638
\section{Introduction} \setcounter{lemma}{0} \setcounter{theorem}{0} \setcounter{equation}{0}\setcounter{proposition}{0} \setcounter{Rem}{0}\setcounter{conjecture}{0} For $m,n\in\N=\{0,1,2\ldots\}$, the truncated hypergeometric series ${}_{m+1}F_m$ is defined by $$ {}_{m+1}F_m\bigg[\begin{matrix}x_0&x_1&\ldots&x_m\\ &y_1&\ldots&y_m\end{matrix}\bigg|z\bigg]_n=\sum_{k=0}^n\f{(x_0)_k(x_1)_k\cdots(x_m)_k}{(y_1)_k\cdots(y_m)_k}\cdot\f{z^k}{k!}, $$ where $(x)_k=x(x+1)\cdots(x+k-1)$ is the Pochhammer symbol. During the past few decades, supercongruences concerning truncated hypergeometric series have been widely studied (cf. \cite{GZ,Long2011,MaoZhang2019,Sun2011,Sun2014,vH1997,Wang2020,WangSD2018,Zudilin2009}). In 2011, Sun \cite{Sun2011} proposed some conjectural supercongruences which relate truncated hypergeometric series to Euler numbers and Bernoulli numbers (see \cite{Sun2011} for the definitions of Euler numbers and Bernoulli numbers). For example, he conjectured that for any prime $p>3$ we have \begin{equation}\label{sunconj1} \sum_{k=0}^{(p-1)/2}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\eq p+2(-1)^{(p-1)/2}p^3E_{p-3}\pmod{p^4}, \end{equation} and for any $r\in\Z^{+}$ we have \begin{equation}\label{sunconj2} \sum_{k=0}^{p^r-1}(3k+1)\frac{(\frac12)_k^3}{(1)_k^3}4^k\equiv p^r+\frac76p^{r+3}B_{p-3}\pmod{p^{r+4}}, \end{equation} where $E_0,E_1,E_2,\ldots$ are Euler numbers and $B_0,B_1,B_2,\ldots$ are Bernoulli numbers. Note that $ak+1=(1+1/a)_k/(1/a)_k$. Thus the sums in \eqref{sunconj1} and \eqref{sunconj2} are actually the truncated hypergeometric series. In 2012, using the WZ method (cf. \cite{PWZ}), Guillera and Zudilin \cite{GZ} proved that \begin{equation}\label{GZres} \sum_{k=0}^{p-1}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\eq\sum_{k=0}^{(p-1)/2}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\eq p\pmod{p^3}, \end{equation} which is \eqref{sunconj2} modulo $p^3$ with $r=1$. 
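As a quick sanity check (not part of any proof), both congruences in \eqref{GZres} can be verified for small primes with exact rational arithmetic, using $(\f12)_k/(1)_k=\binom{2k}{k}/4^k$; the helper names below are ours:

```python
from fractions import Fraction
from math import comb

def vp(x, p):
    """p-adic valuation of a rational number x (inf for x = 0)."""
    if x == 0:
        return float("inf")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def S(N):
    """sum_{k=0}^{N-1} (3k+1) (1/2)_k^3/(1)_k^3 * 4^k, computed exactly."""
    return sum(Fraction((3 * k + 1) * comb(2 * k, k) ** 3, 16 ** k)
               for k in range(N))

for p in [5, 7, 11]:
    assert vp(S(p) - p, p) >= 3             # full sum:  == p (mod p^3)
    assert vp(S((p + 1) // 2) - p, p) >= 3  # half sum:  == p (mod p^3)
```

For $p=5$, for instance, the half sum equals $285/32$ and the full sum $790335/8192$, whose differences from $5$ have $5$-adic valuation $3$ and $4$, respectively.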
In 2019, Mao and Zhang \cite{MaoZhang2019} confirmed \eqref{sunconj1} via a WZ pair found by Guillera and Zudilin \cite{GZ}. The reader is referred to \cite{Sun2019} for further conjectures involving the sums in \eqref{sunconj1} and \eqref{sunconj2}. Our first theorem confirms \eqref{sunconj2}. \begin{theorem}\label{mainth1} For any prime $p>3$ and integer $r\geq1$, we have \begin{equation}\label{mainth1eq1} \sum_{k=0}^{p^r-1}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\eq p^r+\f76p^{r+3}B_{p-3}\pmod{p^{r+4}}. \end{equation} \end{theorem} Using the same technique as in the proof of Theorem \ref{mainth1}, together with \eqref{sunconj1}, we can also prove that for any prime $p>3$ and positive integer $r$ $$ \sum_{k=0}^{(p^r-1)/2}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\eq p^r+2(-1)^{(p-1)/2}p^{r+2}E_{p-3}\pmod{p^{r+3}}. $$ It is worth mentioning that Guo and Schlosser \cite{GuoSchlosser2020} obtained two different $q$-analogues of \eqref{GZres}, and similarly to \eqref{GZres}, they conjectured that for any odd prime $p$ $$ \sum_{k=0}^{(p+1)/2}(3k-1)\f{(-\f12)_k^2(\f12)_k}{(1)_k^3}4^k\eq p\pmod{p^3}, $$ which was confirmed by the first author \cite{Wang2020}, who extended it to the modulus $p^4$ case. In 2011, as a refinement of the (C.2) supercongruence of Van Hamme \cite{vH1997}, Long \cite{Long2011} proved that \begin{equation}\label{longres} \sum_{k=0}^{(p-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\eq p\pmod{p^4}. \end{equation} Guo and Wang \cite{GuoWang2020} obtained a generalization of \eqref{longres}: for any prime $p>3$ and positive integer $r$, they proved that \begin{equation}\label{GuoWangres} \sum_{k=0}^{(p^r-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\eq p^r\pmod{p^{r+3}}. \end{equation} Our next theorem confirms a conjecture of Guo \cite[Conjecture 6.2]{Guo2020} which extends \eqref{GuoWangres} to the modulus $p^{r+4}$ case. \begin{theorem}\label{mainth2} Let $p>3$ be a prime and $r$ a positive integer.
Then \begin{equation}\label{mainth2eq1} \sum_{k=0}^{(p^r-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\eq p^r+\f76p^{r+3}B_{p-3}\pmod{p^{r+4}}. \end{equation} \end{theorem} Note that Guo \cite{Guo2020} proved that for any odd prime $p$ and positive integer $r$ $$ \sum_{k=0}^{(p^r-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\eq\sum_{k=0}^{p^r-1}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\pmod{p^{r+4}}. $$ Clearly, the two sums in \eqref{mainth1eq1} and \eqref{mainth2eq1} are the same modulo $p^{r+4}$. Guo \cite[Conjecture 6.3]{Guo2020} conjectured that this is also true for $p=3$. \begin{theorem}\label{mainth3} Let $p$ be an odd prime and $r$ a positive integer. Then \begin{equation}\label{mainth3eq1} \sum_{k=0}^{p^r-1}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\eq\sum_{k=0}^{(p^r-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\pmod{p^{r+4}}. \end{equation} \end{theorem} Note that Guo \cite[Conjecture 6.4]{Guo2020} also conjectured a $q$-analogue of \eqref{mainth3eq1}. Our main strategy for proving Theorems \ref{mainth1}--\ref{mainth3} is to use the WZ method (the reader is referred to \cite{GZ,PWZ,Zudilin2009} for further details and some well-known WZ pairs). In fact, the case $r=1$ is easy to deal with, since the denominators appearing in the WZ pairs are not divisible by $p$. However, the case $r\geq2$ is considerably more involved. In this case, we need to reduce the sums in \eqref{mainth1eq1} and \eqref{mainth2eq1} to the case $r=1$ via some complicated calculations. The paper is organized as follows. In both Sections 2 and 3, we shall first establish preliminary results which connect the case $r\geq2$ with the case $r=1$ and play an important role in the proof of Theorem \ref{mainth3}. Then we will use these preliminary results to prove Theorems \ref{mainth1} and \ref{mainth2}. At the end of Section 3, we shall give the proof of Theorem \ref{mainth3}.
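Before turning to the proofs, Theorems \ref{mainth1}--\ref{mainth3} can be checked numerically for small primes with exact rational arithmetic (a sanity check only; the Bernoulli values $B_2=\f16$ and $B_4=-\f1{30}$ are hardcoded, and the helper names are ours):

```python
from fractions import Fraction
from math import comb

B = {5: Fraction(1, 6), 7: Fraction(-1, 30)}  # B_{p-3} for p = 5, 7

def vp(x, p):
    """p-adic valuation of a rational x (inf for x = 0)."""
    if x == 0:
        return float("inf")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def S1(N):  # sum_{k=0}^{N-1} (3k+1) (1/2)_k^3/(1)_k^3 * 4^k
    return sum(Fraction((3 * k + 1) * comb(2 * k, k) ** 3, 16 ** k)
               for k in range(N))

def S2(N):  # sum_{k=0}^{(N-1)/2} (4k+1) (1/2)_k^4/(1)_k^4
    return sum(Fraction((4 * k + 1) * comb(2 * k, k) ** 4, 256 ** k)
               for k in range((N + 1) // 2))

for p in [5, 7]:
    for r in [1, 2]:
        rhs = p ** r + Fraction(7, 6) * p ** (r + 3) * B[p]
        assert vp(S1(p ** r) - rhs, p) >= r + 4         # Theorem 1
        assert vp(S2(p ** r) - rhs, p) >= r + 4         # Theorem 2
        assert vp(S1(p ** r) - S2(p ** r), p) >= r + 4  # Theorem 3
```

For $p=5$, $r=1$, for instance, the half sum equals $6105/4096$ and both sides agree with $5+\f76\cdot5^4\cdot\f16$ modulo $5^5$.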
\section{Proof of Theorem \ref{mainth1}} \setcounter{lemma}{0} \setcounter{theorem}{0} \setcounter{equation}{0}\setcounter{proposition}{0} \setcounter{Rem}{0}\setcounter{conjecture}{0} We first establish the following result. \begin{theorem}\label{sec2th} For any odd prime $p$ and positive integer $r$ we have $$ \f1{p^r}\sum_{k=0}^{p^r-1}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\eq\f1{p}\sum_{k=0}^{p-1}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\pmod{p^4}. $$ \end{theorem} Define the multiple harmonic sum (cf. \cite{Tauraso2018}) as follows: $$ H_n(s_1,s_2,\ldots,s_r)=\sum_{1\leq k_1< k_2<\cdots<k_r\leq n}\f{1}{k_1^{s_1}k_2^{s_2}\cdots k_r^{s_r}}, $$ where $n\geq r>0$ and each $s_i$ is a positive integer. Multiple harmonic sums have many congruence properties. For example, for any prime $p>s+2$, Sun \cite{SunZH2000} proved that \begin{equation}\label{harmonic1} H_{p-1}(s)\eq\begin{cases}\displaystyle-\f{s(s+1)}{2s+4}p^2B_{p-s-2}\pmod{p^3}\ &\t{if}\ 2\nmid s,\vspace{0.2cm}\\ \displaystyle\f{s}{s+1}pB_{p-s-1}\pmod{p^2}\ &\t{if}\ 2\mid s;\end{cases} \end{equation} and for any prime $p>5$, Kh. Hessami Pilehrood and T. Hessami Pilehrood \cite[Lemma 3]{Hessami2012} proved that \begin{equation}\label{harmonic2} H_{p-1}(1,2)\eq -\f{3H_{p-1}(1)}{p^2}-\f{5H_{p-1}(3)}{12}\pmod{p^3}.
\end{equation} \begin{lemma}\label{mainth1lem3} For any odd prime $p$ and positive integer $r$ we have \begin{gather*} p^{2r}\sum_{n=1}^{p^r-1}\f{1}{n^2}\binom{2n}{n}\eq p^2\sum_{n=1}^{p-1}\f{1}{n^2}\binom{2n}{n}\pmod{p^4},\\ p^{2r}\sum_{n=1}^{p^r-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}\eq p^2\sum_{n=1}^{p-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}\pmod{p^4},\\ p^{3r}\sum_{n=1}^{p^r-1}\f{H_{n-1}(1)^2}{n}\binom{2n}{n}\eq p^3\sum_{n=1}^{p-1}\f{H_{n-1}(1)^2}{n}\binom{2n}{n}\pmod{p^4},\\ p^{3r}\sum_{n=1}^{p^r-1}\f{H_{n-1}(2)}{n}\binom{2n}{n}\eq p^3\sum_{n=1}^{p-1}\f{H_{n-1}(2)}{n}\binom{2n}{n}\pmod{p^4},\\ p^{3r}\sum_{n=1}^{p^r-1}\f{H_{n-1}(1)}{n^2}\binom{2n}{n}\eq p^3\sum_{n=1}^{p-1}\f{H_{n-1}(1)}{n^2}\binom{2n}{n}\pmod{p^4},\\ p^{2r}\sum_{k=1}^{(p^r-1)/2}\f1{(2k-1)^2}\eq p^{2}\sum_{k=1}^{(p-1)/2}\f1{(2k-1)^2}\pmod{p^4},\\ p^{3r}\sum_{k=1}^{(p^r-1)/2}\f{H_{2k-2}(1)}{(2k-1)^2}\eq p^{3}\sum_{k=1}^{(p-1)/2}\f{H_{2k-2}(1)}{(2k-1)^2}\pmod{p^4},\\ p^{3r}\sum_{k=1}^{(p^r-1)/2}\f{H_{k-1}(1)}{(2k-1)^2}\eq p^{3}\sum_{k=1}^{(p-1)/2}\f{H_{k-1}(1)}{(2k-1)^2}\pmod{p^4}. \end{gather*} \end{lemma} \begin{proof} We only prove the second congruence, since the other ones can be shown in a similar way. We shall finish the proof by induction on $r$. Clearly, the second congruence holds for $r=1$. Assume that it holds for $r=k>1$. Now \begin{align*} p^{2k+2}\sum_{n=1}^{p^{k+1}-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}=&p^{2k+2}\sum_{\substack{n=1\\p\nmid n}}^{p^{k+1}-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}+p^{2k+2}\sum_{\substack{n=1\\p\mid n}}^{p^{k+1}-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}\\ \eq& p^{2k+2}\sum_{\substack{n=1\\p\mid n}}^{p^{k+1}-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}\\ =&p^{2k+1}\sum_{n=1}^{p^k-1}\f{H_{pn-1}(1)}{n}\binom{2pn}{pn}\pmod{p^4}, \end{align*} where we used the fact that $\ord_p(H_{n-1}(1))\geq -k$ for $1\leq n\leq p^{k+1}-1$. Note that $$ \ord_p(p^{2k+1}/n)\geq k+2\geq4\ \t{for}\ 1\leq n\leq p^k.
$$ Hence for $1\leq n\leq p^k$ we have $$ \f{p^{2k+1}H_{pn-1}(1)}{n}\eq\f{p^{2k+1}}{n}\sum_{\substack{j=1\\ p\mid j}}^{pn-1}\f1{j}=\f{p^{2k}H_{n-1}(1)}{n}\pmod{p^4} $$ and $$ \f{p^{2k}H_{n-1}(1)}{n}\eq0\pmod{p^2}. $$ By the well-known Kazandzidis congruence (cf. \cite[p. 380]{Robert00}) we have for any odd prime $p$ $$ \binom{2pn}{pn}\eq\binom{2n}{n}\pmod{p^2}. $$ Combining the above and by the induction hypothesis we arrive at $$ p^{2k+2}\sum_{n=1}^{p^{k+1}-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}\eq p^{2k}\sum_{n=1}^{p^k-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}\eq p^{2}\sum_{n=1}^{p-1}\f{H_{n-1}(1)}{n}\binom{2n}{n}\pmod{p^4}. $$ We are done. \end{proof} \medskip \noindent{\it Proof of Theorem \ref{sec2th}}. As in \cite{GZ}, we shall use the following WZ pair $$ F(n,k)=(3n+2k+1)\f{(\f12)_n(\f12+k)_n^2}{(1)_n^3}4^n $$ and $$ G(n,k)=-\f{(\f12)_n(\f12+k)_{n-1}^2}{(1)_{n-1}^3}4^n. $$ Then we have $$ F(n,k-1)-F(n,k)=G(n+1,k)-G(n,k) $$ and $$ \sum_{k=0}^{p^r-1}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k=\sum_{n=0}^{p^r-1}F(n,0). $$ Clearly, \begin{align*} \sum_{n=0}^{p^r-1}F(n,0)=&\sum_{n=0}^{p^r-1}\sum_{k=1}^{(p^r-1)/2}\big(F(n,k-1)-F(n,k)\big)+\sum_{n=0}^{p^r-1}F\l(n,\f{p^r-1}{2}\r)\\ =&\sum_{k=1}^{(p^r-1)/2}\sum_{n=0}^{p^r-1}\big(G(n+1,k)-G(n,k)\big)+\sum_{n=0}^{p^r-1}F\l(n,\f{p^r-1}{2}\r)\\ =&\sum_{k=1}^{(p^r-1)/2}G(p^r,k)+\sum_{n=0}^{p^r-1}F\l(n,\f{p^r-1}{2}\r), \end{align*} where the last follows from the fact that $G(0,k)=0$. It suffices to show \begin{equation}\label{sec2thkey1} \f1{p^r}\sum_{n=0}^{p^r-1}F\l(n,\f{p^r-1}{2}\r)\eq\f{1}{p}\sum_{n=0}^{p-1}F\l(n,\f{p-1}{2}\r)\pmod{p^4} \end{equation} and \begin{equation}\label{sec2thkey2} \f{1}{p^r}\sum_{k=1}^{(p^r-1)/2}G(p^r,k)\eq\f{1}{p}\sum_{k=1}^{(p-1)/2}G(p,k)\pmod{p^4}. \end{equation} We first consider \eqref{sec2thkey1}. 
It is easy to see that $$ F\l(n,\f{p^r-1}{2}\r)=\begin{cases}\displaystyle\f{p^{2r}(3n+p^r)}{4n^2}\binom{2n}{n}\f{(1+\f{p^r}2)_{n-1}^2}{(1)_{n-1}^2}\ &\mbox{if}\ n\geq1,\vspace{0.2cm}\\ \displaystyle p^r\ &\mbox{if}\ n=0. \end{cases} $$ Therefore, $$ \f{1}{p^r}\sum_{n=1}^{p^r-1}F\l(n,\f{p^r-1}{2}\r)=\f{3p^{r}}{4}\sum_{n=1}^{p^r-1}\f{(1+\f{p^r}2)_{n-1}^2}{(1)_{n-1}^2}\cdot\f{\binom{2n}{n}}{n}+\f{p^{2r}}{4}\sum_{n=1}^{p^r-1}\f{(1+\f{p^r}2)_{n-1}^2}{(1)_{n-1}^2}\cdot\f{\binom{2n}{n}}{n^2}. $$ For $1\leq n\leq p^r-1$, it is clear that $\ord_p(n)\leq r-1$. Note that $$ \f{(1+\f{p^r}2)_{n-1}}{(1)_{n-1}}=1+\f{p^r}2H_{n-1}(1)+\f{p^{2r}}4H_{n-1}(1,1)+\cdots $$ and $$ \ord_p\bigg(H_{n-1}(\overbrace{1,1,\ldots,1}^{d\t{'s}\ 1})\bigg)\geq -d(r-1). $$ Thus we have $$ \f{(1+\f{p^r}2)_{n-1}}{(1)_{n-1}}\eq1+\f{p^r}2H_{n-1}(1)+\f{p^{2r}}8H_{n-1}(1)^2-\f{p^{2r}}8H_{n-1}(2)\pmod{p^3}, $$ where we have used $H_{n-1}(1,1)=(H_{n-1}(1)^2-H_{n-1}(2))/2$. Now by Lemma \ref{mainth1lem3}, \begin{align}\label{sec2thkey3} \f{1}{p^r}\sum_{n=1}^{p^r-1}F\l(n,\f{p^r-1}{2}\r)\eq&\f{3p^{r}}4\sum_{n=1}^{p^r-1}\f{\binom{2n}{n}}{n}\l(1+\f{p^r}2H_{n-1}(1)+\f{p^{2r}}8H_{n-1}(1)^2-\f{p^{2r}}8H_{n-1}(2)\r)^2\notag\\ &+\f{p^{2r}}4\sum_{n=1}^{p^r-1}\f{\binom{2n}{n}}{n^2}\l(1+\f{p^r}2H_{n-1}(1)\r)^2\notag\\ \eq&\f{3p^{r}}4\sum_{n=1}^{p^r-1}\f{\binom{2n}{n}}{n}\l(1+p^rH_{n-1}(1)+\f{p^{2r}}2H_{n-1}(1)^2-\f{p^{2r}}4H_{n-1}(2)\r)\notag\\ &+\f{p^{2r}}4\sum_{n=1}^{p^r-1}\f{\binom{2n}{n}}{n^2}\big(1+p^rH_{n-1}(1)\big)\notag\\ \eq&\f{3p}4\sum_{n=1}^{p-1}\f{\binom{2n}{n}}{n}\l(1+pH_{n-1}(1)+\f{p^2}2H_{n-1}(1)^2-\f{p^2}4H_{n-1}(2)\r)\notag\\ &+\f{p^{2}}4\sum_{n=1}^{p-1}\f{\binom{2n}{n}}{n^2}\big(1+pH_{n-1}(1)\big)\pmod{p^4}, \end{align} where in the last step we have used the fact (cf. 
\cite[Theorem 1.3]{SunTauraso2010}) that \begin{equation}\label{suntauraso} p^{r-1}\sum_{k=1}^{p^r-1}\f{\binom{2k}{k}}{k}\eq\begin{cases}\displaystyle2\pmod{p^3}\quad&\t{if}\ p=2,\vspace{0.2cm}\\ \displaystyle5\pmod{p^3}\quad&\t{if}\ p=3,\vspace{0.2cm}\\ \displaystyle\f89p^2B_{p-3}\pmod{p^3}\quad&\t{otherwise}.\end{cases} \end{equation} Note that in \eqref{sec2thkey3} we actually obtain a result which is independent of $r$. Thus we have proved \eqref{sec2thkey1}. Now we consider \eqref{sec2thkey2}. Clearly, $$ \f{1}{p^r}\sum_{k=1}^{(p^r-1)/2}G(p^r,k)= -\f{4p^{2r}}{16^{p^r}}\binom{2p^r}{p^r}^3\sum_{k=1}^{(p^r-1)/2}\f{(\f12+p^r)_{k-1}^2}{(2k-1)^2(\f12)_{k-1}^2}. $$ By a similar argument as above, $\ord_p(2k-1)\leq r-1$ and $$ \f{(\f12+p^r)_{k-1}}{(\f12)_{k-1}}\eq 1+2p^rH_{2k-2}(1)-p^rH_{k-1}(1)\pmod{p^2}. $$ Therefore, by Lemma \ref{mainth1lem3} we have \begin{align*} \f{1}{p^r}\sum_{k=1}^{(p^r-1)/2}G(p^r,k)\eq&-\f{4p^{2r}}{16^{p^r}}\binom{2p^r}{p^r}^3\sum_{k=1}^{(p^r-1)/2}\f{1+4p^rH_{2k-2}(1)-2p^rH_{k-1}(1)}{(2k-1)^2}\\ \eq&-\f{4p^2}{16^{p^r}}\binom{2p^r}{p^r}^3\sum_{k=1}^{(p-1)/2}\f{1+4pH_{2k-2}(1)-2pH_{k-1}(1)}{(2k-1)^2}\pmod{p^4}. \end{align*} Note that $$ \sum_{k=1}^{(p-1)/2}\f{1}{(2k-1)^2}=H_{p-1}(2)-\f14H_{(p-1)/2}(2)\eq0\pmod{p}. $$ Hence we have $$ \sum_{k=1}^{(p-1)/2}\f{1+4pH_{2k-2}(1)-2pH_{k-1}(1)}{(2k-1)^2}\eq0\pmod{p}. $$ Then from the Kazandzidis congruence and Fermat's little theorem, we immediately obtain \begin{equation}\label{sec2thkey4} \f{1}{p^r}\sum_{k=1}^{(p^r-1)/2}G(p^r,k)\eq-2p^2\binom{2p}{p}^3\sum_{k=1}^{(p-1)/2}\f{1+4pH_{2k-2}(1)-2pH_{k-1}(1)}{(2k-1)^2}\pmod{p^4}. \end{equation} This proves \eqref{sec2thkey2} since the right-hand side of the above congruence is independent of $r$. The proof of Theorem \ref{sec2th} is now complete.\qed We are now in a position to prove Theorem \ref{mainth1}. We need the following lemmas. 
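As a numerical aside, the Sun--Tauraso congruence \eqref{suntauraso} invoked above (in its branch for $p>3$) can be checked for small primes with exact rational arithmetic (the helper names below are ours):

```python
from fractions import Fraction
from math import comb

B = {5: Fraction(1, 6), 7: Fraction(-1, 30)}  # B_{p-3} for p = 5, 7

def vp(x, p):
    """p-adic valuation of a rational x (inf for x = 0)."""
    if x == 0:
        return float("inf")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

for p in [5, 7]:
    for r in [1, 2]:
        s = p ** (r - 1) * sum(Fraction(comb(2 * k, k), k)
                               for k in range(1, p ** r))
        # p^{r-1} * sum_{k=1}^{p^r-1} C(2k,k)/k == (8/9) p^2 B_{p-3} (mod p^3)
        assert vp(s - Fraction(8, 9) * p ** 2 * B[p], p) >= 3
```

For $p=5$, $r=1$, the sum equals $175/6$, and its difference from $\f89\cdot25\cdot\f16=\f{100}{27}$ is $\f{4125}{162}$, with $5$-adic valuation exactly $3$.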
\begin{lemma}\label{mainth1lem1} For any prime $p>3$ we have \begin{gather} \label{mainth1lem1eq1}\sum_{k=1}^{p-1}\f{1}{k^3}\binom{2k}{k}\eq-\f{2H_{p-1}(1)}{p^2}\pmod{p},\\ \label{mainth1lem1eq2}\sum_{k=1}^{p-1}\f{H_k(2)}{k}\binom{2k}{k}\eq \f{2H_{p-1}(1)}{3p^2}\pmod{p},\\ \label{mainth1lem1eq3}\sum_{k=1}^{p-1}\l(\f2{k^2}-\f{3H_k(1)}{k}\r)\binom{2k}{k}\eq \f{2H_{p-1}(1)}{p}\pmod{p^2}. \end{gather} \end{lemma} \begin{proof} \eqref{mainth1lem1eq1} was originally conjectured by Sun \cite[Conjecture 1.1]{Sun2011} and confirmed by Kh. Hessami Pilehrood and T. Hessami Pilehrood \cite{Hessami2012}. One may consult \cite[Conjecture 1.1]{Sun2011} for the modulus $p^4$ case of \eqref{mainth1lem1eq1}. We can directly verify \eqref{mainth1lem1eq2} and \eqref{mainth1lem1eq3} for $p=5$. By \cite{Tauraso2018} we know these two congruences hold for $p>5$. \end{proof} \begin{lemma}\label{mainth1lem2} For any prime $p>3$ we have $$ \sum_{k=1}^{p-1}\l(\f{3H_k(1)^2}{k}-\f{4H_k(1)}{k^2}\r)\binom{2k}{k}\eq\f{6H_{p-1}(1)}{p^2}\pmod{p}. $$ \end{lemma} \begin{proof} As in \cite{Tauraso2018}, consider the WZ pair $$ F(n,k)=\f1{k}\binom{n+k}{k}\quad \t{and}\quad G(n,k)=\f{k}{(n+1)^2}\binom{n+k}{k}. $$ Then for any $n,k\in\N$ we have $$ F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k). $$ Let $S_n=\sum_{k=1}^nF(n,k)H_k(1)^2$. Then \begin{align}\label{deltaS} S_{n+1}-S_n=&\sum_{k=1}^{n+1}F(n+1,k)H_k(1)^2-\sum_{k=1}^nF(n,k)H_k(1)^2\notag\\ =&F(n+1,n+1)H_{n+1}(1)^2+\sum_{k=1}^n(F(n+1,k)-F(n,k))H_k(1)^2\notag\\ =&F(n+1,n+1)H_{n+1}(1)^2+\sum_{k=1}^n(G(n,k+1)-G(n,k))H_k(1)^2\notag\\ =&F(n+1,n+1)H_{n+1}(1)^2+\sum_{k=1}^n\bigg(G(n,k+1)H_k(1)^2-G(n,k)H_{k-1}^2\notag\\ &-\f{2G(n,k)H_{k-1}}{k}-\f{G(n,k)}{k^2}\bigg)\notag\\ =&\f{\binom{2n+2}{n+1}H_{n+1}(1)^2}{n+1}+\f{\binom{2n+2}{n+1}H_{n}(1)^2}{2n+2}-2\sum_{k=1}^n\f{G(n,k)H_k(1)}{k}+\sum_{k=1}^n\f{G(n,k)}{k^2}. 
\end{align} By \cite[(16)]{Tauraso2018} we have \begin{equation}\label{pf1eq1} \sum_{k=1}^n\f{G(n,k)}{k^2}=\f{1}{(n+1)^2}\sum_{k=1}^n\f1{k}\binom{n+k}{k}=\f{1}{(n+1)^2}\l(\f32\sum_{k=1}^n\f{\binom{2k}{k}}{k}-H_{n}(1)\r). \end{equation} From \cite[(1.49)]{G} we know that $$ \sum_{k=0}^n\binom{x+k}{k}=\binom{x+n+1}{n}. $$ Therefore, \begin{align}\label{pf1eq2} \sum_{k=1}^n\f{G(n,k)H_k(1)}{k}=&\f{1}{(n+1)^2}\sum_{k=1}^n\binom{n+k}{k}\sum_{j=1}^k\f1{j}=\f{1}{(n+1)^2}\sum_{j=1}^n\f1{j}\sum_{k=j}^{n}\binom{n+k}{k}\notag\\ =&\f{1}{(n+1)^2}\sum_{j=1}^n\f1{j}\l(\binom{2n+1}{n}-\binom{n+j}{j-1}\r)\notag\\ =&\f{1}{(n+1)^2}\l(\f12\binom{2n+2}{n+1}H_n(1)-\f{1}{n+1}\sum_{j=1}^n\binom{n+j}{j}\r)\notag\\ =&\f1{2(n+1)^2}\binom{2n+2}{n+1}H_n(1)-\f{1}{2(n+1)^3}\binom{2n+2}{n+1}+\f1{(n+1)^3}. \end{align} Substituting \eqref{pf1eq1} and \eqref{pf1eq2} into \eqref{deltaS} we have \begin{align*} S_{n+1}-S_n=&\f{3\binom{2n+2}{n+1}H_{n+1}(1)^2}{2n+2}-\f{2\binom{2n+2}{n+1}H_{n+1}(1)}{(n+1)^2}+\f{5\binom{2n+2}{n+1}}{2(n+1)^3}-\f{H_{n+1}(1)}{(n+1)^2}\\ &-\f{1}{(n+1)^3}+\f{3}{2(n+1)^2}\sum_{k=1}^n\f{\binom{2k}{k}}{k}. \end{align*} Now summing both sides over $n$ from $0$ to $p-2$ and noting that $S_0=0$ we have \begin{align}\label{pf1eq3} S_{p-1}=&\f32\sum_{n=1}^{p-1}\f{\binom{2n}{n}H_n(1)^2}{n}-2\sum_{n=1}^{p-1}\f{\binom{2n}{n}H_n(1)}{n^2}+\f52\sum_{n=1}^{p-1}\f{\binom{2n}{n}}{n^3}-\sum_{n=1}^{p-1}\f{H_n(1)}{n^2}-H_{p-1}(3)\notag\\ &+\f32\sum_{n=1}^{p-1}\f1{n^2}\sum_{k=1}^{n-1}\f{\binom{2k}{k}}{k}. \end{align} Clearly, for $1\leq k\leq p-1$, $$ F(p-1,k)=\f1{k}\binom{p-1+k}{k}\eq\f1{k}\binom{k-1}{k}=0\pmod{p}. $$ Thus we have \begin{equation}\label{pf1eq4} S_{p-1}=\sum_{k=1}^{p-1}F(p-1,k)H_k(1)^2\eq0\pmod{p}. \end{equation} In view of \eqref{harmonic1} and \eqref{harmonic2} we have \begin{equation}\label{pf1eq5} \sum_{n=1}^{p-1}\f{H_n(1)}{n^2}\eq H_{p-1}(1,2)\eq-\f{3H_{p-1}(1)}{p^2}\pmod{p}.
\end{equation} Note that \begin{align}\label{pf1eq6} \sum_{n=1}^{p-1}\f1{n^2}\sum_{k=1}^{n-1}\f{\binom{2k}{k}}{k}=\sum_{k=1}^{p-1}\f{\binom{2k}{k}}{k}\sum_{n=k}^{p-1}\f{1}{n^2}-\sum_{n=1}^{p-1}\f{\binom{2n}{n}}{n^3}\eq-\sum_{k=1}^{p-1}\f{\binom{2k}{k}H_{k}(2)}{k}\pmod{p}. \end{align} Substituting \eqref{pf1eq4}--\eqref{pf1eq6} into \eqref{pf1eq3} and using \eqref{mainth1lem1eq1} and \eqref{mainth1lem1eq2} we immediately obtain the desired result. \end{proof} \begin{lemma}\label{harmonickey} Let $p>3$ be a prime. Then \begin{gather} \label{harmonickeyeq1}\sum_{k=1}^{(p-1)/2}\f{1}{(2k-1)^2}\eq\f1{12}pB_{p-3}\pmod{p^2},\\ \label{harmonickeyeq2}\sum_{k=1}^{(p-1)/2}\f{H_{2k-2}(1)}{(2k-1)^2}\eq\f38B_{p-3}\pmod{p},\\ \label{harmonickeyeq3}\sum_{k=1}^{(p-1)/2}\f{H_{k-1}(1)}{(2k-1)^2}\eq\f78B_{p-3}\pmod{p}. \end{gather} \end{lemma} \begin{proof} By \cite[Corollaries 5.1 and 5.2]{SunZH2000} we have $$ H_{p-1}(2)\eq \f23pB_{p-3}\pmod{p^2}\quad\t{and}\quad H_{(p-1)/2}(2)\eq \f73pB_{p-3}\pmod{p^2}. $$ It follows that $$ \sum_{k=1}^{(p-1)/2}\f{1}{(2k-1)^2}=H_{p-1}(2)-\f14H_{(p-1)/2}(2)\eq \f1{12}pB_{p-3}\pmod{p^2}. $$ This proves \eqref{harmonickeyeq1}. Clearly, \begin{align*} \sum_{k=1}^{(p-1)/2}\f{H_{2k-2}(1)}{(2k-1)^2}=\sum_{k=1}^{(p-1)/2}\f{H_{2((p+1)/2-k)-2}(1)}{(2((p+1)/2-k)-1)^2}\eq\f{1}{4}\sum_{k=1}^{(p-1)/2}\f{H_{2k}(1)}{k^2}\pmod{p}, \end{align*} where we used the fact that $H_{p-1-k}(1)\eq H_{k}(1)\pmod{p}$ for any $0\leq k\leq p-1$. In view of \cite[Lemma 2.4]{MaoWang2019}, we have \begin{equation}\label{maowangres} \sum_{k=1}^{(p-1)/2}\f{H_{2k}(1)}{k^2}\eq\f32B_{p-3}\pmod{p}. \end{equation} This proves \eqref{harmonickeyeq2}. Note that $$ H_{(p-1)/2-k}(1)=H_{(p-1)/2}(1)-\sum_{j=1}^k\f{1}{(p+1)/2-j}\eq H_{(p-1)/2}(1)+2H_{2k}(1)-H_k(1)\pmod{p}.
$$ Therefore, \begin{align*} &\sum_{k=1}^{(p-1)/2}\f{H_{k-1}(1)}{(2k-1)^2}=\sum_{k=1}^{(p-1)/2}\f{H_{(p-1)/2-k}(1)}{(p-2k)^2}\\ \eq&\f14\sum_{k=1}^{(p-1)/2}\f{H_{(p-1)/2}(1)+2H_{2k}(1)-H_k(1)}{k^2}\\ \eq&\f12\sum_{k=1}^{(p-1)/2}\f{H_{2k}(1)}{k^2}-\f14H_{(p-1)/2}(3)-\f14H_{(p-1)/2}(1,2)\pmod{p}. \end{align*} It is known (cf. \cite{Hessami2014}) that $$ H_{(p-1)/2}(3)\eq-2B_{p-3}\pmod{p}\quad\t{and}\quad H_{(p-1)/2}(1,2)\eq \f32B_{p-3}\pmod{p}. $$ Then \eqref{harmonickeyeq3} follows from the above two congruences and \eqref{maowangres} immediately. \end{proof} \medskip \noindent{\it Proof of Theorem \ref{mainth1}}. By \eqref{sec2thkey3} and \eqref{sec2thkey4} we have \begin{align*} &\f{1}{p^r}\sum_{k=0}^{p^r-1}(3k+1)\f{(\f12)_k^3}{(1)_k^3}4^k\\ \eq&1+\f14p^2\l(3\sum_{n=1}^{p-1}\f{\binom{2n}{n}H_n(1)}{n}-2\sum_{n=1}^{p-1}\f{\binom{2n}{n}}{n^2}\r)+\f18p^3\l(3\sum_{n=1}^{p-1}\f{\binom{2n}{n}H_n(1)^2}{n}-4\sum_{n=1}^{p-1}\f{\binom{2n}{n}H_n(1)}{n^2}\r)\\ &+\f{3}{4}p\sum_{n=1}^{p-1}\f{\binom{2n}{n}}{n}+\f5{16}p^3\sum_{n=1}^{p-1}\f{\binom{2n}{n}}{n^3}-\f3{16}p^3\sum_{n=1}^{p-1}\f{\binom{2n}{n}H_n(2)}{n}-2p^2\sum_{k=1}^{(p-1)/2}\f{1}{(2k-1)^2}\\ &-8p^3\sum_{k=1}^{(p-1)/2}\f{H_{2k-2}(1)}{(2k-1)^2}+4p^3\sum_{k=1}^{(p-1)/2}\f{H_{k-1}(1)}{(2k-1)^2}\pmod{p^4}. \end{align*} Then by \eqref{harmonic1}, \eqref{suntauraso} and Lemmas \ref{mainth1lem1}--\ref{harmonickey} we have $$ \f{1}{p^r}\sum_{n=0}^{p^r-1}(3n+1)\f{(\f12)_n^3}{(1)_n^3}4^n\eq1+\f76p^3B_{p-3}\pmod{p^4}. $$ The proof of Theorem \ref{mainth1} is now complete.\qed \section{Proof of Theorem \ref{mainth2}} \setcounter{lemma}{0} \setcounter{theorem}{0} \setcounter{equation}{0}\setcounter{proposition}{0} \setcounter{Rem}{0}\setcounter{conjecture}{0} Similarly as in Section 2, we first establish the following result. 
\begin{theorem}\label{sec3th} For any odd prime $p$ and positive integer $r$ we have $$ \f{1}{p^r}\sum_{k=0}^{(p^r-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\eq \f{1}{p}\sum_{k=0}^{(p-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\pmod{p^4}. $$ \end{theorem} \begin{lemma}\label{mainth2lem1} For any odd prime $p$ and positive integer $r$ we have \begin{gather} \label{mainth2lem1eq1}p^rH_{p^r-1}(1)\eq pH_{p-1}(1)\pmod{p^4},\\ \label{mainth2lem1eq2}p^{2r}H_{p^r-1}(1,1)\eq p^2H_{p-1}(1,1)\pmod{p^4},\\ \label{mainth2lem1eq3}p^{3r}H_{p^r-1}(1,1,1)\eq p^3H_{p-1}(1,1,1)\pmod{p^4}. \end{gather} \end{lemma} \begin{proof} We first prove \eqref{mainth2lem1eq1}. Clearly, it holds for $r=1$. Assume that it holds for $r=k>1$. Then for $r=k+1$ we have \begin{align*} p^{k+1}H_{p^{k+1}-1}(1)=&p^{k+1}\sum_{\substack{j=1\\ p\mid j}}^{p^{k+1}-1}\f1{j}+p^{k+1}\sum_{\substack{j=1\\ p\nmid j}}^{p^{k+1}-1}\f1{j}=p^k\sum_{j=1}^{p^k-1}\f1{j}+p^{k+1}\sum_{l=0}^{p^k-1}\sum_{j=1}^{p-1}\f{1}{lp+j}\\ \eq&p^k\sum_{j=1}^{p^k-1}\f1{j}+p^{k+1}\sum_{l=0}^{p^k-1}\sum_{j=1}^{p-1}\l(\f{lp}{j^2}-\f1{j}\r)\\ =& p^kH_{p^k-1}(1)+\f{p^{2k+2}(p^k-1)}{2}H_{p-1}(2)-p^{2k+1}H_{p-1}(1)\\ \eq&p^kH_{p^k-1}(1)\pmod{p^4}. \end{align*} By the induction hypothesis we obtain \eqref{mainth2lem1eq1}. \eqref{mainth2lem1eq2} and \eqref{mainth2lem1eq3} can be proved similarly by noting that $$ H_{p^r-1}(1,1)=\f{H_{p^r-1}(1)^2-H_{p^r-1}(2)}{2} $$ and $$ H_{p^r-1}(1,1,1)=\f{H_{p^r-1}(1)^3-3H_{p^r-1}(1)H_{p^r-1}(2)+2H_{p^r-1}(3)}{6}. $$ \end{proof} \begin{lemma}\label{mainth2lem2} For any odd prime $p$ and positive integer $r$ we have $$ p^{3r}\sum_{k=0}^{(p^r-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\eq p^{3}\sum_{k=0}^{(p-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\pmod{p^4}. $$ \end{lemma} \begin{proof} Clearly, the congruence holds for $r=1$. Now suppose that $r\geq2$. In view of the proof of \cite[Theorem 1.1]{PanSun2014}, for $0<k<p^r/2$ we have \begin{equation}\label{key} \f{p^r}{k\binom{2k}{k}}\eq0\pmod{p}. 
\end{equation} Thus we obtain \begin{align*} p^{3r}\sum_{k=0}^{(p^r-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}=&p^{3r}\sum_{\substack{k=0\\ p\mid 2k+1}}^{(p^r-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}+p^{3r}\sum_{\substack{k=0\\ p\nmid 2k+1}}^{(p^r-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\\ \eq&p^{3r}\sum_{\substack{k=0\\ p\mid 2k+1}}^{(p^r-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\pmod{p^4}, \end{align*} since by \eqref{key}, $\ord_p(p^{3r}/\binom{2k}{k}^2)\geq r+2\geq 4$. It is routine to check that \begin{align*} p^{3r}\sum_{\substack{k=0\\ p\mid 2k+1}}^{(p^r-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}=p^{3r}\sum_{k=0}^{(p^{r-1}-3)/2}\f{4^{(2k+1)p-1}}{(2k+1)^3p^3\binom{(2k+1)p-1}{((2k+1)p-1)/2}^2}. \end{align*} Now \begin{align*} \binom{(2k+1)p-1}{((2k+1)p-1)/2}=\f{\Gamma((2k+1)p)}{\Gamma(\f{(2k+1)p+1}{2})^2}=-\f{\Gamma_p((2k+1)p)}{\Gamma_p(\f{(2k+1)p+1}{2})^2}\binom{2k}{k}, \end{align*} where $\Gamma_p(x)$ is the $p$-adic Gamma function (see \cite[Chapter 7]{Robert00} for properties of this function). By \eqref{key}, for $0\leq k\leq (p^{r-1}-3)/2$ we have $$ \f{p^{3r-3}}{(2k+1)^3\binom{2k}{k}^2}=\f{4p^{3r-3}}{(2k+1)(k+1)^2\binom{2k+2}{k+1}^2}\eq0\pmod{p^3}. $$ By Fermat's little theorem we have $$ 4^{(2k+1)p-1}=16^{kp}\cdot 4^{p-1}\eq16^k\pmod{p}. $$ Furthermore, $$ \f{\Gamma_p((2k+1)p)^2}{\Gamma_p(\f{(2k+1)p+1}{2})^4}\eq\f{\Gamma_p(0)^2}{\Gamma_p(\f12)^4}=1\pmod{p}. $$ Combining the above we arrive at \begin{align*} p^{3r}\sum_{k=0}^{(p^r-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\eq p^{3r-3}\sum_{k=0}^{(p^{r-1}-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\pmod{p^4}. \end{align*} Then the desired result follows from induction on $r$. \end{proof} \medskip \noindent{\it Proof of Theorem \ref{sec3th}}. We need the following pair which appeared in \cite{WangSD2018} \begin{gather*} F(n,k)=(-1)^k(4n+1)\f{(\f12)_n^3(\f12)_{n+k}}{(1)_n^3(1)_{n-k}(\f12)_k^2},\\ G(n,k)=(-1)^{k-1}\f{4(\f12)_n^3(\f12)_{n+k-1}}{(1)_{n-1}^3(1)_{n-k}(\f12)_k^2}. 
\end{gather*} One may easily check that for any $n\in\N$ and $k\in\Z^{+}$ $$ (2k-1)F(n,k-1)-2kF(n,k)=G(n+1,k)-G(n,k). $$ Note that such a pair is not a WZ pair. Nevertheless, it is just as useful as classical WZ pairs. For $m\in\N$, by induction on $m$ and noting that $$ F(n,k)=0\quad\t{for}\ k>n $$ and $$ G(0,k)=0\quad\t{for}\ k>0, $$ we have \begin{equation}\label{sec3id1} \sum_{n=0}^{m}F(n,0)=\f{4^m}{\binom{2m}{m}}F(m,m)+\sum_{k=1}^m\f{4^{k-1}G(m+1,k)}{(2k-1)\binom{2k-2}{k-1}}. \end{equation} Taking $m=(p^r-1)/2$ in \eqref{sec3id1} we have \begin{align*} \sum_{k=0}^{(p^r-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}=&\sum_{n=0}^{(p^r-1)/2}F(n,0)\\ =&\f{2^{p^r-1}}{\binom{p^r-1}{(p^r-1)/2}}F\l(\f{p^r-1}{2}\r)+\sum_{k=1}^{(p^r-1)/2}\f{4^{k-1}}{(2k-1)\binom{2k-2}{k-1}}G\l(\f{p^r+1}{2},k\r). \end{align*} It suffices to show \begin{equation}\label{sec3thkey1} \f{1}{p^r}\cdot\f{2^{p^r-1}}{\binom{p^r-1}{(p^r-1)/2}}F\l(\f{p^r-1}{2}\r)\eq \f{1}{p}\cdot\f{2^{p-1}}{\binom{p-1}{(p-1)/2}}F\l(\f{p-1}{2}\r)\pmod{p^4} \end{equation} and \begin{equation}\label{sec3thkey2} \f{1}{p^r}\cdot\sum_{k=1}^{(p^r-1)/2}\f{4^{k-1}}{(2k-1)\binom{2k-2}{k-1}}G\l(\f{p^r+1}{2},k\r)\eq \f{1}{p}\cdot\sum_{k=1}^{(p-1)/2}\f{4^{k-1}}{(2k-1)\binom{2k-2}{k-1}}G\l(\f{p+1}{2},k\r)\pmod{p^4}. \end{equation} We first consider \eqref{sec3thkey1}. Note that $$ \f{(1/2)_k}{(1)_k}=\f{\binom{2k}{k}}{4^k}. $$ Thus we have \begin{align*} \f{1}{p^r}\cdot\f{2^{p^r-1}}{\binom{p^r-1}{(p^r-1)/2}}F\l(\f{p^r-1}{2}\r)=&\f{1}{p^r}\cdot\f{(-1)^{(p^r-1)/2}2^{p^r-1}(2p^r-1)}{\binom{p^r-1}{(p^r-1)/2}}\cdot\f{(\f12)_{(p^r-1)/2}^3(\f12)_{p^r-1}}{(1)_{(p^r-1)/2}^3(\f12)_{(p^r-1)/2}^2}\\ =&\binom{\f12p^r-1}{p^r-1}\binom{2p^r-1}{p^r-1}.
\end{align*} For any $p$-adic integer $a$, it is easy to see that \begin{align}\label{morley} &\binom{ap^r-1}{p^r-1}=\f{(1+(a-1)p^r)_{p^r-1}}{(1)_{p^r-1}}\notag\\ \eq&1+(a-1)p^rH_{p^r-1}(1)+(a-1)^2p^{2r}H_{p^r-1}(1,1)+(a-1)^3p^{3r}H_{p^r-1}(1,1,1)\notag\\ \eq&1+(a-1)pH_{p-1}(1)+(a-1)^2p^2H_{p-1}(1,1)+(a-1)^3p^3H_{p-1}(1,1,1)\notag\\ \eq&\binom{ap-1}{p-1}\pmod{p^4}, \end{align} where we have used Lemma \ref{mainth2lem1}. Therefore, \begin{equation}\label{Fkey} \f{1}{p^r}\cdot\f{2^{p^r-1}}{\binom{p^r-1}{(p^r-1)/2}}F\l(\f{p^r-1}{2},\f{p^r-1}{2}\r)\eq\binom{\f12p-1}{p-1}\binom{2p-1}{p-1}\pmod{p^4}. \end{equation} Then \eqref{sec3thkey1} follows by noting that the right-hand side of the above congruence is independent of $r$. Below we consider \eqref{sec3thkey2}. It is easy to see that \begin{align*} &\f{1}{p^r}\cdot\sum_{k=1}^{(p^r-1)/2}\f{4^{k-1}}{(2k-1)\binom{2k-2}{k-1}}G\l(\f{p^r+1}{2},k\r)\\ =&\f{1}{p^r}\sum_{k=1}^{(p^r-1)/2}\f{4^{k-1}}{(2k-1)\binom{2k-2}{k-1}}\cdot\f{4(-1)^{k-1}(\f12)_{(p^r+1)/2}^3(\f12)_{(p^r-1)/2+k}}{(1)_{(p^r-1)/2}^3(1)_{(p^r+1)/2-k}(\f12)_k^2}. \end{align*} Note that $$ \l(\f12\r)_{(p^r-1)/2+k}=\l(\f12\r)_{(p^r-1)/2}\l(\f{p^r}{2}\r)_k=\f{p^r}{2}\l(\f12\r)_{(p^r-1)/2}\l(1+\f{p^r}{2}\r)_{k-1} $$ and $$ (1)_{(p^r+1)/2-k}=(-1)^{k-1}\f{(1)_{(p^r-1)/2}}{(\f12-\f{p^r}2)_{k-1}}. $$ Therefore, \begin{align*} &\f{1}{p^r}\cdot\sum_{k=1}^{(p^r-1)/2}\f{4^{k-1}}{(2k-1)\binom{2k-2}{k-1}}G\l(\f{p^r+1}{2},k\r)\\ =& \f{p^{3r}}{16^{p^r-1}}\binom{p^r-1}{(p^r-1)/2}^4\sum_{k=0}^{(p^r-3)/2}\f{64^k(1+\f{p^r}2)_k(\f12-\f{p^r}{2})_k}{(2k+1)^3\binom{2k}{k}^3(1)_k^2}. \end{align*} By \eqref{key} we have $$ \f{p^{3r}}{(2k+1)^3\binom{2k}{k}^3}=\f{8p^{3r}}{(k+1)^3\binom{2k+2}{k+1}^3}\eq0\pmod{p^3}.
$$ Thus by Fermat's little theorem, \eqref{morley} and Lemma \ref{mainth2lem2} we further obtain \begin{align}\label{Gkey} \f{1}{p^r}\cdot\sum_{k=1}^{(p^r-1)/2}\f{4^{k-1}}{(2k-1)\binom{2k-2}{k-1}}G\l(\f{p^r+1}{2},k\r)\eq& p^{3r}\sum_{k=0}^{(p^r-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\notag\\ \eq& p^3\sum_{k=0}^{(p-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\pmod{p^4}. \end{align} This proves \eqref{sec3thkey2}. The proof of Theorem \ref{sec3th} is now complete. \qed \medskip \noindent{\it Proof of Theorem \ref{mainth2}}. By \eqref{Fkey} and \eqref{Gkey} we arrive at \begin{equation}\label{mainth2key1} \f{1}{p^r}\sum_{k=0}^{(p^r-1)/2}(4k+1)\f{(\f12)_k^4}{(1)_k^4}\eq\binom{\f12p-1}{p-1}\binom{2p-1}{p-1}+p^3\sum_{k=0}^{(p-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\pmod{p^4}. \end{equation} By \eqref{morley}, $$ \binom{\f12p-1}{p-1}\eq 1-\f12pH_{p-1}(1)+\f14p^2H_{p-1}(1,1)-\f18p^3H_{p-1}(1,1,1)\pmod{p^4} $$ and $$ \binom{2p-1}{p-1}\eq 1+pH_{p-1}(1)+p^2H_{p-1}(1,1)+p^3H_{p-1}(1,1,1)\pmod{p^4}. $$ It is known (cf. \cite{Hessami2014}) that $$ H_{p-1}(1,1)\eq -\f13pB_{p-3}\pmod{p^2} $$ and $$ H_{p-1}(1,1,1)\eq0\pmod{p}. $$ Then in view of \eqref{harmonic1} we get that \begin{equation}\label{mainth2key2} \binom{\f12p-1}{p-1}\binom{2p-1}{p-1}\eq 1-\f7{12}p^3B_{p-3}\pmod{p^4}. \end{equation} Sun \cite[Theorem 1.2]{Sun2014} proved that \begin{equation}\label{mainth2key3} \sum_{k=0}^{(p-3)/2}\f{16^k}{(2k+1)^3\binom{2k}{k}^2}\eq\f74B_{p-3}\pmod{p}. \end{equation} Substituting \eqref{mainth2key2} and \eqref{mainth2key3} into \eqref{mainth2key1} we immediately obtain Theorem \ref{mainth2}.\qed \medskip Now we can easily obtain Theorem \ref{mainth3}. \medskip \noindent{\it Proof of Theorem \ref{mainth3}}. The case $p>3$ follows immediately from Theorems \ref{mainth1} and \ref{mainth2}. Now we consider the case $p=3$. In view of Theorems \ref{sec2th} and \ref{sec3th}, we only need to prove \eqref{mainth3} for $r=1$.
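For $r=1$, combining \eqref{mainth2key1}--\eqref{mainth2key3} gives $\frac{1}{p}\sum_{k=0}^{(p-1)/2}(4k+1)\frac{(\frac12)_k^4}{(1)_k^4}\equiv 1+\frac{7}{6}p^3B_{p-3}\pmod{p^4}$. As a numerical sanity check (not part of the proof), the following Python sketch verifies this congruence, together with \eqref{mainth2key3}, for $p\in\{5,7\}$ using exact rational arithmetic; the helper names are ours.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli number B_n (with B_1 = -1/2) via the standard recurrence."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B[n]

def residue(q, modulus):
    """Reduce a rational whose denominator is coprime to the modulus."""
    return q.numerator * pow(q.denominator, -1, modulus) % modulus

def lhs(p):
    """(1/p) * sum_{k=0}^{(p-1)/2} (4k+1)((1/2)_k/(1)_k)^4, via (1/2)_k/(1)_k = C(2k,k)/4^k."""
    s = sum(Fraction(4 * k + 1) * Fraction(comb(2 * k, k), 4 ** k) ** 4
            for k in range((p - 1) // 2 + 1))
    return s / p

for p in (5, 7):
    # r = 1 case of the supercongruence, modulo p^4.
    rhs = 1 + Fraction(7, 6) * p ** 3 * bernoulli(p - 3)
    assert residue(lhs(p), p ** 4) == residue(rhs, p ** 4)
    # Sun's congruence (mainth2key3), modulo p.
    t = sum(Fraction(16 ** k, (2 * k + 1) ** 3 * comb(2 * k, k) ** 2)
            for k in range((p - 3) // 2 + 1))
    assert residue(t, p) == residue(Fraction(7, 4) * bernoulli(p - 3), p)
print("congruences verified for p = 5, 7")
```

For instance, for $p=5$ both sides reduce to $251 \pmod{625}$.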
In fact, if $p=3$ and $r=1$, one may check \eqref{mainth3} directly.\qed \begin{Acks} The authors are grateful to Prof. Zhi-Wei Sun for his helpful suggestions on this paper. This work was supported by the National Natural Science Foundation of China (grant no. 11971222). \end{Acks}
\section{Introduction} This document is a model and instructions for \LaTeX. Please observe the conference page limits. \section{Ease of Use} \subsection{Maintaining the Integrity of the Specifications} The IEEEtran class file is used to format your paper and style the text. All margins, column widths, line spaces, and text fonts are prescribed; please do not alter them. You may note peculiarities. For example, the head margin measures proportionately more than is customary. This measurement and others are deliberate, using specifications that anticipate your paper as one part of the entire proceedings, and not as an independent document. Please do not revise any of the current designations. \section{Prepare Your Paper Before Styling} Before you begin to format your paper, first write and save the content as a separate text file. Complete all content and organizational editing before formatting. Please note sections \ref{AA}--\ref{SCM} below for more information on proofreading, spelling and grammar. Keep your text and graphic files separate until after the text has been formatted and styled. Do not number text heads---{\LaTeX} will do that for you. \subsection{Abbreviations and Acronyms}\label{AA} Define abbreviations and acronyms the first time they are used in the text, even after they have been defined in the abstract. Abbreviations such as IEEE, SI, MKS, CGS, ac, dc, and rms do not have to be defined. Do not use abbreviations in the title or heads unless they are unavoidable. \subsection{Units} \begin{itemize} \item Use either SI (MKS) or CGS as primary units. (SI units are encouraged.) English units may be used as secondary units (in parentheses). An exception would be the use of English units as identifiers in trade, such as ``3.5-inch disk drive''. \item Avoid combining SI and CGS units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. 
If you must use mixed units, clearly state the units for each quantity that you use in an equation. \item Do not mix complete spellings and abbreviations of units: ``Wb/m\textsuperscript{2}'' or ``webers per square meter'', not ``webers/m\textsuperscript{2}''. Spell out units when they appear in text: ``. . . a few henries'', not ``. . . a few H''. \item Use a zero before decimal points: ``0.25'', not ``.25''. Use ``cm\textsuperscript{3}'', not ``cc''. \end{itemize} \subsection{Equations} Number equations consecutively. To make your equations more compact, you may use the solidus (~/~), the exp function, or appropriate exponents. Italicize Roman symbols for quantities and variables, but not Greek symbols. Use a long dash rather than a hyphen for a minus sign. Punctuate equations with commas or periods when they are part of a sentence, as in: \begin{equation} a+b=\gamma\label{eq} \end{equation} Be sure that the symbols in your equation have been defined before or immediately following the equation. Use ``\eqref{eq}'', not ``Eq.~\eqref{eq}'' or ``equation \eqref{eq}'', except at the beginning of a sentence: ``Equation \eqref{eq} is . . .'' \subsection{\LaTeX-Specific Advice} Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead of ``hard'' references (e.g., \verb|(1)|). That will make it possible to combine sections, add equations, or change the order of figures or citations without having to go through the file line by line. Please don't use the \verb|{eqnarray}| equation environment. Use \verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}| environment leaves unsightly spaces around relation symbols. Please note that the \verb|{subequations}| environment in {\LaTeX} will increment the main equation counter even when there are no equation numbers displayed.
If you forget that, you might write an article in which the equation numbers skip from (17) to (20), causing the copy editors to wonder if you've discovered a new method of counting. {\BibTeX} does not work by magic. It doesn't get the bibliographic data from thin air but from .bib files. If you use {\BibTeX} to produce a bibliography you must send the .bib files. {\LaTeX} can't read your mind. If you assign the same label to a subsubsection and a table, you might find that Table I has been cross referenced as Table IV-B3. {\LaTeX} does not have precognitive abilities. If you put a \verb|\label| command before the command that updates the counter it's supposed to be using, the label will pick up the last counter to be cross referenced instead. In particular, a \verb|\label| command should not go before the caption of a figure or a table. Do not use \verb|\nonumber| inside the \verb|{array}| environment. It will not stop equation numbers inside \verb|{array}| (there won't be any anyway) and it might stop a wanted equation number in the surrounding equation. \subsection{Some Common Mistakes}\label{SCM} \begin{itemize} \item The word ``data'' is plural, not singular. \item The subscript for the permeability of vacuum $\mu_{0}$, and other common scientific constants, is zero with subscript formatting, not a lowercase letter ``o''. \item In American English, commas, semicolons, periods, question and exclamation marks are located within quotation marks only when a complete thought or name is cited, such as a title or full quotation. When quotation marks are used, instead of a bold or italic typeface, to highlight a word or phrase, punctuation should appear outside of the quotation marks. A parenthetical phrase or statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.) \item A graph within a graph is an ``inset'', not an ``insert''. 
The word ``alternatively'' is preferred to the word ``alternately'' (unless you really mean something that alternates). \item Do not use the word ``essentially'' to mean ``approximately'' or ``effectively''. \item In your paper title, if the words ``that uses'' can accurately replace the word ``using'', capitalize the ``u''; if not, keep ``using'' lower-cased. \item Be aware of the different meanings of the homophones ``affect'' and ``effect'', ``complement'' and ``compliment'', ``discreet'' and ``discrete'', ``principal'' and ``principle''. \item Do not confuse ``imply'' and ``infer''. \item The prefix ``non'' is not a word; it should be joined to the word it modifies, usually without a hyphen. \item There is no period after the ``et'' in the Latin abbreviation ``et al.''. \item The abbreviation ``i.e.'' means ``that is'', and the abbreviation ``e.g.'' means ``for example''. \end{itemize} An excellent style manual for science writers is \cite{b7}. \subsection{Authors and Affiliations} \textbf{The class file is designed for, but not limited to, six authors.} A minimum of one author is required for all conference articles. Author names should be listed starting from left to right and then moving down to the next line. This is the author sequence that will be used in future citations and by indexing services. Names should not be listed in columns nor grouped by affiliation. Please keep your affiliations as succinct as possible (for example, do not differentiate among departments of the same organization). \subsection{Identify the Headings} Headings, or heads, are organizational devices that guide the reader through your paper. There are two types: component heads and text heads. Component heads identify the different components of your paper and are not topically subordinate to each other. Examples include Acknowledgments and References and, for these, the correct style to use is ``Heading 5''.
Use ``figure caption'' for your Figure captions, and ``table head'' for your table title. Run-in heads, such as ``Abstract'', will require you to apply a style (in this case, italic) in addition to the style provided by the drop down menu to differentiate the head from the text. Text heads organize the topics on a relational, hierarchical basis. For example, the paper title is the primary text head because all subsequent material relates and elaborates on this one topic. If there are two or more sub-topics, the next level head (uppercase Roman numerals) should be used and, conversely, if there are not at least two sub-topics, then no subheads should be introduced. \subsection{Figures and Tables} \paragraph{Positioning Figures and Tables} Place figures and tables at the top and bottom of columns. Avoid placing them in the middle of columns. Large figures and tables may span across both columns. Figure captions should be below the figures; table heads should appear above the tables. Insert figures and tables after they are cited in the text. Use the abbreviation ``Fig.~\ref{fig}'', even at the beginning of a sentence. \begin{table}[htbp] \caption{Table Type Styles} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \textbf{Table}&\multicolumn{3}{|c|}{\textbf{Table Column Head}} \\ \cline{2-4} \textbf{Head} & \textbf{\textit{Table column subhead}}& \textbf{\textit{Subhead}}& \textbf{\textit{Subhead}} \\ \hline copy& More table copy$^{\mathrm{a}}$& & \\ \hline \multicolumn{4}{l}{$^{\mathrm{a}}$Sample of a Table footnote.} \end{tabular} \label{tab1} \end{center} \end{table} \begin{figure}[htbp] \centerline{\includegraphics{fig1.png}} \caption{Example of a figure caption.} \label{fig} \end{figure} Figure Labels: Use 8 point Times New Roman for Figure labels. Use words rather than symbols or abbreviations when writing Figure axis labels to avoid confusing the reader. As an example, write the quantity ``Magnetization'', or ``Magnetization, M'', not just ``M''. 
If including units in the label, present them within parentheses. Do not label axes only with units. In the example, write ``Magnetization (A/m)'' or ``Magnetization \{A[m(1)]\}'', not just ``A/m''. Do not label axes with a ratio of quantities and units. For example, write ``Temperature (K)'', not ``Temperature/K''. \section*{Acknowledgment} The preferred spelling of the word ``acknowledgment'' in America is without an ``e'' after the ``g''. Avoid the stilted expression ``one of us (R. B. G.) thanks $\ldots$''. Instead, try ``R. B. G. thanks$\ldots$''. Put sponsor acknowledgments in the unnumbered footnote on the first page. \section*{References} Please number citations consecutively within brackets \cite{b1}. The sentence punctuation follows the bracket \cite{b2}. Refer simply to the reference number, as in \cite{b3}---do not use ``Ref. \cite{b3}'' or ``reference \cite{b3}'' except at the beginning of a sentence: ``Reference \cite{b3} was the first $\ldots$'' Number footnotes separately in superscripts. Place the actual footnote at the bottom of the column in which it was cited. Do not put footnotes in the abstract or reference list. Use letters for table footnotes. Unless there are six authors or more give all authors' names; do not use ``et al.''. Papers that have not been published, even if they have been submitted for publication, should be cited as ``unpublished'' \cite{b4}. Papers that have been accepted for publication should be cited as ``in press'' \cite{b5}. Capitalize only the first word in a paper title, except for proper nouns and element symbols. For papers published in translation journals, please give the English citation first, followed by the original foreign-language citation \cite{b6}. 
\section{Introduction} \label{sec:intro} Multi-target tracking systems have a broad range of applications, such as security surveillance or crowd behavior analysis \cite{robin2016multiRobot, ardo2019drone}, and there is an increasing effort in the research community to deploy them in real-world scenarios~\cite{dendorfer2020motchallenge, chavdarova2018wildtrack}. Approaching these problems with multi-camera setups brings additional features and benefits with respect to single-camera systems, such as more complete information for large spaces or crowded scenes. Besides, multi-camera systems naturally lead to the development of distributed solutions, which are lighter in communication demands, more robust to failures and faster in processing time than centralized ones. However, implementing multi-target multi-camera tracking in a distributed setup poses several challenges. The first is the construction of robust and efficient distributed systems in terms of bandwidth usage, consensus between nodes and selection of accurate information to share. The second is that, in the presence of multiple targets, each camera must independently solve a data association problem between measurements and trackers. In contrast with centralized systems, where the central node unifies the high-level information and returns it labeled to the nodes, in a distributed setup the data association across cameras can only be performed locally and with partial information. This work tackles the challenges discussed above by exploiting synergies between complementary modules typically studied independently in prior work. We present an approach for multi-target tracking in a distributed camera network that combines a Distributed Kalman Filter (DKF), a local data association process that uses a state-of-the-art re-identification network, and a novel distributed method for high-level information management. The overall idea is illustrated in Figure~\ref{fig:globalIdea}.
The DKF exploits the local data association to decide which information needs to be merged into the distributed consensus update. Similarly, the data association is enhanced with the uncertainty estimations given by the DKF. Finally, the tracker manager ensures correct association during tracker initialization and synchronized deletion of lost trackers. The main trade-off to consider in our system is bandwidth usage vs.\ accuracy, which we also handle with a careful design. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{images/global_ideaBIG.PNG} \end{center} \caption{Distributed multi-target tracking scenario. The proposed system maintains in each node the 3D tracking information of all targets (seen or not by the current camera) thanks to the information shared among connected cameras. Each camera solves an independent data association problem. Green boxes correspond to detections while blue and red ones refer to the two existing trackers in the scene, ID1 and ID2 respectively (Best viewed in color).} \label{fig:globalIdea} \end{figure} Specifically, our main contributions are: $\bullet$ A \textbf{DKF implementation augmented with fully automatic data association} based on geometric cues and appearance information. Unlike existing integrated approaches, we exchange a single communication message per estimation cycle, lightening the process and increasing applicability to real-world setups. $\bullet$ A \textbf{novel distributed strategy to manage the trackers' information}. In contrast with centralized systems, where the central node unifies the information, we propose a distributed data association across cameras that manages the partial information in local nodes. To minimize the bandwidth overhead caused by the exchange of appearance information, our algorithm communicates appearance features only once per tracker.
The proposed approach is evaluated in public benchmarks, demonstrating the benefits with respect to a naive DKF implementation, as well as established centralized algorithms. Experimentation detailed in Section \ref{sec:experiment} analyzes different relevant aspects including the effect of connectivity restrictions, the influence of appearance in the filter and the effects of our strategy to unify high level information. \section{State of the art} \label{sec:stateofart} This section summarizes related work on core multi-target multi-camera tracking aspects. \subsection{Multi-view multi-target tracking in centralized systems} Multi-target multi-camera tracking in centralized systems sends the information from each camera to a common location where all the data is processed together \cite{tesfaye2019fastconstrains}. These implementations, which stand out for their accuracy, are typically used in safety applications \cite{ferraguti2020safety1,chen2018safety2}. Obtaining the complete trajectories of several targets is normally formulated as an optimization problem in a graph. The nodes represent short trajectories, known as tracklets, obtained by the association of detected bounding boxes. There are different variations to compute the weights of the graph. The combination of the similarity measure given by a triplet loss with a linear motion model is proposed in~\cite{ristani2018features}. The proposal in \cite{wen2017hyper-graph} is the association of tracklets across views based on correlations in motion, appearance and smoothness of the resulting 3D trajectory. A method is presented in \cite{le2018online} to select a subset of tracklets from the graph and associate them based on geometry and motion cues. Other works such as \cite{xu2016hierarchy} model the tracklet association problem as a hierarchical structure optimization. 
The main disadvantages of centralized methods are the excessive bandwidth usage, required to send all the information to the central computer, and the lack of robustness with respect to a single point of failure. \subsection{Multi-view multi-target tracking in distributed systems} Distributed implementations typically focus on improving the robustness and the efficiency through the study of the bandwidth, the consensus between nodes and the accuracy of the information shared. Several algorithms are proposed in \cite{olfati2007DKF} to achieve a consensus in a distributed heterogeneous sensor network performing only one communication per estimation cycle. One of those algorithms is implemented in \cite{soto2009distributed} for a Pan-Tilt-Zoom (PTZ) camera network to track people of interest, although the data association problem is not addressed there. The work in \cite{kamal2015ICF&JPDAF} selects the Information-weighted Consensus Filter (ICF) method \cite{kamal2012ICF} as consensus algorithm and fills the gap of data association with the Joint Probabilistic Data Association (JPDAF) algorithm \cite{zhou1993JPDAF}, which uses the previous target states to relate measurements and trackers. A drawback of the ICF method is the use of several communication messages per camera and estimation cycle, which is less efficient and partially breaks the appealing properties of pure distributed systems. Using the same consensus algorithm, \cite{he2019efficient} proposes a tracking approach in a distributed camera network. They address data association within each camera through a global metric, merging appearance and geometry cues, and across-view data association through the Euclidean distance between the 3D positions of the targets. The improvement of the Consensus Kalman Filter is discussed in \cite{shorinwa2020distributedVehicles}, posed as a Maximum A Posteriori (MAP) optimization problem.
The algorithm consists of closed-form algebraic iterations that guarantee the convergence to the centralized MAP over a designated sliding time window. They assume that each target in the environment has a unique identifier known to all sensors. Our multi-target multi-camera tracking approach is based on \cite{soto2009distributed}, and fills the gap of automatic data association between measurements and trackers with a global metric based on geometry and appearance. Unlike the consensus algorithm in \cite{he2019efficient}, the algorithm implemented in this work only sends one communication message per estimation cycle, lightening the communication process. Besides, we include a specific strategy to manage the high level data across cameras to improve the global data association and the consistency of the trackers in the network. \subsection{Data association and re-identification} A wide variety of techniques have been proposed to tackle the problem of data association. One of the most popular techniques is the use of motion models to compute similarity based on geometric constraints, and the use of features such as histograms for appearance criteria \cite{de2020divers,chandra2019densepeds}. Commonly, a global similarity function is defined based on both metrics. Other works extract body poses and relate them by the nearest observation statistically consistent with the distribution of positions \cite{virgona2018shoulder}. Taking into account the factors included in the data association problem, previous work defines several costs related to geometry, shape, appearance, pose and coordinate transformation to obtain a complete similarity function \cite{sharma2018cost}. The work proposed in \cite{tsai2019intertial} uses inertial sensing and RGB-D cameras to capture the skeleton data and perform a short-term pairing. A long-term pairing process adds the color histogram to the similarity function in order to increase robustness. 
The data association process in our approach is similar to \cite{de2020divers}, but our implementation takes advantage of re-identification strategies supported by recent deep learning techniques. A generalized strategy is based on comparing feature vectors, obtained from a network output, to measure the similarity between a query image and a global gallery (i.e., model) of individuals \cite{chen2019abd,quan2019auto,liu2018attributerecognition}. To extract these appearance feature vectors efficiently and effectively, we use the architecture proposed in \cite{zhou2019omni}, which mixes global and local features in a lightweight network, replacing traditional convolution operations with depth-wise operations. \section{Distributed Tracking Approach} \label{sec:method} \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{images/Architecture.PNG} \end{center} \caption{Overall architecture of our distributed multi-target multi-camera tracking system.} \label{fig:arquitecture} \end{figure} Figure \ref{fig:arquitecture} summarizes the proposed architecture. First, image target positions are obtained with a people detector~\cite{wu2019detectron2} in each camera. Then, the association between the current detections and existing trackers is performed locally (LDA). This association is based on a global score computed from the geometric cues, provided by the local tracking filter, and the appearance similarity with respect to the appearance model of the target stored in the local gallery. To continue the cycle, each camera exchanges a single communication message with the neighboring cameras; the trackers then enter the DKF, where the new state of each target is updated and sent to the Distributed Target Manager (DTM) block. Finally, the DTM manages the initialization of new trackers and the deletion of unobserved ones uniformly over the network.
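In code terms, one estimation cycle at a camera can be summarized as follows. This is a schematic sketch only: the method names are illustrative stand-ins for the modules described above, not an actual API of our implementation.

```python
def estimation_cycle(camera, frame):
    """One estimation cycle at camera C_i, following the architecture above.

    `camera` is assumed to expose the modules described in the text;
    the method names are illustrative, not a released interface.
    """
    detections = camera.detect(frame)                    # people detector
    matches = camera.local_data_association(detections)  # LDA: geometry + appearance
    message = camera.build_message(matches)              # information to share
    received = camera.exchange_with_neighbors(message)   # single message per cycle
    camera.dkf_update(matches, received)                 # consensus state update
    camera.manage_trackers(received)                     # DTM: init/delete trackers
```

Note that the cycle contains exactly one communication round, which is the property that keeps the bandwidth usage low.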
\subsection{Distributed Kalman Filter} For simplicity in the exposition, throughout this sub-section we will consider the distributed tracking of a single target. The target model, \sloppy $\mathbf{x}(k) = (x(k), y(k), w(k), h(k), \dot{x}(k), \dot{y}(k)),$ is represented as a 3D cylinder moving on a ground plane, where $(x(k), y(k))$ are the ground plane coordinates of the cylinder center, $( w(k), h(k))$ are the width and height of the cylinder and $(\dot{x}(k) , \dot{y}(k))$ are the velocity of the target in the $x$ and $y$ directions. The filter models the motion of the target considering a discrete-time linear dynamical system, \begin{gather} \mathbf{x}(k+1) = \mathbf{A}\mathbf{x}(k) + \mathbf{w}(k), \label{transitionDynamics}\hskip 0.25cm \mathbf{z}(k) = \mathbf{H}\mathbf{x}(k) + \mathbf{v}(k), \end{gather} where $\mathbf{w}(k)$ and $\mathbf{v}(k)$ are zero-mean Gaussian noise ($\mathbf{w}(k)\sim \mathcal{N}(0 ,\mathbf{Q}(k)), \mathbf{v}(k)\sim \mathcal{N}(0,\mathbf{R}(k))$), where $\mathbf{Q}(k)$ and $\mathbf{R}(k)$ are the model and measurement covariance matrices, respectively. The change of the target model in each step depends directly on the transition matrix $\mathbf{A}$, which considers a constant velocity model on the target position and no dynamics on the rest of the state. The noisy measurement $\mathbf{z}(k)$ is the 3D cylinder obtained as the projection of the bounding box given by the detector, i.e., $\mathbf{z}(k)=(x(k), y(k), w(k), h(k))$ defined with the output matrix $\mathbf{H}$ plus the noise. To obtain this projection we assume known homographies for each camera to map the image plane to ground plane coordinates. The independent execution of the filter in each camera, $C_i,$ produces a local estimation of the target, $\hat{\mathbf{x}}_i(k)$, possibly different from other cameras' estimates. A Distributed Kalman-Consensus filter is used to mitigate these differences.
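For concreteness, the constant-velocity model above can be instantiated as in the following \texttt{numpy} sketch; the sampling period $\Delta t$ and its value are our own assumptions, since the text does not fix them.

```python
import numpy as np

DT = 0.1  # sampling period (illustrative value, not specified in the text)

# State x = (x, y, w, h, vx, vy): constant velocity on the ground-plane
# position, no dynamics on the cylinder width and height.
A = np.eye(6)
A[0, 4] = DT  # x <- x + vx * DT
A[1, 5] = DT  # y <- y + vy * DT

# Measurement z = (x, y, w, h): the projected 3D cylinder.
H = np.hstack([np.eye(4), np.zeros((4, 2))])

def predict(x_hat, P, Q):
    """Local prediction step: x_bar = A x_hat, P_bar = A P A^T + Q."""
    return A @ x_hat, A @ P @ A.T + Q
```

With this choice, $\mathbf{H}\mathbf{x}(k)$ simply selects the first four state components, matching the definition of $\mathbf{z}(k)$.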
The consensus algorithm works with known data association between the local measurement, $\mathbf{z}_i(k)$, and target prediction for all the cameras. From this association, each camera computes its sensor data information, $\mathbf{u}_i(k)$, and its inverse-covariance matrix, $\mathbf{U}_i(k)$, known as the information vector and information matrix, obtained as \begin{gather} \mathbf{u}_i(k) = {\mathbf{H}}^T{\mathbf{R}_i}^{-1}(k)\mathbf{z}_i(k), \hspace{2mm} \mathbf{U}_i(k) = {\mathbf{H}}^T{\mathbf{R}_i}^{-1}(k)\mathbf{H}. \label{matInfo} \end{gather} The values in~\eqref{matInfo} are exchanged with neighboring cameras in the network, $C_j\in C_i^n$, together with the prediction of the target state, $\mathbf{\bar{x}}_i(k)$, obtained as $\mathbf{\bar{x}}_i(k) =\mathbf{A}\mathbf{\hat{x}}_i(k-1)$. Assuming the measurement noises of the sensors are uncorrelated, the representation in information form allows the cameras to combine all the received measurements with the acquired one by simply adding them, \begin{gather} \mathbf{y}_i(k) = \displaystyle\sum_{C_j\in C_i^n} \mathbf{u}_j(k), \hspace{5mm} \mathbf{S}_i(k) = \displaystyle\sum_{C_j\in C_i^n} \mathbf{U}_j(k). \label{calcS} \end{gather} The state is then updated by the correction in the prediction of the target state with the merged information and the predictions from the neighboring cameras, \begin{equation} \begin{split} \mathbf{\hat{x}}_i(k) = \mathbf{\bar{x}}_i(k) &+ \mathbf{M}_i(k)\left[\mathbf{y}_i(k) - \mathbf{S}_i(k)\mathbf{\bar{x}}_i(k)\right] \\ &+\gamma \mathbf{M}_i(k)\displaystyle\sum_{C_j\in C_i^n}(\mathbf{\bar{x}}_j(k)-\mathbf{\bar{x}}_i(k)), \label{xhat} \end{split} \end{equation} where $\mathbf{M}_i(k) = (\mathbf{P}_i(k)^{-1} + \mathbf{S}_i(k))^{-1}$ is the Kalman Gain in the information form, $\mathbf{P}_i(k)$ is the covariance of the target state and $\gamma = 1/(\|\mathbf{M}_i(k)\| + 1)$.
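A minimal single-target \texttt{numpy} sketch of the correction step \eqref{matInfo}--\eqref{xhat} at camera $C_i$ could read as follows; the variable names are ours.

```python
import numpy as np

def consensus_update(x_bar, P, z, R, H, neighbor_info, neighbor_preds):
    """One consensus correction at camera C_i for a single target.

    neighbor_info:  (u_j, U_j) pairs received from the neighboring cameras.
    neighbor_preds: neighbor state predictions x_bar_j received with them.
    Returns the corrected estimate x_hat and the information-form gain M.
    """
    Rinv = np.linalg.inv(R)
    u = H.T @ Rinv @ z   # local information vector
    U = H.T @ Rinv @ H   # local information matrix
    # Fuse own and received information by simple addition.
    y = u + sum(uj for uj, _ in neighbor_info)
    S = U + sum(Uj for _, Uj in neighbor_info)
    # Kalman gain in information form, then the consensus-weighted update.
    M = np.linalg.inv(np.linalg.inv(P) + S)
    gamma = 1.0 / (np.linalg.norm(M) + 1.0)
    consensus = sum((xj - x_bar for xj in neighbor_preds), np.zeros_like(x_bar))
    x_hat = x_bar + M @ (y - S @ x_bar) + gamma * (M @ consensus)
    return x_hat, M
```

With no neighbors the update degenerates to a standard information-filter correction with the local measurement only.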
Finally, the covariance matrix is updated according to $\mathbf{P}_i(k+1) = \mathbf{A}\mathbf{M}_i(k){\mathbf{A}}^T+ \mathbf{Q}_i(k) \label{P}.$ Although the standard implementation of the DKF assumes synchronization of the cameras, the consensus-nature of the algorithm makes it amenable to a fully asynchronous, event-triggered implementation such as~\cite{he2017event}. \subsection{Local Data Association} \label{sec:lda} In our method, the local data association required for a correct update of the filter is performed by merging two criteria, based on geometry and appearance. Consider now that in a particular estimation cycle, the set of measurements $\mathcal{Z}=\{\mathbf{z}_j\}$\footnote{We now use the index $j$ to denote different measurements observed in a single camera instead of neighbors in the camera network.} is provided by the detector to the LDA module. Since the DKF updates the uncertainty of the tracker state, we take advantage of this information to calculate the Mahalanobis distance between the $x,y$ position on the ground of each measurement, $\mathbf{z}_j$, and the predicted position, $\mathbf{\bar{x}}_i$, \begin{equation} d(\mathbf{z}_j, \mathbf{\bar{x}}_i) = \sqrt{(\mathbf{z}_j-\mathbf{H\bar{x}}_i)\mathbf{V}^{-1}(\mathbf{z}_j-\mathbf{H\bar{x}}_i)^T}, \label{eq:maha} \end{equation} where $\mathbf{V} = \mathbf{P}_{xy}+\mathbf{R}_{xy}$, with $\mathbf{P}_{xy}$ and $\mathbf{R}_{xy}$ the sub-matrices of $\mathbf{P}_i$ and $\mathbf{R}_j$ that encode the position covariance of the estimate and the measurement, respectively. Then, the similarity value in geometry is computed as \begin{align} s_d(\mathbf{z}_j, \mathbf{\bar{x}}_i) = \left\{ \begin{array}{c} \frac{1}{\alpha}\hspace{1mm}d(\mathbf{z}_j, \mathbf{\bar{x}}_i) \hspace{3mm} \hbox{ if } \hspace{1.5mm}d(\mathbf{z}_j, \mathbf{\bar{x}}_i) < \tau \\ 1 \hspace{10mm} \hbox{ otherwise}, \\ \end{array} \right.
\label{eq:sd} \end{align} where $\alpha$ is a configuration parameter and $\tau$ a threshold applied to discard highly unlikely candidates. The candidates selected by the geometry constraint are then evaluated in appearance. Instead of using traditional hand-crafted descriptors to measure the appearance similarity, we employ a network designed for people re-identification \cite{zhou2019omni}, whose weights have been pre-trained on the MSMT17 benchmark \cite{wei2018msmt17}. Inspired by the re-identification task evaluation methodology, we associate with each tracker a local gallery of limited size that represents the target appearance model. The construction of this gallery is currently done by simply storing locally observed patches for each target and updating them over time, saving a new patch every $N$ frames and discarding the oldest one. The appearance similarity between a query and the tracker gallery is obtained with the minimum cosine distance, \begin{equation} s_a(\mathbf{a}_j) = \min_{\mathbf{a}_{i} \in \mathcal{A}_{i}} \displaystyle\left( 1 - \frac{\mathbf{a}_j^T \hspace{1mm} \mathbf{a}_{i}}{\|\mathbf{a}_j\| \|\mathbf{a}_{i}\|}\right), \label{eq:sa} \end{equation} where $\mathbf{a}_j,$ the query, is the appearance feature vector associated with detection $\mathbf{z}_j$, and $\mathcal{A}_i=\{\mathbf{a}_i\}$ is the set of feature vectors of the target's gallery used by camera $C_i$. The final decision in the data association is based on selecting the minimum value of the product of both scores, $s_a$ and $s_d$. \subsection{Distributed Tracker Manager} \label{sec:manage} Another important issue to address in a practical implementation of the DKF is the management of the trackers across the full distributed system. This requires a correct data association of trackers across different cameras to guarantee that the information mixed in~\eqref{calcS} and~\eqref{xhat} corresponds to the same target.
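Putting~\eqref{eq:maha}--\eqref{eq:sa} together, the local association reduces to a small amount of code; a NumPy sketch under our notation (helper names are ours and purely illustrative):

```python
# Sketch of the local data association: geometric gating via the Mahalanobis
# distance, appearance scoring against the gallery, and selection of the
# detection minimizing the product of both scores.
import numpy as np

def s_d(z, x_bar, H, V, alpha, tau):
    """Geometric similarity: scaled Mahalanobis distance, saturated at 1."""
    r = z - H @ x_bar
    d = np.sqrt(r @ np.linalg.inv(V) @ r)
    return d / alpha if d < tau else 1.0

def s_a(a_query, gallery):
    """Appearance similarity: minimum cosine distance to the gallery."""
    return min(1.0 - (a_query @ a) / (np.linalg.norm(a_query) * np.linalg.norm(a))
               for a in gallery)

def associate(detections, x_bar, H, V, alpha, tau, gallery):
    """detections: list of (z_j, a_j) pairs; returns index of the best one."""
    scores = [s_d(z, x_bar, H, V, alpha, tau) * s_a(a, gallery)
              for z, a in detections]
    return int(np.argmin(scores))
```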
Similarly, the cameras need to agree upon the time at which a particular tracker is no longer relevant and should be dropped. We propose how to address these problems in a distributed fashion. \vspace{2mm} \paragraph*{Distributed Global Data Association} Trackers are identified locally by a two-dimensional unique identifier, ID$_i$, described by the camera id in the network, $i$, and a local counter, $n$. New trackers can be initialized either because of a new local observation or because of a transmission from neighboring cameras. Local initialization is done whenever a new target generates two observations in its local gallery. This helps filter spurious measurements from the detector, giving a new tracker enough time to ensure that it corresponds to a valid target. Once this happens, we attach to the DKF data the appearance model, $\mathcal{A}_{\hbox{ID}_i},$ in the message sent to neighbors. It is important to highlight that this is the only moment when appearance is transmitted through the network in our algorithm, consisting of the two descriptors available in the local gallery at that time. The second case that can trigger new tracker initializations in our system is the reception of messages from neighboring cameras. Our algorithm considers three situations for this case: 1) A single neighbor camera sends a new tracker. Then, the camera creates a new tracker and associates it with the received one for future DKF consensus updates. The local gallery is initialized with the appearance model received. 2) The camera receives new trackers from several neighboring cameras. 3) A new local tracker is initialized at the same time that new trackers from other cameras are received. In situations 2) and 3), it is necessary to check whether the new trackers from the different cameras correspond to the same target or not.
We perform a process similar to the local data association described in Section \ref{sec:lda}, replacing in~\eqref{eq:maha} the measurement by the other camera's estimation, $d(\mathbf{\hat{x}}_i, \mathbf{\bar{x}}_j)$, and the Mahalanobis distance with the Euclidean distance, $\mathbf{V}=\mathbf{I}$, since the covariance matrices associated with the trackers are not part of the communication messages. This also requires a different threshold in~\eqref{eq:sd}. If two or more trackers are similar enough, they are merged locally into a single one. \vspace{2mm} \paragraph*{Consensus-based Tracker Drop and Re-initialization} The other main task of the Distributed Tracker Manager is to decide when to drop a tracker. Instead of letting each camera decide individually, we have opted for a consensus-based solution that reduces, as much as possible, the number of iterations during which different cameras track the target independently. We let $\ell_i$ be camera $i$'s local estimate of the number of iterations elapsed since any camera in the network last associated a measurement to the target. Since this is a global parameter that involves the whole network, its estimate is sent as part of the tracker message at every iteration. If the camera achieves the local data association of the tracker, the new value of this parameter is set to zero. Otherwise, the camera chooses as the new value the minimum among all the values received, including its own, and adds one unit, \begin{equation} \ell_{i}(k+1) = \left \{ \begin{array}{ll} 0 & \textrm{if detected} \\ \displaystyle\min_{j \in \mathcal{C}_{i}^n} \left(\ell_i(k), \ell_j(k)\right)+1 & \textrm{otherwise} \end{array} \right. \label{eq:ages} \end{equation} The camera drops the target when $\ell_i(k+1)$ is higher than a threshold $\kappa$.
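The counter update in~\eqref{eq:ages} is a plain min-consensus with a reset on detection; a minimal sketch (function names are ours):

```python
# Sketch of the consensus-based drop counter: reset on a local detection,
# otherwise take the minimum over own and received counters and add one.
def update_counter(ell_i, ell_neighbors, detected):
    if detected:
        return 0
    return min([ell_i, *ell_neighbors]) + 1

def should_drop(ell_next, kappa):
    """Drop the tracker once the counter exceeds the threshold kappa."""
    return ell_next > kappa
```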
The tracker ID is saved together with its gallery as an \textit{old tracker}, to be recovered if the same target comes back into the camera field of view or some other camera re-activates it. This process is checked during the local initialization. Before assigning a new ID, a re-identification score is computed between the new tracker gallery and the galleries from \textit{old trackers} through~\eqref{eq:sa}. The re-initialization is accomplished if $s_a$ is lower than $\epsilon$. \subsection{Communication and bandwidth requirements.} \label{Sec:BW} The information shared between cameras at every iteration consists, for each active tracker, of the tracker ID, its predicted state, the measurements obtained in~\eqref{matInfo} and the counter of the last observation, (ID$_i$, $\mathbf{\bar{x}_i}, \mathbf{u}_i, \mathbf{U}_i$, $\ell_i$). This message is encoded using a total of $51$ elements, representing less than $1$kB of bandwidth per tracker. In order to carry out the process explained in the \textit{Distributed Global Data Association}, the message with the information related to the new trackers is sent together with appearance information. The appearance information exchanged between cameras is composed of the two appearance feature vectors required in the tracker initialization. Each one has 512 elements, representing $16.38$kB of bandwidth that is sent only once per tracker. \section{Experiments} \label{sec:experiment} \subsection{Experimental setup.} Our experiments are run on data from the EPFL dataset~\cite{k-short}: the \textit{Terrace} set with 4 outdoor cameras, the \textit{Laboratory} set with 4 indoor cameras and the \textit{Campus} set with 3 outdoor cameras. For the person detection, all experiments run the official Detectron implementation \cite{wu2019detectron2}, filtering the output with a process equivalent to non-maximum suppression.
Regarding evaluation, we use standard Multiple Object Tracking (MOT) metrics, including Multi Object Tracking Accuracy (\textbf{MOTA}) and Multiple Object Tracking Precision (\textbf{MOTP}), as defined in \cite{bernardin2008evaluating}, and Identification Precision (\textbf{IDP}), Identification Recall (\textbf{IDR}) and their F1 Score (\textbf{IDF1}) as defined in \cite{ristani2016evaluating2}. All the evaluations follow the same process explained in \cite{xu2016hierarchy}, giving as a final result the median of each metric for all the cameras available in each dataset. \begin{center} \begin{figure*}[tb!] \begin{tabular}{@{}ccccc} \begin{tabular*}{13mm}[l]{@{}c} \end{tabular*} & \begin{tabular}{cccc} Complete \hspace{26mm} & Ring \hspace{29mm} & Chain \hspace{20mm} & Disconnected \end{tabular}\\ \begin{tabular*}{13mm}[l]{@{}c} Graph \end{tabular*} & \begin{tabular}{cccc} \includegraphics[width=12mm, height=12mm]{images/completeConnect.PNG} \hspace{23mm} & \includegraphics[width=12mm, height=12mm]{images/mediumConnect.PNG} \hspace{25mm} & \includegraphics[width=12mm, height=12mm]{images/lowConnect.PNG} \hspace{22mm} & \includegraphics[width=12mm, height=12mm]{images/noConnect.PNG} \end{tabular}\\ \begin{tabular*}{13mm}[l]{@{}c} Terrace \end{tabular*} & \begin{tabular}{cccc} \includegraphics[width=37mm, height=27mm]{images/Terrace_Complete.png} & \includegraphics[width=37mm, height=27mm]{images/Terrace_Ring.png} & \includegraphics[width=37mm, height=27mm]{images/Terrace_Chain.png} & \includegraphics[width=37mm, height=27mm]{images/Terrace_Disconnect.png} \end{tabular}\\ \begin{tabular*}{13mm}[l]{@{}c} Laboratory \end{tabular*}& \begin{tabular}{cccc} \includegraphics[width=37mm, height=27mm]{images/Laboratory_Complete.png} & \includegraphics[width=37mm, height=27mm]{images/Laboratory_Ring.png} & \includegraphics[width=37mm, height=27mm]{images/Laboratory_Chain.png} & \includegraphics[width=37mm, height=27mm]{images/Laboratory_Disconnect.png} \end{tabular} \\ 
\begin{tabular*}{13mm}[l]{@{}c} Campus* \end{tabular*} & \begin{tabular}{cccc} \includegraphics[width=37mm, height=27mm]{images/Campus_Ring.png} & \includegraphics[width=37mm, height=27mm]{images/Campus_Ring.png} & \includegraphics[width=37mm, height=27mm]{images/Campus_Chain.png} & \includegraphics[width=37mm, height=27mm]{images/Campus_Disconnect.png} \end{tabular}\\ \multicolumn{4}{c}{\footnotesize \hspace{65mm} \includegraphics[width=54mm, height=2.8mm]{images/legend_row.PNG}} \\ \end{tabular} \caption{Ablation study for our approach considering different graph topologies (complete, ring, chain, disconnected). The variations are run on three sets of data (Terrace, Laboratory and Campus) and evaluated using CLEAR MOT metrics. Running all the steps of the proposed approach (DKF+LDA+DTM) achieves the best results in the majority of cases. *The Campus dataset only has three cameras, obtaining the same results in the complete and ring graphs.} \label{fig:histograms} \end{figure*} \end{center} \vspace{-6mm} \subsection{Ablation Study and Topology effect} This first experiment analyzes three configurations of the proposed approach to evaluate the effects of the novel modules we propose in our architecture. The simplest configuration is our implementation of the Distributed Kalman Filter using only geometric information in the data association process (DKF). A second version adds the appearance information from the local gallery (DKF + LDA). The third version adds the distributed tracker manager module, representing the complete algorithm presented in the paper (DKF + LDA + DTM). Besides the ablation study, this experiment analyzes the effects of different network topologies. For the chain and ring topologies we have considered several network alternatives to make the evaluation independent of the individual quality of particular cameras in the tracking. The considered combinations are shown in Figure \ref{fig:connections}.
Furthermore, we evaluate the influence of the appearance in a disconnected graph, i.e., without information shared between cameras and thus running four independent Kalman filters. \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{images/analysis2.PNG} \caption{Alternatives considered for (a) ring graph topologies and (b) chain graph topologies.} \label{fig:connections} \end{figure} Figure \ref{fig:histograms} summarizes the results of all the tests in this experiment. Each row shows the results obtained using a certain dataset, and each column corresponds to the results with a different connectivity graph. In all of them, the configuration parameters of the proposed algorithm are set to $\alpha_{LDA}=2000$, $\alpha_{GDA}=50$, $\tau=0.5$, $\kappa=15$, $\epsilon = 0.25$ and the gallery size to 20 samples with $N=20.$ These parameters were defined in Sections~\ref{sec:lda} and~\ref{sec:manage}. \begin{figure}[!tb] \centering \includegraphics[width=0.48\textwidth]{images/completeScenes.PNG} \caption{Qualitative tracking results of our approach in three datasets: \textit{Laboratory} (first row), \textit{Terrace} (second row) and \textit{Campus} (third row). More on the supplementary video.} \label{fig:epfl} \end{figure} Overall, we can see how both proposed modules (LDA and DTM) bring significant improvements in all the metrics and datasets, with a more positive influence on the identification metrics (IDF1, IDP and IDR) than on the tracking metrics (MOTA, MOTP). In a few cases it would be slightly better to apply only LDA, but the penalty for including DTM in those few cases (a decrease of less than 5\%) is not nearly as significant as the benefits it brings in the rest of the cases (up to a 20\% increase in some network topologies with respect to the DKF+LDA configuration). Figure \ref{fig:epfl} shows some qualitative results for each of the datasets. The supplementary material provides additional results.
\subsection{Comparison with other algorithms} This experiment compares the proposed approach (configured exactly as in the previous experiment) with recent centralized methods evaluated in \cite{xu2016hierarchy}. They publish results on public datasets running their proposed approach, Hierarchical Trajectory Composition (HTC), as well as two other baselines, Probabilistic Occupancy Map (POM) \cite{pom} and K-Shortest Paths (KSP) \cite{k-short}. To make a fair comparison, the evaluation parameters are those described in \cite{xu2016hierarchy}. Among the datasets used there that provide accurate camera-ground plane calibration (required to run our algorithm), we picked the most challenging sequence of \textit{Terrace}, where a high number of crossings and occlusions occur between targets. Note that this is not intended to be a thorough evaluation, but rather an experiment to see where the proposed distributed approach stands in comparison with existing centralized approaches. Previously discussed distributed approaches \cite{soto2009distributed, kamal2015ICF&JPDAF} cannot be included in this study because, to the best of our knowledge, they do not provide MOT metric results in available benchmarks, or focus on different goals and metrics than ours. In \cite{kamal2015ICF&JPDAF}, the experiments focused on the consensus algorithm analysis, whereas \cite{soto2009distributed} uses its own camera network to test the proposed approach, showing as a result the trajectories of the individuals on the ground. Section~\ref{sec:stateofart} already highlighted the advantages of our approach with respect to these systems.
\begin{table}[!tb] \centering \footnotesize \begin{tabular}{|p{3.9cm}|c|c|c|} \hline \textbf{Algorithm} & MOTA & MOTP & Bandwidth\\ & & & [kB/frame]$^{o}$\\ \hline \multicolumn{4}{|l|}{\textbf{Centralized} baselines}\\ \hline HTC \cite{xu2016hierarchy}* & 71.84 & 71.15 & 1215 \\ KSP \cite{k-short}* & 65.75 & 57.82 & 1215 \\ POM \cite{pom}* & 56.9 & 61.33 & 1215 \\ \hline \multicolumn{4}{|l|}{\textbf{Distributed} proposed approach}\\ \hline DKF+LDA+DTM (Complete) & 80.95 & 75.2 & 28.23 \\ DKF+LDA+DTM (Ring) & 75.98 & 74.42 & 28.23 \\ DKF+LDA+DTM (Chain) & 69.74 & 73.96 & 28.23 \\ \hline \multicolumn{4}{@{}p{8cm}}{\footnotesize * Results interpolated from \cite{xu2016hierarchy}, only shown graphically there.}\\ \multicolumn{4}{@{}p{8cm}}{\footnotesize $^{o}$ Bandwidth calculated analytically; more details in the text.} \end{tabular} \caption{MOT metrics and bandwidth requirements per frame of our distributed approach and existing centralized methods on the \textit{Terrace} dataset.} \label{tab:comparison2} \end{table} The results are summarized in Table \ref{tab:comparison2}, where we see better performance of our approach with respect to the centralized methods in two of the three graph topologies. The complete graph, being the closest version to a centralized system, obtains an improvement of 9.11\% in MOTA, while the ring graph improves the results by 4.14\%. Finally, the chain graph obtains a lower MOTA, but results still comparable to those of the HTC algorithm. Regarding the bandwidth, for the centralized systems we have computed it considering that every camera is at one-hop communication from the central server. Since there are four cameras, the server needs to receive 4 images of $288\times360$ RGB pixels each, giving a total of 414,720 pixels. Each pixel is encoded with 3 Bytes (R+G+B), so the total bandwidth required is 1,215kB/frame.
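The centralized figure follows from simple arithmetic; a quick check (assuming, as the table's value implies, 1kB = 1024 bytes):

```python
# Back-of-the-envelope check of the centralized bandwidth figure:
# 4 cameras, one 288x360 RGB image each, 3 bytes per pixel.
pixels = 4 * 288 * 360             # 414,720 pixels per estimation cycle
total_bytes = pixels * 3           # 1,244,160 bytes
kb_per_frame = total_bytes / 1024  # 1215.0 kB/frame
```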
For the distributed system, we compute an upper bound of the bandwidth, considering that at every iteration there are $9$ active trackers, which is the maximum number of trackers active during the whole execution. Each tracker requires $0.78$kB/frame (Section~\ref{Sec:BW}), so, multiplied by the 4 cameras and the 9 trackers, this results in $28.08$kB. The total number of trackers initialized in our algorithm is $24$, which, averaged over all the iterations, adds a total of $0.15$kB/frame to send the appearance information of the new trackers. The sum of these two quantities gives the $28.23$kB/frame of Table~\ref{tab:comparison2}. \balance \section{Conclusions} This paper has presented a new multi-target tracking approach for a distributed camera network, providing a global approach that deals with the distributed fusion of low- and high-level information. In this work, the challenges of a distributed system have been addressed by boosting the DKF with a fully automatic data association and a novel tracker manager to handle the misalignment of the high-level information. The data association is based on geometric and appearance constraints. The distributed tracker manager takes care of the global data association and each tracker's consistency in the network, reducing the number of iterations during which different cameras track the same target independently. The proposed approach is evaluated in challenging public benchmarks, reaching comparable or even better results than centralized systems and demonstrating the benefits with respect to a naive DKF implementation. The current implementation of the DKF assumes synchronization of the cameras, but the consensus nature of the algorithm makes it amenable to a fully asynchronous and event-triggered implementation. This and more sophisticated strategies of information selection to build the galleries are open challenges left for future work. { \bibliographystyle{IEEEtran}
\section{Introduction} The traditional way to reduce the independent parameters of a theory is the introduction of a symmetry. Grand Unified Theories (GUTs) \cite{pati1,gut1,GUTS} are representative examples of such attempts. For instance, the minimal $SU(5)$ reduces the gauge couplings of the Standard Model (SM) by one and gives us a testable prediction for one of them. In fact, LEP data \cite{abf,lec1} seem to suggest that $N=1$ global supersymmetry \cite{sakai1,lec2} should be required in addition to make the prediction viable. GUTs also relate Yukawa couplings among themselves, which can lead to predictions for the parameters of the SM. The prediction of the ratio $m_{\tau}/m_b$ \cite{begn} in the minimal $SU(5)$ was an example of a successful reduction of the independent parameters of this sector. On the other hand, requiring more symmetry (e.g., $SO(10),~E_6,~E_7,~E_8$) does not necessarily lead to more predictions for the SM parameters, due to the presence of new degrees of freedom, various ways and channels of breaking the symmetry, etc. An extreme case from this point of view is superstrings, which have huge symmetries but no real predictions for the SM parameters. In a series of papers \cite{kmz}--\cite{kmoz2}, we have proposed that a natural gradual extension of the GUT ideas, which preserves their successes and enhances the predictions, is to attempt to relate the gauge and Yukawa couplings, or in other words, to achieve Gauge-Yukawa Unification (GYU). Searching for a symmetry that could provide such a unification, one is led to introduce a symmetry that relates fields with different spins, i.e., supersymmetry, and in particular $N=2$ supersymmetry \cite{f79}. Unfortunately, $N=2$ supersymmetric theories have serious phenomenological problems due to light mirror fermions. Needless to say, GYU also exists in superstrings \cite{lec3,lec4}.
In the following we would like to emphasize an alternative way to achieve unification of couplings, which is based on the fact that within the framework of a renormalizable field theory, one can find renormalization group (RG) invariant relations among parameters, that can improve the calculability and the predictive power of a theory. In our recent studies \cite{kmz}--\cite{kmoz2}, we have considered the GYU which is based on the principles of reduction of couplings \cite{chang1}--\cite{andrianov1} and finiteness \cite{PW}--\cite{LZ}. These principles, which are formulated in perturbation theory, are not explicit symmetry principles, although they might imply symmetries. The former principle is based on the existence of RG invariant relations among couplings, which preserve perturbative renormalizability. Similarly, the latter one is based on the fact that it is possible to find RG invariant relations among couplings that keep finiteness in perturbation theory, even to all orders. Applying these principles one can relate the gauge and Yukawa couplings without introducing necessarily a symmetry, nevertheless improving the predictive power of a model. Concerning recent related studies, we would like to emphasize that our approach to GYU for asymptotically non-free theories \cite{kmtz,kmtz2} covers work done by other authors \cite{lanross}, though the underlying idea might be different. In the next section we begin by illustrating the idea of reduction of couplings, and in section 3 we consider a Finite Unified Theory (FUT) based on $SU(5)$--one of the successful Gauge-Yukawa Unified theories--which, moreover, is attracting a renewed interest because of duality in supersymmetric field theories \cite{seiberg,leigh}. \section{Reduction of couplings} To illustrate the idea of the reduction of couplings, we consider a theory containing two scalar fields $\phi_I ~,~I=1,2$. 
The renormalizable Lagrangian, which has two parities $ \phi_I \to -\phi_I $, is given by \begin{eqnarray} {\cal L} &=& \frac{1}{2}\sum_{I=1,2}~(~\partial_{\mu} \phi_I\partial^{\mu}\phi_I-m^{2}_{I}~\phi_{I}^{2}~) -\frac{g_1}{4!}~\phi_{1}^{4}- \frac{g_0}{4}~\phi_{1}^{2}\phi_{2}^{2} -\frac{g_2}{4!}~\phi_{2}^{4}~. \end{eqnarray} The theory defined by this Lagrangian originally has three dimensionless couplings $g_i~,~i=0,1,2$ and two dimensionful parameters $m_1$ and $m_2$, and we would like to consider the reduction in these numbers. To this end, we first compute one-loop diagrams in $4-2\epsilon$ dimensions and employ the minimal subtraction (MS) scheme for renormalization. One finds in this order \begin{eqnarray} g_{0}^{(0)} &=&\mu^{2\epsilon}~[~ g_0+\frac{1}{\epsilon} \frac{1}{16\pi^2}~(~g_{0}^{2}+\frac{1}{4}g_{1}g_0 +\frac{1}{4}g_{2}g_0~)~]~,\\ g_{i}^{(0)} &=&\mu^{2\epsilon}~[~ g_i+\frac{1}{\epsilon} \frac{1}{16\pi^2}~(\frac{3}{4})~(~g_{0}^{2}+g_{i}^{2}~)~]~(i=1,2)~,\\ (m_{1}^{(0)})^2 &=&m_{1}^{2}+[~\frac{1}{\epsilon} \frac{1}{16\pi^2}~(\frac{1}{2})~(g_{1}m_{1}^{2} +g_{0}m_{2}^{2}~)~]~,\\ (m_{2}^{(0)})^2 &=&m_{2}^{2}+[~\frac{1}{\epsilon} \frac{1}{16\pi^2}~(\frac{1}{2})~(g_{2}m_{2}^{2} +g_{0}m_{1}^{2}~)~]~,~ \end{eqnarray} where $\mu$ is the 't Hooft renormalization scale, and $g^{(0)}$'s and $m^{(0)}$'s stand for the bare couplings and masses. To maintain renormalizability of the theory, it is usually assumed that these five parameters are independent. There may be, however, exceptional situations. Obviously, in the presence of the $O(2)$ symmetry, we have $m_1=m_2$ and $g_1=g_2=3g_0$, so that only one dimensionless and one massive parameter are independent. This is true to all orders in perturbation theory, because the $O(2)$ symmetry is anomaly-free in the present case. Are there other possibilities?
To answer this question at one-loop order, we assume that \begin{eqnarray} g_i &=& \rho_i\,g_0~,~(i=1,2)~,~m_1=e\,m_2~, \end{eqnarray} and insert them into the renormalization eqs. (2)--(5). One finds that (under the assumption that $m_{1}^{2}~,~m_{2}^{2} > 0$) there are two solutions that are consistent with one-loop renormalizability: \begin{eqnarray} \rho_1 &=&\rho_2=3~\mbox{or}~\rho_1 = \rho_2=1 \end{eqnarray} with $e=1$. The first one is the symmetric one, but the second one is associated with no obvious symmetry. So, the second one might be an artifact of one-loop order and could disappear if one goes to higher orders. It is remarkable that one can already check at one-loop order whether the second possibility of reducing the number of parameters persists in higher orders. We will see this in a moment. The reduction of couplings was originally formulated for a massless theory on the basis of the Callan-Symanzik equation. The extension to theories with massive parameters is not straightforward if one wants to keep the generality and the rigor on the same level as for the massless case; one has to fulfill a set of requirements coming from the renormalization group equations, the Callan-Symanzik equations, etc., along with the normalization conditions imposed on irreducible Green's functions. There has been some progress in this direction \cite{maison}. Here we would like to present the idea of the reduction of dimensionless couplings. As we have done in the example above, we assume, to make the method transparent, that the MS scheme has been employed so that all the RG functions such as $\beta$ functions depend only on dimensionless couplings. Then we would like to investigate whether a solution like eq. (7), which is not a consequence of a symmetry, persists to higher orders in perturbation theory. To be general, we consider a massless renormalizable theory which contains a set of $(N+1)$ dimensionless couplings.
The renormalized irreducible Green's function in the MS scheme satisfies the RG equation \begin{eqnarray} 0 &=&[~\mu\frac{\partial}{\partial \mu}+ ~\beta_i\,\frac{\partial}{\partial g_i}+ ~\Phi_I \gamma^{\phi I}_{~~~J} \frac{\delta}{\delta \Phi_J} ~]~\Gamma (~{\bf \Phi},g_0,g_1,\dots,g_N,\mu~)~, \end{eqnarray} where ${\bf \Phi}$ stands for a set of fields, $\beta$'s for the $\beta$ functions and $\gamma$ for the $\gamma$ functions. We then ask ourselves whether the reduction of parameters, i.e., \begin{eqnarray} g_i &=&g_i(g)~,~(i=1,\dots,N)~,~g\equiv g_0 \end{eqnarray} is consistent with the RG equation \begin{eqnarray} 0 &=&[~\mu\frac{\partial}{\partial \mu}+ ~\beta_g\,\frac{\partial}{\partial g}+ ~\Phi_I \gamma^{\phi I}_{~~~J}\frac{\delta}{\delta \Phi_J} ~]~\Gamma (~{\bf \Phi},g,g_1(g),\dots,\mu~)~, \end{eqnarray} where $g$ is called the primary coupling. We find that the following set of equations has to be satisfied: \begin{eqnarray} \beta_g =\beta_0 &,& \beta_g\,\frac{d g_i}{d g} = \beta_i~,~(i\neq 0)~, \end{eqnarray} which are called the reduction equations \cite{zim1}. The bare quantities are given by \begin{eqnarray} \Phi^{(0)}_{I} &=& \mu^{k_{ I}\epsilon}Z^{\phi~J}_{I} (g) \Phi_J~,~ g_{i}^{(0)} = \mu^{k_{i}\epsilon} Z^{g~j}_{i}(g) g_j (g)~. \end{eqnarray} The renormalization constants above are those which are first computed in the original theory and then rewritten by means of eq. (9), and the $k$'s are introduced to match the dimension in $(4-2 \epsilon)$ dimensions. Therefore, the requirement for the reduced theory to be perturbatively renormalizable is that the functions $g_i (g)$ should have a power series expansion in the primary coupling $g$. That is, \begin{eqnarray} g_{i}(g) &=& g\,\sum_{n=0}^{\infty} \rho_{i}^{(n)} g^{n}~(i\neq 0)~.
\end{eqnarray} Recalling that $\beta$'s and $\gamma$'s are also power series and assuming that the expansion coefficients with $n \leq n_0$ are determined, we insert the power series ansatz (13) into the reduction equations (11). One finds that to obtain the $(n_0+1)^{\mbox{th}}$-order coefficients, we have to solve a linear system of equations with $N$ unknown quantities, whose coefficients are given by the lowest-order quantities in the reduction procedure. This is why one can already investigate at the lowest order whether the linear system at $(n_0+1)^{\mbox{th}}$ order can be uniquely solved. For our example of a $\phi^4$ theory, one finds \begin{eqnarray} \beta_0 &=& \mu \frac{d g_0}{d \mu} = \frac{1}{16\pi^2} (4g_0^2+g_1 g_0+g_2 g_0)+\dots ~,\\ \beta_i &=& \mu \frac{d g_i}{d \mu} = \frac{3}{16\pi^2} ( g_0^2+g_{i}^{2})+\dots ~,~(i=1,2)~, \end{eqnarray} where $\dots$ indicates higher order terms. The power series ansatz for the present case takes the form \begin{eqnarray} g_{i}(g) &=& g\,(~\rho_{i}^{(0)}+\sum_{n=1}^{\infty} \rho_{i}^{(n)} g^{n}~)~,~(i=1,2)~, \end{eqnarray} where \begin{eqnarray} \rho_{1}^{(0)}=\rho_{2}^{(0)}=3~\mbox{or}~1~. \end{eqnarray} As described above, we insert them into the corresponding reduction equations and assume that $\rho_{i}^{(n)}$ with $n \leq n_0$ are already determined. Collecting terms of $O(g^{n_0+3})$, we find that \begin{eqnarray} & &\left( \begin{array}{ll} (n_0+2)(4+\rho_{1}^{(0)}+\rho_{2}^{(0)})-5\rho_{1}^{(0)} &~~~~~\rho_{1}^{(0)}\\ ~~~~~\rho_{2}^{(0)} & (n_0+2)(4+\rho_{1}^{(0)} +\rho_{2}^{(0)})-5\rho_{2}^{(0)}\\ \end{array} \right) \nonumber\\ & &\times \left( \begin{array}{l} \rho_{1}^{(n_0+1)}\\ \rho_{2}^{(n_0+1)}\\ \end{array} \right) = \mbox{known quantities}~. \end{eqnarray} Since the matrix on the left-hand side of eq. (18) is regular, we conclude that $\rho_{i}^{(n_0+1)}$ can be uniquely determined. That is, the power series (13) exists uniquely.
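The lowest-order step can also be checked mechanically; a small symbolic sketch, assuming SymPy, which inserts $g_i=\rho_i g$ into the reduction equations built from the one-loop $\beta$ functions (14) and (15) (an overall factor $g^2/16\pi^2$ cancels):

```python
# One-loop check that rho_1 = rho_2 = 3 and rho_1 = rho_2 = 1 are the only
# solutions of the reduction equations beta_0 * rho_i = beta_i with g_i = rho_i g.
import sympy as sp

r1, r2 = sp.symbols('rho1 rho2')
# bracketed one-loop factors after removing the common g^2/16pi^2:
eq1 = r1 * (4 + r1 + r2) - 3 * (1 + r1**2)
eq2 = r2 * (4 + r1 + r2) - 3 * (1 + r2**2)
solutions = sp.solve([eq1, eq2], [r1, r2])
assert set(solutions) == {(1, 1), (3, 3)}
```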
Moreover, it is possible \cite{zim1} to find a reparametrization of couplings in such a way that $\rho_{i}^{(n)}$ for all $n > 0$ exactly vanish. In fact, this theory corresponds to \cite{kazakov1} $$ {\cal L} =\sum_{I=+,-}~(~ \frac{1}{2}\partial_{\mu} \phi_I\partial^{\mu}\phi_I -\frac{g_0}{6}~\phi_{I}^{4}~)~~,~~\phi_{+(-)}= \frac{1}{\sqrt{2}}(\phi_1+(-) \phi_2)~.$$ \section{Finite Unified Model Based on $SU(5)$} As a realistic example of the reduction of couplings, we consider a Finite Unified Model based on $SU(5)$. From the classification of theories with vanishing one-loop $\beta$ function for the gauge coupling \cite{HPS}, one can see that using $SU(5)$ as gauge group there exist only two candidate models which can accommodate three fermion generations. These models contain the chiral supermultiplets ${\bf 5}~,~\overline{\bf 5}~,~{\bf 10}~, ~\overline{\bf 10}~,~{\bf 24}$ with the multiplicities $(6,9,4,1,0)$ and $(4,7,3,0,1)$, respectively. Only the second one contains a ${\bf 24}$-plet which can be used for spontaneous symmetry breaking (SSB) of $SU(5)$ down to $SU(3)\times SU(2) \times U(1)$. (For the first model one has to incorporate another mechanism, such as Wilson flux breaking, to achieve the desired SSB of $SU(5)$.) Therefore, we would like to concentrate only on the second model.
To simplify the situation, we neglect the intergenerational mixing among the lepton and quark supermultiplets and consider the following $SU(5)$ invariant cubic superpotential for the (second) model: \begin{eqnarray} W &=& \sum_{i=1}^{3}\,[~\frac{1}{2}g_{i}^{u} \,{\bf 10}_i {\bf 10}_i H_{i}+ \sqrt{2}g_{i}^{d}\,{\bf 10}_i \overline{\bf 5}_{i}\, \overline{H}_{i}~] \nonumber\\ & & +\sum_{\alpha=1}^{4}g_{\alpha}^{f}\,H_{\alpha}\, {\bf 24}\,\overline{H}_{\alpha}+ \frac{g^{\lambda}}{3}\,({\bf 24})^3~, \end{eqnarray} where the ${\bf 10}_{i}$'s and $\overline{\bf 5}_{i}$'s are the usual three generations, and the four $({\bf 5}+ \overline{\bf 5})$ Higgses are denoted by $H_{\alpha}~,~\overline{H}_{\alpha} $. The superpotential is not the most general one, but by virtue of the non-renormalization theorem, this does not contradict the philosophy of coupling unification by the reduction method. (An RG-invariant fine-tuning is a solution of the reduction equations\footnote{In the case at hand, however, one can find a discrete symmetry that can be imposed on the most general cubic superpotential to arrive at the non-intergenerational mixing \cite{kmz}.}.) Given the superpotential $W$, we can compute the $\beta$ functions of the model. We denote the gauge coupling by $g$ (with vanishing one-loop $\beta$ function), and our normalization of the $\beta$ functions is as usual, i.e., $d g_{i}/d \ln \mu ~=~ \beta^{(1)}_{i}/16 \pi^2+O(g^5)$, where $\mu$ is the renormalization scale.
We find: \begin{eqnarray} \beta^{(1)}_{g} &=& 0~,\nonumber\\ \beta^{u(1)}_{i} &=& \frac{1}{16\pi^2}\, [\,-\frac{96}{5}\,g^2+ 9\,(g_{i}^{u})^{2}+ \frac{24}{5}\,(g^{f}_{i})^{2}+ 4\,(g_{i}^{d})^{2} \,]\,g_{i}^{u}~,\nonumber\\ \beta^{d(1)}_{i} &=& \frac{1}{16\pi^2}\, [\,-\frac{84}{5}\,g^2+ 3\,(g_{i}^{u})^{2} +\frac{24}{5}\,(g^{f}_{i})^{2}+ 10\,(g_{i}^{d})^{2}\,]\,g_{i}^{d}~,\\ \beta^{\lambda(1)} &=& \frac{1}{16\pi^2}\, [\,-30\,g^2+\frac{63}{5}\,(g^{\lambda})^2+ 3\,\,\sum_{\alpha =1}^{4}(g_{ \alpha}^{f})^{2} \,]\,g^{\lambda}~,\nonumber\\ \beta^{f(1)}_{\alpha} &=& \frac{1}{16\pi^2}\, [\,-\frac{98}{5}\,g^2+3\,(g_{i}^{u})^{2}\delta_{i\alpha} +4\,(g_{i}^{d})^{2}\delta_{i\alpha} +\frac{48}{5}\,(g^{f}_{\alpha})^{2} +\sum_{\beta=1}^{4}(g_{\beta}^{f})^{2}+\frac{21}{5}\,(g^{\lambda})^{2} \,]\,g_{\alpha}^{f}~.\nonumber \end{eqnarray} We then regard the gauge coupling $g$ as the primary coupling and solve the reduction equations (11) with the power series ansatz. One finds that the power series, \begin{eqnarray} (g_{i}^{u})^2 &=&\frac{8}{5}g^2+\dots~, ~(g_{i}^{d})^2 =\frac{6}{5}g^2+\dots~,~ (g^{\lambda})^2=\frac{15}{7}g^2+\dots~,\nonumber\\ (g^{f}_{4})^2 &=& g^2~,~(g^{f}_{\alpha})^2=0+\dots~~(\alpha=1,2,3)~, \end{eqnarray} exists uniquely, where $\dots$ indicates higher order terms and all the other couplings have to vanish. As we have done in the previous section, we can easily verify that the higher order terms can be uniquely computed. Consequently, all the one-loop $\beta$ functions of the theory vanish. 
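As a quick cross-check (outside the paper's own notation), one can verify with exact rational arithmetic that the brackets of the one-loop $\beta$ functions above vanish identically when the lowest-order solution (22) is inserted; all squared couplings are expressed in units of $g^2$:

```python
from fractions import Fraction as F

# lowest-order reduction solution (22), squared couplings in units of g^2
gu2, gd2, glam2 = F(8, 5), F(6, 5), F(15, 7)
gf2 = [F(0), F(0), F(0), F(1)]   # (g^f_alpha)^2 for alpha = 1,...,4

# brackets of the one-loop beta functions (21); g^f_i = 0 for generations i = 1,2,3
beta_u = [-F(96, 5) + 9*gu2 + F(24, 5)*gf2[i] + 4*gd2 for i in range(3)]
beta_d = [-F(84, 5) + 3*gu2 + F(24, 5)*gf2[i] + 10*gd2 for i in range(3)]
beta_lam = -30 + F(63, 5)*glam2 + 3*sum(gf2)
# H_alpha couples to generation alpha only for alpha = 1,2,3 (delta_{i alpha})
beta_f = [-F(98, 5) + (3*gu2 + 4*gd2)*(a < 3) + F(48, 5)*gf2[a]
          + sum(gf2) + F(21, 5)*glam2 for a in range(4)]

assert all(b == 0 for b in beta_u + beta_d + beta_f) and beta_lam == 0
```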
Moreover, all the one-loop anomalous dimensions of the chiral supermultiplets, \begin{eqnarray} \gamma^{(1)}_{{\bf 10}i} &=& \frac{1}{16\pi^2}\, [\,-\frac{36}{5}\,g^2+ 3\,(g_{i}^{u})^{2}+ 2\,(g_{i}^{d})^{2} \,]~,\nonumber\\ \gamma^{(1)}_{\overline{{\bf 5}}i} &=& \frac{1}{16\pi^2}\, [\,-\frac{24}{5}\,g^2+ 4\,(g_{i}^{d})^{2}\,]~,\nonumber\\ \gamma^{(1)}_{H_{\alpha}} &=& \frac{1}{16\pi^2}\, [\,-\frac{24}{5}\,g^2+ 3\,(g_{i}^{u})^{2}\delta_{i\alpha}+ \frac{24}{5}(g_{\alpha}^{f})^2\,]~,\\ \gamma^{(1)}_{\overline{H}_{\alpha}} &=& \frac{1}{16\pi^2}\, [\,-\frac{24}{5}\,g^2+ 4\,(g_{i}^{d})^{2}\delta_{i\alpha}+ \frac{24}{5}(g_{\alpha}^{f})^2\,]~,\nonumber\\ \gamma^{(1)}_{{\bf 24}} &=& \frac{1}{16\pi^2}\, [\,-10\,g^2+ \sum_{\alpha=1}^{4}(g_{\alpha}^{f})^{2} +\frac{21}{5}\,(g^{\lambda})^{2}\,]~,\nonumber \end{eqnarray} also vanish in the reduced system. A very interesting result is that these conditions are necessary and sufficient for finiteness at the two-loop level \cite{PW}. A natural question is what happens at higher loops. Interestingly, there exists a powerful theorem \cite{LPS} which provides the necessary and sufficient conditions for finiteness to all loops. The theorem makes heavy use of the non-renormalization property of the supercurrent anomaly \cite{pisi}. In fact, the finiteness theorem can be formulated in terms of one-loop quantities; it states that for supersymmetric gauge theories the necessary and sufficient conditions for $\beta_{g}$ and the $\gamma$'s to vanish to all orders are \cite{LPS}: \newline (a) The validity of the one-loop finiteness conditions, i.e., $\beta_{g}^{(1)}=0$ and $\gamma^{(1)}=0$ for all matter multiplets. \newline (b) The reduction equations (11) admit a unique power series solution. \newline Since the solution (22) can be extended to a unique power series in $g$, the reduced theory (which has the single coupling $g$) has $\beta$ and $\gamma$ functions vanishing to all orders.
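The vanishing of the anomalous-dimension brackets on the solution (22) can be checked in the same exact-arithmetic fashion (a sketch; note that the gauge contributions $-36/5$, $-24/5$, $-24/5$ and $-10$, in units of $g^2$, are precisely the coefficients required for the cancellation):

```python
from fractions import Fraction as F

# reduction solution (22): squared couplings in units of g^2
gu2, gd2, glam2 = F(8, 5), F(6, 5), F(15, 7)
gf2 = [F(0), F(0), F(0), F(1)]   # only (g^f_4)^2 is non-vanishing at lowest order

gamma_10 = -F(36, 5) + 3*gu2 + 2*gd2
gamma_5bar = -F(24, 5) + 4*gd2
# delta_{i alpha}: H_alpha couples to generation alpha only for alpha = 1,2,3
gamma_H = [-F(24, 5) + 3*gu2*(a < 3) + F(24, 5)*gf2[a] for a in range(4)]
gamma_Hbar = [-F(24, 5) + 4*gd2*(a < 3) + F(24, 5)*gf2[a] for a in range(4)]
gamma_24 = -10 + sum(gf2) + F(21, 5)*glam2

assert gamma_10 == gamma_5bar == gamma_24 == 0
assert all(g == 0 for g in gamma_H + gamma_Hbar)
```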
In this way, Gauge-Yukawa Unification is achieved \footnote{There is an alternative way to find finite theories, which has been found in connection with duality in supersymmetric theories \cite{leigh}.}. In most of the previous studies of the present model \cite{model1,model}, however, the complete reduction of the Yukawa couplings, which is necessary for all-order finiteness, was ignored. They used the freedom offered by the degeneracy in the one- and two-loop approximations in order to make specific ans{\" a}tze that could lead to phenomenologically acceptable predictions. In the above model, we found a diagonal solution for the Yukawa couplings, with each family coupled to a different Higgs. However, we may use the fact that mass terms do not influence the RG functions in a certain class of renormalization schemes, and introduce appropriate mass terms that permit us to perform a rotation in the Higgs sector such that only one pair of Higgs doublets, coupled to the third family, remains light and acquires a non-vanishing VEV \cite{model}. Note that the effective coupling of the Higgs doublets to the first family after the rotation is very small, thereby avoiding a potential problem with the proton lifetime \cite{proton}. Thus, at low energies we effectively have the Minimal Supersymmetric Standard Model (MSSM) with only one pair of Higgs doublets, satisfying the boundary conditions at $M_{\rm GUT}$ \begin{eqnarray} g_{t}^{2}& = &\frac{8}{5} g^2+O(g^4)~,~ g_{b}^{2}=g_{\tau}^{2}=\frac{6}{5} g^2+O(g^4)~, \end{eqnarray} where $g_i$ ($i=t, b, \tau$) are the top, bottom and tau Yukawa couplings of the MSSM, and the other Yukawa couplings should be regarded as free. Adding soft breaking terms (which are supposed not to influence the $\beta$ functions beyond $M_{\rm GUT}$), we can obtain supersymmetry breaking. The conditions on the soft breaking terms that preserve one-loop finiteness have been given some time ago \cite{soft}.
Recently, the same problem has been addressed at higher orders \cite{jj}. It is an open problem whether there exists a suitable set of conditions on the soft terms for all-loop finiteness. \section{Predictions of Low Energy Parameters} Since the $SU(5)$ symmetry is spontaneously broken below $M_{\rm GUT}$, the finiteness conditions do not restrict the renormalization properties at low energies, and all that remains is the boundary condition on the gauge and Yukawa couplings at $M_{\rm GUT}$, i.e., eq. (23). We therefore examine the evolution of these couplings according to their renormalization group equations at two loops with this boundary condition. Below $M_{\rm GUT}$ their evolution is assumed to be governed by the MSSM. We further assume a unique threshold $M_{\rm SUSY}$ for all superpartners of the MSSM, so that below $M_{\rm SUSY}$ the SM is the correct effective theory. We recall that $\tan\beta$ is usually determined in the Higgs sector, which however strongly depends on the supersymmetry breaking terms. Here we avoid this by using the tau mass $M_{\tau}$ as input \footnote{This means that we partly fix the Higgs sector indirectly.}. That is, assuming that \begin{eqnarray} M_Z \ll M_{t} \ll M_{\rm SUSY}~, \end{eqnarray} we require the matching condition at $M_{\rm SUSY}$ \cite{barger}, \begin{eqnarray} \alpha_{t}^{\rm SM} &=&\alpha_{t}\,\sin^2 \beta~,~ \alpha_{b}^{\rm SM} ~ =~ \alpha_{b}\,\cos^2 \beta~, ~\alpha_{\tau}^{\rm SM} ~=~\alpha_{\tau}\,\cos^2 \beta~,\nonumber\\ \alpha_{\lambda}&=& \frac{1}{4}(\frac{3}{5}\alpha_{1} +\alpha_2)\,\cos^2 2\beta~, \end{eqnarray} to be satisfied \footnote{There are MSSM threshold corrections to this matching condition \cite{hall1,wright1}, which will be discussed later.}, where $\alpha_{i}^{\rm SM}~(i=t,b,\tau)$ are the SM Yukawa couplings and $\alpha_{\lambda}$ is the Higgs coupling. This is our definition of $\tan\beta$, and eq.
(25) fixes $\tan\beta$: given the set of input parameters \cite{pdg}, \begin{eqnarray} M_{\tau} &=&1.777 ~\mbox{GeV}~,~M_Z=91.188 ~\mbox{GeV}~, \end{eqnarray} with \cite{pokorski1} \begin{eqnarray} \alpha_{\rm EM}^{-1}(M_{Z})&=&127.9 +\frac{8}{9\pi}\,\log\frac{M_t}{M_Z} ~,\nonumber\\ \sin^{2} \theta_{\rm W}(M_{Z})&=&0.2319 -3.03\times 10^{-5}T-8.4\times 10^{-8}T^2~,\\ T &= &M_t /[\mbox{GeV}] -165~,\nonumber \end{eqnarray} the matching condition (25) and the GYU boundary condition (23) at $M_{\rm GUT}$ can be satisfied only for a specific value of $\tan\beta$. Here $M_{\tau}, M_t, M_Z$ are pole masses, and the couplings are defined in the $\overline{\mbox{MS}}$ scheme with six flavors. The translation from a Yukawa coupling to the corresponding mass follows from \begin{eqnarray} m_i&=&\frac{1}{\sqrt{2}}g_i(\mu)\,v(\mu)~,~i=t,b,\tau ~~ \mbox{with} ~~v(M_Z)=246.22~\mbox{GeV}~, \end{eqnarray} where the $m_i(\mu)$'s are the running masses satisfying the respective two-loop evolution equations. The pole masses can of course be calculated from the running ones. For the top mass, we use \cite{barger,hall1} \begin{eqnarray} M_{t} &=&m_{t}(M_t)\,[\,1+ \frac{4}{3}\frac{\alpha_3(M_t)}{\pi}+ 10.95\,(\frac{\alpha_3(M_t)}{\pi})^2+k_t \frac{\alpha_t(M_t)}{\pi}\,]~, \end{eqnarray} where $k_t \simeq -0.3$ for the range of parameters we are concerned with in this paper \cite{hall1}. Note that both sides of eq. (29) contain $M_t$, so that $M_t$ is defined only implicitly. Therefore, its determination requires an iteration method.
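In practice a simple fixed-point iteration of eq. (29) suffices. The following sketch illustrates this with a one-loop QCD running of $\alpha_3$ and a leading-log running top mass; the starting value $m_t(M_Z)$ and the value of $\alpha_t$ are illustrative assumptions, not the paper's full two-loop treatment:

```python
import math

MZ = 91.188

def alpha3(mu, alpha3_mz=0.119):
    # one-loop QCD running with six flavours: b0 = 11 - 2*6/3 = 7
    return alpha3_mz / (1.0 + alpha3_mz * 7.0 / (2.0 * math.pi) * math.log(mu / MZ))

def m_run(mu, m_mz):
    # leading-log running mass: m(mu) = m(MZ) * (alpha3(mu)/alpha3(MZ))^(4/b0)
    return m_mz * (alpha3(mu) / alpha3(MZ)) ** (4.0 / 7.0)

def pole_top_mass(m_mz, k_t=-0.3, alpha_t=0.01, tol=1e-10):
    # iterate eq. (29): M_t = m_t(M_t)[1 + 4 a3/(3 pi) + 10.95 (a3/pi)^2 + k_t a_t/pi]
    M = m_mz
    for _ in range(100):
        a = alpha3(M) / math.pi
        M_new = m_run(M, m_mz) * (1.0 + 4.0 * a / 3.0 + 10.95 * a**2
                                  + k_t * alpha_t / math.pi)
        if abs(M_new - M) < tol:
            break
        M = M_new
    return M
```

For $m_t(M_Z)=175$ GeV the iteration converges in a few steps to a pole mass a few GeV above the running mass at that scale.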
As for the tau and bottom masses, we assume that $m_{\tau}(\mu)$ and $m_b(\mu)$ for $\mu \leq M_Z$ satisfy the evolution equations of the $SU(3)_{\rm C}\times U(1)_{\rm EM}$ theory with five flavors and use \begin{eqnarray} M_{b}&=&m_b(M_b)\,[\,1+ \frac{4}{3}\frac{\alpha_{3(5{\rm f})}(M_b)}{\pi}+ 12.4\,(\frac{\alpha_{3(5{\rm f})}(M_b)}{\pi})^2\,]~,\nonumber\\ M_{\tau}&=&m_{\tau}(M_{\tau})\,[\,1+ \frac{\alpha_{\rm EM (5f)}(M_{\tau})}{\pi}\,]~, \end{eqnarray} where the experimental value of $m_b(M_b)$ is $(4.1-4.5)$ GeV \cite{pdg}. The five-flavor couplings $\alpha_{3(5{\rm f})}$ and $\alpha_{\rm EM (5f)}$ entering eq. (30) are related to $\alpha_{3}$ and $\alpha_{\rm EM}$ by \begin{eqnarray} \alpha_{3(5{\rm f})}^{-1}(M_Z) &= &\alpha_{3}^{-1}(M_Z) -\frac{1}{3\pi}\,\ln \frac{M_t}{M_Z} ~,\nonumber\\ \alpha_{\rm EM (5f)}^{-1}(M_Z) &= & \alpha_{\rm EM}^{-1}(M_Z)- \frac{8}{9\pi}\,\ln \frac{M_t}{M_Z}~. \end{eqnarray} Using the input values given in eqs. (26) and (27), we find \begin{eqnarray} m_{\tau}(M_{\tau})&=&1.771~\mbox{GeV}~, m_{\tau}(M_{Z})=1.746~\mbox{GeV}~, \alpha_{\rm EM (5f)}^{-1}(M_{\tau})=133.7~, \end{eqnarray} and from eq. (28) we obtain \begin{eqnarray} \alpha_{\tau}^{\rm SM}(M_Z)&=&\frac{g_{\tau}^{2}}{4\pi} =8.005\times 10^{-6}~, \end{eqnarray} which we use as an input parameter instead of $M_{\tau}$. The matching condition (25) suffers from threshold corrections coming from the MSSM superpartners: \begin{eqnarray} \alpha_{i}^{\rm SM} \to \alpha_{i}^{\rm SM}(1+\Delta_{i}^{\rm SUSY})~,~i=1,2,\dots,\tau~. \end{eqnarray} It was shown that these threshold effects on the gauge couplings can be effectively parametrized by just one energy scale \cite{langacker1}. Accordingly, we can identify our $M_{\rm SUSY}$ with that defined in ref. \cite{langacker1}. This ensures that there are no further one-loop threshold corrections to $\alpha_3(M_Z)$ when we calculate it as a function of $\alpha_{\rm EM}(M_Z)$ and $\sin^2\theta_W(M_Z)$.
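The decoupling relations (31) and the QED correction in (30) are easy to evaluate numerically; the following sketch (with $M_t$ and $\alpha_3(M_Z)$ as illustrative inputs) reproduces the size of the quoted numbers:

```python
import math

MZ, Mt = 91.188, 175.0   # illustrative top pole mass

# eq. (27): effective alpha_EM at MZ as a function of the top mass
alpha_em_inv = 127.9 + 8.0 / (9.0 * math.pi) * math.log(Mt / MZ)

# eq. (31): matching to the five-flavour couplings at MZ
alpha3_inv_5f = 1.0 / 0.120 - 1.0 / (3.0 * math.pi) * math.log(Mt / MZ)
alpha_em_inv_5f = alpha_em_inv - 8.0 / (9.0 * math.pi) * math.log(Mt / MZ)

# eq. (30) inverted: running tau mass from the pole mass, using the quoted
# value alpha_EM(5f)^{-1}(M_tau) = 133.7
m_tau = 1.777 / (1.0 + 1.0 / (133.7 * math.pi))
```

Note that $\alpha_{\rm EM (5f)}^{-1}(M_Z)$ comes out as $127.9$ again, since the two logarithms cancel by construction, and `m_tau` lands within a few MeV of the quoted $1.771$ GeV (the small difference stems from the running between $M_Z$ and $M_\tau$, treated here only crudely).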
The same scale $M_{\rm SUSY}$ does not describe the threshold corrections to the Yukawa couplings, which could cause large corrections to the fermion mass predictions \cite{hall1,wright1} \footnote{It is possible to compute the MSSM correction to $M_t$ directly, i.e., without constructing an effective theory below $M_{\rm SUSY}$. In this approach, too, large corrections have been reported \cite{polonsky1}. In the present paper, evidently, we are following the effective theory approach, as e.g. refs. \cite{hall1,wright1}.}. For $m_b$, for instance, the correction can be as large as 50\% for very large values of $\tan\beta$, especially in models with radiative gauge symmetry breaking and with supersymmetry softly broken by universal breaking terms. As we will see later, the $SU(5)$-FUT model predicts (with these corrections suppressed) values for the bottom quark mass that are rather close to the experimentally allowed region, so that there is room only for small corrections. Consequently, if we want to break the $SU(2) \times U(1)$ gauge symmetry radiatively, the model favors non-universal soft breaking terms \cite{borzumati1,lec4}. It is interesting to note that the consistency of the finiteness hypothesis is closely related to the fine structure of supersymmetry breaking and also to the Higgs sector, because these superpartner corrections to $m_b$ can be kept small for an appropriate superpartner spectrum characterized by very heavy squarks and/or small $\mu_H$, the parameter describing the mixing of the two Higgs doublets in the superpotential \footnote{The solution with small $\mu_H$ is favored by the experimental data and cosmological constraints \cite{borzumati1}. The sign of this correction is determined by the relative sign of $\mu_H$ and the gluino mass parameter, $M_3$, and is correlated with the chargino exchange contribution to the $b \to s \gamma$ decay \cite{hall1}.
The latter has the same sign as the Standard Model and the charged Higgs contributions when the supersymmetric corrections to $m_b$ are negative.}. To get an idea of the magnitude of the correction, we consider the case that all the superpartners have the same mass $M_{\rm SUSY}=500$ GeV with $M_{\rm SUSY} \gg \mu_H$ and $\tan\beta \mathrel{\mathpalette\@versim>} 50$. Using the $\Delta$'s given in ref. \cite{wright1}, we find that the MSSM correction to the $M_t$ prediction is $\sim -1$ \% in this case. Compared with the results of \cite{wright1,polonsky1}, this may appear to be underestimated. Note, however, that there is a nontrivial interplay among the corrections to the $M_t$ and $M_b$ predictions for a given GYU boundary condition at $M_{\rm GUT}$ and fixed pole tau mass, which has not been taken into account in refs. \cite{wright1,polonsky1}. In the following discussion, therefore, we regard the MSSM threshold correction to the $M_t$ prediction as unknown and denote it by \begin{eqnarray} \delta^{\rm MSSM} M_t~. \end{eqnarray} In table 1 we present the predictions of $M_t$ and $m_b(M_b)$ for various values of $M_{\rm SUSY}$. \vspace{1cm} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $M_{\rm SUSY}$ [GeV] &$\alpha_{3}(M_Z)$ & $\tan \beta$ & $M_{\rm GUT}$ [GeV] & $m_b (M_{b}) $ [GeV]& $M_{t}$ [GeV] \\ \hline $300$ & $0.123 $ &$54.2 $ & $2.08\times 10^{16}$ & $4.54$ & 183.5\\ \hline $500$ & $0.122 $ &$54.3 $ & $1.77\times 10^{16}$ & $4.54$ & 184.0 \\ \hline $10^3$ & $0.120 $ &$54.4 $ & $1.42\times 10^{16}$ & $4.54$ & 184.4 \\ \hline \end{tabular} \end{center} \begin{center} {\bf Table 1}. The predictions for different $M_{\rm SUSY}$ for FUT. \end{center} \noindent As we can see from the table, only negative MSSM corrections of at most $\sim 10$ \% to $m_b(M_b)$ are allowed ($m_{b}^{\rm exp}(M_b)= (4.1-4.5)$ GeV), implying that FUT favors non-universal soft breaking terms, as announced.
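The large values of $\tan\beta$ in table 1 can be understood directly from the tau matching condition in (25): with $\alpha_{\tau}^{\rm SM}$ as in (33) and an MSSM tau coupling $\alpha_{\tau}(M_{\rm SUSY})$ of a few times $10^{-2}$ (an illustrative number, not a fitted one), $\cos^2\beta$ must be tiny:

```python
import math

def tan_beta(alpha_tau_sm, alpha_tau_mssm):
    # eq. (25): alpha_tau^SM = alpha_tau * cos^2(beta), so cos^2(beta) is the ratio
    c2 = alpha_tau_sm / alpha_tau_mssm
    assert 0.0 < c2 < 1.0
    return math.sqrt(1.0 / c2 - 1.0)

# alpha_tau^SM from eq. (33); the MSSM value at M_SUSY is an assumed illustration
tb = tan_beta(8.005e-6, 2.4e-2)   # roughly 55, the right ballpark for table 1
```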
The predicted $M_t$ values are well below the infrared fixed-point value \cite{hill1}, for instance $194$ GeV for $M_{\rm SUSY}=500$ GeV, so that the $M_t$ prediction must be sensitive to changes of the boundary condition (23). We recall that if one includes the threshold effects of superheavy particles \cite{threshold,lec5}, the GUT scale $M_{\rm GUT}$ at which $\alpha_1$ and $\alpha_2$ are supposed to meet is related to the mass of the superheavy $SU(3)_C$-triplet Higgs supermultiplets contained in $H_{\alpha}$ and $\overline{H}_{\alpha}$. These effects therefore influence the GYU boundary condition (23). The structure of the threshold effects in FUT is involved, but they are not arbitrary and are probably determinable to a certain extent, because the mixing of the superheavy Higgses is strongly dictated by the fermion mass matrix of the MSSM. To bring these threshold effects under control is challenging. Here we assume that the magnitude of these effects is $\sim \pm 4$ GeV in $M_t$ (estimated by comparison with the minimal GYU model based on $SU(5)$ \cite{kmoz2}). We conclude \cite{kmoz2} that \begin{eqnarray} M_t &=&(183+\delta^{\rm MSSM} M_t\pm 5) ~~\mbox{GeV}~, \end{eqnarray} where the finite corrections coming from the conversion from the dimensional reduction scheme to the ordinary $\overline{\mbox{MS}}$ scheme in the gauge sector \cite{anton} are included, and those in the Yukawa sector are included as an uncertainty of $\sim\pm 1$ GeV. The MSSM threshold correction $\delta^{\rm MSSM} M_t$ has been discussed above. Comparing the $M_t$ prediction with the experimental value $M_t=(175\pm 9)$ GeV \cite{top}, we see that it is consistent with the data. \section{Conclusion} As a natural extension of the unification of gauge couplings provided by all GUTs and the unification of Yukawa couplings, we have introduced the idea of Gauge-Yukawa Unification.
GYU is a functional relationship among the gauge and Yukawa couplings provided by some principle. In our studies GYU has been achieved by applying the principles of reduction of couplings and of finiteness. The consequence of GYU is that at lowest order in perturbation theory the gauge and Yukawa couplings above $M_{\rm GUT}$ are related as \begin{eqnarray} g_i & = &\kappa_i \,g_{\rm GUT}~,~i=1,2,3,e,\cdots,\tau,b,t~, \end{eqnarray} where $g_i~(i=1,\cdots,t)$ stand for the gauge and Yukawa couplings, $g_{\rm GUT}$ is the unified coupling, and we have neglected the Cabibbo-Kobayashi-Maskawa mixing of the quarks. Eq. (37) thus provides a boundary condition on the renormalization group evolution of the effective theory below $M_{\rm GUT}$, which we have assumed to be the MSSM. As we have demonstrated in a number of publications \cite{kmz,mondragon2,kmtz,kmtz2}, especially in \cite{kmoz2}, there are various supersymmetric GUTs with GYU in the third generation that can predict the bottom and top quark masses in accordance with the experimental data. This means that the top-bottom hierarchy could be explained in these models, in a way similar to how the hierarchy of the gauge couplings of the SM can be explained if one assumes the existence of a unifying gauge symmetry at $M_{\rm GUT}$. It is clear that the GYU scenario is the most predictive scheme as far as the mass of the top quark is concerned. It may be worth recalling the predictions for $m_t$ of ordinary GUTs, in particular of supersymmetric $SU(5)$ and $SO(10)$. The MSSM with $SU(5)$ Yukawa boundary unification allows $m_t$ to be anywhere in the interval 100--200 GeV \cite{barger1} for varying $\tan \beta$, which is now a free parameter. Similarly, the MSSM with $SO(10)$ Yukawa boundary conditions, {\em i.e.} $t$-$b$-$\tau$ Yukawa unification, gives $m_t$ in the interval 160--200 GeV \cite{barbero,hall1,wright1,polonsky2}.
Clearly, to exclude or verify different GYU models, the experimental as well as the theoretical uncertainties have to be further reduced. One of the largest theoretical uncertainties for FUT, as we have seen, results from the not-yet-calculated threshold effects of the superheavy particles. Since the structure of the superheavy particles in FUT is basically fixed, it should be possible to bring these threshold effects under control, which would reduce the uncertainty of the $M_t$ prediction ($5$ GeV) to $\sim 2$ GeV. We have been regarding $\delta^{\rm MSSM} M_t$ as unknown because we have insufficient information on the superpartner spectra. Recently, however, it has been found that the principle of finiteness \cite{jj} and also that of reduction of couplings \cite{future} can be applied to dimensionful parameters, e.g., the soft breaking parameters, too. As a result, it becomes possible to predict the superpartner spectra to a certain extent and then to calculate $\delta^{\rm MSSM} M_t$. It will be very interesting to find out in the coming years, as the experimental accuracy of $m_t$ increases, whether nature is kind enough to verify our conjectured Gauge-Yukawa Unification. \noindent{\bf Acknowledgements} \vspace {0.5 cm} J.K. would like to thank the organizers for their kind hospitality.
cond-mat/9606105
\acknowledgments This work was supported in part by grants DMR-9122645 and DMR-8819885. We are grateful to Phil Nelson for a careful reading of the manuscript. TCL and FCM are also grateful to Exxon Research and Engineering, where a portion of this work was carried out.
1509.06582
\section{Introduction} This work is concerned with optimization problems of the form \begin{equation} \label{eq:prob_convex} \min_{u\in X} G(u) + F(K(u)) \end{equation} for proper, convex and lower semicontinuous functionals $G: X\to \overline{\mathbb{R}}:=\mathbb{R}\cup \{\infty\}$ and $F: Y\to\overline{\mathbb{R}}$ and a Fréchet-differentiable operator $K:X\to Y$ between two Hilbert spaces $X$ and $Y$, in particular for integral functionals on $L^2(\Omega)$ of the form $F(u) = \int_\Omega f(u(x))\,dx$ for a convex integrand $f:\mathbb{R}\to\mathbb{R}$. Under suitable regularity assumptions, a minimizer $\bar u\in X$ satisfies \begin{equation} \label{eq:oc} \left\{\begin{aligned} -\bar p &\in \grad{K}(\bar u)^*(\partial F(K(\bar u))),\\ \bar p &\in \partial G(\bar u), \end{aligned}\right. \end{equation} for some $\bar p\in X^*$. Here, $\partial F$ denotes the convex subdifferential of $F$ and $\grad{K}(u)^*$ the adjoint of the Fréchet derivative of $K$ at $u$. Using convex duality, one can characterize the primal-dual pair $(\bar u,\bar p)$ via the saddle point ($\bar u,\bar v$) of the Lagrangian \begin{equation} \label{eq:lagrangian} L(u,v) := G(u) + \inner{K(u),v} - F^*(v), \end{equation} where $F^*:Y^*\to\overline{\mathbb{R}}$ denotes the Fenchel conjugate of $F$; in this case, $\bar p$ and $\bar v$ are related via $\bar p = \grad K(\bar u)^* \bar v$. To fix ideas, a prototypical example is the $L^1$ fitting problem~\cite{ClasonJin:2011} \begin{equation} \label{eq:l1fit_problem} \min_u \norm{S(u) - y^\noise}_{L^1} + \frac{\alpha}2 \norm{u}_{L^2}^2. \end{equation} Here, $G(u) = \frac{\alpha}2\norm{u}_{L^2}^2$, $K(u)=S(u) - y^\noise$, where $S$ maps $u$ to the solution $y$ of $-\Delta y+uy=f$ for given $f$, and $F(y) = \norm{y}_{L^1}$. This formulation is appropriate if the given data $y^\delta$ is corrupted by impulsive noise. 
Here, $F^*(v) = \iota_{\{|v(x)|\leq 1\}}(v)$, where $\iota_A$ denotes the indicator function of the set $A$ in the sense of convex analysis \cite{Hiriart-Urruty:1993}. A second example is the Morozov (constrained) formulation of inverse problems, appropriate for data subject to uniformly distributed noise \cite{Clason:2012}, \begin{equation} \label{eq:linffit_problem} \min_u \frac{\alpha}2 \norm{u}_{L^2}^2 \quad\text{s.\,t.}\quad |S(u)(x) -y^\noise(x) |\leq \delta \quad\text{a.\,e. in } \Omega. \end{equation} Here, $G$ and $K$ are as before, while $F(y) = \iota_{\{|y(x)|\leq \delta\}}(y)$ and hence $F^*(v)=\delta \norm{v}_{L^1}$. A similar problem arises in optimal control of partial differential equations with state constraints. As we show in \cref{sec:saddle-vi}, critical points of \eqref{eq:prob_convex} may be characterized concisely through the variational inclusion \begin{equation} 0 \in R_0(u, v), \end{equation} where we define the set-valued mapping $R_0: X \times Y \rightrightarrows X \times Y$ by \begin{equation} R_0(u, v) \coloneqq \begin{pmatrix} \partial G(u) + \grad K(u)^* v \\ \partial F^*(v) - K(u) \end{pmatrix}. \end{equation} Our goal then is to study the stability of solutions to \eqref{eq:prob_convex} resp.~\eqref{eq:oc} through a set-valued analysis of this mapping. The main tool is a form of Lipschitz continuity of $\inv R_0$ known as the \emph{Aubin property} (or \emph{pseudo-Lipschitz} or \emph{Lipschitz-like} property). This is also called the \emph{metric regularity} of $R_0$; see, e.g., \cite{Rockafellar:1998,Mordukhovich:2006,Mordukhovich:1993,Dontchev:2014,Ioffe:2015}.
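For later reference, both conjugates follow from a pointwise computation. For $f(t)=|t|$,
\begin{equation*}
  f^*(s) = \sup_{t\in\mathbb{R}}\, \bigl(st - |t|\bigr)
  = \begin{cases} 0 & \text{if } |s|\leq 1,\\ \infty & \text{if } |s|>1, \end{cases}
\end{equation*}
i.e., $f^* = \iota_{[-1,1]}$, while for $f=\iota_{[-\delta,\delta]}$,
\begin{equation*}
  f^*(s) = \sup_{|t|\leq\delta} st = \delta\,|s|,
\end{equation*}
which yields the two expressions for $F^*$ given above.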
In contrast, second-order stability analysis for PDE-constrained optimization problems is usually focused on the stability of the optimal values and of minimizers (as opposed to saddle points) and is based on sufficient second-order conditions involving directional derivatives, often in stronger topologies; nonsmoothness typically arises from pointwise constraints or, more recently, sparsity penalties. We refer to \cite{BonnansShapiro,Casas:2015} as well as the literature cited therein. Since the problem \eqref{eq:prob_convex} is nonsmooth, the first-order conditions involve proper subdifferential inclusions, and hence the second-order analysis required for showing metric regularity involves set-valued derivatives. Considerable effort has been expended on obtaining explicit representations of these derivatives, although up to now primarily in the finite-dimensional setting, e.g., in \cite{Mordukhovich:2001,Henrion:2002,Henrion:2013a,Henrion:2014,Outrata:2015a}, with a focus on normal cones arising from inequality constraints. The difficulty in the infinite-dimensional setting stems from the fact that there exists a variety of more or less abstract definitions of such objects, see, e.g., \cite{Mordukhovich:2006}, although more explicit characterizations can be obtained in some concrete situations \cite{Henrion:2010}. Here, by exploiting the fact that the nonsmooth functionals are defined pointwise via convex integrands, we are able to explicitly compute regular coderivatives pointwise using the finite-dimensional theory from \cite{Rockafellar:1998}; see also~\cite{Mordukhovich:2006} for further developments on their calculus. One of the main contributions of this work is therefore to further narrow the gap between the concrete finite-dimensional and the abstract infinite-dimensional settings.
Besides being of inherent interest, e.g., for showing stability of the solution of the inverse problem with respect to $\delta$, metric regularity is also relevant to the convergence of optimization methods. In the context of the saddle point problem for the Lagrangian \eqref{eq:lagrangian}, it is required for the nonlinear primal-dual hybrid gradient method of \cite{Valkonen:2008}. More widely, through the equivalence \cite{Bolte:2010} of the Aubin property to the recently prominent Kurdyka-{\polishL}ojasiewicz property \cite{Kurdyka:1998,Lojasiewicz:1970,Lojasiewicz:1963}, metric regularity is relevant to the convergence of a wide range of descent methods \cite{Bolte:2013}. It can also be used to directly characterize the convergence of certain basic optimization methods \cite{Bolte:2010,Klatte:2002,Klatte:2009}. Metric regularity is also closely related to the concept of tilt stability, mainly studied in finite dimensions, see, e.g., \cite{Rockafellar:1998b,Mordukhovich:2012,Drusvyatskiy:2013,Eberhard:2012,Lewis:2013}, but recently also in infinite dimensions \cite{Mordukhovich:2013,Outrata:2015a}. An extended concept incorporating tilt stability is that of full stability \cite{Levy:2000,Mordukhovich:2014}. We also note that when the non-linear saddle-point problem can be written as the minimization of a difference of convex functions -- as any $C^2$ objective can \cite{Tuy:1995} -- detailed characterizations exist in the finite-dimensional case of local minima \cite{Valkonen:2010} and of sensitivity \cite{Valkonen:2008}. Moreover, a set-valued analysis of the solvability of such programs with further symmetric cone structure is performed in \cite{Valkonen:2013}. In certain cases, with a finite-dimensional control $u$ in an otherwise infinite-dimensional problem, it is also possible to do away with the regularizer $G$ \cite{DeLosReyes:2015a}. This work is organised as follows.
In \cref{sec:pointwise_deriv}, we derive pointwise characterizations of second-order subdifferentials or generalized Hessians of integral functionals on $L^2$ and give examples for several functionals commonly occurring in variational methods for inverse problems, image processing, and PDE-constrained optimization. These results are used in \cref{sec:stability_vi} to give an explicit form of the Mordukhovich criterion for set-valued mappings in Hilbert spaces, in particular for those arising from subdifferentials of the integral functionals considered in the preceding section. \cref{sec:stability_sp} further specializes this to the case of set-valued mappings arising from the first-order optimality conditions \eqref{eq:oc} and gives sufficient conditions for several stability properties such as stability with respect to perturbation of the data. Finally, \cref{sec:stability_paramid} discusses the satisfiability of these conditions in the specific case of the model parameter identification problems \eqref{eq:l1fit_problem} and~\eqref{eq:linffit_problem}, where it will turn out that stability can only be guaranteed after either introducing a Moreau--Yosida regularization or a projection to a finite-dimensional subspace in~$F$. \section{Derivatives and coderivatives in \texorpdfstring{$\scriptstyle L^2(\Omega)$}{L²(Ω)}}\label{sec:pointwise_deriv} Unfortunately, we cannot, as in \cite{Valkonen:2014}, directly use the clean finite-dimensional theory from \cite{Rockafellar:1998} to show the Aubin property of $\inv R_0$ through the Mordukhovich criterion \cite{Mordukhovich:1992}; we have to delve into the various complications of the infinite-dimensional setting as presented in \cite{Mordukhovich:2006}. The first of these is the multitude of different definitions of set-valued generalized derivatives.
Luckily, however, as it will turn out, because of the \emph{pointwise nature} of the non-smooth functionals whose second derivatives we require, we will be able to compute the pointwise differentials using the finite-dimensional theory, and limit ourselves to the regular coderivative in infinite dimensions. Although the results of this section hold for integral functionals on $L^p$ for any $1\leq p<\infty$, we restrict the presentation to the Hilbert space $L^2$ for simplicity (and since we will make use of the Hilbert space structure of the saddle-point problem \eqref{eq:oc} later anyway). \subsection{Set-valued mappings and coderivatives} We first collect some notations and definitions for set-valued mappings in Hilbert spaces, following \cite{Rockafellar:1998,Mordukhovich:2006} and simplifying the setting of the latter to Hilbert spaces. The symbols $X, Y, Q$, and $W$ generally stand for (infinite-dimensional) Hilbert spaces, which we identify throughout with their duals via the Riesz isomorphism. For $x \in X$ and $r>0$, we denote by $B(x, r)$ the open ball of radius $r$ centered at $x$. The closure of a set $A$ is denoted by $\closure A$. \begin{definition} Let $U \subset X$ for $X$ a Hilbert space. Then we define the set of \emph{Fréchet (or regular) normals} to $U$ at $u\in U$ by \[ \widehat N(u; U) \coloneqq \left\{ z \in X \,\middle|\, \limsup_{U \ni \alt{u} \to u}\frac{\iprod{z}{\alt{u}-u}}{\norm{\alt{u}-u}} \le 0 \right\} \] and the set of \emph{tangent vectors} by \begin{equation} \label{eq:tangent} T(u; U) \coloneqq \left\{ z \in X \,\middle|\, \text{there exist } \tau^i \searrow 0 \text{ and } u^i \in U \text{ such that } z=\lim_{i \to \infty} \frac{u^i-u}{\tau^i} \right\}. \end{equation} For a convex set $U$, these coincide with the usual normal and tangent cones of convex analysis. \end{definition} For our general results, we will need to impose some geometric regularity assumptions.
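As a toy finite-dimensional illustration (outside the text's Hilbert-space setting), consider $U = \mathbb{R}^2_-$ at $u=0$: since $U$ is convex, the regular normals are exactly the classical normal cone $\mathbb{R}^2_+$, and the limsup condition reduces to $\iprod{z}{u'-u} \le 0$ for all $u' \in U$. A quick sampled check:

```python
# U = R_-^2 (convex), base point u = 0.  For convex U the Frechet-normal
# condition  limsup <z, u'-u>/|u'-u| <= 0  reduces to  <z, u'> <= 0 for all u' in U.
U_samples = [(-a / 5.0, -b / 5.0) for a in range(6) for b in range(6)]

def is_normal(z):
    return all(z[0] * u[0] + z[1] * u[1] <= 1e-12 for u in U_samples)

assert is_normal((1.0, 2.0))        # z in R_+^2 is a regular normal at 0
assert not is_normal((1.0, -1.0))   # fails, e.g., for u' = (0, -1) in U
```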
\begin{definition} \label{def:geomderiv} We say that a tangent vector $z \in T(u; U)$ is \emph{derivable} if there exists an $\varepsilon>0$ and a curve $\xi: [0, \varepsilon] \to U$ such that \[ \xi_+'(0) \coloneqq \lim_{t \searrow 0} \frac{\xi(t)-\xi(0)}{t} \] exists with $\xi_+'(0)=z$ and $\xi(0)=u$. We say that $U$ is \emph{geometrically derivable} if for every $u \in U$, every $z \in T(u; U)$ is derivable. \end{definition} It is easy to see that (cf.~\cite[Prop.~6.2]{Rockafellar:1998}) $U$ is geometrically derivable if and only if $T(u; U)$ for each $u \in U$ is defined by a full limit, i.e., we replace in \eqref{eq:tangent} the existence of $\tau^i \searrow 0$ by the requirement of the existence of $u^i$ for any sequence $\tau^i \searrow 0$. For any cone $\DerivCone \subset X$, we also define the \emph{polar cone} \begin{equation} \label{eq:polar} \polar \DerivCone \coloneqq \{ z \in X \mid \iprod{z}{z'} \le 0 \text{ for all } z' \in \DerivCone\}. \end{equation} We use the notation $R: Q \rightrightarrows W$ to denote a set-valued mapping $R$ from $Q$ to $W$; i.e., for every $q \in Q$ holds $R(q) \subset W$. For $R: Q \rightrightarrows W$, we define the domain $\dom R \coloneqq \{q\in Q\mid R(q)\neq \emptyset\}$ and the graph $\graph R \coloneqq \{(q,w)\in Q\times W\mid w\in R(q)\}$. The regular coderivatives of such maps are defined graphically with the help of the normal cones. \begin{definition} \label{def:frechetcod} Let $Q$ and $W$ be Hilbert spaces, and $R: Q \rightrightarrows W$ with $\dom R \ne \emptyset$. We then define the \emph{regular coderivative} $\frechetCod R(q | w): W \rightrightarrows Q$ of $R$ at $q\in Q$ for $w\in W$ as \begin{equation} \label{eq:frechetcod} \frechetCod R(q | w)(\dir{w}) \coloneqq \left\{ \dir{q} \in Q \mid (\dir{q}, -\dir{w}) \in \widehat N((q, w); \graph R) \right\}.
\end{equation} We also define the \emph{graphical derivative} $DR(q|w): Q \rightrightarrows W$ by \begin{equation} \label{eq:graphderiv-0} D R(q | w)(\dir{q}) \coloneqq \left\{ \dir{w} \in W \mid (\dir{q}, \dir{w}) \in T((q, w); \graph R) \right\}. \end{equation} \end{definition} The graphical derivative may also be written as \cite{Mordukhovich:2006,Rockafellar:1998} \begin{equation} \label{eq:graphderiv} DR(q|w)(\dir{q}) = \limsup_{t \searrow 0,\, \dir{\alt{q}} \to \dir{q}} \frac{R(q+t \dir{\alt{q}})-w}{t}. \end{equation} Here $\limsup_{\alt{t} \to t} A_{\alt{t}}$ stands for the \emph{outer limit} of a family of sets $\{A_{\alt{t}} \subset W\}_{\alt{t} \in T}$ over an index set $T$, defined as \[ \limsup_{\alt{t} \to t} A_{\alt{t}} \coloneqq \bigl\{w \in W \bigm| \text{there exist } t^i \in T \text{ and } w^i \in A_{t^i} \text{ s.t. } t^i \to t\text{ and } w^i \to w\bigr\}. \] Observe that $DR(q|w): Q \rightrightarrows W$ whereas $\frechetCod R(q|w): W \rightrightarrows Q$. Indeed, if $R(q)=Aq$ for a linear operator $A$ between Hilbert spaces, then for $w=Aq$ holds \[ DR(q|w) = A \qquad\text{and}\qquad \frechetCod R(q|w)=A^*. \] The former is immediate from \eqref{eq:graphderiv} (see also, e.g., \cite[Ex.~8.34]{Rockafellar:1998}), while the latter is contained in \cite[Cor.~1.39]{Mordukhovich:2006}. We say that $\graph R$ is \emph{locally closed} at $(q,w)$ if there exists a neighborhood $U \subset Q \times W$ of $(q,w)$ such that $\graph R \isect \closure U$ is closed. For any convex lower semicontinuous function $f: Q\to\overline{\mathbb{R}}$, the graph of the subdifferential $\partial f$, considered as a set-valued mapping, is closed. This is an immediate consequence of the definition of the convex subdifferential and the lower semicontinuity of $f$. Finally, $R$ is called \emph{proto-differentiable} if $\graph R$ is geometrically derivable.
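To illustrate these notions for a genuinely set-valued mapping, consider $R = \partial f$ for $f(q) = \abs{q}$ on $\mathbb{R}$, so that $\graph R = \bigl((-\infty,0) \times \{-1\}\bigr) \cup \bigl(\{0\} \times [-1,1]\bigr) \cup \bigl((0,\infty) \times \{1\}\bigr)$. Near a point $(0, w)$ with $\abs{w} < 1$, the graph coincides with $\{0\} \times [-1, 1]$, whence $T((0,w); \graph R) = \{0\} \times \mathbb{R}$ and $\widehat N((0,w); \graph R) = \mathbb{R} \times \{0\}$. Consequently \[ DR(0|w)(\dir{q}) = \begin{cases} \mathbb{R}, & \dir{q} = 0, \\ \emptyset, & \text{otherwise}, \end{cases} \qquad\text{and}\qquad \frechetCod R(0|w)(\dir{w}) = \begin{cases} \mathbb{R}, & \dir{w} = 0, \\ \emptyset, & \text{otherwise}. \end{cases} \] Since the graph is locally a product of convex sets here, it is geometrically derivable at this point, i.e., $R$ is proto-differentiable there.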
\subsection{Second-order derivatives of pointwise functionals} \label{sec:second-deriv} Let $X=L^2(\Omega; \mathbb{R}^m)$ for an open domain $\Omega \subset \mathbb{R}^n$ and $G:X\to\overline{\mathbb{R}}$ be given by \begin{equation} \label{eq:g-pointwise-integral} G(u)=\int_\Omega g(x, u(x)) \,d x \end{equation} for some integrand $g: \Omega \times \mathbb{R}^m \to (-\infty, \infty]$. Here we assume that \begin{enumerate}[label=(\roman*)] \item $g$ is normal, i.e., the epigraphical mapping $x\mapsto \epi g(x,\cdot)\subset \mathbb{R}^m\times \mathbb{R}$ is closed-valued and measurable, \item $g$ is proper and convex, i.e., the mapping $z\mapsto g(x,z)$ is proper and convex for each fixed $x\in\Omega$. \item $\partial g$ is \emph{pointwise a.\,e.~proto-differentiable}, i.e., the mapping $z\mapsto \partial_z g(x,z)$ is proto-differentiable for a.\,e. $x\in\Omega$. \end{enumerate} We call an integrand satisfying (i--iii) \emph{regular}. Note that (i) already implies that the mapping $z\mapsto g(x,z)$ is lower semicontinuous for each fixed $x\in\Omega$ and that $g(x,u(x))$ is measurable for each $u\in X$ \cite[Prop.~14.28]{Rockafellar:1998}. Examples of normal integrands are finite-valued Carathéodory functions \cite[Ex.~14.29]{Rockafellar:1998} and indicator functions of a closed-valued Borel-measurable mapping $C: \Omega \rightrightarrows \mathbb{R}^m$ \cite[Ex.~14.32]{Rockafellar:1998}. For a normal integrand, \eqref{eq:g-pointwise-integral} is well-defined, and $G(u) <\infty$ if and only if $u(x)\in\dom g(x,\cdot)$ almost everywhere \cite[Prop.~14.58]{Rockafellar:1998}. Proto-differentiability is more restrictive but holds for a large class of practically relevant examples. 
In particular, under the present assumptions that $g(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)$ is a proper, convex, lower semicontinuous function on a finite-dimensional domain, (iii) is equivalent to $g(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)$ being \emph{twice epi-differentiable}; see \cite[Def.~13.6]{Rockafellar:1998} for the (technical) definition as well as \cite[Prop.~8.41, Ex.~13.30, Thm.~13.40]{Rockafellar:1998}. It is therefore satisfied, e.g., for the maximum of a finite number of $C^2$ functions \cite[Ex.~13.16]{Rockafellar:1998} and for proper, convex and piecewise linear-quadratic functions \cite[Prop.~13.9]{Rockafellar:1998}. More general ways to verify the proto-differentiability of $\partial g(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)$ even without convexity include the concepts of \emph{full amenability} \cite[Def.~10.23 \& Cor.~13.41]{Rockafellar:1998} and \emph{prox-regularity} \cite[Def.~13.27 \& Thm.~13.40]{Rockafellar:1998}. Since $X=L^2(\Omega;\mathbb{R}^m)$ is decomposable, the existence of at least one $u_0\in \dom G$ suffices to compute pointwise the Fenchel conjugate \begin{equation} \label{eq:g-pointwise-conjugate} G^*(u) = \int_\Omega g^*(x, u(x)) \,d x \end{equation} and the convex subdifferential \begin{equation} \label{eq:g-pointwise-subdiff} \partial G(u) = \{ \xi \in X \mid \xi(x) \in \partial g(x, u(x)) \text{ for a.\,e. } x \in \Omega \}, \end{equation} where conjugate and subdifferential of $g(x,z)$ are understood as taken with respect to $z$ for $x$ fixed; see \cite[Thm.~3C]{Rockafellar:1976} and \cite[Cor.~3F]{Rockafellar:1976}, respectively. In order to calculate $\frechetCod \partial G$, we observe that \[ \graph[\partial G]=\{ (u, \xi) \in X \times X \mid \xi(x) \in \partial g(x, u(x)) \text{ for a.\,e. } x \in \Omega \}.
\] Since $g$ is normal and convex, $\graph \partial g$ is measurable and closed-valued \cite[Prop.~14.56]{Rockafellar:1998}. Thus, in the definition of $\widehat N((\hat u, \hat \xi); \graph[\partial G])$, we are dealing with a sequence in \[ X \times X = [L^2(\Omega; \mathbb{R}^m)]^2 \simeq L^2(\Omega; \mathbb{R}^{2m}). \] To derive an expression for $\frechetCod \partial G$, it therefore suffices to prove the following result. \begin{proposition} \label{prop:pointwise-normal-cone} Let $U \subset L^2(\Omega; \mathbb{R}^m)$ have the form \[ U = \left\{ u \in L^2(\Omega; \mathbb{R}^m) \mid u(x) \in C(x) \text{ for a.\,e. } x \in \Omega \right\} \] for a Borel-measurable mapping $C: \Omega \rightrightarrows \mathbb{R}^m$ with $C(x) \subset \mathbb{R}^m$ geometrically derivable for almost every $x \in \Omega$. Then for every $u \in U$ we have \begin{equation} \label{eq:l2-pointwise-normal-cone} \widehat N(u; U) = \left\{ z \in L^2(\Omega; \mathbb{R}^m) \mid z(x) \in \widehat N(u(x); C(x)) \text{ for a.\,e. } x \in \Omega \right\}, \end{equation} and \begin{equation} \label{eq:l2-pointwise-tangent-cone} T(u; U) = \left\{ z \in L^2(\Omega; \mathbb{R}^m) \mid z(x) \in T(u(x); C(x)) \text{ for a.\,e. } x \in \Omega \right\}. \end{equation} \end{proposition} \begin{proof} We start with \eqref{eq:l2-pointwise-normal-cone}. Recalling the definition of $\widehat N(u; U)$, we have to find all $z \in L^2(\Omega; \mathbb{R}^m)$ satisfying \[ \limsup_{U \ni \alt{u} \to u} \frac{\iprod{z}{\alt{u}-u}}{\norm{\alt{u}-u}} \le 0, \] where the inner product and norm are now in $L^2(\Omega; \mathbb{R}^m)$, and the convergence is strong convergence in this space, within the subset $U$. Let us take a sequence $U \ni u^i \to u$ with $u^i \neq u$ ($i \in \mathbb{N}$) and let $\varepsilon > 0$ be arbitrary. We denote $v^i \coloneqq u^i-u$. Then, \[ L_i \coloneqq \frac{\iprod{z}{u^i-u}}{\norm{u^i-u}} = \frac{\int_\Omega \iprod{z(x)}{v^i(x)} \,d x} {\left(\int_\Omega \abs{v^i(x)}^2 \,d x\right)^{1/2}}.
\] We let $Z_1=Z_1^i$, $Z_2$ and $Z_3$ be sets with Lebesgue measure $\L^m(\Omega \setminus Z_j) \le \varepsilon/3$ for each $j=1,2,3$ and satisfying, respectively, the conditions \begin{align} \label{eq:pointwise-norm-estimate} &\vnorm{v^i(x)} \le 3\inv\varepsilon \norm{v^i} \quad (x \in Z_1^i), \\ \label{eq:pointwise-boundedness} &z \text{ is bounded on } Z_2,\\ \intertext{and} \label{eq:pointwise-remainder-integral} &\left(\int_{\Omega \setminus Z_3} \vnorm{z(x)}^2 \,d x\right)^{1/2} \le \varepsilon/3. \end{align} To see how \eqref{eq:pointwise-norm-estimate} can hold, we take $\tilde Z_1^i$ as the set of $x \in \Omega$ satisfying \eqref{eq:pointwise-norm-estimate}. Then \[ \norm{v^i}^2 \ge \int_{\Omega \setminus \tilde Z_1^i} \vnorm{v^i(x)}^2 \,d x \ge 3\inv\varepsilon \L^m(\Omega \setminus \tilde Z_1^i) \norm{v^i}^2, \] which gives \[ \varepsilon/3 \ge \L^m(\Omega \setminus \tilde Z_1^i). \] We may therefore take $Z_1^i = \tilde Z_1^i$. The proofs that \eqref{eq:pointwise-boundedness} and \eqref{eq:pointwise-remainder-integral} can hold are similarly elementary. With \[ Z^i \coloneqq Z_1^i \isect Z_2 \isect Z_3, \] we have \[ \L^m(\Omega \setminus Z^i) \le \varepsilon. \] We calculate \[ \begin{aligned} L_i & = \frac{\int_{\Omega \setminus Z^i} \iprod{z(x)}{v^i(x)} \,d x}{\norm{v^i}} +\frac{\int_{Z^i} \iprod{z(x)}{v^i(x)} \,d x}{\norm{v^i}} \\ & \le \frac{\norm{\chi_{\Omega \setminus Z^i}z} \norm{v^i}}{\norm{v^i}} + \int_{Z^i} \frac{\iprod{z(x)}{v^i(x)}}{\vnorm{v^i(x)}} \cdot \frac{\abs{v^i(x)}}{\norm{v^i}} \,d x \\ & \le \norm{\chi_{\Omega \setminus Z^i}z} + 3\inv\varepsilon \int_{Z^i} \max\left\{0, \frac{\iprod{z(x)}{v^i(x)}}{\vnorm{v^i(x)}}\right\} \,d x. \\ & \le \norm{\chi_{\Omega \setminus Z^i}}\norm{z} + 3\inv\varepsilon \int_{Z_2} \max\left\{0, \frac{\iprod{z(x)}{v^i(x)}}{\vnorm{v^i(x)}}\right\} \,d x. \\ & \le \varepsilon^{1/2}\norm{z} + 3\inv\varepsilon \int_{Z_2} \max\left\{0, \frac{\iprod{z(x)}{v^i(x)}}{\vnorm{v^i(x)}}\right\} \,d x. 
\end{aligned} \] If now for almost every $x \in \Omega$ we have that $z(x) \in \widehat N(u(x); C(x))$, then we deduce using the boundedness of $z$ on $Z_2$ and reverse Fatou's inequality that \[ \limsup_{i \to \infty} L_i \le \varepsilon^{1/2} \norm{z}. \] Since $\varepsilon>0$ was arbitrary, we deduce \[ \widehat N(u; U) \supset \{ z \in L^2(\Omega; \mathbb{R}^m) \mid z(x) \in \widehat N(u(x); C(x)) \text{ for a.\,e. } x \in \Omega \}. \] This proves one direction of \eqref{eq:l2-pointwise-normal-cone}, which therefore holds even without geometric derivability. Now we have to prove the other direction, where we do need this assumption. So, let $z \in \widehat N(u; U)$. We have to show that $z(x) \in \widehat N(u(x); C(x))$ for a.\,e.~$x \in \Omega$. Suppose this does not hold. Using the standard polarity relationship $\widehat N(u(x); C(x))=\polar{[T(u(x); C(x))]}$, e.g., from \cite[Thm.~6.28]{Rockafellar:1998}, we can find $\delta > 0$ and a Borel set $E \subset \Omega$ of finite positive Lebesgue measure such that for each $x \in E$ there exists $w(x) \in T(u(x); C(x))$ with $\norm{w(x)}=1$ and $\iprod{z(x)}{w(x)} \ge \delta$. We may without loss of generality take $C(x)$ geometrically derivable for each $x \in E$. By \cref{def:geomderiv} there then exists for each $x \in E$ a curve $\xi(\mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip, x): [0, \varepsilon(x)] \to C(x)$ such that $\xi'_+(0, x)=w(x)$ and $\xi(0, x)=u(x)$. Let us pick $c \in (0, \delta)$. By replacing $E$ by a subset of positive measure, we may by Egorov's theorem assume the existence of $\varepsilon > 0$ such that \[ \vnorm{\xi(t, x)-\xi(0, x)-w(x)t} \le c t \quad (t \in [0, \varepsilon],\, x \in E). \] Let us define \[ \tilde u^t(x) \coloneqq \begin{cases} \xi(t, x), & x \in E, \\ u(x), & x \in \Omega \setminus E. \end{cases} \] With $v^t \coloneqq \tilde u^t -u$, we have $v^t(x)=\xi(t, x)-\xi(0, x)$ for $x \in E$, and $v^t(x)=0$ for $x \in \Omega \setminus E$. 
Therefore, for $t \in (0, \varepsilon]$ and some $c'>0$ there holds \[ \norm{v^t} = \left(\int_E \vnorm{\xi(t, x)-\xi(0, x)}^2\, d x\right)^{1/2} \le \left(\int_E (\vnorm{w(x)}t+ ct)^2\, d x\right)^{1/2} \le c't. \] Upon replacing $E$ by a further subset of positive measure, we may moreover assume that $\vnorm{z(x)} \le M$ ($x \in E$) for some $M \ge 1$; since the construction above applies to any $c \in (0, \delta)$, we may also assume that $Mc < \delta$. Then likewise \[ \iprod{z(x)}{v^t(x)} \ge \iprod{z(x)}{w(x)}t - \vnorm{z(x)} \cdot \vnorm{\xi(t, x)-\xi(0, x)-w(x)t} \ge \delta t - Mct. \] It follows that \begin{equation} \limsup_{t \searrow 0} \int_{E} \frac{\iprod{z(x)}{v^t(x)}}{\norm{v^t}} \,d x \ge \limsup_{t \searrow 0} \frac{\L^m(E)(\delta t - Mc t)}{c't} = \frac{\L^m(E)(\delta-Mc)}{c'} > 0. \end{equation} With $u^i \coloneqq \tilde u^{1/i}$ for $i \in \mathbb{N}$, it follows that $\limsup_{i \to \infty} L_i > 0$. This provides a contradiction to $z \in \widehat N(u; U)$. Thus $z(x) \in \widehat N(u(x); C(x))$ for a.\,e.~$x \in \Omega$, finishing the proof of \eqref{eq:l2-pointwise-normal-cone}. \bigskip We still have to show \eqref{eq:l2-pointwise-tangent-cone}. The inclusion \[ T(u; U) \subset \left\{ z \in L^2(\Omega; \mathbb{R}^m) \mid z(x) \in T(u(x); C(x)) \text{ for a.\,e. } x \in \Omega \right\} \] follows from the defining equation \eqref{eq:tangent} and the fact that a sequence convergent in $L^2(\Omega)$ converges, after possibly passing to a subsequence, pointwise almost everywhere. For the other direction, we take for almost every $x \in \Omega$ a tangent vector $z(x) \in T(u(x); C(x))$ at $u(x) \in C(x)$. For the inclusion \eqref{eq:l2-pointwise-tangent-cone}, we only need to consider the case $z \in L^2(\Omega; \mathbb{R}^m)$. By geometric derivability, we may find for a.\,e. $x \in \Omega$ an $\varepsilon(x)>0$ and a curve $\xi(\mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip, x): [0, \varepsilon(x)] \to C(x)$ such that $\xi(0, x)=u(x)$ and $\xi'_+(0, x)=z(x)$.
In particular, for any given $c>0$, we may find $\varepsilon_c(x) \in (0, \varepsilon(x)]$ such that \begin{equation} \label{eq:tangent-xi} \frac{\vnorm{\xi(t, x)-\xi(0, x)-z(x)t}}{t} \le c \quad (t \in (0, \varepsilon_c(x)],\, \text{a.\,e. } x \in \Omega). \end{equation} For $t>0$, let us set \[ E_{c,t} \coloneqq \{ x \in \Omega \mid \varepsilon_c(x)\geq t \}. \] If we define \[ \tilde u^{c,t}(x) \coloneqq \begin{cases} \xi(t, x), & x \in E_{c,t}, \\ u(x), & x \in \Omega \setminus E_{c,t}, \end{cases} \] then by \eqref{eq:tangent-xi}, $\vnorm{\tilde u^{c,t}(x)-u(x)} \le t (c+\vnorm{z(x)})$ for a.\,e.~$x \in \Omega$, so that \begin{equation} \label{eq:tangent-approx} \norm{\tilde u^{c,t}-u} \le \left(\int_\Omega t^2 (c+\vnorm{z(x)})^2 \,d x\right)^{1/2} \le t (c\sqrt{\L^m(\Omega)}+\norm{z}). \end{equation} Moreover, \eqref{eq:tangent-xi} also gives \begin{equation} \label{eq:tangent-construct} \begin{aligned}[t] \frac{\norm{\tilde u^{c,t}-u-z t}}{t} & \le \frac{1}{t} \left(\int_{E_{c,t}} \vnorm{\xi(t,x)-\xi(0,x)-z(x)t}^2 \, d x\right)^{1/2} \!+ \frac{1}{t} \left(\int_{\Omega \setminus E_{c,t}} \vnorm{z(x)t}^2 \, d x\right)^{1/2} \\ & \le c\sqrt{\L^m(\Omega)} + \norm{z\chi_{\Omega \setminus E_{c,t}}}. \end{aligned} \end{equation} For each $i \in \mathbb{N}$ we can find $t_i>0$ such that $\norm{z\chi_{\Omega \setminus E_{1/i,t_i}}} \le 1/i$. This follows from Lebesgue's dominated convergence theorem and the fact that $\L^m(\Omega \setminus E_{c,t}) \to 0$ as $t \to 0$. The estimates \eqref{eq:tangent-approx} and \eqref{eq:tangent-construct} thus show that $u^i \coloneqq \tilde u^{1/i,t_i}$ satisfy $u^i \to u$ and $(u^i-u)/t_i \to z$. Therefore $z \in T(u; U)$, finishing the proof of \eqref{eq:l2-pointwise-tangent-cone}. \end{proof} As a corollary, we may calculate $\frechetCod \partial G(u|w)$ for $G$ of the form \eqref{eq:g-pointwise-integral}.
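Before proceeding, let us illustrate \cref{prop:pointwise-normal-cone} with a classical example. For the pointwise constraint set \[ U = \{ u \in L^2(\Omega) \mid u(x) \ge 0 \text{ for a.\,e. } x \in \Omega \}, \] i.e., $C(x) = [0, \infty)$ for all $x \in \Omega$ (which is convex and hence geometrically derivable), the proposition recovers for $u \in U$ the familiar characterizations \[ \widehat N(u; U) = \left\{ z \in L^2(\Omega) \mid z \le 0 \text{ a.\,e.~on } \{u = 0\},\ z = 0 \text{ a.\,e.~on } \{u > 0\} \right\} \] and \[ T(u; U) = \left\{ z \in L^2(\Omega) \mid z \ge 0 \text{ a.\,e.~on } \{u = 0\} \right\}. \]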
\begin{corollary} \label{corollary:g-form} Let $G: L^2(\Omega; \mathbb{R}^m) \to \overline{\mathbb{R}}$ have the form \eqref{eq:g-pointwise-integral} for some regular integrand $g$. Then the regular coderivative of $\partial G$ at $u$ for $\xi$ in the direction $\dir\xi$, where $u,\xi,\dir\xi \in L^2(\Omega; \mathbb{R}^m)$, is given by \begin{equation} \label{eq:g-form} \frechetCod{[\partial G]}(u|\xi)(\dir\xi) = \left\{ \dir{u} \in L^2(\Omega; \mathbb{R}^m) \,\middle|\, \begin{array}{r} \dir{u}(x) \in \frechetCod{[\partial g(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)]}(u(x)|\xi(x))(\dir\xi(x)) \\ \text{ for a.\,e. } x \in \Omega \end{array} \right\}. \end{equation} Likewise, the graphical derivative at $u$ for $\xi$ in the direction $\dir u$ is given by \begin{equation} \label{eq:g-form-graph} D{[\partial G]}(u|\xi)(\dir u) = \left\{ \dir{\xi} \in L^2(\Omega; \mathbb{R}^m) \,\middle|\, \begin{array}{r} \dir{\xi}(x) \in D{[\partial g(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)]}(u(x)|\xi(x))(\dir u(x)) \\ \text{ for a.\,e. } x \in \Omega \end{array} \right\}. \end{equation} \end{corollary} \begin{proof} As we have already remarked at the beginning of the present \cref{sec:second-deriv}, the sets \[ C(x) \coloneqq \{ (z, \xi) \in \mathbb{R}^m \times \mathbb{R}^m \mid \xi \in \partial g(x, z) \} \] are by \cite[Prop.~8.41, Ex.~13.30, Thm.~13.40]{Rockafellar:1998} geometrically derivable for regular integrands $g$. The present result therefore follows by direct application of \cref{prop:pointwise-normal-cone} to the set $U=\graph \partial G$ with $\partial G$ given in \eqref{eq:g-pointwise-subdiff}. \end{proof} More generally, we have the following. \begin{corollary} \label{corollary:p-form} Let $P: Q \sim L^2(\Omega; \mathbb{R}^m) \rightrightarrows W \sim L^2(\Omega; \mathbb{R}^k)$ have the form \[ P(q) = \{ w \in L^2(\Omega; \mathbb{R}^k) \mid w(x) \in p(x, q(x))\, \text{for a.\,e.
} x \in \Omega\} \] for some Borel-measurable and pointwise a.\,e.~proto-differentiable set-valued function $p: \Omega \times \mathbb{R}^m \rightrightarrows \mathbb{R}^k$. Then the regular coderivative of $P$ at $q$ for $w$ in the direction $\dir{w}$, where $q \in L^2(\Omega; \mathbb{R}^m)$, and $w,\dir{w} \in L^2(\Omega; \mathbb{R}^k)$, is given by \[ \frechetCod{P}(q|w)(\dir{w}) = \left\{ \dir{q} \in L^2(\Omega; \mathbb{R}^m) \,\middle|\, \begin{array}{r} \dir{q}(x) \in \frechetCod{[p(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)]}(q(x)|w(x))(\dir{w}(x)), \\ \text{ for a.\,e. } x \in \Omega \end{array} \right\}. \] Likewise, \[ D{P}(q|w)(\dir{q}) = \left\{ \dir{w} \in L^2(\Omega; \mathbb{R}^k) \,\middle|\, \begin{array}{r} \dir{w}(x) \in D{[p(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)]}(q(x)|w(x))(\dir{q}(x)), \\ \text{ for a.\,e. } x \in \Omega \end{array} \right\}. \] \end{corollary} \begin{corollary} \label{cor:cod-p-plus-smooth} Let $P$ be a pointwise set-valued functional as in \cref{corollary:p-form}, and let $h: Q \to W$ be single-valued and Fréchet differentiable. Then \begin{equation} \label{eq:d-p-plus-smooth-cod} \frechetCod{(P+h)}(q|w)(\dir{w}) = \frechetCod{P}(q|w-h(q))(\dir{w}) + [\grad h(q)]^*\dir{w}, \end{equation} and \begin{equation} \label{eq:d-p-plus-smooth} D{(P+h)}(q|w)(\dir{q}) = DP(q|w-h(q))(\dir{q}) + \grad h(q)\dir{q}. \end{equation} \end{corollary} \begin{proof} Similarly to the finite-dimensional case in \cite[Ex.~10.43]{Rockafellar:1998}, the rule \eqref{eq:d-p-plus-smooth} follows immediately from the defining equation \eqref{eq:graphderiv}. The rule \eqref{eq:d-p-plus-smooth-cod} is an immediate consequence of the sum rule for regular coderivatives \cite[Thm.~1.62]{Mordukhovich:2006}.
\end{proof} \begin{remark} Using \eqref{eq:l2-pointwise-normal-cone}, it is not difficult to obtain the characterization \begin{equation} \label{eq:pointwise-limiting-normal} N(u; U) = \left\{ z \in L^2(\Omega; \mathbb{R}^m) \mid z(x) \in N(u(x); C(x)) \text{ for a.\,e. } x \in \Omega \right\} \end{equation} of the \emph{limiting normal cone} $N(u; U) \coloneqq \limsup_{U \ni u' \to u} \widehat N(u'; U)$. The proof is based on $L^2$ convergence giving pointwise a.\,e.~convergence for a subsequence, and in the other direction, reindexing finite-dimensional sequences to get sequences convergent in $L^2$. The expression \eqref{eq:pointwise-limiting-normal} then allows obtaining corresponding versions of the corollaries above for the \emph{limiting coderivative} $D^*$ (which enjoys a richer calculus) in place of the regular coderivative $\widehat D^*$. However, the stability analysis which is the focus of this work rests on the relation between the regular coderivative and the graphical derivative, discussed in the following section, which does not hold for the limiting coderivative (which has a similar relation with the regular derivative). In particular, we cannot work in the same way with the convexified graphical derivative which is a key step in our analysis (see \cref{sec:linear-cone} below). Hence, we do not treat this case in detail here. \end{remark} \subsection{The finite-dimensional coderivative in terms of the graphical derivative} \label{sec:findimco} \Cref{corollary:g-form} and \cref{corollary:p-form} give us computable expressions for the coderivative for pointwise set-valued mappings in infinite dimensions in terms of the \emph{co}derivative in finite dimensions. It is, however, often easier to work with the graphical derivative \eqref{eq:graphderiv}. From \cite[Prop.~8.37]{Rockafellar:1998} we find for $R: \mathbb{R}^m \rightrightarrows \mathbb{R}^n$ that \begin{equation} \label{eq:upper-adj-cod} \frechetCod R(q|w) = [DR(q|w)]^{*+}. 
\end{equation} Here, for a general set-valued mapping $J: Q \rightrightarrows W$, the \emph{upper adjoint} $J^{*+}:W\rightrightarrows Q$ is defined via \[ J^{*+}(\dir{w}) \coloneqq \{ \dir{q} \in Q \mid \iprod{\dir{q}}{\dir{\alt{q}}} \le \iprod{\dir{w}}{\dir{\alt{w}}} \text{ when } \dir{\alt{w}} \in J(\dir{\alt{q}}) \}. \] In general, the graph of the regular coderivative need not be a convex set. It is often more convenient -- and for our analysis sufficient -- to work with its convexification. To see this, observe first that by definition of the upper adjoint, and minding the negative sign of $\dir{\alt{w}}$ in \eqref{eq:frechetcod}, the relation \eqref{eq:upper-adj-cod} is equivalent to \[ \widehat N((q, w); {\graph R})=\polar{T((q, w); {\graph R})}. \] In particular -- simply through the definitions of polarity and convexity -- the convex hull of the tangent cone satisfies \[ \widehat N((q, w); {\graph R})=\polar{[\conv T((q, w); {\graph R})]}. \] Defining $\widetilde{D R}(q|w)$ via \[ \graph \widetilde{D R}(q|w) =\conv \graph[D R(q|w)], \] we therefore deduce \begin{equation} \label{eq:upper-adj-cod-tilde} \frechetCod R(q|w) = [DR(q|w)]^{*+} = [\widetilde{DR}(q|w)]^{*+}, \end{equation} where the first equality holds due to the finite-dimensional setting, while the second equality holds generally due to the properties of convex hulls and polars. The following central result shows that for the pointwise functionals that are the focus of this work, both equalities hold even in the infinite-dimensional setting. \begin{theorem} \label{thm:p-graph-deriv} Let $P: Q \sim L^2(\Omega; \mathbb{R}^m) \rightrightarrows W \sim L^2(\Omega; \mathbb{R}^k)$ have the form \[ P(q) = \{ w \in L^2(\Omega; \mathbb{R}^k) \mid w(x) \in p(x, q(x))\, \text{for a.\,e. } x \in \Omega\}
\] with proto-differentiable $p(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip): \mathbb{R}^m \rightrightarrows \mathbb{R}^k$ (a.\,e.~$x \in \Omega$) having locally closed graph at $(q(x),w(x))$ for a.\,e.~$x \in \Omega$. Then \[ \frechetCod P(q|w) = [DP(q|w)]^{*+} = [\widetilde{DP}(q|w)]^{*+}, \] i.e., \[ \begin{split} \dir{q} \in \frechetCod P(q|w)(\dir{w}) & \iff \iprod{\dir{q}}{\dir{\alt{q}}} \le \iprod{\dir{w}}{\dir{\alt{w}}} \text{ when } \dir{\alt{w}} \in DP(q|w)(\dir{\alt{q}}) \\ & \iff \iprod{\dir{q}}{\dir{\alt{q}}} \le \iprod{\dir{w}}{\dir{\alt{w}}} \text{ when } \dir{\alt{w}} \in \widetilde{DP}(q|w)(\dir{\alt{q}}). \end{split} \] \end{theorem} \begin{proof} Let us define $p_x \coloneqq p(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)$. From \cref{corollary:p-form}, we have the equivalence \[ \dir{{w}}(x) \in Dp_x(q(x)|w(x))(\dir{{q}}(x)) \text{ for a.\,e. } x \in \Omega \iff \dir{{w}} \in DP(q|w)(\dir{{q}}). \] Provided that $\graph p_x$ is locally closed for a.\,e.~$x \in \Omega$, we may thus calculate \begin{equation} \label{eq:somesetmap-upper-adjoint-transformation-first-step} \begin{aligned}[t] \dir{q} \in \frechetCod P(q|w)(\dir{w}) & \iff \dir{q}(x) \in \frechetCod p_x(q(x)|w(x))(\dir{w}(x)) \quad\text{for a.\,e. } x \in \Omega \\ & \iff \iprod{\dir{q}(x)}{\dir{\alt{q}}(x)} \le \iprod{\dir{w}(x)}{\dir{\alt{w}}(x)} \quad\text{for a.\,e. } x \in \Omega \\ & \phantom{\iff} \quad \text{when } \dir{\alt{w}}(x) \in Dp_x(q(x)|w(x))(\dir{\alt{q}}(x)) \quad\!\text{for a.\,e. } x \in \Omega \\ & \iff \iprod{\dir{q}(x)}{\dir{\alt{q}}(x)} \le \iprod{\dir{w}(x)}{\dir{\alt{w}}(x)} \quad\text{for a.\,e. } x \in \Omega \\ & \phantom{\iff} \quad \text{when } \dir{\alt{w}} \in DP(q|w)(\dir{\alt{q}}). \end{aligned} \end{equation} Here we are still computing the upper adjoint pointwise. 
Clearly, \eqref{eq:somesetmap-upper-adjoint-transformation-first-step} implies \begin{equation} \label{eq:somesetmap-upper-adjoint-transformation-first-impl} \dir{q} \in \frechetCod P(q|w)(\dir{w}) \implies \iprod{\dir{q}}{\dir{\alt{q}}} \le \iprod{\dir{w}}{\dir{\alt{w}}} \text{ when } \dir{\alt{w}} \in DP(q|w)(\dir{\alt{q}}). \end{equation} Further, if there exists a set $E \subset \Omega$ with $\L^m(E)>0$ and \[ \iprod{\dir{q}(x)}{\dir{\alt{q}}(x)} > \iprod{\dir{w}(x)}{\dir{\alt{w}}(x)} \qquad (x \in E), \] then by constructing \[ \altalt{\dir{q}}(x) \coloneqq (1+t\chi_E(x))\dir{\alt{q}}(x), \quad \altalt{\dir{w}}(x) \coloneqq (1+t\chi_E(x))\dir{\alt{w}}(x), \] we observe for sufficiently large $t>0$ the condition \[ \iprod{\dir{q}}{\altalt{\dir{q}}} > \iprod{\dir{w}}{\altalt{\dir{w}}}. \] Moreover, by the pointwise character of $P$, also $\altalt{\dir{w}} \in DP(q|w)(\altalt{\dir{q}})$. Thus the implication in \eqref{eq:somesetmap-upper-adjoint-transformation-first-impl} actually holds both ways, which is exactly what we set out to prove. Finally, $[DP(q|w)]^{*+} = [\widetilde{DP}(q|w)]^{*+}$ always holds, as we have already observed in \eqref{eq:upper-adj-cod-tilde}. \end{proof} Similarly to \cref{cor:cod-p-plus-smooth}, we have the following immediate corollary. \begin{corollary} \label{cor:d-p-plus-smooth} Let $P$ be a pointwise set-valued functional as in \cref{thm:p-graph-deriv}, and let $h: Q \to W$ be single-valued and Fréchet differentiable. Then \begin{equation} \label{eq:p-plus-h-upper-adjoint} \frechetCod (P+h)(q|w) = [D(P+h)(q|w)]^{*+}. \end{equation} \end{corollary} \begin{proof} Let us set $R \coloneqq P+h$, and recall from \cref{cor:cod-p-plus-smooth} that \[ \frechetCod R(q|w)(\dir{w}) = \frechetCod P(q|w-h(q))(\dir{w})+[\grad h(q)]^*\dir{w}. \] Thus $\dir{q} \in \frechetCod R(q|w)(\dir{w})$ is the same as saying that $\dir{\bar q} \coloneqq \dir{q} -[\grad h(q)]^*\dir{w}\in \frechetCod P(q|\alt{w})(\dir{w})$ for $\alt{w} \coloneqq w-h(q)$.
By \cref{thm:p-graph-deriv} this holds if and only if \[ \iprod{\dir{\bar q}}{\dir{\alt{q}}} \le \iprod{\dir{w}}{\dir{\alt{w}}} \text{ when } \dir{\alt{w}} \in DP(q|\alt{w})(\dir{\alt{q}}), \] or equivalently \[ \iprod{\dir{q}}{\dir{\alt{q}}} \le \iprod{\dir{w}}{\dir{\alt{w}}+\grad h(q)\dir{\alt{q}}} \text{ when } \dir{\alt{w}} \in DP(q|\alt{w})(\dir{\alt{q}}), \] which is just the same as \[ \iprod{\dir{q}}{\dir{\alt{q}}} \le \iprod{\dir{w}}{\dir{\alt{w}}} \text{ when } \dir{\alt{w}} \in DP(q|\alt{w})(\dir{\alt{q}})+\grad h(q)\dir{\alt{q}}. \] Now we just use \eqref{eq:d-p-plus-smooth} to derive \eqref{eq:p-plus-h-upper-adjoint}. \end{proof} \begin{corollary} \label{corollary:g-form-graphderiv} Let $G: L^2(\Omega; \mathbb{R}^m) \to \mathbb{R}$ have the form \eqref{eq:g-pointwise-integral} for some regular integrand $g$. Then the graphical derivative of $\partial G$ at $u$ for $\xi$ in the direction $\dir{u}$, where $u,\xi,\dir{u} \in L^2(\Omega; \mathbb{R}^m)$, is given by \begin{equation} \label{eq:g-d-form} D{[\partial G]}(u|\xi)(\dir{u}) = \left\{ \dir\xi \in L^2(\Omega; \mathbb{R}^m) \,\middle|\, \begin{array}{r} \dir\xi(x) \in D{[\partial g(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)]}(u(x)|\xi(x))(\dir{u}(x)) \\ \text{ for a.\,e. } x \in \Omega \end{array} \right\}. \end{equation} Moreover, \begin{equation} \frechetCod{} [\partial G](u|\xi) = [D[\partial G](u|\xi)]^{*+}. \end{equation} \end{corollary} \begin{proof} The claim follows directly from \cref{thm:p-graph-deriv} with $p(x, z)=\partial g(x, z)$. Local closedness of $\graph \partial g(x, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)$ is a consequence of the lower semicontinuity of $g$. \end{proof} \subsection{Examples}\label{sec:pointwise:examples} We now study specific cases of the finite- and infinite-dimensional second-order generalized derivatives, relevant to our model problems \eqref{eq:l1fit_problem} and \eqref{eq:linffit_problem}. 
Other examples satisfying the assumptions are the piecewise linear-quadratic ``multi-bang'' and switching penalties introduced in \cite{ClasonKunisch:2013} and \cite{ClasonKunisch:2016}, respectively. \subsubsection{Squared \texorpdfstring{$\scriptstyle L^2(\Omega;\mathbb{R}^m)$}{L²(Ω;ℝᵐ)} norm} \label{sec:l2-fitting-analysis} The following result is standard; see, e.g., \cite[Ex.~8.34]{Rockafellar:1998}. \begin{lemma} \label{lemma:two-norm-squared-deriv} With $z \in \mathbb{R}^m$, let $g(z)=\frac12\norm{z}^2$. Then $\partial g$ is proto-differentiable with \begin{equation} D(\partial g)(z|\zeta)(\dirz) = \begin{cases} \dirz, & \zeta=z, \\ \emptyset, & \text{otherwise}. \end{cases} \end{equation} \end{lemma} From \cref{corollary:g-form-graphderiv}, we immediately obtain \begin{corollary}\label{cor:g-form-l2norm} With $g(z)=\frac12\norm{z}^2$ and $\Omega \subset \mathbb{R}^n$ an open bounded domain, let \[ G(u) \coloneqq \int_\Omega g(u(x)) \,d x \qquad (u \in L^2(\Omega; \mathbb{R}^m)). \] Then, for $\xi = u$, \[ D(\partial G)(u|\xi)(\diru) = \diru \quad\text{and}\quad \frechetCod(\partial G)(u|\xi)(\dir\xi) = \dir\xi, \] while both sets are empty for $\xi \neq u$. \end{corollary} \subsubsection{Indicator function} \label{sec:l1-fitting-analysis} The following lemma is useful for computing $D[\partial F^*](v|\eta)$ for the problem \eqref{eq:l1fit_problem}. Its claim in the one-dimensional case ($m=1$) is illustrated in \cref{fig:indicator}. \begin{lemma} \label{lemma:DsubdiffF} With $z \in \mathbb{R}^m$, let $f(z)=\iota_{\closure B(0,\alpha)}(z)$. Then $\partial f$ is proto-differentiable with \begin{equation} \label{eq:d-subdiff-indicator} D(\partial f)(z|\zeta)(\dirz) = \begin{cases} \norm{\zeta} \dirz / \alpha + \mathbb{R} z, & \norm{z}=\alpha,\, \zeta \in (0, \infty) z,\, \iprod{z}{\dirz}=0, \\ [0, \infty) z, & \norm{z}=\alpha,\, \norm{\zeta}=0,\, \iprod{z}{\dirz} =0, \\ \{0\}, & \norm{z}=\alpha,\, \norm{\zeta}=0,\, \iprod{z}{\dirz} < 0, \\ \{0\}, & \norm{z} < \alpha,\, \norm{\zeta}=0, \\ \emptyset, & \text{otherwise}.
\end{cases} \end{equation} In particular, if $m=1$, then \begin{align} \label{eq:d-subdiff-indicator-1d} D(\partial f)(z|\zeta)(\dirz) &= \begin{cases} \mathbb{R}, & \abs{z}=\alpha,\, \zeta \in (0, \infty) z,\, \dirz=0, \\ [0, \infty) z, & \abs{z}=\alpha,\, \zeta=0,\, \dirz = 0, \\ \{0\}, & \abs{z}=\alpha,\, \zeta = 0,\, z \dirz < 0, \\ \{0\}, & \abs{z} < \alpha,\, \zeta=0,\, \\ \emptyset, & \text{otherwise}, \end{cases} \intertext{as well as} \label{eq:d-subdiff-indicator-1d-conv} \widetilde{D(\partial f)}(z|\zeta)(\dirz) &= \begin{cases} \mathbb{R}, & \abs{z}=\alpha,\, \zeta \in (0, \infty) z,\, \dirz=0, \\ [0, \infty) z, & \abs{z}=\alpha,\, \zeta=0,\, z \dirz \le 0, \\ \{0\}, & \abs{z} < \alpha,\, \zeta=0,\, \\ \emptyset, & \text{otherwise}. \end{cases} \end{align} \end{lemma} \begin{figure} \centering \begin{subfigure}{0.30\textwidth} \centering \begin{asy} unitsize(50,50); real l=1.5; real eps=0.3; pair gfstart=(-1, -l); pair gfend=(1, l); path gf=gfstart--(-1, 0)--(1, 0)--gfend; draw(gf, dashed, Arrows); draw((-eps, 0)--(eps, 0), linewidth(1.1), Arrows); dot((0, 0)); label("(iv)", (0, 0), 1.5*N); draw((-1,-eps)--(-1,0)--(-1+eps, 0), linewidth(1.1), Arrows); dot((-1, 0)); label("(ii)", (-1, -eps), 1.5*W); label("(iii)", (-1+eps, 0), 1.5*N); draw((1,0.75+eps)--(1, 0.75-eps), linewidth(1.1), Arrows); dot((1, 0.75)); label("(i)", (1, 0.75), 1.5*E); \end{asy} \caption{$D(\partial f)(z|\zeta)$} \end{subfigure} \hfill \begin{subfigure}{0.30\textwidth} \centering \begin{asy} unitsize(50,50); real l=1.5; real eps=0.3; pair gfstart=(-1, -l); pair gfend=(1, l); path gf=gfstart--(-1, 0)--(1, 0)--gfend; draw(gf, dashed, Arrows); draw((-eps, 0)--(eps, 0), linewidth(1.1), Arrows); dot((0, 0)); fill((-1,-eps)--(-1,0)--(-1+eps, 0)..controls (-1+eps/sqrt(2), -eps/sqrt(2))..cycle, gray(.8)); draw((-1,-eps)--(-1,0)--(-1+eps, 0), linewidth(1.1), Arrows); dot((-1, 0)); draw((1,0.75+eps)--(1, 0.75-eps), linewidth(1.1), Arrows); dot((1, 0.75)); \end{asy} 
\caption{$\widetilde{D(\partial f)}(z|\zeta)$} \end{subfigure} \hfill \begin{subfigure}{0.30\textwidth} \centering \begin{asy} unitsize(50,50); real l=1.5; real eps=0.3; pair gfstart=(-1, -l); pair gfend=(1, l); path gf=gfstart--(-1, 0)--(1, 0)--gfend; draw(gf, dashed, Arrows); draw((0, -eps)--(0, eps), linewidth(1.1), Arrows); dot((0, 0)); fill((-1-eps,0)--(-1,0)--(-1, eps)..controls (-1-eps/sqrt(2), eps/sqrt(2))..cycle, gray(.8)); draw((-1-eps,0)--(-1,0)--(-1, eps), linewidth(1.1), Arrows); dot((-1, 0)); draw((1-eps,0.75)--(1+eps, 0.75), linewidth(1.1), Arrows); dot((1, 0.75)); \end{asy} \caption{$\widehat D^*(\partial f)(z|\zeta)$} \end{subfigure} \caption{Illustration of the graphical derivative and regular coderivative for $\partial f$ with $f=\iota_{[-1,1]}$. The dashed line is $\graph \partial f$. The dots indicate the base points $(z, \zeta)$ where the graphical derivative or coderivative is calculated, and the thick arrows and gray areas the directions of $(\dir z, \dir \zeta)$ relative to the base point. The labels (i) etc. indicate the corresponding case of \eqref{eq:d-subdiff-indicator-1d}. } \label{fig:indicator} \end{figure} \begin{proof} The proto-differentiability of $\partial f$ follows from the fact that $f$ is twice epi-differentiable; see \cite[Ex.~13.17 \& Thm.~13.40]{Rockafellar:1998}, writing $B{(0,\alpha)}=\{x\in\mathbb{R}^m\mid \norm{x}^2 \in (-\infty, \alpha^2]\}$ for the twice continuously differentiable mapping $x\mapsto \norm{x}^2$ and the polyhedral set $(-\infty,\alpha^2]$ satisfying the constraint qualification. For the full proof of \eqref{eq:d-subdiff-indicator}, using second-order subgradient theory from \cite{Rockafellar:1998}, we refer to \cite{Valkonen:2014}.% \footnote{There is a small omission in \cite[Lemma 4.2]{Valkonen:2014} that actually causes $\widetilde{D(\partial f)}(z|\zeta)(\dirz)$ to be calculated instead.
In calculating the subdifferential of (4.18) therein, at the end of the proof of the lemma, the cases $\iprod{y}{w}=0$ and $\iprod{y}{w} < 0$ need to be calculated separately to give the two different sub-cases of ($\norm{z}=\alpha$ and $\norm{\zeta}=0$) in our expression \eqref{eq:d-subdiff-indicator}.} For completeness, we provide here an elementary proof of the one-dimensional case~\eqref{eq:d-subdiff-indicator-1d}. We have \begin{equation} \label{eq:indicator-1d-subdiff} \partial f(z)= \begin{cases} [0, \infty)z, & \abs{z}=\alpha, \\ \{0\}, & \abs{z} < \alpha, \\ \emptyset, & \text{otherwise}. \end{cases} \end{equation} If $\zeta \in \partial f(z)$ and $\dir{\zeta} \in D(\partial f)(z|\zeta)(\dirz)$, there exist by \eqref{eq:graphderiv} sequences $t^i \searrow 0$, $\dir{z^i} \to \dir{z}$, and $\zeta^i \in \partial f(z+t^i\dir{z^i})$ such that \begin{equation} \label{eq:indicator-1d-limit} \dir{\zeta}=\lim_{i \to \infty} \frac1{t^i}(\zeta^i-\zeta). \end{equation} We proceed by case distinction. \begin{enumerate}[label=(\roman*)] \item If $\abs{z}=\alpha$, $\dir{z}=0$, and $\zeta \in (0, \infty)z$, choosing $\dir{z^i}=0$, we can for any $\dir{\zeta} \in \mathbb{R}$ and large enough $i$ take $\zeta^i=t^i\dir{\zeta}+\zeta \in [0, \infty)z = \partial f(z)$. Thus we obtain the first case in \eqref{eq:d-subdiff-indicator-1d}. \item Let us then suppose $\abs{z}=\alpha$, $\dir{z}=0$, but $\zeta=0$. In this case, choosing $\dir{z^i}=0$, we have by \eqref{eq:indicator-1d-subdiff} free choice of $\zeta^i \in [0, \infty) z$. Picking $\dir{\zeta} \in [0, \infty) z$ and setting $\zeta^i \coloneqq t^i\dir{\zeta}$, we deduce that $\dir{\zeta} \in D(\partial f)(z|\zeta)(\dir{z})$, so that $[0, \infty) z \subset D(\partial f)(z|\zeta)(\dir{z})$. Since $-(0, \infty) z$ is clearly not approximable from $[0, \infty) z$, we obtain the second case of \eqref{eq:d-subdiff-indicator-1d}. \item If $\abs{z}=\alpha$ and $z\dir{z}>0$, then $\partial f(z+t^i\dir{z^i})=\emptyset$ for large $i$. Therefore it must hold that $z\dir{z} \le 0$.
If $\dir{z} \ne 0$, it follows that $\zeta^i=0$ (for large $i$). Since $\zeta$ is fixed, the limit \eqref{eq:indicator-1d-limit} does not exist unless $\zeta=0$, in which case also $\dir{\zeta}=0$. This is covered by the third case of~\eqref{eq:d-subdiff-indicator-1d}. \item If $\abs{z}<\alpha$, then $\zeta^i=\zeta=0$, so we get the fourth case in \eqref{eq:d-subdiff-indicator-1d}. \item If $\abs{z}=\alpha$ and $\dir{z}=0$, but $\zeta \in -(0, \infty)z$, we see that \[ -\sign{\zeta}\,\frac1{t^i}(\zeta^i-\zeta)\ge\frac{1}{t^i} \abs{\zeta}, \] so the limit \eqref{eq:indicator-1d-limit} cannot exist. Therefore the graphical derivative is empty. Likewise, we obtain an empty graphical derivative if $\abs{z}>\alpha$, since even $\partial f(z)$ is empty and $\zeta$ does not exist. Together, we obtain the final case in \eqref{eq:d-subdiff-indicator-1d}. \end{enumerate} Finally, regarding $\widetilde{D(\partial f)}(z|\zeta)$ with $m=1$, we see that only the case $\abs{z}=\alpha$ and $\zeta=0$ is split into two sub-cases in \eqref{eq:d-subdiff-indicator-1d}, yielding an altogether non-convex $\graph[D(\partial f)(z|\zeta)]$. Taking the convexification of this set yields \eqref{eq:d-subdiff-indicator-1d-conv}; cf.~\cref{fig:indicator}. \end{proof} \begin{corollary}\label{cor:g-form-indicator} Let $f^*(z) \coloneqq \iota_{[-\alpha, \alpha]}(z)$ and \[ F^*(v) \coloneqq \int_\Omega f^*(v(x)) \,d x \qquad (v \in L^2(\Omega)).
\] Then \begin{align} \widetilde{D[\partial F^*]}(v|\eta)(\dirv)&= \begin{cases} \DerivConeF[v|\eta]^\circ, & \dirv \in \DerivConeF[v|\eta] \text{ and } \eta \in \partial F^*(v),\\ \emptyset, & \text{otherwise}, \end{cases} \intertext{and} \frechetCod{[\partial F^*]}(v|\eta)(\dir\eta)&= \begin{cases} {\DerivConeF[v|\eta]}^\circ, & -\dir\eta \in \DerivConeF[v|\eta] \text{ and } \eta \in \partial F^*(v),\\ \emptyset, & \text{otherwise}, \end{cases} \end{align} for the cone \begin{align} \DerivConeF[v|\eta] &= \{ z \in L^2(\Omega) \mid z(x)v(x)\le 0 \text{ if } \abs{v(x)}=\alpha \ \text{ and }\ z(x)\eta(x)\ge 0 \} \intertext{and its polar} \polar{\DerivConeF[v|\eta]}&= \{ \nu \in L^2(\Omega) \mid \nu(x)v(x)\ge 0 \text{ if } \eta(x)=0\ \text{ and }\ \nu(x) = 0 \text{ if } \abs{v(x)}<\alpha \}. \end{align} \end{corollary} \begin{proof} The claim about the graphical derivative follows from \cref{corollary:g-form-graphderiv} and \cref{lemma:DsubdiffF}, using the fact that the indicator function of a closed convex set is normal. The regular coderivative formula follows from the more general \cref{prop:frechetcod-cone-operator} in the appendix. Here, in the derivation of the explicit form of the polar cone $\polar{\DerivConeF[v|\eta]}$, we use the fact that $D[\partial F^*](v|\eta)(\dirv)$ is non-empty if and only if \begin{equation} \label{eq:D-fstar-l1-constr} \eta(x)v(x)=\alpha\abs{\eta(x)} \quad\text{and}\quad \abs{v(x)} \le \alpha \qquad (x \in \Omega). \qedhere \end{equation} \end{proof} \begin{remark} If $(v,\eta)$ satisfy the strict complementarity condition $\abs{v(x)}<\alpha$ or $\abs{\eta(x)}>0$ for a.\,e.~$x \in \Omega$, the degenerate second and third cases in \eqref{eq:d-subdiff-indicator-1d} (corresponding to the gray areas in \cref{fig:indicator}) do not occur, and the cone simplifies to \[ \DerivConeF[v|\eta] = \{ z \in L^2(\Omega) \mid z(x)=0 ~\text{if}~ \abs{v(x)}=\alpha,\, x \in \Omega \}.
\] Note that points $x\in\Omega$ where a degenerate case occurs are precisely those where there is no \emph{graphical regularity} of $\partial f$ at $(v(x), \eta(x))$. We refer to \cite[Thm.~8.40]{Rockafellar:1998} for the definition of this concept, which we do not require in the present work. \end{remark} \subsubsection{\texorpdfstring{$\scriptstyle L^1(\Omega;\mathbb{R}^m)$}{L¹(Ω;ℝᵐ)} norm} The following lemma is useful for computing $D[\partial F^*]$ for the problem \eqref{eq:linffit_problem}. Its claim in the one-dimensional case ($m=1$) is illustrated in \cref{fig:1norm-1d}. \begin{lemma} \label{lemma:one-norm-findim} With $z \in \mathbb{R}^m$, let $f^*(z)=\norm{z}_2$. Then $\partial f^*$ is proto-differentiable with \begin{equation} \label{eq:d-subdiff-1norm} D(\partial f^*)(z|\zeta)(\dirz) = \begin{cases} \left(\frac{I-(z \otimes z) / \norm{z}^2}{\norm{z}}\right) \dirz, & z \ne 0,\, \zeta\norm{z}=z, \\ \{\zeta\}^\perp, & z=0,\, \dirz \ne 0,\, \zeta\norm{\dirz}=\dirz, \\ \polar{\{\zeta\}}, & z=0,\, \dirz = 0,\, \norm{\zeta} = 1, \\ \mathbb{R}^m, & z=0,\, \dirz = 0,\, \norm{\zeta} < 1, \\ \emptyset, & \text{otherwise.} \end{cases} \end{equation} In particular, if $m=1$, then \begin{align} \label{eq:d-subdiff-1norm-1d} D(\partial f^*)(z|\zeta)(\dirz) &= \begin{cases} \{0\}, & z \ne 0,\, \zeta = \sign z, \\ \{0\}, & z=0,\, \dirz \in (0, \infty)\zeta,\, \abs{\zeta} = 1, \\ (-\infty,0]\zeta, & z=0,\, \dirz = 0,\, \abs{\zeta} = 1, \\ \mathbb{R}, & z=0,\, \dirz = 0,\, \abs{\zeta} < 1, \\ \emptyset, & \text{otherwise,} \end{cases} \intertext{as well as} \label{eq:d-subdiff-1norm-1d-conv} \widetilde{D(\partial f^*)}(z|\zeta)(\dirz) &= \begin{cases} \{0\}, & z \ne 0,\, \zeta = \sign z, \\ (-\infty,0]\zeta, & z=0,\, \dirz \in [0, \infty)\zeta,\, \abs{\zeta} = 1, \\ \mathbb{R}, & z=0,\, \dirz = 0,\, \abs{\zeta} < 1, \\ \emptyset, & \text{otherwise.} \end{cases} \end{align} \end{lemma} \begin{figure} \centering \begin{subfigure}{0.30\textwidth} \centering \begin{asy} unitsize(50,50); real
l=1.25; real eps=0.3; pair gfstart=(-l, -1); pair gfend=(l, 1); path gf=gfstart--(0, -1)--(0, 1)--gfend; draw(gf, dashed, Arrows); draw((0, -eps)--(0, eps), linewidth(1.1), Arrows); dot((0, 0)); label("(iv)", (0,0), 1.5*W); draw((-eps,-1)--(0,-1)--(0, -1+eps), linewidth(1.1), Arrows); dot((0, -1)); label("(iii)", (0,-1+eps), 1.5*E); label("(ii)", (-eps,-1), 1.5*S); draw((0.5*l-eps, 1)--(0.5*l+eps, 1), linewidth(1.1), Arrows); dot((0.5*l, 1)); label("(i)", (0.5*l,1), 1.5*N); \end{asy} \caption{$D(\partial f^*)(z|\zeta)$} \end{subfigure} \hfill \begin{subfigure}{0.30\textwidth} \centering \begin{asy} unitsize(50,50); real l=1.25; real eps=0.3; pair gfstart=(-l, -1); pair gfend=(l, 1); path gf=gfstart--(0, -1)--(0, 1)--gfend; draw(gf, dashed, Arrows); draw((0, -eps)--(0, eps), linewidth(1.1), Arrows); dot((0, 0)); fill((-eps, -1)--(0,-1)--(0, -1+eps)..controls (-eps/sqrt(2), -1+eps/sqrt(2))..cycle, gray(.8)); draw((-eps, -1)--(0,-1)--(0, -1+eps), linewidth(1.1), Arrows); dot((0, -1)); draw((0.5*l-eps, 1)--(0.5*l+eps, 1), linewidth(1.1), Arrows); dot((0.5*l, 1)); // For alignment label("$\phantom{(ii)}$", (-eps,-1), 1.5*S); label("$\phantom{(i)}$", (0.5*l,1), 1.5*N); \end{asy} \caption{$\widetilde{D(\partial f^*)}(z|\zeta)$} \end{subfigure} \hfill \begin{subfigure}{0.30\textwidth} \centering \begin{asy} unitsize(50,50); real l=1.25; real eps=0.3; pair gfstart=(-l, -1); pair gfend=(l, 1); path gf=gfstart--(0, -1)--(0, 1)--gfend; draw(gf, dashed, Arrows); draw((-eps,0)--(eps, 0), linewidth(1.1), Arrows); dot((0, 0)); fill((0, -1-eps)--(0,-1)--(eps, -1)..controls (eps/sqrt(2), -1-eps/sqrt(2))..cycle, gray(.8)); draw((0, -1-eps)--(0,-1)--(eps, -1), linewidth(1.1), Arrows); dot((0, -1)); draw((0.5*l, 1+eps)--(0.5*l, 1-eps), linewidth(1.1), Arrows); dot((0.5*l, 1)); \end{asy} \caption{$\widehat D^*(\partial f^*)(z|\zeta)$} \end{subfigure} \caption{Illustration of the graphical derivative and regular coderivative for $\partial f^*$ with 
$f^*=\abs{\mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip}$. The dashed line is $\graph \partial f^*$. The dots indicate the base points $(z, \zeta)$ where the graphical derivative or coderivative is calculated, and the thick arrows and gray areas the directions of $(\dir z, \dir \zeta)$ relative to the base point. The labels (i) etc. indicate the corresponding case of \eqref{eq:d-subdiff-1norm-1d}. } \label{fig:1norm-1d} \end{figure} \begin{proof} In the case $m=1$, the proto-differentiability of $\partial f^*$ follows from the fact that $f^*$ is piecewise linear and hence twice epi-differentiable; see \cite[Prop.~13.9 \& Thm.~13.40]{Rockafellar:1998}. For general $m \in \mathbb{N}$, we may use the twice epi-differentiability of $f(x) = \iota_{B(0, 1)}(x)$ established in the proof of \cref{lemma:DsubdiffF} and the conjugate relationship in \cite[Thm.~13.21]{Rockafellar:1998} together with \cite[Thm.~13.40]{Rockafellar:1998}. It remains to verify the expressions \eqref{eq:d-subdiff-1norm}--\eqref{eq:d-subdiff-1norm-1d-conv}. We have for any $m\in \mathbb{N}$ that \[ \partial f^*(z)= \begin{cases} \left\{\frac{z}{\norm{z}}\right\}, & z \ne 0, \\ \closure B(0, 1), & z = 0. \end{cases} \] We again proceed by case distinction. \begin{enumerate}[label=(\roman*)] \item For $z \ne 0$, necessarily $D(\partial f^*)(z|\zeta)=\emptyset$ unless $\zeta=z/\norm{z}$, which yields the last case. \item If $z \ne 0$ and $\zeta=z/\norm{z}$, for any \[ \alt{z}=z+t \dir{\alt{z}} \] with $\alt{z} \to z$ and $t \searrow 0$, we have also $\partial f^*(\alt{z})=\alt{z}/\norm{\alt{z}}$. The first case in \eqref{eq:d-subdiff-1norm} now follows immediately from computing the outer limit \begin{equation} \limsup_{t \searrow 0, \dir{\alt{z}} \to \dirz} \frac{\partial f^*(\alt{z})-\zeta}{t} =\limsup_{t \searrow 0, \dir{\alt{z}} \to \dirz} \frac{\alt{z}/\norm{\alt{z}}-\zeta}{t} =\grad\left(\frac{z}{\norm{z}}\right) \dirz.
\end{equation} \item If $z=0$ and $\dirz \ne 0$, then $\alt{z} \ne 0$ and $\alt{z}/\norm{\alt{z}}=\dir{\alt{z}}/\norm{\dir{\alt{z}}}$. Therefore \begin{equation} \limsup_{t \searrow 0, \dir{\alt{z}} \to \dirz} \frac{\partial f^*(\alt{z})-\zeta}{t} =\limsup_{t \searrow 0, \dir{\alt{z}} \to \dirz} \frac{\dir{\alt{z}}/\norm{\dir{\alt{z}}}-\zeta}{t} \end{equation} will only have limits if $\zeta$ lies on the boundary of $B(0, 1)$, and indeed $\zeta=\dirz/\norm{\dirz}$. This gives the limit $\{\zeta\}^\perp$, i.e., the second case. \item If $z=0$ and $\dirz=0$, then we may pick $\zeta \in \closure B(0, 1)$ arbitrarily by choosing also $\dir{\alt{z}}=0$. If $\norm{\zeta}=1$, then we obtain the limit \[ \limsup_{t \searrow 0} \frac1{t}(B(0,1)-\zeta)=\{\dir\zeta \in \mathbb{R}^m \mid \iprod{\dir\zeta}{\zeta}\le 0\}=\polar{\{\zeta\}} \] and hence the third case. \item In the same situation, choosing $\norm{\zeta}<1$ gives the limit $\mathbb{R}^m$ and hence the fourth case. \end{enumerate} Finally, \eqref{eq:d-subdiff-1norm-1d} is a trivial specialization of \eqref{eq:d-subdiff-1norm}, while regarding $\widetilde{D(\partial f^*)}(z|\zeta)$ with $m=1$, we see that only the case $z=0$ and $\abs{\zeta}=1$ is split into two sub-cases in \eqref{eq:d-subdiff-1norm-1d}. These produce an altogether non-convex $\graph[D(\partial f^*)(z|\zeta)]$. Taking the convexification of this set yields \eqref{eq:d-subdiff-1norm-1d-conv}; cf.~\cref{fig:1norm-1d}. \end{proof} \begin{corollary}\label{cor:g-form-l1norm} Let $f^*(z) \coloneqq \delta\abs{z}$ and \[ F^*(v) \coloneqq \int_\Omega f^*(v(x)) \,d x \qquad (v \in L^2(\Omega)).
\] Then \begin{align} \widetilde{D[\partial F^*]}(v|\eta)(\dirv)&= \begin{cases} \DerivConeF[v|\eta]^\circ, & \dirv \in \DerivConeF[v|\eta] \text{ and } \eta \in \partial F^*(v),\\ \emptyset, & \text{otherwise}, \end{cases} \intertext{and} \frechetCod{[\partial F^*]}(v|\eta)(\dir\eta)&= \begin{cases} {\DerivConeF[v|\eta]}^\circ, & -\dir\eta \in \DerivConeF[v|\eta] \text{ and } \eta \in \partial F^*(v),\\ \emptyset, & \text{otherwise}, \end{cases} \end{align} for the cone \begin{align} \label{eq:linfty-fitting-z} {\DerivConeF[v|\eta]} &= \{ z \in L^2(\Omega) \mid z(x)\eta(x) \ge 0 \text{ if } v(x)=0\ \text{ and }\ (\delta-\abs{\eta(x)})z(x)=0 \}, \intertext{and its polar} \label{eq:linfty-fitting-nu} \polar{{\DerivConeF[v|\eta]}} &= \{ \nu \in L^2(\Omega) \mid \nu(x)\eta(x)\le 0 \text{ if } \abs{\eta(x)}=\delta\ \text{ and }\ v(x)\nu(x)=0 \}. \end{align} \end{corollary} \begin{proof} The claim about the graphical derivative follows from \cref{corollary:g-form-graphderiv} and \cref{lemma:one-norm-findim}, using the fact that $f^*$ is finite-valued and Lipschitz continuous and hence a normal integrand. The regular coderivative formula follows from the more general \cref{prop:frechetcod-cone-operator} in the appendix. To derive the explicit form of the polar cone $\polar{\DerivConeF[v|\eta]}$, we employ the fact that $D[\partial F^*](v|\eta)(\dirv)$ is non-empty if and only if \begin{equation} \label{eq:linfty-fitting-constr} \abs{\eta(x)}\le \delta \quad\text{and}\quad v(x)\eta(x)=\delta\abs{v(x)}. \qedhere \end{equation} \end{proof} \begin{remark} If $(v,\eta)$ satisfy the strict complementarity condition $v(x) \ne 0$ or $\abs{\eta(x)}<\delta$ for a.\,e.~$x \in \Omega$, the degenerate second and third cases in \eqref{eq:d-subdiff-1norm-1d} (corresponding to the gray areas in \cref{fig:1norm-1d}) do not occur, and the cone simplifies to \[ \DerivConeF[v|\eta] = \{ z \in L^2(\Omega) \mid z(x)=0 ~\text{if}~ v(x)=0,\, x \in \Omega \}.
\] Again, points $x\in\Omega$ where a degenerate case occurs are precisely those where graphical regularity fails to hold for $\partial f^*$ at $(v(x), \eta(x))$. \end{remark} \subsubsection{Spatially varying integrands} Let $\alpha, \beta \in L^2(\Omega)$ with $\alpha(x) < \beta(x)$ for a.\,e.~$x \in \Omega$. Define \[ f(x, z) \coloneqq \iota_{[\alpha(x), \beta(x)]}(z) \qquad (x \in \Omega; z \in \mathbb{R}). \] This example is useful for spatially or temporally varying ``tube'' constraints, which arise in the regularization of inverse problems subject to variable noise levels~\cite{Dong:2011}. The indicator function of temporally variable constraints also appears in Moreau's sweeping process, which is a model for several phenomena from nonsmooth mechanics such as elastoplasticity~\cite{Kunze:2000}. Due to the measurability of $\alpha$ and $\beta$, the integrand $f$ is proper, convex and normal \cite[Ex.~14.32]{Rockafellar:1998}, such that the subdifferential $\partial f(x,\mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)$ can be computed pointwise. Furthermore, $f$ is a.\,e. proto-differentiable as the indicator function of the convex polyhedral set $[\alpha(x),\beta(x)]$; see again \cite[Ex.~13.17 \& Thm.~13.40]{Rockafellar:1998}. By simple pointwise application of \cref{lemma:DsubdiffF} we can thus compute $D[\partial f(x,\mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)]$. We therefore deduce the applicability of \cref{corollary:g-form-graphderiv} to \[ F(v) = \int_\Omega f(x, v(x)) \,d x, \] and obtain a pointwise characterization of $D(\partial F)$ similar to \cref{cor:g-form-indicator}. Clearly, we can analogously modify \cref{cor:g-form-l2norm} (squared $L^2(\Omega;\mathbb{R}^m)$ norm) and \cref{cor:g-form-l1norm} ($L^1(\Omega;\mathbb{R}^m)$ norm) by, e.g., introducing a spatially varying weight in each norm. 
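The pointwise case distinctions above are easy to get wrong, so a quick numerical cross-check can be reassuring. The following self-contained Python sketch (our own illustration, not part of the analysis; the `numpy` dependency, the ray truncation $S$, and all sampling parameters are assumptions) approximates $\graph \partial f$ for $f=\iota_{[-1,1]}$ and collects difference quotients $(\zeta'-\zeta)/t$ along primal directions $(z'-z)/t \approx \dirz$, reproducing cases (i), (iii) and (iv) of \eqref{eq:d-subdiff-indicator-1d}:

```python
import numpy as np

# Illustrative sanity check (ours) of the 1d formula for D(∂f)(z|ζ)(ż)
# with f = ι_[−1,1], i.e. α = 1.  We sample
#   graph ∂f = ([−1,1]×{0}) ∪ ({1}×[0,∞)) ∪ ({−1}×(−∞,0]),
# truncating the rays at S, and collect difference quotients (ζ'−ζ)/t
# over graph points whose primal quotient (z'−z)/t approximates ż.

alpha, S = 1.0, 50.0

def graph_points(n=4001):
    zs = np.linspace(-alpha, alpha, n)
    flat = np.stack([zs, np.zeros(n)], axis=1)          # [−1,1] × {0}
    rays = np.linspace(0.0, S, n)
    right = np.stack([np.full(n, alpha), rays], axis=1)  # {1} × [0,S]
    left = np.stack([np.full(n, -alpha), -rays], axis=1) # {−1} × [−S,0]
    return np.concatenate([flat, right, left])

def dzeta_quotients(z, zeta, dz, t=1e-2, tol=1e-1):
    G = graph_points()
    near = np.abs((G[:, 0] - z) / t - dz) <= tol        # primal direction ≈ ż
    return (G[near, 1] - zeta) / t

# Case (i):   z=1, ζ=1 ∈ (0,∞)z, ż=0  →  all of ℝ (unbounded quotients).
case_i = dzeta_quotients(1.0, 1.0, 0.0)
# Case (iii): z=1, ζ=0, zż<0          →  {0}.
case_iii = dzeta_quotients(1.0, 0.0, -1.0)
# Case (iv):  |z|<1, ζ=0              →  {0}.
case_iv = dzeta_quotients(0.5, 0.0, 0.0)
```

In case (i) the quotients fill an ever larger range as $t \searrow 0$, while in cases (iii) and (iv) only the zero quotient survives, matching the lemma.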
\section{Stability of variational inclusions}\label{sec:stability_vi} To pave the way towards studying the stability of saddle point systems in the following section, we now recall general concepts for the study of variational inclusions and develop general results that quickly specialize to saddle point systems in $L^2$. \subsection{Metric regularity and the Mordukhovich criterion} Our stability analysis is based on the following set-valued Lipschitz property \cite{Aubin:1990,Rockafellar:1998,Mordukhovich:2006}, also known as the \emph{Aubin property} of $\inv R$. \begin{definition} We say that the set-valued mapping $R: Q \rightrightarrows W$ is \emph{metrically regular} at $\realopt w$ for $\realopt q$ if $\graph R$ is locally closed and there exist $\rho, \delta, \ell > 0$ such that \begin{equation} \label{eq:inverse-aubin} \inf_{p\,:\, w \in R(p)} \norm{q-p} \le \ell \inf_{\alt{w}\, \in\, R(q)} \norm{w-\alt{w}} \quad\text{ for any $q,w$ such that } \norm{q-\realopt{q}} \le \delta, \, \norm{w-\realopt{w}} \le \rho . \end{equation} We denote the infimum over valid constants $\ell$ by $\lip{\inv R}(\realopt w|{\realopt{q}})$, or $\lip{\inv R}$ for short when there is no ambiguity about the point $(\realopt w, {\realopt{q}})$. \end{definition} A simplified view, indicating why this concept is useful, is obtained by taking ${\realopt{q}}$ satisfying $0 \in R({\realopt{q}})$. Setting $q={\realopt{q}}$ and $\realopt{w}=0$ in \eqref{eq:inverse-aubin}, we then obtain \begin{equation} \label{eq:sensitivity1} \inf_{p\,:\, w \in R(p)} \norm{{\realopt{q}}-p} \le \ell_{\inv R}(0|{\realopt{q}}) \norm{w} \quad\text{ for any $w$ such that } \norm{w} \le \rho . \end{equation} Therefore, if we perturb the variational inclusion $0 \in R({\realopt{q}})$ -- typically an optimality condition -- by a small linear perturbation $w$, we will still find a nearby solution to the perturbed problem.
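To make the definition tangible, here is a toy finite-dimensional illustration (our own example, not from the development above; the mapping, base points and the sampled constant $\ell=0.6$ are assumptions): the single-valued mapping $R(q)=\{q^2\}$ on the real line satisfies \eqref{eq:inverse-aubin} around the graph point $(q,w)=(1,1)$ with modulus close to $1/2$, but not around $(0,0)$. A minimal Python sketch:

```python
import numpy as np

# Toy illustration (ours) of the metric regularity estimate for the
# single-valued map R(q) = {q²} on ℝ.  Near the graph point (1, 1):
#   dist(q, R⁻¹(w)) ≤ ℓ · dist(w, R(q))   with ℓ ≈ 1/|R'(1)| = 1/2.

rng = np.random.default_rng(0)
q = 1.0 + 0.1 * rng.uniform(-1, 1, 10_000)
w = 1.0 + 0.1 * rng.uniform(-1, 1, 10_000)

# R⁻¹(w) = {±√w}; the nearer root gives dist(q, R⁻¹(w)).
dist_inv = np.minimum(np.abs(q - np.sqrt(w)), np.abs(q + np.sqrt(w)))
residual = np.abs(w - q**2)          # dist(w, R(q)) for single-valued R
ok = dist_inv <= 0.6 * residual      # ℓ = 0.6 > 1/2 suffices locally

# Failure at (0, 0): with q = 0 the quotient √w / w = w^(−1/2) is
# unbounded as w ↘ 0, so no finite ℓ works there.
ws = 10.0 ** -np.arange(1, 8)
quotients = np.sqrt(ws) / ws
```

The sampled inequality holds throughout the neighborhood, whereas near the origin the quotients blow up, so metric regularity fails exactly where the derivative of $R$ degenerates.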
We will later see that for our problems of interest, we can encode variations in data and in an additional Moreau-Yosida regularization parameter into $w$. We therefore need to estimate $\ell_{\inv R}$, for which the following \emph{Mordukhovich criterion} \cite{Mordukhovich:1992} will be useful. It is also contained in \cite[Thm.~4.7]{Mordukhovich:2006} and simplified here to our Hilbert space setting from the original Asplund space setting. \begin{theorem} \label{thm:ell} Let $R: Q \rightrightarrows W$ be a set-valued mapping between Hilbert spaces $Q$ and $W$. Suppose $\graph R$ is locally closed around $(q, w) \in \graph R$. Then \[ \lip{R}(q|w) =\inf_{t>0} \sup \left\{ \norm{\frechetCod R(\alt{q}|\alt{w})} \,\middle|\, \alt{q} \in B(q, t),\, \alt{w} \in R(\alt{q}) \isect B(w, t) \right\}. \] \end{theorem} Here, for positively homogeneous $M: W \rightrightarrows Q$, we have defined \[ \norm{M} \coloneqq \sup\{ \norm{q} \mid q \in M(w),\, \norm{w} \le 1\}. \] If $R$ satisfies the regularity assumption $\frechetCod R(q|w) = [DR(q|w)]^{*+}$ (which is the case for pointwise mappings due to \cref{thm:p-graph-deriv}), we may translate \cref{thm:ell} to be expressed in terms of the graphical derivative $D R$, where by the second equation in \eqref{eq:upper-adj-cod-tilde} it suffices to consider the convexification $\widetilde{D R}$. This is the content of the next proposition. \begin{proposition} \label{prop:lip-estim-regular} Let $R: Q \rightrightarrows W$ be a set-valued mapping between Hilbert spaces $Q$ and $W$. Suppose $\graph R$ is locally closed around $(q, w) \in \graph R$ and \begin{equation} \label{eq:somesetmap-cod-d-upper-adjoint} \frechetCod R(q|w) = [DR(q|w)]^{*+}. 
\end{equation} Then \begin{equation} \label{eq:lip-inv} \lip{\inv R}(w|q) = \inf_{t>0} \sup \left\{ \tildelip{\inv R}(\alt{w}|\alt{q}) \,\middle|\, \alt{w} \in B(w, t),\, \alt{q} \in B(q, t),\, \alt{w} \in R(\alt{q}) \right\}, \end{equation} with \begin{equation} \label{eq:tildelip-inv-r-2} \tildelip{\invR}(\alt{w}|\alt{q}) \coloneqq \sup\left\{ \norm{\dir{w}} \,\middle|\, \begin{array}{l} \dir{q} \in Q,\, \dir{w} \in W,\, \norm{\dir{q}} \le 1, \text{ satisfying } \\ \iprod{\dir{q}}{\dir{\alt{q}}} \le \iprod{\dir{w}}{\dir{\alt{w}}} ~\text{when } \dir{\alt{w}} \in \widetilde{DR}(\alt{q}|\alt{w})(\dir{\alt{q}}) \end{array} \right\}. \end{equation} \end{proposition} \begin{proof} From \cref{def:frechetcod} of $\frechetCod R(q|w)$ and $\frechetCod \inv R(w|q)$ through $\widehat N((q,w); \graph R)$, we observe that \[ \dir{w} \in \frechetCod \inv R(w|q)(\dir{q}) \iff - \dir{q} \in \frechetCod R(q|w)(-\dir{w}). \] Applied to $\inv R$, \cref{thm:ell} therefore gives \begin{equation} \label{eq:inv-s-lip-estimate-coderivative} \begin{aligned}[t] \lip{\inv R}(w|q) & =\inf_{t>0} \sup \left\{ \norm{\frechetCod \inv R(\alt{w}|\alt{q})} \,\middle|\, \alt{w} \in B(w, t),\, \alt{q} \in \inv R(\alt{w}) \isect B(q, t) \right\} \\ & =\inf_{t>0} \sup \left\{ \norm{\inv{[\frechetCod R(\alt{q}|\alt{w})]}} \,\middle|\, \alt{w} \in B(w, t),\, \alt{q} \in B(q, t),\, \alt{w} \in R(\alt{q}) \right\} \\ & =\inf_{t>0} \sup \left\{ \norm{\dir{w}} \,\middle|\, \begin{array}{l} \dir{q} \in \frechetCod R(\alt{q}|\alt{w})(\dir{w}),\, \norm{\dir{q}} \le 1,\, \\ \alt{w} \in B(w, t),\, \alt{q} \in B(q, t),\, \alt{w} \in R(\alt{q}) \end{array} \right\} \\ & =\inf_{t>0} \sup \left\{ \tildelip{\inv R}(\alt{w}|\alt{q}) \,\middle|\, \alt{w} \in B(w, t),\, \alt{q} \in B(q, t),\, \alt{w} \in R(\alt{q}) \right\}, \end{aligned} \end{equation} where \begin{equation} \label{eq:tilde-lip-inv-r} \tildelip{\inv R}(\alt{w}|\alt{q}) \coloneqq \sup \left\{ \norm{\dir{w}} \,\middle|\, \dir{q}
\in \frechetCod R(\alt{q}|\alt{w})(\dir{w}),\, \norm{\dir{q}} \le 1 \right\}. \end{equation} Appealing to \eqref{eq:somesetmap-cod-d-upper-adjoint} and the fact that \[ [DR(q|w)]^{*+}= [\widetilde{DR}(q|w)]^{*+}, \] now establishes the claim with the expression \eqref{eq:tildelip-inv-r-2} for $\tildelip{\inv R}(\alt{w}|\alt{q})$. \end{proof} \subsection{Graphical derivatives expressed with linear operators and cones} \label{sec:linear-cone} We now derive necessary and sufficient conditions for the Aubin property to hold for variational inclusions involving second-order set-valued derivatives of pointwise functionals. As seen in \cref{sec:pointwise:examples}, these commonly have the structure of a sum of a linear operator and a cone. In fact, for the following analysis, it suffices that the graphical derivatives merely contain such a sum in order to derive upper bounds; this will be important for treating discretization by projection in \cref{sec:discretization}. We therefore assume that $W=Q=L^2(\Omega; \mathbb{R}^N)$ and that \begin{equation} \label{eq:linear-polar-form} \widetilde{D R}(q|w)(\dir{q}) \supset \begin{cases} T_q \dir{q} + \polar{\DerivCone[q|w]}, & \dir{q} \in \DerivCone[q|w], \\ \emptyset, & \dir{q} \not\in \DerivCone[q|w], \end{cases} \end{equation} for some linear operator $T\coloneqq T_q: Q \to Q$, dependent on $q$ but not $w$, and a cone $\DerivCone \coloneqq \DerivCone[q|w] \subset Q$, dependent on both $q$ and $w$. Here we recall from \eqref{eq:polar} that $\polar\DerivCone$ is the \emph{polar cone} of $\DerivCone$. Although it will not be needed in our analysis, an explicit characterization of the regular coderivatives of set-valued mappings satisfying \eqref{eq:linear-polar-form} (with equality) is derived in \cref{sec:coderivative} for completeness.
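For orientation, we note (an observation of ours, consistent with \cref{cor:g-form-indicator}, where the inclusion moreover holds as an equality) that the pointwise examples of \cref{sec:pointwise:examples} already fit this structure with a vanishing linear part: for $\eta \in \partial F^*(v)$,

```latex
\[
  \widetilde{D[\partial F^*]}(v|\eta)(\dir{v})
  = \begin{cases}
      0 \cdot \dir{v} + \polar{\DerivConeF[v|\eta]}, & \dir{v} \in \DerivConeF[v|\eta], \\
      \emptyset, & \dir{v} \notin \DerivConeF[v|\eta],
    \end{cases}
\]
```

that is, \eqref{eq:linear-polar-form} holds with $T_q = 0$ and $\DerivCone[q|w] = \DerivConeF[v|\eta]$; a nontrivial linear part $T_q$ only enters for the saddle point mappings considered later.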
\enlargethispage{1cm} Following the reasoning in \cite[Prop.~4.1]{Valkonen:2014}, we may, using the structural assumption \eqref{eq:linear-polar-form}, continue from \cref{prop:lip-estim-regular} to derive \begin{equation} \label{eq:tildelip-inv-h-3} \begin{aligned}[t] \tildelip{\inv R}(w|q) & \le \sup\left\{ \norm{\dir{w}} \,\middle|\, \begin{array}{l} \dir{q} \in Q,\, \dir{w} \in Q,\, \norm{\dir{q}} \le 1, ~\text{satisfying} \\ \qquad \iprod{\dir{w}}{T \dir{\alt{q}}+\dir{\alt{p}}} \le \iprod{\dir{q}}{\dir{\alt{q}}} \\ \hfill\text{for } \dir{\alt{q}} \in \DerivCone,\, \dir{\alt{p}} \in \polar\DerivCone \end{array} \right\} \\[0.5em] & = \sup\left\{ \norm{\dir{w}} \,\middle|\, \begin{array}{l} \dir{q} \in Q,\, \dir{w} \in \DerivCone,\, \norm{\dir{q}} \le 1, ~\text{satisfying} \\ \hfill \iprod{\dir{w}}{T \dir{\alt{q}}} \le \iprod{\dir{q}}{\dir{\alt{q}}} \text{ for } \dir{\alt{q}} \in \DerivCone \end{array} \right\} \\[0.5em] & = \sup \left\{ \norm{\dir{w}} \,\middle|\, \begin{array}{l} \dir{q} \in Q,\, \dir{w} \in \DerivCone,\, \norm{\dir{q}} \le 1, ~\text{satisfying} \\ \hfill \iprod{T^* \dir{w}-\dir{q}}{\dir{\alt{q}}} \le 0 \text{ for } \dir{\alt{q}} \in \DerivCone \end{array} \right\} \\[0.5em] & = \sup \left\{ \norm{\dir{w}} \,\middle|\, \dir{q} \in Q,\, \dir{w} \in \DerivCone,\, \norm{\dir{q}} \le 1,\, T^* \dir{w}-\dir{q} \in \polar\DerivCone \right\} \\[.3em] & = \sup \left\{ \norm{\dir{w}} \,\middle|\, \dir{w} \in \DerivCone,\, \inf_{z \in \polar\DerivCone} \norm{T^* \dir{w}-z} \le 1 \right\}. \end{aligned} \end{equation} We illustrate this expression geometrically in \cref{fig:polar-tildelip}. Observe also that if \eqref{eq:linear-polar-form} holds as an equality, then so does the first inequality in \eqref{eq:tildelip-inv-h-3}. That is, in this case \[ \tildelip{\inv R}(w|q) = \sup \left\{ \norm{\dir{w}} \,\middle|\, \dir{w} \in \DerivCone,\, \inf_{z \in \polar\DerivCone} \norm{T^* \dir{w}-z} \le 1 \right\}. 
\] \begin{figure} \centering \asyinclude{polar.asy} \caption{A geometric illustration of the quantity $\tildelip{\inv R}$ computed in \eqref{eq:tildelip-inv-h-3} for \eqref{eq:linear-polar-form}. The dashed line indicates the transformed cone $T^* \DerivCone$. Without loss of generality, we restrict $\dir{w}$ to lie on the unit sphere (dotted), in which case the distance between $T^* \dir{w}$ and the polar cone $\polar\DerivCone$ gives the reciprocal of the Lipschitz constant.} \label{fig:polar-tildelip} \end{figure} \begin{remark} If $\DerivCone$ is a closed subspace, then $\polar \DerivCone=\DerivCone^\perp$, and \eqref{eq:tildelip-inv-h-3} reduces to \begin{equation} \begin{aligned}[t] \tildelip{\inv R}(w|q) & \le \sup \left\{ \norm{\dir{w}} \,\middle|\, \dir{w} \in \DerivCone,\, \norm{P_{\DerivCone} T^* \dir{w}} \le 1 \right\} \\ & = \sup \left\{ \norm{P_{\DerivCone} \dir{w}} \,\middle|\, \dir{w} \in Q,\, \norm{P_{\DerivCone} T^* P_{\DerivCone} \dir{w}} \le 1 \right\}. \end{aligned} \end{equation} \end{remark} We can use \eqref{eq:tildelip-inv-h-3} and the expansions above to estimate $\tildelip{\inv R}(\alt{w}|\alt{q})$ for $R=H_{{\bar{u}}}$ and $R=R_0$. To study stability and metric regularity, however, we still need to pass to \[ \lip{\inv R}(w|q) = \inf_{t>0} \sup \left\{ \tildelip{\inv R}(\alt{w}|\alt{q}) \,\middle|\, \alt{w} \in B(w, t),\, \alt{q} \in B(q, t),\, \alt{w} \in R(\alt{q}) \right\}. \] This in essence involves a uniform $c>0$ in the condition \[ \inf_{z \in \polar{\DerivCone(\alt{q}|\alt{w})}} \norm{T_{\alt{q}}^* \dir{w}-z} \ge c\norm{\dir{w}} \qquad (\dir{w} \in \DerivCone(\alt{q}|\alt{w})) \] for all $(\alt{q}, \alt{w})$ close to $(q, w)$. If we assume continuity of the mapping $\alt{q} \mapsto T_{\alt{q}}$, we can simplify this condition. The following lemma prepares the way for the stability analysis of saddle points in the next section (cf.~\eqref{eq:t-split} below).
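Before the formal statement, a hypothetical finite-dimensional analogue (our own; the choices $\DerivCone=\mathbb{R}^2_+$, $T=T^*=\operatorname{diag}(a,b)$ and the numbers are assumptions for illustration) shows how such a uniform $c$ arises: for $\DerivCone=\mathbb{R}^2_+$ one has $\polar\DerivCone=-\mathbb{R}^2_+$, the distance $\inf_{z\in\polar\DerivCone}\norm{T^*\dir{w}-z}$ equals the norm of the positive part of $T^*\dir{w}$, and the best constant is $c=\min(a,b)$, giving the Lipschitz bound $1/\min(a,b)$:

```python
import numpy as np

# Hypothetical finite-dimensional sketch (ours) of the uniform lower bound
#   inf_{z ∈ K°} ‖T* ẇ − z‖ ≥ c ‖ẇ‖   for ẇ ∈ K,
# with K = ℝ²₊ (so K° = −ℝ²₊) and T = T* = diag(a, b), a, b > 0.

a, b = 2.0, 0.5
theta = np.linspace(0.0, np.pi / 2, 10001)      # unit vectors in K
dw = np.stack([np.cos(theta), np.sin(theta)])   # ‖ẇ‖ = 1, ẇ ≥ 0
Tdw = np.diag([a, b]) @ dw

# Projecting onto the nonpositive orthant keeps min(x, 0), so the distance
# to K° is the norm of the positive part of T* ẇ.
dist = np.linalg.norm(np.maximum(Tdw, 0.0), axis=0)
c_est = dist.min()   # expected: min(a, b), hence the bound 1/min(a, b)
```

The cone directions attaining the minimum (here, the coordinate axis of the small diagonal entry) are exactly those along which the stability estimate deteriorates.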
\begin{lemma} \label{lemma:general-limit-polar-projection-lower-bound} Let $q, w \in Q=W=X \times Y$, and suppose that for $(\alt{q}, \alt{w})$ in a neighborhood $U$ of $(q, w)$, $\graph R \isect U$ is closed, \eqref{eq:somesetmap-cod-d-upper-adjoint} holds, and we have \begin{equation} \label{eq:general-dr-expr} \widetilde{DR}(\alt{q}|\alt{w})(\dir{q}) \supset \begin{cases} T_{\alt{q}} \dir{q} + \polar{\DerivCone[\alt{q}|\alt{w}]}, & \dir{q} \in \DerivCone[\alt{q}|\alt{w}], \\ \emptyset, & \dir{q} \not\in \DerivCone[\alt{q}|\alt{w}], \end{cases} \end{equation} for a cone $\DerivCone[\alt{q}|\alt{w}] \subset Y$. In addition to these structural assumptions, assume the continuity at $q$ of $\alt{q} \mapsto T_{\alt{q}}$, and for some $c>0$ the bound \begin{equation} \label{eq:grbound} a(q|w; R) \coloneqq \sup_{t>0} \inf_{\substack{(\dir{w}, z) \in \DerivConeDXt[q|w]{t}{R}, \\ \dir{w} \ne 0}} \frac{\norm{T_{q}^* \dir{w}-z}}{\norm{\dir{w}}} \ge c, \end{equation} where \begin{align} \label{eq:general-vt} \DerivConeDXt[q|w]{t}{R} & \coloneqq \Union\left\{ \DerivCone[\alt{q}|\alt{w}] \times \polar{\DerivCone[\alt{q}|\alt{w}]} \,\middle|\, \alt{w} \in R(\alt{q}),\, \norm{\alt{q}-q} < t,\, \norm{\alt{w}-w} < t \right\} \subset Y^2. \end{align} Then \begin{equation} \label{eq:lip-inv-r-bound} \lip{\inv R}(w|q) \le \inv c. \end{equation} Moreover, if \eqref{eq:general-dr-expr} holds as an equality, then $\ell_{\inv R}(w|q) < \infty$ if and only if \( a(q|w; R) > 0\). \end{lemma} \begin{proof} Suppose \eqref{eq:grbound} holds, and pick $c_1 \in (0, c)$. 
Whenever $t>0$ is small enough and $\alt{w}$ and $\alt{q}$ satisfy \[ \alt{w} \in R(\alt{q}),\quad \norm{q-\alt{q}}<t\quad\text{and}\quad \norm{w-\alt{w}}<t, \] the bound \eqref{eq:grbound}, the continuity of $\alt{q} \mapsto T_{\alt{q}}$, and the inclusion \[ \DerivCone(\alt{q}|\alt{w}) \times \polar{\DerivCone(\alt{q}|\alt{w})} \subset \DerivConeDXt[q|w]{t}{R}, \] guarantee the estimate \[ \norm{T_{\alt{q}}^* \dir{w} - z} \ge c_1 \norm{\dir{w}} \qquad (\dir{w} \in \DerivCone(\alt{q}|\alt{w}),\, z \in \polar{\DerivCone(\alt{q}|\alt{w})} ). \] The latter says that \[ \tildelip{\inv R}(\alt{w}|\alt{q}) \le \inv c_1. \] By \eqref{eq:tildelip-inv-h-3} and \eqref{eq:inv-s-lip-estimate-coderivative}, therefore \[ \lip{\inv R}(w|q) \le \sup\{ \tildelip{\inv R}(\alt{w}|\alt{q}) \mid \alt{w} \in R(\alt{q}),\, \norm{q-\alt{q}}<t,\, \norm{w-\alt{w}}<t \} \le \inv c_1. \] Since $c_1 \in (0, c)$ was arbitrary, this proves \eqref{eq:lip-inv-r-bound}. If \eqref{eq:grbound} does not hold, and \eqref{eq:general-dr-expr} holds as an equality, we can, given $\varepsilon>0$, find for every $t>0$ a pair $(\dir{w}, z) \in \DerivConeDXt[q|w]{t}{R}$ with $\dir{w} \ne 0$ such that \[ \norm{T_{q}^* \dir{w}-z}\le\varepsilon\norm{\dir{w}}. \] Thus, by the definition of $\DerivConeDXt[q|w]{t}{R}$, we can also find $\alt{q}$ and $\alt{w}$ satisfying \[ \alt{w} \in R(\alt{q}),\quad \norm{q-\alt{q}}<t\quad\text{and}\quad \norm{w-\alt{w}}<t \] such that \[ \dir{w} \in \DerivCone(\alt{q}|\alt{w})\quad\text{and}\quad z \in \polar{\DerivCone(\alt{q}|\alt{w})}. \] Recalling \eqref{eq:tildelip-inv-h-3}, which holds as an equality under the present assumption that \eqref{eq:general-dr-expr} holds as an equality, this implies that \[ \tildelip{\inv R}(\alt{w}|\alt{q}) \ge \inv\varepsilon. \] Since $t>0$ was arbitrary, it follows that \[ \lip{\inv R}(w|q) \ge \inv\varepsilon.
\] Finally, since $\varepsilon>0$ was arbitrary, it follows that $\ell_{\inv R}(w|q) = \infty$ if \eqref{eq:grbound} does not hold. \end{proof} \section{Stability of non-linear saddle point systems}\label{sec:stability_sp} We now apply the results of the preceding section to saddle points characterizing minimizers of nonsmooth optimization problems of the form \eqref{eq:prob_convex}. In particular, we assume that \begin{equation} \label{eq:pde-f-choice} F^*(v)=\int_\Omega f^*(v(x)) \,d x \end{equation} for a proper, convex, lower semicontinuous $f^*$ and, motivated by the problems considered in the next section, \begin{equation} \label{eq:pde-g-choice} G(u)=\int_\Omega g(u(x)) \,d x \quad \text{for} \quad g(z) = \frac{\alpha}{2} \abs{z}^2. \end{equation} \subsection{Non-linear saddle point systems as variational inclusions} \label{sec:saddle-vi} We first write the first-order optimality conditions \eqref{eq:oc} for the problem \eqref{eq:prob_convex} as an inclusion for a set-valued mapping and compute its derivative. For ${\realopt{q}}=({\realopt{u}},\realoptv)$ to be a saddle point of \eqref{eq:lagrangian}, the Lagrangian $L$ has to satisfy \[ L({\realopt{u}}, v) \le L({\realopt{u}}, \realoptv) \le L(u, \realoptv) \qquad (u \in X,\, v \in Y). \] Since $-L(u, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip)$ is convex, proper, and lower semicontinuous for any $u \in X$, we deduce from the necessary and sufficient first-order optimality condition $0 \in \partial(-L(u, \mkern0.5\thinmuskip{\boldsymbol\cdot}\mkern0.5\thinmuskip))(\realoptv)$ for convex functions together with the sum rule \cite[Prop.~5.6]{Ekeland:1999} that $K({\realopt{u}}) \in \partial F^*(\realoptv)$. We also see that \[ {\realopt{u}} \in \argmin_u~ G(u)+ \iprod{K(u)}{\realoptv}. 
\] Since $G$ is convex and $K \in C^1(X; Y)$, we can apply the calculus of Clarke's generalized derivative (which reduces to the Fréchet derivative and convex subdifferential for differentiable and convex functions, respectively; see, e.g., \cite[Chap.~2.3]{Clarke}) to deduce the overall system of critical point conditions \begin{equation} \label{eq:oc-v2} \left\{ \begin{aligned} K({\realopt{u}}) &\in \partial F^*({\realopt{\Dual}}),\\ - [\grad K({\realopt{u}})]^* {\realopt{\Dual}} &\in \partial G({\realopt{u}}). \end{aligned} \right. \end{equation} This may be rewritten concisely as \begin{equation} \label{eq:oc-h} 0 \in H_{{\realopt{u}}}({\realopt{q}}) \end{equation} for the monotone operator \begin{equation} \label{eq:h-def} H_{\bar{u}}(u, v) \coloneqq \begin{pmatrix} \partial G(u) + \grad K({\bar{u}})^* v \\ \partial F^*(v) -\grad K({\bar{u}}) u - c_{\bar{u}} \end{pmatrix}, \quad \text{where} \quad c_{\bar{u}} \coloneqq K({\bar{u}})-\grad K({\bar{u}}){\bar{u}}. \end{equation} This is defined at an arbitrary base point ${\bar{u}} \in X$ for the linearization of $K$. Here and generally we use the notation \begin{equation} \label{eq:qw} q=(u, v) \in X \times Y \quad\text{and}\quad w=(\xi, \eta) \in X \times Y \end{equation} for combining (primal, dual) and (co-primal, co-dual) variable pairs, respectively. This nomenclature stems from $v$ being the dual variable in the original saddle-point problem, whereas the co-primal and co-dual variables generally satisfy $w \in H_{\bar{u}}(q)$. Alternatively, we may rewrite the critical point conditions \eqref{eq:oc-v2} as \begin{equation} \label{eq:oc-r} 0 \in R_0({\realopt{q}}) \end{equation} for \begin{equation} \label{eq:r0-def} R_0(u, v) \coloneqq H_u(u, v) = \begin{pmatrix} \partial G(u) + \grad K(u)^* v \\ \partial F^*(v) -K(u) \end{pmatrix}. \end{equation} The mapping $R_0$ will be useful for general stability analysis, while $H_{\bar{u}}$ is critical for the primal-dual algorithm of \cite{Valkonen:2014}. 
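To make the affine offset $c_{\bar{u}}$ in \eqref{eq:h-def} concrete, the following one-dimensional sketch may help; the scalar choice $K(u)=u^2$ is assumed purely for illustration and is not among the operators considered later.

```latex
% Illustrative only: X = Y = \mathbb{R} and K(u) = u^2 assumed for this sketch.
\[
  \grad K({\bar{u}}) = 2{\bar{u}},
  \qquad
  c_{\bar{u}} = K({\bar{u}}) - \grad K({\bar{u}}){\bar{u}}
              = {\bar{u}}^2 - 2{\bar{u}}^2 = -{\bar{u}}^2,
\]
% The second component of H_{\bar u} then involves the affine model
\[
  \grad K({\bar{u}}) u + c_{\bar{u}} = 2{\bar{u}} u - {\bar{u}}^2,
\]
% which is the tangent line to K at \bar u: it agrees with K to first order,
% and at u = \bar u it takes the exact value K(\bar u) = \bar u^2.
```

In particular, the offset $c_{\bar{u}}$ is exactly what makes the linearization interpolate $K$ at the base point.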
We can prove the following about these mappings. \begin{proposition} \label{prop:dh} Let $G: X = L^2(\Omega; \mathbb{R}^m) \to (-\infty,\infty]$ and $F^*: Y = L^2(\Omega; \mathbb{R}^n) \to (-\infty,\infty]$ have the form \eqref{eq:g-pointwise-integral} for some regular integrands $g$ and $f^*$, respectively. Let $K \in C^1(X; Y)$. Then $\graph H_{\bar{u}}$ is locally closed, and \begin{equation} \label{eq:dh} D H_{\bar{u}}(q|w)(\dir{q}) = \begin{pmatrix} D{[\partial G]}(u|\xi - \grad K({\bar{u}})^* v)(\dir{u}) + \grad K({\bar{u}})^* \dirv \\ D{[\partial F^*]}(v|\eta + \grad K({\bar{u}})u + c_{\bar{u}})(\dirv) - \grad K({\bar{u}}) \dir{u} \\ \end{pmatrix}, \end{equation} with $D[\partial G]$ and $D[\partial F^*]$ given by \eqref{eq:g-d-form}. Moreover, \eqref{eq:somesetmap-cod-d-upper-adjoint} holds, i.e., \begin{equation} \frechetCod H_{\bar{u}}(q|w) = [DH_{\bar{u}}(q|w)]^{*+}. \end{equation} \end{proposition} \begin{proof} That $\graph H_{\bar{u}}$ is locally closed is an immediate consequence of the lower semicontinuity of the convex functionals $G$ and $F^*$ and the continuity of $\grad K$. The expression \eqref{eq:dh} is an immediate consequence of \cref{cor:d-p-plus-smooth}, where we set \[ h(u,v) \coloneqq \begin{pmatrix} \grad K({\bar{u}})^*v \\ -\grad K({\bar{u}}) u - c_{\bar{u}} \end{pmatrix} \quad\text{and}\quad P(u, v) \coloneqq \begin{pmatrix} \partial G(u) \\ \partial F^*(v) \\ \end{pmatrix}, \] and observe that $h$ is not only smooth but affine with \[ \begin{split} \grad h(u, v)= \begin{pmatrix} 0 & \grad K({\bar{u}})^* \\ -\grad K({\bar{u}}) & 0 \end{pmatrix}. \qedhere \end{split} \] \end{proof} \begin{proposition} \label{prop:dr0} Let $G: X = L^2(\Omega; \mathbb{R}^m) \to (-\infty,\infty]$ and $F^*: Y = L^2(\Omega; \mathbb{R}^n) \to (-\infty,\infty]$ have the form \eqref{eq:g-pointwise-integral} for some regular integrands $g$ and $f^*$, respectively. Let $K \in C^2(X; Y)$.
Then $\graph R_0$ is locally closed, and \begin{equation} \label{eq:dr0} D R_0(q|w)(\dir{q}) = \begin{pmatrix} D{[\partial G]}(u|\xi - \grad K(u)^* v)(\dir{u}) + \grad_u [\grad K(u)^*v]\diru + \grad K(u)^* \dirv \\ D{[\partial F^*]}(v|\eta + K(u))(\dirv) - \grad K(u) \dir{u} \\ \end{pmatrix}, \end{equation} with $D[\partial G]$ and $D[\partial F^*]$ given by \eqref{eq:g-d-form}. Moreover, \eqref{eq:somesetmap-cod-d-upper-adjoint} holds, i.e., \begin{equation} \frechetCod R_0(q|w) = [DR_0(q|w)]^{*+}. \end{equation} \end{proposition} \begin{proof} Again, the fact that $\graph R_0$ is locally closed is an immediate consequence of the lower semicontinuity of the convex functionals $G$ and $F^*$ and the continuity of $\grad K$. The expression \eqref{eq:dr0} is also again an immediate consequence of \cref{cor:d-p-plus-smooth}, where we set \[ h_{0}(u,v) \coloneqq \begin{pmatrix} \grad K(u)^*v \\ -K(u) \end{pmatrix} \quad\text{and}\quad P(u, v) \coloneqq \begin{pmatrix} \partial G(u) \\ \partial F^*(v) \\ \end{pmatrix}, \] and observe that \[ \grad h_0(u,v) =\begin{pmatrix} \grad_u [\grad K(u)^*v] & \grad K(u)^* \\ -\grad K(u) & 0 \end{pmatrix}, \] where we denote $\grad_u [\grad K(u)^*v] \coloneqq \grad(\tilde u \mapsto [\grad K(\tilde u)^*v])(u)$, using the assumption that $K$ is twice differentiable. \end{proof} \begin{remark} Observe from \eqref{eq:dh} and \eqref{eq:dr0} that if ${\bar{u}}=u$, \[ D R_0(q|w)(\dir{q})=D H_u(q|w)(\dir{q})+\begin{pmatrix} \grad_u [\grad K(u)^*v]\diru \\ 0 \end{pmatrix}. \] Comparing \eqref{eq:h-def} and \eqref{eq:r0-def} also shows that in this case $R_0(q)=H_u(q)$. \end{remark} Recalling \eqref{eq:inverse-aubin} and \eqref{eq:sensitivity1}, as well as \cref{prop:lip-estim-regular}, we see that in order to analyze the stability of \eqref{eq:oc}, resp.~\eqref{eq:oc-h}, we have to compute $\tildelip{\inv R_0}(\alt{w}|\alt{q})$ in a neighborhood of $({\realopt{q}}, 0)$.
We will later see that this will be necessary both for ${\bar{u}}={\realopt{u}}$ and ${\bar{u}}=\alt{u}$. \subsection{Lipschitz estimates for saddle points} We now derive sufficient conditions for the Aubin property to hold for saddle points of \eqref{eq:oc-v2}. We proceed in several steps. First, we observe that if both $\widetilde{D[\partial G]}$ and $\widetilde{D[\partial F^*]}$ individually have the form \eqref{eq:linear-polar-form}, then the convexified graphical derivative $\compositeaccents{\widetilde}{D{H_{\bar u}}}(q|w)(\dir{q})$ also has the form \eqref{eq:linear-polar-form}. More precisely, \begin{equation} \label{eq:t-split} T_q = \begin{pmatrix} \bar G_q & \bar K_{\bar{u}}^* \\ - \bar K_{\bar{u}} & \bar F_q \end{pmatrix} \end{equation} for some linear operators $\bar G_q:X \to X$ and $\bar F_q: Y \to Y$ and $\bar K_{\bar{u}}=\grad K({\bar{u}})$, as well as the cone \[ \DerivCone[q|w]= \DerivConeG[u|\xi-\bar K_{\bar{u}}^* v] \times \DerivConeF[v|\eta + \bar K_{\bar{u}} u + c_{\bar{u}}] \subset X \times Y. \] Since $G$ is assumed to be quadratic, we have $\DerivConeG[u|\alt{\xi}] \equiv X$, which gives the more specific structure \begin{align} D [\partial G](u|\alt{\xi})(\dir{u}) &= \bar G_q \dir{u} \\ \intertext{and} \widetilde{D [\partial F^*]}(v|\alt{\eta})(\dirv) &= \begin{cases} \bar F_q \dirv + \polar{\DerivConeF[v|\alt{\eta}]}, & \dirv \in \DerivConeF[v|\alt{\eta}], \\ \emptyset, & \dirv \not\in \DerivConeF[v|\alt{\eta}]. \end{cases} \label{eq:subspace-linear-structure} \end{align} We make, of course, the implicit assumption that $\alt{\xi} \in \partial G(u)$ and $\alt{\eta} \in \partial F^*(v)$; if this does not hold, then the respective graphical derivatives are empty. As we will see, it is difficult in general to guarantee the Aubin property.
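As a one-dimensional sketch of the structure \eqref{eq:subspace-linear-structure} (with the scalar integrand $f^*=\delta_{[-1,1]}$ assumed purely for illustration, so that $\partial f^*(v)=N_{[-1,1]}(v)$ and $\graph \partial f^*$ consists of the segment $(-1,1)\times\{0\}$ and the rays $\{1\}\times[0,\infty)$ and $\{-1\}\times(-\infty,0]$), the graphical derivative at three representative points reads:

```latex
% Illustrative only: f^* = \delta_{[-1,1]} on \mathbb{R}, \partial f^*(v) = N_{[-1,1]}(v).
% Interior point (v \in (-1,1), \alt{\eta} = 0): cone \mathbb{R}, polar \{0\}:
\[
  D[\partial f^*](v|0)(\dir{v}) = \{0\} \qquad (\dir{v} \in \mathbb{R}).
\]
% Non-degenerate boundary point (v = 1, \alt{\eta} > 0): cone \{0\}, polar \mathbb{R}:
\[
  D[\partial f^*](1|\alt{\eta})(\dir{v}) =
  \begin{cases}
    \mathbb{R}, & \dir{v} = 0, \\
    \emptyset,  & \dir{v} \ne 0.
  \end{cases}
\]
% Degenerate boundary point (v = 1, \alt{\eta} = 0): cone (-\infty,0], polar [0,\infty);
% here only the convexified derivative has the stated form:
\[
  \widetilde{D[\partial f^*]}(1|0)(\dir{v}) =
  \begin{cases}
    [0,\infty), & \dir{v} \le 0, \\
    \emptyset,  & \dir{v} > 0.
  \end{cases}
\]
```

Each case is an instance of \eqref{eq:subspace-linear-structure} with $\bar F_q=0$; the Moreau--Yosida regularization discussed next shears the graph of the subdifferential and contributes the extra term $\gamma\dir{v}$ in every case.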
One way of doing so is to consider a Moreau--Yosida regularization of $F$, that is, to replace $F^*$ by \[ F^*_\gamma(v) \coloneqq F^*(v) + \frac{\gamma}{2}\norm{v}^2 \] for some parameter $\gamma>0$; see, e.g., \cite[Chap.~12.4]{Bauschke:2011}. At least at non-degenerate points, the convexified graphical derivative of the regularized subdifferential satisfies, for some cone $\DerivConeF[v|\eta]$, the expression \begin{equation} \label{eq:fstar-polar-form-huber} \widetilde{D{[\partial F_\gamma^*]}}(v|\eta)(\dir{v}) = \begin{cases} \gamma \dir{v} + \polar{\DerivConeF[v|\eta]}, & \dir{v} \in {\DerivConeF[v|\eta]}, \\ \emptyset, & \dir{v} \not\in {\DerivConeF[v|\eta]}. \end{cases} \end{equation} We denote the corresponding operator $H_{\realopt{u}}$ by $H_{\gamma,{\realopt{u}}}$. From \cref{prop:dr0}, we observe that $\widetilde{DR_0}(q|w)(\dir{q})$ also has the form \eqref{eq:linear-polar-form} with \eqref{eq:t-split}, albeit with a different term $\bar K_{\bar{u}}$ and with $\bar G_q$ including the second-order term $\grad_u [\grad K(u)^*v]$ from $K$. \bigskip We now specialize the results of \cref{sec:stability_vi} to the specific setting considered in this section. We therefore assume that $\bar F_q=\gamma I$ for some $\gamma \ge 0$ and that $\DerivConeG = X$. For the statement of the next lemma, we drop many of the subscripts and denote for short $T \coloneqq T_q$, $\bar K \coloneqq \bar K_{\bar{u}}$, $\bar G \coloneqq \bar G_q$, and $\tilde \DerivCone \coloneqq \DerivConeF[v|\eta]$. \begin{lemma} \label{lemma:general-polar-projection-lower-bound} Let $\DerivCone = X \times \tilde\DerivCone \subset X \times Y$ be a cone, and let $\bar G: X \to X$ and $\bar K: X \to Y$ be bounded linear operators. For $\gamma \ge 0$, define \begin{equation} T \coloneqq \begin{pmatrix} \bar G & \bar K^* \\ - \bar K & \gamma I \end{pmatrix}.
\end{equation} Suppose $\bar G$ is self-adjoint and positive definite, i.e., there exists $c_G>0$ such that \begin{equation} \label{eq:general-c_g} \iprod{\bar G\xi}{\xi} \ge c_G \norm{\xi}^2 \qquad (\xi \in X). \end{equation} Then, there exists $c>0$ such that \begin{equation} \label{eq:general-polar-projection-lower-bound} \inf_{z \in \polar\DerivCone} \norm{T^* w - z}^2 \ge c\norm{w}^2 \qquad (w \in \DerivCone) \end{equation} if and only if either of the following conditions holds: \begin{enumerate}[label=(\roman*)] \item\label{item:general-polar-projection-lower-bound-i} $\gamma>0$, in which case $c=c(\gamma, c_G)$; \item\label{item:general-polar-projection-lower-bound-ii} there exists $c_{K,\DerivCone}>0$ such that \begin{equation}\displayindent0pt \displaywidth\columnwidth \label{eq:ckv-0} \inf_{\nu \in \polar{\tilde\DerivCone}} \norm{\bar K \inv{\bar G} \bar K^* \eta - \nu}^2 \ge c_{K,\DerivCone} \norm{\eta}^2 \qquad (\eta \in \tilde\DerivCone), \end{equation} in which case $c=c(\norm{\bar K}, c_G, c_{K,\DerivCone}) \le c_{K, \DerivCone}$. \end{enumerate} \end{lemma} \begin{proof} We first prove the sufficiency of \ref{item:general-polar-projection-lower-bound-i} and \ref{item:general-polar-projection-lower-bound-ii}. With $w=(\xi,\eta) \in \DerivCone = X \times \DerivConeF$, and $z=(0, \nu) \in \polar\DerivCone = \{0\} \times \polar \DerivConeF$, we calculate \begin{equation} \label{eq:general-polar-projection-lower-bound-est0} \begin{aligned} \norm{T^*w-z}^2 & = \norm{\bar G \xi-\bar K^*\eta}^2 + \norm{\bar K \xi+\gamma\eta-\nu}^2 \\ & = \norm{\bar G \xi}^2+\norm{\bar K^*\eta}^2-2\iprod{\bar G \xi}{\bar K^*\eta} \\ \MoveEqLeft[-1] +\norm{\bar K \xi}^2+\gamma^2\norm{\eta}^2+\norm{\nu}^2 +2\gamma\iprod{\bar K \xi}{\eta} -2\iprod{\bar K \xi}{\nu} -2\gamma\iprod{\eta}{\nu} \\ & = \norm{\bar G \xi}^2+\norm{\bar K^*\eta}^2-2\iprod{\bar G \xi}{\bar K^*\eta} +\norm{\bar K\xi-\nu}^2 +\gamma^2\norm{\eta}^2+2\gamma\iprod{\bar K \xi}{\eta} -2\gamma\iprod{\eta}{\nu}.
\end{aligned} \end{equation} Assume first that $\gamma>0$. For arbitrary $\lambda\in [0,\gamma]$, we can insert the productive zero and use $(\lambda-\gamma)\iprod{\nu}{\eta} \ge 0$ for all $\nu \in \polar {\tilde\DerivCone}$ and $\eta \in {\tilde\DerivCone}$ to obtain \begin{equation} \label{eq:general-polar-projection-lower-bound-lambda-introduce} \begin{aligned} \norm{T^*w-z}^2 & = \norm{\bar G \xi}^2+\norm{\bar K^*\eta}^2-2\iprod{(\bar G +\lambda I-\gamma I)\xi}{\bar K^*\eta} \\ \MoveEqLeft[-1] +2\lambda\iprod{\bar K \xi}{\eta} +\norm{\bar K\xi-\nu}^2 +\gamma^2\norm{\eta}^2 -2\gamma\iprod{\eta}{\nu}\\ & \ge \norm{\bar G \xi}^2+\norm{\bar K^*\eta}^2-2\iprod{(\bar G +\lambda I-\gamma I)\xi}{\bar K^*\eta} \\ \MoveEqLeft[-1] +2\lambda\iprod{\bar K\xi-\nu}{\eta} +\norm{\bar K\xi-\nu}^2 +\gamma^2\norm{\eta}^2. \end{aligned} \end{equation} This we further estimate by application of Young's inequality for any $\rho_1,\rho_2>0$ as \begin{equation} \label{eq:general-polar-projection-lower-bound-est2} \begin{aligned}[t] \norm{T^*w-z}^2 & \ge \norm{\bar G \xi}^2+\norm{\bar K^*\eta}^2-2\iprod{(\bar G +\lambda I-\gamma I)\xi}{\bar K^*\eta} \\ \MoveEqLeft[-1] +(\gamma^2-\lambda\inv\rho_1)\norm{\eta}^2 +(1-\lambda\rho_1)\norm{\bar K\xi-\nu}^2 \\ & \ge \norm{\bar G \xi}^2-\inv\rho_2\norm{(\bar G +\lambda I-\gamma I)\xi}^2+(1-\rho_2)\norm{\bar K^*\eta}^2 \\ \MoveEqLeft[-1] +(\gamma^2-\lambda\inv\rho_1)\norm{\eta}^2 +(1-\lambda\rho_1)\norm{\bar K\xi-\nu}^2. \end{aligned} \end{equation} Let us choose $\rho_1=\inv\lambda$ and $\rho_2=1$. Then \eqref{eq:general-polar-projection-lower-bound-est2} becomes \[ \norm{T^*w-z}^2 \ge \norm{\bar G \xi}^2-\norm{(\bar G +\lambda I-\gamma I)\xi}^2 +(\gamma^2-\lambda^2)\norm{\eta}^2. \] Now, \[ \norm{\bar G \xi}^2-\norm{(\bar G +\lambda I-\gamma I)\xi}^2 = 2(\gamma-\lambda)\iprod{\bar G \xi}{\xi}-(\gamma-\lambda)^2\norm{\xi}^2, \] so that by \eqref{eq:general-c_g} we therefore require that \[ 2(\gamma-\lambda)c_G - (\gamma-\lambda)^2 > 0. 
\] This holds whenever $0 < \gamma-\lambda < 2 c_G$, in particular for $\lambda < \gamma$ close enough to $\gamma$, verifying case \ref{item:general-polar-projection-lower-bound-i} including the relationship $c=c(\gamma, c_G)$. Suppose next that $\gamma=0$. To verify the sufficiency of \ref{item:general-polar-projection-lower-bound-ii}, we proceed by contradiction, assuming \eqref{eq:general-polar-projection-lower-bound} not to hold for \[ c= \frac{c_{K,\DerivCone}}{2\bigl(1+\norm{\bar K \inv{\bar G}}^2\bigr)}. \] Thus, for some $c' \in (0, c)$, we can find $w \in X \times {\tilde\DerivCone}$ and $\nu \in \polar{\tilde\DerivCone}$ satisfying \[ \norm{T^* w - (0, \nu)}^2 \le c'\norm{w}^2. \] We may assume that $\norm{w}=1$. Thus, \[ T^* w-(0, \nu)=(e_1, e_2) \quad\text{where}\quad \norm{e_1}^2+\norm{e_2}^2 \le c', \] which means \[ \bar G \xi-\bar K^*\eta=e_1 \quad \text{and} \quad \bar K \xi -\nu=e_2. \] Since $\bar G$ is invertible by \eqref{eq:general-c_g}, this shows that \[ \bar K \inv{\bar G} \bar K^* \eta-\nu=e_2-\bar K \inv{\bar G}e_1. \] Thus \[ \norm{\bar K \inv{\bar G} \bar K^* \eta - \nu}^2 \le 2\bigl(1+\norm{\bar K \inv{\bar G}}^2\bigr) c' < c_{K,\DerivCone}, \] in contradiction to \eqref{eq:ckv-0}. Therefore \ref{item:general-polar-projection-lower-bound-ii} is sufficient for \eqref{eq:general-polar-projection-lower-bound}. We also estimate \[ \norm{\bar G \xi} \ge c_G \norm{\xi}. \] Using the standard relation \[ \sup_{\norm{\xi}=1} \norm{\inv{\bar G}\xi} = \sup_{\xi \ne 0} \frac{\norm{\xi}}{\norm{\bar G \xi}}, \] we therefore have \[ \norm{\bar K \inv{\bar G}} \le c_G^{-1}\norm{\bar K}. \] This verifies $c=c(\norm{\bar K}, c_G, c_{K,\DerivCone})$. Having dealt with the sufficient conditions, let us now verify the necessity of \eqref{eq:ckv-0} when $\gamma=0$. We expand \begin{equation} \label{eq:general-polar-projection-lower-bound-necessary-conds} \norm{T^*w-z}^2 = \norm{\bar G \xi - \bar K^* \eta}^2 +\norm{\bar K\xi-\nu}^2.
\end{equation} Using the invertibility of $\bar G$ from \eqref{eq:general-c_g}, let us choose $\xi=\inv{\bar G}\bar K^*\eta$. Then \eqref{eq:general-polar-projection-lower-bound-necessary-conds} gives \[ \norm{T^*w-z}^2 = \norm{\bar K\inv{\bar G}\bar K^*\eta-\nu}^2, \] immediately showing the necessity of \eqref{eq:ckv-0} and $c_{K,\DerivCone} \ge c$. \end{proof} \begin{remark} \label{rem:barkstar-simplified-necessary} It is easily seen that if $\gamma=0$, then existence of a $c>0$ such that \begin{equation} \label{eq:ckv-1} \norm{\bar K^*\eta} \ge c \norm{\eta} \quad (\eta \in \DerivConeF) \end{equation} is necessary for the satisfaction of \eqref{eq:general-polar-projection-lower-bound}. \end{remark} We now combine the above low-level lemma with \cref{lemma:general-limit-polar-projection-lower-bound}. \begin{lemma} \label{lemma:limit-polar-projection-lower-bound} Let $q, w \in Q=W=X \times Y$ and suppose that for $(\alt{q}, \alt{w})$ in a neighborhood $U$ of $(q, w)$, $\graph R \isect U$ is closed, \eqref{eq:somesetmap-cod-d-upper-adjoint} holds, and we have \begin{equation} \label{eq:dr-expr} \widetilde{DR}(\alt{q}|\alt{w})(\dir{q}) \supset \begin{cases} T_{\alt{q}} \dir{q} + \polar{\DerivConeX[\alt{q}|\alt{w}]{R}}, & \dir{q} \in \DerivConeX[\alt{q}|\alt{w}]{R}, \\ \emptyset, & \dir{q} \not\in \DerivConeX[\alt{q}|\alt{w}]{R}, \end{cases} \end{equation} for \begin{equation} T_{\alt{q}} = \begin{pmatrix} \bar G_{\alt{q}} & \bar K_{\alt{u}}^* \\ - \bar K_{\alt{u}} & \gamma I \end{pmatrix}, \end{equation} and \begin{equation} \label{eq:limit-polar-projection-lower-bound-derivcone} \DerivConeX[\alt{q}|\alt{w}]{R}= X \times \tilde\DerivCone(\alt{v}|\alt{\eta}) \subset X \times Y. \end{equation} In addition to these structural assumptions, suppose that the mappings $\alt{q} \mapsto \bar G_{\alt{q}}$ and $\alt{u} \mapsto \bar K_{\alt{u}}$ are continuous at $q$ and $u$, respectively. 
Assume, moreover, that each $\bar G_{\alt{q}}$ is self-adjoint and positive definite, i.e., there exists $c_G>0$ such that \begin{gather} \label{eq:limit-c_G} \iprod{\bar G_{\alt{q}}\xi}{\xi} \ge c_G \norm{\xi}^2 \qquad (\xi \in X). \end{gather} Define further \begin{equation} \label{eq:kvbound} b(q|w; R) \coloneqq \sup_{t>0} \inf_{\substack{((0, \dir{\eta}), (0, \nu)) \in \DerivConeDXt[q|w]{t}{R}, \\ \dir{\eta} \ne 0}} \frac{\norm{\bar K_u \inv{\bar G_u} \bar K_u^*\dir{\eta}-\nu}}{\norm{\dir{\eta}}}. \end{equation} Then \begin{equation} \label{eq:kvbound-lipnum} \ell_{\inv R}(w|q) < \infty \end{equation} provided \begin{gather} \label{eq:limit-cKF} \max\{\gamma, b(q|w; R)\} > 0. \end{gather} If \eqref{eq:dr-expr} holds as an equality, then \eqref{eq:kvbound-lipnum} holds if and only if \eqref{eq:limit-cKF} holds. \end{lemma} \begin{proof} If $\gamma>0$, we may directly apply \cref{lemma:general-limit-polar-projection-lower-bound}. So we take $\gamma=0$. Suppose first that \eqref{eq:limit-cKF} holds. Then $b(q|w; R) =: c_{K, \DerivCone}>0$, and \eqref{eq:kvbound} gives \begin{equation} \label{eq:limit-polar-projection-lower-bound-step1} \frac{\norm{\bar K_u \inv{\bar G_u} \bar K_u^*\dir{\eta}-\nu}}{\norm{\dir{\eta}}} \ge c_{K, \DerivCone} \end{equation} for every $\dir{\eta} \ne 0$ and $\nu$ satisfying \[ ((0, \dir{\eta}), (0, \nu)) \in \DerivConeDXt[q|w]{t}{R}. \] That is, using the facts that $0 \in X$ and $0 \in \polar X$, as well as the expression \eqref{eq:limit-polar-projection-lower-bound-derivcone}, we see that \eqref{eq:limit-polar-projection-lower-bound-step1} holds whenever \[ \dir{\eta} \in \tilde\DerivCone(\alt{v}|\alt{\eta}) \quad\text{and}\quad \nu \in \polar{\tilde\DerivCone(\alt{v}|\alt{\eta})} \] for some $\alt{q}=(\alt{u}, \alt{v})$ and $\alt{w}=(\alt{\xi}, \alt{\eta})$ satisfying \[ \alt{w} \in R(\alt{q}),\quad \norm{\alt{q}-q} < t,\quad \norm{\alt{w}-w} < t.
\] With $\alt{q}$ and $\alt{w}$ fixed, \cref{lemma:general-polar-projection-lower-bound} now shows the existence of a constant $c>0$ such that \begin{equation} \label{eq:general-polar-projection-lower-bound-use1} \norm{T^* \dir{w} - z}^2 \ge c\norm{\dir{w}}^2 \end{equation} for all \[ (\dir{w}, z) \in (X \times \tilde\DerivCone(\alt{v}|\alt{\eta})) \times \polar{(X \times \tilde\DerivCone(\alt{v}|\alt{\eta}))}, \] with $c$ depending only on $\norm{\bar K}$, $c_G$, and $c_{K,\DerivCone}$. Therefore \eqref{eq:general-polar-projection-lower-bound-use1} holds for all \[ (\dir{w}, z) \in \DerivConeDXt[q|w]{t}{R}. \] Applying \eqref{eq:general-polar-projection-lower-bound-use1} in the expression for $a$ in \eqref{eq:grbound} now shows that \[ a(q|w; R) \ge c. \] Finally, an application of \cref{lemma:general-limit-polar-projection-lower-bound} shows that $\ell_{\inv R}(w|q) < \infty$. In the other direction, to show that $\ell_{\inv R}(w|q) = \infty$ if $b(q|w; R)=0$, we assume to the contrary that $\ell_{\inv R}(w|q) < \infty$. Then $a(q|w; R) \ge c$ for some constant $c>0$. Now we perform the above steps in the opposite direction to show that $b(q|w; R)>0$, in contradiction to the premise. \end{proof} The following theorem, which specializes \cref{lemma:limit-polar-projection-lower-bound} to the specific structure assumed in this section and estimates the lower bounds slightly to derive easier conditions, is one of the main results of this work. \begin{theorem} \label{thm:limit-polar-projection-lower-bound-fstar} Let $q, w \in Q=W=X \times Y$ and let $U$ be a neighborhood of $(q, w)$. 
Suppose that \begin{equation} \label{eq:limit-polar-projection-lower-bound-fstar-r} R(\alt{q})=P(\alt{q})+h(\alt{q}) \quad\text{for}\quad P(\alt{q})=\begin{pmatrix} \grad G(\alt{u}) \\ \partial F^*(\alt{v})\end{pmatrix} \quad\text{and}\quad h(\alt{q})=\begin{pmatrix}\grad \bar h(\alt{u})^*\alt{v} \\ -\bar h(\alt{u}) \end{pmatrix} \end{equation} for $G$ and $F^*$ of the form \eqref{eq:g-pointwise-integral} for some regular integrands $g$ and $f^*$, respectively. Assume further that $\bar h \in C^1(X; Y)$ and $G \in C^2(X)$, and that $F^*$ satisfies for some $\gamma \ge 0$ the inclusion \begin{equation} \label{eq:dfstar-expr} \widetilde{D[\partial F^*]}(\alt{v}|\alt{\eta})(\dir{v}) \supset \begin{cases} \gamma \dir{v} + \polar{\DerivConeF[\alt{v}|\alt{\eta}]}, & \dir{v} \in \DerivConeF[\alt{v}|\alt{\eta}], \\ \emptyset, & \dir{v} \not\in \DerivConeF[\alt{v}|\alt{\eta}]. \end{cases} \end{equation} In addition to these structural assumptions, suppose that there exists a constant $c_G>0$ such that \begin{gather} \label{eq:limit-c_G-fstar} \iprod{\grad^2 G(u)\xi + \grad_u[\grad \bar h(u)^*v]\xi}{\xi} \ge c_G \norm{\xi}^2 \qquad (\xi \in X). \end{gather} Define for \[ \bar B \coloneqq \grad \bar h(u)\inv{\bigl(\grad^2 G(u) + \grad_u[\grad \bar h(u)^*v]\bigr)}\grad \bar h(u)^* \] and \begin{equation} \label{eq:vt-fstar} \DerivConeDFt[v|\eta]{t} \coloneqq \Union\left\{ \DerivConeF[\alt{v}|\alt{\eta}] \times \polar{\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \alt{\eta} \in \partial F^*(\alt{v}),\, \norm{\alt{v}-v} < t,\, \norm{\alt{\eta}-\eta} < t \right\} \end{equation} the quantity \begin{equation} \label{eq:kvbound-fstar} \barb(q|w; R) \coloneqq \sup_{t>0} \inf\left\{ \frac{\norm{\bar B z-\nu}}{\norm{z}} ~\middle|~ (z, \nu) \in \DerivConeDFt[v|\eta+\bar h(u)]{t},\, z \ne 0 \right\}.
\end{equation} Then \begin{equation} \label{eq:lipnum-kvbound-fstar} \ell_{\inv R}(w|q) < \infty \end{equation} provided \begin{equation} \label{eq:limit-cKF-fstar} \max\{\gamma, \barb(q|w; R)\} > 0. \end{equation} If \eqref{eq:dfstar-expr} holds as an equality, then \eqref{eq:lipnum-kvbound-fstar} holds if and only if \eqref{eq:limit-cKF-fstar} holds. \end{theorem} \begin{proof} Similarly to \cref{prop:dr0}, we compute \begin{equation} D R(\alt{q}|\alt{w})(\dir{q}) = \begin{pmatrix} \grad^2 G(\alt{u})(\diru) + \grad_u [\grad \bar h(\alt{u})^*\altv]\diru + \grad \bar h(\alt{u})^* \dirv \\ D{[\partial F^*]}(\altv|\alt\eta + \bar h(\alt{u}))(\dirv) - \grad \bar h(\alt{u}) \dir{u} \\ \end{pmatrix}, \end{equation} and \begin{equation} \label{eq:limit-polar-projection-lower-bound-fstar-convexdr} \widetilde{D R}(\alt{q}|\alt{w})(\dir{q}) = \begin{pmatrix} \grad^2 G(\alt{u})(\diru) + \grad_u [\grad \bar h(\alt{u})^*\altv]\diru + \grad \bar h(\alt{u})^* \dirv \\ \widetilde{D{[\partial F^*]}}(\altv|\alt\eta + \bar h(\alt{u}))(\dirv) - \grad \bar h(\alt{u}) \dir{u} \\ \end{pmatrix}, \end{equation} where we denote \[ \grad_u [\grad \bar h(\alt{u})^*\altv]\diru \coloneqq \grad\left(\tilde u \mapsto [\grad \bar h(\tilde{u})^*\altv]\diru\right)(\alt u). \] The structural assumptions of \cref{lemma:limit-polar-projection-lower-bound} are thus satisfied with \[ \bar G_{\alt{q}} \coloneqq \grad^2 G(\alt{u})+\grad_u[\grad \bar h(\alt{u})^*\alt{v}], \quad \bar K_{\alt{u}} \coloneqq \grad \bar h(\alt{u}), \quad\text{and}\quad \DerivConePrime[\alt{q}|\alt{w}] \coloneqq \DerivConeF[\alt{v}|\alt{\eta}+\bar h(\alt{u})]. \] Moreover, $\graph R \isect U$ is closed due to the assumptions on $\bar h$ and $G$, and to the convexity and lower semicontinuity of $G$ and $F^*$. Further, \eqref{eq:somesetmap-cod-d-upper-adjoint} holds by \cref{cor:d-p-plus-smooth}.
Condition \eqref{eq:limit-c_G} is guaranteed by \eqref{eq:limit-c_G-fstar}, while for \eqref{eq:limit-cKF}, we first of all observe that \[ \bar B = \bar K_u \inv{\bar G_u} \bar K_u^*, \] so \eqref{eq:kvbound} becomes \begin{equation} \label{eq:kvbound-fstar-lemma-orig} b(q|w; R) = \sup_{t>0} \inf_{\substack{((0, \dir{\eta}), (0, \nu)) \in \DerivConeDXt[q|w]{t}{R}, \\ \dir{\eta} \ne 0}} \frac{\norm{\bar B\dir{\eta}-\nu}}{\norm{\dir{\eta}}}. \end{equation} Here \[ \DerivConeDXt[q|w]{t}{R} = \Union \left\{ (X \times V) \times (\{0\} \times \polar V) \mid V \in \widetilde{\DerivConeDXt[q|w]{t}{R}} \right\} \] with \[ \widetilde{\DerivConeDXt[q|w]{t}{R}} = \left\{ {\DerivConeF[\alt{v}|\alt{\eta}+\bar h(\alt{u})]} \,\middle|\, \begin{array}{ll} \alt{\xi}=\grad G(\alt{u})+\grad \bar h(\alt{u})^*\alt{v},& \norm{\alt{q}-q} < t, \\ \alt{\eta} \in \partial F^*(\alt{v})-\bar h(\alt{u}), & \norm{\alt{w}-w} < t \end{array} \right\}. \] We derive for small $t>0$ and some constant $C > 1$ (depending on $(q, w)$) the inclusion \begin{equation} \begin{aligned} \widetilde{\DerivConeDXt[q|w]{t}{R}} & = \left\{ {\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \begin{array}{l} \norm{\alt{u}-u}^2 + \norm{\alt{v}-v}^2 < t^2,\, \alt{\eta} \in \partial F^*(\alt{v}), \\ \norm{\grad G(\alt{u})+\grad \bar h(\alt{u})^*\alt{v}-\xi}^2 +\norm{\alt{\eta}-\bar h(\alt{u})-\eta}^2 < t^2 \end{array} \right\} \\ & \subset \left\{ {\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \begin{array}{l} \norm{\alt{u}-u}^2+\norm{\alt{v}-v}^2 < t^2,\, \alt{\eta} \in \partial F^*(\alt{v}), \\ \norm{\alt{\eta}-\bar h(\alt{u})-\eta}^2 < t^2 \end{array} \right\} \\ & \subset \left\{ {\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \begin{array}{l} \norm{\alt{u}-u} < t,\, \norm{\alt{v}-v} < t,\, \alt{\eta} \in \partial F^*(\alt{v}), \\ \norm{\alt{\eta}-\bar h(u)-\eta} - \norm{\bar h(\alt{u})-\bar h(u)} < t \end{array} \right\} \\ & \subset \left\{ {\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \begin{array}{l}
\norm{\alt{v}-v} < t,\, \alt{\eta} \in \partial F^*(\alt{v}), \\ \norm{\alt{\eta}-\bar h(u)-\eta} < C t \end{array} \right\} =: \mathcal{V}_C . \end{aligned} \end{equation} In the final step we have used the fact that $\bar h \in C^1(X; Y)$ is Lipschitz on $B(u, t)$ for small $t>0$. Now \[ \Union\{ V \times \polar V \mid V \in \mathcal{V}_C \} \subset \DerivConeDFt[v|\eta+\bar h(u)]{Ct}. \] Since we take the supremum over $t>0$ in \eqref{eq:kvbound}, the scaling factor $C>0$ disappears, and we deduce from \eqref{eq:kvbound-fstar} and \eqref{eq:kvbound-fstar-lemma-orig} that \[ b(q|w; R) \ge \barb(q|w; R). \] Thus \eqref{eq:kvbound-fstar} guarantees \eqref{eq:kvbound}. Similarly, retracing the steps, we verify that \[ \DerivConeDFt[v|\eta+\bar h(u)]{t} = \Union\{ V \times \polar V \mid V \in \mathcal{V}_1 \} \quad\text{and}\quad \mathcal{V}_1 \subset \widetilde{\DerivConeDXt[q|w]{C_2 t}{R}} \] for some $C_2>1$. Indeed, using \[ \grad G(u)+\grad \bar h(u)^*v=\xi, \] we compute for some $C_1>1$ that \begin{equation} \begin{aligned} \mathcal{V}_1 & = \left\{ {\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \begin{array}{l} \norm{\alt{v}-v} < t,\, \alt{\eta} \in \partial F^*(\alt{v}), \\ \norm{\alt{\eta}-\bar h(u)-\eta} < t \end{array} \right\} \\ & = \left\{ {\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \begin{array}{l} \norm{\alt{v}-v} < t,\, \alt{\eta} \in \partial F^*(\alt{v}), \\ \norm{\grad G(u)+\grad \bar h(u)^*v-\xi} + \norm{\alt{\eta}-\bar h(u)-\eta} < t \end{array} \right\} \\ & \subset \left\{ {\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \begin{array}{l} \norm{\alt{v}-v} < t,\, \alt{\eta} \in \partial F^*(\alt{v}), \\ \norm{\grad G(u)+\grad \bar h(u)^*\alt{v}-\xi} + \norm{\alt{\eta}-\bar h(u)-\eta} < C_1 t \end{array} \right\} \\ & \subset \left\{ {\DerivConeF[\alt{v}|\alt{\eta}]} \,\middle|\, \begin{array}{l} \norm{\alt{u}-u} + \norm{\alt{v}-v} < t,\, \alt{\eta} \in \partial F^*(\alt{v}), \\ \norm{\grad G(\alt{u})+\grad \bar h(\alt{u})^*\alt{v}-\xi}^2 +
\norm{\alt{\eta}-\bar h(\alt{u})-\eta}^2 < C_2 t^2 \end{array} \right\} \\ & \subset \widetilde{\DerivConeDXt[q|w]{C_2 t}{R}}. \end{aligned} \end{equation} Hence, \[ \barb(q|w; R) \ge b(q|w; R), \] and in particular, \eqref{eq:kvbound} guarantees \eqref{eq:kvbound-fstar}. Our claims now follow from an application of \cref{lemma:limit-polar-projection-lower-bound}, since its continuity requirements on $\bar G$ and $\bar K$ follow from the assumptions $G \in C^2(X)$ and $\bar h \in C^1(X; Y)$. \end{proof} \bigskip In the remainder of this section, we apply \cref{thm:limit-polar-projection-lower-bound-fstar} to show several stability properties of saddle points of \eqref{eq:lagrangian}. \subsection{Metric regularity of the linearized variational inclusion} We begin with the simplest example of verifying $\ell_{\inv H_{\gamma,{\bar{u}}}}(0|{\realopt{q}})<\infty$ with fixed ${\bar{u}}={\realopt{u}}$ for ${\realopt{q}}$ solving $0 \in H_{\gamma,{\realopt{u}}}({\realopt{q}})$. This is useful for showing convergence of the primal-dual algorithm of \cite{Valkonen:2014}. By \cref{prop:dh}, we are in the setting of \cref{thm:limit-polar-projection-lower-bound-fstar}. Indeed, for $R=H_{\gamma,{\bar{u}}}$, we obtain an instance of \eqref{eq:limit-polar-projection-lower-bound-fstar-r} with \[ \bar h(\alt{u}) = \grad K({\bar{u}})\alt{u}+c_{\bar{u}}. \] Furthermore, since this $\bar h$ is affine, we also have \[ \grad^2 G(u)\xi + \grad_u[\grad \bar h(u)^*v]\xi=\alpha \xi, \] so we may take $c_G=\alpha$ in \eqref{eq:limit-c_G-fstar}. If $\gamma>0$, \eqref{eq:limit-cKF-fstar} is trivially satisfied. By \cref{thm:limit-polar-projection-lower-bound-fstar}, we therefore obtain \[ \ell_{\inv H_{\gamma,{\realopt{u}}}}(0|{\realopt{q}}) < \infty. \] Thus $\inv H_{\gamma,{\realopt{u}}}$ has the Aubin property at $(0, {\realopt{q}})$ provided $\gamma>0$. We summarize these findings in the following proposition.
\begin{proposition} \label{prop:pde-metric-regularity} Let $G$ be as in \eqref{eq:pde-g-choice}, $K \in C^1(X; Y)$, and let $F^*$ satisfy \eqref{eq:pde-f-choice} and \eqref{eq:fstar-polar-form-huber}. Suppose that ${\realopt{q}}$ solves $0 \in H_{\gamma,{\realopt{u}}}({\realopt{q}})$ for some $\gamma \ge 0$. Then $w \mapsto \inv H_{\gamma,{\realopt{u}}}(w)$ has the Aubin property at $(0|{\realopt{q}})$ if and only if $\gamma>0$ or $\barb({\realopt{q}}|0; H_{\gamma,{\realopt{u}}})>0$. \end{proposition} If $\gamma=0$, we have to prove existence of a lower bound $c_{K,\DerivCone}>0$ through $\barb$. This is significantly more difficult. With ${\bar{u}}={\realopt{u}}$, we use \eqref{eq:h-def} to compute \[ \bar h({\realopt{u}})= \grad K({\realopt{u}}){\realopt{u}}+c_{\bar{u}}=K({\realopt{u}}) \quad\text{and}\quad \grad \bar h({\realopt{u}})=\grad K({\realopt{u}}). \] Consequently, \eqref{eq:kvbound-fstar} can be expressed in the setting of this proposition as \begin{equation} \label{eq:kvbound-fstar-metric} \begin{aligned} \barb({\realopt{q}}|0; H_{{\realopt{u}}}) & = \sup_{t>0} \inf\left\{ \frac{\norm{\inv\alpha\grad K({\realopt{u}})\grad K({\realopt{u}})^* z-\nu}}{\norm{z}} \,\middle|\, (z, \nu) \in \DerivConeDFt[\realopt{v}|K({\realopt{u}})]{t},\, z \ne 0 \right\} \\ & = \inv\alpha\sup_{t>0} \inf\left\{ \frac{\norm{\grad K({\realopt{u}})\grad K({\realopt{u}})^* z-\nu}}{\norm{z}} \,\middle|\, \begin{array}{l} 0 \ne z \in \DerivConeF[\alt{v}|\alt{\eta}],\, \nu \in \polar{\DerivConeF[\alt{v}|\alt{\eta}]}, \\ \alt{\eta} \in \partial F^*(\alt{v}),\, \norm{\alt{v}-\realopt{v}} < t,\\ \norm{\alt{\eta}-K({\realopt{u}})} < t \end{array} \right\}. \end{aligned} \end{equation} We will return to the issue of verifying -- or disproving -- the lower bound on $\barb$ with specific examples in \cref{sec:stability_paramid}. 
\subsection{Stability with respect to data} We now want to study the stability of the condition $0 \in H_{{\realopt{u}}}({\realopt{q}})$ with respect to perturbation of the data $y^\delta$. This of course only makes sense if we equate the base point ${\bar{u}}$ in $H_{\bar{u}}$ to the solution ${\realopt{u}}$. Therefore, we define for variations $\dir{y}$ in the data \[ J_{\dir{y}}(u,v) \coloneqq P(u, v)+h_{\dir{y}}(u,v) \] with \[ h_{\dir{y}}(u,v) \coloneqq \begin{pmatrix} \grad K(u)^*v \\ \dir{y}-K(u) \end{pmatrix} \quad\text{and}\quad P(u, v) \coloneqq \begin{pmatrix} \grad G(u) \\ \partial F^*(v) \\ \end{pmatrix}. \] We remark that due to the linear dependence of the optimality conditions $0 \in J_{\dir{y}}(u,v)$ on $\Delta y$, the stability with respect to $\Delta y$ can be seen as a form of tilt-stability \cite{Rockafellar:1998b,Mordukhovich:2012,Drusvyatskiy:2013,Eberhard:2012,Lewis:2013,Mordukhovich:2013,Levy:2000,Mordukhovich:2014} for saddle-point systems. Observe now that \[ J_{\dir{y}}(q)=R_0(q)+\begin{pmatrix}0 \\ \dir{y}\end{pmatrix}, \] and in particular that $J_{0}=R_0$. Thus \eqref{eq:sensitivity1} with $R=R_0$ and $w=(0, \dir{y})$ yields \begin{equation} \label{eq:sensitivity2} \inf_{q\,:\, 0 \in J_{\dir{y}}(q)} \norm{{\realopt{q}}-q} \le \ell_{\inv R_0}(0|{\realopt{q}}) \norm{\dir{y}} \quad\text{ whenever }\quad \norm{\dir{y}} \le \rho. \end{equation} If $K \in C^2(X; Y)$, by \cref{prop:dr0}, we can compute $DR_0(q|w)$. In fact, with \[ \bar h(u) = K(u), \] we see that $R_0$ is an instance of the class covered by \cref{thm:limit-polar-projection-lower-bound-fstar}. Its application directly yields the following proposition. \begin{proposition} \label{prop:pde-data-stability} Let $K \in C^2(X; Y)$ and suppose that $F^*$ satisfies \eqref{eq:pde-f-choice} and \eqref{eq:fstar-polar-form-huber}. 
Denote by ${\realopt{q}}_{\dir{y}}$ a solution to the optimality conditions \eqref{eq:oc} for the problem \[ \min_u \max_v \frac{\alpha}{2}\norm{u}^2 + \iprod{K(u)-\dir{y}}{v} - F_\gamma^*(v). \] Suppose that a solution ${\realopt{q}}={\realopt{q}}_0$ exists, and there exists a constant $c_G>0$ such that \begin{equation} \label{eq:pde-cg} \alpha\norm{\xi}^2+\iprod{\grad_u [\grad K({\realopt{u}})^*\realopt{v}]\xi}{\xi} \ge c_G \norm{\xi}^2 \qquad (\xi \in X). \end{equation} If $\gamma>0$ or $\barb({\realopt{q}}|0; R_0)>0$, then for some $\rho, \ell > 0$ there exist solutions ${\realopt{q}}_{\dir{y}}$ with \begin{equation} \label{eq:sensitivity3} \norm{{\realopt{q}}-{\realopt{q}}_{\dir{y}}} \le \ell \norm{\dir{y}} \quad\text{ whenever }\quad \norm{\dir{y}} \le \rho. \end{equation} \end{proposition} Note that for $\barb({\realopt{q}}|0; R_0)$, we obtain from \eqref{eq:kvbound-fstar} exactly the same expression as for $\barb({\realopt{q}}|0; H_{{\realopt{u}}})$ in \eqref{eq:kvbound-fstar-metric}, i.e., \begin{equation} \label{eq:kvbound-fstar-data} \barb({\realopt{q}}|0; R_0) = \inv\alpha\sup_{t>0} \inf\left\{ \frac{\norm{\grad K({\realopt{u}})\grad K({\realopt{u}})^* z-\nu}}{\norm{z}} \,\middle|\, \begin{array}{l} 0 \ne z \in \DerivConeF[\alt{v}|\alt{\eta}],\, \nu \in \polar{\DerivConeF[\alt{v}|\alt{\eta}]}, \\ \alt{\eta} \in \partial F^*(\alt{v}),\, \norm{\alt{v}-\realopt{v}} < t,\\ \norm{\alt{\eta}-K({\realopt{u}})} < t \end{array} \right\}. \end{equation} \subsection{Stability with respect to the Moreau--Yosida parameter} Finally, we study the stability of the regularized optimality condition $0 \in H_{{\realopt{u}},\gamma}({\realopt{q}})$ with respect to the Moreau--Yosida parameter $\gamma$. 
With $P$ as in the previous section, we now set \[ J_{\gamma}(u,v) \coloneqq P_\gamma(u, v)+h_{\gamma}(u,v), \] with \[ h_{\gamma}(u,v) \coloneqq \begin{pmatrix} \grad K(u)^*v \\ \gamma v -K(u) \end{pmatrix} \quad\text{and}\quad P_\gamma(u, v) \coloneqq \begin{pmatrix} \grad G(u) \\ \partial F_\gamma^*(v) \\ \end{pmatrix}. \] Observe that $J_0=R_0$. Let ${\realopt{q}}$ solve $0 \in R_0({\realopt{q}})$. Now \eqref{eq:inverse-aubin} applied to $J_\gamma$ at ${\realopt{q}}$ and $\realopt{w} \in J_\gamma({\realopt{q}})$ gives with $w=0$ and $q={\realopt{q}}$ the estimate \begin{equation} \label{eq:moreau-yosida-sensitivity-1} \inf_{p\,:\, 0 \in J_\gamma(p)} \norm{p-{\realopt{q}}} \le \ell_{\inv J_\gamma}(\realopt{w}|{\realopt{q}}) \norm{\realopt{w}} \quad\text{ whenever }\quad \norm{\realopt{w}} \le \rho . \end{equation} Since $0 \in R_0({\realopt{q}})$, we deduce that $\realopt{w}_\gamma \coloneqq (0, \gamma\realopt{v}) \in J_\gamma({\realopt{q}})$. This quickly leads to the following proposition. \begin{proposition} \label{prop:pde-moreau-yosida-stability} Let $K \in C^2(X; Y)$, and suppose $F^*$ satisfies \eqref{eq:pde-f-choice} and \eqref{eq:fstar-polar-form-huber}. Denote by ${\realopt{q}}_{\gamma}$ a solution to the optimality conditions \eqref{eq:oc} for the problem \[ \min_u \max_v \frac{\alpha}{2}\norm{u}^2 + \iprod{K(u)}{v} - F_\gamma^*(v). \] Suppose a solution ${\realopt{q}}={\realopt{q}}_0$ exists, $\barb({\realopt{q}}|0; R_0)>0$, and \eqref{eq:pde-cg} holds. Then for some $\rho, \ell > 0$ there exist solutions ${\realopt{q}}_{\gamma}$ with \begin{equation} \label{eq:moreau-yosida-sensitivity-2} \norm{{\realopt{q}}-{\realopt{q}}_{\gamma}} \le \ell \gamma \quad\text{ whenever }\quad 0 \le \gamma \le \rho. \end{equation} \end{proposition} \begin{proof} We may assume that $\norm{\realopt{v}} \ne 0$, because otherwise ${\realopt{q}}_\gamma={\realopt{q}}$. 
With $\realopt{w}_\gamma = (0, \gamma\realopt{v})$ as above, we expand \eqref{eq:moreau-yosida-sensitivity-1} into \begin{equation} \inf_{p\,:\, 0 \in J_\gamma(p)} \norm{p-{\realopt{q}}} \le \gamma \ell_{\inv J_\gamma}(\realopt{w}_\gamma|{\realopt{q}}) \norm{\realopt{v}}, \quad\text{ whenever }\quad 0 \le \gamma \le \norm{\realopt{v}}^{-1}\rho. \end{equation} In order to derive \eqref{eq:moreau-yosida-sensitivity-2}, we only need to show the existence of a finite constant $\ell_{\inv J_\gamma}(\realopt{w}_\gamma|{\realopt{q}}) < \infty$ and absorb $\norm{\realopt{v}}$ into the constant. For this, we simply apply \cref{thm:limit-polar-projection-lower-bound-fstar} to $R=J_\gamma$ with $\bar h(\alt{u}) = K(\alt{u})$, and observe that $\barb({\realopt{q}}|\realopt{w}_\gamma; J_\gamma)=\barb({\realopt{q}}|0; R_0)$. This follows from the fact that the expression \eqref{eq:kvbound-fstar} only depends on $\gamma$ through the base point $\eta+\bar h(u)$, which in this case is $\gamma\realopt{v}+(-\gamma\realopt{v}+K({\realopt{u}}))=K({\realopt{u}})$. Observe that \eqref{eq:pde-cg} is likewise independent of $\gamma$. We can thus bound $\ell_{\inv J_\gamma}(\realopt{w}_\gamma|{\realopt{q}})$ from above uniformly in $\gamma\in[0,\rho]$. \end{proof} \section{Application to parameter identification problems}\label{sec:stability_paramid} We now discuss the possibility of satisfying the assumptions of the preceding propositions in the context of the motivating parameter identification problems \eqref{eq:l1fit_problem} and \eqref{eq:linffit_problem}. Since this will depend on the specific structure of the parameter-to-observation mapping $S$, we consider as a concrete example the problem of recovering the potential term in an elliptic equation. Let $\Omega\subset\mathbb{R}^d$ be an open bounded domain with a Lipschitz boundary $\partial\Omega$. 
For a given parameter $u\in \{v\in L^\infty(\Omega):v\geq \varepsilon\}\eqcolon U\subset X\coloneqq L^2(\Omega)$, denote by $S(u)\coloneqq y\in H^1(\Omega)\subset L^2(\Omega)\eqcolon Y$ the weak solution of \begin{equation}\label{eq:forward} \inner{\nabla y,\nabla v} + \inner{uy,v} = \inner{f,v} \qquad (v\in H^1(\Omega)). \end{equation} This operator has the following useful properties \cite{Kroener:2009a}: \begin{enumerate}[label=(\textsc{a}\arabic*), ref=\textsc{a}\arabic*] \item The operator $S$ is uniformly bounded in $U\subset{X}$ and completely continuous: If for $u\in U$, the sequence $\{u_n\}\subset U$ satisfies $u_n \rightharpoonup u$ in ${X}$, then \begin{equation} S(u_n)\to S(u) \quad\text{ in } Y. \end{equation} \item $S$ is twice Fr\'echet differentiable. \item\label{ass:a3} There exists a constant $C>0$ such that \begin{equation} \norm{\grad{S}(u)h}_{L^2}\leq C\norm{h}_X\qquad (u\in U,h\in X). \end{equation} \item\label{ass:a4} There exists a constant $C>0$ such that \begin{equation} \norm{\grad^2 S(u)(h,h)}_{L^2} \le C \norm{h}_X^2\qquad (u\in U,h\in X). \end{equation} \end{enumerate} Furthermore, from the implicit function theorem, the directional Fréchet derivative $\grad{S}(u)h$ for given $h\in X$ can be computed as the solution $w\in H^1(\Omega)$ to \begin{equation}\label{eq:forward_lin} \inner{\nabla w,\nabla v} + \inner{uw,v} = \inner{-yh,v} \qquad(v\in H^1(\Omega)). \end{equation} Similarly, the directional adjoint derivative $\grad{S}(u)^*h$ is given by $yz$, where $z\in H^1(\Omega)$ solves \begin{equation}\label{eq:forward_adj} \inner{\nabla z,\nabla v} + \inner{uz,v} = \inner{-h,v} \qquad(v\in H^1(\Omega)). \end{equation} Similar expressions hold for $\grad^2{S}(u)(h_1,h_2)$ and $\grad(\grad{S}(u)^*h_1)h_2$. Hence, assumptions (\ref{ass:a3}--\ref{ass:a4}) hold for $\grad{S}^*$ and $\grad_u(\grad{S}(u)^*v)$ for given $v$ as well. 
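The mappings above are straightforward to discretize. The following numerical sketch (one space dimension, finite differences; grid size, potential $u$, and right-hand side $f$ are purely illustrative choices of ours, not part of the analysis) solves \eqref{eq:forward} and \eqref{eq:forward_lin} and checks the directional derivative against a difference quotient as well as the adjoint identity $\iprod{\grad S(u)h}{g}=\iprod{h}{\grad S(u)^*g}$:

```python
import numpy as np

def neumann_laplacian(n, h):
    """1D finite-difference stiffness matrix for <grad y, grad v> with
    natural (homogeneous Neumann) boundary conditions."""
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[0, 0] = A[-1, -1] = 1.0
    return A / h**2

def S(u, f, A):
    # discrete weak solution of <grad y, grad v> + <u y, v> = <f, v>
    return np.linalg.solve(A + np.diag(u), f)

def dS(u, h_dir, f, A):
    # linearized equation: same operator, right-hand side -y*h
    y = S(u, f, A)
    return np.linalg.solve(A + np.diag(u), -y * h_dir)

def dS_adjoint(u, g, f, A):
    # adjoint derivative: y*z with z solving the adjoint equation
    y = S(u, f, A)
    z = np.linalg.solve(A + np.diag(u), -g)
    return y * z

n = 64
grid = np.linspace(0.0, 1.0, n)
A = neumann_laplacian(n, grid[1] - grid[0])
f = np.ones(n)
u = 1.0 + grid**2                      # u >= eps > 0, i.e. u in U
h_dir = np.cos(np.pi * grid)

# derivative check against a difference quotient
t = 1e-6
fd = (S(u + t * h_dir, f, A) - S(u, f, A)) / t
w = dS(u, h_dir, f, A)
rel_err = np.linalg.norm(fd - w) / np.linalg.norm(w)

# adjoint identity <dS(u)h, g> = <h, dS(u)* g>
g = np.sin(np.pi * grid)
lhs = np.dot(dS(u, h_dir, f, A), g)
rhs = np.dot(h_dir, dS_adjoint(u, g, f, A))
print(rel_err, abs(lhs - rhs))
```

Since the discrete operator $A+\operatorname{diag}(u)$ is symmetric, the adjoint identity holds exactly up to floating-point error; the difference-quotient check reflects the Fr\'echet differentiability asserted above.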
Other operators satisfying the above assumptions are mappings from a Robin or diffusion coefficient to the solution of the corresponding elliptic partial differential equation \cite{ClasonJin:2011}. \subsection{\texorpdfstring{$\scriptstyle L^1$}{L¹} fitting} \label{sec:stability-l1} Let us first consider the $L^1$ fitting problem \eqref{eq:l1fit_problem}. We are in the setting of \eqref{eq:pde-f-choice}--\eqref{eq:pde-g-choice}. More specifically, we now have \[ F^*(v)=\int_\Omega f^*(v(x)) \,d x \quad \text{for} \quad f^*(z) = \iota_{[-1, 1]}(z), \] where the integral is taken to be infinite if $f^* \circ v \notin L^1(\Omega)$. We also have \[ G(u)=\int_\Omega g(u(x)) \,d x \quad \text{for} \quad g(z) = \frac{\alpha}{2} \abs{z}^2, \] as well as \[ K(u)=S(u)-y^\noise. \] Thus, the saddle-point conditions \eqref{eq:oc-h} for \eqref{eq:l1fit_problem} are given by \begin{equation} 0\in \begin{pmatrix} \alpha {\realopt{u}} +\grad S({\realopt{u}})^* {\realopt{\Dual}}\\ \partial\iota_{[-1,1]}({\realopt{\Dual}}) + S({\realopt{u}})-y^\noise \end{pmatrix}. \end{equation} \paragraph{Metric regularity} We first address metric regularity of $H_{\realopt{u}}$ (\cref{prop:pde-metric-regularity}) when $\gamma=0$. 
Recall from \cref{prop:pde-metric-regularity} that in this case we need to show that \begin{equation} \label{eq:l1-fitting-kvbound} \begin{aligned}[t] \barb({\realopt{q}}|0; H_{{\realopt{u}}}) &= \sup_{t>0} \inf\left\{ \frac{\norm{\inv\alpha\grad S({\realopt{u}})\grad S({\realopt{u}})^* z-\nu}}{\norm{z}} \,\middle|\, (z, \nu) \in \DerivConeDFt[\realopt{v}|y^\noise-S({\realopt{u}})]{t},\, z \ne 0 \right\} \\ &= \inv\alpha\sup_{t>0} \inf\left\{ \frac{\norm{\grad S({\realopt{u}})\grad S({\realopt{u}})^* z-\nu}}{\norm{z}} \,\middle|\, \begin{array}{l} 0 \ne z \in \DerivConeF[\alt{v}|\alt{\eta}],\, \nu \in \polar{\DerivConeF[\alt{v}|\alt{\eta}]}, \\ \alt{\eta} \in \partial F^*(\alt{v}),\, \norm{\alt{v}-\realopt{v}} < t,\\ \norm{\alt{\eta}-(y^\noise-S({\realopt{u}}))} < t \end{array} \right\} \\ &>0. \end{aligned} \end{equation} Let us try to force $z(x) \ne 0$ as much as possible in \eqref{eq:l1-fitting-kvbound}. From \cref{cor:g-form-indicator}, we obtain that $z \in \DerivConeF[\alt{v}|\alt{\eta}]$ satisfies \begin{equation} \label{eq:l1fitting-z} z(x) \in \begin{cases} \{0\}, & \abs{\altv(x)}=1 \text{ and } \alt\eta(x) \ne 0,\\ -\sign \altv(x) [0, \infty), & \abs{\altv(x)}=1 \text{ and } \alt\eta(x) = 0,\\ \mathbb{R}, & \abs{\altv(x)} < 1 \text{ and } \alt\eta(x) = 0,\\ \end{cases} \end{equation} while $\nu \in \polar{\DerivConeF[\alt{v}|\alt{\eta}]}$ satisfies \begin{equation} \label{eq:l1fitting-nu} \nu(x) \in \begin{cases} \mathbb{R}, & \abs{\altv(x)}=1 \text{ and } \alt\eta(x) \ne 0,\\ \sign \altv(x) [0, \infty), & \abs{\altv(x)}=1 \text{ and } \alt\eta(x) = 0,\\ \{0\}, & \abs{\altv(x)} < 1 \text{ and } \alt\eta(x) = 0.\\ \end{cases} \end{equation} Therefore, $z(x) \ne 0$ can only happen if $\alt\eta(x) = 0$. 
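The pointwise case distinctions \eqref{eq:l1fitting-z} and \eqref{eq:l1fitting-nu} are simple enough to encode directly. The following sketch (function names, tolerances, and sample points are ours, purely for illustration) expresses them as membership tests and confirms that $z(x) \ne 0$ is only admissible where $\alt\eta(x) = 0$:

```python
def z_admissible(v, eta, z, tol=1e-12):
    """Pointwise test for z(x) following the three cases for
    f* = indicator of [-1, 1]; assumes eta in the subdifferential at v."""
    if abs(abs(v) - 1.0) < tol:          # |v(x)| = 1
        if abs(eta) >= tol:              # eta(x) != 0: only z = 0 admissible
            return abs(z) < tol
        return z * v <= tol              # eta(x) = 0: z in -sign(v)[0, inf)
    return abs(eta) < tol                # |v(x)| < 1: any z, provided eta(x) = 0

def nu_admissible(v, eta, nu, tol=1e-12):
    """Pointwise test for nu(x) in the polar cone, mirroring the cases above."""
    if abs(abs(v) - 1.0) < tol:
        if abs(eta) >= tol:
            return True                  # nu free
        return nu * v >= -tol            # nu in sign(v)[0, inf)
    return abs(nu) < tol                 # nu = 0

# nonzero z is admissible only at points where eta vanishes:
witness = [(v, eta)
           for v, eta in [(1.0, 0.5), (1.0, 0.0), (-1.0, 0.0), (0.3, 0.0)]
           if z_admissible(v, eta, z=(-1.0 if v > 0 else 1.0))]
print(witness)
```

Every pair surviving the filter has $\eta = 0$, in line with the conclusion drawn above.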
If \begin{equation} \label{eq:l1-unreachable} \realopt\eta(x) \coloneqq (y^\noise-S({\realopt{u}}))(x) \ne 0 \qquad (\text{a.e.}\ x \in\Omega), \end{equation} that is, if the data is reached almost nowhere, then the condition $\norm{\realopt\eta-\alt\eta}<t$ gives, for any $\varepsilon>0$ and small enough $t>0$, the estimate \[ \L^d(\{x \in \Omega \mid \alt\eta(x)=0\})<\varepsilon. \] In consequence, \[ \L^d(\{x \in \Omega \mid z(x) \ne 0\})<\varepsilon. \] With this, we deduce that \[ \norm{z}_{L^1(\Omega)} \le \norm{\chi_{\{z \ne 0\}}}_{L^2(\Omega)} \norm{z}_{L^2(\Omega)} \le \sqrt{\varepsilon} \norm{z}_{L^2(\Omega)}. \] We furthermore have that \[ \norm{\grad S({\realopt{u}})^* z} \le C \norm{z}_{L^1(\Omega)}; \] this follows from the fact that $\grad S({\realopt{u}}):W^{-1,s'}(\Omega)\to C(\overline\Omega)$ for any $s'>d$ due to the regularity of $\partial\Omega$ and ${\realopt{u}}$ (see, e.g., \cite[Thm.~6.3]{Griepentrog:2001}) and hence that $\grad S({\realopt{u}})^*:C(\overline\Omega)^*\to W^{1,s}(\Omega)$ for $s<d/(d-1)$ together with the embeddings $L^1(\Omega)\hookrightarrow C(\overline\Omega)^*$ and $W^{1,s}(\Omega)\hookrightarrow L^2(\Omega)$ for $s\geq 1$ ($d=2$) or $s\geq 6/5$ ($d=3$). An application of these estimates with $\nu=0$ in \eqref{eq:l1-fitting-kvbound} yields \[ \barb({\realopt{q}}|0; H_{{\realopt{u}}}) \le \inv\alpha C \norm{\grad S({\realopt{u}})} \sqrt{\varepsilon}. \] Letting $\varepsilon \searrow 0$, we deduce that \[ \barb({\realopt{q}}|0; H_{{\realopt{u}}}) =0. \] If, on the other hand, the data is reached on a set $E$ of positive measure, i.e., $\realopt\eta=0$ on $E$, we may choose $z$ freely on $E$. However, the lower bound \[ \norm{\grad S({\realopt{u}})^* z} \ge c \norm{z} \qquad (z \in L^2(E)) \] does not hold in general (take any orthonormal basis of $L^2(E)$, which converges weakly but not strongly to zero, and use the fact that $\grad S({\realopt{u}})$ is a compact operator from $L^2(\Omega)$ to $L^2(\Omega)$ due to the Rellich--Kondrachov embedding theorem). 
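This failure of a uniform lower bound under compactness can also be observed numerically. In the following sketch, a Gaussian smoothing matrix $T$ serves as an ad-hoc stand-in for the compact operator $\grad S({\realopt{u}})^*$ (kernel width and grid are arbitrary choices of ours): the ratios $\norm{Tz}/\norm{z}$ decay along increasingly oscillatory unit vectors, so no uniform $c>0$ survives.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
# Gaussian-kernel smoothing matrix: a discrete compact-like operator
T = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.05**2))
T /= T.sum(axis=1, keepdims=True)      # rows sum to 1 (local averaging)

ratios = []
for k in [1, 5, 10, 20, 40]:
    z = np.sin(np.pi * k * x)          # increasingly oscillatory directions
    z /= np.linalg.norm(z)
    ratios.append(np.linalg.norm(T @ z))
print(ratios)                          # decaying: no uniform lower bound
```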
Again, $\barb({\realopt{q}}|0; H_{{\realopt{u}}})=0$. Therefore, by \cref{prop:pde-metric-regularity}, \emph{there is no metric regularity without some sort of regularization}. On the other hand, with Moreau--Yosida regularization, i.e., for $\gamma>0$, we always have metric regularity of $H_{\realopt{u}}$ at $({\realopt{q}}, 0)$ by the same proposition. \paragraph{Data stability} The situation is very similar for stability with respect to data (\cref{prop:pde-data-stability}) when $\gamma=0$. Comparing \eqref{eq:kvbound-fstar-metric} and \eqref{eq:kvbound-fstar-data}, we see that we have to study whether \[ \barb({\realopt{q}}|0; R_0) = \barb({\realopt{q}}|0; H_{{\realopt{u}}})>0. \] Hence, we again cannot have data stability without Moreau--Yosida regularization. With regularization, i.e., for $\gamma>0$, we still need to prove \eqref{eq:pde-cg}. Using the reverse triangle inequality, the boundedness of the dual variable ${\realopt{\Dual}}(x)\in [-1,1]$ due to the choice of $F^*$, and assumption (\ref{ass:a4}), we have that \begin{equation} \alpha\norm{\xi}^2+\iprod{\grad_u [\grad S({\realopt{u}})^*\realopt{v}]\xi}{\xi} \ge (\alpha - C)\norm{\xi}^2 \ge c_G \norm{\xi}^2 \end{equation} for $\alpha$ sufficiently large, and hence data stability holds. \paragraph{Stability with respect to $\gamma$} Since \cref{prop:pde-moreau-yosida-stability} holds under exactly the same conditions as \cref{prop:pde-data-stability}, we deduce that there is no stability with respect to the Moreau--Yosida parameter at $\gamma=0$. This is to be expected, as any addition of regularization will, whenever $\realopt\eta(x)=0$, immediately force $v(x)=0$. At a point $\gamma>0$, the stability can be proved similarly to the arguments in \cref{prop:pde-moreau-yosida-stability}. \subsection{\texorpdfstring{$\scriptstyle L^\infty$}{L∞} fitting} Let us now consider the $L^\infty$ fitting problem \eqref{eq:linffit_problem}. 
We are again in the setting of \eqref{eq:pde-f-choice}--\eqref{eq:pde-g-choice}, this time with \[ F^*(v)=\int_\Omega f^*(v(x)) \,d x \quad \text{for} \quad f^*(z) = \delta\abs{z}, \] and $G$ and $K$ as in the previous subsection. Hence, the saddle-point conditions \eqref{eq:oc-h} are now given by \begin{equation} 0\in \begin{pmatrix} \alpha {\realopt{u}} +\grad S({\realopt{u}})^* {\realopt{\Dual}}\\ \delta \sign({\realopt{\Dual}}) + S({\realopt{u}})-y^\noise \end{pmatrix}. \end{equation} \paragraph{Metric regularity} Again, for metric regularity of $H_{\realopt{u}}$ (\cref{prop:pde-metric-regularity}) we need to show \[ \barb({\realopt{q}}|0; H_{{\realopt{u}}})>0. \] Let us again try to force $z(x)\ne 0$ as much as possible. From \cref{cor:g-form-l1norm}, we obtain that $z \in \DerivConeF[\alt{v}|\alt{\eta}]$ satisfies \begin{equation} \label{eq:linffitting-z} z(x) \in \begin{cases} \{0\}, & \abs{\alt\eta(x)} \ne \delta,\\ \sign \alt\eta(x) [0, \infty), & \abs{\alt\eta(x)}=\delta \text{ and } \altv(x) = 0,\\ \mathbb{R}, & \abs{\alt\eta(x)} = \delta \text{ and } \altv(x) \neq 0, \end{cases} \end{equation} while $\nu \in \polar{\DerivConeF[\alt{v}|\alt{\eta}]}$ satisfies \begin{equation} \label{eq:linffitting-nu} \nu(x) \in \begin{cases} \mathbb{R}, & \abs{\alt\eta(x)}\neq \delta \text{ and } \altv(x) = 0,\\ \sign \alt\eta(x) (-\infty, 0], & \abs{\alt\eta(x)}= \delta \text{ and } \altv(x) = 0,\\ \{0\}, & \altv(x) \neq 0. \end{cases} \end{equation} If $\realopt v(x)=0$ a.\,e.\ and $\esssup_{x \in \Omega} \abs{\realopt\eta(x)} < \delta$ -- meaning the constraint \begin{equation} \label{eq:linfty-constr-discussion} \abs{S({\realopt{u}})(x)-y^\noise(x)}\le\delta \end{equation} is almost never active -- then we can proceed as in \cref{sec:stability-l1} to show, for any $\varepsilon>0$ and small enough $t>0$, the estimate \[ \L^d(\{x \in \Omega \mid \abs{\alt\eta(x)}=\delta\}) \le \varepsilon. 
\] In consequence, $z \in \DerivConeF[\alt{v}|\alt\eta]$ satisfies \[ \L^d(\{x \in \Omega \mid z(x) \ne 0\}) \le \L^d(\{x \in \Omega \mid \abs{\alt\eta(x)}=\delta\}) \le \varepsilon, \] and we deduce following \cref{sec:stability-l1} that \[ \barb({\realopt{q}}|0; H_{{\realopt{u}}}) =\barb({\realopt{q}}|0; R_0)=0. \] Therefore, by \cref{prop:pde-metric-regularity}, \emph{we have no metric regularity if the constraint \eqref{eq:linfty-constr-discussion} is almost never active.} (Any small change could force it to be active, and hence cause a large change in the dual variable.) However, even if the constraint \eqref{eq:linfty-constr-discussion} is active on an open set $E$, we may reason as in \cref{sec:stability-l1} to show instability. The only way to obtain stability is therefore with Moreau--Yosida regularization. \paragraph{Data stability} Stability with respect to data (\cref{prop:pde-data-stability}) again requires that \[ \barb({\realopt{q}}|0; R_0) = \barb({\realopt{q}}|0; H_{{\realopt{u}}})>0. \] Hence, we cannot have data stability without Moreau--Yosida regularization. If $\gamma>0$, we additionally need to prove \eqref{eq:pde-cg}. Using the reverse triangle inequality and assumption (\ref{ass:a4}), we have that \begin{equation} \alpha\norm{\xi}^2+\iprod{\grad_u [\grad S({\realopt{u}})^*\realopt{v}]\xi}{\xi} \ge (\alpha - C)\norm{\xi}^2 \ge c_G \norm{\xi}^2 \end{equation} for $\alpha$ sufficiently large, and hence data stability holds. Since in this case we do not have an a priori bound on ${\realopt{\Dual}}$, the choice of $\alpha$ depends on ${\realopt{\Dual}}$ and hence on the data $y^\delta$. \paragraph{Stability with respect to $\gamma$} As in the case of $L^1$-fitting, stability with respect to the Moreau--Yosida parameter only holds at $\gamma>0$. \subsection{Regularization through projection} \label{sec:discretization} Discretization provides an alternative to regularization. 
Indeed, in practice, the data $y^\noise$ lies in a finite-dimensional subspace $Y' \subset Y=L^2(\Omega)$. With $\mathcal{P}$ the orthogonal projection from $Y$ into $Y'$, we then replace the fitting term $F$ by $F_{\mathcal{P}} \coloneqq F\circ \mathcal{P}$. We then have with $y=y'+y^\bot \in Y'\oplus (Y')^\bot=Y$ that \begin{equation} F_{\mathcal{P}}^*(v) = \sup_{y^\bot}\, \langle y^\bot, v \rangle + (\sup_{y'}\, \langle y', v \rangle - F(y')) = \begin{cases} F^*(v), & v\in ((Y')^\bot)^\bot = Y',\\ \infty, & v \notin Y'. \end{cases} \end{equation} We emphasize that in this approach we only discretize the fitting term, while the nonlinear operator $S$ and the regularizer $G$ remain infinite-dimensional. Hence, \[ \partial F_{\mathcal{P}}^*(v) = \begin{cases} \partial F^*(v)+(Y')^\perp, & v \in Y', \\ \emptyset, & v \not \in Y'. \end{cases} \] From the definition \eqref{eq:graphderiv} of the graphical derivative, we calculate that if either $v\not\in Y'$ or $\dirv \not\in Y'$, then $D[\partial F_{\mathcal{P}}^*](v|\eta)(\dirv)=\emptyset$. When $v\in Y'$ and $\dirv \in Y'$, we have for any $\tilde \eta \in \partial F^*(v) \isect(\eta + (Y')^\bot)$ the inclusion \[ \begin{split} D[\partial F_{\mathcal{P}}^*](v|\eta)(\dirv) & = D[\mathcal{P}\partial F^*](v|\eta)(\dirv)+(Y')^\bot \\ & \supset \mathcal{P}D[\partial F^*](v|\tilde\eta)(\dirv)+(Y')^\bot \\ & = D[\partial F^*](v|\tilde\eta)(\dirv)+(Y')^\bot. 
\end{split} \] Consequently, by basic properties of convex hulls, we also have the inclusion \begin{equation} \label{eq:discr-inner-approx} \widetilde{D[\partial F_{\mathcal{P}}^*]}(v|\eta)(\dirv) \supset \begin{cases} \widetilde{D[\partial F^*]}(v|\tilde\eta)(\dirv)+(Y')^\bot, & v, \Delta v \in Y',\, \eta \in \partial F^*(v) + (Y')^\bot, \\ \emptyset, & \text{otherwise.} \end{cases} \end{equation} Suppose now that $DF^*$ satisfies \eqref{eq:dfstar-expr}, that is, \begin{equation} \notag \widetilde{D[\partial F^*]}(\alt{v}|\alt{\eta})(\dir{v}) \supset \begin{cases} \gamma \dir{v} + \polar{\DerivConeF(\alt{v}|\alt{\eta})}, & \dir{v} \in \DerivConeF[\alt{v}|\alt{\eta}], \\ \emptyset, & \dir{v} \not\in \DerivConeF[\alt{v}|\alt{\eta}]. \end{cases} \end{equation} Since any cone $V$ and subspace $Y'$ satisfy the easily verified identity \[ \polar{(V \isect Y')} = \polar V + (Y')^\bot, \] it follows for $v, \Delta v \in Y'$ and $\eta \in \partial F^*(v) + (Y')^\bot $ that \begin{equation} \label{eq:structure-f-project} \widetilde{D[\partial F_{\mathcal{P}}^*]}(\alt{v}|\alt{\eta})(\dir{v}) \supset \begin{cases} \gamma \dir{v} + \polar{\DerivConeF(\alt{v}|\tilde{\eta})} + (Y')^\bot, & \dir{v} \in \DerivConeF[\alt{v}|\tilde{\eta}] \isect Y', \\ \emptyset, & \text{otherwise}. \end{cases} \end{equation} This holds for arbitrary $\tilde \eta \in \partial F^*(v) \isect(\eta + (Y')^\bot)$. In fact, in the following, we let $\tilde \eta \in \partial F^*(v)$ be the free parameter, and take, for example, $\eta=\tilde\eta$. Let now $H_{{\realopt{u}},\mathcal{P}}$ and $R_{0,\mathcal{P}}$ be defined by \eqref{eq:h-def} and \eqref{eq:r0-def}, respectively, with $F_{\mathcal{P}}$ in place of $F$. 
An application of \cref{thm:limit-polar-projection-lower-bound-fstar} to the inclusion \eqref{eq:structure-f-project} then gives the lower bound \begin{equation} \label{eq:l1-metric-regularity-kvbound} \begin{aligned}[t] \barb({\realopt{q}}|0;\, & H_{{\realopt{u}},\mathcal{P}}) =\barb({\realopt{q}}|0; R_{0,\mathcal{P}}) \\ & \ge \inv\alpha\sup_{t>0} \inf\left\{ \frac{\norm{\grad S({\realopt{u}})\grad S({\realopt{u}})^* z-\nu}}{\norm{z}} \,\middle|\, \begin{array}{l} 0 \ne z \in \DerivConeF[\alt{v}|\tilde{\eta}] \isect Y', \\ \nu \in \polar{\DerivConeF[\alt{v}|\tilde{\eta}]} + (Y')^\perp, \\ \tilde{\eta} \in \partial F^*(\alt{v}),\, \alt{v} \in Y',\\ \norm{\alt{v}-\realopt{v}} < t,\, \norm{\mathcal{P}(\tilde{\eta}-\realopt{\eta})} < t \end{array} \right\} \\ & = \inv\alpha\sup_{t>0} \inf\left\{ \frac{\norm{\mathcal{P}\grad S({\realopt{u}})\grad S({\realopt{u}})^* z-\mathcal{P}\tilde\nu}}{\norm{z}} \,\middle|\, \begin{array}{l} 0 \ne z \in \DerivConeF[\alt{v}|\tilde{\eta}] \isect Y', \\ \tilde\nu \in \polar{\DerivConeF[\alt{v}|\tilde{\eta}]}, \\ \tilde{\eta} \in \partial F^*(\alt{v}),\, \alt{v} \in Y',\\ \norm{\alt{v}-\realopt{v}} < t,\, \norm{\mathcal{P}(\tilde{\eta}-\realopt{\eta})} < t \end{array} \right\}. \end{aligned} \end{equation} Let $\{e_1, \ldots, e_N\}$ be an orthonormal basis for $Y'$, and for any $v \in Y$, denote $v_i \coloneqq \iprod{e_i}{v}$. Then \eqref{eq:l1-metric-regularity-kvbound} forces \begin{equation} \label{eq:discr-t-bound} \sum_{i=1}^N \abs{\alt{v}_i-\realopt{v}_i}^2 \le t^2 \quad\text{and}\quad \sum_{i=1}^N \abs{\tilde{\eta}_i-\realopt{\eta}_i}^2 \le t^2. 
\end{equation} Suppose there exist for each $i=1,\ldots,N$ closed sets $A_i, B_i \subset \mathbb{R}$ satisfying for each $\tilde{\eta}$ and $\alt{v}$ with $\tilde{\eta} \in \partial F^*(\alt{v})$ the conditions \begin{subequations} \label{eq:discr-cond} \begin{align} \label{eq:discr-cond-sc} \text{either}\quad (\alt{v}_i \not\in A_i \text{ and } \tilde{\eta}_i \in B_i) &\quad\text{or}\quad (\alt{v}_i \in A_i \text{ and } \tilde{\eta}_i \not\in B_i), \\ \label{eq:discr-cond-b} \tilde\eta_i \not\in B_i \text{ and } z \in \DerivConeF[\alt{v}|\tilde{\eta}] \isect Y' & \implies z_i=0, \\ \label{eq:discr-cond-a} \alt{v}_i \not\in A_i \text{ and } \tilde\nu \in \polar{\DerivConeF[\alt{v}|\tilde{\eta}]} & \implies \tilde\nu_i=0. \end{align} \end{subequations} Observe that the condition \eqref{eq:discr-cond-sc} is a type of strict complementarity condition. We note the following two situations. \begin{enumerate}[label=(\roman*)] \item If $\realopt{v}_i \not\in A_i$, \eqref{eq:discr-t-bound} for small enough $t>0$ forces $\alt{v}_i \not\in A_i$, and through \eqref{eq:discr-cond-sc}, $\tilde\eta_i \in B_i$. Thus by \eqref{eq:discr-cond-a}, $\tilde\nu_i=0$. \item Likewise, if $\realopt{\eta}_i \not\in B_i$, \eqref{eq:discr-t-bound} for small enough $t>0$ gives $\tilde{\eta}_i \not\in B_i$. Thus by \eqref{eq:discr-cond-b}, $z_i=0$, and by \eqref{eq:discr-cond-sc}, $\alt{v}_i \in A_i$. \end{enumerate} Thus the situations are mutually exclusive, and, by \eqref{eq:discr-cond-sc}, one of them has to occur. 
\emph{Importantly, therefore, $\tilde\nu_i=0$ has to hold when we do not have the constraint $z_i=0$.} Therefore, letting $\tilde\nu_i$ vary freely whenever we have the constraint $z_i=0$, and $z_i$ vary freely whenever we have the constraint $\tilde\nu_i=0$, we obtain from \eqref{eq:l1-metric-regularity-kvbound} the lower estimate \begin{equation} \label{eq:l1-metric-regularity-kvbound-discr-2} \begin{aligned}[t] \barb({\realopt{q}}|0; H_{{\realopt{u}},\mathcal{P}}) & =\barb({\realopt{q}}|0; R_{0,\mathcal{P}}) \\ & \ge \inv\alpha \inf\left\{ \frac{\norm{\mathcal{P}\grad S({\realopt{u}})\grad S({\realopt{u}})^*\mathcal{P} z}}{\norm{z}} \,\middle|\, z \in Y',\, z_i=0 \text{ if } \realopt{v}_i \in A_i \right\} \\ & \ge \inv\alpha \inf\left\{ \frac{\norm{\mathcal{P}\grad S({\realopt{u}})\grad S({\realopt{u}})^*\mathcal{P} z}}{\norm{z}} \,\middle|\, z \in Y' \right\}. \end{aligned} \end{equation} Since $\grad S({\realopt{u}})$ is invertible (as the inverse of a linear partial differential operator; cf. \eqref{eq:forward_lin}) and the orthogonal projection $\mathcal{P}$ is self-adjoint, the restriction of $\mathcal{P}\grad S({\realopt{u}})\grad S({\realopt{u}})^*\mathcal{P}$ to $Y'$ is a self-adjoint positive definite operator on the finite-dimensional space $Y'$ and therefore boundedly invertible. This implies the existence of a constant $c>0$ such that \begin{equation} \norm{\mathcal{P}\grad S({\realopt{u}})\grad S({\realopt{u}})^*\mathcal{P} z} \geq c \norm{z}\qquad (z \in Y'), \end{equation} which yields $\barb({\realopt{q}}|0; H_{{\realopt{u}},\mathcal{P}})=\barb({\realopt{q}}|0; R_{0,\mathcal{P}}) \geq \alpha^{-1}c >0$ and therefore metric regularity. \bigskip It remains to verify the conditions \eqref{eq:discr-cond}. For this, let us further assume that the discretization is piecewise constant, that is $e_i=\abs{\Omega_i}^{-1}\chi_{\Omega_i}$, for some subdomains $\Omega_i \subset \Omega$ with $\sum_{i=1}^N \chi_{\Omega_i}=\chi_\Omega$. 
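The decisive contrast to the infinite-dimensional setting -- a positive smallest eigenvalue of the restriction of $\mathcal{P}\grad S({\realopt{u}})\grad S({\realopt{u}})^*\mathcal{P}$ to $Y'$ -- can be illustrated numerically. The sketch below uses a Gaussian smoothing matrix as an ad-hoc stand-in for $\grad S({\realopt{u}})$ (not the actual PDE solution operator; grid sizes and kernel width are arbitrary choices of ours):

```python
import numpy as np

n, N = 120, 8                        # fine grid; dimension of Y'
x = np.linspace(0.0, 1.0, n)

# orthonormal piecewise-constant basis e_i of Y' (n divisible by N)
E = np.zeros((n, N))
for i in range(N):
    E[i * (n // N):(i + 1) * (n // N), i] = 1.0
E /= np.linalg.norm(E, axis=0, keepdims=True)

# injective smoothing matrix: stand-in for grad S(u)
T = np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.1**2)) / n
B = T @ T.T                          # analogue of grad S grad S^*
M = E.T @ B @ E                      # restriction of P B P to Y'
c_coarse = np.linalg.eigvalsh(M).min()
c_fine = np.linalg.eigvalsh(B).min()
print(c_coarse, c_fine)
```

The restriction to the coarse space has a smallest eigenvalue bounded away from zero, while on the fine grid the smallest eigenvalue is numerically zero, mirroring the unboundedness of the inverse in $L^2(\Omega)$.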
For the $L^1$-fitting example, \eqref{eq:D-fstar-l1-constr} then gives \begin{align} \abs{\alt{v}_i}<1 &\implies \tilde{\eta}\chi_{\Omega_i}=0. \intertext{For $t>0$ small enough, \eqref{eq:l1fitting-z} and \eqref{eq:l1fitting-nu} therefore give} \label{eq:discr-cond1} \abs{\alt{v}_i}<1 &\implies \tilde\nu_i=0 \text{ and } z_i \in \mathbb{R}. \end{align} This suggests taking $A_i=\{-1,1\}$ to satisfy \eqref{eq:discr-cond-a}. Then, for \eqref{eq:discr-cond-sc} to be satisfied, the condition \eqref{eq:D-fstar-l1-constr} leaves $B_i=\{0\}$ as the only possibility. Further, for the strict complementarity within \eqref{eq:discr-cond-sc} to hold, it is necessary to impose that the middle case in each of \eqref{eq:l1fitting-z} and \eqref{eq:l1fitting-nu} does not occur. This strict complementarity condition may also be stated as \begin{equation} \label{eq:l1-strict-compl} (1-\abs{\realoptv_i})\realopt{\eta}_i=0 \quad\text{and}\quad 0<(1-\abs{\realoptv_i})+\abs{\realopt{\eta}_i}. \end{equation} To verify \eqref{eq:discr-cond-b}, suppose that $\tilde\eta_i \not \in B_i$. It may happen that $\tilde\eta\chi_{\Omega_i}$ behaves wildly. Nevertheless, by $\tilde\eta_i \not \in B_i$, there are points $x \in \Omega_i$ where necessarily $\tilde\eta(x)\ne 0$. Hence the condition \eqref{eq:D-fstar-l1-constr} forces $0 \ne \alt{v}(x)=\sign \tilde\eta(x)$. Thus $z(x)=0$, which through $z \in Y'$ forces $z_i=0$, proving \eqref{eq:discr-cond-b}. Thus \eqref{eq:discr-cond} and the estimate in \eqref{eq:l1-metric-regularity-kvbound-discr-2} hold for projection-regularized $L^1$ fitting under the strict complementarity condition \eqref{eq:l1-strict-compl}. 
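Componentwise, \eqref{eq:l1-strict-compl} is mechanical to check; the following sketch (function name, tolerance, and sample pairs are ours, purely for illustration) evaluates it on a few coefficient pairs $(\realoptv_i, \realopt{\eta}_i)$:

```python
def strict_complementarity_l1(v_i, eta_i, tol=1e-12):
    """Check (1 - |v_i|) * eta_i = 0 and (1 - |v_i|) + |eta_i| > 0,
    the strict complementarity condition for projection-regularized
    L^1 fitting."""
    slack = 1.0 - abs(v_i)
    return abs(slack * eta_i) < tol and slack + abs(eta_i) > tol

cases = {
    (1.0, 0.3): strict_complementarity_l1(1.0, 0.3),  # active v, nonzero eta
    (0.5, 0.0): strict_complementarity_l1(0.5, 0.0),  # inactive v, zero eta
    (1.0, 0.0): strict_complementarity_l1(1.0, 0.0),  # degenerate pair
    (0.5, 0.3): strict_complementarity_l1(0.5, 0.3),  # complementarity violated
}
print(cases)
```

Only the two non-degenerate, complementary pairs pass, matching the discussion of when a small perturbation of $\realoptv$ restores the condition.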
For $L^\infty$-fitting, still using piecewise constant discretization, studying \eqref{eq:linfty-fitting-constr}, \eqref{eq:linffitting-z}, and \eqref{eq:linffitting-nu}, we see that we can take $A_i=\{0\}$ and $B_i=\{-\delta, \delta\}$ to obtain the same results under the strict complementarity condition \begin{equation} \label{eq:linfty-strict-compl} \realoptv_i(\delta-\abs{\realopt{\eta}_i})=0 \quad\text{and}\quad 0<\abs{\realoptv_i}+(\delta-\abs{\realopt{\eta}_i}). \end{equation} Note that the strict complementarity conditions \eqref{eq:l1-strict-compl} and \eqref{eq:linfty-strict-compl} can always be satisfied after a small perturbation of $\realoptv$, if necessary. Indeed, if $\realopt\eta_i$ is active ($\realopt\eta_i=0$ resp.~$\abs{\realopt\eta_i}=\delta$), then by \eqref{eq:D-fstar-l1-constr} resp.~\eqref{eq:linfty-fitting-constr}, $\realoptv_i$ can be made inactive -- in which case the strict complementarity condition is satisfied -- while maintaining the optimality condition $\realopt\eta \in \partial F_{\mathcal{P}}(\realoptv)$. This change in $\realoptv$ can, however, alter the constant in \eqref{eq:pde-cg}. Observe further that for the original infinite-dimensional problem, similar strict complementarity conditions (pointwise almost everywhere) could be derived, but these would not be sufficient to obtain metric regularity since we still have the problem that the inverse of $\grad S({\realopt{u}})\grad S({\realopt{u}})^*$ is unbounded on $L^2(\Omega)$. Further, in the $L^2$ topology, even a strong complementarity condition (with $\varepsilon$ lower bound in the inequality) would not be sufficient to transport it from the optimal solution to the perturbed variables $\altv$ and $\alt\eta$. \bigskip For stability with respect to perturbations of the data, condition \eqref{eq:pde-cg} is also required. 
Since this condition is independent of $F$, it will hold under discretization whenever it holds for the original problem (with a possibly different constant $c_G$ since now ${\realopt{\Dual}}\in Y'\subset Y$). \section{Conclusion} The purpose of this work was to derive explicit stability criteria for solutions to saddle-point problems in Hilbert spaces, in particular those arising from the minimization of nonsmooth nonlinear functionals commonly occurring in parameter identification, image processing, or PDE-constrained optimization problems. Our main results are a pointwise characterization of regular coderivatives of convex subdifferentials of integral functionals and explicit conditions for metric regularity of the corresponding variational inclusions. These make it possible to verify the Aubin property for concrete problems. While the results for our model problems are mostly negative (no regularity unless regularization or discretization is introduced), they are still useful: Our function-space analysis provides a unified framework for \emph{any} conforming regularization; in particular, it shows that the stability properties are independent of the discretization of the unknown parameter. Furthermore, for \emph{arbitrarily small} fixed Moreau--Yosida parameters, the properties are also independent of the discretization of the data; this is especially important for the convergence of numerical algorithms, where it translates into a discretization-independent number of iterations required to reach a given tolerance. This work can be extended in a number of directions. In a follow-up paper, we will apply our results on the Aubin property of pointwise set-valued mappings to the convergence analysis of the nonlinear primal-dual extragradient method from \cite{Valkonen:2014} in function spaces. We also plan to investigate the possibility of obtaining \emph{partial} stability results with respect to only the primal variable without regularization or discretization.
An alternative would be to exploit the uniform stability with respect to regularization for fixed discretization, and with respect to discretization for fixed regularization, to obtain a combined convergence for a suitably chosen net $(\gamma,h)\to (0,0)$; this is related to the adaptive regularization and discretization of inverse problems \cite{KaltenbacherKirchnerVexler11}. Furthermore, it would be of interest to extend our analysis to include nonsmooth regularizers $G$, which were excluded in the current work for the sake of presentation. It would also be worthwhile to try to adapt the stability analysis to make use of the limiting coderivative and its richer calculus; in particular to remove the geometric derivability assumption by directly working with the limiting coderivative. Finally, the pointwise characterization of coderivatives could be useful in deriving more explicit optimality conditions for bilevel optimization problems.
\subsection*{Non-expected utility decision criteria} A careful inspection of the proof of manipulation-proofness (Theorem 3) reveals two important facts. First, notice that we never used any properties of the underlying probability space besides the facts that (i) the random variable representing the risk faced by the insured had full support and that (ii) the two integrals in \eqref{Problem I} are finite. In layman's terms, we only used the fact that when Bob has an accident (i) the damage to his bike can fully range from small scratches to a total loss and (ii) that Bob's risk is not infinitely large. This is important because we could have considered other decision criteria without affecting the veracity of Theorem 3. For instance, replacing the probability $\mathbb{P}$ with a capacity $\nu$ in one integral and integrating in the sense of Choquet instead of Lebesgue would have changed nothing, provided that the set of solutions to \eqref{Problem I} remains non-empty and another technical condition is satisfied. This is particularly important for risk practitioners because it implies that one does not want to sell insurance contracts vulnerable to arson-type actions no matter how difficult it is to evaluate the fundamentals of the risk to be borne.\footnote{The technical condition is that the risk to insure must still have full support under the probabilistic belief of the decision-maker with the non-expected utility criterion. The optimal contract might not be manipulation-proof otherwise. This happens when a decision-maker believes that the set of realizations for which a contract induces manipulation is a set of measure zero.}\footnote{Non-expected utility decision criteria can be interpreted as describing situations where the decision-maker has to statistically infer the risks. This extension does not seem to change the fundamental message of the article.
In particular, it seems that one can keep invoking arson-type actions to justify the no-sabotage condition.} \subsection*{Ex-ante moral hazard in loss reduction} The most well-known and studied agency problem is the classical Principal-Agent problem with hidden actions, or ex-ante moral hazard.\footnote{We refer to \citet{winter2013optimal} for an introduction to the Principal-Agent problem in the context of insurance design.} In the context of insurance, the hidden action is often interpreted as unobservable preventive measures that the insured takes to reduce his risk, for instance by driving carefully. There does not seem to be a trade-off in the provision of incentives to mitigate ex-ante moral hazard and to prevent arson-type actions. This is because the optimal contract with only ex-ante moral hazard (often) satisfies the no-sabotage condition and is therefore manipulation-proof with regard to arson-type actions. Precisely, the contract satisfies the no-sabotage condition when the \textit{first-order approach} is valid, a condition most often assumed in the literature. \subsection*{Limitations and avenues for further research} The result that the optimal contract is manipulation-proof seems to contradict the real world, where arson and insurance fraud do happen. It is not. This apparent contradiction comes from the interplay of three important assumptions: we defined arson-type actions as (i) the ability to physically destroy an object while (ii) having no chance of being caught and while (iii) assuming implicitly that the insured faces no other risks.\\ While (i) is a natural consideration in the context of Property insurance, it does not capture all the possibilities of insurance fraud. On theoretical grounds, the classical costly state falsification model of \citet{crocker1998honesty} contains an intuitive example where manipulations happen.
In their model, the insured can defraud the contract by declaring greater damages than the real damages, and the insurer cannot verify the claims. The optimal contracts always entail manipulations in equilibrium because the marginal cost of lying is essentially null for small lies. However, the insured's ability to file dishonest claims depends on the damage's size, and the insurer can still differentiate between small and large fundamental losses. This unravelling implies that a contract with manipulations in equilibrium dominates (their model's) manipulation-proof contracts.\footnote{The topic of fraud has received a great deal of attention in the literature, and our short discussion is far from exhaustive. We refer to \citet{picard2013economic} for an overview.}\\ With (ii), we implicitly assumed that arson-type actions cannot be detected by auditing. This is unlikely to be true in general, as arson-type actions often leave evidence. For instance, fire accelerants like gasoline leave traces that can be detected by forensic investigators. Given that perfectly preventing arson-type actions by using manipulation-proof contracts is costly in terms of welfare, one might want to control for these actions by auditing suspect claims instead of implementing manipulation-proof contracts. This opens up an interesting theoretical possibility, where arson-type actions might happen in equilibrium as a result of optimal contracting.\\ There are situations where arson-type risks do not seem to be realistic on practical grounds. The leading example is the field of health insurance, where arson-type actions as we modelled them would require the insured to commit self-injury. However, anecdotes exist where people attempted to defraud an insurance contract by hurting themselves, health insurance contracts routinely contain self-inflicted injury exclusion clauses, and insurance companies routinely investigate injury claims.
It is thus not far-fetched to think that self-injuries can be modeled using the method developed in this article. This brings us to (iii): our manipulation-proof result does not cover the situation where the insured also faces uninsurable (background) risks. This observation also opens up another interesting theoretical possibility, where arson-type actions like self-injuries could arise in equilibrium as a response to large uninsurable losses. \subsection*{Literature review} \citet{huberman1983optimal} contains the earliest mention of arson-type risks of which we are aware. The authors analyze the optimal insurance contract when respecting the contract involves non-actuarial costs such as administrative costs. Their model's optimal contract is a completely disappearing deductible, and the authors observe that this type of contract is infrequently observed in real life, if at all. They show that if the insured can cause extra damage then their model's next best contract is a simple deductible.\\ \citet{picard2000design}'s introduction of arson-type risks is similar to \citet{huberman1983optimal}'s. \citet{picard2000design} designs the optimal insurance contract when the insured can defraud the contract and manipulate the audit costs. The model's optimal contract is discontinuous, another oddity infrequently observed in real life. The author then shows that this discontinuity disappears when there are arson-type risks.\\ Both the contract's continuity and its bounded slope obtain from the same mathematical result. Formally, the manipulation stage of the game defines an optimisation problem. Since these contracts can be discontinuous, as noted in \citet{picard2000design}, first-order conditions cannot be used. This is where the results of \citet{Lauzier2019positioningmaths} become necessary.
These results also tell us that the value function of the manipulation stage's optimisation problem is continuous and has a bounded slope.\\ Using an argument similar to one made in \citet{Lauzier2019securitydesign}, we determine that the shape of the contract is continuous and has a bounded slope. This is because replacing a contract by the value function of the optimisation problem it defines does not change the amount of insurance provided. The new contract does not induce manipulation; it is therefore cheaper, as the expected waste due to manipulation is priced by the insurer at nil.\\ While \citet{Lauzier2019securitydesign} obtained acceptable manipulations, we cannot fathom them in the case of insurance contracts. In insurance contracts, the possibility of manipulations simply hurts the insured by lowering the protection offered by the insurer in equilibrium. Since all contracts mixing coinsurance and deductibles are robust to arson-type risks, there does not seem to be a trade-off between the prevention of arson-type manipulations and the provision of incentives to prevent ex-ante moral hazard. This is exactly the opposite of the result obtained in \citet{Lauzier2019securitydesign}.\\ We will discuss \citet{spaeter1997design} in the main text and we refer to \citet{dionne2000handbook} for an introduction to the Arrow-Borch-Raviv model.\\ \subsection*{Related literature} We are not the first to point out that the possibility of inflating an insurance claim by physically destroying an object imposes structure on the type of contract which an insurer can offer. \citet{huberman1983optimal} contains the earliest mention of arson-type actions of which we are aware. The authors analyze the optimal insurance contract when respecting the contract involves administrative costs but where there are economies of scale. Their model's optimal first-best contract is a completely disappearing deductible [Fig. 1(a)].
They show that if the insured can cause extra damage, then the second-best contract is a straight deductible [Fig. 1(b)]. Similarly, \citet{picard2000design} introduces arson-type actions to restrict the set of feasible contracts in a setting where the insured can defraud the contract and manipulate the audit costs. The model's first-best contract is discontinuous, and the author shows that this discontinuity disappears when there are arson-type risks.\\ \begin{figure}[ht] \caption{Disappearing vs.\ straight deductible} \begin{subfigure}{0.5\textwidth} \begin{tikzpicture} \draw[<->] (0,5) -- (0,0) -- (5,0); \draw[blue,thick](0,0) -- (2,0) -- (4,4); \draw[gray,dotted] (0,0) -- (4,4); \node[below] at (5,0) {$X$}; \node[left] at (0,5) {$Y$}; \node[below] at (2,0) {$d$}; \end{tikzpicture} \caption{A completely disappearing deductible} \label{figure1:sub1} \end{subfigure} \begin{subfigure}{0.5\textwidth} \begin{tikzpicture} \draw[<->] (0,5) -- (0,0) -- (5,0); \draw[blue,thick](0,0) -- (2,0) -- (4,2); \draw[gray,dotted] (0,0) -- (4,4); \node[below] at (5,0) {$X$}; \node[left] at (0,5) {$Y$}; \node[below] at (2,0) {$d$}; \end{tikzpicture} \caption{A straight deductible} \label{figure1:sub2} \end{subfigure} \end{figure} Following the aforementioned articles, it has become routine in the literature to assume that the retention schedule is monotonic. For instance, \citet{cai2020optimal}'s survey contains an exhaustive overview of the use of this (co)monotonicity assumption in the reinsurance literature. This assumption is now referred to as the \textit{no-sabotage condition}, following \citet{carlier2003pareto}. The name refers to the observation that non-monotonic retention schedules seem to incite the insured to inflate his losses. Our model provides a formal counterpart to this argument, thereby shedding light on its veracity and limitations. We obtain the contract's continuity, its bounded slope and the no-sabotage condition as implications of the same result.
Formally, since the contract can be discontinuous, as in \citep{picard2000design}, we cannot use standard first-order conditions to characterize the optimal manipulation correspondence. But the manipulation stage of the game is a positioning choice problem, a class of optimisation problems we defined and characterized in \citep{Lauzier2019positioningmaths}. The ad-hoc envelope theorem of \citet{Lauzier2019positioningmaths} thereby guarantees that the value function of the manipulation stage's problem is continuous and has a bounded slope. This article's main result follows from observing that any contract is dominated by the value function of the optimisation problem it defines in the manipulation stage of the game. The no-sabotage condition obtains as a special case, being the situation where there are no extra costs to inflating the losses. This article thus complements the literature on the desirability of comonotonic contracts \citep{landsberger1994co, dana2003modelling, LUDKOVSKI20081181, CARLIER2012207} by showing how comonotonicity also naturally obtains as the equilibrium outcome of a contracting game.\\ Many types of contracts satisfy the no-sabotage condition and are therefore manipulation-proof with regard to arson-type actions.\footnote{This includes full insurance contracts, straight deductibles and pure coinsurance contracts. See section 2.1.1.} Thus, if a contract is a first-best solution to a problem of insurance design without arson-type actions, it is also a second-best solution to the same problem with arson-type actions. In other words, the possibility of arson-type actions affects the design of insurance contracts if, and only if, the first-best contract is not itself manipulation-proof. Such departures routinely happen when the insurer faces administrative costs to respect the contract.
We therefore cast our model in the context of \citet{spaeter1997design}, which contains a general analysis of the design of insurance contracts with administrative costs. This approach allows us to greatly streamline the proofs and thus the exposition.\\ While \citet{Lauzier2019securitydesign} obtained acceptable manipulations while designing securities, we cannot fathom them in the case of insurance contracts. In insurance, the possibility of arson-type manipulations simply hurts the insured by lowering the coverage offered by the insurer in equilibrium. Since all contracts mixing deductibles, coinsurance, and upper limits are robust to arson-type risks, there does not seem to be a trade-off between the prevention of arson-type manipulations and the provision of incentives to prevent ex-ante moral hazard. This is exactly the opposite of the result obtained in \citet{Lauzier2019securitydesign}. We discuss this observation further in the conclusion. \subsection*{Notation and problem} We use standard notation throughout. Let $S$ be a set of states of the world and let $\mathbb{P}$ be a probability measure for states $s\in S$. Let the function $X:S \rightarrow [0,M]$ be a continuous random variable representing the risk to be insured against, with $X$ having full support $[0,M]$ and some mass at $0$, said mass representing the possibility that no accident ever occurs. Let the function $c:[0,M] \rightarrow \mathbb{R}_+$ be the cost of respecting the insurance contract and let $Y:[0,M]\rightarrow [0,M]$ be the state-contingent, variable part of the contract\footnote{ So $Y\in B_+(\mathcal{B}([0,M]))$, the space of non-negative and bounded functions (sup-norm) which are measurable with regard to the Borel $\sigma$-algebra of $[0,M]$.}. The amount $H\geq 0$ denotes the price of the contract while $W_0>0$ is the initial wealth of the insured and $\rho \geq 0 $ is the loading factor of the insurer.
As usual, the function $u:\mathbb{R}_+ \rightarrow \mathbb{R}_+ $ is a twice differentiable and strictly concave Bernoulli utility function satisfying Inada conditions.\\ \noindent The game proceeds as follows: \begin{description} \item[Stage 1] the insured buys the insurance contract $Y$ at price $H$; \item[Stage 2] the state $s$ realizes and loss $X(s)$ occurs (Nature moves); \item[Stage 3] the insured observes the loss and decides to take hidden action $z\in [0,M]$ to augment the damages; \item[Stage 4] the contract is implemented without renegotiation. \end{description} \noindent The solution concept is a weak Perfect Bayesian equilibrium where we assume that the insured takes the insurer's most favoured action whenever indifferent\footnote{We refer to \citet{mas1995microeconomic} for a definition of our equilibrium concept.}.\\ By backward induction the optimal contract $(H,Y)$ solves the following optimization program: {\scriptsize \begin{align} \sup_{H \geq 0, Y \in B_+(\mathcal{B}([0,M]))} & \int u(W_0 - H - X(s)- z(s) -g(z(s))+Y(X(s)+z(s)))d\mathbb{P} \tag{Problem I} \label{Problem I}\\ s.t.\, & \,0 \leq Y \tag{LLI} \label{LLI}\\ &\, Y \leq X \tag{BI} \label{BI}\\ & (1+\rho)\int Y(X(s)+z(s))+c(Y(X(s)+z(s))) d \mathbb{P} \leq H \tag{PCI} \label{pci} \\ & \forall s,\, z(s) \in \arg\max_z \{ X(s) - z -g(z)+Y(X(s)+z)\} \tag{IIC} \label{iic} \end{align}} where \eqref{LLI} is the insured's limited liability constraint, \eqref{BI} is the ``boundedness constraint'' stating that the insurer will never pay more than the observed loss, \eqref{pci} is the insurer's participation constraint, \eqref{iic} is the insured's interim incentive compatibility constraint and the function \begin{align*} g(z)= \begin{cases} +\infty &\text{ if } z<0\\ \beta z &\text{ if } z\geq 0 \end{cases} \end{align*} represents an extra cost of inflicting damage (Bob buying a sledgehammer).\\ We now aim to prove that $Y$ is Lipschitz continuous with constant $\leq 1+\beta$.
\begin{assumption} We assume throughout that $Y$ is a non-decreasing function. \end{assumption} The next statement is standard and will not be proved: \begin{lemma} The insurer's participation constraint \eqref{pci} must be binding.\end{lemma} The interim incentive compatibility constraint defines an optimisation problem that was characterized in \citet{Lauzier2019positioningmaths} as a positioning choice problem. It immediately follows from Theorem 3 (p.12) that the value function $V(s)$ is Lipschitz continuous with constant $\leq 1+\beta$. Paralleling the argument of Theorem 6 (p.19) in \citet{Lauzier2019securitydesign}, we conclude: \begin{proposition} Any optimal contract $Y^*$ is manipulation-proof: for every $s \in S$ it is $0 \in \arg\max_z \{ X(s) - z -g(z)+Y(X(s)+z)\}$.\end{proposition} \begin{proof} Suppose, by way of contradiction, that $Y^*$ is optimal but that there exists an $s\in S$ such that for every $z(s)\in \arg\max_z \{ X(s) - z -g(z)+Y(X(s)+z)\}$ it is $z(s)>0$. Consider the alternative contract $(\Tilde{H},\Tilde{Y})$ for which we set $\Tilde{Y}(s)=V(s)$, where $V(s)$ is the value function of the optimisation problem \eqref{iic} defined by $Y^*$. Since $\Tilde{Y}(s)$ is Lipschitz with constant $1+\beta$ it is manipulation-proof, so $V(s)=\Tilde{V}(s)$, where $\Tilde{V}(s)$ is the value function of the optimisation problem \eqref{iic} defined by $\Tilde{Y}$. In words, this means that the insured receives state-by-state the same final payoff under both contracts. By Lemma 1, it holds that $\Tilde{H}<H$ and $(\Tilde{H},\Tilde{Y})$ dominates $(H,Y^*)$, a contradiction.\end{proof} We can intuitively understand Proposition 2 as stating that arson-type risks are fully priced by the insurer. The insured will thus prefer the cheapest contract as he receives state-by-state the same final amount under both contracts. Or, from Bob's perspective, he was offered two contracts offering the same protection.
An expensive one which would allow him to take a sledgehammer to his bike and a cheaper one which did not, so Bob, being a rational person, chose the cheaper option.\\ \begin{corollary} Any optimal contract $Y$ must be Lipschitz and with slope $\leq 1+\beta$.\end{corollary} Furthermore, together, Corollary 3 and Assumption 1 imply that, by Theorem 3 of \citet{Lauzier2019positioningmaths}, the family of contracts $$\{Y\in B_+(\mathcal{B}([0,M])):Y(s)=V(s)\}$$ consists of functions which are almost everywhere differentiable and everywhere directionally differentiable.\\ We can thus rewrite \eqref{Problem I} as \begin{align} \max_{H \geq 0, Y \in B_+(\mathcal{B}([0,M]))} & \int u(W_0 - H - X(s) +Y(X(s)))d\mathbb{P} \tag{Problem S} \label{Problem S}\\ s.t.\, & \,0 \leq Y \leq X \label{S1}\\ & slope(Y)\leq 1+ \beta \label{S2}\\ & (1+\rho)\int Y(X(s))+c(Y(X(s))) d \mathbb{P} = H \label{S3} \end{align} under the implicit assumption that $Y\in C^0[0,M]$\footnote{The space of continuous functions on the closed interval $[0,M]$.} is almost everywhere differentiable and everywhere directionally differentiable. Notice how this problem, as rewritten, is almost identical to the problem studied in \citet{spaeter1997design} except for the extra constraint $slope(Y)\leq 1+\beta$.\\ A deduction from the authors' work tells us when the constraint is binding. When the constraint is not binding, their solution is also a solution to \eqref{Problem S}. When it is binding, we must solve \eqref{Problem S} anew. \begin{lemma} If $c=0$ then the optimal contract entails full insurance, i.e. $Y=X$. \end{lemma} \begin{proof} Observe that for $Y=X$ constraint \eqref{S2} is never binding and for $c=0$ constraint \eqref{S3} collapses to $$ (1+\rho)\int Y(X(s)) d \mathbb{P} = H$$ so problem \eqref{Problem S} is the standard Arrow-Borch-Raviv problem. \end{proof} This lemma tells us that \eqref{Problem S} becomes interesting only when $c>0$ somewhere.
This is because when $c=0$ Bob is completely insured, which precludes any incentive to take a sledgehammer to his bike. \\ \begin{fact} If $Y^*$ solves the reduced problem \begin{align} \max_{H \geq 0, Y \in B_+(\mathcal{B}([0,M]))} & \int u(W_0 - H - X(s) +Y(X(s)))d\mathbb{P} \tag{Reduced problem} \label{reduced problem}\\ s.t.\, & \,0 \leq Y \leq X \nonumber \\ & (1+\rho)\int Y(X(s))+c(Y(X(s))) d \mathbb{P} = H \nonumber \end{align} of \citet{spaeter1997design} and $Y^*$ satisfies constraint \eqref{S2} then $Y^*$ solves \eqref{Problem S}. \end{fact} This fact informs us that the only problematic case which must be handled is when the contract $Y^*$ found in \citet{spaeter1997design} does not satisfy constraint \eqref{S2} somewhere. Intuitively it seems natural to attempt flattening $Y^*$ sufficiently to satisfy $slope(Y)\leq 1+\beta$, thus solving \eqref{Problem S}. This approach is sometimes fruitful but does not work in certain cases, as we explain later. While intermediate cases are left to the reader, we characterise a simple case when $\beta=0$. \begin{lemma} If $\beta=0$, $Y_R^*$ solves \eqref{reduced problem} and $Y_R^*$ is a completely disappearing deductible then the unique solution $Y_I^*$ to problem \eqref{Problem I} is a simple deductible, i.e. a function of the form $$Y^*_I(s)=\max\{0,X(s)-d\}$$ for $d\geq 0$. \end{lemma} \begin{proof} If $Y_R^*$ is a completely disappearing deductible then there exists a $\Tilde{s}\in S$ such that for every $X(s)\geq X(\Tilde{s})$ we have $Y_R^*(s)=X(s)$. This implies that there is a state $\Tilde{\Tilde{s}}\in S$ such that for every $X(s)\geq X(\Tilde{\Tilde{s}})$ the constraint \eqref{S2} of problem \eqref{Problem S} will be binding and so $slope(Y_I^*(s))=1$ on the set $[X(\Tilde{\Tilde{s}}), M]$.
It is easy to check that\footnote{Either by ``Guessing \& Verifying'' in \eqref{Problem S} or working by contradiction assuming $Y_I^*(s)$ is increasing somewhere on $[0,X(\Tilde{\Tilde{s}}))$.} $slope(Y_I^*(s)) = 0$ on $[0,X(\Tilde{\Tilde{s}}))$. Setting $d=X(\Tilde{\Tilde{s}})$ we obtain that $Y^*_I$ can be written as $Y^*_I(s)=\max\{0,X(s)-d\}$. \end{proof} The fact that the possibility of taking a sledgehammer to one's bicycle is priced into insurance contracts is the reason why Bob will never be offered a completely disappearing deductible contract. Simply, though Bob would honestly like to purchase such a contract, he cannot. This is because Bob cannot commit to being honest.\\ The intuition that the contract solving problem \eqref{Problem S} is simply a ``flattening'' of the solution to \eqref{reduced problem} is misleading. By additivity of the Lebesgue integral, the insurer's participation constraint states that the insurer should recoup the cost \textit{on average} and \textit{not state-by-state}. This, fortunately, gives us some leeway in solving \eqref{Problem S}. However, this also means that there are some cases where we have to roll up our sleeves and directly attack the problem. \subsection{Notation and manipulation-proofness} Let $S$ be a set of states of the world and let $\mathbb{P}$ be a probability measure for states $s\in S$. Let the risk $X:S \rightarrow [0,M]$ be a continuous random variable with full support $[0,M]$ and some mass at $0$, this mass being the probability that no accident occurs.
Let the function $c:[0,M] \rightarrow \mathbb{R}_+$ be the administrative cost of respecting the insurance contract and let $Y:[0,M]\rightarrow [0,M]$ be the indemnity schedule.\footnote{So $Y\in B_+(\mathcal{B}([0,M]))$, the space of non-negative and bounded functions (sup-norm) which are measurable with regard to the Borel $\sigma$-algebra $\mathcal{B}$ of $[0,M]$.} The amount $H\geq 0$ denotes the price (premium) of the contract while $W_0>0$ is the initial wealth of the insured and $\rho \geq 0 $ is the loading factor. As usual, the function $u:\mathbb{R}_+ \rightarrow \mathbb{R}_+ $ is a twice differentiable and strictly concave Bernoulli utility function satisfying Inada conditions.\\ \noindent The game proceeds as follows: \begin{description} \item[Stage 1] the insured buys the insurance contract $Y$ at price $H$; \item[Stage 2] the state $s$ realizes and loss $X(s)$ occurs (Nature moves); \item[Stage 3] the insured observes the loss and decides to take hidden action $z\in [0,M - X(s)]$ to augment the damages; \item[Stage 4] the contract is implemented without renegotiation. 
\end{description} \noindent The solution concept is a weak Perfect Bayesian equilibrium \citep{mas1995microeconomic} where we assume that the insured takes the insurer's favoured action whenever indifferent.\\ By backward induction the optimal contract solves the following optimisation program: \begin{align} \sup_{H \geq 0, Y \in B_+(\mathcal{B}([0,M]))} & \int u(W_0 - H - X(s)- z(s) -g(z(s))+Y(X(s)+z(s)))d\mathbb{P} \tag{Problem I} \label{Problem I}\\ s.t.\, & \,0 \leq Y \tag{LL} \label{LL}\\ &\, \forall s,\, Y(X(s)+z(s)) \leq X(s)+z(s) \tag{B} \label{B}\\ & (1+\rho)\int Y(X(s)+z(s))+c(Y(X(s)+z(s))) d \mathbb{P} \leq H \tag{PC} \label{pci} \\ & \forall s,\, z(s) \in \arg\max_z\{ Y(X(s)+z)- z -g(z)\} \tag{IC} \label{ic} \end{align} where \eqref{LL} is the insured's limited liability constraint, \eqref{B} is the ``boundedness constraint'' stating that the insurer will never pay more than the observed loss, \eqref{pci} is the insurer's participation constraint, \eqref{ic} is the insured's incentive compatibility constraint and the function \begin{align*} g(z)= \begin{cases} +\infty &\text{ if } z<0\\ \beta z &\text{ if } z\geq 0 \end{cases} \end{align*} represents an extra cost of inflicting damage (Bob buying a sledgehammer).\\ We aim to prove that the optimal contract is Lipschitz continuous with constant $\leq 1+\beta$. Let us start with a handy assumption which is common in the literature: \begin{assumption} We assume throughout that feasible contracts are non-decreasing and upper semi-continuous functions. \end{assumption} Of course, this implies that the optimal contract $Y$ is non-decreasing and upper semi-continuous (if it exists). Assumption 1 is standard and need not be discussed further. Similarly, the next statement is standard and will not be proved: \begin{lemma} The insurer's participation constraint \eqref{pci} must be binding.\end{lemma} Lemma 1 simply states that the contract must be sold at an actuarially fair price.
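To make Lemma 1 concrete, once a manipulation-proof schedule is fixed, the binding participation constraint pins down the premium as $H=(1+\rho)\int Y(X(s))+c(Y(X(s)))\,d\mathbb{P}$. The following Monte Carlo sketch prices a straight deductible; the distribution of $X$, the cost function $c$, and all parameter values are illustrative assumptions, not taken from the model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative risk: mass at zero (no accident with probability 0.7),
# damage uniform on [0, M] conditional on an accident occurring.
M, rho, d = 10.0, 0.2, 2.0
accident = rng.random(1_000_000) < 0.3
X = np.where(accident, rng.uniform(0.0, M, accident.size), 0.0)

Y = np.maximum(0.0, X - d)   # straight deductible: manipulation-proof for beta = 0
c = 0.05 * Y                 # illustrative proportional administrative cost

# Binding participation constraint (Lemma 1): H = (1 + rho) * E[Y(X) + c(Y(X))].
H = (1.0 + rho) * np.mean(Y + c)
print(round(H, 2))           # close to the analytic value 1.2 * 1.05 * 0.96 ~ 1.21
```

Here $E[Y(X)]=0.3\cdot\int_2^{10}(u-2)\,du/10=0.96$ for the assumed distribution, so the sample average should land near the analytic premium.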
Let us now inspect the incentive compatibility constraint \eqref{ic}. This constraint contains an optimisation program which is ill-behaved. This is because we do not know at this level of generality if the optimal contract $Y$ is continuous. Moreover, we cannot assume away the possibility that $Y$ is discontinuous. We seem to be in trouble now, because we cannot differentiate the objective function in \eqref{ic} and thus cannot use standard first-order conditions to characterize the (set of) optimal manipulations. Fortunately, the optimisation problem of constraint \eqref{ic} is what we defined as a positioning choice problem in \citet{Lauzier2019positioningmaths}. The interest of positioning choice problems lies in the fact that their value function is always Lipschitz continuous and almost everywhere differentiable. The next ancillary lemma is an immediate consequence of the ad-hoc envelope theorem of \citet{Lauzier2019positioningmaths}.\\ Let the value function $V$ of the manipulation stage of the game be \begin{align*} V(s;Y)= Y(X(s)+z(s))- z(s) -g(z(s)) \end{align*} for \begin{align*} z(s) \in \sigma(s;Y):=\arg\max_z\{ Y(X(s)+z)- z -g(z)\}, \end{align*} where $\sigma$ denotes the optimal choice correspondence of the manipulation stage of the game. \begin{lemma} If $Y$ is non-decreasing and upper semi-continuous then $V$ is Lipschitz continuous with constant $\leq 1+\beta$ and almost everywhere differentiable. \end{lemma} We are now ready to prove our main result.
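Lemma 2 can be illustrated numerically: even when $Y$ jumps, the value function of the manipulation stage is Lipschitz with constant at most $1+\beta$. The following brute-force sketch uses an illustrative discontinuous schedule and a common grid for losses and manipulations; the schedule and all parameter values are assumptions for the illustration only:

```python
import numpy as np

beta = 0.5
M = 10.0
xs = np.linspace(0.0, M, 2001)  # common grid for losses x and manipulations z

# Illustrative non-decreasing, upper semi-continuous schedule with a jump at x = 5.
Yg = np.where(xs >= 5.0, xs, 0.3 * xs)

# Value function of the manipulation stage, V(x) = max_z { Y(x+z) - z - g(z) },
# with g(z) = beta * z for z >= 0, computed by brute force on the grid.
V = np.array([np.max(Yg[k:] - (1.0 + beta) * xs[: xs.size - k])
              for k in range(xs.size)])

# Lemma 2: V is Lipschitz with constant <= 1 + beta, although Y itself jumps.
L = np.max(np.abs(np.diff(V)) / np.diff(xs))
print(L <= 1.0 + beta + 1e-9)   # True

# The repaired schedule dominates the original one state-by-state (z = 0 is feasible).
print(np.all(V >= Yg))          # True
```

The second check is the mechanism behind the theorem that follows: replacing $Y$ by $V$ never lowers the state-by-state reimbursement, while removing the incentive to manipulate.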
We say that a contract $Y$ is manipulation-proof if for every $s \in S$ it is $$0 \in \sigma(s;Y).$$ \begin{theorem} Any optimal contract $Y$ is manipulation-proof.\end{theorem} \begin{proof} Suppose, by way of contradiction, that $Y$ is optimal but that there exists a set $S'\subset S$ such that simultaneously $\mathbb{P}[s\in S']>0$ and for every $s\in S'$ it is $$0\notin \sigma(s;Y).$$ Let $V(s;Y)$ denote the value function of the manipulation problem defined by $Y$, and consider now the alternative contract $(\overline{H}, \overline{Y})$ where $\overline{Y}=V$, i.e. the new indemnity schedule gives state-by-state the same final reimbursement as the original one after manipulations. The new indemnity schedule $\overline{Y}$ is manipulation-proof: by Assumption 1 and Lemma 2 it holds that $\overline{Y}$ is non-decreasing and Lipschitz with constant $1+\beta$, and thus for any two $x,x'\in[0,M]$ such that $x<x'$ we have $$\overline{Y}(x) \geq \overline{Y}(x') - (x'-x) - g(x'-x),$$ and for every realisation $x\in [0,M]$ we have $$0 \in \sigma(x;\overline{Y}):=\arg\max_z\{\overline{Y}(x+z) - z - g(z) \}.$$ The new contract strictly dominates the original contract. Indeed, notice that \begin{align*} \int Y(X(s)+z(s))+c(Y(X(s)+z(s))) d \mathbb{P} > \int \overline{Y}(X(s)) + c(\overline{Y}(X(s)))d\mathbb{P} \end{align*} so the price $\overline{H}$ of $\overline{Y}$ is strictly smaller than the price $H$ of the original contract. \end{proof} We can understand Theorem 3 as stating that since the insurer fully prices arson-type risks, contracts which induce arson-type actions will never be offered in equilibrium. Indeed, all extra damage due to arson-type actions must lead to a higher premium for the contract to be actuarially fair. Of course, the insured will prefer the cheapest contract as he receives state-by-state the same final amount under both contracts. Or, from Bob's perspective, he was offered two contracts offering the same protection.
An expensive one would have allowed him to take a sledgehammer to his bike; a cheaper one would not. Bob, being a rational person, chose the cheaper option. \begin{corollary} Any optimal contract $Y$ must be Lipschitz with slope $\leq 1+\beta$.\end{corollary} \subsubsection{Characterization} Corollary 4 implies that the family of contracts $$\{Y\in B_+(\mathcal{B}([0,M])):Y(s)=V(s)\}$$ consists of functions which are a.e. differentiable.\footnote{This follows from Rademacher's Theorem. See Theorem 3 of \citet{Lauzier2019positioningmaths}.} We can thus rewrite \eqref{Problem I} as \begin{align} \max_{H \geq 0, Y \in B_+(\mathcal{B}([0,M]))} & \int u(W_0 - H - X(s) +Y(X(s)))d\mathbb{P} \tag{Problem S} \label{Problem S}\\ s.t.\, & \,0 \leq Y \leq X \tag{S1} \label{S1}\\ & slope(Y)\leq 1+ \beta \tag{S2} \label{S2}\\ & (1+\rho)\int Y(X(s))+c(Y(X(s))) d \mathbb{P} = H \tag{S3}\label{S3} \end{align} under the implicit assumption that $Y\in C^0[0,M]$ is a.e. differentiable.\footnote{The notation $C^0$ denotes the space of continuous functions.} Notice how this problem, as rewritten, is almost identical to the problem studied in \citet{spaeter1997design}, except for the extra constraint $slope(Y)\leq 1+\beta$.\\ We conclude this section with a general representation theorem and a few observations that are handy when deriving closed form solutions. As a consequence of Theorem 3, every optimal contract $Y$ can be written as \begin{align*} Y(x)=\max\{0, \alpha(x)x-d\}, \end{align*} where $d\geq 0$ is a deductible and $\alpha(x)$ is a non-negative, continuous and a.e. differentiable function satisfying for every $x\in [0,M]$ \begin{align*} 0 \leq \frac{\partial \alpha(x)x}{\partial x}\leq 1+\beta.
\end{align*} We say that contract $Y$ \begin{itemize} \item is a \textbf{full insurance contract} when $d=0$ and $\alpha(x)=1$ everywhere; \item is a \textbf{straight deductible} when $d>0$ and $\alpha(x)=1$ everywhere; \item entails \textbf{coinsurance} when $\alpha(x)=\alpha \in (0,1)$ and \item has \textbf{upper limit} $\delta>0$ if for every $x$, $Y(x)\leq \delta$, with equality for some $x$. \end{itemize} Any contract mixing deductibles, coinsurance and upper limits can be written as \begin{align*} Y(x)=\min\{\delta, \max\{0, \alpha x - d\}\}. \end{align*} When $\beta=0$ we recover the well-known \textit{no-sabotage condition} explained in detail in \citet{carlier2003pareto}. Formally, if $\beta=0$ then for any contract $Y$ the following holds: \begin{enumerate} \item $slope(Y) \leq 1$; \item the retention function $R(x)=x-Y(x)$ is weakly monotone increasing; \item the random vector $(X,Y, X-Y)$ is comonotonic. \end{enumerate} \subsection{Some closed form solutions} We now provide closed form solutions to our contracting problem. \subsubsection{Full insurance, straight deductibles and the irrelevance of arson} \begin{lemma}[Arrow's Theorem] If $c=0$ the optimal contract is a straight deductible $Y(x)=\max\{0,x-d\}$ for which $d=0$ if, and only if, $\rho=0$. \end{lemma} Lemma 5 needs no proof, as it is the classic Arrow-Borch-Raviv Theorem (see \citet{dionne2000handbook}). The lemma tells us that \eqref{Problem S} becomes interesting only when $c>0$ somewhere. This is because when $c=0$ the first-best contract never incentivizes Bob to take a sledgehammer to his bike. In other words, arson-type risks impact the provision of insurance only when there is a meaningful reason not to provide either full insurance or a straight deductible. \subsubsection{Fixed costs and nuisance claims: arson-type risks reduce welfare} Let us now consider the case in which the administrative costs of delivering the contract are fixed.
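Before turning to specific cost functions, the representation above can be made concrete with a short sketch (purely illustrative parameter values): it implements a contract mixing a deductible, coinsurance and an upper limit, and checks the two $\beta=0$ no-sabotage properties just listed, $slope(Y)\leq 1$ and a non-decreasing retention function.

```python
# Purely hypothetical parameter values for illustration.
def indemnity(x, alpha=0.8, d=1.0, delta=3.0):
    # coinsurance alpha, deductible d, upper limit delta
    return min(delta, max(0.0, alpha * x - d))

losses = [i * 0.1 for i in range(101)]  # losses on [0, 10]
Y = [indemnity(x) for x in losses]
R = [x - y for x, y in zip(losses, Y)]  # retention function R(x) = x - Y(x)

# beta = 0 no-sabotage checks: slope(Y) <= 1 and R non-decreasing
assert all(y2 - y1 <= 0.1 + 1e-9 for y1, y2 in zip(Y, Y[1:]))
assert all(r2 + 1e-9 >= r1 for r1, r2 in zip(R, R[1:]))
print(indemnity(5.0))  # hits the upper limit: 3.0
```

With these values the contract pays nothing up to the deductible, then $0.8x-1$, and is capped at $3$; the retention $x-Y(x)$ is monotone, as the no-sabotage condition requires.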
This situation is important because it is the easiest way to show that arson-type risks hurt the insured, i.e.\ that Bob would prefer to be unable to destroy his bike. \begin{assumption} The cost function involves only a fixed cost per claim: $c$ satisfies $c(0)=0$ and $c(y)=c_0>0$ for every claim requiring $y>0$ to be paid by the insurer. \end{assumption} We claim that when the insured can freely augment the damage ($\beta =0$) the optimal contract is a straight deductible. \begin{proposition} Under Assumption 2, if $\beta =0$ then the solution to \eqref{Problem S} is a straight deductible: there exists a $d>0$ such that $$Y(x)= \max\{0, x-d\}.$$ \end{proposition} \begin{proof} Notice that the insured files a claim if, and only if, $Y(x)>0$, so we can partition the interval $[0,M]$ into two regions $\mathcal{M},\mathcal{M}^C$ such that \begin{align*} &\mathcal{M}=\{x\in [0,M]: Y(x)>0\}=\{x: \text{ the insured files a claim}\}\\ &\mathcal{M}^C=\{x\in [0,M]: Y(x)=0\}=\{x: \text{ the insured does not file a claim}\}. \end{align*} Recall that our optimal contract takes the form of a generalized deductible \begin{align*} Y(x)=\max\{0, \alpha(x)x-d\}. \end{align*} Since the marginal cost $c'$ is null, it is easily verified that \begin{align} \left. \frac{\partial Y(x)}{\partial x}\right\vert_{x\in \mathcal{M}} \geq 1. \end{align} However, since $\beta =0$, constraint \eqref{S2} implies that $Y'\leq 1$, and thus the preceding inequality holds with equality, i.e. the optimal contract provides full marginal insurance on $\mathcal{M}$. We can thus set $\alpha(x)=1$ and rewrite problem \eqref{Problem S} as \begin{align} \max_{d\geq 0} & \int u(W_0 - H - X(s) +\max\{0, X(s)-d\})d\mathbb{P} \tag{Problem FC \& A} \label{Problem FC A} \\ s.t.\quad & c_0 \mathbb{P}[X\geq d] + \mathbb{E}[X-d\vert X\geq d] = (1+\rho)^{-1}H \nonumber.
\end{align} \end{proof} We want to show that the possibility of arson-type risks hurts the insured, i.e.\ that Bob would like to commit to leaving his bike intact because he would then be offered a better contract. Formally, we will show that absent the incentive compatibility constraint, a contract with an upward jump would improve upon a straight deductible. Let us consider a contract of the form \begin{align*} Y_{Ret}(x;t,j)=\begin{cases} 0 &\text{ if } x<t \\ x-j &\text{ if } x\geq t \end{cases} \end{align*} where $t\geq 0$ is a threshold, $j\geq 0$ is a loss retention parameter and $t\geq j$. The number $t-j$ is the magnitude of the jump of $Y_{Ret}$ at $t$ and is interpreted as the minimum amount of money that the insured receives provided he files a claim. We will refer to contracts of the above form as \textit{contracts with constant retention}. Clearly, $Y_{Ret}$ is not manipulation-proof when $t>j$, and $Y_{Ret}$ is a straight deductible when $t=j$. More importantly, notice that the insured files a claim if, and only if, $Y_{Ret}(x)>0$, so we can again partition the interval $[0,M]$ into the two regions $\mathcal{M}$ and $\mathcal{M}^C$ defined above.\\ \begin{figure}[ht!]
\caption{Straight deductibles and constant retention contracts} \begin{subfigure}{0.5\textwidth} \begin{tikzpicture} \draw[<->] (0,5) -- (0,0) -- (5,0); \draw[blue,thick](0,0) -- (2,0) -- (4,2); \draw[gray,dotted] (0,0) -- (4,4); \node[below] at (5,0) {$X$}; \node[left] at (0,5) {$Y$}; \node[below] at (2,0) {$d$}; \end{tikzpicture} \caption{Straight deductible} \label{figure2:sub1} \end{subfigure} \begin{subfigure}{0.5\textwidth} \begin{tikzpicture} \draw[<->] (0,5) -- (0,0) -- (5,0); \draw[blue,thick](0,0) -- (2,0); \draw[blue,thick](2,1) -- (4,3); \draw[gray,dotted] (0,0) -- (4,4); \draw[red] (2,0)--(2,1); \node[red, right] at (2,0.5){$t-j$}; \node[below] at (5,0) {$X$}; \node[left] at (0,5) {$Y$}; \node[below] at (2,0) {$t$}; \end{tikzpicture} \caption{Constant retention contract} \label{figure2:sub2} \end{subfigure} \end{figure} Suppose now that the cost $c$ satisfies Assumption 2, that the insured cannot use arson-type actions so that there are no incentive compatibility constraints, and that we restrict our search to contracts with constant retention. A few calculations show that we can write the optimisation problem as \begin{align} \max_{t\geq 0, j\geq 0} & \int u(W_0 - H - X(s) +Y_{Ret}(X(s);t,j))d\mathbb{P} \tag{Problem FC \& No-A} \label{Problem FC No-A} \\ s.t.\quad & c_0 \mathbb{P}[X\geq t] + \mathbb{E}[X-j\vert X\geq t] = (1+\rho)^{-1}H \nonumber\\ & t-j\geq 0 \tag{Positive jump} \label{positive jump}. \end{align} Clearly, the optimisation problem with arson-type risks \eqref{Problem FC A} is \eqref{Problem FC No-A} when $t=j$. It is thus clear that arson-type risks can hurt the insured. We are left to show that arson-type risks always hurt the insured when the fixed cost is strictly positive. This amounts to showing that the constraint \eqref{positive jump} of \eqref{Problem FC No-A} is slack if, and only if, $c_0>0$. But this is a well-known result, the statement being essentially Theorem 2 of \citet{gollier1987}.
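The manipulation incentive created by the jump can be seen in a few lines. The sketch below assumes, for illustration only, $\beta=0$ with a linear manipulation cost $g(z)=z$: an insured whose loss lies strictly between $j$ and $t$ strictly gains by pushing the loss up to the threshold.

```python
# Hypothetical specification for illustration: beta = 0 with a linear
# manipulation cost g(z) = z, threshold t = 2 and retention j = 1.
def Y_ret(x, t=2.0, j=1.0):
    # constant-retention contract: nothing below t, x - j at or above t
    return x - j if x >= t else 0.0

def net_payoff(x, z):
    # destroy z extra units of property: receive Y_ret(x + z), pay cost z
    return Y_ret(x + z) - z

x = 1.5                          # loss strictly between j = 1 and t = 2
honest = net_payoff(x, 0.0)      # report truthfully: no reimbursement
arson = net_payoff(x, 2.0 - x)   # push the loss exactly to the threshold
print(honest, arson)             # 0.0 0.5: manipulating is strictly better
```

The gain equals $x-j$, the spread the insured can capture, which is exactly why $Y_{Ret}$ with $t>j$ is not manipulation-proof.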
\begin{lemma} Consider \eqref{Problem FC No-A}. It holds that $t=j$ if, and only if, $c_0=0$. \end{lemma} The proof does not provide intuition and is thus omitted from the text. The interpretation is straightforward. Absent arson-type risks, the optimal insurance contract provides the maximum possible protection while eliminating nuisance claims, the small claims which are more costly to administer than the coverage they offer. This is achieved by refusing to reimburse small claims while offering a generous protection conditional on the loss being large enough. This creates a spread in coverage, the discontinuity at the threshold. The problem is that Bob would like to exploit this spread by augmenting the damage to his bike. In equilibrium, Bob is never offered such a contract with high protection, and Bob would be strictly better off if he could commit not to buy a mace. \subsubsection{Continuous costs and the sub-optimality of disappearing deductibles} We now consider the case when $c$ is a continuous function. \begin{assumption} The cost function $c$ is continuous, weakly positive, non-decreasing, twice-differentiable and satisfies $c(0)=0$. \end{assumption} Recall that our problem is now almost identical to the one of \citet{spaeter1997design}. \begin{lemma} Under Assumption 3, if $Y$ solves the reduced problem \begin{align} \max_{H \geq 0, Y \in B_+(\mathcal{B}([0,M]))} & \int u(W_0 - H - X(s) +Y(X(s)))d\mathbb{P} \tag{Reduced problem} \label{reduced problem}\\ s.t.\, & \,0 \leq Y \leq X \nonumber \\ & (1+\rho)\int Y(X(s))+c(Y(X(s))) d \mathbb{P} = H \nonumber \end{align} of \citet{spaeter1997design} and $Y$ satisfies constraint \eqref{S2} then $Y$ solves \eqref{Problem S}. \end{lemma} This lemma informs us that the only problematic case which must be handled is when the contract $Y$ found in \citet{spaeter1997design} does not satisfy constraint \eqref{S2} somewhere.
Intuitively, it seems natural to attempt flattening $Y$ sufficiently to satisfy $slope(Y)\leq 1+\beta$, thus solving \eqref{Problem S}. This approach is sometimes fruitful, but it fails in certain cases, as we explain later. We consider the easiest case, when the best contract absent arson-type risks is a \textit{completely} disappearing deductible.\footnote{\citet{huberman1983optimal} is the first to notice that the optimality of disappearing deductibles no longer holds when the insured has access to arson-type actions.} Formally, we say that contract $Y$ is a completely disappearing deductible if there exists a realisation $x'\in[0,M]$ such that for every $x\geq x'$ it holds that $Y(x)=x$.\\ \begin{proposition} If $\beta=0$, $Y_R$ solves \eqref{reduced problem} and $Y_R$ is a completely disappearing deductible then the unique solution to \eqref{Problem I} is a straight deductible. \end{proposition} \begin{proof} The fact that $Y_R$ is a completely disappearing deductible informs us that our optimal contract $Y$ must provide as much coverage as possible. Since $\beta=0$ it holds that $$\left. slope(Y)\right\vert_\mathcal{M} =1,$$ for $\mathcal{M}$ being as before. Hence, finding $Y$ is equivalent to solving \begin{align*} \max_{d\geq 0} & \int u(W_0 - H - X(s) +\max\{0, X(s)-d\})d\mathbb{P} \\ s.t.\quad &(1+\rho)\mathbb{E}[\max\{0, X-d\} +c(\max\{0, X-d\})] = H \nonumber. \end{align*} \end{proof} As before, Bob's ability to take a sledgehammer to his bicycle is priced into the insurance contract, and Bob will never be offered a completely disappearing deductible in equilibrium. Again, though Bob would like to purchase such a contract, he cannot: Bob cannot commit to being honest. The intuition that the contract solving \eqref{Problem S} is simply a ``flattening'' of the solution to \eqref{reduced problem} is misleading.
By additivity of the Lebesgue integral, the insurer's participation constraint states that the insurer should recoup the cost on average, not state-by-state. This, fortunately, gives us some leeway in solving \eqref{Problem S}. However, it also means that in some cases we have to roll up our sleeves and attack the problem directly.
cond-mat/9204009
\section{Identify the Broken Symmetry} What is it which distinguishes the hundreds of different states of matter? Why do we say that water and olive oil are in the same state (the liquid phase), while we say aluminum and (magnetized) iron are in different states? Through long experience, we've discovered that most phases differ in their symmetry.\footnote{This is not to say that different phases always differ by symmetries! Liquids and gases have the same symmetry. In fact, one can go continuously from a liquid to a gas, by going first to high pressures and then heating. It is safe to say, though, that if the two materials have different symmetries, they are different phases.} \begin{figure}[thb] \center{ \epsfxsize=1.25truein \epsffile{fig/2A.ps} \hskip 0.2truein \epsfxsize=1.25truein \epsffile{fig/2B.ps}} \caption{{\bf Which is more symmetric?} The cube has many symmetries. It can be rotated by $90^\circ$, $180^\circ$, or $270^\circ$ about any of the three axes passing through the faces. It can be rotated by $120^\circ$ or $240^\circ$ about the corners, and by $180^\circ$ about an axis passing from the center through any of the 12 edges. The sphere, though, can be rotated by {\it any} angle. The sphere respects rotational invariance: all directions are equal. The cube is an object which breaks rotational symmetry: once the cube is there, some directions are more equal than others. } \label{fig:2} \end{figure} Consider figure~2, showing a cube and a sphere. Which is more symmetric? Clearly, the sphere has many more symmetries than the cube. One can rotate the cube by 90$^\circ$ in various directions and not change its appearance, but one can rotate the sphere by any angle and keep it unchanged. \begin{figure}[thb] \center{ \epsfxsize=1.25truein \epsffile{fig/3A.ps} \hskip 0.2truein \epsfxsize=1.25truein \epsffile{fig/3B.ps}} \caption{{\bf Which is more symmetric?} At first glance, water seems to have much less symmetry than ice. 
The picture of ``two--dimensional'' ice clearly breaks the rotational invariance: it can be rotated only by $120^\circ$ or $240^\circ$. It also breaks the translational invariance: the crystal can only be shifted by certain special distances (whole number of lattice units). The picture of water has no symmetry at all: the atoms are jumbled together with no long--range pattern. Water, though, isn't a snapshot: it would be better to think of it as a combination of all possible snapshots! Water has a complete rotational and translational symmetry: the pictures will look the same if the container is tipped or shoved. } \label{fig:3} \end{figure} In figure~3, we see a 2-D schematic representation of ice and water. Which state is more symmetric here? Naively, the ice looks much more symmetric: regular arrangements of atoms forming a lattice structure. The water looks irregular and disorganized. On the other hand, if one rotated figure~3B by an arbitrary angle, it would still look like water! Ice has broken rotational symmetry: one can rotate figure~3A only by multiples of 60$^\circ$. It also has a broken translational symmetry: it's easy to tell if the picture is shifted sideways, unless one shifts by a whole number of lattice units. While the snapshot of the water shown in the figure has no symmetries, water as a phase has complete rotational and translational symmetry. One of the standard tricks to see if two materials differ by a symmetry is to try to change one into the other smoothly. Oil and water won't mix, but I think oil and alcohol do, and alcohol and water certainly do. By slowly adding more alcohol to oil, and then more water to the alcohol, one can smoothly interpolate between the two phases. If they had different symmetries, there must be a first point during the mixing at which the symmetry changes, and it is usually easy to tell when that phase transition happens.
\section{Define the Order Parameter} Particle physics and condensed--matter physics have quite different philosophies. Particle physicists are constantly looking for the building blocks. Once pions and protons were discovered to be made of quarks, they were demoted into engineering problems. Now that quarks and electrons and photons are made of strings, and strings are hard to study (at least experimentally), there is great anguish in the high--energy community. Condensed--matter physicists, on the other hand, try to understand why messy combinations of zillions of electrons and nuclei do such interesting simple things. To them, the fundamental question is not to discover the underlying quantum mechanical laws, but to understand and explain the new laws that emerge when many particles interact. As one might guess, we don't keep track of all the electrons and protons.\footnote{The particle physicists use order parameter fields too. Their order parameter fields also hide lots of details about what their quarks and gluons are composed of. The main difference is that they don't know of what their fields are composed. It ought to be reassuring to them that we don't always find our greater knowledge very helpful.} We're always looking for the important variables, the important degrees of freedom. In a crystal, the important variables are the motions of the atoms away from their lattice positions. In a magnet, the important variable is the local direction of the magnetization (an arrow pointing to the ``north'' end of the local magnet). The local magnetization comes from complicated interactions between the electrons, and is partly due to the little magnets attached to each electron and partly due to the way the electrons dance around in the material: these details are for many purposes unimportant. \begin{figure}[thb] \epsfxsize=3.5truein \epsffile{fig/4.ps} \caption{{\bf Magnet.} We take the magnetization $\vec M$ as the order parameter for a magnet.
For a given material at a given temperature, the amount of magnetization $|\vec M| = M_0$ will be pretty well fixed, but the energy is often pretty much independent of the direction $\hat M = \vec M / M_0$ of the magnetization. (You can think of this as an arrow pointing to the north end of each atomic magnet.) Often, the magnetization changes directions smoothly in different parts of the material. (That's why not all pieces of iron are magnetic!) We describe the current state of the material by an order parameter field $\vec M({\bf x})$.\hfill\break The order parameter field is usually thought of as an arrow at each point in space. It can also be thought of as a function taking points in space ${\bf x}$ into points on the sphere $|\vec M| = M_0$. This sphere ${\cal S}^2$ is the order parameter space for the magnet. } \label{fig:4} \end{figure} The important variables are combined into an ``order parameter field''.\footnote{Choosing an order parameter is an art. Usually it's a new phase which we don't understand yet, and guessing the order parameter is a piece of figuring out what's going on. Also, there is often more than one sensible choice. In magnets, for example, one can treat $\vec M$ as a fixed--length vector in ${\cal S}^2$, labelling the different broken symmetry states. This is the best choice at low temperatures, where we study the elementary excitations and topological defects. For studying the transition from low to high temperatures, when the magnetization goes to zero, it is better to consider $\vec M$ as a vector of varying length (a vector in ${\cal R}^3$). Finding the simplest description for your needs is often the key to the problem.} In figure~4, we see the order parameter field for a magnet.\footnote{Most magnets are crystals, which already have broken the rotational symmetry. For some ``Heisenberg'' magnets, the effects of the crystal on the magnetism are small.
Magnets are really distinguished by the fact that they break time--reversal symmetry: if you reversed the arrow of time, the magnetization would change direction!} At each position ${\bf x}=(x,y,z)$ we have a direction for the local magnetization ${\vec M}({\bf x})$. The length of ${\vec M}$ is pretty much fixed by the material, but the direction of the magnetization is undetermined. By becoming a magnet, this material has broken the rotational symmetry. The order parameter ${\vec M}$ labels which of the various broken symmetry directions the material has chosen. The order parameter is a field: at each point in our magnet, ${\vec M}({\bf x})$ tells the local direction of the field near ${\bf x}$. Why do we do this? Why would the magnetization point in different directions in different parts of the magnet? Usually, the material has lowest energy when the order parameter field is uniform, when the symmetry is broken in the same way throughout space. In practice, though, the material often doesn't break symmetry uniformly. Most pieces of iron don't appear magnetic, simply because the local magnetization points in different directions at different places. The magnetization is already there at the atomic level: to make a magnet, you pound the different domains until they line up. We'll see in this lecture that most of the interesting behavior we can study involves the way the order parameter varies in space. The order parameter field ${\vec M}({\bf x})$ can be usefully visualized in two different ways. On the one hand, one can think of a little vector attached to each point in space. On the other hand, we can think of it as a mapping from real space into order parameter space. That is, ${\vec M}$ is a function which takes different points in the magnet onto the surface of a sphere (figure~4). Mathematicians call the sphere ${\cal S}^2$, because it locally has two dimensions. (They don't care what dimension the sphere is embedded in.)
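As a small illustrative sketch (not from the lecture), one can represent such an order parameter field numerically: the direction of $\vec M$ twists from point to point, while the magnitude stays pinned to $M_0$, so every value of the field lands on the sphere ${\cal S}^2$.

```python
import numpy as np

M0 = 1.0  # magnitude of the magnetization, fixed by the material

def M(x, y, z):
    # illustrative order parameter field: the direction twists slowly in
    # space while the magnitude stays equal to M0
    theta = 0.3 * x
    phi = 0.2 * y
    return M0 * np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])

points = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
norms = [np.linalg.norm(M(*p)) for p in points]
print(min(norms), max(norms))  # both equal M0 up to rounding: the field
                               # is a map from real space into S^2
```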
\begin{figure}[thb] \epsfxsize=2.5truein \epsffile{fig/5.ps} \caption{{\bf Nematic liquid crystal.} Nematic liquid crystals are made up of long, thin molecules that prefer to align with one another. (Liquid crystal watches are made of nematics.) Since they don't care much which end is up, their order parameter isn't precisely the vector $\hat n$ along the axis of the molecules. Rather, it is a unit vector up to the equivalence $\hat n \equiv - \hat n$. The order parameter space is a half-sphere, with antipodal points on the equator identified. Thus, for example, the path shown over the top of the hemisphere is a closed loop: the two intersections with the equator correspond to the same orientations of the nematic molecules in space. } \label{fig:5} \end{figure} Before varying our order parameter in space, let's develop a few more examples. The liquid crystals in LCD displays (like those in digital watches) are nematics. Nematics are made of long, thin molecules which tend to line up so that their long axes are parallel. Nematic liquid crystals, like magnets, break the rotational symmetry. Unlike magnets, though, the main interaction isn't to line up the north poles, but to line up the axes. (Think of the molecules as American footballs: they look the same upside down.) Thus the order parameter isn't a vector $\vec M$ but a headless vector $\vec n \equiv -\vec n$. The order parameter space is a hemisphere, with opposing points along the equator identified (figure~5). This space is called ${\cal RP}^2$ by the mathematicians (the projective plane), for obscure reasons. \begin{figure}[thb] \epsfxsize=2.5truein \epsffile{fig/6.ps} \caption{ {\bf Two dimensional crystal.} A crystal consists of atoms arranged in regular, repeating rows and columns. At high temperatures, or when the crystal is deformed or defective, the atoms will be displaced from their lattice positions. The displacements $\vec u$ are shown.
Even better, one can think of $\vec u({\bf x})$ as the local translation needed to bring the ideal lattice into registry with atoms in the local neighborhood of $\bf x$.\hfill\break Also shown is the ambiguity in the definition of $\vec u$. Which ``ideal'' atom should we identify with a given ``real'' one? This ambiguity makes the order parameter $\vec u$ equivalent to $\vec u+m a \hat x + n a \hat y$. Instead of a vector in two dimensional space, the order parameter space is a square with periodic boundary conditions. } \label{fig:6} \end{figure} For a crystal, the important degrees of freedom are associated with the broken translational order. Consider a two-dimensional crystal which has lowest energy when in a square lattice, but which is deformed away from that configuration (figure~6). This deformation is described by an arrow connecting the undeformed ideal lattice points with the actual positions of the atoms. If we are a bit more careful, we say that $\vec u({\bf x})$ is the displacement needed to align the ideal lattice in the local region onto the real one. By saying it this way, $\vec u$ is also defined between the lattice positions: there still is a best displacement which locally lines up the two lattices. \begin{figure}[thb] \epsfxsize=3.5truein \epsffile{fig/7.ps} \caption{{\bf Order parameter space for a two-dimensional crystal.} Here we see that a square with periodic boundary conditions is a torus. (A torus is the surface of a doughnut, inner tube, or bagel, depending on your background.) } \label{fig:7} \end{figure} The order parameter $\vec u$ isn't really a vector: there is a subtlety. In general, which ideal atom you associate with a given real one is ambiguous. As shown in figure~6, the displacement vector $\vec u$ changes by a multiple of the lattice constant $a$ when we choose a different reference atom: \begin{equation} \label{eq:equiv} \vec u \equiv \vec u + a \hat x \equiv \vec u + m a \hat x + n a \hat y.
\end{equation} The set of distinct order parameters forms a square with periodic boundary conditions. As figure~7 shows, a square with periodic boundary conditions has the same topology as a torus, ${\cal T}^2$. (The torus is the surface of a doughnut, bagel, or inner tube.) Finally, let's mention that guessing the order parameter (or the broken symmetry) isn't always so straightforward. For example, it took many years before anyone figured out that the order parameter for superconductors and superfluid helium 4 is a complex number $\psi$. The order parameter field $\psi({\bf x})$ represents the ``condensate wave function'', which (extremely loosely) is a single quantum state occupied by a large fraction of the Cooper pairs or helium atoms in the material. The corresponding broken symmetry is closely related to the number of particles. In ``symmetric'', normal liquid helium, the local number of atoms is conserved: in superfluid helium, the local number of atoms becomes indeterminate! (This is because many of the atoms are condensed into that delocalized wave function.) Anyhow, the magnitude of the complex number $\psi$ is a fixed function of temperature, so the order parameter space is the set of complex numbers of magnitude $|\psi|$. Thus the order parameter space for superconductors and superfluids is a circle ${\cal S}^1$. Now we examine small deformations away from a uniform order parameter field. \section{Examine the Elementary Excitations} It's amazing how slow human beings are. The atoms inside your eyelash collide with one another a million million times in the time it takes you to blink your eye. It's not surprising, then, that we spend most of our time in condensed--matter physics studying those things in materials that happen slowly. Typically only vast conspiracies of immense numbers of atoms can produce the slow behavior that humans can perceive.
\begin{figure}[thb] \epsfxsize=2.5truein \epsffile{fig/8.ps} \caption{{\bf One dimensional crystal: phonons.} The order parameter field for a one--dimensional crystal is the local displacement $u(x)$. Long--wavelength waves in $u(x)$ have low frequencies, and cause sound.\hfill\break Crystals are rigid because of the broken translational symmetry. Because they are rigid, they fight displacements. Because there is an underlying translational symmetry, a uniform displacement costs no energy. A nearly uniform displacement, thus, will cost little energy, and thus will have a low frequency. These low--frequency elementary excitations are the sound waves in crystals. } \label{fig:8} \end{figure} A good example is given by sound waves. We won't talk about sound waves in air: air doesn't have any broken symmetries, so it doesn't belong in this lecture.\footnote{We argue here that low frequency excitations come from spontaneously broken symmetries. They can also come from conserved quantities: since air cannot be created or destroyed, a long--wavelength density wave cannot relax quickly.} Consider instead sound in the one-dimensional crystal shown in figure~8. We describe the material with an order parameter field $u(x)$, where here $x$ is the position within the material and $x - u(x)$ is the position of the reference atom within the ideal crystal. Now, there must be an energy cost for deforming the ideal crystal. There won't be any cost, though, for a uniform translation: $u(x)\equiv u_0$ has the same energy as the ideal crystal. (Shoving all the atoms to the right doesn't cost any energy.) So, the energy will depend only on derivatives of the function $u(x)$. The simplest energy that one can write looks like \begin{equation} \label{eq:Energy} {\cal E} = \int dx\,(\kappa/2) (du/dx)^2. \end{equation} (Higher derivatives won't be important for the low frequencies that humans can hear.) Now, you may remember Newton's law $F=m a$. 
The force here is given by the derivative of the energy $F=-(d{\cal E}/du)$. The mass is represented by the density of the material $\rho$. Working out the math (a variational derivative and an integration by parts, for those who are interested) gives us the equation \begin{equation} \label{eq:motion} \rho \ddot u = \kappa (d^2u/dx^2). \end{equation} The solutions to this equation \begin{equation} \label{eq:phonon} u(x,t) = u_0 \cos(2 \pi ( x/\lambda - \nu_\lambda t )) \end{equation} represent phonons or sound waves. The wavelength of the sound waves is $\lambda$, and the frequency is $\nu_\lambda$. Plugging \ref{eq:phonon} into \ref{eq:motion} gives us the relation \begin{equation} \label{eq:dispersion} \nu_\lambda = \sqrt{\kappa/\rho} / \lambda. \end{equation} The frequency gets small only when the wavelength gets large. This is the vast conspiracy: only huge sloshings of many atoms can happen slowly. {\sl Why does the frequency get small?} Well, there is no cost to a uniform translation, which is what \ref{eq:phonon} looks like for infinite wavelength. {\sl Why is there no energy cost for a uniform displacement?} Well, there is a translational symmetry: moving all the atoms the same amount doesn't change their interactions. {\sl But haven't we broken that symmetry?} That is precisely the point. \begin{figure}[thb] \epsfxsize=2.5truein \epsffile{fig/9A.ps} \epsfxsize=2.5truein \epsffile{fig/9B.ps} \caption{{\bf (a)~Magnets: spin waves.} Magnets break the rotational invariance of space. Because they resist twisting the magnetization locally, but don't resist a uniform twist, they have low energy spin wave excitations.\hfill\break {\bf (b)~Nematic liquid crystals: rotational waves.} Nematic liquid crystals also have low--frequency rotational waves. } \label{fig:9} \end{figure} Long after phonons were understood, Jeffrey Goldstone started to think about broken symmetries and order parameters in the abstract. 
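As an aside, the claim that the ansatz \ref{eq:phonon} solves \ref{eq:motion} precisely when the dispersion relation \ref{eq:dispersion} holds can be verified symbolically; a minimal sketch, assuming the sympy library is available:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
u0, lam, kappa, rho = sp.symbols('u0 lambda kappa rho', positive=True)

# phonon ansatz with the dispersion relation nu = sqrt(kappa/rho) / lambda
nu = sp.sqrt(kappa / rho) / lam
u = u0 * sp.cos(2 * sp.pi * (x / lam - nu * t))

# equation of motion: rho * u_tt - kappa * u_xx should vanish identically
residual = rho * sp.diff(u, t, 2) - kappa * sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0
```

Both second derivatives pull out a factor of $-(2\pi/\lambda)^2 u$, and the prefactors cancel exactly when $\nu_\lambda^2 = \kappa/(\rho\lambda^2)$.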
He found a rather general argument that, whenever a continuous symmetry (rotations, translations, $SU(3)$, ...) is broken, long--wavelength modulations in the symmetry direction should have low frequencies. The fact that the lowest energy state has a broken symmetry means that the system is stiff: modulating the order parameter will cost an energy rather like that in equation \ref{eq:Energy}. In crystals, the broken translational order introduces a rigidity to shear deformations, and low frequency phonons (figure~8). In magnets, the broken rotational symmetry leads to a magnetic stiffness and spin waves (figure~9a). In nematic liquid crystals, the broken rotational symmetry introduces an orientational elastic stiffness (it pours, but resists bending!) and rotational waves (figure~9b). In superfluids, the broken gauge symmetry leads to a stiffness which results in the superfluidity. Superfluidity and superconductivity really aren't any more amazing than the rigidity of solids. Isn't it amazing that chairs are rigid? Push on a few atoms on one side, and $10^9$ atoms away atoms will move in lock--step. In the same way, decreasing the flow in a superfluid must involve a cooperative change in a macroscopic number of atoms, and thus never happens spontaneously any more than two parts of the chair ever drift apart. The low--frequency Goldstone modes in superfluids are heat waves! (Don't be jealous: liquid helium has rather cold heat waves.) This is often called second sound, but is really a periodic modulation of the temperature which passes through the material like sound does through a metal. O.K., now we're getting the idea. Just to round things out, what about superconductors? They've got a broken gauge symmetry, and have a stiffness to decays in the superconducting current. What is the low energy excitation? It doesn't have one. But what about Goldstone's theorem? 
Well, you know about physicists and theorems $\dots$ That's actually quite unfair: Goldstone surely had conditions on his theorem which excluded superconductors. Actually, I believe Goldstone was studying superconductors when he came up with his theorem. It's just that everybody forgot the extra conditions, and just remembered that you always got a low frequency mode when you broke a continuous symmetry. We of course understood all along why there isn't a Goldstone mode for superconductors: it's related to the Meissner effect. The high energy physicists forgot, though, and had to rediscover it for themselves. Now we all call the loophole in Goldstone's theorem the Higgs mechanism, because (to be truthful) Higgs and his high--energy friends found a much simpler and more elegant explanation than we had. We'll discuss Meissner effects and the Higgs mechanism in the next lecture. I'd like to end this section, though, by bringing up another exception to Goldstone's theorem: one we've known about even longer, but which we don't have a nice explanation for. What about the orientational order in crystals? Crystals break both the continuous translational order and the continuous orientational order. The phonons are the Goldstone modes for the translations, but {\it there are no orientational Goldstone modes.}\footnote{In two dimensions, crystals provide another loophole in a well-known result, known as the Mermin-Wagner theorem. Hohenberg, Mermin, and Wagner, in a series of papers, proved in the 1960's that two-dimensional systems with a continuous symmetry cannot have a broken symmetry at finite temperature. At least, that's the English phrase everyone quotes when they discuss the theorem: they actually prove it for several particular systems, including superfluids, superconductors, magnets, and translational order in crystals. 
Indeed, crystals in two dimensions do not break the translational symmetry: at finite temperatures, the atoms wiggle enough so that the atoms don't sit in lock-step over infinite distances (their translational correlations decay slowly with distance). But the crystals do have a broken orientational symmetry: the crystal axes point in the same directions throughout space. (Mermin discusses this point in his paper on crystals.) The residual translational correlations (the local alignment into rows and columns of atoms) introduce long-range forces which force the crystalline axes to align, breaking the continuous rotational symmetry. Mermin, Wagner, and Hohenberg's methods apply very generally, but are not general enough to apply to this case (for good reason!)} We'll discuss this further in the next lecture, but I think this is one of the most interesting unsolved basic questions in the subject. \section{Classify the Topological Defects} \begin{figure}[thb] \epsfxsize=2.5truein \epsffile{fig/10.ps} \caption{{\bf Dislocation in a crystal.} Here is a topological defect in a crystal. We can see that one of the rows of atoms on the right disappears halfway through our sample. The place where it disappears is a defect, because it doesn't locally look like a piece of the perfect crystal. It is a topological defect because it can't be fixed by any local rearrangement. No reshuffling of atoms in the middle of the sample can change the fact that five rows enter from the right, and only four leave from the left! \hfill\break The Burger's vector of a dislocation is the net number of extra rows and columns, combined into a vector (columns, rows). } \label{fig:10} \end{figure} When I was in graduate school, the big fashion was topological defects. Everybody was studying homotopy groups, and finding exotic systems to write papers about. 
It was, in the end, a reasonable thing to do.\footnote{The next fashion, catastrophe theory, never became important for anything.} It is true that in a typical application you'll be able to figure out what the defects are without homotopy theory. You'll spend forever drawing pictures to convince anyone else, though. Most important, homotopy theory helps you to think about defects. A defect is a tear in the order parameter field. A topological defect is a tear that can't be patched. Consider the piece of 2-D crystal shown in figure~10. Starting in the middle of the region shown, there is an extra row of atoms. (This is called a dislocation.) Away from the middle, the crystal locally looks fine: it's a little distorted, but there is no problem seeing the square grid and defining an order parameter. Can we rearrange the atoms in a small region around the start of the extra row, and patch the defect? No. The problem is that we can tell there is an extra row without ever coming near to the center. The traditional way of doing this is to traverse a large loop surrounding the defect, and count the net number of rows crossed on the path. In the path shown, there are two rows going up and three going down: no matter how far we stay from the center, there will naturally always be an extra row on the right. \begin{figure}[thb] \epsfxsize=3.5truein \epsffile{fig/11.ps} \caption{{\bf Loop around the dislocation mapped onto order parameter space.} How do we think about our defect in terms of order parameters and order parameter spaces? Consider a closed loop around the defect. The order parameter field $u$ changes as we move around the loop. The positions of the atoms around the loop with respect to their local ``ideal'' lattice drifts upward continuously as we traverse the loop. This precisely corresponds to a loop around the order parameter space: the loop passes once through the hole in the torus. 
A loop {\it around} the hole corresponds to an extra column of atoms.\hfill\break Moving the atoms slightly will deform the loop, but won't change the number of times the loop winds through or around the hole. Two loops which traverse the torus the same number of times through and around are equivalent. The equivalence classes are labelled precisely by pairs of integers (just like the Burger's vectors), and the first homotopy group of the torus is ${\cal Z}\times{\cal Z}$. } \label{fig:11} \end{figure} How can we generalize this basic idea to a general problem with a broken symmetry? Remember that the order parameter space for the 2-D square crystal is a torus (see figure~7). Remember that the order parameter at a point is that translation which aligns a perfect square grid to the deformed grid at that point. Now, what is the order parameter far to the left of the defect (a), compared to the value far to the right (d)? Clearly, the lattice to the right is shifted vertically by half a lattice constant: the order parameter has been shifted halfway around the torus. As shown in figure~11, along the top half of a clockwise loop the order parameter (position of the atom within the unit cell) moves upward, and along the bottom half, again moves upward. All in all, the order parameter circles once around the torus. The winding number around the torus is the net number of times the torus is circumnavigated when the defect is orbited once. This is why they are called topological defects. Topology is the study of curves and surfaces where bending and twisting is ignored. An order parameter field, no matter how contorted, which doesn't wind around the torus can always be smoothly bent and twisted back into a uniform state. 
If along any loop, though, the order parameter winds either around the hole or through it a net number of times, then enclosed in that loop is a defect which cannot be bent or twisted flat: the winding number can't change by an integer in a smooth and continuous fashion. How do we categorize the defects for 2-D square crystals? Well, there are two integers: the number of times we go around the central hole, and the number of times we pass through it. In the traditional description, this corresponds precisely to the number of extra rows and columns of atoms we pass by. This was called the Burger's vector in the old days, and nobody needed to learn about tori to understand it. We now call it the first Homotopy group of the torus: \begin{equation} \label{eq:Homotopy} \Pi_1({\cal T}^2) = {\cal Z} \times {\cal Z} \end{equation} where ${\cal Z}$ represents the integers. That is, a defect is labeled by two integers $(m,n)$, where $m$ represents the number of extra rows of atoms on the right-hand part of the loop, and $n$ represents the number of extra columns of atoms on the bottom. \begin{figure}[thb] \epsfxsize=2.5truein \epsffile{fig/12A.ps} \epsfxsize=2.5truein \epsffile{fig/12B.ps} \caption{{\bf (a)~Hedgehog defect.} Magnets have no line defects (you can't lasso a basketball), but do have point defects. Here is shown the hedgehog defect, $\vec M({\bf x})= M_0\, \hat x$. You can't surround a point defect in three dimensions with a loop, but you can enclose it in a sphere. The order parameter space, remember, is also a sphere. The order parameter field takes the enclosing sphere and maps it onto the order parameter space, wrapping it exactly once. The point defects in magnets are categorized by this {\it wrapping number}: the second Homotopy group of the sphere is $\cal Z$, the integers.\hfill\break {\bf (b)~Defect line in a nematic liquid crystal.} You can't lasso the sphere, but you can lasso a hemisphere! 
Here is the defect corresponding to the path shown in figure~5. As you pass clockwise around the defect line, the order parameter rotates counterclockwise by $180^\circ$.\hfill\break This path on figure~5 would actually have wrapped around the right--hand side of the hemisphere. Wrapping around the left--hand side would have produced a defect which rotated clockwise by $180^\circ$. (Imagine that!) The path in figure~5 is halfway in between, and illustrates that these two defects are really not different topologically. } \label{fig:12} \end{figure} Here's where in the lecture I show the practical importance of topological defects. Unfortunately for you, I can't enclose a soft copper tube for you to play with, the way I do in the lecture. They're a few cents each, and machinists on two continents have been quite happy to cut them up for my demonstrations, but they don't pack well into books. Anyhow, most metals, and copper in particular, exhibit what is called work hardening. It's easy to bend the tube, but it's amazingly tough to bend it back. The soft original copper is relatively defect--free. To bend, the crystal has to create lots of line dislocations, which move around to produce the bending.\footnote{This again is the mysterious lack of rotational Goldstone modes in crystals.} The line defects get tangled up, and get in the way of any new defects. So, when you try to bend the tube back, the metal becomes much stiffer. Work hardening has had a noticeable impact on the popular culture. The magician effortlessly bends the metal bar, and the strongman can't straighten it $\dots$ Superman bends the rod into a pair of handcuffs for the criminals $\dots$ Before we explain why these curves form a group, let's give some more examples of topological defects and how they can be classified. Figure 12a shows a ``hedgehog'' defect for a magnet. The magnetization simply points straight out from the center in all directions.
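The hedgehog's defect charge can be checked numerically as the degree of the map from a sphere enclosing the defect to the order--parameter sphere, $\frac{1}{4\pi}\oint \vec M \cdot (\partial_\theta \vec M \times \partial_\phi \vec M)\, d\theta\, d\phi$. A sketch (Python; the grid size and difference step are arbitrary choices):

```python
from math import sin, cos, pi

def hedgehog(theta, phi):
    """On a sphere surrounding the hedgehog, M/|M| is just the radial unit vector."""
    return (sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta))

def wrapping_number(M, n=100, eps=1e-5):
    """Degree of the map M: S^2 -> S^2, computed as (1/4 pi) times the integral
    of M . (dM/dtheta x dM/dphi) over the sphere (midpoint rule)."""
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * pi / n
        for j in range(n):
            ph = (j + 0.5) * 2 * pi / n
            m = M(th, ph)
            dth = [(a - b) / (2 * eps) for a, b in zip(M(th + eps, ph), M(th - eps, ph))]
            dph = [(a - b) / (2 * eps) for a, b in zip(M(th, ph + eps), M(th, ph - eps))]
            cross = (dth[1] * dph[2] - dth[2] * dph[1],
                     dth[2] * dph[0] - dth[0] * dph[2],
                     dth[0] * dph[1] - dth[1] * dph[0])
            total += sum(a * b for a, b in zip(m, cross)) * (pi / n) * (2 * pi / n)
    return round(total / (4 * pi))

print(wrapping_number(hedgehog))  # 1: the enclosing sphere wraps order parameter space once
```

Wiggling the magnetization smoothly changes the integrand but not the rounded integer, which is the point of the classification.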
How can we tell that there is a defect, always staying far away? Since this is a point defect in three dimensions, we have to surround it with a sphere. As we move around on this sphere in ordinary space, the order parameter moves around the order parameter space (which also happens to be a sphere, of radius $|\vec M|$). In fact, the order parameter space is covered exactly once as we surround the defect. This is called the {\it wrapping number}, and doesn't change as we wiggle the magnetization in smooth ways. The point defects of magnets are classified by the wrapping number: \begin{equation} \label{eq:Homotopy2} \Pi_2({\cal S}^2) = {\cal Z}. \end{equation} Here, the $2$ subscript says that we're studying the second Homotopy group. It represents the fact that we are surrounding the defect with a 2-D spherical surface, rather than the 1-D curve we used in the crystal.\footnote{The zeroth homotopy group classifies domain walls. The third homotopy group, applied to defects in three-dimensional materials, classifies what the condensed matter people call textures and the particle people sometimes call skyrmions. The fourth homotopy group, applied to defects in space--time path integrals, classifies types of instantons.} You might get the impression that a strength 7 defect is really just seven strength 1 defects, stuffed together. You'd be quite right: occasionally, they do bunch up, but usually big ones decompose into small ones. This doesn't mean, though, that adding two defects always gives a bigger one. In nematic liquid crystals, two line defects are as good as none! Magnets didn't have any line defects: a loop in real space never surrounds something it can't smooth out. Formally, the first homotopy group of the sphere is zero: you can't loop a basketball. For a nematic liquid crystal, though, the order parameter space was a hemisphere (figure~5). There is a loop on the hemisphere in figure~5 that you can't get rid of by twisting and stretching. 
It doesn't look like a loop, but you have to remember that the two opposing points on the equator really represent the same nematic orientation. The corresponding defect has a director field $n$ which rotates $180^\circ$ as the defect is orbited: figure~12b shows one typical configuration (called an $s=-1/2$ defect). Now, if you put two of these defects together, they cancel. (I can't draw the pictures, but consider it a challenging exercise in geometric visualization.) Nematic line defects add modulo~2, like clock arithmetic in elementary school: \begin{equation} \label{eq:Homotopy_2} \Pi_1({\cal RP}^2) = {\cal Z}_2. \end{equation} Two parallel defects can coalesce and heal, even though each one individually is stable: each goes halfway around the sphere, and the whole loop can be shrunk to zero. \begin{figure}[thb] \epsfxsize=3.5truein \epsffile{fig/13.ps} \caption{{\bf Multiplying two loops.} The product of two loops is given by starting from their intersection, traversing the first loop, and then traversing the second. The inverse of a loop is clearly the same loop travelled backward: compose the two and one can shrink them continuously back to nothing. This definition makes the homotopy classes into a group.\hfill\break This multiplication law has a physical interpretation. If two defect lines coalesce, their homotopy class must of course be given by the loop enclosing both. This large loop can be deformed into two little loops, so the homotopy class of the coalesced line defect is the product of the homotopy classes of the individual defects. } \label{fig:13} \end{figure} Finally, why are these defect categories a group? A group is a set with a multiplication law, not necessarily commutative, and an inverse for each element.
For the first homotopy group, the elements of the group are equivalence classes of loops: two loops are equivalent if one can be stretched and twisted onto the other, staying on the manifold at all times.\footnote{A loop is a continuous mapping from the circle into the order parameter space: $\theta \rightarrow u(\theta),\> 0\le\theta<2\pi$. When we encircle the defect with a loop, we get a loop in order parameter space as shown in figure~4: $\theta \rightarrow \vec x(\theta)$ is the loop in real space, and $\theta \rightarrow u(\vec x(\theta))$ is the loop in order parameter space. Two loops are equivalent if there is a continuous one-parameter family of loops connecting one to the other: $u \equiv v$ if there exists $u_t(\theta)$ continuous both in $\theta$ and in $0\le t\le1$, with $u_0 \equiv u$ and $u_1 \equiv v$.} For example, any loop going through the hole from the top (as in the top right-hand torus in figure~13) is equivalent to any other one. To multiply a loop $u$ and a loop $v$, one must first make sure that they meet at some point (by dragging them together, probably). Then one defines a new loop $u \otimes v$ by traversing first the loop $u$ and then $v$.\footnote{That is, $u\otimes v(\theta) \equiv u(2\theta)$ for $0\le\theta\le\pi$, and $\equiv v(2\theta-2\pi)$ for $\pi \le \theta \le 2\pi$.} The inverse of a loop $u$ is just the loop which runs along the same path in the reverse direction. The identity element consists of the equivalence class of loops which don't enclose a hole: they can all be contracted smoothly to a point (and thus to one another). Finally, the multiplication law has a direct physical implication: encircling two defect lines of strength $u$ and $v$ is completely equivalent to encircling one defect of strength $u\otimes v$. This all seems pretty trivial: maybe thinking about order parameter spaces and loops helps one think more clearly, but are there any real uses for talking about the group structure?
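The multiplication law is concrete enough to put in code. Take the circle as order parameter space (an $xy$ magnet, say), where loop classes are labeled by a single winding number; a sketch (Python; the parameterization and step counts are arbitrary) showing that winding numbers add under $\otimes$:

```python
from math import pi

def loop(n):
    """A loop on the circle winding n times: theta -> n*theta (mod 2 pi)."""
    return lambda th: (n * th) % (2 * pi)

def product(u, v):
    """u (x) v: run through u on the first half of the circle, then through v,
    with each argument rescaled back into [0, 2 pi]."""
    return lambda th: u(2 * th) if th <= pi else v(2 * th - 2 * pi)

def winding(f, steps=1000):
    """Net winding number of a closed path f : [0, 2 pi] -> circle."""
    total, prev = 0.0, f(0.0)
    for k in range(1, steps + 1):
        cur = f(2 * pi * k / steps)
        total += (cur - prev + pi) % (2 * pi) - pi   # shortest signed jump
        prev = cur
    return round(total / (2 * pi))

print(winding(product(loop(2), loop(3))))   # 5: winding numbers add under (x)
```

The inverse of `loop(n)` is `loop(-n)`, and `product(loop(n), loop(-n))` has winding zero: it can be contracted to a point, the identity class.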
Let me conclude this lecture with an amazing, physically interesting consequence of the multiplication laws we described. There is a fine discussion of this in Mermin's article\cite{Mermin}, but I learned about it from Dan Stein's thesis. \begin{figure}[thb] \center{\epsfxsize=1truein \epsffile{fig/14A.ps} \hskip 0.2truein \epsfxsize=1.5truein \epsffile{fig/14B.ps}} \epsfxsize=3.5truein \epsffile{fig/14C.ps} \caption{{\bf Defect entanglement.} (a)~Can a defect line of class $\alpha$ pass by a line of class $\beta$, without getting topologically entangled? (b)~We see that we can pass by if we leave a trail: is the connecting double line topologically trivial? Encircle the double line by a loop. The loop can be wiggled and twisted off the double line, but it still circles around the two legs of the defects $\alpha$ and $\beta$. (c)~The homotopy class of the loop is precisely $\beta\alpha\beta^{-1}\alpha^{-1}$, which is trivial precisely when $\beta\alpha = \alpha\beta$. Thus two defect lines can pass by one another if their homotopy classes commute! } \label{fig:14} \end{figure} Can two defect lines cross one another? Figure~14a shows two defect lines, of strength (homotopy type) $\alpha$ and $\beta$, which are not parallel. Suppose there is an external force pulling the $\alpha$ defect past the $\beta$ one. Clearly, if we bend and stretch the defect as shown in figure~14b, it can pass by, but there is a trail left behind, of two defect lines. $\alpha$ can really leave $\beta$ behind only if it is topologically possible to erase the trail. Can the two lines annihilate one another? Only if their net strength is zero, as measured by the loop in 14b. Now, get two wires and some string. Bend the wires into the shape found in figure~14b. Tie the string into a fairly large loop, surrounding the doubled portion. Wiggle the string around, and try to get the string out from around the doubled section. 
You'll find that you can't completely remove the string (no fair pulling the string past the cut ends of the defect lines!), but that you can slide it downward into the configuration shown in 14c. Now, in 14c we see that each wire is encircled once clockwise and once counterclockwise. Don't they cancel? Not necessarily! If you look carefully, the order of traversal is such that the net homotopy class is $\beta \alpha \beta^{-1} \alpha^{-1}$, which is only the identity if $\beta$ and $\alpha$ {\it commute}. Thus the physical entanglement problem for defects is directly connected to the group structure of the loops: commutative defects can pass through one another, noncommutative defects entangle. I'd like to be able to tell you that the work hardening in copper is due to topological entanglements of defects. It wouldn't be true. The homotopy group of dislocation lines in fcc copper is commutative. (It's rather like the 2-D square lattice: if $\alpha = (m,n)$ and $\beta = (o,p)$ with $m,n,o,p$ the number of extra horizontal and vertical lines of atoms, then $\alpha \beta = (m+o,n+p) = \beta \alpha$.) The reason dislocation lines in copper don't pass through one another is energetic, not topological. The two dislocation lines interact strongly with one another, and energetically get stuck when they try to cross. Remember at the beginning of the lecture, I said that there were gaps in the system: the topological theory can only say when things are impossible to do, not when they are difficult to do. I'd like to be able to tell you that this beautiful connection between the commutativity of the group and the entanglement of defect lines is nonetheless important in lots of other contexts. That too would not be true. There are two types of materials I know of which are supposed to suffer from defect lines which topologically entangle. The first are biaxial nematics, which were thoroughly analyzed theoretically before anyone found one.
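For the biaxial nematic, the first homotopy group is the eight--element quaternion group $\{\pm 1, \pm i, \pm j, \pm k\}$ (a standard result; see Mermin's article\cite{Mermin}), and the entangling commutator is easy to exhibit. A sketch (Python) of $\beta\alpha\beta^{-1}\alpha^{-1}$ for two of the $180^\circ$ defect classes:

```python
def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qinv(a):
    """The inverse of a unit quaternion is its conjugate."""
    w, x, y, z = a
    return (w, -x, -y, -z)

identity = (1, 0, 0, 0)
alpha = (0, 1, 0, 0)   # the class 'i': a 180-degree rotation about one axis
beta  = (0, 0, 1, 0)   # the class 'j': a 180-degree rotation about another

comm = qmul(qmul(beta, alpha), qmul(qinv(beta), qinv(alpha)))
print(comm == identity)   # False: the commutator is -1, so these two lines entangle
```

Contrast the copper dislocations above: there the commutator of any two Burger's vectors is the identity, so the obstruction is energetic rather than topological.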
The other are the metallic glasses, where David Nelson has a theory of defect lines needed to relieve the frustration. We'll discuss closely related theories in lecture~3. Nelson's defects don't commute, and so can't cross one another. He originally hoped to explain the freezing of the metallic glasses into random configurations as an entanglement of defect lines. Nobody has ever been able to take this idea and turn it into a real calculation, though. Enough, then, of the beautiful and elegant world of homotopy theory: let's begin to think about what order parameter configurations are actually formed in practice. \bigskip\bigskip \centerline{\bf Acknowledgments} \bigskip I'd like to acknowledge NSF grant \# DMR-9118065, and thank NORDITA and the Technical University of Denmark for their hospitality while these lectures were written up.
\section*{Supplemental Material} In this \emph{Letter}, we presented the results for the NLO evolution of the first three moments of the track functions under the simplified assumption that $T_q=T_{\bar q}$, and that the track functions are equal for all quark flavors. This simplified case was sufficient to illustrate the structure of the equations, without excessive notation. In this \emph{Supplemental Material}, we present the complete results in several different forms. These different forms may be useful for different users, depending on their particular application. Although the results can be drastically simplified by working in terms of shift-invariant central moments, we begin by presenting the results for the evolution of the standard moments of the track functions. Using the following notation \begin{align} \frac{\mathrm{d}}{\mathrm{d}\ln \mu^2}{\color{blue} T_i(n)} = D_{T_i(n)}\,, \quad D_{T_i}(n) = \sum_{L=0}^\infty a_s^{L+1}D_{T_i(n)}^{(L)}\,, \end{align} we find that the evolution for the first three moments of the gluon track function is given by \begin{align}\label{eq:full_gluon} &D_{T_g(1)}^{(0)} =-\gamma^{(0)}_{gg}(2) {\color{blue}T_g{(1)}}-\sum_i\gamma^{(0)}_{qg}(2) {\color{blue}(T_{q_i}{(1)}+T_{\bar q_i}{(1)})}\,, \\ & D^{(0)}_{T_g(2)}= -\gamma^{(0)}_{gg}(3){\color{blue}T_g(2)}-\sum_{i}\gamma^{(0)}_{qg}(3){\color{blue}\left(T_{q_i}(2)+T_{\bar{q}_i}(2)\right)}+\frac{14}{5}C_A\,{\color{blue}T_{g}(1)T_{g}(1)}+\sum_{i}\frac{2}{5}T_F\,{\color{blue}T_{q_i}(1)T_{\bar{q}_i}(1)} \,, \nonumber \\ & D^{(0)}_{T_g(3)}= -\gamma^{(0)}_{gg}(4){\color{blue}T_g(3)}-\sum_i \gamma^{(0)}_{qg}(4){\color{blue}\left(T_{q_i}(3)+T_{\bar{q}_i}(3)\right)}+\frac{21}{5}C_A\,{\color{blue}T_g(2)T_g(1)}+\sum_i\frac{3}{10}T_F {\color{blue}\left(T_{q_i}(2)T_{\bar{q}_i}(1)+T_{\bar{q}_i}(2)T_{q_i}(1)\right)}\,, \nonumber \\ &D_{T_g(1)}^{(1)} =-\gamma^{(1)}_{gg}(2) {\color{blue}T_g{(1)}}-\sum_i\gamma^{(1)}_{qg}(2) {\color{blue}(T_{q_i}{(1)}+T_{\bar q_i}{(1)})}\,, \nonumber\\ 
&D_{T_g(2)}^{(1)}=-\gamma^{(1)}_{gg}(3) {\color{blue}T_g{(2)}}-\sum_i\gamma^{(1)}_{qg}(3) {\color{blue}(T_{q_i}{(2)}+T_{\bar q_i}{(2)})} +\left[ C_A^2 \left( -8 \zeta_3+\frac{26}{45}\pi^2 +\frac{2158}{675} \right)-\frac{4}{9}C_A n_f T_F \right] {\color{blue}T_g{(1)}T_g{(1)}} \nonumber \\ &+\sum_i \left[ T_F\left( \!-\frac{299}{225}C_A\!-\!\frac{4387}{900}C_F \right) \right] {\color{blue}T_g{(1)}(T_{q_i}{(1)} \!+\!T_{\bar q_i}{(1)})} \!+\!\sum_i T_F\left[\left( \frac{12413}{1350}\!-\!\frac{52}{45}\pi^2 \right) C_A\!+\!\frac{1528}{225}C_F \!-\!\frac{16}{25} n_f T_F \right] {\color{blue}T_{q_i}{(1)} T_{\bar q_i}{(1)}} \,, \nonumber \\ &D_{T_g(3)}^{(1)} =-\gamma^{(1)}_{gg}(4) {\color{blue}T_g{(3)}}-\sum_i \gamma^{(1)}_{q g}(4) {\color{blue}(T_{q_i}{(3)}+T_{\bar q_i}{(3)})}+\left[ C_A^2 \left( 24 \zeta_3 -\frac{278}{15}\pi^2 +\frac{767263}{4500} \right)-\frac{2}{3}C_A n_f T_F \right]{\color{blue}T_g(2)T_g{(1)}} \nonumber \\ &+\sum_i \left[T_F \left( \!-\frac{46}{15}C_A\!-\!\frac{1727}{2250}C_F \right) \right] {\color{blue}T_g{(2)}(T_{q_i}{(1)}\!+\!T_{\bar q_i}{(1)})} + \sum_i T_F \left[\left( \frac{14}{15}\pi^2\!-\!\frac{10318}{1125} \right)C_A\!-\!\frac{4544}{1125}C_F \right]{\color{blue}(T_{q_i}{(2)}\!+\!T_{\bar q_i}{(2)}) T_g{(1)}} \nonumber \\ &+\sum_i T_F\left[ \left( \frac{5321}{3000} -\frac{2}{5}\pi^2 \right)C_A+\frac{1523}{240}C_F-\frac{12}{25}n_f T_F \right] {\color{blue}(T_{q_i}{(2)}T_{\bar q_i}{(1)}+T_{q_i}{(1)}T_{\bar q_i}{(2)} )}\nonumber \\ &+C_A^2 \left(-\frac{248561}{2250}+\frac{194}{15}\pi^2-24 \zeta_3 \right){\color{blue}T_g{(1)}T_g{(1)}T_g{(1)}} +\sum_i \left[ C_A T_f \left( \frac{23051}{1125}-\frac{28}{15}\pi^2 \right)-C_F T_f \frac{501}{100} \right]{\color{blue}T_g{(1)}T_{q_i}{(1)} T_{\bar q_i}{(1)}}\,, \nonumber \end{align} while for quarks, \begin{align}\label{eq:full_quark} &D_{T_q(1)}^{(0)} =-\gamma^{(0)}_{gq}(2) {\color{blue}T_g(1)} -\gamma^{(0)}_{q q}(2) {\color{blue}T_q{(1)}} \,, \\ 
&D^{(0)}_{T_q(2)}=-\gamma_{gq}^{(0)}(3){\color{blue}T_g(2)}-\gamma^{(0)}_{qq}(3){\color{blue}T_q(2)}+3C_F\, {\color{blue}T_g(1)T_{q}(1)} \,,\nonumber \\ &D^{(0)}_{T_q(3)}=-\gamma^{(0)}_{gq}(4){\color{blue}T_g(3)}-\gamma^{(0)}_{qq}(4){\color{blue}T_q(3)}+\frac{13}{10}C_F\,{\color{blue}T_g(2)T_q(1)}+\frac{16}{5}C_F\,{\color{blue}T_g(1)T_q(2)} \,,\nonumber \\ &D_{T_q(1)}^{(1)} =-\gamma^{(1)}_{gq}(2) {\color{blue}T_g(1)} -\gamma^{(1)}_{q q}(2) {\color{blue}T_q{(1)}}-\gamma^{(1)}_{\bar q q}(2){\color{blue}T_{\bar q}{(1)} }-\sum_i \gamma^{(1)}_{Q q}(2){\color{blue}(T_{Q_i}{(1)}+ T_{\bar Q_i}{(1)})} \,,\nonumber \\ &D_{T_q(2)}^{(1)}=-\gamma^{(1)}_{gq}(3) {\color{blue}T_g(2)} -\gamma^{(1)}_{q q}(3) {\color{blue}T_q{(2)}}-\gamma^{(1)}_{\bar q q}(3){\color{blue}T_{\bar q}{(2)} }-\sum_i \gamma^{(1)}_{Q q}(3){\color{blue}(T_{Q_i}{(2)}+ T_{\bar Q_i}{(2)})} \nonumber \\ &+\left[ \left( \frac{1399}{5400}-\frac{7}{9}\pi^2 \right)C_A C_F -\frac{67}{18}C_F^2 \right]{\color{blue}T_g(1)T_g(1)}\nonumber \\ &+\left[ \left( -\frac{3023}{108}+\frac{34}{9}\pi^2-8\zeta_3 \right)C_A C_F +\left(\frac{3023}{54}-\frac{68}{9}\pi^2+16\zeta_3 \right) C_F^2 -\frac{53}{18}C_F T_F \right] {\color{blue}T_q{(1)} T_q{(1)}} \nonumber \\ &+\left[ \left(\frac{14057}{216}-\frac{77}{9}\pi^2+16 \zeta_3 \right)C_A C_F +\left(-\frac{14057}{108}+\frac{154}{9}\pi^2-32 \zeta_3 \right) C_F^2 -\frac{2803}{900}C_F T_F \right] {\color{blue}T_q{(1)} T_{\bar q}{(1)}} \nonumber \\ &+\left[ \frac{229}{18}C_A C_F+\left(\frac{2573}{72}-4\pi^2 \right)C_F^2 \right] {\color{blue}T_g(1) T_q{(1)}} -\sum_i \frac{17}{100}C_F T_F {\color{blue}T_{Q_i}{(1)}T_{\bar Q_i}{(1)}} -\sum_i \frac{53}{18}C_F T_F {\color{blue}T_{q}{(1)}( T_{Q_i}{(1)}+ T_{\bar Q_i}{(1)} )}\,, \nonumber \\ &D_{T_q(3)}^{(1)} =-\gamma^{(1)}_{gq}(4) {\color{blue}T_g(3)} -\gamma^{(1)}_{q q}(4) {\color{blue}T_q{(3)}}-\gamma^{(1)}_{\bar q q}(4){\color{blue}T_{\bar q}{(3)} }-\sum_i \gamma^{(1)}_{Q q}(4){\color{blue}(T_{Q_i}{(3)}+ T_{\bar Q_i} {(3)} )} \nonumber \\ 
&+\left[ -\frac{3787}{750} C_A C_F -\frac{249}{50}C_F^2 \right]{\color{blue}T_g(2)T_g(1)} +\left[ \left( \frac{7}{3}\pi^2-\frac{14161}{3000} \right) C_A C_F +\left( \frac{84329}{6000}-\frac{26}{15}\pi^2 \right)C_F^2 \right] {\color{blue}T_g(2)T_q{(1)}}\nonumber \\ &+\left[ \frac{2327}{180} C_A C_F +\left( \frac{10189}{250}-\frac{64}{15}\pi^2 \right)C_F^2 \right]{\color{blue}T_g(1)T_q{(2)}} - \sum_i \frac{724}{225}C_F T_F {\color{blue}T_q{(2)}( T_{Q_i}{(1)}+T_{\bar Q_i}{(1)})}\nonumber \\ & -\sum_i \frac{9557}{9000}C_F T_F {\color{blue}T_q{(1)}( T_{Q_i}{(2)}+T_{\bar Q_i}{(2)})} - \sum_i \frac{59}{1000}C_F T_F {\color{blue}\left(T_{Q_i}{(2)} T_{\bar Q_i}{(1)} + T_{Q_i}{(1)} T_{\bar Q_i}{(2)} \right)} \nonumber \\ &+\left[ \left( -\frac{353801}{3600}+\frac{77}{6}\pi^2-24\zeta_3 \right)C_A C_F+ \left( \frac{353801}{1800}-\frac{77}{3}\pi^2+48\zeta_3 \right)C_F^2-\frac{12839}{3000}C_F T_F \right] {\color{blue}T_q{(2)}T_q{(1)}} \nonumber \\ &+\left[ \left( -\frac{369503}{3000}+\frac{77}{5}\pi^2-24\zeta_3 \right)C_A C_F+ \left(\frac{369503}{1500}-\frac{154}{5}\pi^2+48\zeta_3 \right)C_F^2-\frac{1261}{1125}C_F T_F \right] {\color{blue}T_{\bar q}{(2)}T_q{(1)}} \nonumber \\ &+ \left[ \left( \frac{649211}{6000}-\frac{139}{10}\pi^2+24\zeta_3 \right)C_A C_F+ \left(-\frac{649211}{3000}+\frac{139}{5}\pi^2-48 \zeta_3 \right)C_F^2-\frac{29491}{9000}C_F T_F \right] {\color{blue}T_{\bar q}{(1)}T_q{(2)}} \nonumber \\ &+\left[ \left( \frac{97883}{9000}-\frac{7}{3}\pi^2 \right)C_A C_F-\frac{181}{150}C_F^2 \right] {\color{blue}T_q{(1)}T_g(1)T_g(1)} - \sum_i \frac{137}{500}C_F T_F {\color{blue}T_q{(1)}T_{Q_i}{(1)}T_{\bar Q_i}{(1)}} \nonumber \\ &+\left[ \left( \frac{202651}{1800}-\frac{43}{3}\pi^2+24\zeta_3 \right)C_A C_F+\left( -\frac{202651}{900}+\frac{86}{3}\pi^2 -48\zeta_3 \right)C_F^2 -\frac{137}{500}C_F T_F \right] {\color{blue}T_q{(1)}T_q{(1)}T_{\bar q}{(1)}}\,. 
\nonumber \end{align} Here $\gamma_{ij}^{(0)}(n)$ ($\gamma_{ij}^{(1)}(n)$) are the (N)LO moments of the timelike splitting function, and $Q\neq q$ is used to denote the distinct quark flavors. The expressions for anti-quarks can be obtained by charge conjugation. As described in the \emph{Letter}, when written in terms of standard moments these equations are highly redundant, due to the presence of the underlying shift symmetry. To present the evolution equations in terms of central moments for the general case with different track functions for each quark flavor, we must extend the situation discussed in the \emph{Letter} by introducing $\Delta_{q_i} = T_{q_i}(1) - T_g(1)$, in addition to $\sigma_j$ for each flavor. For gluons, we find that the evolution of the second and third central moments can be written as \begin{align} \label{eq:final_gluon_shift} D^{(0)}_{\sigma_g(2)} & = -\gamma^{(0)}_{gg}(3){\color{blue}\sigma_g(2)}+\sum_i\left\{-\gamma^{(0)}_{qg}(3){\color{blue}\left(\sigma_{q_i}(2)+\sigma_{\bar{q}_i}(2)+\Delta_{q_i}^2+\Delta_{\bar{q}_i}^2\right)} + \frac{2}{5}T_F\, {\color{blue}\Delta_{q_i}\Delta_{\bar{q}_i}} \right\}\ , \\ D^{(0)}_{\sigma_g(3)} &= -\gamma^{(0)}_{gg}(4){\color{blue}\sigma_g(3)}+\sum_i\biggl\{-\gamma^{(0)}_{qg}(4){\color{blue}\left(\sigma_{q_i}(3)+\sigma_{\bar{q}_i}(3) +3\sigma_{q_i}(2)\Delta_{q_i}+3\sigma_{\bar{q}_i}(2)\Delta_{\bar{q}_i} +\Delta_{q_i}^3+\Delta_{\bar{q}_i}^3\right)} \nonumber\\ &\quad -2T_F\,{\color{blue}\sigma_g(2)(\Delta_{q_i}+\Delta_{\bar{q}_i})} +\frac{3}{10}T_F\,{\color{blue}\left(\sigma_{q_i}(2)\Delta_{\bar{q}_i}+\sigma_{\bar{q}_i}(2)\Delta_{q_i} +\Delta_{q_i}^2\Delta_{\bar{q}_i}+\Delta_{\bar{q}_i}^2\Delta_{q_i} \right)} \biggr\} \ ,\nonumber \\ D_{\sigma_g(2)}^{(1)} &=-\gamma^{(1)}_{gg}(3) {\color{blue}\sigma_g(2)} +\sum_i \biggl\{-\gamma^{(1)}_{qg}(3) {\color{blue}(\sigma_{q_i}(2)+\sigma_{\bar q_i}(2)+\Delta_{q_i}^2+\Delta_{\bar q_i}^2)} \nonumber \\ & \quad +T_F\Bigl[\Bigl(
\frac{12413}{1350}-\frac{52}{45}\pi^2 \Bigr) C_A+\frac{1528}{225}C_F -\frac{16}{25} n_f T_F \Bigr] {\color{blue}\Delta_{q_i}\Delta_{\bar q_i}} \biggr\} \,,\nonumber \\ D_{\sigma_g(3)}^{(1)} &=-\gamma^{(1)}_{gg}(4) {\color{blue}\sigma_g(3)} +\sum_i \biggl\{-\gamma^{(1)}_{q g}(4) {\color{blue}(\sigma_{q_i}(3)+\sigma_{\bar q_i}(3) +3 \sigma_{q_i}(2) \Delta_{q_i} +3\sigma_{\bar q_i}(2) \Delta_{\bar q_i} + \Delta_{q_i}^3 + \Delta_{\bar q_i}^3)} \nonumber \\ & \quad +T_F \Bigl[\Bigl(- \frac{638}{45}+ \frac{8}{3}\pi^2\Bigr)C_A - \frac{3803}{250}C_F \Bigr] {\color{blue}\sigma_g(2)(\Delta_{q_i} + \Delta_{{\bar q}_i})} \nonumber \\ &\quad +T_F\Bigl[ \Bigl( \frac{5321}{3000} -\frac{2}{5}\pi^2 \Bigr)C_A+\frac{1523}{240}C_F-\frac{12}{25}n_f T_F \Bigr] {\color{blue}(\sigma_{q_i}(2) \Delta_{\bar q_i} +\sigma_{\bar q_i}(2) \Delta_{q_i}+ \Delta_{q_i}^2 \Delta_{\bar q_i} + \Delta_{\bar q_i}^2 \Delta_{q_i})}\biggr\}\,. \nonumber \end{align} This form emphasizes the large redundancy present in the expressions given in Eq.~\eqref{eq:full_gluon}. We note that while the mixing into $\sigma_{q_i}(2)$ and $\sigma_{q_i}(3)$ is governed to all loop orders by $\gamma_{qg}$, the fact that the mixing into the products $\sigma_{q_i}(2) \Delta_{q_i}$ and $\Delta_{q_i}^3$ is also governed by this same anomalous dimension is a coincidence at this order in perturbation theory.
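The shift symmetry underlying this redundancy is easy to check numerically: the central moments $\sigma(2)=T(2)-T(1)^2$ and $\sigma(3)=T(3)-3T(1)T(2)+2T(1)^3$ are invariant under a constant shift of the distribution, while the raw moments are not. A minimal illustration (the toy distribution and shift are made up; this is not code from our calculation), using exact rational arithmetic:

```python
from fractions import Fraction as F

def raw_moments(dist, nmax=3):
    """Raw moments T(n) = E[x^n] of a discrete distribution {value: prob}."""
    return [sum(p * x**n for x, p in dist.items()) for n in range(nmax + 1)]

def central(T):
    """Central moments from raw moments:
    sigma(2) = T(2) - T(1)^2,  sigma(3) = T(3) - 3 T(1) T(2) + 2 T(1)^3."""
    return (T[2] - T[1]**2, T[3] - 3*T[1]*T[2] + 2*T[1]**3)

# toy two-point distribution and a shifted copy
dist = {F(1, 4): F(1, 3), F(3, 4): F(2, 3)}
shifted = {x + F(1, 10): p for x, p in dist.items()}

# the raw moments change under the shift, the central moments do not
assert raw_moments(dist) != raw_moments(shifted)
assert central(raw_moments(dist)) == central(raw_moments(shifted))
```

This invariance is why the evolution equations collapse so dramatically when rewritten in terms of $\sigma_j$ and the differences $\Delta_{q_i}$.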
Finally, for the evolution of the quark track functions in terms of central moments, we have \begin{align}\label{eq:final_quark_shift} D^{(0)}_{\sigma_q(2)} &= -\gamma^{(0)}_{gq}(3){\color{blue}\left(\sigma_g(2)+\Delta_q^2\right)}-\gamma^{(0)}_{qq}(3){\color{blue}\sigma_q(2)} \,, \\ % D^{(0)}_{\sigma_q(3)} &= -\gamma^{(0)}_{gq}(4){\color{blue}\left(\sigma_g(3)-3\sigma_g(2)\Delta_q-\Delta_q^3\right)} -\gamma^{(0)}_{qq}(4){\color{blue}\sigma_q(3)}+\frac{24}{5}C_F\, {\color{blue}\sigma_q(2)\Delta_q} \,, \nonumber \\ D_{\sigma_q(2)}^{(1)} & = - \gamma^{(1)}_{gq}(3)\, {\color{blue}\sigma_g(2)} - \gamma^{(1)}_{qq}(3) {\color{blue} (\sigma_{q}(2) + \Delta_{q}^2)}- \sum_{j} \gamma_{Qq}^{(1)} (3) {\color{blue} ( \sigma_{Q_j}(2) +\sigma_{\bar{Q}_j}(2) +\Delta_{Q_j}^2+ \Delta_{\bar{Q}_j}^2)} \nonumber \\ &\quad - \gamma_{\bar{q}q}^{(1)} {\color{blue} (\sigma_{\bar{q}}(2)+ \Delta_{\bar{q}}^2 -2 \Delta_{q}\Delta_{\bar{q}})} +\frac{97}{54} C_F T_F \sum_{j} {\color{blue} \Delta_{q}(\Delta_{Q_j}+\Delta_{\bar{Q}_j})} \nonumber \\ &\quad + \left[\frac{2957}{108} C_A C_F + \left(\frac{2323}{54}-\frac{64 \pi^2}{9}\right)C_F^2 + \left(\frac{97}{54}-\frac{256}{27} n_f \right) C_F T_F \right] {\color{blue} \Delta_{q}^2} \nonumber \\ &\quad - \sum_{j} \frac{17}{100} C_F T_F{\color{blue} \Delta_{Q_j}\Delta_{\bar{Q}_j}}\,, \nonumber\\ D_{\sigma_q(3)}^{(1)} & = -\gamma_{gq}^{(1)}(4) {\color{blue}( \sigma_g(3)-2 \sigma_g(2)\Delta_{q})} -\gamma_{qq}^{(1)}(4){\color{blue} ( \sigma_{q}(3) - 2 \sigma_g(2) \Delta_{q} + 3\sigma_q(2) \Delta_{{q}}- 2 \Delta_{q}^3)} \nonumber \\ &\quad -\gamma_{\bar{q}q}^{(1)}(4) {\color{blue} (\sigma_{\bar{q}}(3)+\sigma_{g}(2) \Delta_{q} +3 \sigma_{\bar{q}}(2)\Delta_{\bar{q}} +3\sigma_{q}(2) \Delta_{\bar{q}} - 3\sigma_{\bar{q}}(2) \Delta_{q} +3\Delta_{q}^2 \Delta_{\bar{q}}} \nonumber \\ & \hspace{1.6cm} {\color{blue} -3 \Delta_{q}\Delta_{\bar{q}}^2+\Delta_{\bar{q}}^3 )} \nonumber \\ &\quad -\gamma_{Qq}^{(1)}(4) \sum_{j \neq i } {\color{blue}\bigl[ \sigma_{Q_j}(3)+ 
\sigma_{\bar{Q}_j}(3)-\sigma_g(2)\Delta_{q}+3\sigma_{Q_j}(2)\Delta_{Q_j}+3\sigma_{\bar{Q}_j}(2)\Delta_{\bar{Q}_j}} \nonumber \\ &\quad \hspace{2cm} {\color{blue}-3(\sigma_{Q_j}(2) +\sigma_{\bar{Q}_j}(2))\Delta_{q} + \Delta_{Q_j}^3+\Delta_{\bar{Q}_j}^3-3 \Delta_{q}( \Delta^2_{Q_j}+\Delta^2_{\bar{Q}_j}-\Delta_{Q_j}\Delta_{\bar{Q}_j})\bigr] }\nonumber \\ &\quad-\frac{59}{1000} C_F T_F \sum_{j} {\color{blue}\bigl[\sigma_{Q_j}(2) \Delta_{\bar{Q}_j}+\sigma_{\bar{Q}_j}(2)\Delta_{Q_j} - (\sigma_{\bar{Q}_j}(2)+\sigma_{Q_j}(2) )\Delta_{q}+ \Delta_{Q_j}^2\Delta_{\bar{Q}_j} }\nonumber \\ &\quad \hspace{2.6cm} {\color{blue}+ \Delta_{Q_j}\Delta_{\bar{Q}_j}^2-\Delta_{q}(\Delta_{\bar{Q}_j}^2+\Delta_{Q_j}^2+\Delta_{Q_j}\Delta_{\bar{Q}_j}) \bigr] } \nonumber \\ &\quad + \frac{292}{75} C_F T_F\sum_{j} {\color{blue}\bigl[\sigma_{q}(2)(\Delta_{Q_j}+\Delta_{\bar{Q}_j})+\Delta_{q}^2(\Delta_{Q_j}+\Delta_{\bar{Q}_j})-\Delta_{q}\Delta_{Q_j}\Delta_{\bar{Q}_j} \bigr] }\nonumber \\ &\quad -\frac{97}{18}C_F T_F\sum_{j} {\color{blue}\bigl[\Delta_{q}^2(\Delta_{Q_j}+\Delta_{\bar{Q}_j})-\Delta_{q}\Delta_{Q_j}\Delta_{\bar{Q}_j} \bigr]} - \frac{12929}{9000}(n_f-1) C_F T_F \, {\color{blue}\sigma_g(2) \Delta_{q}}\nonumber \\ &\quad+ \left[ \frac{29}{300}C_A C_F-\frac{29}{150}C_F^2+\frac{5797}{1125}C_F T_F \right]{\color{blue}\sigma_{q}(2) \Delta_{\bar{q}} }\nonumber \\ &\quad + \left[\left( -\frac{12929}{9000}C_F +\frac{4648}{225}C_F n_f \right)T_F +\left( -\frac{2163833}{18000}+\frac{247}{30}\pi^2-12\zeta_3 \right) C_A C_F \right.\nonumber \\ &\quad \hspace{2cm}\left. +\left(\frac{81443}{3000}-\frac{23}{15}\pi^2+24\zeta_3 \right) C_F^2 \right] {\color{blue} ( \sigma_{g}(2)\Delta_{q}+\Delta_{q}^3)}\nonumber \\ &\quad + \left[ \frac{45253}{450}C_A C_F+C_F^2\left( \frac{662327}{3600}-\frac{82}{3}\pi^2 \right)+\left( \frac{23719}{4500}C_F-\frac{671}{18}C_F n_f \right) T_F \right]{\color{blue}\sigma_q(2) \Delta_q}\,. 
\nonumber \end{align} This case is notationally more cumbersome than the gluon evolution due to the contributions from different quark flavors. As with Eq.~\eqref{eq:final_gluon_shift}, this result exhibits a number of coincidences in the evolution that will not persist at higher orders in perturbation theory. For completeness, we also provide results for the timelike anomalous dimensions appearing in the evolution equations for the track functions. Expanding the timelike splitting functions perturbatively in $a_s=\alpha_s/(4\pi)$ as \begin{align} P_{ij}(z)=\sum_{L=0}^\infty a_s^{L+1}P_{ij}^{(L)}(z)\,, \end{align} we define the Mellin moments of the timelike splitting functions as \begin{align} \gamma_{ij}^{(L)}(k)=-\int_0^1 \mathrm{d} z\ z^{k-1}P_{ij}^{(L)}(z)\,. \end{align} This definition is chosen so that for the spacelike splitting functions one obtains the standard twist-2 spin-$k$ anomalous dimensions. We obtained our results by directly integrating the $z$-space results of \cite{Chen:2020uvt}. This has the advantage that it works for both even and odd $k$.
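As a cross-check of this moment convention, the flavor-diagonal LO entries quoted below can be reproduced exactly, assuming the standard LO form $P_{qq}^{(0)}(z)=2C_F\bigl[(1+z^2)/(1-z)_+ + \tfrac{3}{2}\,\delta(1-z)\bigr]$ in the $a_s$ normalization used here (the code is an illustrative sketch, not part of our calculation):

```python
from fractions import Fraction

CF = Fraction(4, 3)  # numeric value of C_F for N_c = 3

def plus_moment(k):
    """Exact int_0^1 dz z^(k-1) (1+z^2)/(1-z)_+ for integer k >= 2.
    The plus prescription subtracts the z=1 value of z^(k-1)(1+z^2), so the
    integrand is [z^(k-1)(1+z^2) - 2]/(1-z) = -q(z), where
    (1-z) q(z) = 2 - z^(k-1) - z^(k+1)."""
    t = [0] * (k + 2)            # coefficients of 2 - z^(k-1) - z^(k+1)
    t[0] += 2
    t[k - 1] -= 1
    t[k + 1] -= 1
    q = [0] * (k + 1)            # synthetic division by (1 - z)
    q[0] = t[0]
    for j in range(1, k + 1):
        q[j] = t[j] + q[j - 1]
    assert -q[k] == t[k + 1]     # zero remainder, since t(1) = 0
    return -sum(Fraction(q[j], j + 1) for j in range(k + 1))

def gamma_qq_LO(k):
    """gamma_qq^(0)(k) = -int_0^1 dz z^(k-1) P_qq^(0)(z), with the assumed
    LO form P_qq^(0)(z) = 2 C_F [ (1+z^2)/(1-z)_+ + (3/2) delta(1-z) ]."""
    return -2 * CF * (plus_moment(k) + Fraction(3, 2))

# reproduces 8/3 C_F, 25/6 C_F and 157/30 C_F for k = 2, 3, 4
assert [gamma_qq_LO(k) for k in (2, 3, 4)] == \
    [Fraction(8, 3) * CF, Fraction(25, 6) * CF, Fraction(157, 30) * CF]
```

At LO the timelike and spacelike splitting functions coincide, so the same check applies in either case.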
At LO we have, \begin{align} \gamma_{gg}^{(0)}(2)&=\frac{4}{3}n_f T_F\,,\quad\gamma_{gg}^{(0)}(3)=\frac{14}{5}C_A+\frac{4}{3}n_fT_F\,, \quad \gamma_{gg}^{(0)}(4)=\frac{21}{5}C_A+\frac{4}{3}n_fT_F\,, \nonumber\\ \gamma_{gq}^{(0)}(2)&=-\frac{8}{3}C_F\,,\quad\gamma_{gq}^{(0)}(3)=-\frac{7}{6}C_F\,, \quad \gamma_{gq}^{(0)}(4)=-\frac{11}{15}C_F\,,\nonumber\\ \gamma_{qg}^{(0)}(2)&=-\frac{2}{3}T_F\,,\quad\gamma_{qg}^{(0)}(3)=-\frac{7}{15}T_F\,,\quad \gamma_{qg}^{(0)}(4)=-\frac{11}{30}T_F\,, \nonumber\\ \gamma_{qq}^{(0)}(2)&=\frac{8}{3}C_F\,,\quad\gamma_{qq}^{(0)}(3)=\frac{25}{6}C_F\,, \quad \gamma_{qq}^{(0)}(4)=\frac{157}{30}C_F\,,\nonumber\\ \gamma_{\bar{q}q}^{(0)}(2)&=\gamma_{\bar{q}q}^{(0)}(3)=\gamma_{\bar{q}q}^{(0)}(4)=0\,,\nonumber\\ \gamma_{Qq}^{(0)}(2)&=\gamma_{Qq}^{(0)}(3)=\gamma_{Qq}^{(0)}(4)=0\,,\nonumber \\ \gamma_{\bar{Q}q}^{(0)}(2)&=\gamma_{\bar{Q}q}^{(0)}(3)=\gamma_{\bar{Q}q}^{(0)}(4)=0\,. \end{align} At NLO we have, \begin{align} \gamma_{gg}^{(1)}(2)&=n_f T_F \left[\left(\frac{200}{27}-\frac{16 \pi ^2}{9}\right)C_A+\frac{260}{27}C_F\right]\,,\nonumber\\ \gamma_{gq}^{(1)}(2)&=\left(\frac{32 \pi ^2}{9}-\frac{568}{27}\right)C_F^2-\frac{376}{27}C_AC_F \,,\nonumber\\ \gamma_{qg}^{(1)}(2)&=T_F \left[\left(\frac{8 \pi ^2}{9}-\frac{100}{27}\right)C_A-\frac{130}{27}C_F\right]\,,\nonumber\\ \gamma_{qq}^{(1)}(2)&=C_AC_F \left(4 \zeta_3+\frac{1495}{54}-\frac{17 \pi^2}{9}\right)+C_F^2 \left(-8 \zeta_3-\frac{175}{27}+\frac{2 \pi^2}{9}\right)-\frac{128}{27}C_Fn_fT_F+\frac{64}{27}C_FT_F\,,\nonumber\\ \gamma_{\bar{q}q}^{(1)}(2)&=C_AC_F \left(-4 \zeta_3-\frac{743}{54}+\frac{17 \pi^2}{9}\right)+C_F^2 \left(8 \zeta_3+\frac{743}{27}-\frac{34\pi^2}{9}\right)+\frac{64}{27}C_FT_F \,,\nonumber\\ \gamma_{Qq}^{(1)}(2)&=\frac{64}{27}C_FT_F\,,\nonumber\\ \gamma_{\bar{Q}q}^{(1)}(2)&=\frac{64}{27}C_FT_F \,,\nonumber\\ \gamma_{gg}^{(1)}(3)&=C_A^2 \left(-8 \zeta_3+\frac{2158}{675}+\frac{26 \pi ^2}{45}\right)+n_fT_F \left[\left(\frac{3803}{675}-\frac{16 \pi ^2}{9}\right)C_A+\frac{12839 
}{2700}C_F\right] \,,\nonumber\\ \gamma_{gq}^{(1)}(3)&=\left(-\frac{39451}{5400}-\frac{7 \pi ^2}{9}\right)C_AC_F+\left(\frac{14\pi ^2}{9}-\frac{2977}{432}\right) C_F^2 \,,\nonumber\\ \gamma_{qg}^{(1)}(3)&=T_F \left[\left(\frac{619}{2700}+\frac{14 \pi ^2}{45}\right) C_A-\frac{833}{216}C_F\right]-\frac{8}{25}n_fT_F^2 \,,\nonumber\\ \gamma_{qq}^{(1)}(3)&=C_AC_F \left(4 \zeta_3+\frac{16673}{432}-\frac{43 \pi^2}{18}\right)+C_F^2 \left(-8 \zeta_3+\frac{989}{432}-\frac{7 \pi^2}{9}\right)-\frac{415}{54}C_Fn_fT_F+\frac{4391}{5400}C_FT_F\,,\nonumber\\ \gamma_{\bar{q}q}^{(1)}(3)&=C_AC_F \left(4 \zeta_3+\frac{8113}{432}-\frac{43 \pi^2}{18}\right)+C_F^2 \left(-8 \zeta_3-\frac{8113}{216}+\frac{43 \pi^2}{9}\right)+\frac{4391}{5400}C_FT_F \,,\nonumber\\ \gamma_{Qq}^{(1)}(3)&=\frac{4391}{5400}C_FT_F\,,\nonumber\\ \gamma_{\bar{Q}q}^{(1)}(3)&=\frac{4391}{5400}C_FT_F \,, \nonumber \\ \gamma_{gg}^{(1)}(4)&=\left(\frac{90047}{1500}-\frac{28 \pi ^2}{5}\right)C_A^2+n_fT_F\left[\left(\frac{2273}{675}-\frac{16 \pi ^2}{9}\right)C_A+\frac{57287}{13500}C_F\right]\,,\nonumber\\ \gamma_{qg}^{(1)}(4)&=T_F\left[\left(\frac{22 \pi ^2}{45}-\frac{60391}{27000}\right)C_A-\frac{166729}{54000}C_F\right]-\frac{12}{25}n_fT_F^2\,,\nonumber\\ \gamma_{gq}^{(1)}(4)&=\left(\frac{44 \pi ^2}{45}-\frac{104389}{27000}\right) C_F^2-\frac{142591}{13500}C_AC_F\,,\nonumber\\ \gamma_{qq}^{(1)}(4)&=C_AC_F\left(4 \zeta_3+\frac{2495453}{54000}-\frac{247 \pi^2}{90}\right)+C_F^2\left(-8 \zeta_3+\frac{55553}{6000}-\frac{67 \pi^2}{45}\right)-\frac{13271}{1350}C_Fn_fT_F\nonumber\\ &+\frac{11867}{27000}C_FT_F\,,\nonumber\\ \gamma_{\bar{q}q}^{(1)}(4)&=C_AC_F\left(-4 \zeta_3-\frac{1202893}{54000}+\frac{247 \pi^2}{90}\right)+C_F^2\left(8 \zeta_3+\frac{1202893}{27000}-\frac{247 \pi^2}{45}\right)+\frac{11867}{27000}C_FT_F\,,\nonumber\\ \gamma_{Qq}^{(1)}(4)&=\frac{11867}{27000}C_FT_F\,,\nonumber\\ \gamma_{\bar{Q}q}^{(1)}(4)&=\frac{11867}{27000}C_FT_F\,. \end{align} \end{widetext} \end{document}
2005.10284
\section{Introduction} \label{sec:intro} Explaining the outcomes of complex machine learning models is a prerequisite for establishing trust between machines and users. As humans increasingly rely on deep neural networks (DNNs) to process large amounts of data and make decisions, it is crucial to develop solutions that can interpret the predictions of DNNs in a user-friendly manner. Explaining the outcomes of a model can help reduce bias and contribute to improvements in model design, performance, and accountability by providing beneficial insights into how models behave \cite{fidel2019explainability}. Consequently, the field of explainable artificial intelligence (XAI) has gained traction in recent years, with researchers from different disciplines coming together to define, design and evaluate explainable systems \cite{vstrumbelj2014explaining, datta2016algorithmic, mohseni2018survey}. The majority of current explainability algorithms for DNNs produce an explanation for a single input-output pair: an input data point fed into the DNN and the respective prediction made by the DNN. The algorithm typically finds the input features that contribute the most to the model's prediction and selects those as the explanation of the model's behavior \cite{alvarez2018robustness}. Most of these algorithms find the important features using either a \emph{perturbation-based} or a \emph{saliency-based} approach \cite{lundberg2017unified}. Saliency-based approaches rely on gradients of the outputs with respect to the inputs to find the important features \cite{simonyan2013deep,selvaraju2017grad}. Perturbation-based methods, on the other hand, apply small local changes to the input, track the changes in the output, and rank the input features by importance \cite{ribeiro2016should, alvarez2017causal}. One main problem with current state-of-the-art explainability tools is their reliance on a large set of hyper-parameters.
This leads to local instability of explanations and can negatively affect the user's experience \cite{alvarez2018robustness}. An explainability algorithm should satisfy 3 properties: 1- It has to produce human-understandable explanations which are faithful to the decision-making process of the DNN, 2- It has to be locally consistent and efficient, 3- It should be user-friendly, easy to apply and quick in providing explanations. In this work, we propose a new algorithm, explanations via adversarial attacks, which satisfies these 3 important properties and more. We call our method \textbf{A}dversarial \textbf{E}xplanations for \textbf{A}rtificial \textbf{I}ntelligence systems, or AXAI\footnote{Code will be readily available.}. AXAI leverages the nature of adversarial attacks to automatically find and select the important features affecting the model's prediction and produce explanations. The idea behind our work comes from the natural behavior of adversarial attacks: attacks tend to manipulate the important features in the input to deceive a DNN. The logic is simple: rather than trying to build a model that learns to explain the DNN's behavior, why not utilize the nature of attacks to learn this behavior? One who knows how to fool a model certainly knows what the model may be thinking. Another benefit of our approach is that certain attacks, such as the Projected Gradient Descent (PGD) method \cite{madry2017towards}, are fast, efficient, and consistent in their adversarial behavior. Our work further aims to solve at least 2 problems: 1- Provide fast explanations without a need for model training, 2- Reduce the need for selecting a large set of hyper-parameters to produce consistent results. Of course, one first needs to show how adversarial attacks link to explainability, i.e., how an attack can point to the important features in the input and how one can filter out the unimportant ones to produce explanations.
Further, one needs to show that an adversary behaves similarly across models, tasks and datasets, so that the explanations are consistent, stable, and applicable to a large group of models. Here, we present a novel algorithm for explaining a DNN's predictions in multiple domains, including text, audio and images. In particular, this paper makes the following contributions: \begin{itemize} \item We show that given an $\ell_2$ PGD attack and a trained DNN, the distribution of attack magnitudes vs. frequency across all unseen test inputs follows a beta distribution, regardless of the task and dataset. We also show that these distributions are symmetric and the differences between their means, medians, and quantiles are not statistically significant. \item We show that the most important input features, i.e., features with the largest effect on the model's predictions, can be found using a consistent rule across different DNN architectures, datasets, and tasks. This rule leverages the properties of the distributions explained above. \item We propose a novel algorithm for explaining the outcomes of DNNs and provide a detailed analysis of our algorithm's performance for different DNN architectures, datasets and tasks. \item We benchmark our algorithm against methods such as LIME and SHAP \cite{lundberg2017unified, ribeiro2016should} and show that our algorithm performs faster while producing similar or better explainability results. \end{itemize} \section{Related Work} \label{sec:related} One of the popular explainability solutions, LIME \cite{ribeiro2016should}, assumes that DNNs are locally linear. LIME trains weighted linear models on top of the DNN for perturbed samples around a target input to produce explanations. The computational bottleneck in LIME is the training step, in which a selected number of perturbed samples are sent through the DNN to learn the explanation.
Certain combinations of LIME's hyper-parameters can produce unstable results \cite{alvarez2018robustness}. DeepLIFT produces explanations by modeling the slope of gradient changes of the output with respect to the input \cite{shrikumar2017learning}. Grad-CAM is a saliency-based method that uses the gradients of the input at the final convolutional layer to produce coarse localization maps pointing to important regions in the input \cite{selvaraju2017grad}. The majority of approaches based on sensitivity maps fail to produce explanations that rely only on important features. The creators of DeepLIFT attribute this instability to the behavior of activation functions such as ReLU. \cite{smilkov2017smoothgrad} proposed SmoothGrad, which uses gradients and Gaussian-based de-noising methods to produce stable explanations. The authors mention that large outlier values in the gradient maps produced by gradient differentiation may cause instability. In our algorithm, we overcome the problem of instability by utilizing the density of attacks, which are created iteratively, on input segments. Some other important works in this area are given in \cite{sundararajan2017axiomatic,jacovi2018understanding, zhao2018respond, bach2015pixel, becker2018interpreting, erhan2009visualizing, letham2015interpretable}. DNNs are vulnerable to subtle adversarial perturbations applied to their input. The basic idea behind most adversarial attacks revolves around solving a maximization problem with a constraint that keeps the distance between the original input and the adversarial input small, so that the adversarial input, while capable of fooling the DNN, is imperceptible to humans. The connection between model interpretation and attacks has recently attracted the interest of researchers.
\cite{ilyas2019adversarial} and \cite{tsipras2018robustness} showed that one benefit of adversarial examples is that they reveal useful insights into the salient features of input data and their effects on DNNs' predictions. Our solution relies on the nature of adversarial attacks to select and produce important and explainable features given a specific input and DNN. Our work puts more emphasis on model interpretability, where we make use of the information obtained from an adversarial attack on a DNN to de-noise the sensitivity maps and produce stable explanations. We de-noise the gradient map by utilizing the iterative nature of the PGD attack and by considering only a minimal number of highly influential gradients that contribute the most to the predictions. We use the density of gradients in a number of segments to remove the noise that was not filtered out in the previous steps and produce human-interpretable explanations. \section{Main Results} \label{sec:main} The core idea behind our approach, AXAI, is to utilize the knowledge gained from an adversarial attack on a DNN and an input to find the important features in the input and thereby produce good explanations. This is done by mapping ``carefully filtered attacked inputs'' onto predefined segments and filtering out the unimportant features. This will be discussed in more detail in later sections. First, let us look at an example in Fig. \ref{fig:1} to see how our approach works. Given an image classification DNN, the $\ell_2$ adversarial attack changes the pixels in the entire image, as seen in Fig. \ref{fig:1c}. The reason for this is simple: each pixel value is changed by the adversary so that the accumulated loss value can increase enough to fool the DNN. Fig. \ref{fig:1b} shows the distribution of the attack on this image: the x-axis represents the magnitude of the pixel changes and the y-axis represents the number of pixels at each value on the x-axis.
AXAI maps the strongly attacked pixels to the image segments of the original image and selects the segments with the highest density of attacked pixels that meet certain criteria as explanations. Fig. \ref{fig:1c} shows the value changes for the important attacked pixels. As we will show, the important features used for explanations are located at specific sections in the tails of the distribution given in Fig. \ref{fig:1b}. These are the pixels that directly affect the classification decision made by the model. We use QuickShift \cite{vedaldi2008quick} for segmenting the input image (Fig. \ref{fig:1d}). It is important to note that the segmentation step in our algorithm is general, and any type of input segmentation method may be utilized for this step depending on the model and input type, e.g., language, signal or imagery. Fig. \ref{fig:1e} shows the explanation produced by our algorithm. \begin{figure}[!htp] \centering \begin{subfigure}[!]{0.13\textwidth} \includegraphics[width=\linewidth]{fig1a.jpg} \caption{Original image} \label{fig:1a} \end{subfigure} \quad \begin{subfigure}[!]{0.14\textwidth} \includegraphics[width=\linewidth]{fig1b.pdf} \caption{Attack Magnitude vs. Freq.} \label{fig:1b} \end{subfigure} \quad \begin{subfigure}[!]{0.15\textwidth} \includegraphics[width=\linewidth]{fig1c.jpg} \caption{Adversarial changes in pixels} \label{fig:1c} \end{subfigure} \quad \begin{subfigure}[!]{0.15\textwidth} \includegraphics[width=\linewidth]{fig1d.jpg} \caption{Image segments} \label{fig:1d} \end{subfigure} \quad \begin{subfigure}[!]{0.14\textwidth} \includegraphics[width=\linewidth]{fig1e.jpg} \caption{Explanation} \label{fig:1e} \end{subfigure} \caption{A simple example depicting the steps taken in AXAI to produce explanations.} \label{fig:1} \end{figure} Algorithm 1 details the steps taken by AXAI to produce an explanation $E$ for the output of a selected model $f$.
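The Fig. \ref{fig:1} walkthrough (threshold the attack magnitudes, map them to segments, keep the densest segments) can be sketched in a few lines; the function and variable names below are illustrative, not the paper's implementation:

```python
from collections import Counter

def explain(seg_labels, attack_mag, threshold, k):
    """Sketch of the selection step: per-segment density of strongly
    attacked features, then the k densest segments as the explanation."""
    attacked = [abs(m) > threshold for m in attack_mag]            # thresholded attack
    total = Counter(seg_labels)                                    # segment sizes
    hits = Counter(s for s, a in zip(seg_labels, attacked) if a)   # attacked per segment
    density = {s: hits[s] / total[s] for s in total}               # attack density
    return sorted(density, key=density.get, reverse=True)[:k]

# segment 0 has 2/3 of its features strongly attacked, segment 1 has 1/2,
# segment 2 has 1/4; the two densest segments form the explanation
segs = [0, 0, 0, 1, 1, 2, 2, 2, 2]
mags = [0.9, 0.8, 0.1, 0.7, 0.0, 0.0, 0.0, 0.0, 0.6]
assert explain(segs, mags, threshold=0.5, k=2) == [0, 1]
```

Here a simple magnitude threshold stands in for the quantile-based filtering developed later in the paper.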
Suppose that input $X$ is segmented into $p$ groups using a segmentation method and that the attack magnitudes for the input $X$ and DNN $f$ are obtained. Let $X_{diff}$ be the difference between the original $X$ and the adversarial $X^\prime$. We filter out the low-intensity attack magnitudes in $X_{diff}$ and create a Boolean array $X_{difft}$, in which only the values larger than a threshold are set to True. Let $Su$ be the set of unique segments, $Su=\{Su_{1},\ldots,Su_{p}\}$. Next, we map the filtered attack $X_{difft}$ to the segments $Su$ and create a new list of filtered attack groups, $Su_{x}=\{Su_{x_1},\ldots, Su_{x_p}\}$. The mapping function, \emph{Map} in Algorithm 1, simply stacks the filtered attacks on the segments and groups the filtered attack $X_{difft}$ based on the segments. Finally, the attack density of each unique segment can be written as $Su_{d}=\{\frac{card(Su_{x_1})}{card(Su_{1})},\ldots,\frac{card(Su_{x_p})}{card(Su_{p})}\}$ (\emph{Calculate\_density} in Algorithm 1). We then extract the indices $j$ of the top $K$ maximum values in $Su_{d}$ (\emph{TopK\_indices} in Algorithm 1) and produce $Su(j)$ as the explanation $E$ for the input $X$. In the next sections, we explain each step in detail. \begin{algorithm} \caption{AXAI} \begin{algorithmic}[1] \Require {Model $f$, input $X$} \State $X' \leftarrow Attack(f,X)$\Comment{i.e.
PGD attack} \State $X_{diff} = X' - X$\Comment{The attack magnitudes} \State $X_{difft} \leftarrow Threshold(X_{diff})$\Comment{Filtered attack magnitudes} \State $Su \leftarrow Segment(X)$ \State $Su_{x} \leftarrow Map(X_{difft}, Su)$\Comment{Group attack magnitudes based on segmentation} \State $Su_{d} \leftarrow Calculate\_density(Su_{x})$\Comment{Calculate attacks per segment} \State \textbf{return} $Su(TopK\_indices(Su_{d}))$ \end{algorithmic} \end{algorithm} \subsection{White-box adversarial attacks} \label{subsec:adv} An adversary can attack a DNN by adding engineered noise to the input to increase the associated loss value, provided it has some prior knowledge of the DNN, including the weights and biases. AXAI utilizes the Projected Gradient Descent (PGD) attack \cite{madry2017towards}, although any $\ell_2$ adversarial attack can replace PGD in our algorithm (Appendix \ref{app_2}). However, PGD provides specific benefits, such as stability and gradient smoothness, that other attacks do not. PGD can be thought of as an iterative version of the $\ell_2$ Fast Gradient Method (FGM) attack \cite{goodfellow2014explaining}, where in each iteration the accumulated adversarial change is projected back onto an $\ell_2$ ball around the input. PGD is generally considered a strong, stable attack and is defined as \begin{equation} x^{t+1}=\Pi_{x+S}(x^{t}+\epsilon\nabla_x L(\Theta,x,y)), \end{equation} where, for $t$ iterations, $\epsilon$ is the step size, $x$ and $y$ are the inputs and outputs, $\Theta$ are the weights and biases, and $\Pi_{x+S}$ denotes projection onto the set of allowed perturbations $S$ around $x$. \subsection{Statistical analysis of attack magnitudes vs. frequency distributions} \label{subsec:stat} Here, we briefly report our statistical analysis of the attack magnitudes vs. frequency distributions for a fixed DNN, dataset and adversarial attack. We can show that the distributions are similar in their ``shapes,'' ``means,'' ``mean ranks,'' ``medians,'' and ``quantiles,'' and follow a beta distribution with specific parameters.
Given that there is no significant difference between the distributions, we can provide a universal threshold, based on quantiles, that separates the important features from the rest to produce explanations. To show that the highly perturbed regions can be chosen to produce explanations for a single input, we should first show analytically that the results are consistent across all inputs, i.e., that adversarial attacks are consistent in their adversarial behavior and in the manner in which they attack the most influential input segments. Our analyses establish this point, and consequently our proposed rule for finding the important input segments of a single input holds. Finally, we empirically show that these segments are indeed the most important parts of the input by analyzing their effect on the test error rate. We measure the symmetry of the distributions using the Fisher-Pearson coefficient of skewness. We present the results for AlexNet on CIFAR10 \cite{kaur2018convolutional}, VGG16 on CIFAR100 \cite{krizhevsky2009cifar} and ResNet34 on ImageNet \cite{deng2009imagenet}. The Fisher-Pearson coefficients of the attack magnitudes vs. frequency distributions for all cases are shown in Fig. \ref{fig:10}. The skewness of all distributions falls within the $[-0.5, 0.5]$ range, providing strong evidence that they are approximately symmetric \cite{bulmer1979principles}. Only 0.9\% of the CIFAR10, 3.3\% of the CIFAR100 and 1.9\% of the ImageNet test datasets lie outside the $[-0.5, 0.5]$ range. \begin{figure}[!]
\centering \begin{subfigure}[!]{0.2\textwidth} \includegraphics[width=\linewidth]{fig7a.pdf} \caption{PGD, AlexNet, CIFAR10} \label{fig:10a} \end{subfigure} \quad \begin{subfigure}[!]{0.2\textwidth} \includegraphics[width=\linewidth]{fig7c.pdf} \caption{PGD, VGG16, CIFAR100} \label{fig:10c} \end{subfigure} \quad \begin{subfigure}[!]{0.2\textwidth} \includegraphics[width=\linewidth]{fig7b.pdf} \caption{PGD, ResNet34, ImageNet} \label{fig:10b} \end{subfigure} \caption{The Fisher-Pearson coefficient of attack magnitudes vs. frequency distributions.} \label{fig:10} \end{figure} A quantile-quantile (Q-Q) plot allows us to understand how the quantiles of a distribution deviate from those of a specified theoretical distribution. The theoretical distribution selected here is the normal distribution. The x-axis and y-axis represent the quantile values of the theoretical and sample distributions, respectively. While two distributions are unlikely to match perfectly, one can look at different parts of the Q-Q plot to distinguish between similar and dissimilar regions of the distributions. Fig. \ref{fig:11} shows the Q-Q plots for random subsets of the ImageNet and CIFAR10 test datasets, each containing 1000 images. The distributions follow a fairly straight line in the middle portion of the curve, while deviating at the upper and lower parts. This provides some evidence supporting the hypothesis that the distributions are symmetric with heavier tails. \begin{figure}[h] \centering \begin{subfigure}[!]{0.22\textwidth} \includegraphics[width=\linewidth]{fig8a.pdf} \caption{PGD, AlexNet, CIFAR10} \label{fig:11a} \end{subfigure} \quad \begin{subfigure}[!]{0.22\textwidth} \includegraphics[width=\linewidth]{fig8b.pdf} \caption{PGD, ResNet34, ImageNet} \label{fig:11b} \end{subfigure} \caption{The Q-Q plot of sample distributions vs. theoretical normal distribution (mean=0, std=1).} \label{fig:11} \end{figure} \begin{table}[!]
\centering \resizebox{1.\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|} \hline & t-test (CIFAR10) & Mann-Whitney (CIFAR10) & t-test (ImageNet) & Mann-Whitney (ImageNet) \\ \hline p-value & 0.70 & 0.58 & 0.64 & 0.55 \\ \hline \end{tabular}} \caption{p-values for the mean similarity statistical tests at significance level 0.05.} \label{tab:table3} \end{table} We perform the two-sample location t-test and the Mann-Whitney U test, where the null hypothesis is the equality of the means. Carrying out pairwise t-tests on all samples allows us to be conservative in confirming the mean similarity of the distributions. A sample here is defined as the attack magnitudes vs. frequency distribution for a data point in the adversarial test dataset created by the PGD attack on a DNN trained on the training dataset. The results reported in Table \ref{tab:table3} indicate no significant difference between the means. Further, the Mann-Whitney U test results indicate that all pairs are similar to each other in their mean ranks. Under the assumption that two distributions have similar shapes, one can further treat the Mann-Whitney test as a test of medians \cite{mcdonald2009handbook}. Since we have shown that the shapes are similar, we can conclude that there is no significant difference between the medians of the distributions. Further details, in addition to the results of the ANOVA test, are given in Appendix \ref{app_3}.
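The symmetry analysis above relies on the Fisher-Pearson coefficient of skewness, $g_1 = m_3/m_2^{3/2}$, with $m_2$ and $m_3$ the central sample moments; a minimal sketch (not the paper's code):

```python
def fisher_pearson_skew(xs):
    """Fisher-Pearson coefficient g1 = m3 / m2^(3/2), where m2 and m3 are
    the second and third central sample moments."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# a symmetric sample has g1 = 0; a right-skewed one has g1 > 0
assert abs(fisher_pearson_skew([-2, -1, 0, 1, 2])) < 1e-12
assert fisher_pearson_skew([0, 0, 0, 1]) > 0
```

Values of $g_1$ in $[-0.5, 0.5]$ are conventionally read as approximately symmetric, which is the criterion applied to the attack-magnitude distributions above.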
\begin{table}[] \centering \resizebox{1.\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|} \hline & AlexNet, CIFAR10, PGD & VGG16, CIFAR100, PGD & ResNet34, ImageNet, PGD\\ \hline 15th Quantile & $ (-1.807e-02, -1.805e-02)$&$ (-1.419e-02, -1.414e-02)$&$ (-1.785e-03, -1.777e-03) $ \\ \hline 25th Quantile & $ (-1.145e-02, -1.071e-02)$&$ (-8.153e-03, -8.110e-03)$&$ (-1.015e-03, -1.101e-03) $ \\ \hline Mean & $ (1.775e-05, 2.295e-05)$ & $(-6.850e-06, -3.624e-06)$ & $(-1.090e-07, -6.000e-08) $ \\ \hline Median & $ (2.115e-06, 1.127e-05) $&$(-2.842e-06, 4.467e-06)$&$ (-2.155e-07, -9.381e-08)$ \\ \hline 75th Quantile & $ (1.071e-02, 1.073e-02)$&$ (8.102e-03, 8.146e-03)$&$ (1.011e-03, 1.016e-03) $ \\ \hline 85th Quantile & $ (1.809e-02, 1.812e-02)$&$ (1.413e-02, 1.418e-02)$&$ (1.777e-03, 1.785e-03) $ \\ \hline \end{tabular} } \caption{Estimations for mean, median, 15th, 25th, 75th and 85th quantiles at 95\% confidence level.} \label{tab:table4} \end{table} \begin{table}[] \centering \resizebox{1.\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|} \hline & AlexNet, CIFAR10, PGD & VGG16, CIFAR100, PGD & ResNet34, ImageNet, PGD \\ \hline $p$ & $ (1.124e+01, 1.132e+01)$ & $(2.129e+01, 2.171e+01)$ & $(1.306e+02, 1.329e+02) $ \\ \hline $q$ & $ (1.136e+01, 1.145e+01) $&$(2.124e+01, 2.164e+01)$&$ (1.303e+02, 1.326e+02)$ \\ \hline \end{tabular} } \caption{Statistical estimations for parameters of beta distribution at 95\% confidence level.} \label{tab:table5} \end{table} Next, to show consistency across distributions for a given model, dataset and attack, we estimate the values of the quantiles, means and medians. We do this by estimating the statistics of the distributions and constructing confidence intervals. For each experiment, we estimate the mean, median, 15th, 25th, 75th and 85th quantiles of each attack magnitude vs. frequency distribution for the entire test dataset. The statistical confidence interval estimations at the $95\%$ confidence level are reported in Table \ref{tab:table4}.
Our results show that the confidence intervals have narrow ranges and the estimations are consistent. The estimates for the 15th, 25th, 75th and 85th quantiles indicate strong symmetry with respect to the origin in all cases. This matches the results of the skewness test in Fig. \ref{fig:10}. Another observation is that the confidence intervals of the means and medians are quite narrow, supporting the results of the t-tests and Mann-Whitney U test. Finally, we can show with high confidence that the distributions consistently follow a beta distribution. The beta distribution is a family of distributions defined by two positive shape parameters, denoted by $p$ and $q$. The estimated $p$ and $q$ of the beta distribution are reported in Table \ref{tab:table5}. Further technical details on our analyses presented in this section, in addition to further experiments with audio and text input types, are provided in Appendix \ref{app_3}. \subsection{Quantile selection for the explanations} \label{subsec:quan} Our algorithm produces explanations that rely only on the features in the input that have the largest effect on the predictions. While the majority of the input is attacked, our hypothesis is that only important features are strongly attacked. We show how one can select the boundary threshold between ``explainable features'' and the rest based on attack magnitudes. We demonstrate this with two experiments: 1) AlexNet trained on CIFAR10, 2) ResNet34 trained on ImageNet, both attacked by PGD with 20 iterations. In each case, we select the successfully attacked inputs from the adversarial test dataset, i.e., the inputs that fool the DNN. We then only re-attack specific features of the original clean inputs within the $[0\%, \alpha\%]$ and $[(100-\alpha)\%, 100\%]$ percentiles of the distributions, where $\alpha$ is the percentage threshold.
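This percentile-based feature selection can be sketched as follows; the perturbation array is a synthetic stand-in for a PGD perturbation, with a CIFAR10-like shape chosen for illustration.

```python
# Hedged sketch: select input features whose signed attack magnitudes
# fall in the [0, alpha] and [100-alpha, 100] percentile tails.
# `perturbation` is a synthetic stand-in, not a real PGD perturbation.
import numpy as np

rng = np.random.default_rng(2)
perturbation = rng.normal(0.0, 0.01, size=(3, 32, 32))  # CIFAR10-shaped stand-in

def percentile_mask(pert, alpha):
    """True where the magnitude lies in the extreme alpha% tails."""
    lo = np.percentile(pert, alpha)
    hi = np.percentile(pert, 100 - alpha)
    return (pert <= lo) | (pert >= hi)

mask = percentile_mask(perturbation, alpha=15)
# Only the masked (strongly attacked) features would be re-attacked.
frac = mask.mean()
print(f"fraction of features selected: {frac:.2f}")
```

With $\alpha=15$, roughly 30\% of the features are selected (both tails combined), which is the feature subset used in the re-attack experiment.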
The re-attacking process starts from $\alpha=0$, where none of the input features are attacked; we then gradually increase the value of $\alpha$ until the attack successfully changes the prediction, at which point we record the value of $\alpha$ (Fig. \ref{fig:12a}). We repeat this for every input. The probability density distributions of $\alpha$ are given in Fig. \ref{fig:12b} and Fig. \ref{fig:12c}, with an estimated mean of $\alpha=15$. \begin{figure}[!] \centering \begin{subfigure}[!]{0.14\textwidth} \includegraphics[width=\linewidth]{fig9a.pdf} \caption{} \label{fig:12a} \end{subfigure} \quad \begin{subfigure}[!]{0.14\textwidth} \includegraphics[width=\linewidth]{fig9b.pdf} \caption{} \label{fig:12b} \end{subfigure} \quad \begin{subfigure}[!]{0.14\textwidth} \includegraphics[width=\linewidth]{fig9c.pdf} \caption{} \label{fig:12c} \end{subfigure} \caption{(a) Visualization of the re-attacking process, where only the portions of inputs lying outside the red lines ($[0\%, \alpha\%]$ and $[(100-\alpha)\%, 100\%]$) are attacked. (b) AlexNet, CIFAR10. (c) ResNet34, ImageNet.} \label{fig:12} \end{figure} \begin{table}[!] 
\centering \resizebox{1.\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & \multicolumn{2}{l|}{CIFAR10, AlexNet} & \multicolumn{2}{l|}{ImageNet, ResNet34} & & CIFAR10, AlexNet & ImageNet, ResNet34 \\ \hline Attack Percentile & \multicolumn{2}{l|}{} & \multicolumn{2}{l|}{} & Attack Percentile & & \\ \hline $15\%-85\%$ & \multicolumn{2}{l|}{0.78} & \multicolumn{2}{l|}{0.88} & $0\%-15\% \& 85\%-100\%$ & 0.16 & 0.07 \\ \hline $10\%-90\%$ & \multicolumn{2}{l|}{0.26} & \multicolumn{2}{l|}{0.79} & $0\%-10\% \& 90\%-100\%$ & 0.26 & 0.13 \\ \hline $5\%-95\%$ & \multicolumn{2}{l|}{0.50} & \multicolumn{2}{l|}{0.63} & $0\%-5\% \& 95\%-100\%$ & 0.45 & 0.25 \\ \hline $1\%-99\%$ & \multicolumn{2}{l|}{0.07} & \multicolumn{2}{l|}{0.12} & $0\%-1\% \& 99\%-100\%$ & 0.92 & 0.80 \\ \hline \end{tabular} } \caption{Adversarial test accuracy where only features within a certain percentile of the attack magnitudes vs. frequency distributions are attacked (PGD with 20 Iterations).} \label{tab:table1} \end{table} Further, we report the test accuracies of the DNNs on adversarial test datasets created based on different attack percentiles. Given an attack percentile range, the adversarial test dataset consists of adversarial test inputs created by attacking only the portions of the input features that lie within a specific percentile range of the attack magnitudes vs. frequency distributions, as described above. This allows us to understand how the features lying in the middle area, tails and outliers of the distributions affect the DNN's predictions. Our findings are reported in Table \ref{tab:table1}. Our results show that the majority of the input features, including those within the first two standard deviations and the outliers of the distributions, do not have a strong effect on the predictions. 
A smaller portion of the input features, which are also those attacked with the highest intensity, i.e., within the $[0\%,15\%]$ and $[85\%, 100\%]$ percentiles of the distributions, have the largest effect on the DNN's predictions, confirming our hypothesis. We see the same trend across different DNNs and datasets (Appendix \ref{app_3}). \section{Experiment Results} \label{exp} Earlier, we provided a sample explanation created by AXAI for an image classifier. Appendix \ref{app_5} contains more experiments for image classification and object detection DNNs, as well as an ablation study and an interesting comparison between explanations produced by a non-robust model and an adversarially robust model. Here, we provide sample explanations produced by our algorithm for speech recognition and language-based tasks. \subsection{Explaining a speech recognition model} The Speech Commands Dataset \cite{warden2018speech} is an audio dataset of short spoken words. Here, we have converted the audio files to spectrograms and used them to train a LeNet model to identify ``speech commands.'' We have created time-frequency segments by dividing the spectrogram into time-frequency grids, similar to \cite{Mishra2017LocalIM}. The x-axis and y-axis indicate the time scale and the log-scale frequency of the spectrograms, respectively, and the color bar indicates the magnitude. This kind of segmentation results in equal-sized rectangular blocks, where the height of a segment covers the range of frequencies (y-axis) and its width covers the range of time (x-axis) associated with the spoken word. The spectrogram of the first word, ``Right,'' and its explanation are shown in Fig. \ref{fig:13a} and Fig. \ref{fig:13b}. The explanation shows that the first and last characters in the spoken word ``Right'' stand out as important features (the $[0.4s, 0.6s]$ and $[1.0s, 1.2s]$ intervals). 
This is reasonable because ``Five'' is the neighboring class of ``Right'' in the dataset (Appendix \ref{app_4}), and ``Right'' and ``Five'' differ in the pronunciation of ``r'' vs. ``f'' and ``t'' vs. ``v.'' The second example is for the word ``Three'' (Fig. \ref{fig:13c} and Fig. \ref{fig:13d}). The produced explanation indicates the importance of ``Thr'' (the $[1.4s, 1.7s]$ interval). This is reasonable because ``Three'' and its neighbor ``Tree'' differ in the letter ``h'' in ``Thr,'' and this difference is learned by the model during training to identify the two words correctly. More examples are shown in Fig. \ref{fig:26}. Details on this experiment are given in Appendix \ref{app_5}. \begin{figure}[!htp] \centering \begin{subfigure}[!]{0.22\textwidth} \includegraphics[width=\linewidth]{fig13a.pdf} \caption{``Right''} \label{fig:13a} \end{subfigure} \quad \begin{subfigure}[!]{0.22\textwidth} \includegraphics[width=\linewidth]{fig13b.pdf} \caption{Explanation} \label{fig:13b} \end{subfigure} \quad \begin{subfigure}[!]{0.22\textwidth} \includegraphics[width=\linewidth]{fig13c.pdf} \caption{``Three''} \label{fig:13c} \end{subfigure} \quad \begin{subfigure}[!]{0.22\textwidth} \includegraphics[width=\linewidth]{fig13d.pdf} \caption{Explanation} \label{fig:13d} \end{subfigure} \quad \caption{The AXAI explanations for the LeNet speech recognition model.} \label{fig:13} \end{figure} \subsection{Explaining a text classification model} The Sentence Polarity Dataset \cite{Pang+Lee:05a} is a collection of movie-review documents labeled with respect to their overall sentiment polarity. Here, we look at a negative and a positive example (Fig. \ref{fig:144a} and Fig. \ref{fig:144b}), where the rows are the word tokens in the sentence and the columns are the embedding dimensions. The NLP model used in our experiment is taken from \cite{kim2014convolutional} and trained on the dataset. As part of the pre-processing, the words in the dataset are tokenized and mapped to an embedding matrix. 
The word embedding matrix is also used as the segments in our algorithm. \cite{li2015visualizing} mentions that the saliency map of an NLP model can be visualized using the embedding layer, similar to the saliency maps used for image-based models. Consequently, one can apply our algorithm to NLP models in a similar manner, i.e., we can utilize the first-order derivative of the loss with respect to the word embedding. This technique is similar to what was used in \cite{miyato2016adversarial}. The first example, ``it's a glorified sitcom, and a long, unfunny one at that.'' is classified as a negative review by the model. Fig. \ref{fig:144a} shows that the word ``unfunny'' is strongly highlighted as the main explanation for this prediction. For the positive example ``a work of astonishing delicacy and force,'' it is seen that the word ``astonishing'' has the most significant influence on the model's prediction. More examples are shown in Fig. \ref{fig:27}. \begin{figure}[!] \centering \begin{subfigure}[!]{0.3\textwidth} \includegraphics[width=\linewidth]{fig14a.pdf} \caption{Text example 1} \label{fig:144a} \end{subfigure} \quad \begin{subfigure}[!]{0.4\textwidth} \includegraphics[width=\linewidth]{fig14b.pdf} \caption{Text example 2} \label{fig:144b} \end{subfigure} \quad \caption{The AXAI explanations for the sentence classification model.} \label{fig:144} \end{figure} \subsection{Benchmark tests} We test our algorithm against LIME and SHAP (Gradient Explainer). It is important to note that SHAP subsumes a number of prior approaches and provides a fair baseline. To show the consistency of our approach, we present visualizations for 3 cases: 1) AlexNet, CIFAR10, 2) VGG16, CIFAR100, 3) ResNet34, ImageNet using the 3 explainability tools, and provide more experiments in Appendix \ref{app_6}. The algorithms produce similar explanations, while AXAI has fewer tunable parameters and runs faster. LIME fails to produce good explanations for low-resolution CIFAR10 images. 
In Appendix \ref{app_6}, we provide examples showing that AXAI outperforms LIME for low-resolution inputs. We benchmark the running-time performance of AXAI, LIME and SHAP for ResNet34 trained on ImageNet on a single CPU (Intel Core i5-7360U) and a single GPU (Tesla V100-SXM2) over the entire test dataset. The results are given in Table \ref{tab:table44}. LIME is the slowest to produce explanations. This is because LIME needs to forward propagate the perturbed inputs through the DNN several times. SHAP is also slower to generate the results in comparison to AXAI. LIME benefits the most from running on a GPU. AXAI maintains its relative performance on both the CPU and the GPU. This is because the segmentation step, which mainly runs on the CPU, is the main computational bottleneck for the algorithms (Appendix \ref{app_1}). A few comparisons between AXAI, LIME, and SHAP are shown in Fig. \ref{fig:25}. \begin{table}[!] \centering \resizebox{1.\columnwidth}{!}{ \begin{tabular}{|l|l|l|} \hline & Single CPU (Intel Core i5-7360U) & Single GPU (Tesla V100-SXM2) \\ \hline LIME & 105s & 5.8s \\ \hline SHAP (Gradient Explainer) & 35s & 3.8s \\ \hline AXAI (PGD with 20 iters) & 6.6s & 1.7s \\ \hline \end{tabular} } \caption{Benchmark running-time experiments.} \label{tab:table44} \end{table} \begin{figure}[h] \centering \begin{subfigure}[t]{0.14\textwidth} \includegraphics[width=\linewidth]{figapp_img1.pdf} \caption{} \label{fig:25a} \end{subfigure} \quad \begin{subfigure}[t]{0.14\textwidth} \includegraphics[width=\linewidth]{figapp_img2.pdf} \caption{} \label{fig:25b} \end{subfigure} \quad \begin{subfigure}[t]{0.14\textwidth} \includegraphics[width=\linewidth]{figapp_img3.pdf} \caption{} \label{fig:25c} \end{subfigure} \centering \begin{subfigure}[t]{0.12\textwidth} \includegraphics[width=\linewidth]{figapp_img4.pdf} \caption{} \label{fig:25d} \end{subfigure} \quad \begin{subfigure}[t]{0.12\textwidth} \includegraphics[width=\linewidth]{figapp_img5.pdf} \caption{} \label{fig:25e} \end{subfigure} \quad 
\begin{subfigure}[t]{0.12\textwidth} \includegraphics[width=\linewidth]{figapp_img6.pdf} \caption{} \label{fig:25f} \end{subfigure} \centering \begin{subfigure}[t]{0.12\textwidth} \includegraphics[width=\linewidth]{figapp_img7.pdf} \caption{} \label{fig:25g} \end{subfigure} \quad \begin{subfigure}[t]{0.12\textwidth} \includegraphics[width=\linewidth]{figapp_img8.pdf} \caption{} \label{fig:25h} \end{subfigure} \quad \begin{subfigure}[t]{0.12\textwidth} \includegraphics[width=\linewidth]{figapp_img9.pdf} \caption{} \label{fig:25i} \end{subfigure} \caption{Comparisons between our adversarial explainability approach (Left Column), LIME (Middle Column), and SHAP (Right Column). Explanations are produced for a ResNet34 trained on ImageNet.} \label{fig:25} \end{figure} \begin{figure}[h] \centering \begin{subfigure}[!]{0.18\textwidth} \includegraphics[width=\linewidth]{figapp_audio1.pdf} \caption{"Happy"} \label{fig:13e} \end{subfigure} \quad \begin{subfigure}[!]{0.18\textwidth} \includegraphics[width=\linewidth]{figapp_audio2.pdf} \caption{Explanation} \label{fig:13f} \end{subfigure} \quad \begin{subfigure}[!]{0.18\textwidth} \includegraphics[width=\linewidth]{figapp_audio5.pdf} \caption{"six"} \label{fig:13i} \end{subfigure} \quad \begin{subfigure}[!]{0.18\textwidth} \includegraphics[width=\linewidth]{figapp_audio6.pdf} \caption{Explanation} \label{fig:13j} \end{subfigure} \caption{Examples for the LeNet speech recognition model.} \label{fig:26} \end{figure} \begin{figure}[!] 
\centering \begin{subfigure}[!]{0.3\textwidth} \includegraphics[width=\linewidth]{figapp_text1.pdf} \caption{Text example 1} \label{fig:144c} \end{subfigure} \quad \begin{subfigure}[!]{0.3\textwidth} \includegraphics[width=\linewidth]{figapp_text2.pdf} \caption{Text example 2} \label{fig:144d} \end{subfigure} \quad \begin{subfigure}[!]{0.3\textwidth} \includegraphics[width=\linewidth]{figapp_text4.pdf} \caption{Text example 3} \label{fig:144e} \end{subfigure} \caption{Examples for the sentence classification model.} \label{fig:27} \end{figure} \section{Final Remarks and Conclusion} \label{conc} In this paper, we proposed a new approach for explaining the predictions of DNNs. Interpretability is directly related to the readability of an explanation \cite{gilpin2018explaining}. An explanation relying on thousands of features is not interpretable. AXAI, similar to LIME, uses input segmentation to create human-readable explanations focused on important input features. Further, AXAI has the following properties: \textbf{Property 1 (Robustness):} Our approach is more robust to changes in segmentation hyper-parameters in comparison to other segmentation-based approaches such as LIME. This is because AXAI does not require a surrogate model trained on ``randomly perturbed inputs.'' AXAI uses the deterministic attack magnitudes as ``base explanations'' for a given DNN and dataset, and uses segments as an ``aid'' to visualize the results. The segmentation affects the visualizations. We further explain this in Appendix \ref{app_1}. Robustness here is equivalent to the stability of explanations as defined in \cite{pub.1104451629}. A lower number of non-deterministic steps in the algorithm enhances stability. A carefully filtered explanation based on our approach simply removes the features that have a low impact on predictions. One can interpret this process as a de-noising step that creates a sparse representation of explanations. 
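The de-noising step described above amounts to a quantile threshold on the magnitude map; a minimal sketch with synthetic magnitudes (the array and the `keep` fraction are illustrative):

```python
# Hedged sketch: keep only the top fraction of attack magnitudes,
# yielding a sparse explanation map. Synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(3)
saliency = np.abs(rng.normal(0.0, 1.0, size=(32, 32)))  # stand-in magnitudes

def sparsify(expl, keep=0.15):
    """Zero out all but the top `keep` fraction of magnitudes."""
    thresh = np.quantile(expl, 1.0 - keep)
    return np.where(expl >= thresh, expl, 0.0)

sparse = sparsify(saliency, keep=0.15)
kept = (sparse > 0).mean()
print(f"fraction of features kept: {kept:.2f}")
```

Roughly 15\% of the features survive the filter, producing the sparse, human-readable map used for visualization.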
\textbf{Property 2 (Local attribution):} Our algorithm is locally stable and uses local attributes to produce explanations. This is because an adversarial attack uses a minimal amount of noise within an $\ell_2$ ball of some small $\epsilon$ to fool the DNN. Given the un-targeted nature of the attack used in AXAI, the distributions can be interpreted as estimations of the boundaries among neighboring classes. Thus, one can conclude that the attack magnitudes are a representation of feature contributions to the predictions on a local scale. A similar conclusion is made in \cite{ancona2017towards}, where it is argued that gradients can in fact point to important local attributions of a DNN. We explore this in detail in Appendix \ref{app_4}. \textbf{Property 3 (Completeness):} Completeness as a property is described as the ability to accurately explain the operations of a DNN \cite{gilpin2018explaining}. An explanation is more complete when it can explain the behavior of the DNN for a larger set of inputs. \cite{sundararajan2017axiomatic} and \cite{smilkov2017smoothgrad} mention the problem of sensitivity and lack of stability in gradient-based algorithms. In the literature, if a solution can reduce the gradient ``sensitivity'' problem, it can be described as having the ``completeness'' property \cite{gilpin2018explaining}. AXAI with the PGD attack is complete in the same sense as SmoothGrad is \cite{smilkov2017smoothgrad}. SmoothGrad takes the average of saliency maps with added Gaussian noise to reduce sensitivity. The PGD attack behaves in a similar manner by adding adversarial noise at each iteration. Both solutions add perturbations to the input to smooth gradient fluctuations. While further research can be done on the power of iterative attacks in their gradient smoothing effects, we argue that AXAI with iterative PGD does have this desirable characteristic and produces stable, sharpened visualizations of sensitivity maps for robust explanations. 
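The iterative PGD update discussed above (a sign-gradient step followed by a projection back onto a small norm ball) can be sketched on a toy differentiable model. Everything below is illustrative: the logistic "model," the $\epsilon$, the step size, and the use of an $\ell_\infty$ projection for simplicity are assumptions, not the settings of our experiments.

```python
# Hedged sketch: iterative PGD on a toy logistic-regression "model".
# The gradient is computed analytically, so no autodiff library is needed.
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(size=20)   # toy model weights
x0 = rng.normal(size=20)  # clean input
y = 1.0                   # true label

def loss_grad(x):
    """Gradient of the logistic loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

def pgd(x, eps=0.1, step=0.02, iters=20):
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(loss_grad(x_adv))  # ascend the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)         # project onto the ball
    return x_adv

x_adv = pgd(x0)
print(np.abs(x_adv - x0).max())  # bounded by eps
```

The repeated perturb-and-project loop is what gives the iterative attack its gradient-smoothing character, analogous to the noise averaging in SmoothGrad.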
Lastly, as shown in Section \ref{sec:main}, our explainability algorithm exhibits a high level of fidelity, where the explainability outputs are both interpretable and loyal to the decision-making process of the DNN. The produced explainability segments directly point to the places in the input that affect the decision of the DNN. As a result, our solution can be used to explore the relationship between input features and predictions and to understand issues related to the training of DNNs, bias and robustness against adversarial attacks (Appendix \ref{app_5}). \section*{Potential Ethical Impact} Our work in this paper contributes to the fields of adversarial machine learning and artificial intelligence explainability (AI explainability). There is still a huge gap between building a model in a Jupyter notebook and shipping it as a stand-alone product to users. Advances in these two fields directly relate to the deployment of AI systems that behave in a robust and user-friendly manner after deployment. Building AI systems is hard. AI explainability can provide insights into how AI models behave, why they make the decisions they make, and the reasoning behind their incorrect predictions. Additionally, explaining the outcomes of a model can help reduce bias and contribute to improvements in accountability and ethics by providing beneficial insights into how AI models think and make their decisions. Despite the hype, AI engineers struggle with deploying models that meet users' performance expectations. A lack of robustness in the performance of a trained model is a major impediment. We need to be able to design AI systems that both perform well and are robust. A robust model not only makes correct predictions in its expected environment, but also maintains an acceptable level of performance in unpredictable situations. Our work gives insights into how an adversary attacks an AI system trained to perform a specific task. 
Understanding how adversarial attacks behave can help AI engineers in the development of AI systems that perform as expected while maintaining some level of robustness in the presence of external disturbances and adversarial noise. This type of information can help AI engineers develop AI models that perform better. In short, our paper can help AI researchers in their endeavor to design, develop and deploy explainable, ethical AI systems that are robust and reliable. \begin{quote} \begin{small} \bibliographystyle{aaai}
\section{Introduction} Grammatical error correction (GEC) is the automatic correction of grammatical and other language-related errors in text. Most works regard this task as a translation task and use encoder--decoder (Enc--Dec) architectures to convert ungrammatical sentences to grammatical ones. This Enc--Dec approach often does not require linguistic knowledge of the target language. Strong Enc--Dec models for GEC are pretrained with a large amount of artificially generated data, commonly referred to as `pseudodata', that is created by introducing artificial errors into a monolingual corpus. Hereafter, pretraining using pseudodata aimed at the GEC task is referred to as \textbf{task-oriented} pretraining \cite{kiyono2019,beasota,low_resource_gec,kaneko_bert}. For example, \citet{kiyono2019} generated a pseudo corpus using back-translation and achieved strong results for English GEC. \citet{low_resource_gec} generated a pseudo corpus by introducing artificial errors into monolingual corpora and achieved the best scores for GEC in several languages by adopting the methods proposed by \citet{beasota}. These task-oriented pretraining approaches require extensive use of a pseudo-parallel corpus. Specifically, \citet{beasota} used 100M ungrammatical and grammatical sentence pairs, while \citet{kiyono2019} and \citet{kaneko_bert} used 70M sentence pairs, which required time-consuming pretraining of GEC models using the pseudo corpus. In this study, we examined the effectiveness of publicly available pretrained Enc--Dec models for GEC. Specifically, we investigated pretrained models that do not require pseudodata. We explored a pretrained model proposed by \citet{bart} called bidirectional and auto-regressive transformers (BART). \citet{m-bart} also proposed multilingual BART. These models were pretrained by predicting the original sequence, given a masked and shuffled sentence. 
The motivation for using these models for GEC was that they achieved strong results for several text generation tasks, such as summarization; we refer to them as \textbf{generic} pretrained models. We used generic pretrained BART models to compare with GEC models using a pseudo-corpus approach \cite{kiyono2019,kaneko_bert,low_resource_gec}. We conducted GEC experiments for four languages: English, German, Czech, and Russian. The Enc--Dec model based on BART achieved results comparable with those of current strong Enc--Dec models for English GEC. The multilingual model also showed high performance in other languages, despite only requiring fine-tuning. These results suggest that BART can be used as a simple baseline for GEC. \section{Previous Work} The Enc--Dec approach for GEC often uses the task-oriented pretraining strategy. For example, \citet{copy_gec} and \citet{beasota} reported that pretraining of the Enc--Dec model using a pseudo corpus is effective for the GEC task. In particular, they introduced word- and character-level errors into sentences from monolingual corpora. They developed a confusion set derived from a spellchecker and randomly replaced a word in a sentence. They also randomly deleted a word, inserted a random word, and swapped a word with an adjacent word. They performed these same operations, i.e., replacing, deleting, inserting, and swapping, for characters. The pseudo corpus made by the above methods consisted of 100M training samples. Our study aims to investigate whether the generic pretrained models are effective for GEC, because pretraining with such a large corpus is time-consuming. \citet{low_resource_gec} adopted \citet{beasota}'s method for several languages, including German, Czech, and Russian. They trained a Transformer \cite{attention} with pseudo corpora (10M sentence pairs), and achieved current state-of-the-art (SOTA) results for German, Czech, and Russian GEC. 
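The word-level noising operations described above (replace, delete, insert, swap) can be sketched as follows; the operation probabilities and toy vocabulary are illustrative stand-ins, not the settings of the cited works.

```python
# Hedged sketch: word-level noising for pseudodata generation.
# VOCAB and the probability p are toy assumptions for illustration.
import random

VOCAB = ["the", "a", "of", "to", "and"]  # toy replacement/insertion vocabulary

def noise_sentence(words, p=0.1, seed=0):
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(words):
        r = rng.random()
        if r < p:                                   # delete the word
            i += 1
            continue
        if r < 2 * p:                               # replace with a random word
            out.append(rng.choice(VOCAB))
        elif r < 3 * p:                             # insert a random word before it
            out.append(rng.choice(VOCAB))
            out.append(words[i])
        elif r < 4 * p and i + 1 < len(words):      # swap with the next word
            out.append(words[i + 1])
            out.append(words[i])
            i += 1
        else:
            out.append(words[i])                    # keep the word unchanged
        i += 1
    return out

noisy = noise_sentence("this is a clean example sentence".split())
print(" ".join(noisy))
```

Running such a noiser over a monolingual corpus yields the (ungrammatical, grammatical) sentence pairs used for task-oriented pretraining; the same four operations can be applied at the character level.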
We compared their results with those of the generic pretrained model to confirm whether the model is effective for GEC in several languages. \citet{kiyono2019} explored the generation of a pseudo corpus by introducing random errors or using back-translation. They reported that task-oriented pretraining with back-translation data and character errors is better than that with pseudodata based on random errors. \citet{kaneko_bert} combined \citet{kiyono2019}'s pretraining approach with BERT \cite{bert2019} and improved \citet{kiyono2019}'s results. Specifically, \citet{kaneko_bert} fine-tuned BERT with a grammatical error detection task. The fine-tuned BERT outputs for each token were combined with the original tokens as a GEC input. Their study is similar to our research in that both studies use publicly available generic pretrained models to perform GEC. The difference between these studies is that \citet{kaneko_bert} used the architecture of the pretrained model as an encoder. Therefore, their method still requires pretraining with a large amount of pseudodata. The current SOTA approach for English GEC uses the sequence tagging model proposed by \citet{gector}. They designed token-level transformations to map input tokens to target corrections to produce training data. The sequence tagging model then predicts the transformation corresponding to the input token. We do not attempt to make a comparison with this approach, as the purpose of our study is to create a strong GEC model without using pseudodata or linguistic knowledge. \section{Generic Pretrained Model} BART \cite{bart} is pretrained by predicting an original sequence, given a masked and shuffled sequence, using a Transformer. Inspired by SpanBERT \cite{spanbert}, they introduced masked tokens of various lengths, drawn from a Poisson distribution, at multiple positions. BART is pretrained with large monolingual corpora (160 GB), including news, books, stories, and web-text domains. 
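The span-masking scheme just described can be sketched as follows; the per-position masking probability and the Poisson $\lambda$ below are illustrative assumptions, not BART's exact hyperparameters.

```python
# Hedged sketch: BART-style span masking, where spans of Poisson-drawn
# length are each replaced by a single <mask> token. Toy settings only.
import numpy as np

def span_mask(tokens, lam=3.0, mask_prob=0.2, seed=0):
    """Replace Poisson-length spans with a single <mask> token."""
    rng = np.random.default_rng(seed)
    tokens = list(tokens)
    out, i = [], 0
    while i < len(tokens):
        if rng.random() < mask_prob:
            span = max(1, rng.poisson(lam))  # span length ~ Poisson(lam)
            out.append("<mask>")             # one mask token per span
            i += span
        else:
            out.append(tokens[i])
            i += 1
    return out

masked = span_mask("the quick brown fox jumps over the lazy dog".split())
print(masked)
```

During pretraining, the model is trained to reconstruct the original sequence from such masked (and shuffled) inputs, which is what makes the objective useful for text generation tasks.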
This model achieved strong results in several generation tasks; thus, it is regarded as a generic model. They released pretrained models using English monolingual corpora for several tasks, including summarization, which we used for English GEC. \citet{m-bart} proposed multilingual BART (mBART) for a machine translation task, which we used for GEC of several languages. The latter model was trained using monolingual corpora for 25 languages simultaneously. They used a special token for representing the language of a sentence. For example, they added \verb|<de_DE>| and \verb|<ru_RU>| as the initial tokens of the encoder and decoder for De--Ru translation. To fine-tune mBART for German, Czech, and Russian GEC, we set the special token to the corresponding target language. \section{Experiment} \subsection{Settings} \begin{table}[t] \centering \small \begin{tabular}{llrrr} \toprule lang & Corpus & Train & Dev & Test \\ \midrule \midrule & BEA & 1,157,370 & 4,384 & 4,477 \\ En & JFLEG & - & - & 747 \\ & CoNLL-2014 & - & - & 1,312 \\ \midrule De & Falko+MERLIN & 19,237 & 2,503 & 2,337 \\ \midrule Cz & AKCES-GEC & 42,210 & 2,485 & 2,676 \\ \midrule Ru & RULEC-GEC & 4,980 & 2,500 & 5,000 \\ \bottomrule \end{tabular} \caption{Data statistics.} \label{gec-data} \end{table} \begin{table*}[t] \centering \small \begin{tabular}{lrrrrrrr} \toprule & \multicolumn{3}{c}{CoNLL-14 ($\mathrm{M^2}$)} & JFLEG & \multicolumn{3}{c}{BEA-test} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-5} \cmidrule(lr){6-8} & P & R & $\mathrm{F_{0.5}}$ & GLEU & P & R & $\mathrm{F_{0.5}}$ \\ \midrule \citet{kiyono2019} & 67.9/\underline{73.3} & 44.1/44.2 & 61.3/64.7 & 59.7/61.2 & 65.5/\underline{74.7} & 59.4/56.7 & 64.2/\underline{70.2} \\ \citet{kaneko_bert} & 69.2/72.6 & \textbf{45.6}/\underline{46.4} & \textbf{62.6}/\underline{65.2} & \textbf{61.3}/\underline{62.0} & 67.1/72.3 & \textbf{60.1}/\underline{61.4} & \textbf{65.6}/69.8 \\ BART-based & \textbf{69.3}/69.9 & 45.0/45.1 & \textbf{62.6}/63.0 
& 57.3/57.2 & \textbf{68.3}/68.8 & 57.1/57.1 & \textbf{65.6}/66.1 \\ \bottomrule \end{tabular} \caption{English GEC results. Left and right scores represent single and ensemble model results, respectively. Bold scores represent the best score among the single models, and underlined scores represent the best overall score.} \label{result_score_english} \end{table*} \paragraph{Common Settings.} As presented in Table \ref{gec-data}, we used learner corpora, including BEA\footnote{BEA corpus is made of several corpora. Details can be found in \citet{bea2019}.} \cite{bea2019,locness,lang8-1,lang8-2,fce,nucle}, JFLEG \cite{jfleg}, and CoNLL-14 \cite{ng2014} data for English; Falko+MERLIN data \cite{merlin} for German; AKCES-GEC \cite{low_resource_gec} for Czech; and RULEC-GEC \cite{rulec} for Russian. Our models were fine-tuned using a single GPU (NVIDIA TITAN RTX), and our implementations were based on publicly available code\footnote{BART, mBART: https://github.com/pytorch/fairseq}. We used the hyperparameters provided in previous works \cite{bart,m-bart}, unless otherwise noted. Except for the ensemble results, scores were averaged over five fine-tuning runs with different random seeds. \paragraph{English.} Our setting for the English datasets was almost the same as that of \citet{kiyono2019}. We extracted the training data from BEA-train for English GEC. Similar to \citet{kiyono2019}, we did not use sentence pairs in which the source and target sides are identical; thus, the training data consisted of 561,525 sentences. We used BEA-dev to determine the best model. We trained the BART-based models by using \verb|bart.large|. This model was proposed for the summarization task, which required some constraints during inference to ensure appropriate outputs; however, we did not impose any constraints because our task was different. We applied byte pair encoding (BPE) \cite{bpe} to the training data for the BART-based model by using the BPE model of \citet{bart}. 
We used the $\mathrm{M^2}$ scorer \cite{m2score} and GLEU \cite{gleu} for CoNLL-14 and JFLEG, respectively, and used the ERRANT scorer \cite{errant} for BEA-test. We compared these scores with strong results \cite{kiyono2019,kaneko_bert}. \begin{table}[t] \centering \scalebox{0.8}{ \begin{tabular}{clrrr} \toprule & & P & R & $\mathrm{F_{0.5}}$ \\ \midrule \multirow{2}{*}{De} & \citet{low_resource_gec} & 78.21 & 59.94 & 73.31 \\ & mBART-based & 73.97 & 53.98 & 68.86 \\ \midrule \multirow{2}{*}{Cz} & \citet{low_resource_gec} & 83.75 & 68.48 & 80.17 \\ & mBART-based & 78.48 & 58.70 & 73.52 \\ \midrule & \citet{low_resource_gec} & 63.26 & 27.50 & 50.20 \\ Ru & mBART-based & 32.13 & 4.99 & 15.38 \\ & \ \ with pseudo corpus & 53.50 & 26.35 & 44.36 \\ \bottomrule \end{tabular} } \caption{German, Czech, and Russian GEC results. These models are not an ensemble of multiple models.} \label{result_score_others} \end{table} \paragraph{German, Czech, and Russian.} The dataset settings in this study were almost the same as those used by \citet{low_resource_gec} for each language. We used the official training data and selected the best model using the development data. In addition, we trained the mBART-based models for German, Czech, and Russian GEC. We used \verb|mbart.cc25| for the mBART-based models. For the mBART-based model, we followed \citet{m-bart}; we detokenized\footnote{We used detokenizer.perl in the Moses script \cite{moses}.} the GEC training data for the mBART-based model and applied SentencePiece \cite{spm} with the SentencePiece model shared by \citet{m-bart}. With this preprocessing, the input sentence may not convey as much grammatical information as a sentence tokenized with a morphological analysis tool and a subword tokenizer. However, the question of which preprocessing is appropriate for GEC is beyond the scope of this paper and is left as future work. For evaluation, we tokenized the outputs after recovering the subwords. 
Then, we used a spaCy-based\footnote{https://spacy.io} tokenizer for German\footnote{We used the built-in de model.} and Russian\footnote{https://github.com/aatimofeev/spacy\_russian\_tokenizer}, and the MorphoDiTa tokenizer\footnote{https://github.com/ufal/morphodita} for Czech. Moreover, the $\mathrm{M^2}$ scorer was used for each language. We compared these scores with the current SOTA results \cite{low_resource_gec}. \subsection{Results} \paragraph{English.} Table \ref{result_score_english} presents the results of the English GEC task. When using a single model, the BART-based model is better than the model proposed by \citet{kiyono2019}, and the results are comparable to those reported by \citet{kaneko_bert} on CoNLL-14 and BEA-test. \citet{kiyono2019} and \citet{kaneko_bert} incorporated several techniques to improve the accuracy of GEC. To compare with these models, we experimented with an ensemble of five models. Our ensemble model was slightly better than our single model, but worse than the ensemble models of \citet{kiyono2019} and \citet{kaneko_bert}. The BART-based model, along with its ensemble, achieved results comparable to the current strong results despite requiring only fine-tuning of the BART model. We believe that the ensemble method is ineffective here because the five models do not differ significantly: their initial weights are all those of the BART model, and the random seeds affect only minor factors, such as the order of the training data. \paragraph{German, Czech, and Russian.} Table \ref{result_score_others} presents the results for German, Czech, and Russian GEC. In the German GEC task, the mBART-based model scores 4.45 $\mathrm{F_{0.5}}$ points lower than the model by \citet{low_resource_gec}. This may be because \citet{low_resource_gec} pretrain the GEC model with only the target language, whereas mBART is pretrained on 25 languages, so information from the other languages is included as noise.
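The reported $\mathrm{F_{0.5}}$ gaps can be checked directly from the precision and recall columns, since $\mathrm{F_\beta} = (1+\beta^2)PR/(\beta^2 P + R)$ with $\beta=0.5$ weights precision twice as heavily as recall. A short Python sketch (illustrative only; the $\mathrm{M^2}$ and ERRANT scorers compute $P$ and $R$ from edit alignments) reproduces the Czech rows of Table~\ref{result_score_others}:

```python
def f_beta(p, r, beta=0.5):
    """F-beta score from precision p and recall r (here given as percentages)."""
    if p == 0.0 and r == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * p * r / (b2 * p + r)

# Czech rows: (precision, recall) -> F0.5
sota_cz = round(f_beta(83.75, 68.48), 2)   # 80.17
mbart_cz = round(f_beta(78.48, 58.70), 2)  # 73.52
```

Plugging in each (P, R) pair recovers the tabulated $\mathrm{F_{0.5}}$ values up to rounding.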
In the Czech GEC task, the mBART-based model scores 6.65 $\mathrm{F_{0.5}}$ points lower than the model by \citet{low_resource_gec}. As with the German GEC results, we suppose that mBART includes noisy information. For Russian GEC, the mBART-based model shows much lower scores than \citet{low_resource_gec}'s model. This may be because the training data for Russian GEC are scarce compared to those for German or Czech. To investigate the effect of corpus size, we additionally trained the mBART model with a 10M pseudo corpus, using the method proposed by \citet{beasota}, and fine-tuned it with the learner corpus to compensate for the low-resource scenario. The results presented in Table \ref{result_score_others} support our hypothesis. \begin{table}[t] \centering \small \begin{tabular}{lrrrrrr} \toprule & \multicolumn{3}{c}{\citet{kaneko_bert}} & \multicolumn{3}{c}{BART-based} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} Error Type & P & R & $\mathrm{F_{0.5}}$ & P & R & $\mathrm{F_{0.5}}$ \\ \midrule PUNCT & 74.1 & 52.7 & 68.5 & 79.2 & 59.0 & \textbf{74.1} \\ DET & 73.7 & 72.9 & 73.5 & 76.3 & 71.1 & \textbf{75.2} \\ PREP & 73.4 & 69.1 & \textbf{72.5} & 71.2 & 64.8 & 69.9 \\ ORTH & 86.9 & 62.9 & \textbf{80.8} & 84.2 & 52.9 & 75.3 \\ SPELL & 83.1 & 79.5 & \textbf{82.3} & 84.7 & 55.2 & 76.5 \\ \bottomrule \end{tabular} \caption{BEA-test scores for the top five error types, except for OTHER. \citet{kaneko_bert} and BART-based are ensemble models. Bold scores represent the best score for each error type.} \label{analysis_bart} \end{table} \section{Discussion} \paragraph{BART as a simple baseline model.} According to the German and Czech GEC results, the mBART-based model, for which we only fine-tuned the pretrained mBART model, achieves scores comparable to those of the SOTA models. In other words, the mBART-based model shows sufficiently high performance for several languages without using a pseudo corpus.
These results indicate that the mBART-based model can be used as a simple GEC baseline for several languages. \paragraph{Performance comparison for each error type.} We compare the BART-based model with \citet{kaneko_bert}'s model, which also uses a generic pretrained model, for common error types. Table \ref{analysis_bart} presents the results for the top five error types in BEA-test. According to these results, the BART-based model is superior to \citet{kaneko_bert} in PUNCT and DET errors; in particular, PUNCT is 5.6 $\mathrm{F_{0.5}}$ points better. BART is pretrained to reconstruct shuffled and masked sequences, so the model learns to place punctuation appropriately. In contrast, \citet{kaneko_bert} use an encoder that is not pretrained to correct shuffled sequences. Conversely, \citet{kaneko_bert} report better results for the other error types, except for DET. Regarding ORTH and SPELL, their model is more than 5 $\mathrm{F_{0.5}}$ points better than the BART-based one. It is difficult for the BART-based model to correct these errors because BART uses shuffled and masked sequences, rather than character-level errors, as noise in pretraining. \citet{kaneko_bert} introduce character errors into a pseudo corpus as task-oriented Enc--Dec pretraining; this is why the BART-based model is inferior to \citet{kaneko_bert} for these error types. \section{Conclusion} We introduced a generic pretrained Enc--Dec model, BART, for GEC. The experimental results indicated that BART provides a better initialization of the Enc--Dec model parameters. The fine-tuned BART achieved remarkable results, comparable to the current strong results in English GEC. Indeed, the monolingual BART seems to be more effective for GEC than the model with a multilingual setting. Although it did not match the SOTA, the fine-tuned mBART nevertheless exhibited high performance in the other languages. This implies that BART is a simple baseline for pretrained GEC methods because it requires only fine-tuning as training.
\section*{Acknowledgements} We thank the anonymous reviewers for their insightful comments. This work was partly supported by Grants-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (JSPS KAKENHI), Grant Numbers 19K12099 and 19KK0286.
\section{Introduction} Kernel smoothers form an essential suite of statistical techniques for data analysis in the 21st century due to their ability to convey complex statistical information in a concise and intuitive visual format. This ability arises from their shared characteristic of transforming data samples into smoothed estimates. Kernel smoothers have provided insight into data analysis problems in many situations. A small recent selection of these includes: the identification of important biomedical functions, such as characterising different sub-cellular structures in single cells \citep{schauer2010} or characterising a single cell population in mixed cell samples \citep{chacon2011ss}; the evaluation of predicted extreme temperatures to calibrate climate models \citep{beranger2019}; the estimation of the home range of animal movements \citep{baillo2021}; or the detection of traffic anomalies from traffic flows \citep{kalair2021}. The most widely used kernel smoother is the kernel density estimate, which can be considered to be a smoothed version of the histogram. A major access point to kernel smoothers in the \proglang{R} statistical programming environment is the \pkg{ks} (`\pkg{k}ernel \pkg{s}moothing') add-on package \citep{duong2007}, which implements density estimation, classification (supervised learning), clustering (unsupervised learning), and inferential methods. This package utilises the base \proglang{R} graphics engine to generate its statistical graphics. Whilst it remains the most comprehensive graphics engine in \proglang{R}, the \pkg{ggplot2} graphics engine \citep{ggplot2} has gained popularity, as part of the `tidyverse', especially with data analysis practitioners. Despite the dramatic rise in the number of analysis methods available in the tidyverse, it comprises only a limited range of natively implemented kernel smoothers.
The first goal of the \pkg{eks} (`\pkg{e}xtended \pkg{k}ernel \pkg{s}moothing') package \citep{eks} is to provide access to the comprehensive suite of kernel smoothers from the \pkg{ks} package in the tidyverse. There is an analogous lack of kernel smoothers for geospatial data analysis. Existing \proglang{R} packages for geospatial analysis include \pkg{spatstat} \citep{badderley2005} and \pkg{spatialEco} \citep{spatialEco}. For these and similar packages, the underlying statistical framework is that of spatial point processes, rather than bivariate point clouds, as is the case for \pkg{eks}. This seemingly minor difference in the statistical framework has many consequences, since kernel-based data analyses, apart from kernel density estimates and intensity estimates, are rarely developed and implemented for spatial point processes. The geospatial functionality of the \pkg{eks} package is based on the \pkg{sf} package \citep{sf}, as it provides seamless geospatial functionality in \proglang{R}. Furthermore, the geospatial outputs in \pkg{eks} can be exported to external `Geographical Information Systems' (GIS) software (such as ArcGIS and QGIS). This vastly expands the range of kernel smoothers available to geospatial analysts in a system which is familiar to them. The second goal of the \pkg{eks} package is to provide access to a comprehensive suite of kernel smoothers for geospatial analysis. Thus a wide range of kernel smoothers is now available for tidy and geospatial data, and for \pkg{ggplot2} and base \proglang{R} graphical visualisations. The user is able to select and combine these components, with their differing strengths and applicabilities, in order to construct suitable data analysis workflows. To illustrate kernel smoothers for tidy and geospatial data, we employ the \code{grevilleasf} data set from the \pkg{eks} package, as shown in Figure~\ref{fig:scatter}.
The full data set of 22203 plants from 238 different {\it Grevillea} species was collected in Western Australia. The south-west corner of Western Australia is one of the 25 `biodiversity hotspots' which are `areas featuring exceptional concentrations of endemic species and experiencing exceptional loss of habitat' identified in \citet{myers2000} to assist in formulating priorities in biodiversity conservation policies. \begin{figure}[!ht] \centering \includegraphics[width=0.4\textwidth]{fig/grevillea1.png} \caption{Scatter plot of the \code{grevilleasf} ($n=22203$) data set, in the biodiversity hotspot of south-western Western Australia.} \label{fig:scatter} \end{figure} For brevity, we focus on this single data set as an example of both tidy and geospatial data. A tidy data set is a data matrix where (i) each variable forms a column and (ii) each observation forms a row, and it is also known as a `long' data set \citep{wickham2014}. For our purposes in this paper, we refer to \code{grevilleasf\_coord} solely as a tidy data set, and omit any mention of its geospatial characteristics, even though its columns consist of 2-dimensional geographical coordinates. Similarly, we refer to \code{grevilleasf} solely as a geospatial data set, and omit any mention of its tidy status, to emphasise its distinct geospatial characteristics. \begin{verbatim}
R> grevilleasf_coord
   species      lon     lat
1  robusta 390106.5 6462671
2 speciosa 382689.2 6457387
3  robusta 390089.8 6462603
...
R> grevilleasf
Simple feature collection with 22303 features and 2 fields
Dimension:     XY
Bounding box:  xmin: 73519.97 ymin: 6120859 xmax: 1795868 ymax: 8451928
Projected CRS: GDA2020 / MGA zone 50
Geometry type: POINT
   species                 geometry
1  robusta POINT (390106.5 6462671)
2 speciosa POINT (382689.2 6457387)
3  robusta POINT (390089.8 6462603)
...
\end{verbatim}
The GDA2020/MGA zone 50 (EPSG:7850) projection has been employed to convert the geodetic coordinates (degrees) of the {\it Grevillea} locations into planar coordinates (metres). For tidy data, the geographical coordinates (in the columns \code{lon}, \code{lat}) are encoded as floating point values, and are treated in the same manner as all other floating point variables. For geospatial data, they are encoded in a special data structure (in the column \code{geometry}) known as simple features \citep{ogc-sfa}. This simple feature encoding requires special attention since these geometries cannot be treated like the usual floating point variables. This paper focuses on the software implementation of the kernel smoothers, and is complementary to \citet{chacon2018}, which focuses on the underlying statistical framework. In Section~\ref{sec:kde}, we explore kernel density estimation, and in Section~\ref{sec:kde-app} its applications to classification (supervised learning) and density difference significance testing. In Section~\ref{sec:kdde}, we explore kernel density derivative estimation, and in Section~\ref{sec:kdde-app} its applications to clustering (unsupervised learning), density ridge estimation, and modal region significance testing. In Sections~\ref{sec:kde}--\ref{sec:kdde-app}, we illustrate each case first for tidy data with \code{ggplot2} graphics, followed by the equivalents for geospatial data with \code{ggplot2} and base \proglang{R} graphics. We briefly outline the export to external GIS software in Section~\ref{sec:export}, and kernel smoothers in other data analysis settings in Section~\ref{sec:other}. We end with some concluding remarks. \section{Density estimation} \label{sec:kde} Density estimation is a fundamental statistical analysis tool, since it supplies much information about the data set at hand. Our data ${\bf X}_1, \dots, {\bf X}_n$ are a random sample drawn from the common density function $f$.
The goal of density estimation, as its name suggests, is to estimate this unknown density. Kernel density estimates are a popular choice among the many available smoothed density estimation methods, since they possess an intuitive construction. For an arbitrary estimation point ${\bm x}$, the kernel density estimate is \begin{equation} \hat{f}_{\bf H}({\bm x}) = n^{-1} \sum_{i=1}^n K_{\bf H}({\bm x} - {\bf X}_i). \label{eq:kde} \end{equation} Throughout the \pkg{eks} package, the kernel function is the $2$-dimensional Gaussian density function $K_{\bf H} ({\bm x}) = (2\pi)^{-1} |{\bf H}|^{-1/2} \exp (-\tfrac12 {\bm x}^\top {\bf H}^{-1} {\bm x})$. Equation~\eqref{eq:kde} tells us that to compute a kernel density estimate, we place a Gaussian function, with variance ${\bf H}$, at each data point ${\bf X}_i$, and then we sum these kernel functions. This way, the data sample ${\bf X}_1, \dots, {\bf X}_n$ are transformed into a smooth surface $\hat{f}_{\bf H}$. \citet[Chapter~2]{chacon2018} contains a more detailed overview of kernel density estimates. The bandwidth matrix ${\bf H}$ in Equation~\eqref{eq:kde} is the crucial tuning parameter. A bandwidth matrix which is too small leads to an undersmoothed density estimate since it does not offer sufficient reduction in the complexity of the observed data. On the other hand, a bandwidth matrix which is too large leads to an oversmoothed density estimate that obscures important details in the observed data. Thus it is critical to find an optimal trade-off between this under- and oversmoothing. Many possible solutions for optimal smoothing are implemented in the \pkg{ks} package, and are thus available in the \pkg{eks} package, including the plug-in, unbiased cross validation and smoothed cross validation bandwidths. \subsection{Tidy density estimation} To illustrate density estimation, we focus on single species subsets of the {\it Grevillea} data.
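Before turning to the examples, Equation~\eqref{eq:kde} can be made concrete with a short Python sketch (illustrative only; \pkg{eks} delegates the actual computation to \code{ks::kde}) that evaluates a bivariate Gaussian kernel density estimate at a single point.

```python
import numpy as np

def kde(x, data, H):
    """Evaluate f_hat_H(x) = n^{-1} sum_i K_H(x - X_i) with a bivariate Gaussian kernel."""
    Hinv = np.linalg.inv(H)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(H)))
    diffs = np.atleast_2d(data) - x                       # rows are X_i - x
    quad = np.einsum("ij,jk,ik->i", diffs, Hinv, diffs)   # (x - X_i)^T H^{-1} (x - X_i)
    return norm * np.mean(np.exp(-0.5 * quad))

# Single data point at the origin with H = I: the estimate at the origin
# equals the bivariate standard normal height 1/(2*pi).
fhat0 = kde(np.zeros(2), np.zeros((1, 2)), np.eye(2))
```

The quadratic form is symmetric in $\pm({\bm x}-{\bf X}_i)$, so the sign convention of the differences does not matter.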
Figure~\ref{fig:kde1} compares the density estimates for the $n=137$ locations of the {\it G. leptobotrys} species which result from an optimal bandwidth from the \pkg{eks} package and from a sub-optimal one from the \pkg{ggalt} package \citep{ggalt}. The former, known as the bivariate plug-in bandwidth matrix \citep{duong2003}, is the default optimal bandwidth in the \pkg{eks} package, and it is obtained from a call to \code{ks::Hpi}. For the {\it G. leptobotrys} locations, the optimal \code{ks::Hpi} matrix is [2.32e8, $-1.13$e8; $-1.13$e8, 4.59e8]. The presence of non-zero off-diagonal entries in the optimal matrix appropriately orients the kernel functions, and the resulting density estimate is unimodal, as shown in the centre panel of Figure~\ref{fig:kde1}. The default bandwidth in \pkg{ggalt}, which is widely used in the tidyverse, is obtained from the element-wise application of the univariate plug-in bandwidth \code{KernSmooth::dpik}. For the {\it G. leptobotrys} locations, this bandwidth is [1.48e8, 0; 0, 1.87e8]. Since this sub-optimal matrix only applies smoothing in the coordinate axis directions, it yields an undersmoothed density estimate with spurious multi-modal structure on the right panel. \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{fig/grevillea2.pdf} \caption{Filled contour plots of density estimates for {\it G. leptobotrys} ($n=137$) with quartile probability contour levels. (Left) Scatter plot. (Centre) Optimally smoothed. (Right) Undersmoothed.} \label{fig:kde1} \end{figure} In Figure~\ref{fig:kde1}, the heights of the contour regions are calculated according to the probability contours method \citep{bowman1993,hyndman1996jasa}. 
The pink region is the smallest region that contains 25\% of the probability mass, the orange region plus the enclosed pink region is the smallest region that contains 50\% of the probability mass, and the yellow region plus the enclosed orange and pink regions is the smallest region that contains 75\% of the probability mass. Since these are relative heights, they facilitate the choice of the contour levels, as it involves selecting values from 0\% to 100\%, rather than from the range of the density values. These probability contours can also be considered as a multivariate extension of the univariate percentiles, e.g., the 50\% contour region is a bivariate equivalent to the median. Due to their intuitive properties, these probability contours are employed throughout \pkg{eks}, with the quartile contour levels (25\%, 50\%, 75\%) being the default values. In addition to their intuitive interpretation, these probability contours are straightforward to compute: the kernel density estimate is evaluated at the $n$ observed data values $\hat{f}_{\bf H}({\bf X}_1), \dots, \hat{f}_{\bf H}({\bf X}_n)$, then we compute $\tau_\alpha$ as the $(1-\alpha)$-quantile of these evaluated values, and the $\alpha$ probability contour region is the level set of the density estimate at $\tau_\alpha$, i.e. $\{{\bm x}: \hat{f}_{\bf H}({\bm x}) > \tau_\alpha\}$, which contains approximately a fraction $\alpha$ of the probability mass \citep{hyndman1996jasa}. In contrast to {\it G. leptobotrys}, for the $n=93$ locations of the {\it G. yorkrakinensis} species, the optimally smoothed density estimate in the centre panel in Figure~\ref{fig:kde2} displays a trimodal structure with obliquely oriented contours. The oversmoothed density estimate in the right panel has circular contours and a bimodal structure, and is unable to distinguish between the two modal regions in the lower right. The optimal bandwidth matrix is [8.84e8, $-8.33$e8; $-8.33$e8, 1.36e9], whereas the sub-optimal bandwidth is [5.43e8, 0; 0, 9.10e8].
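The contour-level computation reduces to a quantile of the density values at the data points; a minimal Python sketch (illustrative only, not the \pkg{ks} implementation) shows why the smallest region holding a fraction of the probability mass is bounded at the complementary quantile:

```python
import numpy as np

def contour_level(fhat_at_data, cont):
    """Height of the `cont` probability contour: the (1 - cont)-quantile of
    the density estimate evaluated at the observed data points."""
    return np.quantile(np.asarray(fhat_at_data), 1.0 - cont)

# Toy example: density values 1, 2, ..., 100 at 100 data points.  About a
# quarter of the points (the highest-density ones) lie above the 25% level.
vals = np.arange(1.0, 101.0)
lev25 = contour_level(vals, 0.25)
frac_inside = np.mean(vals > lev25)
```

By construction, roughly a fraction $\alpha$ of the data points (and hence of the probability mass) lies inside the $\alpha$ probability contour region.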
If we take Figures~\ref{fig:kde1}--\ref{fig:kde2} together, optimal smoothing prevents both under- and oversmoothing. \begin{figure}[!ht] \centering \includegraphics[width=\textwidth]{fig/grevillea3.pdf} \caption{Filled contour plots of density estimates for {\it G. yorkrakinensis} ($n=93$) with quartile probability contour levels. (Left) Scatter plot. (Centre) Optimally smoothed. (Right) Oversmoothed.} \label{fig:kde2} \end{figure} The \proglang{R} code snippets included here are intended to give an overall idea of the syntax of the \pkg{eks} package, rather than complete code to reproduce the figures. The latter is provided in the companion \proglang{R} script (\code{eks-script.R}). This first code snippet treats the {\it G. yorkrakinensis} data as a tidy data set (\code{yorkr\_coord}). The commands to compute the density estimate with the optimal bandwidth, in the centre panel in Figure~\ref{fig:kde2}, are \begin{verbatim}
R> ## tidy density estimate
R> yorkr_coord <- dplyr::filter(grevilleasf_coord, species=="yorkrakinensis")
R> yorkr_coord <- dplyr::select(yorkr_coord, lon, lat)
R> t1 <- tidy_kde(yorkr_coord)
R> ggplot2::ggplot(t1, ggplot2::aes(x=lon, y=lat)) +
+    geom_contour_filled_ks(colour=1)
\end{verbatim} The function \code{tidy\_kde} is a wrapper function for \code{ks::kde}. It computes the tidy density estimate explicitly. This differs from existing layer functions, e.g., \code{ggplot2::geom\_density\_2d} and \code{ggalt::geom\_bkde2d}, which compute the density estimate internally and do not return a user-level \proglang{R} object. The tidy density estimate output from \code{tidy\_kde} is: \begin{verbatim}
R> t1
# A tibble: 22,801 × 6
      lon      lat estimate ks        tks   label
    <dbl>    <dbl>    <dbl> <list>    <chr> <chr>
1 378915. 6207697.        0 <kde>     kde   Density
2 382385. 6207697.        0 <int [1]> kde   Density
3 385855. 6207697.        0 <int [1]> kde   Density
...
\end{verbatim} The output is a tibble with an added \code{tidy\_ks} class.
This allows for a \code{ggplot.tidy\_ks} method to be defined for this object class. Otherwise, it can be treated as a tibble. The first two columns \code{lon, lat} (same names as the input data) are the coordinates of the vertices in the estimation grid, and the third column \code{estimate} is the density estimate value at \code{lon, lat}. The fourth column \code{ks} holds the output from \code{ks::kde}. This is required for the computation of probability contours in the new layer function \code{geom\_contour\_filled\_ks}, which draws the filled contour plots for \code{tidy\_ks} objects. The remaining columns indicate that the output is a density estimate computed from \code{ks::kde}, and they are employed in \code{ggplot.tidy\_ks} to create default aesthetic mapping and legend labels. This default aesthetic mapping is \code{ggplot2::aes(x=lon, y=lat, z=estimate, weight=ks)}. Whilst the \code{x}, \code{y}, \code{z} aesthetics are as expected for a bivariate contour plot, the \code{weight} aesthetic is unorthodox, since it is not a weighting variable: it is a workaround in \pkg{ggplot2} graphics to mimic the dynamic display of probability contours in base \proglang{R} graphics. For the {\it G. yorkrakinensis} data, the quartile contour levels for the optimally smoothed density estimate in the centre panel in Figure~\ref{fig:kde2} are 1.81e-11, 2.23e-11, 2.91e-11, and for the oversmoothed density estimate in the right panel are 1.63e-11, 2.29e-11, 2.75e-11. These probability contour heights are different for each different density estimate, even if the target contour probabilities remain the same. On the other hand, it is sometimes useful to have a set of fixed contour heights for all density estimates for a direct comparison. Computing a suitable set of contour heights most likely needs some trial and error to produce visually appealing contour plots for all density estimates \citep[Section~2.2]{chacon2018}.
A heuristic solution consists of computing the probability contour heights for each density estimate, for a fixed set of probabilities, which are then aggregated. We compute the corresponding probabilities for each density estimate for this aggregated set of contour heights, and we remove any contour levels whose estimated probabilities are too close to each other. This procedure is implemented in \code{contour\_breaks}, whose output can then be input into \code{geom\_contour\_filled\_ks} by setting the \code{breaks} parameter. We revisit the density estimates for the {\it G. yorkrakinensis} locations, this time with the fixed contour heights (9.25e$-12$, 1.85e$-11$, 2.46e$-11$, 4.01e$-11$). With these fixed contour heights, a direct comparison of different density estimates is possible. The optimally smoothed density estimate on the left exceeds the highest density level (4.01e$-11$, dark pink), whereas the estimate on the right only exceeds the second highest level (2.46e$-11$, dark orange). Since the latter misses some important modal information, it is indeed oversmoothed in comparison. \begin{figure}[!ht] \centering \includegraphics[width=0.67\textwidth]{fig/grevillea4.pdf} \caption{Filled contour plots of density estimates for {\it G. yorkrakinensis} ($n=93$) with fixed contour levels. (Left) Optimally smoothed.
(Right) Oversmoothed.} \label{fig:kde3} \end{figure} The code to produce two density estimates with a single set of contour heights in Figure~\ref{fig:kde3} is \begin{verbatim}
R> ## fixed contour levels
R> H2 <- diag(sapply(yorkr_coord, KernSmooth::dpik)^2)
R> t2 <- tidy_kde(yorkr_coord, H=H2)
R> t3 <- c(t1, t2)
R> b <- contour_breaks(t3, cont=c(10,30,50,70,90))
R> ggplot2::ggplot(t3, ggplot2::aes(x=lon, y=lat)) +
+    geom_contour_filled_ks(colour=1, breaks=b) + ggplot2::facet_wrap(~group)
\end{verbatim} To emphasise that the \pkg{eks} package is equally capable of computing a kernel density estimate for tidy data whose columns are not also geospatial coordinates (unlike \code{grevilleasf\_coord}), we examine the well-known \code{crabs} data set from the \pkg{MASS} package: \begin{verbatim}
R> crabs
  sp sex index   FL  RW   CL   CW  BD
1  B   M     1  8.1 6.7 16.1 19.0 7.0
2  B   M     2  8.8 7.7 18.1 20.8 7.4
3  B   M     3  9.2 7.8 19.0 22.4 7.7
...
\end{verbatim} This data set consists of $n=200$ observations of the species {\it Leptograpsus variegatus} collected in Western Australia, with 50 crabs for each combination of two colour forms and two sexes. We focus on the frontal lobe size \code{FL} (mm) and carapace width \code{CW} (mm) measurements. The effect of oversmoothing, with the default bandwidth matrix $[2.23, 0; 0, 9.50]$, on the right panel of Figure~\ref{fig:kde4} is even more apparent than in Figure~\ref{fig:kde3}. The bimodality of the optimally smoothed density estimate with bandwidth matrix $[1.45, 3.47; 3.47, 8.38]$ is highly detailed, whereas it is completely absent for the oversmoothed estimate. \begin{figure}[!ht] \centering \includegraphics[width=0.67\textwidth]{fig/crabs.pdf} \caption{Filled contour plots of density estimates for {\it L. variegatus} crabs ($n=200$) with fixed contour levels. (Left) Optimally smoothed.
(Right) Oversmoothed.} \label{fig:kde4} \end{figure} For brevity, we do not display any more figures for these \code{crabs} data when demonstrating tidy kernel methods in the sequel, since they would be conceptually identical to those for the {\it Grevillea} data. \subsection{Geospatial density estimation} To produce a map like that in the centre panel in Figure~\ref{fig:kde2} for the {\it G. yorkrakinensis} locations as geospatial data (\code{yorkr}), the commands are \begin{verbatim}
R> ## geospatial density estimate
R> yorkr <- dplyr::filter(grevilleasf, species=="yorkrakinensis")
R> s1 <- st_kde(yorkr)
\end{verbatim} The function \code{st\_kde} is the geospatial equivalent of \code{tidy\_kde}, and produces an object of class \code{sf\_ks}, which is a list with three fields: \code{tidy\_ks}, \code{grid}, and \code{sf}. The first field is a summary of the tidy density estimate from \code{tidy\_kde}, the second contains the rectangular polygons of the estimation grid, and the third contains the 1\% to 99\% probability contour regions of the density estimate. We focus on the contour regions. \begin{verbatim}
R> s1$sf
Simple feature collection with 99 features and 2 fields
Geometry type: MULTIPOLYGON
  contlabel     estimate                       geometry
1        99 3.469482e-12 MULTIPOLYGON (((366008.4 64...
2        98 4.373744e-12 MULTIPOLYGON (((368748.9 64...
3        97 5.781100e-12 MULTIPOLYGON (((374230.1 64...
...
\end{verbatim} This has two attributes: \code{contlabel} (label for the probability contour) and \code{estimate} (height of the probability contour region). Unlike for \code{tidy\_kde}, where the probability contour regions are computed dynamically in the layer function \code{geom\_contour\_filled\_ks}, these 1\% to 99\% regions are converted to multipolygons prior to plotting, since the dynamic conversion during plotting could be computationally heavy. The quartile contours 25\%, 50\%, 75\% are selected by default in \code{geom\_contour\_filled\_ks} for tidy data.
Since we are unable to replicate this automatic behaviour exactly for the \code{ggplot2::geom\_sf} layer function, we first apply \code{st\_get\_contour} to the input of \code{ggplot2::geom\_sf}. The \code{sf\_ks} class also has a \code{ggplot.sf\_ks} method which computes the default map legend. \begin{verbatim}
R> ## geospatial density estimate geom_sf plot
R> ggplot2::ggplot(s1) + ggplot2::geom_sf(data=st_get_contour(s1),
+    ggplot2::aes(fill=contlabel))
\end{verbatim} The display of geospatial data is more flexible than that for tidy data since the former can be displayed in either \pkg{ggplot2} or base \proglang{R} graphics without any change to the input. The density estimate output from \code{st\_kde} is displayed in base \proglang{R} graphics via the \code{plot.sf\_ks} method for \code{sf\_ks} objects. The following command produces the equivalent output to the centre panel in Figure~\ref{fig:kde2}. \begin{verbatim}
R> ## geospatial density estimate base R plot
R> plot(s1)
\end{verbatim} This plot method internally calls \code{st\_get\_contour} to extract the required contour polygons for plotting, so it is more concise than \code{ggplot2::geom\_sf}, which requires an explicit user-level call to \code{st\_get\_contour}. The base \proglang{R} and \pkg{ggplot2} plots are essentially identical since they comply with the geospatial standard specifications \citep{ogc-sfa}. \subsection{Optimal bandwidth matrices} Since the bandwidth matrix is the crucial tuning parameter for kernel density estimates, we explore its statistical properties further. These properties are the subject of a vast body of research literature, which we do not attempt to review here; instead, we provide a simplified outline of how the optimal bandwidth matrix in \pkg{eks} is obtained. We begin with a squared error discrepancy between a density estimate $\hat{f}_{\bf H}$ and the target density $f$, i.e., $M({\bf H}) = \int \E [\hat{f}_{\bf H} ({\bm x}) - f({\bm x})]^2 \, \mathsf{d} {\bm x}$.
Since this expression involves the unknown target density $f$, it must be estimated for it to be of practical use. The plug-in bandwidth matrix in \code{ks::Hpi} computes the estimate $\hat{M}({\bf H}) = (4\pi)^{-d/2} n^{-1} |{\bf H}|^{-1/2} + \tfrac14 \hat{{\bm m}}_4^\top (\vec {\bf H} \otimes \vec {\bf H})$. We do not describe this estimate rigorously, since it would require lengthy technical definitions; the interested reader is encouraged to consult \citet[Chapter~3]{chacon2018} for details. We are content to state that the first term in $\hat{M}$ is related to the variance of the density estimate, and the second term to the square of the bias of the density estimate. An optimal bandwidth matrix $\hat{{\bf H}}$ is defined as \begin{equation} \hat{{\bf H}} = \argmin{{\bf H}} \hat{M} ({\bf H}) \label{eq:H} \end{equation} where the minimisation is carried out over the space of all symmetric positive definite matrices. When this minimisation is achieved, there is an optimal trade-off between the variance and the squared bias, or equivalently between under- and over-smoothing. When an optimal bandwidth matrix $\hat{{\bf H}}$ is substituted into Equation~\eqref{eq:kde}, the resulting kernel density estimate is the closest to the target density $f$ as measured by the discrepancy $\hat{M}$. Different bandwidth matrices arise from the different ways of computing $\hat{M}$ and/or from different ways of carrying out the minimisation. For example, the default bandwidth in \pkg{ggalt} treats the joint bivariate optimisation in Equation~\eqref{eq:H} as two separate univariate optimisation problems. The density estimate functions \code{tidy\_kde} and \code{st\_kde} compute $\hat{{\bf H}}$ in Equation~\eqref{eq:H} by calling the \code{ks::Hpi} function, and then substitute this $\hat{{\bf H}}$ into Equation~\eqref{eq:kde}, to compute an optimal tidy/geospatial density estimate, as shown in the centre panels in Figures~\ref{fig:kde1}--\ref{fig:kde2}.
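The trade-off that Equation~\eqref{eq:H} resolves can be made concrete with a scalar toy version of the criterion (an illustrative sketch, not the matrix optimisation performed by \code{ks::Hpi}): take $M(h) = A h^{-2} + B h^{4}$, where the first term plays the role of the variance and the second that of the squared bias; setting $M'(h)=0$ gives $h^\ast = (A/(2B))^{1/6}$.

```python
import numpy as np

# Toy scalar criterion: a variance-like term A / h^2 that shrinks as h
# grows, and a squared-bias-like term B * h^4 that grows with h.
A, B = 1.0, 1.0

def M(h):
    return A / h**2 + B * h**4

# Setting M'(h) = -2A/h^3 + 4B*h^3 = 0 gives the closed-form minimiser.
h_star = (A / (2.0 * B)) ** (1.0 / 6.0)

# A fine grid search recovers the same trade-off point numerically.
grid = np.linspace(0.5, 1.5, 100001)
h_grid = grid[np.argmin(M(grid))]
```

Too small an $h$ blows up the first (variance) term, too large an $h$ blows up the second (bias) term, mirroring the under- and oversmoothing behaviour of the bandwidth matrix.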
Additional bandwidth matrices in the \pkg{ks} package include the normal scale \code{ks::Hns}, unbiased (least squares) cross validation \code{ks::Hlscv} and smoothed cross validation \code{ks::Hscv}. The commands are of the type:
\begin{verbatim}
R> ## smoothed cross validation selector
R> H3 <- ks::Hscv(yorkr_coord)
R> t3 <- tidy_kde(yorkr_coord, H=H3)
\end{verbatim}
For most data samples, the plug-in bandwidth \code{ks::Hpi} yields fast and robust kernel estimates, though there remain some cases where other bandwidths are more suitable. For a review of the performance of these other bandwidths, see \citet[Chapter~3]{chacon2018}. For brevity, we illustrate kernel estimates only with the plug-in optimal bandwidths in the sequel.

\section{Applications of density estimates}
\label{sec:kde-app}

For tidy and geospatial data, kernel smoothing methodologies other than stand-alone density estimates are rarely implemented. We demonstrate the increased scope of kernel smoothers in the \pkg{eks} package with some applications of density estimates to density-based classification (supervised learning) and to significant density difference testing.

\subsection{Density-based classification (supervised learning)}

The goal of classification is to assign future data to one of the known classes in the current data. That is, this is a supervised learning problem. The data are $({\bf X}_1, Y_1), \dots, ({\bf X}_n, Y_n)$, where the ${\bf X}_i$ are the observed attributes, and $Y_i$ is the known class label from one of $m$ classes. These data are a random sample from the mixture density $\pi_1 f_1 + \dots + \pi_m f_m$, where $\pi_j$ is the prior probability and $f_j$ is the marginal density function for class $j$, for $j=1, \dots, m$ \citep[Section~7.2]{chacon2018}. The simplest (and often most robust) classifier is the Bayes classifier.
This assigns a candidate point ${\bm x}$ to the class $c$ with the highest density value at ${\bm x}$, i.e., $c({\bm x}) = \mathrm{argmax}_{j=1, \dots, m} \, \pi_j f_j({\bm x})$. The density-based classifier replaces the prior probability $\pi_j$ with the observed sample class proportion $\hat{\pi}_j$, and the marginal density $f_j$ with the marginal density estimate $\hat{f}_j$. Each marginal density estimate is computed with its own optimal bandwidth matrix. The estimated class label for ${\bm x}$ from the kernel density-based classifier is thus $$\hat{c}({\bm x}) = \argmax{j=1, \dots, m} \hat{\pi}_j \hat{f}_j({\bm x}).$$ This kernel classifier is more adaptable than the usual linear and quadratic classifiers. The linear classifier uses Gaussian density fits with a common variance matrix for all classes, and the quadratic classifier uses Gaussian density fits with a different variance matrix for each class. Our data sample comprises the combined {\it G. hakeoides} (pink circles, $n_1=207$) and {\it G. paradoxa} (green triangles, $n_2=358$) data samples, as shown in the scatter plot in the left panel of Figure~\ref{fig:kda}. In the centre panel are the quartile probability contour plots of marginal density estimates $\hat{\pi}_1 \hat{f}_1$ (pink lines) and $\hat{\pi}_2 \hat{f}_2$ (green lines), where $\hat{f}_1$ is the density estimate for {\it G. hakeoides}, and $\hat{f}_2$ for {\it G. paradoxa}. As the marginal density contours have considerable overlap in the central modal region, it is difficult to decide visually which marginal density value is higher. This is resolved in the plot of estimated class labels from the density-based classifier on the right of Figure~\ref{fig:kda}. The regions where {\it G. hakeoides} is more likely are coloured in pink, and where {\it G. paradoxa} is more likely are in green.

\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fig/grevillea5.pdf}
\caption{Density-based classifier for {\it G.
hakeoides} ($n_1=207$) and {\it G. paradoxa} ($n_2=358$). (Left) Scatter plots. (Centre) Quartile probability contours of marginal density estimates. (Right) Class label estimates.}
\label{fig:kda}
\end{figure}

The command to compute a tidy kernel classifier is \code{tidy\_kda}. It requires a grouped tibble as its input (\code{grevilleasf\_gr\_coord}), grouped by the class factor variable (\code{species}). To produce the marginal densities plot for the density-based classifier in the centre panel in Figure~\ref{fig:kda}:
\begin{verbatim}
R> ## tidy density-based classifier contours
R> t4 <- tidy_kda(grevilleasf_gr_coord)
R> ggplot2::ggplot(t4, ggplot2::aes(x=lon, y=lat)) +
+ geom_contour_ks(ggplot2::aes(colour=species))
\end{verbatim}
The layer function \code{geom\_contour\_ks} draws the contour lines for \code{tidy\_ks} objects. In addition to the columns already present in the density estimate, the extra columns in the output of a density-based classifier relate to the classes: \code{prior\_prob} (class sample proportion), \code{label} (estimated class label), \code{species} (same as input class label). The structure of a density-based classifier is similar to that for a density estimate grouped by a class variable. Moreover, we do not wish to plot the default rectangular display grid, as this contains labels for pixels in the ocean. So prior to generating the display using \code{ggplot2::geom\_tile} in the right panel in Figure~\ref{fig:kda}, we compute the estimates of the density support via \code{tidy\_ksupp}, and assign an \code{NA} class label to those pixels outside of the convex hull of the union of the density support estimates.
\begin{verbatim}
R> ## tidy density-based classifier labels plot
R> ggplot2::ggplot(t4, ggplot2::aes(x=lon, y=lat)) + ggplot2::geom_tile(ggplot2::aes(fill=label), alpha=0.1)
\end{verbatim}
The equivalent code to produce a geospatial density-based classifier is
\begin{verbatim}
R> ## geospatial density-based classifier
R> s4 <- st_kda(grevilleasf_gr)
\end{verbatim}
where \code{grevilleasf\_gr} is the geospatial version of \code{grevilleasf\_gr\_coord}. The estimated class labels are stored in the \code{sf\_ks} object in the \code{grid} field as a collection of rectangular polygons. To plot these class labels, for a \pkg{ggplot2} plot, we call \code{ggplot2::geom\_sf} on the \code{grid} field, and for a base \proglang{R} plot, we call \code{plot(, which\_geometry="grid")}.
\begin{verbatim}
R> ## geospatial density-based classifier geom_sf plot
R> gs <- ggplot2::ggplot(s4)
R> gs + ggplot2::geom_sf(data=s4$grid, ggplot2::aes(fill=label), alpha=0.2,
+ colour=NA)
R> gs + ggplot2::geom_sf(data=st_get_contour(s4),
+ ggplot2::aes(colour=species), fill=NA)
R> ## base R plot
R> plot(s4, which_geometry="grid", border=NA)
R> plot(s4, which_geometry="sf")
\end{verbatim}
The question of optimal bandwidths for a density-based classifier is more complicated than that for a density estimate. We opt for a simple and robust implementation in the \pkg{eks} package, where \code{tidy\_kda} and \code{st\_kda} call \code{ks::Hpi} for each class data sub-sample. These class-wise optimal bandwidths are known to asymptotically minimise the misclassification error, i.e., the probability that we do not classify a candidate point in class $j$ given that it is drawn from class $j$, $\Prob \{\hat{c} ({\bf X}) \neq j | {\bf X} \sim f_j\}$. Whilst there is an intuitive appeal in selecting bandwidths to exactly minimise the misclassification error, it is not clear how much is gained in practice with this more complicated approach over the simpler bandwidths.
Moreover, there are currently no efficient computational algorithms to compute these more complicated bandwidths. See \citet[Section~7.2]{chacon2018} for a discussion.

\subsection{Local density difference hypothesis testing}

We make our first foray into kernel-based statistical inference. The goal is to determine the regions where two density functions are statistically significantly different from each other. Whilst hypothesis tests for global differences are well-known, e.g., Kolmogorov-Smirnov, these do not indicate where in the data the differences are most salient. Towards this goal, we employ local hypothesis tests based on the difference of the density functions. For each candidate point ${\bm x}$, the local null hypothesis is $H_0({\bm x})\colon f_1({\bm x}) = f_2({\bm x})$. From these local hypothesis tests, we define the significant density difference regions as
\begin{align*}
U^+ &= \{{\bm x}\colon \mathrm{reject \ } H_0({\bm x}), f_1 ({\bm x}) > f_2 ({\bm x})\}, \ U^- = \{{\bm x}\colon \mathrm{reject \ } H_0({\bm x}), f_1({\bm x}) \leq f_2 ({\bm x})\}.
\end{align*}
The data ${\bf X}_1, \dots, {\bf X}_{n_1}$ are a random sample of size $n_1$ drawn from the first density function $f_1$, and ${\bf Y}_1, \dots, {\bf Y}_{n_2}$ a sample of size $n_2$ from the second density function $f_2$. The local test statistic, as proposed in \citet{duong2013}, is $W({\bm x}) = [\hat{f}_1({\bm x}) - \hat{f}_2({\bm x})]^2/S({\bm x})^2$, where $S({\bm x})^2$ is an estimate of the variance of $\hat{f}_1({\bm x}) - \hat{f}_2({\bm x})$. This author computes an expression for $S({\bm x})^2$, and establishes that the null distribution of the test statistic $W({\bm x})$ is asymptotically chi-squared with 1 d.f. for all candidate points ${\bm x}$. See also \citet[Section~7.1]{chacon2018}. We carry out these hypothesis tests $H_0({\bm x}_j), j=1,\dots,m$, for all the candidate points in a grid $\{{\bm x}_1,\dots,{\bm x}_m\}$.
Let the $p$-value from $H_0({\bm x}_j)$ be $p_j=\Prob ( \chi^2_1 > W({\bm x}_j))$, where $\chi^2_1$ denotes a chi-squared random variable with 1 d.f. Let the order statistics of these $p$-values be $p_{(1)} \leq \dots \leq p_{(m)}$, and their corresponding hypotheses $H_{0,(1)}, \dots, H_{0,(m)}$. The Hochberg decision rule rejects all the hypotheses $H_{0,(1)}, \dots, H_{0,(j^*)}$, where $j^* = \operatorname{argmax}_{1\leq j\leq m} \, \{ p_{(j)} \leq \alpha/(m-j+1)\}$. This rule controls the Type 1 error (false positive) to the level of significance $\alpha$ across all these tests \citep{hochberg1988}. Once we have determined for which candidate points ${\bm x}_j$ we reject the local hypothesis tests, the estimates of the significant density difference regions are
\begin{align*}
\hat{U}^+ &= \{{\bm x}_j\colon \mathrm{reject \ } H_0({\bm x}_j) \mathrm{\ and\ } \hat{f}_1({\bm x}_j) > \hat{f}_2({\bm x}_j), j=1, \dots, m\} \\
\hat{U}^- &= \{{\bm x}_j\colon \mathrm{reject \ } H_0({\bm x}_j) \mathrm{\ and\ } \hat{f}_1({\bm x}_j)\leq \hat{f}_2({\bm x}_j), j=1, \dots, m\}.
\end{align*}
The local density difference hypothesis testing procedure is implemented in \code{tidy\_kde\_local\_test} and \code{st\_kde\_local\_test}. The results for testing between $n_1=207$ {\it G. hakeoides} and $n_2=358$ {\it G. paradoxa} locations at an $\alpha=0.05$ level of significance are displayed in Figure~\ref{fig:kde_local_test}. The regions where {\it G. hakeoides} is significantly more prevalent are in pink, and where {\it G. paradoxa} is significantly more prevalent are in green. These significant density difference regions are superposed on the density-based classifier labels from Figure~\ref{fig:kda}, reproduced here as the pale pink and pale green regions. The density-based classifier always assigns one of these classes (species) to all points, even in the borderline cases where both density estimates are of similar value, and/or where the data are sparse.
The significant density difference region estimates, due to a more stringent threshold of statistical evidence, do not include these borderline cases. Hence they yield more targeted regions of relative prevalence.

\begin{figure}[!ht]
\centering
\includegraphics[width=0.33\textwidth]{fig/grevillea6.pdf}
\caption{Local significant density difference region estimates ($\alpha=0.05$) for {\it G. hakeoides} ($n_1=207$) and {\it G. paradoxa} ($n_2=358$), superposed on the density-based classifier regions. Regions where {\it G. hakeoides} is statistically more prevalent are in pink, and where {\it G. paradoxa} is more prevalent are in green. Density-based classifier regions are pale pink for {\it G. hakeoides}, and pale green for {\it G. paradoxa}.}
\label{fig:kde_local_test}
\end{figure}

The commands for these local density difference hypothesis tests are \code{tidy\_kde\_local\_test} and \code{st\_kde\_local\_test}. These local tests require two separate input data sets, which are \code{parad\_coord, hakeo\_coord} (tidy) and \code{parad, hakeo} (geospatial) for the {\it G. paradoxa} and {\it G. hakeoides} data samples.
\begin{verbatim}
R> ## tidy signif. density difference regions
R> t6 <- tidy_kde_local_test(data1=parad_coord, data2=hakeo_coord)
R> ggplot2::ggplot(t6, ggplot2::aes(x=lon, y=lat)) +
+ geom_contour_filled_ks(colour=1)
R> ## geospatial signif. density difference regions geom_sf plot
R> s6 <- st_kde_local_test(x1=parad, x2=hakeo)
R> ggplot2::ggplot(s6) +
+ ggplot2::geom_sf(data=st_get_contour(s6), ggplot2::aes(fill=label))
R> ## base R plot
R> plot(s6)
\end{verbatim}
The output is similar to that for a single density estimate, except that the density value at (\code{lon, lat}) is replaced by an indicator of which data sample is significantly more prevalent (\code{label}). Hence the plotting behaviour is similar to that for a density estimate.
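The Hochberg step-up rule used in these local tests can be re-implemented in a few lines of base \proglang{R}. This is an illustrative sketch rather than the internal \pkg{eks} code; its rejections agree with the adjusted $p$-values from \code{stats::p.adjust(, method="hochberg")} thresholded at $\alpha$.
\begin{verbatim}
## illustrative Hochberg step-up rule: reject H_{0,(1)}, ..., H_{0,(j*)}
## with j* the largest j such that p_(j) <= alpha/(m-j+1)
hochberg_reject <- function(p, alpha=0.05)
{
    m <- length(p)
    o <- order(p)
    ok <- which(p[o] <= alpha/(m - seq_len(m) + 1))
    reject <- logical(m)
    if (length(ok) > 0) reject[o[seq_len(max(ok))]] <- TRUE
    reject
}
hochberg_reject(c(0.001, 0.013, 0.021, 0.04, 0.38))
\end{verbatim}
Since the step-up rule examines the ordered $p$-values from the largest downwards, a single large threshold crossing can rescue all smaller $p$-values, which is why it is less conservative than the Bonferroni correction.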
The question of optimal bandwidths for a significant density difference region estimate is analogous to that for a density-based classifier, and we opt for a similar solution where \code{tidy\_kde\_local\_test} and \code{st\_kde\_local\_test} call \code{ks::Hpi} for each class data sample. These sample-wise plug-in bandwidths most likely asymptotically minimise the symmetric difference between $\hat{U}^+$ and $U^+$, and between $\hat{U}^-$ and $U^-$, though this has not yet been shown rigorously. Nonetheless, empirical evidence indicates that these plug-in bandwidths offer effective estimates of the significant density difference regions.

\section{Density derivative estimation}
\label{sec:kdde}

Crucial information about the structure of a data set is not always revealed by examining solely the density values, and can only be discerned via the density derivatives. For example, the local maxima of the data density are characterised as the locations where the first derivative is identically zero and the second derivative is negative definite \citep[Chapter~5]{chacon2018}. Some recent examples of the utility of density derivative estimates in data analysis include: the segmentation of digital images, which utilised the first density derivative of pixel colour-locations to guide the search for similar image segments more efficiently than using only the density of the pixel colour-locations \citep{beck2016}; and the identification of differences in cell fluorescence measurements between healthy control and diseased subjects \citep{chacon2011ss}, which utilised the second density derivative to pinpoint more robustly the biologically different regions than is possible by considering only the density of the fluorescence measurements.
With the same data as for the density estimation case, i.e., ${\bf X}_1, \dots, {\bf X}_n$ is a random sample drawn from the common density function $f$, our goal is to estimate the derivatives of the unknown density $f$, focusing on the first (gradient) derivative. For 2-dimensional data, the gradient of a density function $f$ is comprised of two partial derivatives $\mathsf{D} f = [\partial f / \partial x_1, \partial f / \partial x_2]$. The kernel estimate of the density gradient is given by
\begin{equation}
\mathsf{D} \hat{f}_{\bf H}({\bm x}) = n^{-1} \sum_{i=1}^n \mathsf{D} K_{\bf H}({\bm x} - {\bf X}_i)
\label{eq:kdde}
\end{equation}
where the gradient kernel function is $\mathsf{D} K_{\bf H} ({\bm x}) = -(2\pi)^{-1} |{\bf H}|^{-1/2} {\bf H}^{-1} {\bm x} \exp (-\tfrac12 {\bm x}^\top {\bf H}^{-1} {\bm x})$. Since there are two components of the density gradient, it can be visualised using two separate plots, one for each partial derivative. A more concise alternative is a quiver plot, in which arrows, whose length and direction are determined by the gradient, are drawn at each point in the estimation grid. Figure~\ref{fig:kdde} is the quiver plot for the density gradient estimate for {\it G. yorkrakinensis}, superposed on the density estimate. The arrows for the density gradient point towards the peaks of the modal regions. These arrows are longer where the density gradient is steeper, and they are shorter in the density tails where the slope is flatter. These density gradients indicate the rate of change in the data density, which is not easy to ascertain from the density levels themselves in the underlying density contour plot.

\begin{figure}[!ht]
\centering
\includegraphics[width=0.33\textwidth]{fig/grevillea7.pdf}
\caption{Quiver plot of density gradient estimate for {\it G.
yorkrakinensis} ($n=93$), superposed over its density estimate.}
\label{fig:kdde}
\end{figure}

The command for a tidy density gradient estimate is \code{tidy\_kdde(, deriv\_order=1)}. The function \code{tidy\_kquiver} converts the output from \code{tidy\_kdde} into a format suitable for the quiver plot layer function \code{ggquiver::geom\_quiver} \citep{ggquiver}. For {\it G. yorkrakinensis}, the code to produce a quiver plot superposed on a density estimate is
\begin{verbatim}
R> ## tidy density gradient estimate plot
R> t7 <- tidy_kdde(yorkr_coord, deriv_order=1)
R> t8 <- tidy_kquiver(t7)
R> ggplot2::ggplot(t1, ggplot2::aes(x=lon, y=lat)) +
+ ggquiver::geom_quiver(data=t8, ggplot2::aes(u=u, v=v))
\end{verbatim}
The output from \code{tidy\_kdde} is a tibble which is grouped by \code{deriv\_group}. The columns present in a density estimate are also present in a density derivative estimate, along with some additional columns relating to the derivative: \code{deriv\_order} (derivative order, 1 for the gradient), \code{deriv\_ind} (partial derivative enumeration, from 1 to 2), \code{deriv\_group} (partial derivative indices (1,0), (0,1) which correspond to $\partial/\partial x_1, \partial/ \partial x_2$ respectively). Whilst with \code{st\_kquiver} we can compute a geospatial output, \code{ggplot2::geom\_sf} is not able to plot arrows, and it is not possible to overlay a \code{ggquiver::geom\_quiver} layer over a \code{geom\_sf} layer. The current work-around is to overlay a \code{ggplot2::geom\_segment} layer over a \code{geom\_sf} layer, with some trial and error required in \code{grid::arrow} to produce suitable arrows.
\begin{verbatim}
R> ## geospatial density gradient estimate geom_sf plot
R> s7 <- st_kdde(yorkr, deriv_order=1)
R> s8 <- st_kquiver(s7, thin=9)
R> ggplot2::ggplot(s1) + ggplot2::geom_segment(data=s8$sf,
+ ggplot2::aes(x=lon, xend=lon_end, y=lat, yend=lat_end),
+ arrow=grid::arrow(length=0.1*s8$sf$len))
\end{verbatim}
On the other hand, for a base \proglang{R} plot, the display of geospatial and tidy data are freely interchangeable, so we can overlay the quiver plot \code{plot(, display="quiver")} for a kernel density gradient estimate from the \pkg{ks} package.
\begin{verbatim}
R> ## geospatial density gradient estimate base R plot
R> plot(s8$tidy_ks$ks[[1]], display="quiver")
\end{verbatim}
For optimal bandwidth selection for kernel density gradient estimates, it is crucial to note that the optimal bandwidth matrix for $\mathsf{D} \hat{f}_{\bf H}$ is not the same as that for $\hat{f}_{\bf H}$. For a density estimate the optimality criterion is $M({\bf H}) = \int \E [\hat{f}_{\bf H} ({\bm x}) - f({\bm x})]^2 \, \mathsf{d} {\bm x}$, whereas the criterion for a density gradient estimate is $M_1({\bf H}) = \int \E \lVert \mathsf{D} \hat{f}_{\bf H} ({\bm x}) - \mathsf{D} f({\bm x}) \rVert^2 \, \mathsf{d} {\bm x}$. Since $M \neq M_1$, their minimisers will also not be equal in general. The default optimal bandwidth for the density gradient estimate in the \pkg{eks} package is the plug-in bandwidth \citep{chacon2010} obtained from a call to \code{ks::Hpi(, deriv.order=1)}. For the {\it G. yorkrakinensis} data, this bandwidth matrix is [4.39e8, $-4.36$e8; $-4.36$e8, 7.73e8]. In comparison, the optimal bandwidth matrix for the density estimate is [8.84e8, $-8.33$e8; $-8.33$e8, 1.36e9]. See \citet[Section~5.7]{chacon2018} for more details.
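The gradient kernel $\mathsf{D} K_{\bf H}$ in Equation~\eqref{eq:kdde} can be checked numerically in base \proglang{R}. In this sketch, $K_{\bf H}$ and $\mathsf{D} K_{\bf H}$ are coded directly from their 2-dimensional formulas, with an assumed illustrative bandwidth matrix, and the analytical gradient is compared to central finite differences:
\begin{verbatim}
## DK_H(x) = -(2 pi)^(-1) |H|^(-1/2) H^(-1) x exp(-x' H^(-1) x / 2)
K <- function(x, H) exp(-0.5*c(t(x) %*% solve(H) %*% x))/(2*pi*sqrt(det(H)))
DK <- function(x, H) -solve(H) %*% x * K(x, H)
H <- matrix(c(1, 0.3, 0.3, 0.8), 2, 2)  ## assumed bandwidth matrix
x <- c(0.7, -0.4)
eps <- 1e-6  ## central difference approximation of the gradient
DK_num <- c((K(x + c(eps, 0), H) - K(x - c(eps, 0), H))/(2*eps),
            (K(x + c(0, eps), H) - K(x - c(0, eps), H))/(2*eps))
\end{verbatim}
The same check applies at any evaluation point, which is a useful safeguard when coding derivative kernels by hand.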
Whilst \pkg{eks} provides functionality for the computation and display of the kernel estimates of the Hessian (second) derivative of the density, we omit these since there does not currently exist a suitably compelling visualisation like the quiver plot. However, density Hessian estimates are a key element in other data analysis applications.

\section{Applications of density derivative estimates}
\label{sec:kdde-app}

The true added value of density derivative estimates is as key components in more complex data analysis applications. We demonstrate the crucial role that density gradient estimates play in density-based clustering, and that density Hessian estimates play in density ridge estimates and significant density curvature regions.

\subsection{Density-based clustering (unsupervised learning)}

The goal of clustering is to discover homogeneous groups within a data set in terms of a similarity/dissimilarity trade-off: members of the same cluster are similar to each other, while members of different clusters are dissimilar to each other. If the $q$ unknown population clusters are $\{C_1,\dots,C_q\}$, then the cluster labelling function is $c({\bm x})=j$ whenever a candidate point ${\bm x}$ belongs to cluster $C_j$. Whilst we are able to estimate the cluster labelling function for all candidate points, for the vast majority of data analysis cases it is sufficient to compute $\hat{c}({\bf X}_1), \dots, \hat{c}({\bf X}_n)$ for the data sample ${\bf X}_1, \dots, {\bf X}_n$. Since the cluster labels are unknown, this is an unsupervised learning problem. Many clustering algorithms have been proposed in the literature. Our chosen approach is density-based clustering, where a cluster is a data-rich region (high density values) which is separated from another data-rich region by a data-poor region (low density values). Thus we associate each data point to its `most representative' data-rich region.
In the \pkg{eks} package, this is carried out with a mean shift algorithm \citep{fukunaga1975}. For a data point ${\bf X}_i$, we initialise a sequence with ${\bf X}_{i,0} = {\bf X}_i$, then we iterate the recurrence equation $$ {\bf X}_{i, k+1} = {\bf X}_{i,k} + {\bf H} \mathsf{D} \hat{f}_{\bf H} ({\bf X}_{i,k})\big/\hat{f}_{\bf H}({\bf X}_{i,k}), $$ where $\hat f_{\bf H}$ is a density estimate and $\mathsf{D} \hat{f}_{\bf H}$ is a density gradient estimate. This recurrence equation is closely related to the well-known gradient ascent algorithm, with an improvement that accelerates the convergence of the recurrence iterations in regions of low data density. A more computationally stable form of the mean shift recurrence equation is
\begin{equation}
\label{eq:kms}
{\bf X}_{i,k+1} = {\bf X}_{i,k} + {\bm \beta}_{\bf H}({\bf X}_{i,k}) = \frac{\sum_{\ell=1}^n{\bf X}_\ell \, g\big(({\bf X}_{i,k} -{\bf X}_\ell)^\top{\bf H}^{-1}({\bf X}_{i,k}-{\bf X}_\ell)\big)}{\sum_{\ell=1}^n g \big(({\bf X}_{i,k} - {\bf X}_\ell)^\top{\bf H}^{-1}({\bf X}_{i,k}-{\bf X}_\ell)\big)}
\end{equation}
where $g(x) = \exp(-\tfrac12 x)$ and ${\bm \beta}_{\bf H}({\bm x}) = \frac{\textstyle \sum_{\ell=1}^n {\bf X}_\ell g(({\bm x} - {\bf X}_\ell)^\top{\bf H}^{-1}({\bm x} - {\bf X}_\ell))}{\textstyle \sum_{\ell=1}^n g(({\bm x} - {\bf X}_\ell)^\top{\bf H}^{-1}({\bm x} - {\bf X}_\ell))} - {\bm x}$. This ${\bm \beta}_{\bf H}$ is known as the mean shift, since it is the difference between the current iterate and a weighted mean of all data points. For our stopping rule, we iterate the recurrence in Equation~\eqref{eq:kms} until either we reach a maximum number of iterations (400) or the distance between subsequent iterations is less than 0.001 times the minimal marginal IQR (interquartile range) of the input data. This heuristic stopping rule gives sensible results in most cases.
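A minimal base \proglang{R} sketch of the mean shift recurrence in Equation~\eqref{eq:kms} is the following, with Gaussian weights and an assumed fixed bandwidth matrix (the \pkg{eks} functions select ${\bf H}$ automatically and use a more efficient implementation). Started from any data point of a unimodal sample, the iterates climb towards the single mode near the origin:
\begin{verbatim}
## illustrative mean shift iteration on a unimodal sample
set.seed(1)
X <- cbind(rnorm(200), rnorm(200))
H <- diag(0.5, 2)  ## assumed fixed bandwidth matrix
ms_step <- function(x)  ## weighted mean of all data points
{
    w <- exp(-0.5*mahalanobis(X, center=x, cov=H))
    colSums(X*w)/sum(w)
}
x <- X[1, ]
for (k in 1:400)
{
    x_new <- ms_step(x)
    if (sqrt(sum((x_new - x)^2)) < 1e-6) break
    x <- x_new
}
\end{verbatim}
Replacing the simple Euclidean tolerance here by the IQR-based tolerance gives the stopping rule described above.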
The result is a sequence of points $\{{\bf X}_{i,0}, {\bf X}_{i,1}, \dots\}$ which traces out a path, along the steepest ascent of the density gradient, from the data point ${\bf X}_i$ to the mode of the associated data-rich region. The data-rich regions are the `basins of attraction' of the density gradient ascent. If the data points are associated with the same mode, then they are considered to be members of the same cluster. Thus the number of clusters is equal to the number of these basins of attraction. For more details on mean shift and other forms of density-based clustering, see \citet[Section~6.2]{chacon2018}. A clustering of geospatial data is also known as regionalisation \citep{duque2007}. The result of the mean shift clustering of the $n=93$ {\it G. yorkrakinensis} locations into 6 clusters is displayed on the left panel in Figure~\ref{fig:kms}. Observe that we do not need to specify the number of clusters in advance, and the clusters can be of any arbitrary shape. These represent two important advantages over $k$-means clustering, which requires an a priori number of clusters, and whose cluster shapes are restricted to be (intersections of) ellipses. Cluster \#4 (cyan crosses) is the most northerly and most separate from the other clusters. Cluster \#2 (brown triangles) forms the most southerly cluster and is also well-separated. Points on the right edge of cluster \#1 (red circles) and those on the left edge of cluster \#3 (green squares) are close together, and $k$-means clustering tends to assign them to the same cluster, whereas the directionality of the mean shift assigns them to different clusters. The two smallest clusters \#5 (blue boxed crosses) and \#6 (magenta asterisks) comprise only two and one locations respectively, so these may belong to the larger clusters \#2 and \#1.
Since the mean shift relies on the density gradient ascent paths, we overlay the quiver plot of the density gradient on the convex hulls of the mean shift clusters on the right of Figure~\ref{fig:kms}. We observe that the gradient ascent arrows within each cluster are oriented towards the mode.

\begin{figure}[!ht]
\centering
\includegraphics[width=0.67\textwidth]{fig/grevillea8.pdf}
\caption{Mean shift clusters for {\it G. yorkrakinensis} ($n=93$). (Left) Cluster members. (Right) Cluster convex hulls, superposed over the quiver plot of its density gradient estimate.}
\label{fig:kms}
\end{figure}

The commands for mean shift clustering are \code{tidy\_kms} and \code{st\_kms}. The output is similar to that for a single density estimate, except that the data points are returned rather than the estimation grid points, and that \code{estimate} indicates the estimated cluster label rather than the density estimate value.
\begin{verbatim}
R> ## tidy mean shift clusters
R> t9 <- tidy_kms(yorkr_coord)
R> ggplot2::ggplot(t9, ggplot2::aes(x=lon, y=lat)) +
+ ggplot2::geom_point(ggplot2::aes(colour=estimate))
\end{verbatim}
For geospatial data, the commands follow analogously.
\begin{verbatim}
R> ## geospatial mean shift clusters geom_sf plot
R> s9 <- st_kms(yorkr)
R> ggplot2::ggplot(s9) + ggplot2::geom_sf(ggplot2::aes(colour=estimate))
R> ## base R plot
R> plot(s9, pch=16)
\end{verbatim}
Since the direction along which the data points are shifted is directly related to the density gradient, the default bandwidth for mean shift clustering in \code{tidy\_kms} and \code{st\_kms} is the plug-in bandwidth computed by \code{ks::Hpi(, deriv.order=1)}. For the {\it G. yorkrakinensis} data, this bandwidth matrix is [4.39e8, $-$4.36e8; $-$4.36e8, 7.73e8]. The bandwidth choice is made with the goal of optimal identification of the density gradient ascent paths.
It is also supported by results showing that the optimal bandwidth for estimating the mode of a density is closely related to the optimal bandwidth for density gradient estimation. In comparison, the optimal bandwidth matrix for the density estimate is [8.84e8, $-8.33$e8; $-8.33$e8, 1.36e9]. The difference reflects the added difficulty in the density gradient estimation problem.

\subsection{Density ridge estimate}

If a mode corresponds to the peak of a single isolated mountain, then a ridge corresponds to the path that joins the multiple peaks in a mountain range. \cite{ozertem2011} propose that the ridges of the density function serve as a generalisation of principal components. A density ridge, as its name suggests, forms a filament structure, in contrast to principal components which tend to form elliptical structures. Recall that the first principal component is determined by the eigenvector which corresponds to the largest eigenvalue of the variance matrix of the data. On the other hand, a density ridge is based on the Hessian matrix of the density function $f$, which is comprised of the partial derivatives arranged in a $2 \times 2$ matrix
$$\mathsf{H} f = \begin{bmatrix} \partial^2 f / (\partial x_1^2) & \partial^2 f / (\partial x_1 \partial x_2) \\ \partial^2 f / (\partial x_1 \partial x_2) & \partial^2 f / (\partial x_2^2) \end{bmatrix}.$$
As all eigenvalues of the density Hessian matrix are negative near a ridge, we focus on the smallest eigenvalue. \citet{ozertem2011} adapt the mean shift recurrence to estimate the density ridge, by replacing the density gradient estimate $\mathsf{D} \hat{f}_{\bf H}$ by the projected density gradient estimate $\hat{{\bm p}}_{\bf H}({\bm x}) = {\bm u}_2({\bm x}) {\bm u}_2({\bm x})^\top \mathsf{D} \hat{f}_{\bf H}({\bm x})$ where ${\bm u}_2({\bm x})$ is the eigenvector associated with the smallest eigenvalue of the density Hessian estimate $\mathsf{H} \hat{f}_{\bf H}({\bm x})$.
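The projection step can be sketched in base \proglang{R} via \code{eigen}, which for a symmetric matrix returns the eigenvalues in decreasing order with orthonormal eigenvectors. The Hessian and gradient values below are assumed purely for illustration; the sketch confirms that $\hat{{\bm p}}_{\bf H}$ retains only the component of the gradient along ${\bm u}_2$:
\begin{verbatim}
## projected gradient p = u2 u2' Df, with u2 the eigenvector of the
## smallest eigenvalue of the density Hessian (assumed values below)
Hf <- matrix(c(-2, 0.4, 0.4, -0.5), 2, 2)  ## assumed Hessian estimate
Df <- c(0.8, -0.3)                         ## assumed gradient estimate
e <- eigen(Hf)
u2 <- e$vectors[, 2]  ## eigenvector of the smallest eigenvalue
p <- sum(u2*Df)*u2    ## u2 u2^T Df
\end{verbatim}
In the density ridge functions, this projection is applied to the kernel estimates $\mathsf{H} \hat{f}_{\bf H}$ and $\mathsf{D} \hat{f}_{\bf H}$ afresh at each iterate of the projected mean shift recurrence.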
The mean shift recurrence for the projected gradient for a candidate point ${\bm x}_i$ is
\begin{equation}
{\bm x}_{i,k+1} = {\bm x}_{i,k} + {\bm u}_2({\bm x}_{i,k}) {\bm u}_2({\bm x}_{i,k})^\top {\bm \beta}_{\bf H}({\bm x}_{i,k})
\label{eq:kms-proj}
\end{equation}
where ${\bm \beta}_{\bf H}$ is the non-projected mean shift from Equation~\eqref{eq:kms}. The result is a sequence of points $\{{\bm x}_{i,0}, {\bm x}_{i,1}, \dots\}$ which traces out a path, along the steepest ascent of the projected density gradient, from the candidate point ${\bm x}_i$ to an associated end point on the density ridge. We apply Equation~\eqref{eq:kms-proj} to a grid of $m$ initial candidate points ${\bm x}_1, \dots, {\bm x}_m$ to obtain a density ridge estimate. The heuristic stopping rule is the same as for the non-projected mean shift, i.e., we iterate until we reach either 400 iterations or the distance between subsequent iterations is less than 0.001 times the minimal marginal IQR of the input data. For more details on density ridge estimates, see \citet[Section~6.3]{chacon2018}. The density ridge estimate for the {\it G. paradoxa} locations is displayed in Figure~\ref{fig:kdr} as the purple lines. It can be interpreted as the sequence of peaks in the data density or as the filament equivalent of the first principal component of the data density. This ridge estimate is obtained from a $151 \times 151$ grid of 22801 initial candidate points. This density ridge reflects a considerable reduction in the complexity of the original {\it G. paradoxa} locations, since the Ramer-Douglas-Peucker simplification in \code{sf::st\_simplify} \citep{ramer1972,douglas2011} is applied.

\begin{figure}[!ht]
\centering
\includegraphics[width=0.33\textwidth]{fig/grevillea9.pdf}
\caption{Density ridge estimates for {\it G. paradoxa} ($n=358$), superposed on their locations.}
\label{fig:kdr}
\end{figure}

The commands for the density ridge estimates are \code{tidy\_kdr} and \code{st\_kdr}.
The output is similar to the output for a density estimate, except that the estimate grid points are replaced by the points on the estimated density ridge, and the density estimate value is replaced by the density ridge indicator (\code{label}).
\begin{verbatim}
R> ## tidy density ridge estimate
R> t11 <- tidy_kdr(parad_coord)
R> ggplot2::ggplot(t11, ggplot2::aes(x=lon, y=lat)) +
+ ggplot2::geom_path(ggplot2::aes(colour=label, group=segment))
R> ## geospatial density ridge estimate geom_sf plot
R> s11 <- st_kdr(parad)
R> ggplot2::ggplot(s11) + ggplot2::geom_sf(ggplot2::aes(colour=label))
R> ## base R plot
R> plot(s11)
\end{verbatim}
Since the direction along which the candidate points are shifted relies on the eigenvalue decomposition of the density Hessian estimate, the default bandwidth for a density ridge estimate in \code{tidy\_kdr} and \code{st\_kdr} is the plug-in bandwidth computed by \code{ks::Hpi(, deriv.order=2)}. For the {\it G. paradoxa} data, this bandwidth matrix is [1.25e9, $-6.90$e8; $-6.90$e8, 8.16e8]. For comparison, the optimal bandwidth matrix for the density estimate is [1.03e9, $-5.71$e8; $-5.71$e8, 6.70e8]. The difference reflects the added difficulty in the density Hessian estimation problem.

\subsection{Significant density curvature regions}

We return to kernel-based inference with feature significance. In this context, a `feature' refers to an important characteristic of the density function $f$, such as a local mode \citep{godtliebsen2002}. We focus on modal regions since these are data-rich regions, which we characterise in terms of the local significance tests for the density curvature function $\mathsf{H} f$. At each candidate point ${\bm x}$, let the local null hypothesis be $H_0({\bm x})\colon \mathsf{H} f({\bm x}) = 0$.
The significant density curvature region is the region where the density curvature is significantly non-zero and where both eigenvalues $\lambda_1, \lambda_2$ of $\mathsf{H} f$ are negative: $$\mathcal{M} = \{{\bm x} : \mathrm{reject \ } H_0({\bm x}), \lambda_1({\bm x}), \lambda_2({\bm x}) < 0\}.$$ A suitable local Wald test statistic for $H_0({\bm x})$ is $W({\bm x}) = \lVert {\bf S}({\bm x})^{-1/2} \operatorname{vech} \mathsf{H} \hat{f}_{\bf H}({\bm x}) \rVert^2$, where $\operatorname{vech} \mathsf{H} \hat{f}_{\bf H} = [\partial^2\hat{f}_{\bf H}/\partial x_1^2, \partial^2\hat{f}_{\bf H}/(\partial x_1 \partial x_2), \partial^2\hat{f}_{\bf H}/\partial x_2^2]$ are the unique elements of the estimate of the density Hessian matrix, and ${\bf S}$ is the null variance of $\operatorname{vech} \mathsf{H} \hat{f}_{\bf H}$. The formula for ${\bf S}$ is given by \citet{duong2008}, and these authors also assert that the asymptotic null distribution of $W({\bm x})$ is approximately chi-squared with 3 d.f. for all 2-dimensional candidate points ${\bm x}$. Similar to the situation for the serially correlated hypothesis tests for the significant density difference regions in Section~\ref{sec:kde-app}, we apply the Hochberg procedure to control for the overall level of significance. The estimate of the significant density curvature regions $\mathcal{M}$ is $$\hat{\mathcal{M}} = \{{\bm x} : \mathrm{reject} \ H_0({\bm x}), \hat{\lambda}_1({\bm x}), \hat{\lambda}_2({\bm x}) < 0\}$$ where $\hat{\lambda}_1, \hat{\lambda}_2$ are the eigenvalues of the estimate of the density Hessian $\mathsf{H} \hat{f}_{\bf H}$, computed using the usual spectral decomposition. In Figure~\ref{fig:kfs}, the significant density curvature regions for {\it G. paradoxa} are the orange regions. They are superposed on the density ridge estimate (purple curves).
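For reference, the Hochberg step-up adjustment applied to the local p-values can be sketched as follows. This is a generic textbook implementation, not the internal \pkg{eks} code; the $\chi^2_3$ p-values of the Wald statistics are assumed to be given.

```python
import numpy as np

def hochberg(pvalues, alpha=0.05):
    """Hochberg step-up procedure: with sorted p-values p_(1) <= ... <= p_(m),
    reject H_(1), ..., H_(k) for the largest k with p_(k) <= alpha/(m - k + 1).
    Returns a boolean rejection indicator per hypothesis, in the input order."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha / (m - np.arange(1, m + 1) + 1.0)   # alpha/(m-k+1)
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))   # largest index meeting its threshold
        reject[order[: k + 1]] = True
    return reject

print(hochberg([0.01, 0.02, 0.03, 0.5]))   # only the smallest p-value is rejected
```

Note the step-up character: once some $p_{(k)}$ meets its threshold, all smaller p-values are rejected as well, even those that individually miss their own thresholds.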
The significant density curvature regions are 2-dimensional regions which tend to be concentrated at the most data-rich parts of the 1-dimensional filaments of the density ridge estimate.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.33\textwidth]{fig/grevillea10.pdf}
\caption{Significant density curvature regions for {\it G. paradoxa} ($n=358$), superposed on their density ridge estimate.}
\label{fig:kfs}
\end{figure}
The commands for the significant density curvature regions are \code{tidy\_kfs} and \code{st\_kfs}. The output is similar to the output for a density estimate, except that the density estimate value is replaced by the local Wald test statistic, and \code{label} is a significant density curvature indicator.
\begin{verbatim}
R> ## tidy significant modal regions
R> t12 <- tidy_kfs(parad_coord)
R> ggplot2::ggplot(t12, ggplot2::aes(x=lon, y=lat)) +
+    geom_contour_filled_ks(fill=7, colour=1)
R> ## geospatial significant modal regions geom_sf plot
R> s12 <- st_kfs(parad)
R> ggplot2::ggplot(s12) + ggplot2::geom_sf(fill=7)
R> ## base R plot
R> plot(s12, col=7)
\end{verbatim}
Since the local Wald test statistics depend on the density Hessian estimate, the default bandwidth for the significant density curvature regions in \code{tidy\_kfs} and \code{st\_kfs} is the plug-in bandwidth computed by \code{ks::Hpi(, deriv.order=2)}. For the {\it G. paradoxa} data, this bandwidth matrix is [1.25e9, $-6.90$e8; $-6.90$e8, 8.16e8]. \section{Export to external GIS software} \label{sec:export} The ability to export the geospatial kernel estimates in standard geospatial data formats via the \code{sf::write\_sf} function extends the functionality of the \pkg{eks} package to dedicated GIS software. For example, the commands to export the {\it G.
yorkrakinensis} locations \code{yorkr}, the polygons of the quartile probability contour regions of the density estimate \code{s1}, and the \code{grid} field of the rectangular polygons for the estimation grid of \code{s1} to the geopackage format are:
\begin{verbatim}
R> ## export to external geospatial format
R> sf::write_sf(yorkr, dsn="grevillea.gpkg", layer="yorkrakinensis")
R> sf::write_sf(st_get_contour(s1), dsn="grevillea.gpkg",
+    layer="yorkrakinensis_cont")
R> sf::write_sf(s1$grid, dsn="grevillea.gpkg", layer="yorkrakinensis_grid")
\end{verbatim}
The \code{grevillea.gpkg} geopackage consists of three layers: \code{yorkrakinensis} for the point geometries of the {\it G. yorkrakinensis} locations, \code{yorkrakinensis\_cont} for the multi-polygons of the quartile contour regions of the density estimate, and \code{yorkrakinensis\_grid} for the rectangular polygons of the estimation grid. Recall that quiver plots can be difficult to produce with geospatial data in \pkg{ggplot2} graphics, since the arrows require trial and error to display suitably with \code{ggplot2::geom\_segment}. In contrast, quiver plots are straightforward in GIS software, e.g. QGIS \citep{qgis}, where rescalable arrows are a native feature. We can export the \code{sf} field of the output from \code{st\_kdde(, deriv\_order=1)} using
\begin{verbatim}
R> sf::write_sf(s8$sf, dsn="grevillea.gpkg", layer="yorkrakinensis_quiver")
\end{verbatim}
This \code{grevillea.gpkg} geopackage can subsequently be employed in QGIS, an industry-standard software for GIS practitioners, since it offers features that are not available in \proglang{R}. For example, it has an interactive point-and-click interface, and it incorporates fast rendering of the OpenStreetMap base maps.
A screenshot from a QGIS analysis for a quiver plot overlaid on a density estimate is given in the left panel of Figure~\ref{fig:qgis}, with a similar symbology to that in Figure~\ref{fig:kdde}, i.e., the quartile probability contours of the density estimate are in a heat colour scale, and the arrows of the quiver plot of the density gradient estimate are in black.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{fig/grevillea11b.png}
\includegraphics[width=0.4\textwidth]{fig/grevillea11d.png}
\caption{Screenshots of QGIS analysis for {\it G. yorkrakinensis} ($n=93$). (Left) Probability contour plot of density estimate and quiver plot of density gradient estimate. (Right) Raster of density estimate.}
\label{fig:qgis}
\end{figure}
In addition, QGIS efficiently handles raster geospatial data: rasters are the geospatial equivalent of image data. Since the \code{grid} field of a kernel estimate consists of the rectangular polygons for each pixel of the estimation grid, it can be converted to a raster via the \pkg{stars} package. The converted raster is displayed on the right of Figure~\ref{fig:qgis}.
\begin{verbatim}
R> stars::write_stars(stars::st_rasterize(s1$grid),
+    dsn="yorkrakinensis_raster.tif")
\end{verbatim}
\section{Other data analysis settings} \label{sec:other} There are numerous variations on the standard density estimate provided by \code{tidy\_kde} for tidy data. For bounded data, there is a boundary density estimate (\code{tidy\_kde\_boundary}) and a truncated density estimate (\code{tidy\_kde\_truncate}). To allow the kernel function and/or the bandwidth matrix to vary, there are the sample point density estimate (\code{tidy\_kde\_sp}) and the balloon density estimate (\code{tidy\_kde\_balloon}). For data observed with errors, there is the deconvolved density estimate (\code{tidy\_kdcde}).
There are also distribution-based estimates, with a cumulative distribution estimate (\code{tidy\_kcde}), a copula estimate (\code{tidy\_kcopula}), and a ROC (receiver operating characteristic) curve (\code{tidy\_kroc}). These are all also implemented for geospatial data as \code{st\_k*}. All of these utilise the appropriate default bandwidth selector from the \pkg{ks} package. For brevity, we do not illustrate them here: their usage is demonstrated in their help pages contained in the \pkg{eks} package. Tidy kernel smoothers are also applicable to tidy univariate data. A density estimate of the longitude coordinates of {\it G. yorkrakinensis} is shown in Figure~\ref{fig:kde1d}, with the rug plot on the horizontal axis.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{fig/grevillea12.pdf}
\caption{Density estimate of the {\it G. yorkrakinensis} ($n=93$) longitude coordinates, with their rug plot.}
\label{fig:kde1d}
\end{figure}
The same command \code{tidy\_kde} can be employed for 1-dimensional data input to compute the density estimate. The \code{ggplot2::ggplot} method for \code{tidy\_ks} objects adds the default aesthetic mapping \code{ggplot2::aes(y=estimate, weight=ks)}, as well as a default vertical axis label, in this case \code{"Density function"}. The appropriate layer function to display the 1-dimensional density estimate is \code{ggplot2::geom\_line}. A rug plot is added using the layer function \code{geom\_rug\_ks}.
\begin{verbatim}
R> ## tidy 1D density estimate
R> t13 <- tidy_kde(dplyr::select(yorkr_coord, lon))
R> ggplot2::ggplot(t13, ggplot2::aes(x=lon)) +
+    ggplot2::geom_line(colour=1) + geom_rug_ks(colour=3)
\end{verbatim}
The default bandwidth computed by \code{ks::hpi} is 2.32e4, which is the standard deviation of the univariate kernel function. In contrast, the bivariate bandwidth [8.84e8, $-8.33$e8; $-8.33$e8, 1.36e9] computed by \code{ks::Hpi} is the variance matrix of the bivariate kernel function.
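For illustration, a univariate Gaussian kernel density estimate can be written in a few lines of Python. The normal-scale bandwidth below is a simpler stand-in for the plug-in selector \code{ks::hpi}, and the standard-normal toy data are illustrative only, so neither matches the values quoted for the {\it G. yorkrakinensis} coordinates.

```python
import numpy as np

def kde_1d(grid, data):
    """Univariate Gaussian KDE evaluated on a grid. The normal-scale
    (Silverman) bandwidth is an assumption standing in for the plug-in
    selector ks::hpi used by eks; the two values will differ somewhat."""
    h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)
    u = (grid[:, None] - data[None, :]) / h
    dens = np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))
    return dens, h

rng = np.random.default_rng(0)
data = rng.normal(size=2000)            # toy sample; the mode is near 0
grid = np.linspace(-5, 5, 1001)
dens, h = kde_1d(grid, data)
# dens is a proper density estimate: it integrates to ~1 over the grid
```

The bandwidth $h$ here plays exactly the role described in the text: it is the standard deviation of the univariate kernel placed on each observation.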
All univariate bandwidths are named in the same way as the bivariate bandwidths, except that the capital \code{H} is replaced by a lower case \code{h}, e.g., \code{ks::hns} for the normal scale, \code{ks::hucv} for the unbiased cross validation and \code{ks::hscv} for the smoothed cross validation bandwidths. \section{Conclusion} We have introduced a new \proglang{R} package \pkg{eks} which serves as a bridge from the comprehensive suite of kernel smoothers in the \pkg{ks} package to the tidyverse and to geospatial analysis. A wide range of kernel smoothing methods are now available, which (i) improve on the existing kernel density estimates, and (ii) improve the accessibility of more complex kernel-based data analyses, such as density-based classification (supervised learning), mean shift clustering (unsupervised learning), significant density difference testing, density derivative estimation, density ridge estimation and significant modal region testing. The \pkg{eks} package provides practitioners with additional tools to create compelling statistical visualisations from kernel smoothers, whether they are using tidy or geospatial data, and whether they are using base \proglang{R} or tidyverse graphics. \bibliographystyle{apalike}
\section{Introduction\label{ch:intro}} The present work is devoted to conceptual and computational problems in pre-Born--Oppenheimer (pre-BO) molecular structure theory. Without the Born--Oppenheimer (BO) approximation,\cite{BoOp27,Born51,BoHu54} in a ``pre-BO world'', we can gain in accuracy for the numerical results but lose a central paradigm for the well-accustomed concepts of chemistry. The reconstruction or interpretation of many common chemical concepts becomes a real challenge. One of these famous challenges, the problem of the quantum structure of molecules, has been recognized long ago\cite{Wo76,Wo77b,WoSu77,We84} and studied by many authors.\cite{ClDi80,Wo80,Wo86,CaAd04,SuWoCPL05,UMH06,UMH08,Lu12,CaCh98,GoSh11,GoSh12,MaHuMuRe11a,MaHuMuRe11b} In the present work we address another challenge, the status of electronically excited rotational-vibrational states in a pre-BO quantum mechanical description. In a pre-BO description there are no electronic states with corresponding potential energy curves or surfaces on which the rotational-vibrational Schr\"odinger equation could be solved. In addition, rotational-vibrational states corresponding to electronic excitations are often embedded in the lowest-lying dissociation channel of the system (\ref{fig:preBOreson}), prone to predissociation.\cite{Harris63} These rovibronic states are thus accessible in a pre-BO description only as resonances. These rovibronic resonances are characterized by some energy and width corresponding to a finite non-radiative (predissociative) lifetime. Our aim is to explore how these properties can be calculated in a pre-BO approach. As to the numerical results, there are practical approaches used for the calculation of quasi-bound states in molecular science.\cite{KuKrHo88,Rein82,Moi98}
The stabilization technique, a very simple computational tool, has been used to identify resonances and to estimate the resonance energy\cite{HaTa70,Ho83,UsSu02} and can also be extended to the calculation of the width.\cite{MaRaTa93,MaTaRyMo94,MuYaBu94,RyMoMaTa94} The complex coordinate rotation method\cite{AgCo71,BaCo71,Si72} is a neat mathematical approach for the calculation of the resonance energy and width, and has been used in several cases,\cite{Rein82,Moi98} for example in rotational-vibrational calculations on a potential energy surface within the Born--Oppenheimer (BO) theory.\cite{WaBo95,RyMoMaTa94} The use of complex absorbing potentials\cite{RoEnMo90,RoLiMo91,RiMe93} has been a popular technique in molecular spectroscopy and quantum reaction kinetics with many applications.\cite{SeMi91,ThMi97,Miller98} There also exist more involved and specialized approaches, such as the solution of the Faddeev--Merkuriev integral equations.\cite{Papp02,Papp05} The present work is organized as follows. First, the necessary theoretical framework is described for the variational solution of the Schr\"odinger equation for bound states of few-particle systems. Then, this approach is extended for the identification and calculation of quasi-bound states. Next, numerical examples are presented for the three-particle Ps$^-$ and the four-particle Ps$_2$. Finally, the description of excited bound and resonance rovibronic states of the four-particle H$_2$ is explored. We close with a summary and outlook.
\begin{figure}
\includegraphics[width=0.8\linewidth]{figure01.eps}
\caption{%
Motivation for this work: calculation of rovibrational levels corresponding to electronically excited states, which are bound in the Born--Oppenheimer description but which appear as resonances in pre-Born--Oppenheimer molecular structure theory.
\label{fig:preBOreson} } \end{figure} \clearpage \section{Theory and computational strategy\label{ch:theo}} The Schr\"odinger equation for an $({n_\mathrm{p}}+1)$-particle system with $m_i$ masses and $q_i$ electric charges assigned to the particles is \begin{align} \hat{H} \Psi = E \Psi \end{align} with the non-relativistic quantum Hamiltonian in Hartree atomic units \begin{align} \hat{H} = \hat{T} + \hat{V} = -\sum_{i=1}^{{n_\mathrm{p}}+1}\frac{1}{2m_i}\Delta_{\mx{r}_i} +\sum_{i=1}^{{n_\mathrm{p}}+1}\sum_{j>i}^{{n_\mathrm{p}}+1} \frac{q_iq_j}{|\mx{r}_i-\mx{r}_j|} \end{align} written in laboratory-fixed Cartesian coordinates $\mx{r}=(\mx{r}_1,\mx{r}_2,\ldots,\mx{r}_{{n_\mathrm{p}}+1})$. In the present work we use the bound-state variational approach of Ref.~\citenum{MaRe12} and (a) combine it with the stabilization technique to quickly identify long-lived resonances; and (b) extend it with the complex coordinate rotation method to calculate resonance energies and widths. \subsection{Variational pre-BO calculations using explicitly correlated Gaussian functions and the global vector representation} The overall translation of the center of mass is eliminated by writing the kinetic energy operator in terms of Jacobi Cartesian coordinates and the translational kinetic energy of the center of mass is subtracted. As an alternative to this approach, the original laboratory-fixed Cartesian coordinates can be used throughout the calculations without any further coordinate transformation employing a special elimination technique for the overall translation during the evaluation of the matrix elements.\cite{SiMaRe13} The matrix representation of the translationally invariant Hamiltonian is constructed using a symmetry-adapted basis set defined as follows. 
A basis function with the quantum numbers% \footnote{% $N$ denotes here the total spatial (orbital plus rotational) angular momentum quantum number in agreement with the recommendations of the International Union of Pure and Applied Chemistry.\cite{GreenBook07} We note that in Ref.~\citenum{MaRe12} the symbol $L$ was used in the same sense. Furthermore, $p$ is the parity, which we call ``natural'' if $p=(-1)^N$ and ``unnatural'' if $p=(-1)^{N+1}$. In this work we restrict the discussion to natural-parity states. The total spin quantum number for particles of type $a$ is $S_a$. For example, $S_\mr{p}$ and $S_\mr{e}$ denote the total spin quantum numbers for the protons and the electrons, respectively. } $\lambda=(N,M_N,p)$ and $\varsigma=(S_a,M_{S_a},S_b,M_{S_b},\ldots)$ ($a,b,\ldots$ label the particle type) is constructed as \begin{align} \Phi^{[\lambda,\varsigma]}(\mx{r},\mx{\sigma}) = \hat{\mathcal{A}} \lbrace \phi^{[\lambda]}(\mx{r})\ \chi^{[\varsigma]}(\mx{\sigma}) \rbrace \ , \label{eq:basis} \end{align} where $\hat{\mathcal{A}} = ({N_\mathrm{perm}})^{-1/2} \sum_{i=1}^{{N_\mathrm{perm}}}\varepsilon_i \hat{P}_i$ is the symmetrization and antisymmetrization operator for bosonic and fermionic-type particles, respectively. $\hat{P}_i$ ($i=1,2,\ldots,{N_\mathrm{perm}}$) is an operator permuting identical particles and $\varepsilon_i=-1$ if $\hat{P}_i$ corresponds to an odd number of interchanges of fermions, otherwise, $\varepsilon_i=+1$. 
The spatial part of the basis functions with natural parity, $p=(-1)^N$, is constructed using explicitly correlated Gaussian functions\cite{Bo60,Si60,JeSz79,CeRy93,Ry03} and the global vector representation\cite{SuUsVa98,VaSuUs98,SuVaBook98} as \begin{align} \phi^{[\lambda]}(\mx{r};\mx{\alpha},\mx{u},K) = |\mx{v}|^{2K+N} Y_{NM_N}(\hat{\mx{v}}) \exp\left(% -\frac{1}{2} \sum_{i=1}^{{n_\mathrm{p}}+1} \sum_{j>i}^{{n_\mathrm{p}}+1} \alpha_{ij}(\mx{r}_i-\mx{r}_j)^2 \right) \ , \label{eq:spbasis} \end{align} where the $\hat{\mx{v}}=\mx{v}/|\mx{v}|$ unit vector points in the direction of the global vector, $\mx{v} = \displaystyle\sum_{i=1}^{{n_\mathrm{p}}+1} u_i^{(0)} \mx{r}_i$. $Y_{NM_N}$ denotes the spherical harmonic function of degree $N$ and order $M_N$. The spin function, $\chi^{[\varsigma]}$, is constructed from elementary spin functions so that the resulting function is an eigenfunction of $\hat{S}_a^2$ and $(\hat{S}_a)_z$ for each type of particle ($a,b,\ldots$) with the quantum numbers $\varsigma=(S_a,M_{S_a},S_b,M_{S_b},\ldots)$. Then, the resulting $\Phi^{[\lambda,\varsigma]}$ basis function has the quantum numbers of the non-relativistic quantum theory (it is ``symmetry-adapted'') and contains free parameters, which can be optimized for an efficient description of the ``internal structure'' of a system. The free parameters of the spatial function, \ref{eq:spbasis}, are $K$, $\mx{\alpha}$: $\alpha_{ij}\ (i=1,\ldots,{n_\mathrm{p}}+1,j=i+1,\ldots,{n_\mathrm{p}}+1)$, and $\mx{u}$: $u^{(0)}_i\ (i=1,\ldots,{n_\mathrm{p}}+1)$ with the restriction $\sum_{i=1}^{{n_\mathrm{p}}+1}u^{(0)}_i=0$, which guarantees translational invariance. The spin functions used in this work do not contain any free parameters.
We only note here that for the positronium molecule, Ps$_2$, as a special case studied in this work, the entire basis function was additionally adapted to the charge-conjugation symmetry of the electrons and the positrons.\cite{Schrader04a,Schrader04b,Schrader07} The matrix elements of the kinetic and potential energy operators corresponding to the basis functions with natural parity, \ref{eq:basis}--\ref{eq:spbasis}, were evaluated with the pre-BO program according to Ref.~\citenum{MaRe12}. Since the basis functions are not orthogonal, we have to solve a generalized eigenvalue problem \begin{align} \mx{H}\mx{c}_i = E_i\mx{S}\mx{c}_i \ , \label{eq:rgeiv} \end{align} to find the variationally optimal linear combination of the basis functions with the linear combination coefficients $\mx{c}_i$ corresponding to the eigenvalue $E_i$. The generalized eigenproblem is solved by introducing $\mx{H}' = \mx{S}^{-1/2}\mx{H}\mx{S}^{-1/2}$ and $\mx{c}'_i = \mx{S}^{1/2}\mx{c}_i$, which simplifies the eigenvalue equation, \ref{eq:rgeiv}, to $\mx{H}'\mx{c}'_i= E_i\mx{c}'_i$. In our computations the Cholesky decomposition of $\mx{S}$ for the evaluation of $\mx{S}^{-1/2}$ as well as the diagonalization of the real, symmetric transformed Hamiltonian matrix, $\mx{H}'$, were carried out by using the LAPACK library routines.\cite{lapack} The computational efficiency and usefulness of the variational approach described depend on the parameterization of the basis functions (the choice of the values of $K_I$, $\mx{\alpha}_I$, $\mx{u}_I$ for the basis functions $I=1,\ldots,{N_\mathrm{b}}$). For bound-state calculations we adopted the stochastic variational approach\cite{KuKr77,AlMoSz86,AlMoSz87,AlMoSz88,SuVaBook98} for the optimization of the basis function parameters.
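As an illustration of this reduction, the following Python sketch solves $\mx{H}\mx{c}_i = E_i\mx{S}\mx{c}_i$ using a Cholesky factor of $\mx{S}$ in place of $\mx{S}^{-1/2}$; this is a congruence transformation, so the eigenvalues are unchanged. The small random matrices stand in for the actual pre-BO matrix elements, and this is not the production LAPACK code.

```python
import numpy as np

def solve_generalized(H, S):
    """Solve H c = E S c for symmetric H and symmetric positive definite S
    via the congruence transform H' = L^{-1} H L^{-T} with S = L L^T,
    mirroring the S^{-1/2} reduction described in the text."""
    L = np.linalg.cholesky(S)
    Linv = np.linalg.inv(L)
    Hp = Linv @ H @ Linv.T            # ordinary symmetric problem H' c' = E c'
    E, Cp = np.linalg.eigh(Hp)        # eigenvalues in ascending order
    C = np.linalg.solve(L.T, Cp)      # back-transform eigenvectors: c = L^{-T} c'
    return E, C

# small random test system; S is built to be positive definite (overlap-like)
rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
S = A @ A.T + n * np.eye(n)
B = rng.normal(size=(n, n))
H = B + B.T
E, C = solve_generalized(H, S)
# every column of C satisfies H c = E S c up to round-off
residual = float(np.abs(H @ C - S @ C * E).max())
```

Working with the Cholesky factor avoids forming $\mx{S}^{-1/2}$ explicitly while yielding exactly the same spectrum, which is why it is the standard route in LAPACK's generalized symmetric drivers.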
The (quasi-)optimization of the parameter set is a very delicate problem and our recipe includes the following details:\cite{MaRe12} (a) a system-adapted random number generator is constructed using a sampling-importance resampling strategy for the generation of trial parameter sets; (b) the acceptance criterion of the generated trial values is based on the linear independence condition and the energy minimization requirement (relying on the variational principle); (c) the selected parameters are fine-tuned using a simple random walk approach or Powell's method\cite{Po04}. Furthermore, once a parameter set has been selected or optimized for a system with some quantum numbers it can be used, ``transferred'', to parameterize basis functions for the same system with different quantum numbers (``parameter transfer approach''). It is important to emphasize that the basis functions are not transferred, since they have different mathematical form for different quantum numbers, but only the parameter set is taken over from one calculation to another. \subsection{Calculation of resonances} The bound-state pre-BO approach described was extended for the calculation of resonance states as follows. First of all, without any change of the computer program, we looked for the quasi-stabilization of higher-energy eigenvalues (higher than the lowest-energy threshold) of the real eigenvalue problem. This application of the stabilization technique\cite{HaTa70,Ho83,UsSu02} is a simple, practical test for identifying possible quasi-bound states and was found to be useful as a first check of the higher-energy eigenspectrum. 
By making full use of the stabilization theory both the resonance energies and widths could be calculated from consecutive diagonalization of the (real) Hamiltonian matrix corresponding to increasing number of basis functions, which cover increasing boxes of the configuration space.\cite{MaRaTa93,MaTaRyMo94,MuYaBu94,RyMoMaTa94} Instead of using this approach, the complex coordinate rotation method (CCRM) \cite{AgCo71,BaCo71,Si72,Rein82,Moi98} was implemented for the calculation of resonance parameters, energies and widths. The resonance parameters were determined by identifying stabilization points in the complex plane with respect to the coordinate rotation angle and (the size and parameterization of) the basis set. The localized real part of the eigenvalue, $\mr{Re}(\mathcal{E})$, was taken to be the resonance energy, and the imaginary part provided the resonance width, $\Gamma=-2\mr{Im}(\mathcal{E})$, which is inversely proportional to the lifetime, $\tau=\hbar/\Gamma$.\cite{KuKrHo88} \paragraph{Implementation of the complex coordinate rotation method for the Coulomb Hamiltonian} The complex scaling of the coordinates $r \rightarrow r\mr{e}^{\mr{i}\theta}$ translates to the replacement of the Hamiltonian $\hat{H} = \hat{T} + \hat{V}$ with \begin{align} \hat{\mathcal{H}}(\theta) = \mr{e}^{-2\mr{i}\theta} \hat{T} + \mr{e}^{-\mr{i}\theta} \hat{V} \ . \label{eq:Htheta} \end{align} % The matrix representation of $\hat{\mathcal{H}}(\theta)$ is constructed with the matrices of $\hat{T}$ and $\hat{V}$ evaluated by the pre-BO program \cite{MaRe12}. Then, the eigenvalue equation for $\hat{\mathcal{H}}(\theta)$ reads as \begin{align} \mx{\mathcal{H}}(\theta) \mx{v}_i(\theta) = \mathcal{E}_i(\theta) \mx{S}\mx{v}_i(\theta) \ , \label{eq:cHop} \end{align} where $\mx{S}$ is the overlap matrix of the (linearly independent) set of basis functions. 
$\mx{S}$ is eliminated from the equation similarly to the case of the real generalized eigenproblem, \ref{eq:rgeiv}, \begin{align} \mx{\mathcal{H}}'(\theta) \mx{v}'_i(\theta) = \mathcal{E}_i(\theta) \mx{v}'_i(\theta) \ , \label{eq:cHpmx} \end{align} with \begin{align} \mx{\mathcal{H}}'(\theta) &= \mr{e}^{-2\mr{i}\theta} \mx{T}' + \mr{e}^{-\mr{i}\theta} \mx{V}' \nonumber \\ &= \cos(2\theta) \mx{T}' + \cos(\theta)\mx{V}' -\mr{i}(\sin(2\theta)\mx{T}' + \sin(\theta)\mx{V}') \label{eq:cHpmx2} \end{align} and \begin{align} \mx{T}' = \mx{S}^{-1/2}\mx{T}\mx{S}^{-1/2} \quad\text{and}\quad \mx{V}' = \mx{S}^{-1/2}\mx{V}\mx{S}^{-1/2} \ . \end{align} % The complex symmetric eigenvalue problem, \ref{eq:cHpmx}, is solved using the LAPACK library routines.\cite{lapack} \clearpage \section{Numerical results\label{ch:numres}} The first numerical applications of our implementation were carried out for the notoriously non-adiabatic positronium anion, Ps$^-=\{\mr{e}^-,\mr{e}^-,\mr{e}^+\}$, and for the positronium molecule, Ps$_2=\{\mr{e}^-,\mr{e}^-,\mr{e}^+,\mr{e}^+\}$. The reason for the choice of these systems was of a technical nature: we observed in bound-state calculations\cite{MaRe12} that it was straightforward to find an appropriate parameterization of the basis set for the positronium complexes. Furthermore, comparison of the results with earlier calculations\cite{Papp02,LiSh05,UsSu02,SuUs04} allowed us to check the developed computational methods and gain experience in the localization of the real and imaginary parts of the complex eigenvalues using the complex coordinate rotation method. While for the bound-state calculations the basis function parameters were optimized for the lowest-energy level(s) using the variational principle, this handy optimization criterion was not available in the CCRM calculations. Thus, we used optimized bound-state basis sets and enlarged them with linearly independent basis functions for the estimation of the resonance parameters.
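A minimal numerical sketch of the CCRM working equations follows, with small random symmetric matrices standing in for the actual $\mx{T}'$ and $\mx{V}'$ (illustration only, not the pre-BO program): the complex-scaled matrix is assembled and diagonalized with a general dense eigensolver.

```python
import numpy as np

def ccrm_eigs(Tp, Vp, theta):
    """Eigenvalues of H'(theta) = e^{-2i theta} T' + e^{-i theta} V'.
    For a localized resonance eigenvalue, Re(E) is the resonance energy and
    Gamma = -2 Im(E) the width, with lifetime tau = hbar / Gamma."""
    Hc = np.exp(-2j * theta) * Tp + np.exp(-1j * theta) * Vp
    # equivalently: cos(2t) T' + cos(t) V' - i (sin(2t) T' + sin(t) V')
    return np.linalg.eigvals(Hc)

# random stand-ins: T' symmetric positive semi-definite, V' symmetric
rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n))
Tp = A @ A.T
B = rng.normal(size=(n, n))
Vp = -(B + B.T) / 10
# at theta = 0 the complex-scaled spectrum reduces to the real spectrum
E0 = np.sort(ccrm_eigs(Tp, Vp, 0.0).real)
Eref = np.linalg.eigvalsh(Tp + Vp)
```

In an actual calculation one recomputes the spectrum on a grid of $\theta$ values and looks for eigenvalues that are stationary with respect to $\theta$ and the basis set, exactly as in the stabilization plots discussed below.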
Next, we investigated the calculation of some of the excited states of the H$_2$ molecule. The construction of a reasonably good parameterization for the basis set has turned out to be a challenge. Nevertheless, we describe the essence of our observations and give calculated resonance energy values and approximate widths for the lowest-lying excited states beyond the $b\ ^3\Sigma_\mr{u}^+$ repulsive electronic state embedded in the H(1)+H(1) continuum. \subsection{Ps$^-$\label{sec:Psm}} In \ref{tab:resPsm} the bound- and resonance-state parameters ($N=0$, $p=+1$) obtained in this work are collected. The basis sets were generated using the energy minimization and the linear independence conditions with a random number generator set up following the strategy described in Ref.~\citenum{MaRe12}. The parameters for the largest basis sets used during the calculations are given in the Supporting Information. The generated basis sets were apparently large and flexible enough to obtain resonance states not only beyond the first, but also beyond the second and the third dissociation channels, which correspond to $\mr{Ps}(1)+\mr{e}^-$, $\mr{Ps}(2)+\mr{e}^-$, and $\mr{Ps}(3)+\mr{e}^-$, respectively. As to the accuracy, the (real) variational principle, directly applicable for bound states, is not useful for the assessment of the resonance parameters. Instead, we used benchmarks available in the literature resulting from extensive three-body calculations using Pekeris-type wave functions with one and two length scales\cite{LiSh05} and from the solution of the Faddeev--Merkuriev integral equations for three-body systems.\cite{Papp02} First of all, the present results and the literature data are in satisfactory agreement. Our results could certainly be improved by running more extensive calculations with larger basis sets.
Instead of going in this direction, a careful comparison is carried out with the reference data in order to learn about the accuracy and convergence behavior of our approach. The results are often in excellent agreement with the benchmarks, but in some cases the lifetimes are orders of magnitude off. It can be observed that the calculated lifetimes are worse when the real part of the complex energy was determined less accurately (and given to fewer significant digits in \ref{tab:resPsm}). The inaccuracy appears in both the real and the imaginary parts and is of about the same order of magnitude relative to the absolute value of the complex energy. Thus, if the widths are expected to be very small and the real part can be determined only to a few digits, the width should be considered only as a rough estimate of its exact value. This observation can be used later in this work also for the assessment of the calculations carried out for the four-particle Ps$_2$ and H$_2$. \begin{table} \caption{% Identified bound- and resonance-state energies and resonance widths, in E$_\mathrm{h}$, of Ps$^-=\lbrace \mr{e}^-,\mr{e}^-,\mr{e}^+\rbrace$.$^\mr{a}$ \label{tab:resPsm} ~\\[-1.cm] } \begin{center} \begin{tabular}{@{}c l@{\ \ \ }r l@{\ \ \ }r r@{}} \cline{1-6} \\[-0.4cm] \cline{1-6} \\[-0.3cm] % \multicolumn{1}{c}{$(N,p,S_-)$ $^\mr{b}$} & \multicolumn{1}{c}{$\mr{Re}(\mathcal{E})$ $^\mr{c}$} & \multicolumn{1}{c}{$\Gamma/2$ $^\mr{c}$} & % \multicolumn{1}{c}{$\mr{Re}(\mathcal{E}_\mr{Ref})$ $^\mr{d}$} & \multicolumn{1}{c}{$\Gamma_\mr{Ref}/2$ $^\mr{d}$} & % Ref.
\\ % \cline{1-6} \\[-0.3cm] $(0,+1,0)$ & $-0.262\ 005\ 070$\ $^\mr{e}$ & $0$\ $^\mr{e}$ & $-0.262\ 005\ 070$ & $0$ & \cite{Ko00} \\[0.15cm] % \cdashline{1-6} \\[-0.25cm] $(0,+1,0)$ & $-0.076\ 030\ 455$ & $2.152 \cdot 10^{-5}$ & $-0.076\ 030\ 442$ & $2.151\ 7 \cdot 10^{-5}$ & \cite{LiSh05} \\ % $(0,+1,0)$ & $-0.063\ 649\ 173$ & $4.369 \cdot 10^{-6}$ & $-0.063\ 649\ 175$ & $4.339\ 3 \cdot 10^{-6}$ & \cite{LiSh05} \\ % $(0,+1,0)$ & $-0.062\ 609$ & $2.5 \cdot 10^{-5}$ & $-0.062\ 550$ & $5.0 \cdot 10^{-7}$ & \cite{Papp02} \\[0.15cm] % \cdashline{1-6} \\[-0.25cm] $(0,+1,0)$ & $-0.035\ 341\ 85$ & $3.730 \cdot 10^{-5}$ & $-0.035\ 341\ 885$ & $3.732\ 9 \cdot 10^{-5}$ & \cite{LiSh05} \\ % $(0,+1,0)$ & $-0.029\ 845\ 70$ & $2.781 \cdot 10^{-5}$ & $-0.029\ 846\ 146$ & $2.635\ 6 \cdot 10^{-5}$ & \cite{LiSh05} \\ % $(0,+1,0)$ & $-0.028\ 271$ & $1.8 \cdot 10^{-5}$ & $-0.028\ 200$ & $7.5 \cdot 10^{-6}$ & \cite{Papp02} \\[0.15cm] % \cdashline{1-6} \\[-0.25cm] $(0,+1,0)$ & $-0.020\ 199$ & $8.800 \cdot 10^{-5}$ & $-0.020\ 213\ 921$ & $6.502\ 6 \cdot 10^{-5}$ & \cite{LiSh05} \\ % \cline{1-6} \\[-0.3cm] \cdashline{1-6} \\[-0.25cm] $(0,+1,1)$ & $-0.063\ 537\ 352$ & $2.132 \cdot 10^{-9}$ & $-0.063\ 537\ 354$ & $1.570\ 0 \cdot 10^{-9}$ & \cite{LiSh05} \\ % $(0,+1,1)$ & $-0.062\ 591$ & $2.6 \cdot 10^{-7}$ & $-0.062\ 550$ & $2.5 \cdot 10^{-10}$ & \cite{Papp02} \\[0.15cm] % \cdashline{1-6} \\[-0.25cm] $(0,+1,1)$ & $-0.029\ 369\ 87$ & $1.300 \cdot 10^{-7}$ & $-0.029\ 370\ 687$ & $9.395\ 0 \cdot 10^{-8}$ & \cite{LiSh05} \\ % $(0,+1,1)$ & $-0.028\ 21$ & $1.9 \cdot 10^{-5}$ & $-0.028\ 05$ & $5.0 \cdot 10^{-8}$ & \cite{Papp02} \\[0.15cm] % \cdashline{1-6} \\[-0.25cm] $(0,+1,1)$ & $-0.017\ 071$ & $6.710 \cdot 10^{-6}$ & $-0.017\ 101\ 172$ & $3.560\ 9 \cdot 10^{-7}$ & \cite{Papp02} \\ % \cline{1-6} \\[-0.4cm] \cline{1-6} \\[-0.3cm] \end{tabular} \end{center} \begin{flushleft} ~\\[-0.5cm] $^\mr{a}$ % The dissociation threshold energies, in E$_\mathrm{h}$, accessible for both the $S_-=0$ and 1 states 
are $E(\mr{Ps}(1))=-1/4=-0.25$, $E(\mr{Ps}(2))=-1/16=-0.062\ 5$, and $E(\mr{Ps}(3))=-1/36=-0.027\ \dot{7}$.\\ % $^\mr{b}$ % $N,p,$ and $S_-$: total spatial angular momentum quantum number, parity, and total spin quantum number of the electrons, respectively. \\ % $^\mr{c}$ % $\mr{Re}(\mathcal{E})$ and $\Gamma$: resonance energy and width with $\Gamma/2=-\mr{Im}(\mathcal{E})$ calculated in this work. \\ % $^\mr{d}$ % $\mr{Re}(\mathcal{E}_\mr{Ref})$ and $\Gamma_\mr{Ref}$: resonance energy and width with $\Gamma/2=-\mr{Im}(\mathcal{E}_\mr{Ref})$ taken from Refs.~\citenum{Papp02} and \citenum{LiSh05}. \\ % $^\mr{e}$ % Bound state. \end{flushleft} \end{table} \clearpage \subsection{Ps$_2$\label{sec:Ps2}} Our next test case was the four-particle positronium molecule, Ps$_2=\lbrace \mr{e}^-,\mr{e}^-,\mr{e}^+,\mr{e}^+ \rbrace$. Resonances of the positronium molecule have recently attracted attention \cite{DiDr10,SuUs04,UsSu02}. Ps$_2$ has few bound states, and thus a detailed spectroscopic investigation of its structure and dynamics is possible only through the detection and analysis of its quasi-bound states. In our list of numerical examples, the positronium molecule is unique because, in addition to the spatial symmetries, its Hamiltonian is invariant under the conjugation of the electric charges. In order to account for this additional property, the basis functions, \ref{eq:basis}--\ref{eq:spbasis}, were adapted also to the charge conjugation symmetry.\cite{SuUs00,SuUs04,Schrader04a,Schrader04b,Schrader07} As a result, the total symmetry-adapted basis functions and also the calculated wave functions are not necessarily eigenfunctions for the total spin angular momentum of the electrons or that of the positrons, $\hat{S}^2_-$ or $\hat{S}^2_+$. Nevertheless, the total spatial angular momentum quantum number, $N$, the parity, $p$, as well as the charge-conjugation parity, $c=+1$ or $-1$, are always good quantum numbers. 
The parameterization strategy of the basis set was similar to that used for Ps$^-$: we employed (a) the energy minimization condition for the lowest-energy eigenvalue; and (b) the linear independence condition for the generation of new basis function parameters. The parameter sets used in the largest calculations are given in the Supporting Information. The bound and resonance states calculated with $N=0$ total spatial angular momentum quantum number and $p=+1$ parity are collected in \ref{tab:resonPs2}. Considering all possible charge conjugation and spin functions, we obtained only three bound states, in agreement with the literature.\cite{BuAd06,UsSu02,SuUs04} Two of the three calculated bound states substantially improve on the best available results.\cite{SuUs04} It is interesting to note that the bound state with $E=-0.314\ 677\ 072$~E$_\mathrm{h}$\ ($c=-1$ and $(S_-,S_+)=(1,1)$) is bound only due to the charge-conjugation symmetry of the electrons and the positrons. The localization of the energy and width for the lowest-energy resonance state of Ps$_2$ is shown in \ref{fig:Ps2res1}. The calculated resonance positions are in good agreement with the literature. In some cases our results might even improve on the best available data,\cite{UsSu02,SuUs04} although there is no direct criterion, analogous to the variational principle for bound states, for assessing the accuracy of the resonance parameters. \begin{figure} \includegraphics[scale=1.]{figure02.eps} \caption{% Localization of the parameters for the lowest-energy resonance state of $\mr{Ps}_2$ with $N=0$, $p=+1$, $c=+1$, and $S_\mr{-}=0$, $S_\mr{+}=0$. The stabilization of the trajectories with respect to the rotation angle (circles) and the basis functions (colors) is shown. The stabilization point is located at $(\mr{Re}(\mathcal{E}),\mr{Im}(\mathcal{E})) = (-0.329\ 38,-0.003\ 03)$~E$_\mathrm{h}$.
\label{fig:Ps2res1} } \end{figure} \begin{table} \caption{% Identified bound- and resonance-state energies and resonance widths, in E$_\mathrm{h}$, of Ps$_2=\lbrace \mr{e}^-,\mr{e}^-,\mr{e}^+,\mr{e}^+\rbrace$.$^\mr{a}$ \label{tab:resonPs2} ~\\[-1.cm] } \begin{center} \begin{tabular}{@{}c@{ }c l@{\ \ }r l@{\ \ }r r@{}} \cline{1-7} \\[-0.4cm] \cline{1-7} \\[-0.3cm] % \multicolumn{1}{c}{$(N,p,c)$ $^\mr{b}$} & \multicolumn{1}{c}{$(S_-,S_+)$ $^\mr{c}$} & \multicolumn{1}{c}{$\mr{Re}(\mathcal{E})$ $^\mr{d}$} & \multicolumn{1}{c}{$\Gamma/2$ $^\mr{d}$} & % \multicolumn{1}{c}{$\mr{Re}(\mathcal{E}_\mr{Ref})$ $^\mr{e}$} & \multicolumn{1}{c}{$\Gamma_\mr{Ref}/2$ $^\mr{e}$} & % \multicolumn{1}{c}{Ref.} \\ % \cline{1-7} \\[-0.3cm] % $(0,+1,+1)$ & $(0,0)$ & $-0.516\ 003\ 789\ 741$\ $^\mr{f}$ & $0$\ $^\mr{f}$ & $-0.516\ 003\ 790\ 416$ & $0$ & \cite{BuAd06} \\ % $(0,+1,+1)$ & $(0,0)$ & $-0.329\ 38$ & $3.03 \cdot 10^{-3}$ & $-0.329\ 4$ & $3.1 \cdot 10^{-3}$ & \cite{SuUs04} \\ % $(0,+1,+1)$ & $(0,0)$ & $-0.291\ 7$ & $2.5 \cdot 10^{-3}$ & $-0.292\ 4$ & $1.95 \cdot 10^{-3}$ & \cite{SuUs04} \\ % \cline{1-7} \\[-0.3cm] \color{gray} $(0,+1,-1)$ & $(0,0)$ & $-0.314\ 677\ 072$\ $^\mr{f}$ & $0$\ $^\mr{f}$ & $-0.314\ 673\ 3$ & $0$ & \cite{SuUs04} \\ % $(0,+1,-1)$ & $(0,0)$ & $-0.289\ 789\ 3$ & $7.7 \cdot 10^{-5}$ & $-0.289\ 76$ & $7 \cdot 10^{-5}$ & \cite{SuUs04} \\ % $(0,+1,-1)$ & $(0,0)$ & $-0.279\ 25$ & $2.3 \cdot 10^{-4}$ & $-0.279\ 13$ & $1 \cdot 10^{-4}$ & \cite{SuUs04} \\ % % \color{black}\\[-0.5cm] \cline{1-7} \\[-0.3cm] \color{gray} $(0,+1,+1)$ & $(1,1)$ & $-0.277\ 2$ & $5.4 \cdot 10^{-4}$ & $-0.276\ 55$ & $1.55 \cdot 10^{-4}$ & \cite{SuUs04} \\ % \color{black}\\[-0.5cm] \cline{1-7} \\[-0.3cm] $(0,+1,-1)$ & $(1,1)$ & $-0.309\ 0$ & $5.7 \cdot 10^{-3}$ & $-0.308\ 14$ & $1.2 \cdot 10^{-4}$ & \cite{SuUs04} \\ % $(0,+1,-1)$ & $(1,1)$ & $-0.273\ 3$ & $2.3 \cdot 10^{-3}$ & $-0.273\ 6$ & $8.5 \cdot 10^{-4}$ & \cite{SuUs04} \\ % \cline{1-7} \\[-0.3cm] \color{gray} $(0,+1,\pm 1)$ & $(1,0)/(0,1)$ 
& $-0.330\ 287\ 505$\ $^\mr{f}$ & $0$\ $^\mr{f}$ & $-0.330\ 276\ 81$ & $0$ & \cite{SuUs04} \\ % $(0,+1,\pm 1)$ & $(1,0)/(0,1)$ & $-0.294\ 3$ & $3.1 \cdot 10^{-3}$ & $-0.293\ 9$ & $2.15 \cdot 10^{-3}$ & \cite{SuUs04} \\ % $(0,+1,\pm 1)$ & $(1,0)/(0,1)$ & $-0.282$ & $2 \cdot 10^{-3}$ & $-0.282\ 2$ & $8.5 \cdot 10^{-4}$ & \cite{SuUs04} \\ % \color{black}\\[-0.5cm] \cline{1-7} \color{black}\\[-0.4cm] \cline{1-7} \color{black}\\[-0.3cm] \end{tabular} \end{center} \begin{flushleft} ~\\[-0.5cm] % \color{black} $^\mr{a}$~% For the five symmetry blocks with different $(N,p,c)$ quantum numbers and $(S_-,S_+)$ labels the lowest accessible thresholds are Ps(1S)+Ps(1S), {\color{gray} Ps(1S)+Ps(2P),} {\color{gray} Ps(1S)+Ps(2P),} Ps(1S)+Ps(1S), {\color{gray} Ps(1S)+Ps(2S,2P),} respectively.\cite{SuUs00} The corresponding energies, in E$_\mathrm{h}$, are $E(\mr{Ps(1)+Ps(1)})=-1/2=-0.5$ and {\color{gray} $E(\mr{Ps(1)+Ps(2)})=-5/16=-0.312\ 5$.} (The black and gray coloring is used to help the orientation.) \\ % $^\mr{b}$~% $N,p,$ and $c$: total spatial angular momentum quantum number, parity, and charge conjugation quantum number, respectively. \\ % $^\mr{c}$~% $S_-$ and $S_+$: total spin quantum numbers for the electrons and the positrons, respectively. In the last symmetry block, $(S_-,S_+)=(0,1)$ and $(S_-,S_+)=(1,0)$, are not good quantum numbers because these spin states are coupled due to the charge-conjugation symmetry of the Hamiltonian. \\ % $^\mr{d}$ % $\mr{Re}(\mathcal{E})$ and $\Gamma$: resonance energy and width with $\Gamma/2=-\mr{Im}(\mathcal{E})$ calculated in this work. \\ % $^\mr{e}$ % $\mr{Re}(\mathcal{E}_\mr{Ref})$ and $\Gamma_\mr{Ref}$: resonance energy and width with $\Gamma/2=-\mr{Im}(\mathcal{E}_\mr{Ref})$ taken from Ref.~\citenum{SuUs04}. \\ % $^\mr{f}$~% Bound states. 
\end{flushleft} \end{table} \clearpage \subsection{Toward the calculation of rovibronic resonances of H$_2$} Next, our goal was to explore how the lowest-lying resonance states of H$_2$ can be calculated in a pre-Born--Oppenheimer quantum mechanical approach. It could be anticipated that one of the major challenges in this undertaking would be the parameterization of the basis functions, which was already more demanding for H$_2$ than for Ps$_2$ in the bound-state calculations.\cite{MaRe12} In the bound-state calculations the optimized parameters were fine-tuned in repeated cycles. The entire parameter selection and optimization procedure relied on the variational principle and the minimization of the energy. According to the spatial and permutational symmetry properties of the H$_2$ molecule, there are four different blocks with natural parity \begin{itemize} \item[B1: ] ``$X\ ^1\Sigma_\mr{g}^+$ block'': $N\geq 0,\ p=(-1)^N,\ S_\mr{p}=(1-p)/2,\ S_\mr{e}=0$; \item[B2: ] ``$B\ ^1\Sigma_\mr{u}^+$ block'': $N\geq 0,\ p=(-1)^N,\ S_\mr{p}=(1+p)/2,\ S_\mr{e}=0$; \item[B3: ] ``$a\ ^3\Sigma_\mr{g}^+$ block'': $N\geq 0,\ p=(-1)^N,\ S_\mr{p}=(1-p)/2,\ S_\mr{e}=1$; \item[B4: ] ``$b\ ^3\Sigma_\mr{u}^+$ block'': $N\geq 0,\ p=(-1)^N,\ S_\mr{p}=(1+p)/2,\ S_\mr{e}=1$, \end{itemize} which can be calculated in independent runs with our computer program using basis functions with the appropriate quantum numbers, \ref{eq:basis}--\ref{eq:spbasis}. The lowest-energy levels of the first three blocks correspond to bound states, while the last block starts with the H(1)+H(1) continuum. In the BO picture the $b\ ^3\Sigma_\mr{u}^+$ electronic state is repulsive \cite{KoRy90} and does not support any bound rotational-vibrational levels (see \ref{fig:orientH2} and \ref{tab:boundH2}). Thus, one of our goals was the identification of the lowest-energy quasi-bound states in the $b\ ^3\Sigma_\mr{u}^+$ block.
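The quantum-number pattern of the four blocks listed above can be enumerated in a few lines; the sketch below (a hypothetical Python helper, not part of our computer program) reproduces the $(N,p,S_\mr{p},S_\mr{e})$ combinations used in the calculations for $N=0,1,$ and 2:

```python
# Enumerate the (N, p, S_p, S_e) quantum-number combinations of the four
# natural-parity symmetry blocks B1-B4 of H2 (p = (-1)^N; S_p and S_e are
# the total proton and electron spin quantum numbers).
def natural_parity_blocks(n_max):
    spin_rules = {
        "B1": lambda p: ((1 - p) // 2, 0),  # "X 1Sigma_g+" block
        "B2": lambda p: ((1 + p) // 2, 0),  # "B 1Sigma_u+" block
        "B3": lambda p: ((1 - p) // 2, 1),  # "a 3Sigma_g+" block
        "B4": lambda p: ((1 + p) // 2, 1),  # "b 3Sigma_u+" block
    }
    blocks = {}
    for name, spin_rule in spin_rules.items():
        combos = []
        for N in range(n_max + 1):
            p = (-1) ** N            # natural parity
            S_p, S_e = spin_rule(p)  # proton and electron total spins
            combos.append((N, p, S_p, S_e))
        blocks[name] = combos
    return blocks

print(natural_parity_blocks(2)["B4"])  # [(0, 1, 1, 1), (1, -1, 0, 1), (2, 1, 1, 1)]
```

For instance, the B4 entry with $N=1$ gives $(1,-1,0,1)$, matching the quantum numbers listed later for the $b\ ^3\Sigma_\mr{u}^+$ block.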
During the calculation of the lowest-energy resonances of Ps$_2$, the basis function parameters were generated randomly using a system-adapted random number generator.\cite{MaRe12} Unfortunately, this simple strategy was not useful for the H$_2$ resonances. The energy minimization criterion for the lowest (few) eigenstates was not useful either, since it resulted only in the accumulation of functions near the H(1)+H(1) limit, the lowest energy levels in the $b\ ^3\Sigma_\mr{u}^+$ block, while the higher-lying quasi-bound states were not at all described by the basis sets generated in this way. Our alternative working strategy was therefore the parameter-transfer approach (described in Section ``Theory and computational strategy'' and in Ref.~\citenum{MaRe12}). In this approach a parameter set optimized for a bound state with one set of quantum numbers, ``state $\mathcal{A}$'', is used to parameterize the basis functions corresponding to another set of quantum numbers, used to calculate ``state $\mathcal{B}$''. It is important to emphasize that the mathematical form of the basis functions is defined by the selected values of the quantum numbers, \ref{eq:basis}--\ref{eq:spbasis}, and thus not the basis functions but only the parameters are transferred from one calculation to another. Our qualitative understanding suggests that this parameter-transfer strategy is computationally useful if the internal structures of ``state $\mathcal{A}$'' and ``state $\mathcal{B}$'' are more or less similar. Inspecting the orientation chart of H$_2$, \ref{fig:orientH2}, our idea was that the combination of the (natural-parity) bound-state optimized parameter sets could provide a parameterization good enough for the identification of the lowest-lying resonance states embedded in the $b\ ^3\Sigma_\mr{u}^+$ continuum.
For this purpose, we used the parameters of $2\ 250$ basis functions optimized for the lowest-lying bound states with $N=0$ and $1$ angular momentum quantum numbers corresponding to the $X\ ^1\Sigma_\mr{g}^+$, $B\ ^1\Sigma_\mr{u}^+$, and $a\ ^3\Sigma_\mr{g}^+$ blocks using the sampling-importance resampling strategy of Ref.~\citenum{MaRe12} and Powell's method\cite{Po04} for the fine-tuning of each basis function. As a result of these calculations, we obtained a parameter set large enough for $6\times 2\ 250=13\ 500$ basis functions. In addition, $1\ 000$ basis functions each were generated and less tightly optimized for the lowest-energy levels of the $b\ ^3\Sigma_\mr{u}^+$ block with $N=0$ and 1. Using this large parameter set, $\mathcal{P}_L$, $15\ 500$ basis functions were constructed for all possible quantum numbers of the four blocks, B1--B4, with $N=0,1,$ and 2. In each case the resulting basis set was found to be linearly independent. The complete parameter set is given in the Supporting Information. The proton-electron ratio was $m_\mr{p}/m_\mr{e}=1\,836.152\,672\,47$ throughout the calculations.\cite{codata06} \vspace{0.1cm} \paragraph{Bound-state energy levels} The lowest-lying energy values obtained with $\mathcal{P}_L$ for the different quantum numbers are collected in \ref{tab:boundH2}, and are in good agreement with the best available non-relativistic results in the literature. The energy values of the $X\ ^1\Sigma_\mr{g}^+$ electronic and vibrational ground states with $N=0,1,$ and $2$ are larger than the theoretical results of Ref.~\citenum{PaKo09} by less than $2$~nE$_\mathrm{h}$. For all three calculated $B\ ^1\Sigma_\mr{u}^+$ $N=0,1,2$ levels our energy values are lower by more than $1\ \mu\mr{E}_\mr{h}$ compared to the results of Ref.~\citenum{WoOrSt06} obtained in close-coupling calculations using adiabatic potential energy curves and non-adiabatic couplings for six electronic states.
We also note that for the $N=0$ lowest-lying vibrational level of $B\ ^1\Sigma_\mr{u}^+$ there is a ``variational-perturbational'' estimate given in Table~3 of Ref.~\citenum{WoOrSt06}, which was anticipated to be more accurate and thus was the recommended value for this level in that article, though it is not a strict upper bound to the exact value. It was obtained not from the six-state close-coupling calculations, but from two-state close-coupling calculations with the potentials and couplings of Ref.~\citenum{WoOrSt06}, incremented with a non-adiabatic correction term.\cite{WoDr92} This term value translates to the energy value $-0.753\ 026\ 440$\ E$_\mathrm{h}$\ based on the explanation given below Eq.~(13) of Ref.~\citenum{WoOrSt06}. An earlier non-adiabatic estimate\cite{Wo95} (not an upper bound) for this energy level was $-0.753\ 027\ 31$\ E$_\mathrm{h}$, calculated using the adiabatic energy and a correction to the BO potential\cite{DrWo95} incremented by a non-adiabatic correction.\cite{WoDr92} For comparison, the pre-Born--Oppenheimer energy calculated in the present work (in a fully variational procedure) is $-0.753\ 027\ 186$\ E$_\mathrm{h}$\ (\ref{tab:boundH2}). In the case of the $a\ ^3\Sigma_\mr{g}^+$ $N=1,2$ energy levels, the energy values obtained in this work are lower than the lowest energy values published.\cite{Wo07} Based on this overview, we can conclude that the parameter set, $\mathcal{P}_L$, performs well for the lowest-lying bound-state energy levels and also contains basis functions optimized for an approximate description of the H(1)+H(1) continuum. We can thus hope that the application of this parameter set in the CCRM calculations will be useful for the description of the related or energetically nearby-lying quasi-bound states.
\vspace{0.1cm} \paragraph{Electronically excited bound and resonance rovibronic states} In the orientation chart of H$_2$, \ref{fig:orientH2}, the electronic states known from the literature\cite{Herzberg1,BrCa03} are collected below the H(1)+H(2) dissociation threshold (only natural-parity states are considered). Although in our calculations there are no potential energy curves corresponding to electronic states, these conventional electronic-state labels help the orientation and the referencing of the calculated energy levels. In the figure, those states which are coupled by symmetry and calculated in the same block are highlighted similarly (green or red color and oval or rectangular marking), corresponding to the B1--B4 blocks introduced earlier in this section. This coupling is included in the calculations automatically by specifying the total spatial (orbital plus rotational) angular momentum, parity, and spin quantum numbers. The empty ellipses and rectangles indicate bound states, while the shaded symbols indicate resonance states embedded in their corresponding lowest-lying continuum (here: H(1)+H(1)). We carried out calculations in all four blocks, B1--B4, with $N=0,1,$ and 2 total spatial angular momentum quantum numbers, and most of the states indicated in \ref{fig:orientH2} could be identified using the largest parameter set, $\mathcal{P}_L$. Unfortunately, the accuracy of the calculated energies often did not reach spectroscopic accuracy,\cite{spectracc} and thus we collect here only the essence of the calculations. First of all, the most important qualitative results can be explained by inspecting \ref{fig:resonH2} prepared for the ``$X\ ^1\Sigma_\mr{g}^+$ block'' and for the ``$b\ ^3\Sigma_\mr{u}^+$ block'', B1 and B4.
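Before turning to the computed spectra, the basic mechanism of the complex coordinate rotation can be illustrated with a toy numerical sketch (an assumption for illustration only: a free particle discretized on a finite-difference grid, not the explicitly correlated Gaussian basis of the actual calculations). Under $r\rightarrow r\,\mathrm{e}^{\mathrm{i}\theta}$ the kinetic energy acquires a factor $\mathrm{e}^{-2\mathrm{i}\theta}$, so the discretized continuum eigenvalues rotate into the lower half of the complex plane by the angle $2\theta$, while bound and resonance eigenvalues stabilize with respect to $\theta$; this rotation of the continua is the behavior visible in \ref{fig:resonH2}:

```python
# Toy illustration of the complex coordinate rotation method (CCRM):
# under r -> r e^{i*theta} the kinetic energy acquires a factor
# e^{-2i*theta}, so discretized continuum eigenvalues rotate into the
# lower half of the complex plane by the angle 2*theta.
import numpy as np

n, dx, theta = 200, 0.1, 0.05
# Second-order finite-difference kinetic-energy matrix (particle in a box).
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * dx**2)
H_theta = np.exp(-2j * theta) * T  # complex-scaled (free-particle) Hamiltonian
eigvals = np.linalg.eigvals(H_theta)
# Every discretized continuum eigenvalue lies on the ray arg(E) = -2*theta.
print(np.allclose(np.angle(eigvals), -2.0 * theta))  # True
```

In the full problem a potential term is also scaled, $V(r\,\mathrm{e}^{\mathrm{i}\theta})$, and resonance eigenvalues are exposed in the lower half plane once the continua have rotated past them.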
\ref{fig:resonH2} shows a part of the eigenspectrum of the complex scaled Hamiltonian, $\mathcal{H}(\theta)$ of \ref{eq:Htheta}, corresponding to small $\theta$ values, $\theta\in[0.005,0.065]$, and the $[-1.2,-0.5]$~E$_\mathrm{h}$\ interval of the real part of the eigenvalues. In both cases the onset of the H(1)+H(1) continuum can be observed on the real axis at $-0.999\ 455\ 679\ \mr{E}_\mr{h}$. In the $X\ ^1\Sigma_\mr{g}^+$ block the bound rovibrational energy levels assignable to the $X\ ^1\Sigma_\mr{g}^+$ electronic state line up on the real axis with $\mr{Im}(\mathcal{E})=0$ (deviations from this value are due to the incomplete convergence only, and the estimated stabilization points are on the real axis). In the $b\ ^3\Sigma_\mr{u}^+$ block, however, we find no states below the H(1)+H(1) continuum, in agreement with the BO calculations.\cite{KoRy90} The next threshold, H(1)+H(2), corresponds to the energy value $-0.624\ 659\ 800\ \mr{E}_\mr{h}$. Beyond the H(1)+H(1) but below the H(1)+H(2) threshold, stabilization points (with respect to the $\theta$ rotation angle and the basis set) were observed with small negative imaginary values, which were assigned (based on the real parts of the eigenvalues) to rotational-vibrational levels corresponding to the electronically excited states in their symmetry blocks (see \ref{fig:resonH2} and also \ref{fig:orientH2}). The lack of any stabilization points beyond the H(1)+H(2) threshold can be explained by the limited size and flexibility of the basis set. It can be observed in \ref{fig:resonH2} that in the $b\ ^3\Sigma_\mr{u}^+$ block a group of stabilization points appears for $N=1$ and $2$, which is not present for $N=0$. These points for $N=1$ and $2$ were assigned to the rotational-vibrational states with $R=0$ and 1 rotational angular momentum quantum numbers of the $c\ ^3\Pi_\mr{u}^+$ electronic state, respectively.
This result demonstrates that the coupling of the rotational and orbital angular momenta is automatically included in the calculations by specifying only the total spatial angular momentum quantum number, $N$. Finally, we note that the H(1)+H(1) continuum couples neither to the ``$B\ ^1\Sigma_\mr{u}^+$ block'' nor to the ``$a\ ^3\Sigma_\mr{g}^+$ block''; in these cases the lowest-lying continuum corresponds to the H(1)+H(2) dissociation channel (\ref{fig:orientH2}). \vspace{0.1cm} \paragraph{Numerical results for the resonance energies and widths} In \ref{tab:resonH2} numerical results are given for the ``$b\ ^3\Sigma_\mr{u}^+$ block'', B4, for the lowest-lying rotational-vibrational levels with $N=0,1$, and 2 corresponding to the $e\ ^3\Sigma_\mr{u}^+$ ($N=0,1,2$) and to the $c\ ^3\Pi_\mr{u}^+$ ($N=1,2$) electronic-state labels. These rovibronic levels are embedded in the H(1)+H(1) continuum, and thus they are considered rovibronic resonances. The real energy values are in satisfactory agreement with the experimental results.\cite{Di58} We consider, however, the given imaginary parts as estimates of their accurate values, and all we can conclude at this point is that the obtained imaginary parts are of the order of $10^{-7}$~E$_\mathrm{h}$, which corresponds to a predissociative lifetime, $\tau=\hbar/\Gamma$, of the order of $0.2$~ns. As explained earlier, it is difficult to assess the accuracy of the calculated resonance energies and widths, since there is no simple criterion analogous to the (real) variational principle for bound states. According to our observations in the test calculations for Ps$^-$ (\ref{tab:resPsm}), the relative errors in the real and imaginary parts with respect to the absolute value of the complex energy are similar. The accuracy of the pre-BO real energy values can thus be estimated here by comparison with the available experimental results.
This observation also indicates that the resonance widths given in \ref{tab:resonH2} should be considered rough estimates. Theoretical energy values calculated by Ko{\l}os and Rychlewski\cite{KoRy77,KoRy90} are available in the literature and are also cited in \ref{tab:resonH2}. In Ref.~\citenum{KoRy90} adiabatic rotational-vibrational energy levels were determined for the $e\ ^3\Sigma_\mr{u}^+$ electronic state by calculating an accurate adiabatic potential energy curve and solving the corresponding rotational-vibrational Schr\"odinger equation. The theoretical reference value for the $c\ ^3\Pi_\mr{u}^+, R=0, v=0$ level was taken from Ref.~\citenum{KoRy77}; it was obtained by the calculation of an accurate BO potential energy curve and the solution of the corresponding vibrational Schr\"odinger equation. The resulting BO energy could furthermore be corrected for the $\Lambda$-doubling, but the numerical value of this correction term was not clearly identifiable in Ref.~\citenum{KoRy77}. It would be interesting to see whether nonadiabatic corrections can be calculated for these levels, for example using the recently developed nonadiabatic perturbation theory of Pachucki and Komasa.\cite{PaKo08,PaKo09} The lifetimes of rotational-vibrational levels corresponding to the $e\ ^3\Sigma_\mr{u}^+$ state were measured in delayed coincidence experiments,\cite{KiSaAdPaSt99} which include both the radiative and predissociative decay channels accessible from these levels.
In the same work the competition of the two decay channels was investigated using ab initio methods (full configuration interaction electronic wave functions built from Gaussian-type orbitals and an accurate adiabatic potential energy curve\cite{KoRy90}) as well as quantum defect theory with a one-channel approximation.\cite{KiSaAdPaSt99} According to these calculations the predissociative lifetimes are of the order of $1\ \mu$s and $100$~ns for the $v=0$ and $v=1$ vibrational levels, respectively, of the $e\ ^3\Sigma_\mathrm{u}^+$ state with $N=0$. The lifetimes of the lowest rotational-vibrational levels of the $c\ ^3\Pi_\mathrm{u}^+$ state were calculated using a simple perturbative model, which included the orbit-rotation interaction and used several approximations during the calculations.\cite{ChBh79} In a similar perturbative treatment\cite{CoBr85} the orbit-rotation coupling operator was included and accurate BO potential energy curves were used to describe the $b\ ^3\Sigma_\mathrm{u}^+$ and $c\ ^3\Pi_\mathrm{u}^+$ states.\cite{KoWo65,KoRy77} According to both the lifetime measurements\cite{MeVo78,BrNeLo84} and the calculations\cite{ChBh79,CoBr85} the predissociative lifetimes of the lowest-lying rotational-vibrational levels of $c\ ^3\Pi_\mr{u}^+$ are of the order of 1~ns. Unfortunately, calculated energy levels were not reported in any of these theoretical works\cite{ChBh79,CoBr85,KiSaAdPaSt99} on the predissociative lifetimes of the $e\ ^3\Sigma_\mr{u}^+$ and $c\ ^3\Pi_\mathrm{u}^+$ states, which makes the comparison of the results more difficult. To pinpoint the resonance energies, and especially the widths, of the rovibronic levels within the rigorous pre-BO framework developed in the present work, further calculations are necessary.
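As a numerical aside, the width-to-lifetime conversion used throughout this discussion, $\tau=\hbar/\Gamma$, is easy to check; the sketch below assumes a representative width of $10^{-7}$~E$_\mathrm{h}$ (the order of magnitude quoted above for our imaginary parts) and CODATA values for the physical constants:

```python
# Convert a resonance width given in hartree to a predissociative
# lifetime via tau = hbar / Gamma (SI constants, CODATA values).
HBAR = 1.054571817e-34       # J s
HARTREE = 4.3597447222e-18   # J

def lifetime_ns(gamma_hartree):
    """Lifetime in nanoseconds for a width Gamma given in hartree."""
    return HBAR / (gamma_hartree * HARTREE) * 1e9

# A width of the order of 1e-7 E_h gives a lifetime of the order of 0.2 ns.
print(round(lifetime_ns(1e-7), 2))  # 0.24
```

The same conversion reproduces the microsecond- and nanosecond-scale lifetimes quoted from the literature when the corresponding widths are inserted.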
Nevertheless, the results shown in \ref{tab:resonH2} improve on the best available theoretical values for energy levels published in the literature.\cite{KoRy77,KoRy90} Furthermore, our primary goal was also achieved: it was demonstrated that in pre-BO calculations (a) electronically excited rovibronic levels are accessible; and (b) there are excited rovibronic levels which are described as bound states in BO theory but appear as resonances in a pre-BO description, i.e., if the introduction of the BO approximation is completely avoided. \vspace{0.1cm} \paragraph{How to improve on the present results?} First of all, one of the lessons of the present study is that the efficient parameterization of the basis set is one of the main challenges in the calculation of rovibronic resonances in pre-BO theory. We have shown that the random generation of parameters can be improved using the parameter-transfer approach, provided that there are bound states with an internal structure comparable to that of the quasi-bound states to be calculated. In the case of H$_2$ the presented calculations could be improved by the tight optimization of parameter sets for the lowest-lying bound states with unnatural parity, $p=(-1)^{N+1}$, and by including these parameters as well in an extended parameter set. Another technically straightforward, but computationally more demanding, option is the optimization of parameters not only for the lowest-lying states of a symmetry block but for more (or all) vibrational and vibronic excited bound states, \emph{e.g.}, for all (ro)vibrational states of $X\ ^1\Sigma_\mr{g}^+$ up to the H(1)+H(1) threshold, or for all the bound (ro)vibronic energy levels corresponding to the ``$B\ ^1\Sigma_\mr{u}^+$ block'' as well as to the ``$a\ ^3\Sigma_\mr{g}^+$ block'' up to their lowest-lying correlating threshold, H(1)+H(2) (see \ref{fig:orientH2}).
Finally, a more generally applicable solution to the parameterization problem of resonance states would be the development of a useful and practical application of the complex variational principle for resonances.\cite{Moi98} \begin{figure} \includegraphics[width=0.9\linewidth]{figure03.eps} \caption{% Orientation chart for the electronic states of H$_2$ below the H(1)+H(2) dissociation threshold (see for example Herzberg\cite{Herzberg1} or Brown and Carrington\cite{BrCa03}). The same color (red or green) and shape (rectangle or ellipse) coding indicate those states, which can be obtained in the same pre-Born--Oppenheimer calculation. Empty objects indicate bound states, while filled objects refer to the fact that the corresponding rovibronic states (if there are any) are resonances embedded in the H(1)+H(1) continuum. \label{fig:orientH2} } \end{figure} \clearpage \begin{table} \caption{% Assessment of the basis set parameterization: the lowest-lying bound-state energies. \\[-0.6cm] \label{tab:boundH2} } \begin{tabular}{@{}c cr cc@{}} \cline{1-5} \\[-0.4cm] \cline{1-5} \\[-0.3cm] \multicolumn{1}{c}{$(N,p,S_\mr{p},S_\mr{e})$ $^\mr{a}$} & \multicolumn{1}{c}{$E/E_\mr{h}$ $^\mr{b}$} & \multicolumn{1}{c}{$\Delta E_\mr{Ref}/\mu E_\mr{h}$ $^\mr{c}$} & \multicolumn{1}{c}{Ref.} & \multicolumn{1}{l}{Assignment$^\mr{d}$} \\ \cline{1-5} \\[-0.3cm] % $(0,+1,0,0)$ & $-1.164\ 025\ 030$ & $-0.000\ 6$ & $ $\cite{PaKo09} & $X\ ^1\Sigma_\mr{g}^+$ \\ $(1,-1,1,0)$ & $-1.163\ 485\ 171$ & $-0.001\ 4$ & $ $\cite{PaKo09} & $X\ ^1\Sigma_\mr{g}^+$ \\ $(2,+1,0,0)$ & $-1.162\ 410\ 408$ & $-0.001\ 9$ & $ $\cite{PaKo09} & $X\ ^1\Sigma_\mr{g}^+$ \\ \cline{1-5} \\[-0.3cm] % $(0,+1,1,0)$ & $-0.753\ 027\ 186$ & $1.383\ 7$ & $ $\cite{WoOrSt06} & $B\ ^1\Sigma_\mr{u}^+$ \\ $(1,-1,0,0)$ & $-0.752\ 850\ 233$ & $1.444\ 6$ & $ $\cite{WoOrSt06} & $B\ ^1\Sigma_\mr{u}^+$ \\ $(1,+1,1,0)$ & $-0.752\ 498\ 022$ & $1.529\ 1$ & $ $\cite{WoOrSt06} & $B\ ^1\Sigma_\mr{u}^+$ \\ % \cline{1-5} \\[-0.3cm] % $(0,+1,0,1)$ 
& $-0.730\ 825\ 193$ & $-0.006\ 9$ & $ $\cite{Wo07} & $a\ ^3\Sigma_\mr{g}^+$ \\ $(1,-1,1,1)$ & $-0.730\ 521\ 418$ & $0.008\ 0$ & $ $\cite{Wo07} & $a\ ^3\Sigma_\mr{g}^+$ \\ $(2,+1,0,1)$ & $-0.729\ 916\ 268$ & $0.047\ 9$ & $ $\cite{Wo07} & $a\ ^3\Sigma_\mr{g}^+$ \\ \cline{1-5} \\[-0.3cm] % $(0,+1,1,1)$ & $[-0.999\ 450\ 102]$ $^\mr{e}$ & $[-5.578]$ & $^\mr{f}$ & $b\ ^3\Sigma_\mr{u}^+$ \\ $(1,-1,0,1)$ & $[-0.999\ 445\ 835]$ $^\mr{e}$ & $[-9.844]$ & $^\mr{f}$ & $b\ ^3\Sigma_\mr{u}^+$ \\ $(2,+1,1,1)$ & $[-0.999\ 439\ 670]$ $^\mr{e}$ & $[-16.010]$ & $^\mr{f}$ & $b\ ^3\Sigma_\mr{u}^+$ \\ \cline{1-5} \\[-0.4cm] \cline{1-5} \\[-0.3cm] \end{tabular} \begin{flushleft} $^\mr{a}$~% $N$: total spatial angular momentum quantum number; $p:$ parity, $p=(-1)^N$; $S_\mr{p}$ and $S_\mr{e}$: total spin quantum numbers for the protons and the electrons, respectively. \\ % $^\mr{b}$~% $E$: the energy obtained with the largest parameter set, $\mathcal{P}_L$, used in this study corresponding to $15\ 500$ basis functions for each set of quantum numbers (see the text for details and the Supporting Information\ for the numerical values). The proton-electron ratio was $m_\mr{p}/m_\mr{e}=1\,836.152\,672\,47$.\cite{codata06} \\ % $^\mr{c}$~% $\Delta E_\mr{Ref}=E_\mr{Ref}-E$ with $E_\mr{Ref}$ being the best available non-Born--Oppenheimer theoretical energy value in the literature. \\ % $^\mr{d}$~% Born--Oppenheimer electronic state label. Each energy level given here can be assigned to the lowest-energy vibrational level of the electronic state. \\ % $^\mr{e}$~% The lowest-energy eigenvalue of the Hamiltonian obtained for the given set of quantum numbers. \\ % $^\mr{f}$~% The non-relativistic energy of two ground-state hydrogen atoms, $E(\mr{H}(1)+\mr{H}(1))=-0.999\ 455\ 679\ \mr{E}_\mr{h}$, was used as reference. 
\\ \end{flushleft} \end{table} \begin{figure} \begin{center} \includegraphics[scale=1.]{figure04.eps} \\ \end{center} % \caption{% Part of the spectrum of the complex scaled Hamiltonian, $\mathcal{H}(\theta)$ with $\theta\in[0.005,0.065]$ corresponding to the largest basis set used in this work for the $X\ ^1\Sigma_\mr{g}^+$ block $[p=(-1)^N,S_\mr{p}=(1-p)/2,S_\mr{e}=0]$ and for the $b\ ^3\Sigma_\mr{u}^+$ block $[p=(-1)^N,S_\mr{p}=(1+p)/2,S_\mr{e}=1]$ with $N=0,1,$ and $2$ total spatial angular momentum quantum numbers. % The black triangles indicate the threshold energy of the dissociation continua corresponding to H(1)+H(1), H(1)+H(2), and H(1)+H(3). \label{fig:resonH2} } \end{figure} \begin{table} \caption{% Identified resonance-state energies and widths, in E$_\mathrm{h}$, of H$_2$ in the $b\ ^3\Sigma_\mr{u}^+$ block $[p=(-1)^N,S_\mr{p}=(1+p)/2,S_\mr{e}=1]$ for $N=0,1,$ and $2$. \label{tab:resonH2} } \begin{tabular}{@{} c c@{\ }c ccc @{}} % \cline{1-6} \\[-0.4cm] \cline{1-6} \\[-0.3cm] \multicolumn{1}{c}{$(N,p,S_\mr{p},S_\mr{e})$ $^\mr{a}$} & \multicolumn{1}{c}{$\mr{Re}(\mathcal{E})$ $^\mr{b}$} & \multicolumn{1}{c}{$\Gamma/2$ $^\mr{b}$} & \multicolumn{1}{c}{$E_\mr{Ref,exp}$ $^\mr{c}$} & \multicolumn{1}{c}{$E_\mr{Ref,theo}$ $^\mr{d}$} & \multicolumn{1}{c}{Assignment $^\mr{e}$} \\ \cline{1-6} \\[-0.3cm] $(0,+1,1,1)$ & $[-0.999\ 450\ 1]$ $^\mr{f}$ & & & $[-0.999\ 455\ 7]$ & H(1)+H(1) continuum \\ & $[...]$ & & & & \\ $(0,+1,1,1)$ & $-0.677\ 947\ 1$ & $1\cdot 10^{-7}$ & $-0.677\ 946\ 1$ & $-0.677\ 942\ 7$ $ $\cite{KoRy90} & $e\ ^3\Sigma_\mr{u}^+, R=0, v=0$ \\ $(0,+1,1,1)$ & $-0.668\ 549\ 3$ & $9\cdot 10^{-7}$ & $-0.668\ 547\ 8$ & $-0.668\ 541\ 0$ $ $\cite{KoRy90} & $e\ ^3\Sigma_\mr{u}^+, R=0, v=1$ \\ \cline{1-6} \\[-0.3cm] $(1,-1,0,1)$ & $[-0.999\ 445\ 8]$ $^\mr{f}$ & & & $[-0.999\ 455\ 7]$ & H(1)+H(1) continuum \\ & $[...]$ & & & & \\ $(1,-1,0,1)$ & $-0.731\ 434\ 0$ & $5\cdot 10^{-7}$ & $-0.731\ 438\ 8$ & $-0.731\ 469\ 1$ $ $\cite{KoRy77} & $c\ ^3\Pi_\mr{u}^+, 
R=0, v=0$ \\ $(1,-1,0,1)$ & $-0.720\ 72$ & $2\cdot 10^{-7}$ & $-0.720\ 782\ 6$ & & $c\ ^3\Pi_\mr{u}^+, R=0, v=1$ \\ & $[...]$ & & & & \\ $(1,-1,0,1)$ & $-0.677\ 705\ 5$ & $2\cdot 10^{-7}$ & $-0.677\ 704\ 1$ & $-0.677\ 698\ 2$ $ $\cite{KoRy90} & $e\ ^3\Sigma_\mr{u}^+, R=1, v=0$ \\ $(1,-1,0,1)$ & $-0.668\ 32$ & $1\cdot 10^{-6}$ & $-0.668\ 319\ 7$ & $-0.668\ 309\ 8$ $ $\cite{KoRy90} & $e\ ^3\Sigma_\mr{u}^+, R=1, v=1$ \\ \cline{1-6} \\[-0.3cm] $(2,+1,1,1)$ & $[-0.999\ 439\ 7]$ $^\mr{f}$ & & & $[-0.999\ 455\ 7]$ & H(1)+H(1) continuum \\ & $[...]$ & & & & \\ $(2,+1,1,1)$ & $-0.730\ 888\ 2$ & $9\cdot 10^{-7}$ & $-0.730\ 888\ 7$ & & $c\ ^3\Pi_\mr{u}^+, R=1, v=0$ \\ $(2,+1,1,1)$ & $-0.720\ 219\ 0$ & $<2\cdot 10^{-7}$& $-0.720\ 258\ 0$ & & $c\ ^3\Pi_\mr{u}^+, R=1, v=1$ \\ & $[...]$ & & & & \\ $(2,+1,1,1)$ & $-0.677\ 222\ 9$ & $2\cdot 10^{-8}$ & $-0.677\ 222\ 2$ & & $e\ ^3\Sigma_\mr{u}^+, R=2, v=0$ \\ $(2,+1,1,1)$ & $-0.667\ 863\ 2$ & $7\cdot 10^{-7}$ & $-0.667\ 865\ 3$ & & $e\ ^3\Sigma_\mr{u}^+, R=2, v=1$ \\ \cline{1-6} \\[-0.4cm] \cline{1-6} \\[-0.3cm] \end{tabular} % \begin{flushleft} ~\\[-0.5cm] % $^\mr{a}$~% $N$: total spatial angular momentum quantum number; $p:$ parity, $p=(-1)^N$; $S_\mr{p}$ and $S_\mr{e}$: total spin quantum numbers for the protons and electrons, respectively. \\ % $^\mr{b}$ % $\mr{Re}(\mathcal{E})$ and $\Gamma$: resonance energy and width with $\Gamma/2=-\mr{Im}(\mathcal{E})$ calculated in this work. % The largest basis set contained $15\ 500$ basis functions for each set of quantum numbers. The proton-electron ratio was $m_\mr{p}/m_\mr{e}=1\,836.152\,672\,47$.\cite{codata06} \\ % $^\mr{c}$~% $E_\mr{Ref,exp}$ experimental reference value, in E$_\mathrm{h}$, derived as $E_\mr{exp}=E_0+T_\mr{exp}$ with the ground-state energy ($X\ ^1\Sigma_\mr{g}^+,N=0,v=0$) $E_0=-1.164\ 025\ 030$~E$_\mathrm{h}$. 
All $T_\mr{exp}$ values were obtained by correcting the experimental term values of Dieke\cite{Di58}, with $-0.000\ 681\ 7\ \mr{E}_\mr{h} = -149.63\ \mr{cm}^{-1}$ ($1\ \mr{E}_\mr{h}=219\ 474.631\ 4\ \mr{cm}^{-1}$), since all triplet term values were too high.\cite{MiFr74,KoRy90} \\ % $^\mr{d}$~% $E_\mr{Ref,theo}$: the best available theoretical reference energy values, in E$_\mathrm{h}$, corresponding to accurate adiabatic calculations for the $e\ ^3\Sigma_\mr{u}^+$ levels\cite{KoRy90} and to accurate Born--Oppenheimer calculations for the $c\ ^3\Pi_\mr{u}^+$ levels.\cite{KoRy77} % The non-relativistic energy of two ground-state hydrogen atoms is given in square brackets. \\ % $^\mr{e}$~% Born--Oppenheimer electronic- and vibrational-state labels. The (approximate) rotational angular momentum quantum number, $R$, is also given. \\ % $^\mr{f}$~% The lowest-energy eigenvalue of the real Hamiltonian obtained with the largest parameter set and with the given quantum numbers. % \end{flushleft} \end{table} \clearpage \section{Summary and outlook\label{ch:sum}} The present work was devoted to the calculation of rotational-vibrational energy levels corresponding to electronically excited states, which are bound within the Born--Oppenheimer (BO) approximation but appear as resonances in a pre-Born--Oppenheimer (pre-BO) quantum mechanical description. In order to calculate resonance energies and widths, corresponding to predissociative lifetimes, the pre-BO variational approach and computer program of Ref.~\citenum{MaRe12} was extended with the complex coordinate rotation method (CCRM). Similarly to the bound-state calculations, the wave function was written as a linear combination of basis functions which have the non-relativistic quantum numbers (total spatial---rotational plus orbital---angular momentum quantum number, parity, and total spin quantum number for each particle type). 
The basis functions were constructed using explicitly correlated Gaussian functions and the global vector representation. This pre-BO resonance approach was first applied to the three- and four-particle positronium complexes, Ps$^-=\lbrace\mr{e}^-,\mr{e}^-,\mr{e}^+\rbrace$ and Ps$_2=\lbrace\mr{e}^-,\mr{e}^-,\mr{e}^+,\mr{e}^+\rbrace$, respectively. These applications allowed us to test the implementation and to gain experience in the identification of resonance parameters. For the dipositronium, Ps$_2$, we managed to improve on some of the best results reported in the literature. The developed methodology and technology were then employed for the four-particle molecule, H$_2$. First, the rovibronic states known in the literature were collected, and those accessible in our calculations with the various sets of (exact) quantum numbers of the non-relativistic theory were considered. Experimental and theoretical energy values from the literature were also used to assign our calculated energy levels with the common BO terminology of electronic- and vibrational-state labels. As to the computational part, we had to find a useful parameterization strategy for the basis functions. Since the bound-state parameter-optimization approach relied on the energy-minimization condition and the (real) variational principle, it was not directly applicable for making the CCRM calculations more efficient. A simple and practical solution to the parameterization problem was the parameter-transfer approach: the basis functions used to describe low-energy resonances of a given symmetry were parameterized with the optimized parameters of (high-energy) bound states. As a result, a large parameter set was constructed, compiled from parameters optimized for different symmetry blocks.
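In the simplest, spherically symmetric (zero angular momentum) case, the ECG matrix elements that underlie such basis-set manipulations reduce to closed-form determinant expressions: for $n$ relative coordinates and exponent matrices $A$ and $B$, the normalized overlap is $S_{AB}=2^{3n/2}\,[\det A\,\det B]^{3/4}/[\det(A+B)]^{3/2}$. A small sketch of this formula follows; the function names and the Cholesky-style construction of the exponent matrices are illustrative only, and the global-vector factors needed for $N>0$ are not shown:

```python
import numpy as np

def ecg_overlap(A, B):
    """Normalized overlap <f_A|f_B> of two spherical ECGs,
    f_A(r) = exp(-1/2 r^T (A (x) I_3) r), with n x n symmetric
    positive-definite exponent matrices A and B."""
    n = A.shape[0]
    dA, dB, dAB = np.linalg.det(A), np.linalg.det(B), np.linalg.det(A + B)
    return 2.0**(1.5 * n) * (dA * dB)**0.75 / dAB**1.5

rng = np.random.default_rng(7)

def random_spd(n):
    # Cholesky-style parameterization guarantees positive definiteness,
    # the usual way ECG exponents are kept admissible during optimization
    L = np.tril(rng.normal(size=(n, n))) + n * np.eye(n)
    return L @ L.T

A, B = random_spd(3), random_spd(3)
print(ecg_overlap(A, A))   # equals 1 for identical basis functions
print(ecg_overlap(A, B))   # strictly between 0 and 1 (Cauchy--Schwarz)
```

In this picture, the parameter-transfer strategy simply means reusing exponent matrices $A$ optimized in one symmetry block as the starting basis of another, rather than re-optimizing them against a (complex) eigenvalue for which no minimization principle is available.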
The parameterization of basis functions, whose mathematical forms are defined by the exact quantum numbers, with this extended parameter set immediately led to an improvement over the best energy values available in the literature for the lowest-lying rotational states assigned to the $B\ ^1\Sigma_\mr{u}^+$ and $a\ ^3\Sigma_\mr{g}^+$ electronic states. Then, using this extended parameter set, low-energy rovibronic resonances embedded in the H(1)+H(1) continuum became accessible beyond the repulsive $b\ ^3\Sigma_\mr{u}^+$ electronic state. Based on these calculations, resonance energies were evaluated and resonance widths were estimated for the lowest-lying rotational-vibrational levels of the electronically excited $e\ ^3\Sigma_\mr{u}^+$ and $c\ ^3\Pi_\mr{u}^+$ states. We note that the coupling of the rotational and orbital angular momenta was automatically included in our computational approach by specifying only the total spatial angular momentum quantum number, $N$. Although the presented results improve on the best available (BO and adiabatic) calculations in the literature for these states, more extensive calculations are necessary to pinpoint the resonance energies and, especially, the widths. As to further improvements, the major technical difficulty at present is the efficient parameterization of the basis set for resonance states. A generally applicable solution to this problem would be the implementation of a complex analogue of the real variational principle. In the absence of such a general solution, the optimization of large parameter sets for bound (excited) states together with the parameter-transfer strategy might be appropriate for computing a larger number of, and/or more accurate, rovibronic resonances of H$_2$.
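As an aid for interpreting the computed widths, recall that with $\Gamma/2=-\mr{Im}(\mathcal{E})$ a width in hartree translates directly into a predissociative lifetime $\tau=\hbar/\Gamma$; in atomic units ($\hbar=1$) this is simply $1/\Gamma$ atomic units of time. A minimal conversion sketch (the example width is of the order of several entries above; the function name is illustrative):

```python
AU_TIME_S = 2.418884326505e-17   # atomic unit of time in seconds (CODATA 2006)

def lifetime_seconds(gamma_hartree):
    """tau = hbar/Gamma; with hbar = 1 and Gamma in hartree the lifetime
    is 1/Gamma atomic units of time, converted here to seconds."""
    return AU_TIME_S / gamma_hartree

# A width of Gamma = 2e-7 E_h corresponds to a sub-nanosecond lifetime:
print(lifetime_seconds(2.0e-7))   # ~1.2e-10 s, i.e. about 0.1 ns
```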
In addition to an improved parameterization strategy, a generally applicable analysis tool that assigns the pre-BO wave function the common BO electronic- and vibrational-state labels, where such an assignment is possible, would also be desirable. \acknowledgement E.M. thanks Prof. Attila G. Cs\'asz\'ar and Prof. Markus Reiher for their continuous encouragement and Benjamin Simmen for discussions. The financial support of the Hungarian Scientific Research Fund (OTKA, NK83583) and an ERA-Chemistry grant is gratefully acknowledged. The computing facilities of HPC-Debrecen (NIIFI) were used during this work. \paragraph{Supporting Information Available} This information is available free of charge via the Internet at http://pubs.acs.org. \providecommand*\mcitethebibliography{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{91} \providecommand*\natexlab[1]{#1} \providecommand*\mciteSetBstSublistMode[1]{} \providecommand*\mciteSetBstMaxWidthForm[2]{} \providecommand*\mciteBstWouldAddEndPuncttrue {\def\unskip.}{\unskip.}} \providecommand*\mciteBstWouldAddEndPunctfalse {\let\unskip.}\relax} \providecommand*\mciteSetBstMidEndSepPunct[3]{} \providecommand*\mciteSetBstSublistLabelBeginEnd[3]{} \providecommand*\unskip.}{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd {\mcitemaxwidthsubitemform\space} {\relax} {\relax} \bibitem[Born and Oppenheimer(1927)Born, and Oppenheimer]{BoOp27} Born,~M.; Oppenheimer,~R. Zur Quantentheorie der Molekeln. \emph{Ann. der Phys.} \textbf{1927}, \emph{84}, 457--484\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Born(1951)]{Born51} Born,~M. Kopplung der Elektronen- und Kernbewegung in Molekeln und Kristallen.
\emph{Nachr. Akad. Wiss. G\"ottingen, math.-phys. Kl., math.-phys.-chem. Abtlg.} \textbf{1951}, \emph{6}, 1--3\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Born and Huang(1954)Born, and Huang]{BoHu54} Born,~M.; Huang,~K. \emph{Dynamical Theory of Crystal Lattices}; Clarendon Press: Oxford, 1954\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Woolley(1976)]{Wo76} Woolley,~R.~G. Quantum Theory and Molecular Structure. \emph{Adv. Phys.} \textbf{1976}, \emph{25}, 27--52\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Woolley(1978)]{Wo77b} Woolley,~R.~G. Must a Molecule Have a Shape? \emph{J. Am. Chem. Soc.} \textbf{1978}, \emph{100}, 1073--1078\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Woolley and Sutcliffe(1977)Woolley, and Sutcliffe]{WoSu77} Woolley,~R.~G.; Sutcliffe,~B.~T. Molecular Structure and the Born--Oppenheimer Approximation. \emph{Chem. Phys. Lett.} \textbf{1977}, \emph{45}, 393--398\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Weininger(1984)]{We84} Weininger,~S.~J. The Molecular Structure Conundrum: Can Classical Chemistry Be Reduced to Quantum Chemistry? \emph{J. Chem. Educ.} \textbf{1984}, \emph{61}, 939--944\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Claverie and Diner(1980)Claverie, and Diner]{ClDi80} Claverie,~P.; Diner,~S.
The Concept of Molecular Structure in Quantum Theory: Interpretation Problems. \emph{Isr. J. Chem.} \textbf{1980}, \emph{19}, 54--81\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Woolley(1980)]{Wo80} Woolley,~R.~G. Quantum Mechanical Aspects of the Molecular Structure Hypothesis. \emph{Isr. J. Chem.} \textbf{1980}, \emph{19}, 30--46\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Woolley(1986)]{Wo86} Woolley,~R.~G. Molecular Shapes and Molecular Structures. \emph{Chem. Phys. Lett.} \textbf{1986}, \emph{125}, 200--205\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Cafiero and Adamowicz(2004)Cafiero, and Adamowicz]{CaAd04} Cafiero,~M.; Adamowicz,~L. Molecular Structure in Non-Born--Oppenheimer Quantum Mechanics. \emph{Chem. Phys. Lett.} \textbf{2004}, \emph{387}, 136--141\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Sutcliffe and Woolley(2005)Sutcliffe, and Woolley]{SuWoCPL05} Sutcliffe,~B.~T.; Woolley,~R.~G. Comment on 'Molecular Structure in Non-Born--Oppenheimer Quantum Mechanics'. \emph{Chem. Phys. Lett.} \textbf{2005}, \emph{408}, 445--447\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[M\"uller-Herold(2006)]{UMH06} M\"uller-Herold,~U. On the Emergence of Molecular Structure from Atomic Shape in the $1/r^2$ Harmonium Model. \emph{J. Chem. 
Phys.} \textbf{2006}, \emph{124}, 014105--1--5\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[M\"uller-Herold(2008)]{UMH08} M\"uller-Herold,~U. On the Transition Between Directed Bonding and Helium-Like Correlation in a Modified Hooke-Calogero Model. \emph{Eur. Phys. J. D} \textbf{2008}, \emph{49}, 311--315\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Lude{\~{n}}a et~al.(2012)Lude{\~{n}}a, Echevarr{\'{\i}}a, Lopez, and Ugalde]{Lu12} Lude{\~{n}}a,~E.~V.; Echevarr{\'{\i}}a,~L.; Lopez,~X.; Ugalde,~J.~M. Non-Born-Oppenheimer Electronic and Nuclear Densities for a Hooke-Calogero Three-Particle Model: Non-Uniqueness of Density-Derived Molecular Structure. \emph{J. Chem. Phys.} \textbf{2012}, \emph{136}, 084103--1--12\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Cassam-Chena\"{i}(1998)]{CaCh98} Cassam-Chena\"{i},~P. A Mathematical Definition of Molecular Structure -- Open Problem. \emph{J. Math. Chem.} \textbf{1998}, \emph{23}, 61--63\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Goli and Shahbazian(2011)Goli, and Shahbazian]{GoSh11} Goli,~M.; Shahbazian,~S. Atoms in Molecules: Beyond Born--Oppenheimer Paradigm. \emph{Theor. Chim. Acta} \textbf{2011}, \emph{129}, 235--245\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Goli and Shahbazian(2012)Goli, and Shahbazian]{GoSh12} Goli,~M.; Shahbazian,~S. The Two-Component Quantum Theory of Atoms in Molecules (TC-QTAIM): Foundations. \emph{Theor. Chim. 
Acta} \textbf{2012}, \emph{131}, 1208--1--19\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[M\'atyus et~al.(2011)M\'atyus, Hutter, M\"uller-Herold, and Reiher]{MaHuMuRe11a} M\'atyus,~E.; Hutter,~J.; M\"uller-Herold,~U.; Reiher,~M. On the Emergence of Molecular Structure. \emph{Phys. Rev. A} \textbf{2011}, \emph{83}, 052512--1--5\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[M\'atyus et~al.(2011)M\'atyus, Hutter, M\"uller-Herold, and Reiher]{MaHuMuRe11b} M\'atyus,~E.; Hutter,~J.; M\"uller-Herold,~U.; Reiher,~M. Extracting Elements of Molecular Structure from the All-Particle Wave Function. \emph{J. Chem. Phys.} \textbf{2011}, \emph{135}, 204302--1--12\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Harris(1963)]{Harris63} Harris,~R.~A. Predissociation. \emph{J. Chem. Phys.} \textbf{1963}, \emph{39}, 978--987\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kukulin et~al.(1988)Kukulin, Krasnopolsky, and Hor\'acek]{KuKrHo88} Kukulin,~V.~I.; Krasnopolsky,~V.~M.; Hor\'acek,~J. \emph{Theory of Resonances---Principles and Applications}; Kluwer: Dordrecht, 1988\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Reinhardt(1982)]{Rein82} Reinhardt,~W.~P. Complex Coordinates in the Theory of Atomic and Molecular Structure and Dynamics. \emph{Ann. Rev. Phys.
Chem.} \textbf{1982}, \emph{33}, 223--255\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Moiseyev(1998)]{Moi98} Moiseyev,~N. Quantum Theory of Resonances: Calculating Energies, Widths and Cross-Sections by Complex Scaling. \emph{Phys. Rep.} \textbf{1998}, \emph{302}, 211\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Hazi and Taylor(1970)Hazi, and Taylor]{HaTa70} Hazi,~A.~U.; Taylor,~H.~S. Stabilization Method of Calculating Resonance Energies: Model Problem. \emph{Phys. Rev. A} \textbf{1970}, \emph{1}, 1109--1120\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ho(1983)]{Ho83} Ho,~Y.~K. \emph{The Method of Complex Coordinate Rotation and its Applications to Atomic Collision Processes}; North-Holland Publishing Company: Amsterdam, 1983\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Usukura and Suzuki(2002)Usukura, and Suzuki]{UsSu02} Usukura,~J.; Suzuki,~Y. Resonances of Positronium Complexes. \emph{Phys. Rev. A} \textbf{2002}, \emph{66}, 010502(R)--1--4\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Mandelshtam et~al.(1993)Mandelshtam, Ravuri, and Taylor]{MaRaTa93} Mandelshtam,~V.~A.; Ravuri,~T.~R.; Taylor,~H.~S. Calculation of the Density of Resonance States Using the Stabilization Method. \emph{Phys. Rev. 
Lett.} \textbf{1993}, \emph{70}, 1932--1935\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Mandelshtam et~al.(1994)Mandelshtam, Taylor, Ryaboy, and Moiseyev]{MaTaRyMo94} Mandelshtam,~V.~A.; Taylor,~H.~S.; Ryaboy,~V.; Moiseyev,~N. Stabilization Theory for Computing Energies and Widths of Resonances. \emph{Phys. Rev. A} \textbf{1994}, \emph{50}, 2764--2766\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[M\"uller et~al.(1994)M\"uller, Yang, and Burgd\"orfer]{MuYaBu94} M\"uller,~J.; Yang,~X.; Burgd\"orfer,~J. Calculation of Resonances in Doubly Excited Helium Using the Stabilization Method. \emph{Phys. Rev. A} \textbf{1994}, \emph{49}, 2470--2475\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ryaboy et~al.(1994)Ryaboy, Moiseyev, Mandelshtam, and Taylor]{RyMoMaTa94} Ryaboy,~V.; Moiseyev,~N.; Mandelshtam,~V.~A.; Taylor,~H.~S. Resonance Positions and Widths by Complex Scaling and Modified Stabilization Method: van der Waals Complex NeICl. \emph{J. Chem. Phys.} \textbf{1994}, \emph{101}, 5677--5682\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Aguilar and Combes(1971)Aguilar, and Combes]{AgCo71} Aguilar,~J.; Combes,~J.~M. A Class of Analytic Perturbations for One-Body Schr\"odinger Hamiltonians. \emph{Comm. Math. Phys.} \textbf{1971}, \emph{22}, 269--279\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Balslev and Combes(1971)Balslev, and Combes]{BaCo71} Balslev,~E.; Combes,~J.~M. 
Spectral Properties of Many-body Schr\"odinger Operators with Dilatation-Analytic Interactions. \emph{Comm. Math. Phys.} \textbf{1971}, \emph{22}, 280--294\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Simon(1972)]{Si72} Simon,~B. Quadratic Form Techniques and the Balslev-Combes Theorem. \emph{Comm. Math. Phys.} \textbf{1972}, \emph{27}, 1--9\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wang and Bowman(1995)Wang, and Bowman]{WaBo95} Wang,~D.; Bowman,~J.~M. Complex L$^2$ Calculations of Bound States and Resonances of HCO and DCO. \emph{Chem. Phys. Lett.} \textbf{1995}, \emph{235}, 277--285\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Rom et~al.(1990)Rom, Engdahl, and Moiseyev]{RoEnMo90} Rom,~N.; Engdahl,~E.; Moiseyev,~N. Tunneling Rates in Bound Systems Using Smooth Exterior Complex Scaling within the Framework of the Finite Basis Set Approximation. \emph{J. Chem. Phys.} \textbf{1990}, \emph{93}, 3413--3419\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Rom et~al.(1991)Rom, Lipkin, and Moiseyev]{RoLiMo91} Rom,~N.; Lipkin,~N.; Moiseyev,~N. Optical Potentials by the Complex Coordinate Method. \emph{Chem. Phys.} \textbf{1991}, \emph{151}, 199--204\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Riss and Meyer(1993)Riss, and Meyer]{RiMe93} Riss,~U.~V.; Meyer,~H.-D. Calculation of Resonance Energies and Widths Using the Complex Absorbing Potential Method. \emph{J. Phys. 
B} \textbf{1993}, \emph{26}, 4503--4536\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Seideman and Miller(1992)Seideman, and Miller]{SeMi91} Seideman,~T.; Miller,~W.~H. Calculation of the Cumulative Reaction Probability via a Discrete Variable Representation with Absorbing Boundary Conditions. \emph{J. Chem. Phys.} \textbf{1992}, \emph{96}, 4412--4422\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Thompson and Miller(1997)Thompson, and Miller]{ThMi97} Thompson,~W.~H.; Miller,~W.~H. On the ``Direct'' Calculation of Thermal Rate Constants. II. The Flux-Flux Autocorrelation Function with Absorbing Potentials, with Application to the O+HCl$\rightarrow$OH+Cl Reaction. \emph{J. Chem. Phys.} \textbf{1997}, \emph{106}, 142--150\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Miller(1998)]{Miller98} Miller,~W.~H. ``Direct'' and ``Correct'' Calculation of Canonical and Microcanonical Rate Constants for Chemical Reactions. \emph{J. Phys. Chem. A} \textbf{1998}, \emph{102}, 793--806\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Papp et~al.(2002)Papp, Darai, Nishimura, Hlousek, Hu, and Yakovlev]{Papp02} Papp,~Z.; Darai,~J.; Nishimura,~A.; Hlousek,~Z.~T.; Hu,~C.-Y.; Yakovlev,~S.~L. Faddeev-Merkuriev Equations for Resonances in Three-Body Coulombic Systems. \emph{Phys. Lett.
A} \textbf{2002}, \emph{304}, 36--42\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Papp et~al.(2005)Papp, Darai, Mezei, Hlousek, and Hu]{Papp05} Papp,~Z.; Darai,~J.; Mezei,~J.~Z.; Hlousek,~Z.~T.; Hu,~C.-Y. Accumulation of Three-Body Resonances above Two-Body Thresholds. \emph{Phys. Rev. Lett.} \textbf{2005}, \emph{94}, 143201--1--4\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[M\'atyus and Reiher(2012)M\'atyus, and Reiher]{MaRe12} M\'atyus,~E.; Reiher,~M. Molecular Structure Calculations: a Unified Quantum Mechanical Description of Electrons and Nuclei using Explicitly Correlated Gaussian Functions and the Global Vector Representation. \emph{J. Chem. Phys.} \textbf{2012}, \emph{137}, 024104--1--17\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Simmen et~al.(2013)Simmen, M\'atyus, and Reiher]{SiMaRe13} Simmen,~B.; M\'atyus,~E.; Reiher,~M. Elimination of the Translational Kinetic Energy Contamination in Pre-Born--Oppenheimer Calculations. \emph{Mol. Phys.} \textbf{2013}, DOI: 10.1080/00268976.2013.783938\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Cohen et~al.(2007)Cohen, Cvita{\v{s}}, Frey, Holmstr{\"o}m, Kuchitsu, Marquardt, Mills, Pavese, Quack, Stohner, Strauss, Takami, and Thor]{GreenBook07} Cohen,~E.; Cvita{\v{s}},~T.; Frey,~J.; Holmstr{\"o}m,~B.; Kuchitsu,~K.; Marquardt,~R.; Mills,~I.; Pavese,~F.; Quack,~M.; Stohner,~J.; et al. 
\emph{Quantities, Units and Symbols in Physical Chemistry (the IUPAC Green Book - 3rd Edition)}; RSC Publishing: Cambridge, 2007\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boys(1960)]{Bo60} Boys,~S.~F. The Integral Formulae for the Variational Solution of the Molecular Many-Electron Wave Equation in Terms of Gaussian Functions with Direct Electronic Correlation. \emph{Proc. R. Soc. Lond. A} \textbf{1960}, \emph{258}, 402--411\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Singer(1960)]{Si60} Singer,~K. The Use of Gaussian (Exponential Quadratic) Wave Functions in Molecular Problems I. General Formulae for the Evaluation of Integrals. \emph{Proc. R. Soc. Lond. A} \textbf{1960}, \emph{258}, 412--420\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Jeziorski and Szalewicz(1979)Jeziorski, and Szalewicz]{JeSz79} Jeziorski,~B.; Szalewicz,~K. High-Accuracy Compton Profile of Molecular Hydrogen from Explicitly Correlated Gaussian Wave Function. \emph{Phys. Rev. A} \textbf{1979}, \emph{19}, 2360--2365\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Cencek and Rychlewski(1993)Cencek, and Rychlewski]{CeRy93} Cencek,~W.; Rychlewski,~J. Many-Electron Explicitly Correlated Gaussian Functions. I. General Theory and Test Results. \emph{J. Chem. Phys.} \textbf{1993}, \emph{98}, 1252--1261\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Rychlewski(2003)]{Ry03} Rychlewski,~J., Ed.
\emph{Explicitly Correlated Wave Functions in Chemistry and Physics}; Kluwer Academic Publishers: Dordrecht, 2003\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Suzuki et~al.(1998)Suzuki, Usukura, and Varga]{SuUsVa98} Suzuki,~Y.; Usukura,~J.; Varga,~K. New Description of Orbital Motion with Arbitrary Angular Momenta. \emph{J. Phys. B: At. Mol. Opt. Phys.} \textbf{1998}, \emph{31}, 31--48\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Varga et~al.(1998)Varga, Suzuki, and Usukura]{VaSuUs98} Varga,~K.; Suzuki,~Y.; Usukura,~J. Global-Vector Representation of the Angular Motion of Few-Particle Systems. \emph{Few-Body Systems} \textbf{1998}, \emph{24}, 81--86\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Suzuki and Varga(1998)Suzuki, and Varga]{SuVaBook98} Suzuki,~Y.; Varga,~K. \emph{Stochastic Variational Approach to Quantum-Mechanical Few-Body Problems}; Springer-Verlag: Berlin, 1998\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Schrader(2004)]{Schrader04a} Schrader,~D.~M. Symmetry of Dipositronium, Ps$_2$. \emph{Phys. Rev. Lett.} \textbf{2004}, \emph{92}, 043401--1--4\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Schrader(2004)]{Schrader04b} Schrader,~D.~M. Symmetries and States of Ps$_2$.
\emph{Nuclear Instruments and Methods in Physics Research B} \textbf{2004}, \emph{221}, 182--184\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Sch()]{Schrader07} D. M. Schrader, General Forms of Wave Functions for Dipositronium, Ps$_2$, in NASA GSFC Science Symposium on Atomic and Molecular Physics, ed. by A. K. Bhatia (NASA/CP-2006-214146, Goddard Space Flight Center, 2007), pp. 103-110.\relax \mciteBstWouldAddEndPunctfalse \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Anderson et~al.(1999)Anderson, Bai, Bischof, Blackford, Demmel, Dongarra, Du~Croz, Greenbaum, Hammarling, McKenney, and Sorensen]{lapack} Anderson,~E.; Bai,~Z.; Bischof,~C.; Blackford,~S.; Demmel,~J.; Dongarra,~J.; Du~Croz,~J.; Greenbaum,~A.; Hammarling,~S.; McKenney,~A.; et al. \emph{{LAPACK} Users' Guide}, 3rd ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, 1999\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kukulin and Krasnopol'sky(1977)Kukulin, and Krasnopol'sky]{KuKr77} Kukulin,~V.~I.; Krasnopol'sky,~V.~M. A Stochastic Variational Method for Few-Body Systems. \emph{J. Phys. G: Nucl. Phys.} \textbf{1977}, \emph{3}, 795--811\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Alexander et~al.(1986)Alexander, Monkhorst, and Szalewicz]{AlMoSz86} Alexander,~S.~A.; Monkhorst,~H.~J.; Szalewicz,~K. Random Tempering of Gaussian-Type Geminals. I. Atomic Systems. \emph{J. Chem. 
Phys.} \textbf{1986}, \emph{85}, 5821--5825\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Alexander et~al.(1987)Alexander, Monkhorst, and Szalewicz]{AlMoSz87} Alexander,~S.~A.; Monkhorst,~H.~J.; Szalewicz,~K. Random Tempering of Gaussian-Type Geminals. II. Molecular Systems. \emph{J. Chem. Phys.} \textbf{1987}, \emph{87}, 3976--3980\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Alexander et~al.(1988)Alexander, Monkhorst, and Szalewicz]{AlMoSz88} Alexander,~S.~A.; Monkhorst,~H.~J.; Szalewicz,~K. Random Tempering of Gaussian-Type Geminals. III. Coupled Pair Calculations on Lithium Hydride and Beryllium. \emph{J. Chem. Phys.} \textbf{1988}, \emph{89}, 355--359\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Po0()]{Po04} M. J. D. Powell, The NEWUOA software for unconstrained optimization without derivatives (DAMTP 2004/NA05), Report no. NA2004/08, http://www.damtp.cam.ac.uk/user/na/reports04.html last accessed on January 18, 2013\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li and Shakeshaft(2005)Li, and Shakeshaft]{LiSh05} Li,~T.; Shakeshaft,~R. $S$-Wave Resonances of the Negative Positronium Ion and Stability of a Systems of Two Electrons and an Arbitrary Positive Charge. \emph{Phys. Rev. A} \textbf{2005}, \emph{71}, 052505--1--7\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Suzuki and Usukura(2004)Suzuki, and Usukura]{SuUs04} Suzuki,~Y.; Usukura,~J. Stochastic Variational Approach to Resonances of Ps$^-$ and Ps$_2$. 
\end{mcitethebibliography} \end{document}
\section{Introduction} `Quantum Blur' is a method for manipulating images using quantum circuits~\cite{wootton-fdg,capdeville-21}, devised as a first step towards showing that aspects of quantum computing could be used for procedural generation~\cite{wootton-fdg,wootton-cog,hamido-21}. Images are first translated into circuits via an amplitude encoding~\cite{schuld-20}, and then manipulated by making changes to the circuit. The encoding is performed such that small modifications to the circuit result in small, blur-like changes to the image. Due to the use of an amplitude encoding, each point in the image corresponds to a possible output bit string. The number of qubits required to represent a given image therefore scales logarithmically with the number of pixels. As a specific example, a $2048\times1024$ pixel image requires 21 qubits, which is well within reach of both current prototype quantum computers and simulation with conventional hardware. The method can therefore handle high-resolution images using current resources for quantum computation. This is indeed the primary motivation for its development: it is an application of quantum software that is useful with current resources, rather than one waiting on future developments. All quantum software is built from a basic set of quantum gates, each of which is used to manipulate quantum superposition and entanglement. As an example of quantum software, Quantum Blur was designed based upon these principles: encoding images as entangled states and inducing a form of blur effect using quantum superposition. However, this alone does not prove that superposition and entanglement are required to obtain the results that we see from Quantum Blur. One of the main aims of this work is therefore to determine what the results of the effect would be when coherence and entanglement are restricted. We will also make comparisons with conventional blur effects.
Note that the effect is typically implemented through a simulation of a quantum computer. Though it has been designed so that it can run on real quantum hardware, the small scale means that simulation is typically more convenient. As a proof-of-principle, it therefore only proves that thinking in terms of manipulating quantum superposition and entanglement can be a rewarding way to create methods for procedural generation. It is not yet an example of the advantages of quantum hardware. \section{Summary of Quantum Blur} We will briefly summarize the aspects of quantum blur that are relevant for this work. We will restrict to the case of monochromatic images (or height maps) defined by a set $h$ of values $0 \leq h(x,y) \leq 1$, where the $(x,y)$ are the pixel coordinates. To encode this image in a quantum state, each pixel coordinate $(x,y)$ is assigned a bit string $s(x,y)$. These are chosen such that neighbouring coordinates correspond to strings that differ on only one bit (i.e., a Hamming distance of 1)~\cite{wootton-fdg}. Note that this would not be achieved by simply converting the coordinate values to binary, since the resulting lexicographic order does not have the required property. Instead, a form of reflected binary code~\cite{lucal-59} is used. The length of the bit strings used is the minimum required to assign a unique string to each coordinate. Specifically, for an image of size $W \times H$, the number of bits (and therefore qubits) required is $n = \left\lceil \log_2 W \right\rceil + \left\lceil \log_2 H \right\rceil$. Images are then encoded by preparing the following superposition state for the $n$ qubits, \begin{equation} \label{eq:state} \left | h \right \rangle = \frac{ \sum_{(x,y)} \sqrt{h(x,y)} \left | s(x,y) \right \rangle} {\sqrt{\sum_{(x,y)} h(x,y)}} . \end{equation} Note that the image could equally be encoded with alternate versions of this superposition, with arbitrary phases on each term.
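To make the encoding concrete, the mapping $s(x,y)$ and the state of Eq. \ref{eq:state} can be sketched in a few lines of Python. This is a minimal sketch assuming real, non-negative amplitudes; the function names are ours and not those of any reference implementation.

```python
import numpy as np

def gray(k):
    # reflected binary (Gray) code: consecutive integers differ on one bit
    return k ^ (k >> 1)

def s(x, y, nx, ny):
    # bit string s(x,y) as an integer: Gray-coded x bits followed by Gray-coded y bits
    return (gray(x) << ny) | gray(y)

def encode(h):
    # amplitude encoding of Eq. (eq:state): amplitudes proportional to sqrt(h(x,y))
    H, W = h.shape
    ny, nx = int(np.ceil(np.log2(H))), int(np.ceil(np.log2(W)))
    state = np.zeros(2 ** (nx + ny))
    for y in range(H):
        for x in range(W):
            state[s(x, y, nx, ny)] = np.sqrt(h[y, x])
    return state / np.linalg.norm(state)

def decode(state, shape):
    # read out probabilities and rescale so the maximum value is 1
    H, W = shape
    ny, nx = int(np.ceil(np.log2(H))), int(np.ceil(np.log2(W)))
    p = np.abs(state) ** 2
    h = np.array([[p[s(x, y, nx, ny)] for x in range(W)] for y in range(H)])
    return h / h.max()
```

Because consecutive integers have Gray codes at Hamming distance 1, neighbouring pixel coordinates are mapped to bit strings that differ on a single bit, as required.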
The decoding of images is done by determining all the probabilities $p(s(x,y))$. This can be done either through simulation of the quantum process, or by running it on quantum hardware (repeating many times to sample the probabilities). The resulting probabilities are then taken to be the $h(x,y)$, after rescaling so that the maximum is equal to 1, \begin{equation} \label{eq:encoding} h(x,y) = \frac{p(s(x,y))}{\max_{s'} p(s')}. \end{equation} Note that the same output image could equally emerge from a quantum superposition or a classical probability distribution with the same probabilities for each bit string. The distinction between these therefore does not come from the final readout, but from the manipulation of the image while it is encoded as a quantum state. Manipulation of images is performed by adding quantum gates after the preparation step and before the final readout of probabilities. The simplest effect is the one that the method is named after: the blur. To induce a blur effect, each $h(x,y)$ should be updated to introduce a dependence on the neighbouring points. Given the encoding of points as bit strings, this can be done with a small-angle rotation such as \texttt{rx}. To see this, consider the points $(x,y)$ and $(x_j,y_j)$ for which the bit strings $s(x,y)$ and $s(x_j,y_j)$ differ only on the single bit $j$. Applying \texttt{rx} to the corresponding qubit has the following effect on the amplitudes, $$ \sqrt{h(x,y)} \rightarrow i \cos \frac{\theta}{2} \sqrt{h(x,y)} + \sin \frac{\theta}{2} \sqrt{h(x_j,y_j)}. $$ This translates into an interpolation between the two $h$ values, parameterized by the angle $\theta$. When \texttt{rx} is applied to all qubits, we can make the following approximation for small $\theta$, $$ \sqrt{h(x,y)} \rightarrow i \sqrt{h(x,y)} + \frac{\theta}{2} \sum_j \sqrt{h(x_j,y_j)}.
$$ Though this will be precise only for extremely small values of $\theta$, it nevertheless shows that there will be a blurring of the value $h(x,y)$ by performing an interpolation of the corresponding amplitude with the average of the values for all the points $(x_j,y_j)$. Given the encoding of coordinates to bit strings, this will include all neighbouring points, as required for a standard blur. It is similarly possible to show that \texttt{ry} rotations can induce a blur effect for small angles. However, the \texttt{rx} and \texttt{ry} blurs quickly deviate as $\theta$ is increased when images are encoded as in Eq. \ref{eq:state}. The simplest and most potent example is that of a plain white image, for which $h(x,y)=1$ for all points. Given the encoding of Eq. \ref{eq:state}, this corresponds to the state $\left| + \right\rangle ^{\otimes n}$, which is an eigenstate of the \texttt{rx} rotation. The image will therefore not be affected by the \texttt{rx} blur, as would be expected for a standard blur effect. For \texttt{ry} rotations, however, the $\left| + \right\rangle$ states would be rotated towards $\left| 1 \right\rangle$, resulting in $\left| 1 \right\rangle ^{\otimes n}$ for $\theta=\pi/2$. The resulting image would have a single bright pixel for the coordinate whose string is all \texttt{1}s, and $h(x,y)=0$ otherwise. Even for small angles, the fact that this single pixel becomes much more probable than all others causes it to dominate the image. This is a clear demonstration of an interference effect, rather than standard blur-like behaviour. However, in this case it is one that leads to unimpressive visuals. More general images will also include a significant overlap with $\left| + \right\rangle ^{\otimes n}$, and this component will be similarly affected.
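The eigenstate argument above can be checked directly with a small statevector simulation. The following sketch (our own illustration, independent of any quantum SDK) applies an \texttt{rx} rotation to every qubit of an encoded state; for the plain white image the output probabilities are unchanged, since $\left|+\right\rangle^{\otimes n}$ only picks up a global phase.

```python
import numpy as np

def apply_single_qubit(state, gate, qubit, n):
    # apply a 2x2 gate to the given qubit (0 = least significant bit) of an n-qubit statevector
    psi = state.reshape([2] * n)
    axis = n - 1 - qubit  # numpy axis 0 corresponds to the most significant bit
    psi = np.moveaxis(psi, axis, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, axis)
    return psi.reshape(-1)

def rx_blur(state, theta, n):
    # RX(theta) = cos(theta/2) I - i sin(theta/2) X, applied to every qubit
    c, sn = np.cos(theta / 2), np.sin(theta / 2)
    rx = np.array([[c, -1j * sn], [-1j * sn, c]])
    for q in range(n):
        state = apply_single_qubit(state, rx, q, n)
    return state
```

Applying this to a general encoded image and reading out the probabilities yields the blur effect; applying it to the uniform superposition leaves the probabilities untouched.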
Therefore, though both the \texttt{rx} and \texttt{ry} approaches depart from the simple blur effect for large $\theta$, this typically occurs much faster and less usefully for the \texttt{ry} blur. The \texttt{rx} blur is therefore typically used with the standard encoding of states according to Eq. \ref{eq:state}. \section{Quantifying Blur} To compare effects from Quantum Blur to conventional blur effects, we will need a way to quantify their results. Blur effects are typically known for smoothing and removing detail. As a consequence of this, they also remove features within an image that cause it to be asymmetric. As such we will use measures of asymmetry and detail in order to quantify their effects. As a measure of difference between two images, we will use the root mean square difference of their pixel values, $$ d(h,h') = \sqrt{\frac{\sum_{(x,y)} (h(x,y) - h'(x,y))^2}{WH }}. $$ This will form the basis of our measure of asymmetry. Within the context of Quantum Blur, there is a natural set of symmetries to analyze. These are the reflections that result from applying a single qubit bit flip gate (\texttt{x} or \texttt{y}) to any qubit. For a measure of asymmetry, we will take the mean difference between each of these flipped images and the original. Specifically, $$ a(h) = \frac{\sum_j d(h,h_j)}{n}, $$ where $h_j$ is the image that results from flipping qubit $j$, and $n$ is the number of qubits that represents the image. To quantify detail, we will consider the difference between each pixel value and its neighbours. For each pixel we find the maximum such difference. The root mean square of all these maximum differences is then used as our measure of detail. This measure of detail is defined in terms of the pixel coordinates $(x,y)$. From the perspective of the bit strings $s(x,y)$, the maximization over all neighbouring coordinates corresponds to maximizing over a subset of the neighbouring strings.
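The measures $d$ and $a$ are straightforward to compute on string-indexed values: since $s(x,y)$ is a bijection, the root mean square over strings equals the root mean square over pixel coordinates, and flipping qubit $j$ is simply the permutation that XORs each string with $2^j$. A minimal sketch, under our own conventions:

```python
import numpy as np

def d(h, hp):
    # root mean square difference between two images of equal size
    return np.sqrt(np.mean((h - hp) ** 2))

def flip_qubit(vals, j):
    # vals are image values indexed by bit string; flipping qubit j permutes the strings
    idx = np.arange(vals.size) ^ (1 << j)
    return vals[idx]

def asymmetry(vals, n):
    # mean RMS difference between the image and each of its n single-qubit reflections
    return sum(d(vals, flip_qubit(vals, j)) for j in range(n)) / n
```

A fully symmetric (e.g. uniform) image has $a(h)=0$, while a single bright pixel gives the maximal contribution from every reflection.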
This motivates us to define a corresponding detail for the state, using the differences between values for all neighbouring strings. This is then the square root of the mean, over all points, of $$ \max_j (h(x,y) - h(x_j,y_j))^2, $$ where $(x_j,y_j)$ are coordinates such that the strings $s(x,y)$ and $s(x_j,y_j)$ differ only on bit $j$. Using these quantities, Fig. \ref{classical} shows the behaviour of classical blur effects, where the strength of the blur is increased by increasing the kernel size. In these results we see that both the asymmetry and detail decrease significantly. As one would expect, the difference between the blurred image and original image also increases. The two measures of detail are found to behave very similarly under such blurs. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\columnwidth]{classical-all.png} \caption{Difference to original, asymmetry and detail for various conventional blurs when applied to a $128\times128$ monochrome version of the `sailboat on lake'~\cite{images} test image. The blur effects used here are the box blur, Gaussian filtering and median filtering implemented by OpenCV~\cite{opencv_library}.} \label{classical} \end{center} \end{figure} \section{The Role of Superposition} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\columnwidth]{quantum-all.png} \caption{Difference to original, asymmetry and detail for a quantum blur implemented with \texttt{rx} rotations on each qubit and an incoherent analogue of quantum blur. The parameter $\xi$ is the fraction of a $2\pi$ rotation applied in the rotations. For `rx', the same rotation angle is applied to all qubits. The `dim' results are the same but for simulations with limited entanglement, as explained in Section \ref{sec:ent}.
The adaptive schedule of rotation angles is explained in Section \ref{sec:adaptive}.} \label{rxry} \end{center} \end{figure} As mentioned above, the effect of an \texttt{x} or \texttt{y} gate on any qubit is to perform a reflection. Since \texttt{rx} and \texttt{ry} rotations correspond to partially performed \texttt{x} and \texttt{y} gates, they therefore correspond to a combination of the original image with its reflection. As such, it might be expected that these gates would result in the image becoming more symmetric. However, it is important to remember that the images are represented as quantum states during the application of the gates, and that different phases will be induced by the rotations on different terms within the superposition. The superposition can therefore experience both constructive and destructive interference. The effect of this on the asymmetry is hard to predict, and so the naive expectation of reduced asymmetry cannot be expected to hold in general. In order to see the effects of quantum superposition more clearly, we will construct an analogue of the quantum blur for which there is no interference. To see how this can be defined, note that a $\theta$ rotation of \texttt{rx} or \texttt{ry} results in a superposition of the original state and a bit flipped state. The magnitudes of the amplitudes for these states are $\cos{\theta/2}$ and $\sin{\theta/2}$, respectively. If orthogonal, these would correspond to probabilities $\cos^2{\theta/2}$ and $\sin^2{\theta/2}$ of getting these states as outcomes if measuring in the appropriate basis. This means that, if we neglect any interference effects, the behaviour is equivalent to randomly applying a bit flip to the qubit with probability $\sin^2{\theta/2}$.
As such we can consider a decohered form of the superposition, in which we imagine that a bit flip has randomly been applied or not, and take a corresponding weighted average of the flipped and unflipped images with these probabilities. Note that the implementation of the incoherent blur is done entirely classically. Only the weighting probabilities are calculated from the action of the quantum gates. However, it could also be implemented within a quantum simulation using bit flip noise with probability $\sin^2{\theta/2}$ on each qubit. This form of the blur can certainly be expected to reduce the asymmetry. Indeed, we can expect to see a completely uniform image for $\theta = \pi/2$, where the flipped and unflipped images are evenly mixed. In Fig. \ref{rxry} we see that the incoherent effect has similar behaviour to the conventional blurs in the range of angles from $\theta=0$ (no effect) to $\theta=\pi/2$ (fully mixed). We find that the difference to the original image increases, and the asymmetry and detail decrease to zero. Over the entire range from $\theta=0$ to $\theta=\pi$ studied, periodic behaviour is seen as the weighting of the mixing moves from a bias against flipping images (with no flips at $\theta=0$) to a bias towards flipping (with the effect being the same as \texttt{x} or \texttt{y} gates on all qubits at $\theta=\pi$). For higher $\theta$ this would then go back to a bias against flipping (with no flips at $\theta=2\pi$), and so on. The coherent effects of quantum blur show very different behaviour. Though the effect of an \texttt{rx} blur is identical to the incoherent case for $\theta=0$ and $\theta=\pi$, interference effects prevent the asymmetry and detail from vanishing at $\theta=\pi/2$. Instead, the asymmetry quickly drops from the initial value of $0.15$ to around $0.1$ (decaying more quickly than the incoherent case) and mostly remains around this level while interference effects dominate.
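The incoherent analogue is simple to implement. In the sketch below (our own illustration, acting on string-indexed values), each qubit contributes a classical mixture of the unflipped and flipped image with weights $\cos^2{\theta/2}$ and $\sin^2{\theta/2}$; since flips on distinct qubits commute, mixing qubit by qubit reproduces the full weighted average over all $2^n$ flip patterns.

```python
import numpy as np

def incoherent_blur(vals, theta):
    # mix the image with its single-qubit reflections using flip probability sin^2(theta/2)
    n = int(np.log2(vals.size))
    p = np.sin(theta / 2) ** 2
    out = vals.astype(float).copy()
    for j in range(n):
        flipped = out[np.arange(out.size) ^ (1 << j)]  # reflection across qubit j
        out = (1 - p) * out + p * flipped
    return out
```

At $\theta=0$ this is the identity, at $\theta=\pi$ it is the fully flipped image, and at $\theta=\pi/2$ it is the even mixture over all reflections, i.e. a completely uniform image.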
For the effects of an \texttt{rx} blur on the image detail we see a small decay for small $\theta$, followed by mostly growth. The state detail, however, shows very similar behaviour to the asymmetry. The difference is due to the scrambling of the image at $\theta=\pi$: applying a flip to all qubits results in pixel values that were not previously neighbouring (in terms of coordinates) being moved together. The resulting discontinuities therefore increase the measured image detail. For the state detail this does not occur, since relationships between neighbouring strings remain invariant. \section{The Role of Entanglement} \label{sec:ent} The preparation of images begins with the standard initial state of a quantum computer: $\left| 0 \right\rangle^{\otimes n}$. When measured, this outputs a bit string of all \texttt{0}s with certainty, and so corresponds to a single bright pixel at the corresponding coordinate. This is shown in Fig. \ref{dotdot} (a), where the bright pixel for the all \texttt{0} string is located at the top left. By applying gates we `unfold' this single pixel into any desired image. With single qubit rotations we can apply only reflections across the whole image. This is sufficient only to create simple shapes. For more complex images, it is necessary to apply operations only to specific regions. To address parts of an image we can use controlled operations, which would apply reflections only to those parts of the image whose coordinates correspond to the conditions on the control qubits. Since these operations are entangling, the circuits required to draw general images are necessarily entangling. \subsection{Drawing a simple image} As a simple example of this, we will consider the preparation of a $32\times 32$ pixel version of the `I' in the IBM logo. This requires 10 qubits, with 5 required to encode the x coordinate and 5 for the y coordinate.
The initial state is $\left| 00000 \, 00000 \right\rangle$, where the space is used to separate the qubits for the x coordinate (on the left) from those for the y coordinate (on the right). For each, the qubits are numbered from 0--4, from right to left. We will refer to qubit 0 of the y register as $y-0$, and so on. To begin, an \texttt{x} gate is applied on qubit $y-4$ to move the bright pixel to $(0,1)$, and then another on qubit $x-1$ moves it to $(15,1)$. The resulting image is shown in Fig. \ref{dotdot} (b). Next, $\pi/2$ \texttt{rx} rotations are applied to qubits $x-0$, $x-3$ and $x-4$. These are reflections that expand the single point into the bar seen in Fig. \ref{dotdot} (c). A further $\pi/2$ \texttt{rx} rotation is then applied to qubit $y-3$ to widen the bar, and then the same to qubits $y-0$ and $y-2$ to copy it as shown in Fig. \ref{dotdot} (d). A further reflection applied to qubit $y-1$ then copies these bars into the center, as seen in Fig. \ref{dotdot} (e). This is done by an \texttt{rx} rotation with an angle of less than $\pi/2$, which results in the reflected bars having a smaller amplitude. Finally, a reflection must be applied to extend the upper and lower bars. This can be done by a $\pi/2$ \texttt{rx} rotation on $x-2$. However, note that this would extend all the bars. To extend only the upper and lower, the rotation must be applied such that it only addresses the correct parts of the image. The parts in question are all those for which qubit $y-1$ is in state $\left|0\right\rangle$. The desired effect can therefore be applied by a \texttt{crx} gate, a controlled \texttt{rx}, conjugated by \texttt{x} gates on the control qubit. This applies the \texttt{rx} rotation to the target qubit only when the control is in state $\left|0\right\rangle$. Using $y-1$ as control and $x-2$ as target, this will extend only the upper and lower bars, as required. Note that the amplitude of these is then spread out over more points.
For a suitably chosen angle for step (e), all points will be brought to equal brightness. This completes the image, as shown in Fig. \ref{dotdot} (f). \begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\columnwidth]{dotdot.png} \caption{The process of creating an `I' using quantum gates, as explained in the text. The bit string coordinates are shown for each y axis value at the bottom. The x axis coordinates are labelled in the same way from left to right.} \label{dotdot} \end{center} \end{figure} Since this image is mostly very simple, most of the operations applied here are on single qubits. Only the final operation requires a two qubit entangling gate. However, for more complex images, the need for entanglement will grow. Also, recall that the standard method of encoding general images as states gives the same phase to all amplitudes. This would not have occurred in the state created above. In general, it may be possible to find ways to create images with less entanglement if there is no constraint on phases. Nevertheless, entanglement will be required in general. \subsection{Effects of limiting entanglement} To demonstrate the necessity of entanglement, we can limit the amount of entanglement used when simulating the quantum process to see the effect on the final images. This was done by first transpiling the preparation operation to single and two qubit gates, and then running the resulting circuit on a matrix product state simulator. This simulator represents the state as a number of tensors connected by bonds whose dimension depends on how much entanglement exists between them~\cite{vidal-03,schollwoeck-11}. By placing an upper limit on the bond dimension, the amount of entanglement allowed to exist in the circuit can be limited. The effects of this are shown on two test images in Fig. \ref{entangle}.
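As a self-contained illustration of the same restriction (our own sketch, applied at a single cut rather than the full tensor network of a matrix product state simulator), one can truncate the Schmidt decomposition between the x and y registers to a maximum rank:

```python
import numpy as np

def limit_entanglement(state, nx, ny, max_rank):
    # truncate the Schmidt decomposition across the x/y register cut to max_rank terms
    m = state.reshape(2 ** nx, 2 ** ny)
    u, sv, vt = np.linalg.svd(m, full_matrices=False)
    sv = sv.copy()
    sv[max_rank:] = 0.0          # discard the smallest Schmidt coefficients
    m = u @ np.diag(sv) @ vt
    m /= np.linalg.norm(m)       # renormalize the truncated state
    return m.reshape(-1)
```

Capping the rank at this single cut is the simplest instance of limiting a bond dimension; an MPS simulator applies the same truncation at every cut between qubits. A product state across the cut (rank 1) survives unchanged, while entangled components are discarded.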
\begin{figure*}[htbp] \begin{center} \includegraphics[width=2\columnwidth]{entanglement0.png} \caption{Two $128 \times 128$ test images~\cite{images}, shown when reproduced on the Qiskit MPS simulator~\cite{qiskit} with different maximum bond dimensions.} \label{entangle} \end{center} \end{figure*} For the sailboat image, certain aspects can be easily reproduced by single qubit reflections. Most prominently, these are the dark areas at either side representing the two trees and the shoreline. These aspects are therefore clearly visible even with very low entanglement. Features with fine detail, such as definition on branches and the sailboat, become visible as the amount of entanglement is increased. For the image of a town, the mixture of different angles and high detail makes it difficult to create an approximation with single qubit gates. The image is therefore entirely unrecognizable until a large amount of entanglement is present. The effects of limiting entanglement are also seen in Fig. \ref{rxry}, where the \texttt{rx} blur is shown with the bond dimension limited to 8 and 16. The effect on the difference to the original is to start off at a non-zero value (since the original cannot be represented exactly). However, once the \texttt{rx} blur begins to take effect, the entanglement limited versions have a reduced effect. For the asymmetry and detail, we find that the blurring effect of the entanglement limitation means that there is a lower baseline level of these properties, and the effects induced by the \texttt{rx} blur are more muted. These results underline the fact that entanglement is required to see the full quantum-induced effects of the blur. In particular, note that the asymmetry and detail show a peak at $\theta=\pi/2$ for the \texttt{rx} blur. This peak comes at the point at which the interference effects are maximal.
However, it is entirely absent from the two examples with limited entanglement, showing that the effects of interference cannot be fully represented in the images without entanglement. Another way that the entanglement could be limited is through noise. A simple example of this would be dephasing noise applied after the image has been encoded. This would have the effect of destroying the entanglement, but would result in no observable difference if the image were decoded. In the most extreme form, this noise could remove the entanglement entirely, leaving only a classical probability distribution of bit strings. Nevertheless, the noise would have large effects on the manipulation of the image. As a simple example of this, consider the encoding of a plain white image. This could be done using the unentangled state $\left|+\right\rangle^{\otimes n}$ or entangled states such as graph states, depending on the phase convention chosen. Which initial state is chosen greatly determines the way that any subsequent quantum gates affect the superposition state, and therefore the output image. However, complete dephasing would result in the maximally mixed state in all cases. This is invariant under any quantum gate, and so it would be impossible to alter the image in any way. The reduction of a quantum superposition to a classical probability distribution therefore reduces not only entanglement, but also the ability to implement effects due to quantum interference. \section{Adaptive Blur Effect} \label{sec:adaptive} The standard approach to an \texttt{rx} blur is to apply the same rotation angle to all qubits. However, for most images this will not create the most blur-like effect. For example, consider an image with just a single bright pixel. This pixel only has four neighbours, and so only rotations on four qubits will actually result in amplitude being exchanged with these neighbours. Any image larger than $4\times 4$ pixels will have more qubits than this.
It would therefore be more effective to first determine the pertinent four qubits, and apply the rotation only to them. In general, images are more complex than just a single bright pixel. Instead there will be many pixels with non-zero brightness, with four pertinent qubits for each. An overall weight for each qubit can then be determined based on how many pixels it affects, and the brightness of each. Then, rather than applying the same angle $\theta$ to every qubit, the full rotation could be applied only to the highest weighted qubit. For the rest, a proportion of the angle could be applied based on their weight. We refer to \texttt{rx} rotations applied adaptively in this way as the `adaptive blur'. For a uniformly bright image, the lowest weight will go to qubits $x-0$ and $y-0$. These reflect around the middle, and so the only pixels they reflect onto their neighbours are those along the middle. For $x-1$ and $y-1$ there are two lines of reflection, and so twice as many pixels are reflected to their neighbours. Similarly, qubits $x-j$ and $y-j$ reflect $2^j$ times as many pixels to their neighbours as $x-0$ and $y-0$. The weight assigned to each qubit will therefore double as we go up the registers. This analysis will also be approximately true for any other sufficiently dense image. This exponential schedule of angles can therefore be used as an approximation of the adaptive blur, without the need for an analysis of the initial image. In Fig. \ref{rxry} we see that the behaviour of the adaptive blur has similarities to both the coherent and incoherent cases. Like the former, the interference effects prevent the detail and asymmetry from vanishing. However, like the latter, the decay is relatively slow and smooth. Also, the use of different angles for the \texttt{rx} rotations on different qubits means that the same periodic behaviour is not seen. Some traces of the periodic process applied to each qubit are seen, however.
Notably, the difference to the initial state does not increase monotonically, but shows a pronounced dip around $\theta=\pi/2$. \section{Using Quantum Hardware} Running the quantum circuits required for quantum blur can be done on real quantum hardware, or by simulation on conventional computing hardware. For most problems of interest for quantum computing, the former offers a significant advantage in computational complexity~\cite{nielsen-00}. However, since Quantum Blur was designed as a proof-of-principle method for relatively few qubits, this complexity advantage does not exist. The simulation process can be performed with a complexity of at most $O(WH \log(WH)) = O(n 2^n)$~\cite{wootton-fdg}. Though this is exponential in the number of qubits, note that it is only a little higher than linear in the number of pixels. When the simulation method allows direct access to the statevector, the probabilities can be accessed directly from this with no additional complexity. When they must be sampled, the sampling complexity is the same as for real quantum hardware, as discussed below. For running on real quantum hardware, there are three parts to consider: preparation of the state, running the blur circuit and sampling the output. For the first, a quantum circuit needs to be constructed that will prepare the initial state on the device. Since there are $WH$ values to be encoded, this circuit requires at least an $O(WH)=O(2^n)$ complexity in some form~\cite{iten-2016}. Typically this is a time complexity, though alternatives exist which place the burden on space complexity~\cite{araujo-21}. For the blur circuit, the complexity depends on the circuit itself. The \texttt{rx} blur, for example, can be implemented with the simultaneous application of \texttt{rx} rotations, and so has an $O(1)$ time complexity (and $O(n)$ space).
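As an illustration of the simulation route, a simultaneous \texttt{rx} layer can be applied to a statevector at a cost of $O(2^n)$ per qubit, i.e. $O(n\,2^n)$ in total, consistent with the complexity quoted above (a numpy-based sketch; the helper names are ours, and since every qubit receives the same rotation the qubit-ordering convention is immaterial):

```python
import numpy as np

def apply_rx(state, qubit, theta, n):
    """Apply rx(theta) to one qubit of an n-qubit statevector, at O(2**n) cost."""
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    psi = state.reshape([2] * n)
    a = np.take(psi, 0, axis=qubit)  # amplitudes with this qubit in |0>
    b = np.take(psi, 1, axis=qubit)  # amplitudes with this qubit in |1>
    return np.stack([c * a + s * b, s * a + c * b], axis=qubit).reshape(-1)

def rx_blur(state, theta, n):
    """The rx blur: the same rotation on every qubit, O(n * 2**n) overall."""
    for q in range(n):
        state = apply_rx(state, q, theta, n)
    return state
```

A real device would apply all $n$ rotations in a single step of depth $O(1)$; the simulator instead pays the $O(2^n)$ statevector update once per qubit.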
Finally, for the sampling, the process must be repeated enough times to sample the probabilities of each pixel with sufficient accuracy, which has a complexity of at least $O(WH) = O(2^n)$ for any sensible choice of accuracy. However, the coefficient in this case may be very high, depending on the accuracy required. The total complexity is therefore at least $O(WH)=O(2^n)$. Note that since simulation avoids the step of constructing and running the initialization circuit, and can also avoid sampling of the output, the runtime can easily be very short in comparison with real quantum hardware. This is despite the slight complexity disadvantage for simulation. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\columnwidth]{real_is.png} \caption{A $32\times 32$ pixel `I' created through several different methods: (a) Run by simulation, but with the probabilities estimated from 8192 samples; (b) As before, but with the brightness scaling with the logarithm of the probabilities; (c) Sampled from 8192 runs on the 27 qubit \texttt{ibmq\_sydney} device; (d) As before, but logarithmically plotted.} \label{real_run} \end{center} \end{figure} The effects of using real quantum hardware, and of sampling the probabilities, can be seen in Fig. \ref{real_run}. Note that the sampling causes statistical noise, which can be reduced by increasing the sample number. The use of current prototype quantum hardware also introduces the effects of imperfections in all quantum gates. This can be seen from the non-zero brightness in areas that should be dark. Since errors typically affect single qubits, they effectively induce additional reflections. \section{Conclusions} Blur effects typically have the effect of arbitrarily washing out a source image. However, as we have shown, the Quantum Blur effect retains a significant degree of asymmetry and detail.
We have also shown that this behaviour emerges explicitly from the quantum phenomena that are crucial for quantum computation: superposition and entanglement. This underlines that the `Quantum Blur' is a genuine example of quantum software in use. Nevertheless, it is important to note that symmetry- and detail-preserving behaviour is not necessarily unique to the quantum approach. Classical methods that achieve qualitatively similar results through an entirely non-quantum approach are almost certainly possible. This work could therefore also serve as a motivation to seek out such classical methods to compare with quantum blur effects. \section{Acknowledgements} The authors would like to thank Roman Lipski, whose use of the Quantum Blur effect inspired the topics covered in this work, and Elisa Baeumer for comments on the manuscript. JRW acknowledges support from the Swiss National Science Foundation through NCCR SPIN. IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. The current list of IBM trademarks is available at \url{https://www.ibm.com/legal/copytrade}.
\section{Introduction} After the primordially ubiquitous H and He, oxygen is the third most abundant chemical element in the Universe, and the first among those produced via stellar nucleosynthesis. It is a catalyst in the CNO process, which is the main conversion channel of H into He in the interiors of stars more massive than the Sun. The oxygen content of stars and its abundance ratio with other chemical elements derived from observations of metal-poor halo stars are crucial (e.g. Matteucci \& Fran{\c c}ois 1992) to constraining Galactic Chemical Evolution (GCE) models. Oxygen, like the other $\alpha$-elements, has been known for a long time (e.g. Conti et al. 1967) to show an overabundance with respect to iron in stars of low metallicity. This is expected since its main production site is in massive stars which end their lives as Type II supernovae (SNe II, Arnett 1978; Clayton 2003). Due to the longer lifetimes of their precursors, the delayed release of additional iron from type Ia SNe starts to affect \mbox{[O/Fe]}\footnote{By definition, [A/X]=$\log(N_{A}/N_{X})_{*}-\log(N_{A}/N_{X})_{\odot}$}\, only at a later stage. The oxygen abundance in unevolved very metal-poor stars should therefore reflect the enrichment by the very first generations of SNe. The oxygen abundance at low metallicity has been the focus of a plethora of studies. Still, a long-standing debate exists on the exact [O/Fe] trend in metal-poor stars (see e.g. discussion in Asplund 2005 and Mel\'{e}ndez et al. 2006 and references therein). Here, we will only briefly mention some pertinent works related to the subject. Neutral oxygen lines used in abundance studies of solar-type stars include the high excitation line at $615.8$~nm, the weak forbidden [O\,{\sc i}] lines at $630.0$~nm and $636.4$~nm, and the infrared triplet at $777$~nm. The latter is the only strong oxygen atomic feature observable in spectra of late-type stars. 
While it is known to suffer from large non--LTE effects in early-type stars (Johnson, Milkey \& Ramsey 1974; Baschek, Scholz \& Sedlmayr 1977), there is little consensus on the estimated size of the abundance corrections for cooler stars of solar-type. For example, the formation of the IR triplet lines in the Sun has been the object of a number of investigations. With a semiempirical approach, Altrock (1968) found that the non--LTE source function drops below the Planck function in the solar photosphere, a result later confirmed by the multilevel non--LTE study by Sedlmayr (1974). More recently, the non--LTE abundance corrections in the Sun have been estimated to be between $\sim -0.1$ and $-0.25$~dex (Kiselman 1991, 1993; Gratton et al. 1999; Takeda 2003; Takeda \& Honda 2005; Asplund et al. 2004), mostly depending on whether efficient collisions with neutral H atoms are adopted or not. These authors explain the non--LTE line strengthening as due to photon losses in the lines, while the population (and thus the line opacity) of the lower level in the $777$~nm transition is thought to stay close to LTE. Non--LTE effects on oxygen lines in solar-type stars have been reviewed by Kiselman (2001). Statistical equilibrium calculations using a 16-level oxygen model atom (Kiselman 1991) have predicted an enhancement towards low metallicity of the (negative) non--LTE abundance corrections on the permitted $777$~nm O\,{\sc i}\, lines, suggesting that an LTE approach will seriously overestimate the derived oxygen abundance. The [O/Fe] ratio derived in LTE using this abundance indicator would thus be increasingly reduced in metal-poor stars once non--LTE effects are considered. Other calculations have instead found almost \mbox{[Fe/H]}-independent non--LTE corrections (Takeda 2003). Very high overabundances of oxygen compared to iron have occasionally been derived at low metallicity. 
Analysing the O\,{\sc i}\ 777\,nm lines in metal-poor dwarfs, Abia \& Rebolo (1989) found a steep monotonic increase in the oxygen-to-iron ratio for lower metallicities, reaching \mbox{[O/Fe]}\,$> 1$ below \mbox{[Fe/H]}\,$\sim -2$. Those authors investigated non--LTE effects for the triplet lines but claimed that they should be less severe than $-0.2$~dex, therefore still leaving a residual increasing trend. Their high abundance estimates were criticized by King (1993) as mainly due to too low effective temperatures. Other authors (Gratton \& Ortolani 1986; Barbuy 1988; Sneden et al. 1991; Spite \& Spite 1991; Kraft et al. 1992; Carretta, Gratton \& Sneden 2000; Cayrel et al. 2004; Spite et al. 2005) found \mbox{[O/Fe]}$\simeq 0.4-0.6$ in late-type stars with \mbox{[Fe/H]}$\lae -1$, in particular using the [O\,{\sc i}] lines in metal-poor giants. This nearly plateau-like behaviour (with oxygen content varying much more slowly than iron at low \mbox{[Fe/H]}, or even remaining constant) could be interpreted as the result of the contribution by type II SNe alone to the early Galactic nucleosynthesis (see e.g. Matteucci \& Greggio 1986). While some authors have reported a fair agreement between the near-IR $777$\,nm oxygen lines and the forbidden line (Mishenina et al. 2000; Nissen et al. 2002), others (Takeda 2003; Fulbright \& Johnson 2003) derived higher (up to several tenths of a dex) abundance values from the triplet. In particular -- even when applying to the observations the non--LTE corrections derived in his own investigation -- Takeda still derived apparently too high (by several tenths of a dex) oxygen abundances from the IR triplet lines in metal-poor halo stars, a possible signal of underestimated non--LTE effects.
At \mbox{[Fe/H]}\,$\apprle -3.5$, even more extreme examples of the oxygen conflict have been found, with differences of up to $1.55$~dex between [O\,{\sc i}]-based and triplet abundances, and exceptionally large \mbox{[O/Fe]}$\apprge 2$ oxygen enhancements (e.g. Depagne et al. 2002; Israelian et al. 2004). Unfortunately, the neutral atomic oxygen lines, in particular the LTE-obeying [O\,{\sc i}] line, tend to become vanishingly weak in metal-poor solar-type stars. Thus, molecular bands of OH in the UV (electronic transitions) and near IR (vibration-rotation lines) have also been used. The former are relatively strong and thus detectable in FGK stars down to low \mbox{[Fe/H]}; the latter instead require high S/N and are only seen in spectra of cool stars (\mbox{$T_{\rm{eff}}$}$<5000$~K). Results from both are however affected by likely important 3D effects, in particular in the case of the UV lines (Asplund \& Garcia Perez 2001), which may also suffer from the problem of missing opacity (Balachandran \& Bell 1998; Asplund 2004) and possible issues related to continuum location, blends in the spectra, atmospheric extinction/cutoff and low CCD sensitivity. Various other uncertainties -- e.g.~\/ on oscillator strengths for both molecular bands -- and incomplete knowledge of non--LTE effects on molecular line formation may also be important. Generally higher values of oxygen abundance (compared to those from the weak IR molecular features) are derived from the OH lines in the UV (Balachandran, Carr \& Carney 2001; Mel\'{e}ndez \& Barbuy 2002; Barbuy et al. 2003), but the above effects may indeed largely explain the discrepancy. Other authors (Israelian, Garc{\'\i}a L\'opez \& Rebolo 1998; Boesgaard et al. 1999) have instead found reasonable agreement between the O abundances derived for a sample of mainly turn-off stars from the two indicators.
In any case, it seems that consistent oxygen abundances using {\it all} the different diagnostics, namely permitted, as well as forbidden and molecular lines, can hardly be obtained at low metallicity. Departures from homogeneity and from LTE have been at various times invoked for solving the inconsistency. A combination of other factors, including determination of stellar parameters/temperature scale, quality of observations and of reduced data, adopted solar abundances and so on may additionally be of concern, and it is hard to disentangle the different effects. To complicate the issue further, one needs to consider that some authors have derived oxygen abundances at low metallicity from unevolved stars, others from giants, and it could be hypothesized that the lower oxygen values usually found for the latter are due to a depletion process in their atmospheres. However, Bessell, Sutherland \& Ruan (1991) discussed evidence that oxygen abundances from OH were similar in dwarfs and giants. More recently, Garc\'{\i}a P{\'e}rez et al. (2006) confirmed a reasonable agreement between OH and [O\,{\sc i}] abundances in metal-poor subgiants, finding that O\,{\sc i}\, triplet abundances give instead higher values even when corrected for non--LTE effects. It is therefore important to check if such a residual discrepancy is linked to problems in the previous modelling of the formation of the permitted lines. Here, we thus focus on investigating departures from LTE in 1D. Crucially, we explore the very low metallicity range, to understand if the non--LTE abundance corrections depend strongly on metallicity. In the next section, we describe our non--LTE calculations, and give results in Section 3. We then embark on a comparison with other non--LTE studies of the O\,{\sc i}\,$777$\,nm absorption lines (Sect. 4). Finally, the last two sections are dedicated to a discussion of the impact on the \mbox{[O/Fe]}\, ratio at low metallicity, and to some concluding remarks.
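As a concrete illustration of the bracket notation defined in the footnote above, the abundance ratios discussed throughout can be evaluated numerically as follows (a minimal sketch; the function name and sample values are purely illustrative):

```python
import math

def bracket(n_a_star, n_x_star, n_a_sun, n_x_sun):
    """[A/X] = log10(N_A/N_X)_star - log10(N_A/N_X)_sun."""
    return math.log10(n_a_star / n_x_star) - math.log10(n_a_sun / n_x_sun)
```

A star whose oxygen-to-iron number-density ratio is ten times the solar one thus has [O/Fe]$\,=1$; likewise, a non--LTE abundance correction of $-0.5$~dex lowers the derived $\log\,\epsilon_{\rm O}$, and hence [O/Fe], by $0.5$.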
\section{Non--LTE calculations} \subsection{Method} The method employed is similar to that described in Fabbian et al. (2006). In order to derive the non--LTE populations of the O atomic levels and the strength of relevant spectral lines, it is necessary to solve the coupled rate and radiative transfer equations simultaneously. The code {\small MULTI} (Carlsson 1986) in its version 2.3 was used to perform the statistical equilibrium calculations for a number of {\small MARCS} model atmospheres (Gustafsson et al. 1975; Asplund et al. 1997), under the assumption that oxygen is a trace element and its departures from LTE will not affect the atmospheric structure. The atmospheric models tested have a microturbulence of $\xi=1$~km\,s$^{-1}$. A 54-level oxygen model atom (described below) was employed. We assumed complete redistribution in a Voigt profile for the formation of the lines. \subsection{Atomic model} \label{fabss:atom} \begin{flushleft} \begin{figure} \begin{center} \includegraphics[width=8cm,height=11.5cm]{figure1.ps} \caption{Grotrian diagram of the atomic model we employed, showing the 53 energy levels of neutral oxygen and the single first ionization level. Lines connecting the various levels in the figure represent the 208 radiative bound-bound transitions included in the model. The $777$~nm transitions are marked with thicker lines\label{fabf:termdiag_oi_df54}} \end{center} \end{figure} \end{flushleft} We have constructed a model atom containing 54 energy levels and a total of 258 radiative transitions (208 b-b and 50 b-f). The Grotrian diagram of our atomic model is shown in Fig.\,\ref{fabf:termdiag_oi_df54}. The necessary data for atomic level energies and corresponding statistical weights were taken from the NIST Atomic Spectra Database\footnote{http://physics.nist.gov/PhysRefData/ASD/index.html} version 3. Neutral levels are included up to an excitation potential of 13.34 eV (i.e. $\sim 0.28$ eV below the continuum).
The model is complete to a principal quantum number of n$=7$. Fine-splitting of energy levels was taken into account where appropriate, in particular for the ground state and for the 3p $^5$P level (upper level of the $777$\,nm O\,{\sc i}\, triplet). For the transitions of importance in solar work which were used in Asplund et al. (2004), f-values are also from NIST while radiative and Stark parameters come from VALD\footnote{Available at http://ams.astro.univie.ac.at/vald/} (Vienna Atomic Line Database, Piskunov et al. 1995 and successive updates). Data for bound-free photoionization are from the Opacity Project\footnote{http://vizier.u-strasbg.fr/topbase/topbase.html} (Seaton 1987 and later updates). For collisional broadening due to H atoms, we adopt the classical Uns\"{o}ld approximation. Our tests reveal that choosing an enhancement factor of two for the related damping constant does not affect our results significantly. The contribution to background opacity in the UV from lines of other elements was accounted for by compiling NIST and VALD values. In general, the data in the model atoms\footnote{Respectively, a 45- and a 23-level atomic model} of Carlsson \& Judge (1993, hereafter CJ93) and Kiselman (1993, hereafter K93) were used (see those papers for description of the data sources) unless otherwise stated. We have also included the most recent data for electron-impact excitation, available up to level 2p$^3$ 4f $^3$F from quantum-mechanical calculations (Barklem 2007a), a novel feature of our model atom which has proven one of the essential improvements with respect to other studies. In particular, this has allowed us to include accurate electron-collision rates for radiatively forbidden transitions. Since the estimated cross-sections were derived using LS-coupling, they are null only for singlet-quintet transitions (total electronic spin conservation rule) and for collisionally forbidden S$^{\rm e}$ -- S$^{\rm o}$ transitions.
In broad terms, the collisional cross-sections compare fairly well with available empirical data (within the experimental error bars, see Fig. 2 in Barklem 2007a). Although admittedly still subject to uncertainties due to e.g.~\/ pseudo-resonances at higher energies, they should be more realistic than previous estimates. They are in general larger than the corresponding rates for H collisions by a few orders of magnitude, except for fine-splitting levels\footnote{We account for fine splitting also for electron-impact data, by using Barklem's calculations (which originally refer to grouped levels) according to conservation of total rates}. We stress here that the cross-sections for electron-impact collisions in the radiatively forbidden transition 3s $^3$S$^{\rm o}$ -- 3s $^5$S$^{\rm o}$ are among the largest in the calculations of Barklem (2007a). In particular (see Fig. 1 in that work) they are larger than the other values for transitions arising from the lowest levels (and between the first seven energy levels in our atomic model). Due to the deficiency of metals, and thus the decreased number of free electrons in their atmosphere, rates due to collisions with neutral H atoms may be particularly important in low-metallicity stars. The approximation we adopted for excitation and ionization via inelastic collisions with H\,{\sc i}\, is based on recipes by Steenbock \& Holweger (1984), which further generalize results in Drawin (1968, 1969), where Thomson's classical theory for electron-atom encounters was applied for collisions between identical particles. The approach is admittedly less than ideal, but unavoidable due to the lack of relevant experimental data and theoretical calculations (see e.g. the discussion in Asplund 2005). In fact, quantum mechanical calculations of rate coefficients exist for only very few electronic transitions of oxygen atoms induced by collisions with H (Krems, Jamieson \& Dalgarno 2006; Abrahamsson et al.
2007), and they are unfortunately often of limited applicability for stellar work due to the low range of temperatures explored. Thus, in order to estimate the size of the main uncertainties in the calculations, we have carried out, similarly to what was done in Fabbian et al. (2006), different non--LTE calculations with a scaling factor S$_{\rm H}$ for transitions between all levels, including radiatively forbidden ones (by adopting a minimum f-value of $10^{-3}$ in that case). \vspace{0.7cm} \section{Results} \subsection{Non--LTE mechanisms} \label{fabss:mech} \begin{flushleft} \begin{figure} \begin{center} \includegraphics[width=9cm,height=12cm]{figure2.ps} \caption{Departure coefficients in the level population of the lowest energy levels (up to 3p $^3$P) in our oxygen model atom (for the case without H collisions) are shown respectively for the Sun (upper panel) and a metal-poor turn-off atmospheric model with \mbox{[Fe/H]}$=-3$ (lower panel). Note the very different scales for the y-axes in the two panels. The thicker lines indicate the departure coefficients for lower and upper levels (solid and dashed curve respectively) in the $777$~nm triplet. Note that only one curve is plotted for the latter levels, because the small differences in the departure coefficients among those fine structure levels do not play an important role here\label{fabf:betas}} \end{center} \end{figure} \end{flushleft} The behaviour of the departure coefficients $\beta_i$ (defined as the ratio of the populations in non--LTE and LTE, $\beta_i=n_i/n_i^{\rm LTE}$) with atmospheric depth for various energy levels in the atomic model is shown in Fig.\,\ref{fabf:betas} for the Sun and a metal-poor turn-off model. Non--LTE effects for the O\,{\sc i}\, $777$\,nm lines at solar metallicity are mainly caused by source function dilution, in the atmospheric layers where the lines form, with respect to the Planck function that represents the LTE expectation.
This is due to radiative losses in the triplet itself causing line strengthening compared to LTE. As seen in Fig.\,\ref{fabf:betas}, the upper level (3p $^5$P) of the transition gets underpopulated, while the lower level (3s $^5$S$^{\rm o}$) gets overpopulated. If stimulated emission is neglected, this driving effect can be well described as $S_l/B=\beta_u/\beta_l$. The two-level approximation will give a reasonable description of the non--LTE line formation in the Sun, implying that it is the processes in the transition itself that matter most. At low metallicity instead, particularly at the highest temperatures explored, that approximation breaks down. The upper level (3s $^3$S$^{\rm o}$) of the $130$~nm O\,{\sc i}\, resonance lines is in effect metastable, since radiative de-excitation will be immediately followed by an excitation (due to large opacity in those lines). Thus, very large overpopulation compared to LTE develops in the level, because of the larger net rate for it than in LTE. This is caused by radiative transitions having this as their lower level (i.e. by photon losses in higher excitation levels), combined with the low rate coefficients out of the level. The 3s $^3$S$^{\rm o}$ overpopulation tends to propagate to the 3s $^5$S$^{\rm o}$ level via efficient intersystem collisional coupling, and line strengthening of the $777$~nm lines results, due to much increased line opacity. Given the large energy difference with the ground state, collisions to it from the levels of interest are inefficient in maintaining LTE. The dominant contribution to the non--LTE effect at low metallicity thus comes from line opacity, because the lower level of the IR triplet is strongly overpopulated with respect to LTE (Fig.\,\ref{fabf:betas}). The non--LTE abundance corrections then become $\sim -0.5$ dex at \mbox{[Fe/H]}$=-3$ for a typical metal-poor turn-off halo star, and up to $-0.9$~dex if neglecting H collisions. 
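The relation $S_l/B=\beta_u/\beta_l$ quoted above follows directly from the definitions (a sketch, assuming the Wien regime so that stimulated emission can be neglected): with $n_i=\beta_i\,n_i^{\rm LTE}$ and the LTE Boltzmann ratio $g_l n_u^{\rm LTE}/(g_u n_l^{\rm LTE})=e^{-h\nu/kT}$, the line source function becomes
\begin{displaymath}
S_l=\frac{2h\nu^3/c^2}{g_u n_l/(g_l n_u)-1}\approx\frac{2h\nu^3}{c^2}\,\frac{g_l n_u}{g_u n_l}=\frac{2h\nu^3}{c^2}\,\frac{\beta_u}{\beta_l}\,e^{-h\nu/kT}\approx\frac{\beta_u}{\beta_l}\,B_{\nu},
\end{displaymath}
so that an underpopulated upper level together with an overpopulated lower level ($\beta_u<\beta_l$) directly yields $S_l<B$, and hence line strengthening.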
As seen in Fig.\,\ref{fabf:mc_fig}, in non--LTE the line formation at very low metallicity is shifted to higher atmospheric layers. Since the population of the lower level in the oxygen $777$\,nm triplet transition rises very steeply with respect to the LTE expectation when moving outward in the atmosphere, the $\beta$ departure coefficient will become large. Fig.\,\ref{fabf:mc_fig} also shows that the effect (on line-center flux) of photon losses in the line, causing the source function to drop below the Planck function, is small at low \mbox{[Fe/H]}. The opacity effect dominates, with optical depth unity moving up in the atmosphere because of the overpopulation of the lower level of the transition. The large line opacity at the shallower depths where line formation occurs in non--LTE will thus make the absorption stronger. \begin{flushleft} \begin{figure*} \begin{center} \includegraphics[width=15cm,height=8cm]{figure3a.ps} \includegraphics[width=15cm,height=8cm]{figure3b.ps} \caption{The various panels refer to the oxygen feature at $777.2$~nm\ (the bluest and strongest of the O\,{\sc i}\, triplet lines), for \mbox{[Fe/H]}$=-3.00$ (top panels) and $-3.50$ (bottom panels). The atmospheric models have \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4.00$ [cgs] and solar-scaled, alpha-enhanced ([O/Fe]$=+0.4$) oxygen abundance. The case with no hydrogen collisions is shown here. {\it Top}: the resulting LTE (dashed curve) and non--LTE (solid curve) flux profiles, normalized to continuum flux, for the spectral feature considered are shown in the left panel, while normalized values of B (dashed), S (solid) and S$_l$ (dash-dot) are shown in the right panel. The vertical scale in the two cases is therefore the same. The left panel illustrates how at low metallicity the line becomes much stronger in non--LTE compared to LTE. 
In the right panel, vertical lines show the continuum (dotted), LTE line center (dashed) and non--LTE line center (solid) Eddington-Barbier values of $\tau_{\nu}$ for which the continuum flux corresponds to the Planck function at that depth (i.e. F$^{cont}_{\nu}$=$\pi$S$_{\nu}$($\tau^{cont}_{\nu}$=E.B.)). The corresponding starred symbols on the various curves therefore represent the Eddington-Barbier formation height and flux, respectively for continuum, LTE line center and non--LTE line center. Note that the Eddington-Barbier value is $\tau^{cont}_{\nu}=2/3$ exactly when the source function varies linearly with optical depth, while it is slightly smaller in this case. The right panel shows that at low metallicity the source function effect is small. {\it Bottom}: as above, but for \mbox{[Fe/H]}$=-3.50$ \label{fabf:mc_fig}} \end{center} \end{figure*} \end{flushleft} Even with decreased free electrons at low metallicity, sufficient intersystem collisional coupling is maintained between levels of similar excitation in the triplet and quintet systems. The 3s $^3$S$^{\rm o}$ and the 3s $^5$S$^{\rm o}$ levels are in fact very close energetically and, despite the corresponding transition being radiatively forbidden, they are strongly coupled via collisions, which explains why their departure coefficients behave similarly, as seen in Fig.\,\ref{fabf:betas}. Due to the decreased number of electrons and the small rates for collisional transitions with larger energy separation, electron collisions become increasingly ineffective at driving the line formation closer to LTE by depopulating these two levels through other channels, e.g. to the ground state. The less efficient thermalization via impacts is thus unable to balance the tendency of the triplet levels to overpopulate. In addition, the lower level of the $777$~nm transition is effectively metastable due to negligible net radiative rates to the ground state.
Among the IR O\,{\sc i}\, triplet lines, the strongest (bluest) suffers generally more severe non--LTE effects. This is because it has relatively more important photon losses and forms further out, where departures from LTE are largest. The difference in the estimated abundance correction among the $777$~nm lines usually remains within $0.05$~dex, but grows towards solar metallicity. The abundance corrections differ by as much as $\sim -0.15$~dex at \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=2$, \mbox{[Fe/H]}$=0$, where very large non--LTE effects are found. Thus, adopting the same correction for all three lines is not appropriate, especially in such cases, and detailed non--LTE calculations are necessary. This is due to substantially different line formation depths, giving larger non--LTE effects for the line that is formed further out, i.e. the strongest (bluest) one, because of the outward increase of the departure coefficients. Across all of the parameter space explored, the upper level (3s $^3$S$^{\rm o}$) in the O\,{\sc i}\, UV resonance lines tends to get overpopulated. Large radiation excess is maintained in the resonance lines, becoming particularly strong at low metallicity, for high gravity models. In the $777$~nm lines instead, the ratio of J over B tends to be $< 1$ due to photon losses. This is relatively more important in stars like the Sun, and causes the upper levels in the $777$~nm triplet to depopulate. Photoionization was found not to be important across the parameter space explored in this study. \vspace{0.7cm} \subsubsection{Sensitivity of line formation: main atomic transitions} \begin{flushleft} \begin{table} \centering \caption{Results (for the strongest triplet line at $777.2$~nm) obtained using our 54-level atomic model. The non--LTE and LTE equivalent widths are given to show the sensitivity to various mechanisms.
The changes {\it with respect to the standard case} (\mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$, \mbox{[Fe/H]}$=-3$, $\xi=$1, $\log\,\epsilon_{\rm O}=6.5$, and including the collisional data of Barklem 2007a) are: decrease in metallicity by half dex (to \mbox{[Fe/H]}$=-3.5$); and decrease in oxygen abundance by half dex (to $\log\,\epsilon_{\rm O}=6.00$). For the case including H collisions, additional tests are, following the order listed: removing the $130$~nm O\,{\sc i}\, resonance lines; removing also the intersystem $136$~nm UV O\,{\sc i}\, lines; removing all of the new electron collisions by Barklem except in the radiatively forbidden transition between levels 3s $^3$S$^{\rm o}$ and 3s $^5$S$^{\rm o}$; including all the new electron collisions by Barklem except the previous one; no 3s $^3$S$^{\rm o}$ and 3s $^5$S$^{\rm o}$ coupling (i.e. removing both electron and H collisions between those levels); use of old electron collision data from 23-level model atom of K93 instead of Barklem's data; same as previous but with new electron collision data between levels 3s $^3$S$^{\rm o}$ and 3s $^5$S$^{\rm o}$; old electron data but no H collision coupling between the latter levels; neglect of UV background line opacities; exclusion of the opacity contribution by the blending H Ly-$\beta$ line; and decrease by $1$~dex in Si abundance\label{fabt:df54}} \begin{tabular}{l|cc} & W$_{\rm non-LTE}$ & W$_{\rm LTE}$ \\ & [m\AA] & [m\AA] \\ \hline \hline {\bf S$_{\bf{H}}\bf=0$} & & \\ & & \\ standard case & 14.2 & 2.9 \\ \mbox{[Fe/H]}$=-3.5$ & 10.9 & 1.0 \\ $\log\,\epsilon_{\rm O}=6.0$ & 6.3 & 1.0 \\ \hline {\bf S$_{\bf{H}}\bf=1$} & & \\ & & \\ standard case & 6.7 & 2.9 \\ \mbox{[Fe/H]}$=-3.5$ & 4.2 & 1.0 \\ $\log\,\epsilon_{\rm O}=6.0$ & 2.9 & 1.0 \\ no $130$~nm O\,{\sc i}\, lines\, & 3.9 & 2.9 \\ no $130$~nm, no $136$~nm O\,{\sc i}\, lines & 3.6 & 2.9 \\ 3s -- 3s Barklem e$^-$ & 7.2 & 2.9 \\ all Barklem e$^-$ but 3s -- 3s & 6.5 & 2.9 \\ no 3s -- 3s coupling & 5.2 & 
2.9 \\ old e$^-$ collisions & 6.2 & 2.9 \\ old e$^-$ collisions, 3s -- 3s Barklem e$^-$ & 6.4 & 2.9 \\ old e$^-$ collisions, no O+H 3s -- 3s & 5.0 & 2.9 \\ no UV background line opacity & 7.8 & 2.9 \\ no H Ly-$\beta$ opacity & 6.8 & 2.9 \\ Si*0.1 & 9.4 & 2.9 \\ \hline \hline \end{tabular} \end{table} \end{flushleft} \begin{flushleft} \begin{figure*} \begin{center} \caption{ (only available online) Results from multi-{\small MULTI} runs for a solar model, and for a star with \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4.00$ and \mbox{[Fe/H]}$=-3$, when neglecting H collisions. \label{fabf:multiMULTI_noH}} \end{center} \end{figure*} \end{flushleft} In Table \ref{fabt:df54} we provide the equivalent widths resulting from some of the various tests carried out on our 54-level atomic model in order to check the sensitivity of the $777$~nm triplet line formation to various transitions and processes in the atomic model. The {\it standard case} shown (\mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$, \mbox{[Fe/H]}$=-3$, $\xi=$1, $\log\,\epsilon_{\rm O}=6.5$) includes the new electron-impact data by Barklem (2007a). It results in $\Delta\log\,\epsilon_{\rm O} \sim -0.9$ and $\sim -0.5$~dex without and with the inclusion of H collisions respectively. The corresponding non--LTE abundance corrections at \mbox{[Fe/H]}$=-3.5$ increase to $\Delta\log\,\epsilon_{\rm O} \sim -1.2$ and $\sim -0.9$~dex respectively. The modifications were performed on this {\it standard case}, as listed in Table \ref{fabt:df54}. From the results there shown, it is clear that the most important transitions are the O\,{\sc i}\, resonance lines (together with the corresponding UV background opacity), and the intersystem collisional coupling between the 3s $^3$S$^{\rm o}$ and 3s $^5$S$^{\rm o}$ states. The model atom employed in the {\it standard case} is also the one we then used to derive our non--LTE corrections across the whole parameter space (see Section\,\ref{fabss:paramspace}). 
We also carried out ``multi-{\small MULTI}'' tests, with results for the case without H collisions shown in Fig.\,\ref{fabf:multiMULTI_noH} (only available online). Such tests reveal the relative effect on the O\,{\sc i}\, $777$~nm triplet line strength of individually multiplying each of the different radiative and electron-collisional rates by a factor of two. One can thus grasp the impact of the different atomic transitions on the formation of the lines of interest. As seen in Fig.\,\ref{fabf:multiMULTI_noH}, and as found by previous authors (e.g. Kiselman 1991, 1993), the formation of the O\,{\sc i}\, triplet in the Sun is mostly influenced by photon losses in those lines and can be described reasonably well by a two-level approximation. We stress, however, that our calculations also show an additional contribution from line opacity, due to the lower level of the triplet transition becoming overpopulated. This increases the resulting non--LTE effects for the Sun beyond the value usually adopted, up to $\sim -0.3$~dex when neglecting H collisions. At these solar or moderately low metallicities, the line opacity effect in the O\,{\sc i}\, IR triplet starts to dominate (over that caused by the source function drop) for hot giants. It becomes much more pronounced at very low metallicity, where we find an increase in the size of the abundance corrections with increasing \mbox{$T_{\rm{eff}}$}\, and decreasing \mbox{[Fe/H]}, related, respectively, to increased level overpopulation via collisions and to decreasing continuum opacity. An increase in gravity at metallicities below \mbox{[Fe/H]}$\sim -2$ will also tend to cause larger non--LTE corrections. This effect is related to the increased sensitivity of the line formation process to intersystem collisional coupling in that regime.
Thus, while in the Sun both the source function drop and the line opacity are important, with the former predominant, the metal-poor turn-off model is mostly affected by the latter. The effect of the line source function dilution does not change significantly with \mbox{[Fe/H]}\, at very low metallicity, its contribution to line strengthening remaining $\lae 10\%$. For parameters typical of metal-poor turn-off stars, we find a trend of increasing non--LTE corrections with decreasing \mbox{[Fe/H]}. This is caused by the behaviour of the continuous opacity at the UV wavelengths of interest (i.e., around the O\,{\sc i}\, $130$~nm features), which decreases substantially (by up to $\sim 15$\%) across the line formation layers when the metal content goes from \mbox{[Fe/H]}$=-3$ to $-3.5$. Even though the contribution of Rayleigh scattering to total continuous opacity is dominant for layers around and above those where the $777$~nm features are formed, it does not vary when metallicity decreases from \mbox{[Fe/H]}$=-3$ to $-3.5$. Instead, it is the decreased contribution by Si photoionization to total continuous opacity when [Fe/H] decreases (see Fig.\,\ref{fabf:out}) that deprives the photons in the O\,{\sc i}\, resonance lines of an alternative route. Their destruction probability via the photoionization channel therefore decreases. As a consequence, the overpopulation (driven by photon losses in higher-excitation lines) of the upper level in the O\,{\sc i}\, resonance lines tends to increase. At the same time, the lower level of the $777$~nm transition will correspondingly become more overpopulated due to efficient intersystem collisional coupling. At low metallicity, the non--LTE effect on the O\,{\sc i}\, $777$~nm triplet feeds on the flow via collisions from the 3s $^3$S$^{\rm o}$ state.
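The role of the continuous absorption can be summarized with a simple schematic (our illustrative notation, not a quantity computed explicitly in the code): the probability that a resonance-line photon is removed through the background continuum rather than re-emitted in the line scales roughly as
\[
P_{\rm dest} \;\sim\; \frac{\kappa^{\rm abs}_{\rm c}}{\kappa^{\rm abs}_{\rm c} + \sigma^{\rm sc}_{\rm c} + \kappa_{\rm line}},
\]
where $\kappa^{\rm abs}_{\rm c}$ is the true continuous absorption (here dominated by the Si bound-free contribution), $\sigma^{\rm sc}_{\rm c}$ the continuum scattering, and $\kappa_{\rm line}$ the line opacity. A drop in the Si contribution thus directly lowers $P_{\rm dest}$ and allows the overpopulation to build up.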
The occupation numbers of the lower level of the IR oxygen triplet thus stay closely coupled to the large overpopulation in the upper level of the UV $130$~nm resonance lines, a situation which LTE cannot predict. At \mbox{[Fe/H]}$\sim -3$, the triplet lines become stronger when increasing the radiative rates for the resonance lines, because that causes larger overpopulation in level 3s $^3$S$^{\rm o}$, and in the lower level of the $777$~nm triplet as a consequence of collisional coupling. Note that in the figure, we evaluate the net flow (indicated by the direction of the arrows for each transition) where $\tau=1$ at line center for the $777$~nm triplet. This generally gives a good indication of the driving mechanisms. However, at low metallicity, one can note that the radiative rate in the triplet is upward. A check reveals that in this case the rates actually change sign across the atmosphere. While at $\tau=1$ at line center there is a small upward flow, the net rate becomes downward in deep layers. This last effect dominates, explaining the overall line strengthening, as marked in the corresponding figure, when increasing the radiative rate in the O\,{\sc i}\, IR triplet. Excluding the transition itself, the most significant impact on the triplet lines at low metallicity is from pumping in the O\,{\sc i}\, $130$~nm resonance lines. Concerning electron collisions, the main effects at low metallicity are a strengthening of the line through relatively efficient triplet--quintet intersystem coupling\footnote{As seen in Fig.\,\ref{fabf:multiMULTI_noH}, this channel gives instead a weakening effect in the Sun.
The reason is clear from the previously discussed Fig.\,\ref{fabf:betas}, which shows that in the Sun, the departure coefficient is larger for 3s $^5$S$^{\rm o}$ (lower level of the $777$~nm transition, in the quintet system) than for 3s $^3$S$^{\rm o}$ (upper level of the O\,{\sc i}\, $130$~nm UV resonance lines, in the triplet system). An increase of this collisional coupling channel in the Sun will therefore {\it decrease} the overpopulation and thus the opacity effect for the $777$~nm transitions, making those lines form deeper in, where smaller non--LTE effects are felt.} between the similarly highly-excited levels 3s $^3$S$^{\rm o}$ and 3s $^5$S$^{\rm o}$ (which helps to propagate the overpopulation from the former state to the IR oxygen triplet), and a weakening of the line via collisions to the ground state. At low metallicity, both the lower (3s $^5$S$^{\rm o}$) and upper (3p $^5$P) levels of the triplet lines become strongly overpopulated, but by different amounts. \subsubsection{Importance of O\,{\sc i}\, UV transitions and background line opacities} \label{fabsss:UVopacity} According to our calculations, even in metal-poor stars, at least for frequencies near their quasi-saturated core, the extremely optically thick $130$\,nm O\,{\sc i}\, resonance lines (2p$^{4}$ $^{3}$P$_{2,1,0}$ -- 3s $^{3}$S$^{\rm o}$) form in the very topmost layers of the photosphere as strong absorption features. Despite this, their thermalization depth lies much deeper than the layers where the monochromatic optical depth in the lines reaches unity, owing to the very large amount of scattering. In particular, $J/B \gg 1$ in the layers where the $777$~nm lines are forming. The properties of scattered photons are decoupled from the local conditions, a typical non--LTE situation.
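This situation is captured by the textbook two-level-atom form of the line source function (quoted here only as a schematic; our actual calculations solve the full multi-level problem):
\[
S_{l} \;=\; (1-\epsilon)\,\bar{J} \;+\; \epsilon\,B_{\nu}(T_{\rm e}),
\]
where $\bar{J}$ is the profile-averaged mean intensity and $\epsilon$ the collisional destruction probability. When scattering dominates ($\epsilon \ll 1$), $S_{l}$ follows $\bar{J}$ rather than the local Planck function, so $J/B \gg 1$ translates directly into overexcitation of the upper level.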
The fact that the O\,{\sc i}\, resonance lines are so optically thick even at low [Fe/H] means that at the formation layers of interest, collisional de-excitation from their upper (metastable) level to the lower level of the $777$~nm transition will be favoured with respect to de-excitation to the ground state. This situation causes large non--LTE effects at low metallicity. When removing the UV O\,{\sc i}\, resonance lines, the non--LTE effects remain important (see Table \ref{fabt:df54}) due to photon losses from higher levels. Previous studies (e.g., K93; Gratton et al. 1999) found that UV radiation should not play a major role in the formation of the IR oxygen triplet in very metal-poor stars. They were, however, lacking the detailed quantum mechanical calculations for intersystem electron collisions which we employ here and which cause increased non--LTE effects. Background absorption from UV lines of other elements mitigates (by providing an escape route other than collisional de-excitation and collisional transfer to the quintet system) but does not extinguish this effect. The exact amount of bound-free absorption from different metals is however still debated (e.g. Balachandran \& Bell 1998; Asplund 2004), due to the possibility that continuum rates, e.g. for Fe\,{\sc i}, may be larger than usually adopted. As noted by Allende Prieto, Hubeny \& Lambert (2003), predicting a realistic flux in the UV is more complicated than for visible or IR wavelengths. Knowledge of accurate bound-bound and photoionization cross-sections and thresholds for metals becomes important, with a large quantity of accurate atomic data required for the modelling. Below $\sim 1700$~\AA, Si\,{\sc i}\, bound-free absorption plays an important role. Rayleigh scattering by hydrogen atoms is also important in the ultraviolet for cool stars, in particular at low metallicity.
In addition to bound-free metal edges, a sharp increase in opacity is expected at short UV wavelengths also due to the haze caused by background line absorption. Neglecting these effects leads to a serious overestimate of the UV radiation field within the upper atmospheric layers of late-type stars. \begin{flushleft} \begin{figure} \begin{center} \includegraphics[width=9cm,height=12cm]{figure5.ps} \caption{Contributions to total continuous opacity around the $130$~nm O\,{\sc i}\, lines for a \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$, \mbox{[Fe/H]}$=-3$ model atmosphere. The absorption and the scattering part of total opacity are shown by the two thick dashed curves, as marked. For absorption, the importance of its main components (Si, H$^-$, H$_{2}^+$ and H\,{\sc i}) is depicted. Total continuous opacity itself is shown by the thick solid curve. For comparison, we also show (small circles) the reduced contribution of Si at \mbox{[Fe/H]}$=-3.5$ and the consequently reduced total continuous opacity (large circles). The other sources of opacity in the figure do not change significantly with this decrease in metallicity\label{fabf:out}} \end{center} \end{figure} \end{flushleft} Fig.\,\ref{fabf:out} shows the main contributions to total continuous opacity for UV wavelengths around the O\,{\sc i}\, resonance lines, for a {\small MARCS} model having \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$, \mbox{[Fe/H]}$=-3$ and $\log\,\epsilon_{\rm O}=6.5$. As seen, scattering is roughly constant across the atmospheric model. Its contribution to total opacity dominates down to the atmospheric layers ($\log\,\tau_{500} \sim -0.5$) where the IR oxygen triplet lines form. The contribution of absorption, even though smaller, is still significant at such layers, predominantly thanks to the Si\,{\sc i}\, bound-free edges. It is this component which proves important in understanding the trends of the non--LTE corrections at low metallicity, since it is the only one which varies significantly with \mbox{[Fe/H]}.
At \mbox{[Fe/H]}$=-3.5$, its contribution to absorption for the atmospheric layers of interest reduces to $\sim 35\%$ of the value at \mbox{[Fe/H]}$=-3$. Thus, total continuous opacity becomes significantly smaller. No other metal component plays a significant role. Deeper in, absorption takes over, due to H$^-$ and H$_{2}^+$ at first, and then to the rapid increase of neutral hydrogen photoionization. The larger continuum absorption at [Fe/H]$=-3$ compared to [Fe/H]$=-3.5$ reduces the overpopulation in the upper state of the UV O\,{\sc i}\, resonance transitions because it provides an alternative route to photons in those lines (absorption by the continuum instead of by the line itself). The non--LTE corrections are found to be very sensitive to the UV radiation field. Thus, background opacity caused by lines of other elements has been accounted for in the case of important transitions, in particular the contribution of H Ly-$\beta$ and of features around the O\,{\sc i}\, $130$~nm resonance lines. The latter inclusion has proven particularly important because it significantly reduces the otherwise even more extreme non--LTE effects (see Table \ref{fabt:df54}). In total, 151 lines from VALD were used, around the 130, 103 and 136\,nm transitions\footnote{Respectively accounting for, in order of decreasing importance, the effect of features around the resonance lines, of absorption due to H Ly-$\beta$, and of lines falling close to the intersystem oxygen lines}. For these background features, the line source function was assumed to be equal to the Planck function. This approximation is also sufficient to describe the case of pumping by hydrogen Ly-$\beta$ radiation. This was tested by solving the radiation transfer problem for the hydrogen atom and taking the resulting $J$ field as input for the non--LTE calculations on oxygen. The results showed only very small differences with respect to our approximation for the formation of the oxygen triplet lines of interest here.
Note that although H Ly-$\beta$ gives the single largest contribution to background line opacity, it is the total contribution of the several blending metal lines falling around $130$~nm which is most important: in particular, S\,{\sc i}\, $130.23$~nm, which falls close to the line center of the bluest O\,{\sc i}\, resonance line; Si\,{\sc ii} $130.44$~nm, which is located in the blue wing of the central resonance line but also affects the $103.22$~nm O\,{\sc i}\, line by depressing the continuum; and S\,{\sc i} $130.59$~nm, which falls close to the center of the reddest resonance line. In addition, a number of lines of Fe, Ca, P and other elements give a smaller contribution to the line haze. The lines around the O\,{\sc i}\, UV intersystem lines are instead not important. Our study reveals that the inclusion of the UV opacity contribution does not have a significant impact on the $777$~nm triplet lines in the Sun, where the non--LTE effect is largely controlled by processes in those same IR lines. \subsubsection{Effects of collisions with electrons and hydrogen} \label{fabsss:coll} \begin{flushleft} \begin{figure} \begin{center} \includegraphics[width=9.5cm]{figure6.ps} \caption{Net rates contributing to the balance of levels 3s $^5$S$^{\rm o}$ ({\it left}) and 3s $^3$S$^{\rm o}$ ({\it right}). Lines with and without diamond symbols indicate the contributions to a given rate from collisions and from radiation, respectively. Only the rates with a large influence on the level population are shown (their summed contribution is represented by crosses). The thick horizontal line at dn/dt$=0$ marks the statistical equilibrium expectation that the total sum of {\it all} rates to and from a given level is equal to zero. The atmospheric model has \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$ and $\log\,\epsilon_{\rm O}=6.5$.
Plots are for \mbox{[Fe/H]}$=-3$ (upper panels) and $-3.5$ (lower panels), respectively\label{fabf:levels5and6_feh}} \end{center} \end{figure} \end{flushleft} Figure \ref{fabf:levels5and6_feh} shows the net rates at low metallicity for the high-excitation levels 3s $^5$S$^{\rm o}$ and 3s $^3$S$^{\rm o}$. The figure clearly illustrates how the balance in the population of the lower level of the $777$~nm oxygen triplet is mainly determined by an incoming flow due to collisional coupling (which transmits the overpopulation present in the 3s $^3$S$^{\rm o}$ state), moderated by radiative excitation in the IR lines. We have included in our model atom a homogeneous set of electron-impact excitation data, thanks to the availability of recent theoretical quantum mechanical calculations (Barklem 2007a). Table \ref{fabt:df54} shows the results of testing various modifications to the atomic model, including those for electron collisions. The effect of the new electron-impact data is to give larger non--LTE corrections due to increased intersystem coupling, in particular in the 3s $^3$S$^{\rm o}$ -- 3s $^5$S$^{\rm o}$ transition. This is an efficient channel due to its large cross-section (it is a spin-flip transition induced by an exchange interaction of the electrons) and its small ($\sim 0.38$~eV) energy separation (thus low threshold). Inelastic collisions with neutral H atoms as modelled here can play a crucial role at low metallicity: free electrons become scarce there, while the large neutral-hydrogen population density in late-type stars keeps the rates due to H collisions significant. Under the assumption that van Regemorter's and Drawin's formulae reproduce well the thermalization due to electron and H collisions, respectively, one derives that the ratio of the rates due to the two processes is a function of the energy separation between the levels involved and of temperature (Asplund 2005).
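To make the low-threshold argument concrete, one can evaluate the Boltzmann factor $\exp(-\Delta E/kT)$ that controls how strongly the threshold suppresses upward collisional rates. The following back-of-the-envelope sketch (an illustration only, not part of the {\small MULTI} calculations) compares the $\sim 0.38$~eV spin-flip separation with a typical optical-transition energy at \mbox{$T_{\rm{eff}}$}$=6500$~K:

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_factor(delta_e_ev, temp_k):
    """exp(-dE/kT): thermal penalty for collisionally exciting a gap dE."""
    return math.exp(-delta_e_ev / (K_B_EV * temp_k))

# 3s 3S^o -- 3s 5S^o separation (~0.38 eV) at Teff ~ 6500 K:
f_spinflip = boltzmann_factor(0.38, 6500.0)
# For comparison, a representative optical transition energy (~1.6 eV):
f_optical = boltzmann_factor(1.6, 6500.0)
print(f_spinflip, f_optical)
```

At these temperatures the spin-flip channel pays only a factor $\sim 2$ thermal penalty, roughly an order of magnitude less than the representative optical transition, which is why this channel couples the triplet and quintet systems so efficiently.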
These collisions can thus become particularly efficient for levels that are very close in energy. The approximate recipe available is based on work by Drawin (1968, 1969), which generalizes the modified classical Thomson formula to atom-atom collisions. Unfortunately, as already observed by Steenbock \& Holweger, Drawin's formula allows at best an order-of-magnitude estimate of the importance of H collisions. Significant discrepancies, of up to six orders of magnitude for some transitions, have been found when comparing with more reliable estimates (e.g. experimental data by Fleck et al. 1991; calculations for Na by Belyaev et al. 1999; for Li by Belyaev \& Barklem 2003; and for H+H by Barklem 2007b). The uncertainty in the efficiency of H collisions has a significant impact on non--LTE studies. There is some evidence (Allende Prieto, Asplund \& Fabiani Bendicho 2004) that for oxygen, relatively efficient collisions with H need to be included in statistical equilibrium calculations. We confirm the impact of their thermalizing effect, especially at low [Fe/H] (see Table \ref{fabt:df54}). They tend to mitigate the large overpopulation in the lower level of the $777$~nm triplet and thus reduce the line opacity. The non--LTE abundance corrections at low metallicity become considerably smaller (by $\apprge 50\%$). In the absence of more reliable calculations, the only way to gain some indication of the influence of H collisions has been to adopt the Drawin recipe. Even though likely not realistic, it may be useful in the case of oxygen, at least as a first approach, to investigate the impact of H-collision efficiency.
Given the available evidence pointing towards the Drawin formula generally overestimating the rates for individual transitions, the range of S$_{\rm H}=0 - 1$ adopted here would seem reasonable for our tests, and may be expected to give an idea of the uncertainties involved with respect to stellar abundance estimates from the $777$~nm O\,{\sc i}\, triplet. We also tested the effect of including the recent data by Krems, Jamieson \& Dalgarno (2006). Even though their estimated rates (using formula 10 in their paper) are larger than our adopted values, they did not have a significant impact on our results, since the transition they consider in the singlet and triplet systems ($^1$D--$^3$P) involves levels that follow LTE anyway. \vspace{0.7cm} \subsection{Overall non--LTE effects across the parameter space} \label{fabss:paramspace} \begin{flushleft} \begin{figure} \begin{center} \includegraphics[width=8.5cm,height=6.5cm]{figure7.ps} \caption{Non--LTE abundance corrections versus metallicity, for the strongest O\,{\sc i}\, IR $777$~nm triplet line. The lines connect models with given \mbox{$T_{\rm{eff}}$}\, and \mbox{log $g$}\, but varying metallicity. Different symbols denote, respectively, results for models representative of a normal dwarf (\mbox{$T_{\rm{eff}}$}$=5780$~K, \mbox{log $g$}$=4.44$, indicated by open circles), a normal turn-off star (\mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$, marked with filled diamonds) and an RR Lyrae star (\mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=2$, indicated by open squares). Dashed lines are for no H collisions, solid lines for a choice of S$_{\rm H}=1$.
Solar-scaled oxygen abundance with $\alpha$--enhancement below solar metallicity was adopted in the calculations, as indicated by the horizontal axis at the top of the figure\label{fabf:corr_trends}} \end{center} \end{figure} \end{flushleft} The high-excitation triplet lines are particularly sensitive to temperature and become much stronger, both in LTE and non--LTE, for larger \mbox{$T_{\rm{eff}}$}\ values, because the corresponding level population increases, i.e. there are more atoms with electrons in the proper energy level to produce the spectral lines. Surface gravity (pressure) controls the amount of recombination via the Saha equation, but also the relative importance of collisional processes with respect to radiative ones. The lines become stronger in lower-gravity models. Metal content (i.e. a smaller or larger number of absorbers) also controls the strength of the lines. An IDL routine\footnote{The routine and associated data are made available for general use. They are downloadable either as individual files, or as a single compressed file, from ftp://nlte:c1o1nlte@ftp.iac.es (at this same address the routine for carbon non--LTE corrections from Fabbian et al. (2006) is also available)} has been prepared to interpolate within the grid, making it possible to calculate detailed abundance corrections for any given star with parameters in the range covered. The non--LTE line strength is larger than in LTE across the parameter space explored, i.e., negative abundance corrections are found. Fig. \ref{fabf:corr_trends} shows to what extent the strongest (bluest) O\,{\sc i}\, triplet line at $777.19$~nm is affected by the resulting non--LTE corrections for some of the models (typical dwarf, typical turn-off star, and RR Lyrae star).
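The kind of lookup such a routine performs can be sketched in a few lines. The following minimal Python example (our own illustration, not the released IDL code) linearly interpolates the MARCS, S$_{\rm H}=0$ corrections from the \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$ column of Table \ref{fabt:kurmar}; the actual routine interpolates over all grid parameters, not just \mbox{[Fe/H]}:

```python
import numpy as np

# Non-LTE corrections Delta(log eps_O) for the central O I 777 nm line,
# MARCS models, Teff = 6500 K, log g = 4, S_H = 0 (values from the table):
feh_grid  = np.array([-3.5, -2.5, -2.0, -1.0, 0.0])
corr_grid = np.array([-1.21, -0.56, -0.19, -0.34, -0.50])

def nlte_correction(feh_star):
    """Linearly interpolate the non-LTE correction at a star's [Fe/H]."""
    return float(np.interp(feh_star, feh_grid, corr_grid))

print(nlte_correction(-3.0))  # lies between the -3.5 and -2.5 grid points
```

Interpolating at \mbox{[Fe/H]}$=-3$ returns $\simeq -0.9$~dex, consistent with the value quoted above for this turn-off model when neglecting H collisions.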
A qualitatively similar trend is seen, namely of decreasing abundance corrections for all models from solar metallicity to \mbox{[Fe/H]}$=-2$, and then (except when including H collisions in the RR Lyrae case) of increasingly severe non--LTE corrections towards very low metallicity, in particular for turn-off stars, where the resulting $\Delta\log\,\epsilon_{\rm O}$ reaches $\sim -0.9$~dex at \mbox{[Fe/H]}$=-3$ when neglecting H collisions. This reduces to $\sim -0.5$~dex if H collisions are included. For the atmospheric models considered, non--LTE corrections are less severe around \mbox{[Fe/H]}$=-2$, but still remain significant, reaching $\sim -0.4$~dex for the RR Lyrae star. Interestingly, especially for the latter, H collisions do not have much impact (the difference staying within $0.1$~dex for all models) unless \mbox{[Fe/H]}$<-2$. At higher metallicity, including H collisions in the RR Lyrae case actually makes the non--LTE abundance corrections ever so slightly more severe\footnote{It is appropriate to note that some authors have suggested calibrating H collisions on such stars (Clementini et al. 1995; Gratton et al. 1999). Even though RR Lyrae stars may be prone to very large non--LTE effects, our results indicate that, if adopting this strategy, it would be best to calibrate on metal-poor turn-off stars, where the sensitivity to the choice of H collisions is larger}. For the Sun, non--LTE corrections are $-0.3$~dex or less severe. In the RR Lyrae case, the abundance corrections at solar metallicity are very large, $\sim -1$~dex. \begin{flushleft} \begin{figure*} \begin{center} \includegraphics[width=8cm]{figure8a.ps} \includegraphics[width=8cm]{figure8b.ps}\\ \includegraphics[width=8cm]{figure8c.ps} \includegraphics[width=8cm]{figure8d.ps} \caption{Non--LTE abundance corrections versus metallicity, for the strongest O\,{\sc i}\, IR $777$~nm triplet line.
{\it Top}: \mbox{$T_{\rm{eff}}$}$=5500$~K; \mbox{log $g$}$=2$ ({\it left panel}) and \mbox{log $g$}$=4$ ({\it right panel}) respectively. {\it Bottom}: same, but for \mbox{$T_{\rm{eff}}$}$=6500$~K. Results shown in all four panels are computed neglecting collisions with H atoms. The non--LTE corrections tend to get large at high temperature, both in dwarfs and giants. They are significant also in cooler solar-metallicity giants\label{fabf:logo_trends}} \end{center} \end{figure*} \end{flushleft} At metallicities $-2 < \mbox{[Fe/H]}\, < 0$, the line source function roughly follows a two-level approximation and thus the non--LTE corrections will depend on line strength. This explains their increase in atmospheric models having higher \mbox{$T_{\rm{eff}}$}\, and \mbox{[Fe/H]}, and lower \mbox{log $g$}\, (see Fig.\,\ref{fabf:logo_trends}). In a more general sense (see Fig.\,\ref{fabf:logo_trends}, where we show our resulting non--LTE corrections for a range of atmospheric parameters across the grid explored), the abundance corrections increase in size for higher effective temperature atmospheric models due to increased photon losses and/or level overpopulation via collisions. Corrections are small for \mbox{$T_{\rm{eff}}$}$\le 5500$~K, except for solar-metallicity giants, where they can reach $\sim -0.6$~dex. The trend with \mbox{log $g$}\, and \mbox{[Fe/H]}\, is more complex. For higher metallicity models, the non--LTE line source function drop due to photon losses has a two-level nature, therefore naturally increasing for stronger lines. Thus, corrections tend to become larger with increasing temperature and metallicity/oxygen abundance and with decreasing \mbox{log $g$}\, in this higher metallicity range. Below \mbox{[Fe/H]}$\sim -2$ however, the influence of \mbox{log $g$}\, and \mbox{[Fe/H]}\, is the opposite. 
The non--LTE corrections tend to increase strongly towards very low metallicity and increasing gravity, suggesting that line formation is controlled by a distinct non--LTE mechanism compared to the higher metallicity regime. Even though the $777$~nm triplet lines get much weaker towards low metallicity, departures from LTE tend to increase, becoming very large for typical metal-poor turn-off stars. An increase in gravity at low metallicity tends to make the abundance corrections larger, by increasing the collisional rate in the important intersystem coupling channel. As mentioned in Sect. \ref{fabsss:UVopacity}, the non--LTE effect is controlled by the Si\,{\sc i}\, photoionization contribution to the continuum opacity. Thus, when the other parameters (\mbox{$T_{\rm{eff}}$}, \mbox{log $g$}, $\xi$ and $\log\,\epsilon_{\rm O}$) in the atmospheric model are kept constant, decreasing metallicity alone results in increasing non--LTE effects because of the smaller continuum opacity and the generally less efficient collisions (smaller contribution to the free electron pool). For example, even when adopting efficient H collisions (S$_{\rm H}=1$), the abundance corrections jump from $\sim -0.5$ to $\sim -0.85$~dex in a model with \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$ and $\log\,\epsilon_{\rm O}=6.5$, when moving from \mbox{[Fe/H]}$=-3$ to $-3.5$. Finally, oxygen abundance, while controlling the {\it strength} of the lines, does not have a crucial influence on the abundance corrections, with differences in $\Delta\log\,\epsilon_{\rm O}$ remaining within $\sim 0.1$~dex for a change of $0.5$~dex in $\log\,\epsilon_{\rm O}$. In summary, the non--LTE effects are particularly significant at the highest temperatures, in particular for low-gravity models at solar metallicity and for high-gravity models at very low metallicity. In both cases, when H collisions are neglected, the non--LTE abundance corrections reach up to $|\Delta\log\,\epsilon_{\rm O}|\sim 1$~dex.
The inclusion of collisions with hydrogen atoms between all energy levels generally reduces the effects substantially (by several tenths of a dex) at low metallicity, while at higher metallicity it has a smaller impact, because radiative losses in the line itself then dominate. The effects amount to $-0.3$~dex or less in the Sun, but are $-0.4$~dex or larger for hotter, solar-metallicity stars like Procyon. In addition, we have tested the influence of using ATLAS9 model atmospheres from Castelli \& Kurucz (2004) without ``convective overshooting'', instead of MARCS models. The temperature stratification in the MARCS and ATLAS models is generally thought to be quite similar in this parameter space (Gustafsson et al. 2008), so that one would expect the resulting non--LTE corrections to be not too different in magnitude. Any uncertainty from such supposedly small differences should be much less than that due to, say, the choice of H-collision efficiency or the use of the 1D approximation. We carried out a number of test calculations which show that, when using Castelli \& Kurucz atmospheric models describing the Sun and stars of intermediately-low metallicity, the resulting non--LTE abundance corrections remain in fact very similar (within a few hundredths of a dex) to those obtained using the corresponding MARCS models. The results for the most metal-poor cases reveal that for [Fe/H]$\le -2.5$, the non--LTE abundance corrections obtained at higher gravity become more significant with decreasing metallicity also for ATLAS model atmospheres. Table \ref{fabt:kurmar} lists the results obtained with the two sets of models. Compared to the MARCS case, the ATLAS non--LTE corrections, although still very significant, tend to be less dramatic. The largest discrepancy between results using metal-poor MARCS and ATLAS models is found at low gravity.
For both sets of atmospheric models, an LTE description of the O\,{\sc i}\, IR triplet is particularly unrealistic for the lowest-metallicity ([Fe/H]$=-3.5$) dwarf case at \mbox{$T_{\rm{eff}}$}$=6500$~K. Knowledge of how the two sets of models compare allows us to include in the next section a discussion of our results in relation to other studies which have employed ATLAS model atmospheres. Since it is almost impossible to disentangle and remove all the different systematics, often due to a lack of sufficient information in previous works, we have chosen to compare at face value with other non--LTE studies in the literature, and then to apply our results directly to relevant existing observations. A more detailed study comparing oxygen abundance corrections obtained with different sets of atmospheric models should however be carried out, in particular {\it at very low metallicity}, where our tests reveal that when using Castelli \& Kurucz models, the non--LTE effects are less severe compared to results obtained using MARCS models. Within the parameter space explored for the two sets of models, the largest differences (several tenths of a dex when neglecting H collisions) between the O\,{\sc i}\, $777$~nm triplet non--LTE corrections appear at the lowest metallicities, both for giant and dwarf models. \begin{flushleft} \begin{table} \centering \begin{tiny} \caption{Comparison of results obtained with representative cases from two different model atmosphere sets (ATLAS and MARCS), assuming a solar oxygen abundance of $\log\,\epsilon_{\rm O}=8.66$ (Asplund et al. 2004). We list the non--LTE corrections (obtained using our 54-level atomic model, respectively excluding and including the contribution of H collisions, i.e. S$_{\rm H}=0/1$) for the central line in the O\,{\sc i}\, IR triplet.
The adopted $\alpha$-element abundance for all atmospheric models below solar metallicity is enhanced by $+0.4$~dex\label{fabt:kurmar}} \begin{tabular}{lccc} \hline\hline \mbox{$T_{\rm{eff}}$} (K)$\setminus$\mbox{log $g$} & 5500$\setminus$4.5 & 6500$\setminus$2.0 & 6500$\setminus$4.0 \\ \hline & & & \\ & $\Delta\log\,\epsilon_{\rm O}$ & $\Delta\log\,\epsilon_{\rm O}$ & $\Delta\log\,\epsilon_{\rm O}$ \\ & {\bf (ATLAS)} & {\bf (ATLAS)} & {\bf (ATLAS)} \\ \hline \mbox{[Fe/H]}$=\,0.0$ & -0.20/-0.11 & -0.83/-0.86 & -0.53/-0.45 \\ \mbox{[Fe/H]}$=-1.0$ & -0.14/-0.05 & -0.64/-0.67 & -0.36/-0.31 \\ \mbox{[Fe/H]}$=-2.0$ & -0.11/-0.03 & -0.32/-0.35 & -0.20/-0.15 \\ \mbox{[Fe/H]}$=-2.5$ & -0.12/-0.03 & -0.28/-0.26 & -0.27/-0.16 \\ \mbox{[Fe/H]}$=-3.5$ & -0.43/-0.10 & -0.26/-0.18 & -0.87/-0.54 \\ \hline & & & \\ & $\Delta\log\,\epsilon_{\rm O}$ & $\Delta\log\,\epsilon_{\rm O}$ & $\Delta\log\,\epsilon_{\rm O}$ \\ & {\bf (MARCS)} & {\bf (MARCS)} & {\bf (MARCS)} \\ \hline \mbox{[Fe/H]}$=\,0.0$ & -0.18/-0.10 & -0.88/-0.92 & -0.50/-0.44 \\ \mbox{[Fe/H]}$=-1.0$ & -0.12/-0.05 & -0.63/-0.66 & -0.34/-0.29 \\ \mbox{[Fe/H]}$=-2.0$ & -0.11/-0.03 & -0.30/-0.31 & -0.19/-0.14 \\ \mbox{[Fe/H]}$=-2.5$ & -0.16/-0.04 & -0.42/-0.33 & -0.56/-0.34 \\ \mbox{[Fe/H]}$=-3.5$ & -0.52/-0.14 & -0.88/-0.53 & -1.21/-0.85 \\ \hline\hline \end{tabular} \end{tiny} \end{table} \end{flushleft} The smaller departure coefficients in the very low-metallicity Castelli \& Kurucz models are caused by reduced intersystem coupling (lower rates) via electron collisions, and thus less severe atomic level overpopulation. This is likely due to differences in the electron density and temperature stratification in deep atmospheric layers between the MARCS and ATLAS sets of very metal-poor models (temperature differences of up to around $400$~K at [Fe/H]$=-2.5$, for example).
In this context, the failure of the Kurucz models to provide matching oxygen abundances from the near-IR triplet and the forbidden line has previously been attributed to inconsistencies in electron density and temperature gradients of those models near the continuum-forming layers (Israelian et al. 2004). Compared to the MARCS models, we find that the effect of increasingly severe non--LTE corrections with decreasing metallicity starts to appear at lower metallicity in the Castelli \& Kurucz models (for example, reaching $\Delta\log\,\epsilon_{\rm O}\sim -0.7/-0.9$~dex with/without H collisions, for a \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$ and \mbox{[Fe/H]}$=-4$ model). We can only speculate that the two sets of models behave differently due to differences in opacities, and/or equation of state, and/or convection treatment. The issue of which set of 1D models is more suitable or realistic is still open (see discussion in Gustafsson et al. 2008) and of course, current further improvements in the modelling (e.g. use of three-dimensional model atmospheres) may prove crucial. We warn the reader that there will be systematic errors when applying, as is often done, corrections computed for one set of model atmospheres to LTE results obtained with a different set of models, for example due to differences between continuous opacities and ionization balances used in the construction of the model atmospheres and those employed for the non--LTE calculations. Therefore, the abundance corrections we derived in this paper should, strictly speaking, be applied only to results obtained with MARCS model atmospheres. However, given the above discussion, we suggest that they can also be safely applied to LTE abundances from Kurucz models down to intermediately-low metallicity.
\vspace{0.7cm}
\section{Comparison with other theoretical non--LTE studies for oxygen}
We now discuss our results in relation to those of previous work in the literature.
\subsection{Kiselman 1991, 1993, 2001; and related works}
\label{fabss:Kis_comparison}
Kiselman (1991, 1993, 2001) has studied and reviewed the non--LTE effects on oxygen lines. A comparison with those works, and with CJ93, Nissen et al. (2002), Akerman et al. (2004), and Garc\'{\i}a P{\'e}rez et al. (2006) is of significance. Our results are also relevant to the publications by Israelian et al. (2004), Shchukina, Trujillo Bueno \& Asplund (2005) and Mel\'{e}ndez et al. (2006), where the various authors used atomic data based on CJ93. In fact, all the works mentioned in this subsection are at least in part based on the (14- and 45-level) atomic models in that study. The CJ93 atomic models include estimates by van Regemorter (1962) and the impact approximation of Seaton (1962) for collisions involving radiatively allowed bound-bound transitions, and data available from various authors for radiatively forbidden transitions connecting to the ground state and to the continuum. Collisional coupling between the singlet, triplet and quintet systems is largely absent in those models due to lack of data for several transitions, in particular for intersystem coupling via the radiatively forbidden 3s $^3$S$^{\rm o}$ - 3s $^5$S$^{\rm o}$ transition. Although we too use the code {\small MULTI}, as in CJ93, ours is a more recent implementation (version 2.3), and we have made several other improvements in this work. Our atomic model includes a larger number of energy levels. More crucially with respect to the $777$~nm triplet non--LTE corrections, it features updated atomic data, and takes into account background opacities from lines of other elements in the region of the O\,{\sc i}\, UV lines.
Kiselman (1991) studied non--LTE effects on oxygen abundances in solar-type stars, finding that abundance corrections slightly decrease from solar metallicity to \mbox{[Fe/H]}$=-1$ but become increasingly large below \mbox{[Fe/H]}$\sim -2$, reaching as much as $\sim -0.9$~dex below \mbox{[Fe/H]}$\sim -2.5$ (see Fig. 3 in that work). Fig. 4 in Kiselman (1991) shows that for a very metal-poor stellar model (\mbox{[Fe/H]}$\sim -3.5$~dex) the corrections found were even more severe, with a magnitude of $|\Delta\log\,\epsilon_{\rm O}|\apprge 1.4$~dex. These results, in particular of non--LTE effects which gently decrease down to intermediately low metallicity and then steeply increase at very low metallicity, are {\it qualitatively} similar to those we obtain when neglecting H collisions, as Kiselman did. This is somewhat fortuitous, owing to the opposite effect on the abundance corrections at low [Fe/H] of our newly adopted electron collision data on the one hand, and of both our inclusion of UV background opacity and the increased completeness of our atomic model on the other.\footnote{Our tests show collisional coupling of high-lying levels to the continuum to be a significant line weakening channel at low metallicity.} These influences will not matter much for the Sun, where we too find that processes in the line itself are important, due to the line source function drop having a close to two-level character. Our initial tests with the atomic model by K93 revealed that intersystem electron collisions are crucial in the metal-poor regime. The {\it original} model in that work gives underestimated abundance corrections due to the absence of intersystem coupling. K93 found that the introduction of intersystem collisional coupling changed the equivalent width of the $777$\,nm triplet by a few percent, and concluded that it should therefore be of minor importance. We confirm this when testing on atmospheric models with parameters as in that study (for the Sun, Procyon and the metal-poor dwarf HD~140283).
However, our calculations on atmospheric models with both higher temperature and lower metallicity (typical parameters of very metal-poor turn-off stars) reveal a larger influence of intersystem coupling on the non--LTE formation of the O\,{\sc i}\, IR triplet in that range. This is because in these model atmospheres there is an increased overpopulation of level 3s\,$^3$S$^{\rm o}$, which can easily transmit to the (common) lower level of the triplet lines due to its similar excitation energy. For the Sun, our investigation obtains non--LTE abundance corrections of $\sim -0.25$~dex for S$_{\rm H}=0$, in reasonable agreement with the original results of K93. We derive a correction of $\sim -0.15$~dex when including inelastic H collisions according to the Drawin recipe. Compared to K93 (see also Kiselman 2001), we find that the lower level of the $777$~nm triplet gets more overpopulated and that its upper levels get less underpopulated. This effect was already noted via test calculations in Kiselman's work. It is due to atom completeness (i.e. to the combined influence of high-excitation energy levels), through recombination cascades driven by photon losses in high-lying infrared transitions. Coupling with the large population reservoir in the first ionization stage exists via photon suction through bound-bound transitions. The inclusion of high levels ensures that the consequent efficient downward flow of electrons can be well described. The balance between the contributions of source function drop and line opacity will be better described by our newly employed atomic model; we find an increased contribution of line opacity. Using our atomic model at very low metallicity (\mbox{[Fe/H]}$\le -3$), we find, in particular at higher effective temperature, that coupling between high-excitation levels and the continuum works in the opposite sense than in the Sun, causing an upward flow. Nissen et al.
(2002) derived $777$~nm triplet non--LTE corrections for the subset among their sample of metal-poor main sequence and subgiant stars in which those lines had been included in the observational setup. Using a previous version of our same code and the model atom in K93, and neglecting H collisions, they found moderate non--LTE corrections. The star with lowest metallicity (and also highest temperature) in their sample is LP815-43, for which they derive the largest non--LTE correction of $\Delta\log\,\epsilon_{\rm O} = -0.25$~dex. When neglecting H collisions as they do, we obtain $\Delta\log\,\epsilon_{\rm O} = -0.55$~dex. Note however that this is likely an overestimate and would bring the non--LTE determination of \mbox{[O/Fe]}\, to very low values. If including H collisions, the abundance correction becomes $\Delta\log\,\epsilon_{\rm O}\sim -0.4$~dex. It is interesting to note that, while their non--LTE corrections show a slight decrease with metallicity down to \mbox{[Fe/H]}$\sim -2.5$, there is a hint of significant increase in the effects at or below the metallicity of LP815-43, as stated at the end of their Section 4.3. This is the only star falling in the parameter range where we observe the very large increase in non--LTE corrections. Qualitatively, the trend with metallicity in the non--LTE corrections of Nissen and collaborators reproduces our finding at overlapping metallicities. Since we find collisional coupling not to play a major role until very low metallicity is reached, this explains why only the result for LP815-43 is significantly discrepant. The differences in the atomic models (namely, electron and H collisions, atom completeness, oscillator strengths) used in the two studies will indeed mostly matter in that regime. The results of Nissen et al. were used by Akerman et al. (2004) by interpolating the non--LTE corrections for relevant parameters, while Garc\'{\i}a P{\'e}rez et al.
(2006) performed tailored calculations in the same fashion as Nissen et al. The above conclusion -- namely, underestimated non--LTE effects at very low metallicity -- applies in those cases too. For all stars in the study by Garc\'{\i}a P{\'e}rez et al., we obtain (neglecting H collisions as they do) more severe non--LTE abundance corrections, however remaining within $\sim 0.05$~dex of their estimates. This would help bring their derived \mbox{[O/Fe]}\, values from the IR triplet into better agreement with those they determined from [O\,{\sc i}], however still without reaching full consistency between the abundance estimates from the two indicators. Shchukina, Trujillo Bueno \& Asplund (2005) neglected H collisions in their study of the impact of non--LTE and granulation effects on Fe and O in the metal-poor halo subgiant HD~140283. Their resulting 1D non--LTE corrections are smaller ($-0.18$~dex) than the correction they estimated for the Sun (which is similar to our value). This trend agrees with our result using an appropriate model atmosphere (\mbox{$T_{\rm{eff}}$}$=5690$~K, \mbox{log $g$}$=3.67$ and \mbox{[Fe/H]}$=-2.4$), for which we too find a smaller abundance correction ($-0.12$~dex) than our estimate for a solar model. Note that HD~140283 lies just outside the (higher effective temperature and lower metallicity) regime where we encounter large non--LTE effects. Moreover, Shchukina et al.'s reanalysis of its stellar parameters, aimed at solving the inconsistencies in the abundance determination, arrives at the conclusion that this star has significantly lower temperature and higher metallicity than previously thought.
\begin{flushleft}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{figure9.ps}
\caption{Non--LTE abundance corrections versus metallicity, for the O\,{\sc i}\, IR $777$~nm triplet. Open diamonds indicate the results of Mel\'endez et al.
(2006), while the other points mark our corrections for S$_{\rm H}=0$ and S$_{\rm H}=1$ (open and filled triangles, respectively).}\label{fabf:Meletal06}
\end{center}
\end{figure}
\end{flushleft}
Finally, regarding comparison with the work of Mel\'{e}ndez et al. (2006), they computed non--LTE corrections with the NATAJA code (see Shchukina \& Trujillo Bueno 2001), finding a roughly constant oxygen-to-iron overabundance (mean \mbox{[O/Fe]}$\sim +0.5$) from published data on unevolved metal-poor halo stars. Note that their Fig. 4 shows that the lowest-metallicity stars tend to have smaller \mbox{[O/Fe]}\, values than those found around [Fe/H]$\sim -2$. This may indicate (see Nissen et al. 2007) too high temperatures at low [Fe/H], a possibility either related to uncertainties in the reddening or to the not-yet settled \mbox{$T_{\rm{eff}}$}-scale itself. In any case, collisions with H were not included in their work (Mel\'{e}ndez, {\it private communication}). A comparison of the non--LTE corrections shows that theirs are significantly smaller at low \mbox{[Fe/H]}. Inspecting their Fig. 5, where they compare with the non--LTE corrections of Akerman et al. (2004), the star with the largest non--LTE effect ($\Delta\log\,\epsilon_{\rm O}\sim -0.35$~dex) is G64-12, which has the lowest metallicity in their literature sample. For a model with similar parameters as those used by them, we obtain a larger abundance correction ($\sim -0.5$~dex, or more if neglecting H collisions as they did). As seen in Fig.\,\ref{fabf:Meletal06}, while the non--LTE corrections of Mel\'{e}ndez and collaborators also tend to become more significant at [Fe/H]$< -2$, we find that the trend with decreasing metallicity is much more pronounced, with more significant non--LTE abundance corrections than found by those authors, thus likely destroying their claimed quasi-flat [O/Fe] trend with metallicity, even when adopting the results we obtained in the case of efficient H collisions.
The differences are likely related to the use of different codes, model atmospheres, opacities, and to our inclusion in the oxygen model atom of new electron-impact data.
\subsection{Gratton et al. 1999}
Using an earlier version of the same code employed here and a similar model atom to the one in K93, but Kurucz (1992) model atmospheres, Gratton et al. (1999) studied departures from LTE in F-K stars for a broad range of gravities and metallicities, including collisions with electrons and hydrogen. For the latter process, they adopted a very large efficiency (scaling factor S$_{\rm H} \sim 3.2$ in our notation), based on an empirical calibration on RR Lyrae stars, i.e. on the requirement that the $777$~nm and $616$~nm triplets should provide the same abundances. Forcing such agreement may however mean that other assumptions are folded into (and remain hidden in) the scaling parameter for H collisions. Based on previous evidence from solar investigations (Allende Prieto, Asplund, \& Fabiani Bendicho 2004), there is some indication that the use of Drawin's recipe with no scaling factor (i.e., S$_{\rm H}=1$) may be reasonable for the O\,{\sc i}\, IR triplet. Since collisions with neutral H atoms are generally considered the most important thermalizing process in late-type stars, Gratton et al.'s higher choice for S$_{\rm H}$ will give smaller non--LTE corrections than in our case. Indeed, those authors found relatively mild non--LTE effects on the O\,{\sc i}\, $777$\,nm triplet, in particular $\sim -0.1$~dex in the Sun. Their study reaches down to \mbox{[Fe/H]}$=-3$, where for hot, high-gravity atmospheric models they find the largest corrections (reaching $\Delta\log\,\epsilon_{\rm O}\sim -0.5$~dex) for the O\,{\sc i}\, IR triplet. Choosing atmospheric parameters in common between the two studies (\mbox{$T_{\rm{eff}}$}$=6000$~K, \mbox{log $g$}$=4.5$, \mbox{[Fe/H]}$=-3$, see their Fig.
12), and very efficient H collisions as they did, we too obtain small non--LTE abundance corrections (only $\sim -0.07$~dex in our case, i.e. even a few hundredths of a dex less severe than in Gratton et al.). However, if adopting S$_{\rm H} \le 1$, our non--LTE corrections will actually be larger ($\Delta\log\,\epsilon_{\rm O}\apprle -0.25$~dex). Differences with our study are therefore mostly due to our use of H collisions more in line with the majority of presently available evidence, and only to a lesser extent to employing different model atmospheres. It is interesting to note (their Fig. 12) that their non--LTE corrections for the hottest model used by them (\mbox{$T_{\rm{eff}}$}$=7000$~K) show a significant increase when going from \mbox{[Fe/H]}$=-2$ to $-3$. The results of Gratton and co-workers were for example included in Carretta, Gratton \& Sneden (2000) and in Primas et al. (2001), deriving a ``quasi-flat'' oxygen-to-iron overabundance with mean \mbox{[O/Fe]}$\sim +0.5$ at low metallicity. \subsection{Takeda 1992, 1994, 2003; Takeda \& Honda 2005} Takeda published a series of non--LTE studies of oxygen. In his investigations, rates due to collisions with H atoms were in general computed using the Drawin recipe without corrections. Takeda (1994) found $|\Delta\log\,\epsilon_{\rm O}| < 0.1$~dex for the Sun but $\sim -0.4$~dex for a hotter model representative of Procyon (for which, when including H collisions, we derive a correction of $-0.4/-0.5$~dex depending on the O\,{\sc i}\, triplet line considered). That author also attributed to non--LTE effects, at least qualitatively, the discrepancy between oxygen abundances from the triplet and forbidden lines generally found in studies of low-[Fe/H] halo stars. In Takeda's non--LTE analysis, the most oxygen-deficient (log O$=6.86$) calculation starts to show the signs of UV pumping in the resonance lines (see their Fig. 9 and discussion in the corresponding section). 
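The role of the S$_{\rm H}$ scaling factor that recurs throughout this comparison can be sketched schematically: in the statistical-equilibrium calculation, the Drawin-recipe H-collision rate for each transition is multiplied by S$_{\rm H}$ before entering the total collisional rate alongside the electron-collision term. The sketch below uses hypothetical rate values and does not compute the Drawin rates themselves.

```python
# Schematic only: how the S_H scaling factor enters the total
# collisional rate for one transition.  `c_electron` and `c_drawin`
# are hypothetical per-transition rates, not real atomic data.

def total_collision_rate(c_electron, c_drawin, s_h):
    """Total collisional rate: electron term plus the S_H-scaled
    Drawin H-collision term.  S_H = 0 switches H collisions off."""
    return c_electron + s_h * c_drawin

# S_H = 1 uses the unscaled Drawin estimate; Gratton et al.'s
# empirically calibrated S_H ~ 3.2 more than triples it.
```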
Takeda (2003) gives an analytical formula, independent of metallicity, to approximate the non--LTE abundance corrections resulting from his calculations, arguing that it will provide a fairly good approximation for lines with W$_{\lambda} < 100$~m\AA. For the Sun (see also Takeda \& Honda 2005) and for solar-type dwarfs at moderately low [Fe/H], our calculations including H collisions give similar results to those of Takeda. The mentioned formula by Takeda does not {\it explicitly} contain \mbox{[Fe/H]}. However, given its dependence on line strength, it will give generally less severe abundance corrections at low metallicity due to weaker lines, so that our non--LTE corrections become larger than Takeda's at low metallicity. Note that his Fig. 4 shows that estimated corrections are low and roughly constant ($\sim -0.1$~dex) for the stars considered in his comparison with Nissen et al. (2002). As discussed in Sect.~\ref{fabss:Kis_comparison}, we obtain roughly similar non--LTE corrections to Nissen et al. (2002), or even more severe effects for the lowest-metallicity star in their sample. Note however that when adopting efficient H collisions as in the study by Takeda, the results will be closer. The absence of a significant trend of the non--LTE corrections with decreasing metallicity found by Takeda (2003) is at variance with the conclusions of e.g. Carretta et al. (2000). Takeda argues that, even though in shallow atmospheric layers there is an appreciable increase of the non--LTE line opacity at low metallicity, this does not affect the deeply-forming $777$~nm O\,{\sc i}\, triplet, so that fine details of the atomic model will not matter, due to a close-to-two-level line source function. Note that Takeda does in fact find a metallicity dependence when neglecting H collisions. We find that at low metallicity, the two-level nature of the non--LTE effect breaks down.
In particular, for \mbox{$T_{\rm{eff}}$}$=6500$~K, \mbox{log $g$}$=4$, using Takeda's formula with our estimated $W_{\rm non-LTE}$ would give a constant $\Delta\log\,\epsilon_{\rm O}\sim -0.1$~dex at very low metallicity, compared to our results of $\sim -0.5$~dex and $\sim -0.9$~dex (respectively for \mbox{[Fe/H]}$=-3$ and $-3.5$) when adopting H collisions. In this respect, our abundance corrections would be more helpful in solving the oxygen discrepancy, bringing the abundances derived from the $777$~nm triplet into better agreement with the usually lower value from [O\,{\sc i}]-based LTE estimates. Thus, we do not confirm the small dependence of O\,{\sc i}\, non--LTE corrections on \mbox{[Fe/H]}\, found by Takeda. That author (Takeda, {\it private communication}) included intersystem collisional coupling using Auer \& Mihalas (1973) and, in Takeda (1992) and successive works, treatment of scattering in the continuum source function. Note that Takeda accounted for background line opacity in the UV only from H lines. However, as seen from our tests, metal lines tend to make the non--LTE effect smaller. Therefore, this actually makes the discrepancy with our results more significant, likely explained by the combined effect of different atomic data and the use of different model atmospheres. The results in Takeda (2003) were applied by means of interpolation/extrapolation in Takeda \& Honda (2005), giving a linear increase of \mbox{[O/Fe]}\, with \mbox{[Fe/H]}.
\subsection{Ram{\'\i}rez, Allende Prieto \& Lambert 2007}
In their study of a large sample of nearby stars, Ram{\'\i}rez, Allende Prieto \& Lambert (2007) have derived non--LTE corrections (neglecting the role of inelastic H collisions) for a grid of model atmospheres, mainly covering the metallicity regime of the Galactic thin and thick disks. Their adopted O\,{\sc i}\ model atom is explained in detail in Allende Prieto et al. (2003).
It consists of 54 levels plus continuum and 242 transitions, with the radiative data coming from the Opacity Project and NIST, and electron collision rates estimated via approximate formulae. For excitation by electrons in forbidden transitions, they computed the collision rate coefficient from Eissner \& Seaton (1974), adopting an arbitrary choice of effective collision strength. As shown in the detailed comparison by Barklem (2007a), the data here employed by us for radiatively forbidden transitions show poor agreement with those estimated by Allende Prieto et al. In particular, for the radiatively forbidden transition which we find important in coupling triplet and quintet systems, our adopted rate is two orders of magnitude larger. The non--LTE corrections of Ram{\'\i}rez et al. reach down to $\sim -0.4$~dex at \mbox{[Fe/H]}$\sim -1.5$ (their Fig. 7). For stars with determined metallicities, as from their Table 6, the largest corrections are $\sim -0.3$~dex (e.g., HD~112887 at \mbox{$T_{\rm{eff}}$}$=6319$~K, \mbox{log $g$}$=3.88$, \mbox{[Fe/H]}$=-0.38$). Their abundance corrections show a relatively mild dependence on metallicity for the range studied. Since our investigation extends to much lower metallicity than theirs, we only compared the results for a few representative stars with atmospheric parameters covered by both works, using the routine (Ram{\'\i}rez, {\it private communication}) mentioned in their paper. The star with lowest determined metallicity in Table 6 of Ram{\'\i}rez et al. (2007) has \mbox{[Fe/H]}$= -1.46$. For this star (HD~132475), their routine gives a non--LTE correction of $\sim -0.28$ dex with their adopted parameters of \mbox{$T_{\rm{eff}}$}$= 5613$~K and \mbox{log $g$}$= 3.81$. Our non--LTE correction for similar parameters is smaller, $\sim -0.17$ dex. 
For a standard star like Procyon (\mbox{$T_{\rm{eff}}$}$=6530$~K, \mbox{log $g$}$=3.96$, \mbox{[Fe/H]}$=0$), for which their formula would give $\sim -0.28$~dex, we derive a much larger estimate of $\sim -0.55$~dex, when adopting S$_{\rm H}=0$. Note however that, as discussed, a choice of S$_{\rm H} \sim 1$ may be more appropriate, which would lead to a value of $\Delta\log\,\epsilon_{\rm O}\sim -0.45$~dex in our case. For the Sun, their non--LTE correction is roughly $\sim -0.15$~dex, which is $\sim 0.1$~dex less severe than in our case. Even adopting their slightly higher (LTE) solar oxygen abundance, our results for the Sun would not change noticeably. A choice of S$_{\rm H} \sim 1$ for H collisions in our case would bring, however fortuitously, our estimate close to that of Ram{\'\i}rez et al. \section{Implications} \begin{flushleft} \begin{figure*} \begin{center} \includegraphics[width=9.1cm]{figure10a.ps} \includegraphics[width=9.1cm]{figure10b.ps} \caption{ Revised [O/Fe] versus [Fe/H], from literature data. The different symbols are: filled circles, for [OI]-based, LTE-obeying values from Nissen et al. (2002); filled triangles, for the 1D estimates (from that same study, but non--LTE corrected according to our results) based on the O\,{\sc i}\, $777$~nm triplet; filled diamonds for O\,{\sc i}-based values for the stars in Israelian et al. (2001) with fully determined stellar parameters and oxygen triplet abundances in their study, taking into account our non--LTE corrections on the oxygen triplet; and empty squares, for our non--LTE corrected estimates using the Mel\'endez et al. (2006) data set (after adjusting \mbox{$T_{\rm{eff}}$}-scale, see Nissen et al. 2007). The non--LTE abundance corrections we applied were computed with a choice of S$_{\rm H}=0$ and S$_{\rm H}=1$ (left and right panel respectively) for hydrogen collisions. The dotted line indicates the mean trend suggested by Israelian et al. (2001), i.e. 
[O/Fe]$=-0.33$\,[Fe/H]\label{fabf:ofenlte}} \end{center} \end{figure*} \end{flushleft} Our results have important consequences in relation to the debate regarding the \mbox{[O/Fe]}\, trend at low metallicity, and on the understanding of the chemical evolution of this element in the early Galaxy. The large difference often found in the literature between the oxygen content of old stars as derived via the O\,{\sc i}\, IR triplet and the other oxygen abundance indicators (in particular, [O\,{\sc i}]) is likely to be explained in terms of various uncertainties acting to produce disagreeing estimates (Asplund 2005; Mel\'endez et al. 2006). In our re-assessment of non--LTE effects on the O\,{\sc i}\, IR triplet, we have found that at low metallicity they are more important than what is often assumed in the literature. In Fig.\,\ref{fabf:ofenlte}, we show the effect of applying our abundance corrections to literature data by Israelian et al. (2001), Nissen et al. (2002) and Mel\'endez et al. (2006). Since uncertainties in H collision efficiency affect the modelling of the $777$~nm O\,{\sc i}\, triplet (and corresponding non--LTE corrections) very significantly, we have plotted results derived applying S$_{\rm H}=0$ and S$_{\rm H}=1$, respectively. The limited observational evidence for the Sun suggests for oxygen rather efficient thermalization via impacts with H atoms, akin to choosing Drawin's formula without any scaling factor (S$_{\rm H}=1$). The severe non--LTE corrections we find seem to largely erase the linear increasing trend towards low metallicity sometimes found in LTE. Our estimates with a choice of S$_{\rm H}=1$ suggest a rather flat trend below [Fe/H]$=-1$, although with significant residual scatter. Only adopting an unrealistically high amount of thermalization would make the non--LTE effects negligible. 
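Applying these corrections to literature LTE results amounts to a simple shift: since the iron abundance is left untouched, the revised oxygen-to-iron ratio is [O/Fe]$_{\rm non-LTE}$ = [O/Fe]$_{\rm LTE}$ + $\Delta\log\,\epsilon_{\rm O}$. A minimal sketch follows, with illustrative numbers rather than data from the figure.

```python
# Minimal sketch: revising an LTE [O/Fe] value with a non-LTE
# abundance correction.  Iron is unchanged, so the correction to the
# oxygen abundance shifts [O/Fe] directly.  Values are illustrative.

def ofe_nonlte(ofe_lte, dlog_eps_o):
    """[O/Fe]_non-LTE = [O/Fe]_LTE + Delta log eps_O."""
    return ofe_lte + dlog_eps_o

# A steep LTE value of [O/Fe] = +1.0 at very low [Fe/H], corrected
# by -0.5 dex (S_H = 1 regime), drops to +0.5, flattening the trend.
```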
When instead H collisions are completely ignored (as discussed, however, this case may be unrealistic), the [O/Fe] values in Fig.\,\ref{fabf:ofenlte} would become even lower (the trend turning into a decrease with decreasing metallicity). If such a model were borne out, it would possibly require new investigations on the efficiency of the $^{12}{\rm C}(\alpha, \gamma)^{16}{\rm O}$ nuclear reaction and on published nucleosynthetic yields at low metallicity. We note that our non--LTE corrections do not destroy the good agreement between [O\,{\sc i}] and O\,{\sc i}\, abundances found by Nissen and collaborators in five of their sample stars. Since those stars lie in the range $-2.42<\rm{[Fe/H]}<-1.04$, they do not suffer the very large corrections (nor the large sensitivity to H collisions) we found at very low metallicity. In fact, our resulting non--LTE corrections, and thus estimated abundances from the $777$~nm triplet, agree with those of Nissen et al. (2002) within $0.1$~dex for those stars. Our results may imply that the oxygen-to-iron overabundance does not depend significantly on metallicity down to [Fe/H]$=-3$, with an essentially flat trend (this may include a slight slope in either direction) at very low metallicity. This is in line with the behaviour recently found via accurate abundance estimates (taking into account non--LTE effects) for other typical $\alpha$-capture elements (e.g. Nissen et al. 2007). In Fabbian et al. (2008b), we further investigate this issue, together with a study of the [C/Fe] and [C/O] ratios at low metallicity, via new observational data. We have shown that O\,{\sc i}\, LTE abundances derived from the $777$~nm triplet are likely large overestimates, especially at low \mbox{[Fe/H]}, but there is still significant uncertainty related to the choice of H collision efficiency. Some authors have argued on astrophysical grounds that assuming the Drawin estimate might be reasonable, at least for the Sun.
Solar center-to-limb variation in fact not only provides evidence that the O\,{\sc i}\, $777$~nm lines are formed in non--LTE (Altrock 1968; Shchukina 1987; Kiselman 1991), but also suggests a possible significant role of H collisions (e.g. Allende Prieto, Asplund, \& Fabiani Bendicho 2004). Relative to electron collisions, these should be even more important at low metallicity. The model with S$_{\rm H}=1$ seems to predict lines in better agreement with astrophysical expectations from other oxygen spectral features. However, this is only an indication, since it could well result from errors due to the Drawin estimates and to the use of 1D atmospheric models coincidentally acting in opposite directions. Here, we too find that H collisions may need to be taken into account for this element, by noting the otherwise implausibly large impact of non--LTE corrections on \mbox{[O/Fe]}\, at very low metallicity when they are neglected. At the same time, we warn that adopting the general Drawin recipe -- possibly with some scaling factor -- for all transitions, however unavoidable in the absence of detailed quantum mechanical computations for the relevant O+H collisions (which are urgently needed, together with constraints via solar observations), is in fact a rough approximation, since different transitions likely have different sensitivity to such collisions. Realistic cross-sections for continuum absorption are also needed. We have found that the Si abundance plays an important role (via bound-free absorption) in the formation of the IR oxygen triplet at very low metallicity. This element is commonly thought to follow the normal behaviour of other $\alpha$-elements in being overabundant compared to Fe at low metallicity. Cayrel et al. (2004) recently obtained an average [Si/Fe]$\sim +0.4$ (with small scatter), which we have adopted in the calculations.
However, note that very recently Nissen \& Schuster (2008) have found evidence that halo stars may fall into two groups, with distinct $\alpha$-enhancement. If this significant ($\sim 0.20$~dex) separation between ``low-$\alpha$'' and ``high-$\alpha$'' halo stars is preserved down to very low \mbox{[Fe/H]}\, for silicon, it may well be an important factor in the discussion of the oxygen problem, because a lower content of Si would imply larger non--LTE effects on the O\,{\sc i}\, triplet, for an otherwise fixed set of atmospheric parameters. Any residual star-to-star differences in the Si content of very metal-poor halo stars may thus play some role in the scatter observed in [O/Fe] versus \mbox{[Fe/H]}. It is therefore important, in future studies on the oxygen problem, to pin down the non--LTE corrections by also having very accurate determinations of the stellar silicon content. We cannot yet completely rule out the existence of a continuous steeply increasing trend of \mbox{[O/Fe]}\, with decreasing \mbox{[Fe/H]}. However, its existence would require extremely efficient H collisions (e.g. S$_{\rm H} \apprge 3$), in order to bring the abundance estimates close to the {\it high extreme} set by the LTE expectation. The oxygen-to-iron overabundance more likely reaches $\la +0.5$~dex at very low metallicity, with a plateau or slight metallicity-dependence. Further modelling and new high-quality observations will be needed to completely settle the issue, but models of Galactic chemical evolution constrained by the [O/Fe] ratio derived from observations of low-metallicity stars can now hope to reach more definite conclusions on the scenarios required by the derived stellar composition at early stages. Such low (compared to LTE) [O/Fe] values in Galactic halo stars are in better agreement with recent estimates (P\'equignot 2008) for low-metallicity blue compact dwarf galaxies.
Regarding the solar oxygen abundance, it is little affected by the new electron collision data. The determination of Asplund et al. (2004), based on a 3D non--LTE analysis, employed several different indicators and obtained remarkable agreement between them. Our 1D non--LTE results for the $777$~nm IR triplet lines agree within $\sim 0.05$~dex with those published by Asplund et al., when adopting their choice of S$_{\rm H}=0$. We argue that, even if H collisions were important for oxygen (Asplund and collaborators neglected them in their calculations) -- in which case our estimates would give up to $\sim 0.1$~dex {\it higher} oxygen abundance for some lines (in particular, for the $844.67$~nm line), due to smaller corrections than theirs -- the overall change when averaging over all lines will remain within $0.05$~dex. This is still significant, since the derived solar oxygen abundance may then be around $\sim 8.70$, which would partially alleviate the current discrepancy with helioseismology results. Incidentally, we cannot explain the extremely large \mbox{[O/Fe]}\, overabundances derived in some cool giants at very low metallicity, nor the very different abundances derived in those objects from permitted and forbidden oxygen lines respectively (Israelian et al. 2004), because in that range our non--LTE corrections tend to become $\sim -0.1$~dex or less severe. To gain better insight into the various outstanding problems, detailed results from 3D non--LTE investigations will be necessary. In a series of papers, Schuler et al. (2004, 2005, 2006) have studied solar-type open cluster dwarfs, finding much lower mean [O\,{\sc i}]-based abundances than results based on the permitted O\,{\sc i}\, triplet. From the latter, they found a puzzling and abrupt increase in the LTE oxygen abundance derived from the high-excitation triplet for stars with $T_{\rm eff} \apprle 5800$~K.
We exclude the possibility that this trend is due for the most part to departures from LTE, since our resulting abundance corrections tend to be relatively small in the relevant temperature range. \vspace{0.7cm} \section{Conclusions} We have explored oxygen non--LTE line formation over a wide parameter space. Thanks to several improvements -- mainly, a detailed treatment of the UV radiation field including opacities from background lines, and an extended model atom including recently available atomic data -- we obtain estimated corrections at low \mbox{[Fe/H]}\, to LTE abundances from the IR $777$~nm triplet that are generally larger than those usually adopted in the literature. It is clear that the availability of adequate atomic data is crucial to achieve a reliable non--LTE solution for this and other important astrophysical problems. Using rates derived from newly available estimates for electron-impact excitation (Barklem 2007a), we have performed non--LTE calculations in one-dimensional stellar atmospheric models of late-type stars extending to very low metallicity, in order to understand the formation of the permitted neutral atomic oxygen lines. The trends of the non--LTE corrections across the parameter space can be understood in terms of the different non--LTE mechanisms in action: mainly a source function drop at solar metallicity, and large level overpopulation at low [Fe/H]. For solar-type dwarfs, the non--LTE corrections we found show relatively small metallicity-dependence down to \mbox{[Fe/H]}$\sim -2$. This is in line with previous findings by other authors (e.g., Takeda 2003). However, at the higher temperatures of turn-off stars at low \mbox{[Fe/H]}, the departures from LTE tend to increase to large values as metallicity decreases, so that LTE abundances require increasingly large corrections. The two-level approximation valid for the Sun breaks down, and the results no longer depend mostly on processes in the line itself.
The large non--LTE corrections at low \mbox{[Fe/H]}\, follow from increased collisional intersystem coupling. The (over)population of the lower level of the $777$~nm triplet tends to increase, and so does the line opacity. The formation of the $777$~nm lines becomes sensitive, at low metallicity, to absorption processes in the UV continuum. This makes the non--LTE corrections metallicity-dependent due to the influence of Si absorption. The results have an impact on the derivation of the ``true'' [O/Fe] trend at low metallicity: literature data yield a roughly constant plateau if thermalization by collisions with hydrogen atoms is included. The problem of H collisions is obviously still very much open. Progress is required, via theoretical quantum mechanical calculations or via limb darkening observations (Allende Prieto, Asplund \& Fabiani Bendicho 2004), in order to assess whether H collisions can provide efficient thermalization. This is of high priority also for a more conclusive determination of the solar oxygen content as the standard of reference in abundance determinations. Some authors (e.g., Kiselman 1991; Nissen et al. 2002; Asplund et al. 2004) have preferred to neglect H collisions completely, because of experimental (e.g. Fleck et al. 1991) and theoretical (e.g. Barklem, Belyaev \& Asplund 2003) evidence that Drawin's formula usually gives estimates that are too large (by orders of magnitude) for atmospheres of cool stars, at least for simple atoms like Li. We find that even with a choice of S$_{\rm H}=1$, the non--LTE corrections would still remain large at very low metallicity (up to $-0.85$~dex at \mbox{[Fe/H]}$=-3.5$).
Even though the difference in the resulting $777$~nm triplet non--LTE corrections between extreme choices of the parameter regulating the efficiency of H collisions is relatively small towards higher metallicity, it remains significant and $\sim 0.1$~dex for the Sun, in the sense of efficient collisions producing smaller non--LTE effects (which would help to partially alleviate the discrepancy with helioseismology, at least for oxygen). It looks as though a combination of 3D/non--LTE effects, the choice of atmospheric models and of temperature scale, and observational uncertainties both in derived stellar parameters and in equivalent width measurements have conspired to prevent a clear solution of the oxygen problem so far. It now seems that better agreement with the abundance from the forbidden [O\,{\sc i}] lines (which should be free from non--LTE effects) and from OH molecular lines (once corrected for 3D effects, Asplund \& Garc\'{\i}a P{\'e}rez 2001) can be obtained when non--LTE effects are taken into account for the $777$\,nm oxygen triplet abundance, see e.g. Mel\'{e}ndez et al. (2006). Our results suggest that large non--LTE corrections at low metallicity are the norm, at least for turn-off halo stars. Given our findings concerning the importance of the radiatively forbidden 3s $^3$S$^{\rm o}$ - 3s $^5$S$^{\rm o}$ transition, it is crucial to check whether a similar behaviour is seen in other atoms with similar structure, e.g. C, N, and S, or even in all atoms. Non--LTE corrections obtained with ATLAS models turned out to be very similar, for a range of atmospheric parameters, to those we found using MARCS models. However, at very low metallicity significant discrepancies appear, with large differences between the non--LTE corrections using the two sets of models. Interestingly, this metallicity range overlaps with that for which the debate between a ``flat'' and a ``linear'' [O/Fe] trend exists.
However, as seen from Fig.\,\ref{fabf:ofenlte}, our main conclusions would not change even if applying the ATLAS non--LTE corrections, which would give, around [Fe/H]$=-2.5$, corrections smaller by $\sim 0.3$~dex in the left panel (no H collisions) and by $\sim 0.2$~dex in the right panel (H collisions \`a la Drawin). In any case, more sophisticated modelling of stellar atmospheres is likely another crucial factor with respect to the oxygen problem, in particular for abundances from OH lines, but also to understand how the large non--LTE effects we found act in a 3D atmosphere. It is therefore urgent to carry out full 3D non--LTE calculations for a large range of stellar parameters, in order to further investigate the formation of the oxygen lines at low \mbox{[Fe/H]}. It is expected that non--LTE effects will be enhanced when using 3D atmospheric models, due to their cooler surface temperatures, to which the high-excitation lines of interest are known to be sensitive. This crucial step forward will allow improved abundance analyses and will help to shed light on early Galactic chemical evolution, the contributions by SN~II and massive stars, the time delay of yields from Type Ia SNe, and the reality of the [C/O] upturn at low metallicity (Akerman et al. 2004; Spite et al. 2005; Fabbian et al. 2008b). It is likely that -- together with the use of an accurate temperature scale -- finally taking into account, using detailed calculations with the best available atomic data, the interplay of 3D and non--LTE effects in the formation of the various available abundance indicators (including for the determination of metallicity) will make it possible to fully solve the ``oxygen problem''. Here, we have demonstrated that non--LTE corrections play an important role towards a solution. They will need to be taken into consideration in order to abate the large systematic errors still afflicting LTE-based estimates.
To check whether agreement between abundances from the different oxygen lines can be found at low metallicity will furthermore require using improved observational data, in particular from a large sample of subgiants or giants where more oxygen indicators are available. This important test will help to clarify outstanding issues related to the [O/Fe] controversy. \bigskip \begin{acknowledgements} This work has been partly funded by the Australian Research Council (grants DP0342613 and DP0558836). DF is grateful to the Department of Astronomy and Space Physics, Uppsala Astronomical Observatory, Uppsala, Sweden, for its hospitality. DF would also like to thank Remo Collet, Jorge Mel\'{e}ndez and Poul Erik Nissen for fruitful discussion and comments. PB is a Royal Swedish Academy of Sciences Research Fellow supported by a grant from the Knut and Alice Wallenberg Foundation. PB also acknowledges the support of the Swedish Research Council. This research has made use of NASA's Astrophysics Data System, of the NIST Atomic Spectra Database (version 3), which is operated by the National Institute of Standards and Technology, and of the Vienna Atomic Line Database. We are indebted to Iv\'{a}n Ram{\'\i}rez for providing the routine which we used to compare with his non--LTE oxygen results, and to Yoichi Takeda, for details on his non--LTE work. \end{acknowledgements}
\section{Introduction} If the dark matter in the Universe is composed of stable, weakly-interacting massive particles (WIMPs), in many instances this leads to the prediction that WIMPs will self-annihilate into Standard Model (SM) particles that may be visible with the upcoming generation of high-energy particle detectors. If high-energy gamma-rays are produced, there are several promising sources within our own Galactic environment where the annihilation radiation from WIMPs may be visible. A detection of annihilation products from multiple sources, possibly in concert with detections from colliders and underground labs, will be required to conclusively establish the nature of the dark matter in the Universe. Each source of annihilation radiation has its advantages and disadvantages. Because of its close proximity and high dark matter density, the annihilation radiation flux will be largest in the direction of the Galactic center. However, uncertainties in the empirical determination of the central density profile~\cite{Bergstrom98,Binney2001,Klypin2002,Widrow:2005bt} and in the contamination from gamma-ray sources that are not of dark matter origin~\cite{Jeltema:2008hf} may hinder the extraction of a dark matter signal from this region. Both of these systematics may be somewhat alleviated by searching for annihilation radiation a few degrees offset from the Galactic center~\citep{Stoehr:2003hf}, but even in this region a full analysis and understanding of the spectrum of the astrophysical backgrounds is required. The high mass-to-light ratios and the relative proximity of the dwarf spheroidal satellite galaxies (dSphs) of the Milky Way also make them excellent independent targets, which have been widely considered in the literature~\citep{Baltz00,Tyler:2002ux,Evans:2003sc,Strigari:2006rd, BergstromHooper06,Pieri04,Sanchez-Conde07,Colafrancesco07,Bertone07,Pieri08}.
Their status as potential targets for indirect detection has become even more interesting recently, given that the known number of satellites has more than doubled in the past few years~\cite{Willman:2004kk,Willman:2005cd,Zucker:2006he,Zucker:2006bf,Belokurov:2006ph}, coupled with the discovery that all of these satellites share a common dark matter mass scale within their central 300 pc~\cite{Strigari:2008ib}. While the gamma-ray flux from the Milky Way dSphs is smaller than that from the Galactic center, the astrophysical gamma-ray backgrounds tend to be reduced in the direction of these objects, as most of them are located at high Galactic latitudes. Additionally, these dSphs have low intrinsic emission from astrophysical gamma-ray sources; not only do they have mass-to-light ratios greater than $\sim 100$ in most cases, but all of them within $\sim 400$ kpc have strong upper limits on their HI gas content~\cite{Grcevich:2009gt}. The astrophysical contribution to the calculation of the annihilation flux from any source can generally be divided into two components: one arising from the smooth halo, which is proportional to the squared density distribution in the halo, and one arising from bound substructures within the halo. While it has long been recognized that the presence of substructure in dark matter halos can have a significant effect on the annihilation rate of dark matter particles~\citep{Silk:1992bh}, theoretical calculations of this boost factor from substructure have varied by orders of magnitude because of large uncertainties in both the density profile of substructures and their distribution within the parent halos~\cite{Baltz00,Strigari:2006rd,PieriBertone08,Afshordi:2008mx}.
Numerical simulations are now reaching the resolution necessary to characterize the density distribution of substructures~\cite{Kuhlen:2008aw,Springel:2008cc}, resolving substructure down to four levels in the hierarchy~\citep{Springel:2008cc}. These highest-resolution simulations model the evolution of the dark matter alone and do not include baryons. In principle, simulations of this kind can be used to provide accurate estimates of the expected boost factor only in objects with negligible baryonic mass fractions (e.g., dwarf galaxies). However, in regions like the Galactic center where baryons play an important role in the dynamics, baryonic effects, such as encounters of substructures with stars in the Galactic disk or bulge~\citep{Angus:2006vp}, or the backreaction of the dark matter distribution in response to disk formation~\citep{Abadi:2009ve}, must be included. Future simulations with more physics and even higher resolution will be required to fully characterize boost factors in general circumstances. It is clear that, in order to extract an unambiguous detection of dark matter annihilation radiation, a full understanding of all astrophysical uncertainties is required. With the goal of characterizing these uncertainties, in this paper we present an algorithm for the calculation of the gamma-ray flux from dwarf satellites, accounting for uncertainties in both the smooth halo distribution and the halo substructure distribution. We introduce a method for scanning the parameter space and determining the best-fitting dark matter distributions from the kinematics of stars in these satellites using a Markov Chain Monte Carlo (MCMC) analysis.
We combine the parameter constraints from the satellite stellar kinematics with the constraints on the parameters of the Constrained Minimal Supersymmetric Standard Model (CMSSM), with the goal of obtaining flux predictions for the dwarf satellites that account in a realistic way for all relevant sources of uncertainty. The predicted regions that we delineate will provide guidance for future gamma-ray experiments testing the predictions of neutralino dark matter in the CMSSM self-consistently within the context of CDM. As examples, we apply our algorithm to two particular satellites, both of which are known to be strongly dark matter dominated: the classical satellite Draco, located 80 kpc from the Sun, and Segue 1, a newly-discovered satellite 23 kpc from the Sun. We provide these two as examples and leave the ranking of all the satellites in terms of their expected gamma-ray luminosity to a future paper. As part of our analysis, we provide a new analytic solution to the equation governing the boost factor from halo substructure. Our solution is particularly useful because it allows the mass functions and halo concentrations to be free functions of host halo mass that can be implemented at each level in the substructure hierarchy. This is particularly important when implementing results from recent numerical simulations, which show that the normalization of the mass function is reduced by up to 50\% at the next level of the hierarchy~\cite{Springel:2008cc}. Using this solution, we show that the uncertainty in the boost factor is dominated by the extrapolation of the dark matter halo concentration versus mass relation down to mass scales that cannot currently be resolved in CDM simulations. Assuming an optimistic power-law extrapolation, we find mean boost factors of $\sim 20$, in agreement with recent numerical extrapolations for Milky Way mass halos~\cite{Springel:2008cc}.
Assuming instead a concentration-mass relation that is linked to the small-scale power spectrum, as in the model of~\citet{Bullock2001}, leads to boost factors of order unity. As an additional application of this analytic formula, we solve for the minimum dark matter halo mass at each point in the CMSSM parameter space, and find that the typical range for the minimum-mass CDM halo is $10^{-9}-10^{-6}$ M$_\odot$. This result updates and generalizes previous calculations of the minimum-mass halo in the context of Supersymmetric CDM~\citep{Schmid:1998mx,Hofmann:2001bi,Loeb:2005pm,Profumo:2006bv,Bertschinger06}. The paper is organized as follows. In~\Sref{sec:annihilation} we review the formalism for determining the gamma-ray flux from dark matter annihilations. In~\Sref{section:CMSSM} we review our assumptions for the CMSSM and the \texttt{SuperBayeS}~ software that scans the CMSSM parameter space. In~\Sref{section:dwarf} we review the formalism for determining the best-fitting dark matter halo profiles of the dwarf satellites, under the assumption of the CDM model. In~\Sref{section:boost} we present our calculation of the probability distribution of the boost factor and the resulting differences relative to the smooth halo flux. In~\Sref{section:results} we discuss detection prospects for present observatories, and in~\Sref{section:conclude} we present our conclusions. Our main conclusions for the detectability of the flux are summarized in Figure~\ref{fig:1Dfluxes}.
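To make the recursive boost calculation described above concrete, the following sketch (not our analytic solution, and with purely illustrative parameter values) tabulates the boost on a logarithmic mass grid and applies a one-level boost integral, $B(M) = L(M)^{-1}\int_{m_{\rm min}}^{M} ({\rm d}N/{\rm d}m)\,[1+B(m)]\,L(m)\,{\rm d}m$, once per level of the subhalo hierarchy; the power-law mass function, the power-law concentration-mass relation, and the toy luminosity $L \propto M c^3$ are all schematic assumptions, not quantities taken from this work:

```python
import numpy as np

def substructure_boost(M_host=1e12, m_min=1e-6, alpha=1.9, A=0.01,
                       c0=9.0, gamma=-0.1, n_levels=4, n_grid=300):
    """Schematic substructure boost, iterated level by level:
        B(M) = (1/L(M)) * int_{m_min}^{M} dN/dm [1 + B(m)] L(m) dm
    with a toy subhalo mass function dN/dm = (A/M)(m/M)^(-alpha), a toy
    luminosity L(M) ~ M c(M)^3, and c(M) = c0 (M/1e12)^gamma.  All
    parameter values and normalizations are illustrative only."""
    m = np.logspace(np.log10(m_min), np.log10(M_host), n_grid)
    L = m * (c0 * (m / 1e12) ** gamma) ** 3     # toy smooth luminosity
    B = np.zeros(n_grid)                        # deepest level: no subhalos
    for _ in range(n_levels):                   # add one hierarchy level per pass
        B_new = np.empty(n_grid)
        for i in range(n_grid):
            Mi = m[i]
            dNdm = (A / Mi) * (m[: i + 1] / Mi) ** (-alpha)
            B_new[i] = np.trapz(dNdm * (1.0 + B[: i + 1]) * L[: i + 1],
                                m[: i + 1]) / L[i]
        B = B_new
    return B[-1]                                # boost for the host halo
```

With these toy inputs the boost grows monotonically as deeper levels of the hierarchy are added, while its absolute value is controlled almost entirely by the assumed concentration-mass extrapolation down to $m_{\rm min}$, illustrating the dominant uncertainty discussed above.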
\section{Annihilation Flux and Gamma-ray spectrum} \label{sec:annihilation} Following standard methods~\citep{Jungman96,Bergstrom98} the gamma-ray flux from particle annihilations can be derived via \begin{equation} \label{eq:flux} \Phi(E) = \frac{<\sigma \nu> N_{\gamma}(E)}{8 \pi m_{\chi}^2} \int^{\theta' = \theta_{\rm max}}_{\theta'=0} d\Omega' \int d\Omega \mathcal{R}\left(\vec{\theta'}-\vec{\theta}\right) \int_{\ell_{-}}^{\ell_{+}} \rho_{DM}^2[\ell(\theta)]d\ell(\theta), \end{equation} where $\ell$ is the line-of-sight distance, $\ell_{\pm} = d \cos \theta \pm \sqrt{r_{t, DM}^2 - d^2 \sin^2 \theta}$, $d$ is the distance to the galaxy, $\theta$ is the line-of-sight angle from the center of the galaxy, $<\sigma \nu>$ is the average cross section for annihilation, $r_{t, DM}$ is the tidal radius of the dark matter halo, and $m_{\chi}$ is the WIMP mass. Here, \begin{equation} N_{\gamma}(E) = \int_{E}^{m_{\chi}}\frac{d N}{d E}dE \end{equation} is the number of photons above energy $E$ produced per annihilation and the resolution window function is \begin{equation} \mathcal{R}\left(\vec{\theta}\right) = \frac{\ln 2}{4 \pi \theta_{res}^2} \exp\left[-\ln2 \frac{\vec{\theta}^2}{ \theta_{res}^2}\right]. \end{equation} For Fermi, $\theta_{res}$ is approximately 10 arcminutes at the energies we consider. It is clear from~\Eref{eq:flux} that an accurate prediction for the flux entails incorporating relevant uncertainties both for the astrophysical quantities and for the particle physics model. In the present analysis we constrain the density of the dark matter halo, $\rho(\ell)$, from the kinematics of the stars, while the annihilation cross section, WIMP mass and annihilation channels we use are derived in the following section. \section{The Constrained Minimal Supersymmetric Standard Model} \label{section:CMSSM} For the calculations in this paper, we will assume that the dark matter particle consists of the lightest stable supersymmetric particle. 
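For illustration, the astrophysical factor in~\Eref{eq:flux} (the line-of-sight integral between $\ell_{\pm}$ and the solid-angle integral out to $\theta_{\rm max}$) can be evaluated numerically as sketched below; the profile parameters and units are placeholders rather than fitted values, the Einasto profile adopted later in the text is used for $\rho_{DM}$, and for simplicity the resolution window $\mathcal{R}$ is replaced by a delta function:

```python
import numpy as np
from scipy.integrate import quad

def rho_einasto(r, rho_s=1.0, r_s=0.5, n=5.0):
    """Einasto density profile; normalization and scales are illustrative."""
    return rho_s * np.exp(-2.0 * n * ((r / r_s) ** (1.0 / n) - 1.0))

def los_integral(theta, d=80.0, r_t=2.0):
    """int rho^2 dl along the line of sight at angle theta (rad) from the
    galaxy centre, between l_pm = d cos(theta) -/+ sqrt(r_t^2 - d^2 sin^2)."""
    s = r_t**2 - d**2 * np.sin(theta) ** 2
    if s <= 0.0:                       # sight line misses the halo entirely
        return 0.0
    l_min = d * np.cos(theta) - np.sqrt(s)
    l_max = d * np.cos(theta) + np.sqrt(s)
    # radius from the galaxy centre at line-of-sight distance l
    r = lambda l: np.sqrt(l**2 + d**2 - 2.0 * l * d * np.cos(theta))
    return quad(lambda l: rho_einasto(r(l)) ** 2, l_min, l_max)[0]

def jfactor(theta_max, **kw):
    """Solid-angle integral 2 pi int sin(theta) (int rho^2 dl) dtheta out to
    theta_max, ignoring PSF smearing (R replaced by a delta function)."""
    return 2.0 * np.pi * quad(lambda t: np.sin(t) * los_integral(t, **kw),
                              0.0, theta_max)[0]
```

Convolving this quantity with the Gaussian window $\mathcal{R}$ would simply smear it on the scale $\theta_{res}$; the remaining particle-physics prefactor $<\sigma \nu> N_{\gamma}(E)/8\pi m_{\chi}^2$ is supplied by the CMSSM samples described in the following section.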
More specifically, we focus on the case of the CMSSM. In this section, we review the basic definitions and parameters of the CMSSM, discussing specifically the \texttt{SuperBayeS}~ code that we use to explore the CMSSM parameter space and how this feeds into our calculation of the flux. \subsection{Neutralino Dark Matter in the CMSSM} Supersymmetry (SUSY) provides a compelling and well-motivated extension to the Standard Model that naturally contains a dark matter candidate~\citep{Jungman96}. Supersymmetry postulates a symmetry between bosons and fermions -- every boson (e.g. gauge bosons) has a fermionic partner (e.g. gauginos) and every fermion (leptons, quarks) has a bosonic partner (sleptons, squarks). The particularly well-studied R-parity conserving, weak-scale softly broken SUSY models provide both a natural solution to the ``fine tuning problem'' and a natural dark matter candidate \citep{Martin98}. The former is achieved through the cancellation of quadratic divergences in one-loop quantum corrections to the Higgs mass. The latter is due to a conserved discrete symmetry (R-parity) that prohibits the lightest SUSY particle from decaying to only SM particles. Enlarging the particle sector in this manner greatly increases the number of free parameters that specify the model; even the most minimal form of SUSY (the MSSM) introduces over a hundred new parameters. Such a large number of free parameters makes the efficient exploration of the MSSM parameter space very challenging. The naive method of exploring the likelihood surface on a regularly-spaced grid is clearly inadequate, as the required computational effort scales exponentially with the number of dimensions considered. Furthermore, present-day constraints on SUSY phenomenology are fairly indirect and do not allow for meaningful constraints on models with so many degrees of freedom.
A popular and well-motivated simplification is achieved by demanding SUSY parameter unification at GUT (Grand Unified Theory) scales. This so-called constrained MSSM (CMSSM) limits the number of SUSY parameters to 4 continuous and 1 discrete parameter \citep{Kane93}: the common gaugino mass, $m_{1/2}$, the common mass for scalars, $m_0$, the trilinear scalar coupling, $A_{0}$ (all of which are specified at the GUT scale, $M_{\rm{GUT}} \simeq 2\times 10^{16}$~GeV), the ratio of the Higgs expectation values, $\tan\beta$, and the sign of the ``$\mu$ term'', sgn($\mu$)~\citep{Martin98, Kane93}. We shall denote the CMSSM parameters as \begin{equation} {\mathscr{C}} \equiv \{ m_0, m_{1/2}, A_0, \tan\beta, \rm{sgn}(\mu) \} \, . \end{equation} It has recently been demonstrated~\cite{Allanach:2005kz,Austri06,Roszkowski:2007fd,Feroz:2008wr,Trotta08} that the values of some SM parameters can strongly affect the predictions for some of the observable quantities, in particular the relic neutralino abundance (see Fig.~4 in~\citet{Roszkowski:2007fd}). Therefore, it is not sufficient to specify the values of ${\mathscr{C}}$ and fix the relevant SM parameters to their current best-fit values; rather, the latter must be introduced as ``nuisance parameters'' and marginalized over, in order to account for their impact on the predictions. The most relevant SM parameters are \begin{equation} {\mathscr{N}} \equiv \{ m_t, m_b, \alpha_S, \alpha_{em} \} \,, \end{equation} namely, the top quark mass, the bottom quark mass, the strong coupling constant and the electromagnetic coupling constant, respectively. In the context of the CMSSM, specification of the parameter set $({\mathscr{C}}, {\mathscr{N}})$ allows for the derivation of a full suite of predictions for low-energy observables.
The package \texttt{SuperBayeS}, developed by \citet{Austri06} and \citet{Trotta08}, embeds several codes in an MCMC framework to derive from $({\mathscr{C}}, {\mathscr{N}})$ the SUSY mass spectrum (via SoftSusy \citep{Allanach02}), the neutralino relic abundance (via DarkSusy~\citep{Gondolo04} or MicrOMEGAs~\cite{Belanger:2001fz,Belanger:2004yn}), SUSY corrections to various Higgs-sector quantities (employing FeynHiggs \citep{Frank07, Degrassi03, Heinemeyer99, Heinemeyer00}) and branching ratios of rare processes (using Bdecay~\citep{Foster05}). The CMSSM and SM parameter space is explored by \texttt{SuperBayeS}~using an MCMC Metropolis--Hastings algorithm or, more recently, by employing the more efficient and robust ``nested sampling'' algorithm~\cite{Skilling04, Feroz:2007kg, Trotta08}. The parameters are then constrained by applying all available constraints on the low-energy observables, including the WMAP 5-year determination of the relic abundance, sparticle and Higgs mass limits, branching ratios of rare processes, electroweak observables and direct constraints on the SM quantities (for a detailed discussion of the likelihood, see~\cite{Austri06,Roszkowski:2007fd,Trotta08}). \subsection{Priors in the CMSSM} The final outcome of the CMSSM analysis is a list of samples drawn from the posterior distribution $P({\mathscr{C}}, {\mathscr{N}} | D)$, obtained via Bayes' theorem: \begin{equation} \label{eq:bayes} P({\mathscr{C}}, {\mathscr{N}} | D) \propto \mathcal{L}({\mathscr{C}}, {\mathscr{N}}) P({\mathscr{C}}, {\mathscr{N}}) \, , \end{equation} where $D$ denotes the combined data described above, $\mathcal{L}({\mathscr{C}}, {\mathscr{N}}) \equiv P(D | {\mathscr{C}}, {\mathscr{N}}) $ is the likelihood function and $P({\mathscr{C}}, {\mathscr{N}})$ is the prior distribution for the CMSSM and SM parameters.
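A minimal sketch of the Metropolis--Hastings step underlying such an MCMC exploration is given below, applied to a toy two-parameter Gaussian posterior with a flat prior; it is purely illustrative and bears no relation to the actual \texttt{SuperBayeS}~ implementation or likelihood:

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples=20000, seed=0):
    """Minimal Metropolis sampler: draws samples from a posterior given its
    log-density, mirroring the structure of Bayes' theorem above.  A
    symmetric Gaussian proposal is accepted with probability
    min(1, P(proposal)/P(current))."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_samples, len(x)))
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(len(x))
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = proposal, lp_prop
        chain[i] = x                              # repeat current point if rejected
    return chain

# toy posterior: Gaussian likelihood centred on (1, -2) with a flat prior
log_post = lambda p: -0.5 * ((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
chain = metropolis(log_post, x0=[0.0, 0.0], step=0.8)
```

After discarding an initial burn-in, the sample means and spreads of such a chain approximate the posterior means and credible intervals of the parameters, which is exactly how the derived quantities discussed below are obtained from the CMSSM samples.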
From the posterior one can then derive the probability distribution for any function of the quantities $({\mathscr{C}}, {\mathscr{N}})$ one is interested in, for example the neutralino--proton interaction cross-section (relevant for direct dark matter detection experiments, see~\cite{Trotta07}), the gamma-ray and antimatter flux from the galactic center (relevant for indirect detection searches~\cite{Roszkowski:2007va}) and Higgs-sector physics (of interest for Higgs-boson searches~\cite{Roszkowski:2006mi}). Ideally, the posterior in~\Eref{eq:bayes} should be dominated by the likelihood, $\mathcal{L}$, so that the influence of the prior vanishes for strongly constraining data (for more details, see e.g.~\cite{Trotta:2008qt}). However, it has been found that this is not currently the case for the CMSSM --- namely, the available data are not sufficiently constraining to determine the CMSSM parameters in a prior-independent way (see~\citet{Trotta08} for a detailed analysis). This means that some of the posterior constraints on ${\mathscr{C}}$ are somewhat dependent on the chosen prior distribution. The fundamental reason for this is that the mapping between high-energy CMSSM parameters and low-energy observable quantities is highly non-linear, owing to the nature of the Renormalization Group Equations, and therefore even fairly strong low-energy constraints are only mildly informative about the quantities one is interested in, namely ${\mathscr{C}}$. It is however expected that this issue will be resolved once the LHC delivers direct observations of the SUSY mass spectrum~\cite{rrt4}. \subsection{CMSSM samples and derived quantities} For the results in this paper, we use a nested sampling chain for the parameter space spanned by $({\mathscr{C}}, {\mathscr{N}})$, containing approximately $45,000$ samples. We assume throughout a positive sgn($\mu$) (motivated by consistency with the measured anomalous magnetic moment of the muon).
We adopt the chain resulting from the analysis in~\citet{Trotta08}, with a flat prior within the limits denoted in~\tref{tab:limits} (the ``4 TeV'' limits in~\citet{Austri06}). From those CMSSM chains, we derive for each sample the value of the WIMP mass, $m_\chi$, its annihilation cross section, $<\sigma \nu>$, and the number of photons produced per annihilation above 1 GeV, $N_{\gamma}(1$ GeV$)$. As discussed below, we choose 1 GeV because it provides a conservative lower bound for the expected signal energy window from CMSSM dark matter annihilation. Furthermore, we compute the value of the minimum halo mass, $m_{\rm min}$, as explained in section~\ref{sec:mmin} below. \section{Dwarf Satellite Kinematics\label{section:dwarf}} We follow the standard formalism for analyzing stellar line-of-sight velocities from dwarf satellites. In this section, we review the relevant formulae so as to establish the notation and conventions used throughout this paper. \subsection{Theoretical Modeling} We consider the satellites as two-component systems consisting of stars and dark matter (e.g.~\cite{Lokas:2004sw,Strigari07-redef}). The potential is assumed to be spherically symmetric, and the system is taken to be completely pressure-supported (no rotational support). This is seen to be a good description of many of the dwarf satellites~\cite{Walker2006,Koch:2006in,Koch:2007ye}. With these assumptions the Jeans equation is \begin{equation} \label{eq:jeans} r\frac{d(\rho_{\star} \sigma_r^2)}{d r} = -\rho_{\star}\frac{G M(r)}{r} - 2 \beta \rho_{\star} \sigma_r^2, \end{equation} where $\rho_{\star}$ is the stellar density, $\sigma_r$ is the stellar radial velocity dispersion, $\beta \equiv 1 - \sigma_t^2/\sigma_r^2$ is the velocity anisotropy parameter, and $\sigma_t$ is the stellar tangential velocity dispersion. The mass of the system, $M(r)$, is defined as the total dynamical mass, which is the sum of the contributions from the stars and the dark matter.
The line-of-sight velocity dispersion is given by \begin{equation} \label{eq:sigmalos} \sigma_{th}^2 = \frac{2}{I_{\star}(R)}\int_R^{\infty}\left(1 - \beta\frac{R^2}{r^2}\right)\frac{r \rho_{\star}\sigma_r^2}{\sqrt{r^2 - R^2}} dr \label{eq:LOS} \end{equation} where $R$ is the projected radius on the sky and $I_\star(R)$ is the stellar surface density. The use of the subscript ``{\it th}'' will become apparent below when~\Eref{eq:LOS} is fed into our statistical analysis. Observationally, the integrated masses within $\sim 300$ pc of all of the Milky Way satellites are very similar, consistent with $\sim 10^7$ M$_\odot$ independent of the dwarf galaxy luminosity~\cite{Strigari:2008ib}. To first order, this fact simplifies the selection of the best targets for flux detection to those that are closest to the Sun~\cite{Strigari:2007at}. However, as we discuss below, including the effects of a prior on the maximum circular velocity distribution of dark matter halos somewhat modifies this simple estimate. In general, the best sources are those objects with the best signal-to-noise ratio, accounting for the astrophysical backgrounds. Some often-discussed targets include Segue 1 (23 kpc), Ursa Major II (32 kpc), Willman 1 (38 kpc), and Coma Berenices (44 kpc) (e.g.~\cite{Strigari:2007at,Bringmann:2008kj,Geha2008}), all of which were discovered since 2004. The half-light radii of these objects are $\sim$ 10-100 pc, and given their high velocity dispersions of $\sim 4-6$ km/s~\cite{SimonGeha2007}, these objects are consistent with being dark-matter-dominated dSphs~\cite{SimonGeha2007,Strigari:2008ib,Geha2008}. The nearest of the more well-known (classical) dSphs include Ursa Minor (66 kpc) and Draco (80 kpc); previous calculations show that these two objects have similar predicted fluxes~\cite{Strigari:2006rd}.
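Returning to the modelling equations above, an illustrative sketch of the pipeline from \Eref{eq:jeans} to \Eref{eq:LOS} is given below for the special case of constant anisotropy $\beta$, for which the Jeans equation admits the quadrature solution $\rho_{\star}\sigma_r^2(r) = r^{-2\beta}\int_r^{\infty} s^{2\beta}\rho_{\star}(s)\,G M(s)/s^2\,{\rm d}s$; the halo parameters are placeholders rather than fitted values, and the Einasto enclosed mass is written in closed form via the lower incomplete gamma function:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, gammainc

G = 4.30e-6  # Newton's constant in kpc (km/s)^2 / Msun

def nu_star(r, r_pl=0.035):
    """Plummer stellar density profile (3D), arbitrary normalization."""
    return (1.0 + (r / r_pl) ** 2) ** (-2.5)

def M_einasto(r, rho_s=1e8, r_s=0.5, n=5.0):
    """Enclosed mass of an Einasto halo, via the regularized lower
    incomplete gamma function; parameter values are placeholders."""
    x = 2.0 * n * (r / r_s) ** (1.0 / n)
    return (4.0 * np.pi * rho_s * r_s**3 * n * np.exp(2.0 * n)
            * (2.0 * n) ** (-3.0 * n) * Gamma(3.0 * n) * gammainc(3.0 * n, x))

def sigma_r2(r, beta=0.0, r_out=5.0):
    """Jeans equation solution for constant anisotropy beta (quadrature
    form quoted in the lead-in), truncated at r_out instead of infinity."""
    integral = quad(lambda s: s ** (2.0 * beta) * nu_star(s)
                    * G * M_einasto(s) / s**2, r, r_out)[0]
    return r ** (-2.0 * beta) * integral / nu_star(r)

def sigma_los2(R, beta=0.0, r_out=5.0):
    """Project sigma_r^2 along the line of sight, substituting
    u = sqrt(r^2 - R^2) to remove the integrable singularity; the common
    factor 2/I(R) cancels between numerator and surface density."""
    u_max = np.sqrt(r_out**2 - R**2)
    r_of = lambda u: np.sqrt(R**2 + u**2)
    num = quad(lambda u: (1.0 - beta * R**2 / r_of(u) ** 2)
               * nu_star(r_of(u)) * sigma_r2(r_of(u), beta, r_out),
               0.0, u_max)[0]
    den = quad(lambda u: nu_star(r_of(u)), 0.0, u_max)[0]
    return num / den
```

In the full analysis the predicted dispersion profile plays the role of $\sigma_{th}$, to be compared with the observed dispersions when scanning the halo and anisotropy parameters.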
Since our main goal in this paper is to discuss the methodology for robustly predicting fluxes and including the boost in the calculation of the gamma-ray flux, we restrict our analysis to two example dSphs: Draco and Segue 1. The kinematic constraints we derive for Draco are much stronger, as this object has been well-studied both from the standpoint of its photometry and kinematics. Segue 1 is a more-recently discovered satellite that appears to be strongly dominated by dark matter and to be the least luminous galaxy known~\cite{Geha2008}. However, there are only 24 stars with measured velocities in Segue 1~\cite{Geha2008}, and, as we show below, the errors on its mass and flux are much larger than the respective values for Draco. The surface densities of these objects are fit by King~\citep{King62} and Plummer~\citep{Plummer11} profiles, respectively. For Draco, the King core radius is $r_{king} = 0.18$ kpc, and the King tidal radius is $r_t = 0.93$ kpc~\citep{Odenkirchen:2001pf}. For Segue 1, the one-component Plummer fit gives a Plummer core radius of $r_{pl} = 35$ pc~\cite{Martin2008}. In the Plummer profile, $\rho_\star$ falls off as $1/r^5$ in the outer regions, so there is no natural definition of the stellar tidal radius, in contrast with the King profile. In this case, we conservatively assume that the stellar tidal radius is given by the position of the outermost observed star, which is located at a projected radius of $R = 50$ pc~\cite{Geha2008}. In principle, the stellar surface density parameters could also be estimated jointly with our other model parameters; however, in the present work we choose to fix them to their best-fit values as given above. To model the density profile of the respective dark matter halos of these objects, we use an Einasto profile, which is defined as \begin{equation} \rho(r) = \rho_s \exp \left\{-2n\left[\left(\frac{r}{r_s}\right)^{1/n}-1\right]\right\}.
\label{eq:einasto} \end{equation} The Einasto profile has been shown to be a good fit to CDM halos with the Einasto index, $n$, ranging from $\sim 3 - 7$ \citep{Merritt06-emp.model.1, Navarro04-cdm.halo.3, Gao07}. For our purposes, this profile is also convenient for two separate reasons: (1) the profile has a well-defined mass, which is important when we calculate substructure boost factors below, and (2) the profile does not diverge towards the center of the halo, which is convenient when calculating the gamma-ray flux. As presented in~\Eref{eq:einasto}, there are three parameters in the Einasto profile that we must determine from the data: the log-slope index $n$, the scale radius $r_s$, and the scale density, $\rho_s$. It will be convenient in the following to replace the Einasto density and radius variables ($\rho_s$, $r_s$) with the implied halo maximum circular velocity and radius of maximum circular velocity ($V_{\rm max}$, $r_{\rm max}$). The dark matter halo density profile is then specified in terms of the parameter set \begin{equation} {\mathscr{H}} \equiv \{ n, V_{\rm max}, r_{\rm max} \} \, . \end{equation} The final quantity we must specify is the velocity anisotropy, which enters both directly in the Jeans equation in~\Eref{eq:jeans} and in the equation that relates the observed line-of-sight velocity dispersion to the underlying stellar radial velocity dispersion~\Eref{eq:LOS}. This quantity is unconstrained by line-of-sight velocity data~\cite{StrigariBullockKaplinghat07}, so in order to allow for general models, we model the velocity anisotropy as \begin{equation} \beta(r) = \frac{\beta_0 + \beta_{\infty} \left(r/r_{\beta}\right)^{\eta}}{1 + \left(r/r_{\beta}\right)^{\eta}}, \label{eq:beta} \end{equation} with four free parameters, \begin{equation} {\mathscr{V}} \equiv \{ \beta_0, \beta_\infty, r_\beta, \eta\} \, . 
\end{equation} We note that this parametrization is slightly more general than that used in \citet{Strigari07-redef}, allowing here for the power law index $\eta$ in addition to the anisotropy scale radius, $r_\beta$, and the asymptotic inner and outer anisotropies, $\beta_0$ and $\beta_\infty$. It should also be noted that there is an intrinsic degeneracy between the logarithmic slope of the density profile in~\Eref{eq:einasto} and the anisotropy. Even in the simplified case of constant anisotropy, this degeneracy restricts how well the central slope $n$ of the halo may be constrained from stellar kinematics~\cite{StrigariBullockKaplinghat07}. \subsection{Likelihood function of dSph model parameters} Above we have discussed our theoretical modeling; we now turn to our description of the line-of-sight velocity data~\cite{Walker2007,Geha2008}. We begin by noting that the line-of-sight velocities from dSphs are well-described by Gaussian distributions~\citep{Walker2006}. The observed velocity distribution is a convolution of the intrinsic velocity distribution, arising from the distribution function, and the measurement uncertainty from an individual star. The probability of obtaining a set of data $d$ given a set of model parameters ${{\mathscr{H}}, {\mathscr{V}}}$ is described by the likelihood~\citep{Strigari:2006rd} \begin{equation} \label{eq:fulllike} \mathcal{L}({\mathscr{H}}, {\mathscr{V}}) \equiv P(d| {\mathscr{H}}, {\mathscr{V}}) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi(\sigma_{th, i}^2 + \sigma_{m, i}^2)}} \exp \left[-\frac{1}{2}\frac{(d_i -u)^2}{\sigma_{th, i}^2 + \sigma_{m, i}^2}\right]\,, \end{equation} where ${\mathscr{H}}$ are the parameters that describe the density profile of dark matter and ${\mathscr{V}}$ are the stellar velocity anisotropy parameters. The product is over the set of $n$ stars, and $u$ is the bulk velocity of the galaxy in the direction of the observer.
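As an aside on the halo parametrization introduced above, the Einasto profile of~\Eref{eq:einasto} has a closed-form enclosed mass, $M(r) = 4\pi\rho_s r_s^3\, n\, e^{2n}(2n)^{-3n}\,\gamma\!\left(3n, 2n(r/r_s)^{1/n}\right)$, which makes the conversion between $(\rho_s, r_s)$ and $(V_{\rm max}, r_{\rm max})$ straightforward to carry out numerically. A sketch (the numerical inputs are illustrative, not fits from this work):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma, gammainc

G = 4.302e-6  # kpc (km/s)^2 / M_sun

def einasto_mass(r, rho_s, r_s, n):
    """Closed-form Einasto enclosed mass,
    M(r) = 4 pi rho_s r_s^3 n e^(2n) (2n)^(-3n) gamma_lower(3n, 2n (r/r_s)^(1/n))."""
    x = 2.0 * n * (r / r_s) ** (1.0 / n)
    gamma_lower = gammainc(3.0 * n, x) * gamma(3.0 * n)
    return (4.0 * np.pi * rho_s * r_s**3 * n
            * np.exp(2.0 * n) * (2.0 * n) ** (-3.0 * n) * gamma_lower)

def vmax_rmax_to_rho_rs(v_max, r_max, n):
    """Invert (V_max, r_max) -> (rho_s, r_s). The circular velocity
    V_c^2 = G M(r)/r peaks at a radius that is a fixed multiple of r_s,
    so locate the peak once for rho_s = r_s = 1 and rescale."""
    res = minimize_scalar(
        lambda lgx: -einasto_mass(10.0**lgx, 1.0, 1.0, n) / 10.0**lgx,
        bounds=(-2.0, 2.0), method="bounded")
    x_peak = 10.0**res.x              # r_max / r_s for this Einasto index n
    r_s = r_max / x_peak
    # rho_s enters M(r) linearly, so scale it to enforce V_c(r_max) = v_max.
    rho_s = v_max**2 * r_max / (G * einasto_mass(r_max, 1.0, r_s, n))
    return rho_s, r_s
```

By construction the returned $(\rho_s, r_s)$ reproduce the requested peak circular velocity at $r_{\rm max}$.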
As expected, the total error at a projected position is a sum in quadrature of the theoretical intrinsic dispersion, $\sigma_{th, i}({\mathscr{H}}, {\mathscr{V}})$, and the measurement error $\sigma_{m, i}$. Often, kinematic data from dSphs are presented in terms of the velocity dispersion in bins of projected radius. In these cases, it is useful to have an expression for the likelihood function similar to~\Eref{eq:fulllike} that is free of any terms associated with the measured velocities of individual stars. An expression of this form can be found by replacing $u$ and $\sigma$ by their respective maximum likelihood values, $\hat{u}$ and $\hat{\sigma}$, where the latter quantities are obtained from a standard maximum likelihood procedure using~\Eref{eq:fulllike}. Proceeding with this approximation, and also neglecting the measurement uncertainty in comparison with the intrinsic dispersion (which is a good approximation for the bright satellites), the likelihood function in~\Eref{eq:fulllike} can be reduced to \begin{equation} \label{eq:approxlike} \mathcal{L}({\mathscr{H}}, {\mathscr{V}}) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi} \sigma_{th, i}^{N_b}} \exp \left[-\frac{1}{2} \frac{N_b \hat{\sigma}_b^2}{\sigma_{th, i}^2}\right]. \end{equation} This is now a product over the number of bins in projected radius, $N$, each with a velocity dispersion $\hat{\sigma}_b$. The number of stars in a given bin is $N_b$. (Note the difference in the normalization relative to the expression presented in~\citet{Strigari:2007at}, due to a typographical error in that work.) Note that the quantity $\sigma_{th, i} = \sigma_{th, i}({\mathscr{H}}, {\mathscr{V}})$ is computed via~\Eref{eq:sigmalos}.
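In code, the log-likelihoods corresponding to~\Eref{eq:fulllike} and~\Eref{eq:approxlike} are compact (constant terms are dropped from the binned form; all inputs are hypothetical arrays):

```python
import numpy as np

def log_like_full(v, dv, sigma_th, u):
    """Log of the star-by-star likelihood: Gaussian over individual stellar
    velocities v, with measurement errors dv, model intrinsic dispersions
    sigma_th, and bulk velocity u."""
    s2 = sigma_th**2 + dv**2
    return -0.5 * np.sum(np.log(2.0 * np.pi * s2) + (v - u) ** 2 / s2)

def log_like_binned(sigma_hat, n_b, sigma_th):
    """Log of the binned likelihood, up to an additive constant: one entry
    per projected-radius bin, with measured dispersion sigma_hat, star count
    n_b, and model dispersion sigma_th evaluated at the bin radius."""
    return np.sum(-n_b * np.log(sigma_th)
                  - 0.5 * n_b * sigma_hat**2 / sigma_th**2)
```

As a sanity check, for a single bin the binned form is maximized when $\sigma_{th}$ equals the measured $\hat{\sigma}_b$.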
As we did above in the context of the CMSSM parameters, we now consider the posterior probability distribution function (pdf) for the parameters of interest, which is again given by Bayes' theorem \begin{equation} \label{eq:posterior} P ({\mathscr{H}}, {\mathscr{V}} | d) \propto P({\mathscr{H}})P({\mathscr{V}}) \mathcal{L}({\mathscr{H}}, {\mathscr{V}}) \end{equation} where $P({\mathscr{H}}), P({\mathscr{V}})$ are the prior pdf's for the halo and velocity anisotropy parameters, respectively, which we take here to be uncorrelated. We will deal with the issue of priors in more detail below. The task is now to explore numerically the posterior,~\Eref{eq:posterior}, in order to determine credible regions on the model parameters. In previous similar work involving parameter estimation from dwarf satellites~\citep{Strigari07-redef,Strigari:2007at}, the likelihood function was directly integrated over the range of model parameter space. This method was accurate but could be time-consuming, particularly in the case of large numbers of parameters to marginalize over. Rather than performing direct numerical integration, in this paper we explore the posterior distributions using Markov Chain Monte Carlo techniques. Before discussing our MCMC methodology, we now turn to the discussion of the priors entering~\Eref{eq:posterior}. \subsection{Priors for dSph parameters } \label{subsec:priors} \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/mass30.ps}} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/seguemass4.eps}} \caption{\footnotesize Posterior probability distribution for the mass within 30 pc for Segue 1 ({\it left panel}) and the mass within 300 pc for Segue 1 ({\it right panel}). The four curves in each panel assume different Bayesian priors: a uniform prior in $V_{\rm max}^{-3}$ (black, solid), $V_{\rm max}^{-2}$ (red, dashed), $V_{\rm max}^{-1}$ (green, dot-dashed), and $\ln(V_{\rm max})$ (blue, dotted).
The prior distributions are truncated at $V_{\rm max} = 3$ km s$^{-1}$ as described in the text. Increasing negative powers of $V_{\rm max}$ cause the posterior to be more ``biased'' toward lower mass solutions. As a result, the posteriors corresponding to these different priors differ. \label{fig:prioreffect_seg} } \end{center} \end{figure} \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/dracomass4.eps}} \caption{\footnotesize Posterior probability distribution for the mass within 300 pc for Draco. The four curves in each panel assume different Bayesian priors: a uniform prior in $V_{\rm max}^{-3}$ (black, solid), $V_{\rm max}^{-2}$ (red, dashed), $V_{\rm max}^{-1}$ (green, dot-dashed), and $\ln(V_{\rm max})$ (blue, dotted). The prior distributions are truncated at $V_{\rm max} = 3$ km s$^{-1}$ as described in the text. Note that the trend in mass within 300 pc with prior reverses here compared to the case in Figure~\ref{fig:prioreffect_seg}. This traces back in part to the fact that the mass is best constrained near twice the half-light radius~\cite{SKB07}. For Draco, 300 pc is within the half-light radius and when the CDM $r_{\rm max}$--$V_{\rm max}$ prior is imposed, lower $V_{\rm max}$ halos are forced, on average, to be more concentrated at 300 pc. In Figure~\ref{fig:prioreffect_seg}, 300 pc is beyond the half-light radius of Segue 1, and priors that favor larger $V_{\rm max}$ give larger extrapolated masses. \label{fig:prioreffect_dra} } \end{center} \end{figure} Choosing a prior in accordance with the physical situation and our degree of prior belief in it is an important aspect of Bayesian analysis, as variations in the priors can lead to sizable differences in the posterior whenever the likelihood is not very strongly peaked. We account for this prior information in our notation by including the appropriate infinitesimals.
For example, a ``uniform'' prior probability in $x$ will be denoted as $P(x) = d(x)$, whereas a uniform prior probability in $\ln(x)$ is represented as $P(x) = d[\ln(x)] = d(x)/x$. Regardless of these definitions, if the data strongly constrain some parameter, or a given combination of parameters, then the prior information should not have much bearing on the result, as the posterior is dominated by the likelihood. However, as we will see for Segue 1, with only 24 radial velocities spanning $\sim 50$ pc in radius, priors can have a significant effect. Guidance as to how to choose the priors for our dSph model parameters can be gleaned from cold dark matter simulations, which give precise predictions of halo abundances as a function of halo shape and mass. We note, however, that these simulations do not provide the probability of observing a {\em galaxy}, which itself depends not only on dark matter physics, but also on star formation and baryonic physics. But if we assume that primarily the halo mass, and not its shape, affects stellar physics, then we may draw the halo profile prior probabilities directly from simulations, with some additional simple inputs to account for gas physics. As discussed above, we describe the halo in terms of the parameters $n, r_{\rm max}, V_{\rm max}$. We model the $r_{\rm max}$ prior (conditional on the value of $V_{\rm max}$) as a log-normal distribution, which provides a good description of the $r_{\rm max}-V_{\rm max}$ relation measured in the Via Lactea \citep{Diemand:2008in} and Aquarius~\citep{Springel:2008cc} simulations over the entire range of $V_{\rm max}$.
From the Aquarius simulations \citep{Springel:2008cc}, we estimate this relation, given a $V_{\rm max}$, to be \begin{equation} \label{eq:conditionalprior} P(r_{\rm max} | V_{\rm max}) \propto \exp\left\{-\frac{\left[\log(r_{\rm max}) - 1.35\log(V_{\rm max})+1.75\right]^2}{2 \sigma_{\log(r_{\rm max})}^2}\right\} d (\log r_{\rm max}), \label{eq:CDMprior} \end{equation} where $\sigma_{\log(r_{\rm max})} = 0.22$ is a conservative scatter in the Aquarius subhalos for the entire range of $V_{\rm max}$. There are no published statistics (as of this paper) for the parameter $n$ in the Einasto profile, though from \citet{Springel:2008cc} it is reasonable to assume a uniform prior in $1/n$ [i.e. $P(n) \propto d(1/n)$] limited to the range $0.1 < 1/n < 0.5$. The choice of prior for $V_{\rm max}$ is a more delicate issue, for a couple of reasons. One issue relates to the probability that a given CDM halo has a particular value of $V_{\rm max}$, and a second issue relates to how well the line-of-sight velocity data themselves constrain particular values of $V_{\rm max}$. On the former point, $V_{\rm max}$ is the primary parameter that relates to the astrophysical process of low-mass galaxy formation: small galaxies with shallower potential wells are expected to have low star formation rates, so the actual $V_{\rm max}$ prior is expected to be shallower than that inferred from the CDM subhalo mass function, $P(V_{\rm max}) \propto V_{\rm max}^{-4} dV_{\rm max}$ (derived from the relation $N(>V_{\rm max}) \propto V_{\rm max}^{-3}$ of \citet{Springel:2008cc}). On the latter point, large $V_{\rm max}$ values are not constrained by the line-of-sight data. Given the form of the $r_{\rm max}$ prior, large $V_{\rm max}$ values correspond to large $r_{\rm max}$ values that may fall outside the extent of the stellar profile. Therefore, these values cannot be directly constrained by the data and accordingly become dominated by the prior.
\Fref{fig:prioreffect_seg} and~\Fref{fig:prioreffect_dra} exemplify this behavior. The left panel of \Fref{fig:prioreffect_seg} shows the posterior constraints on the mass of Segue 1 within $30$ pc with four different prior assumptions: $P(V_{\rm max}) \propto d(V_{\rm max}^{-3})$, $d(V_{\rm max}^{-2})$, $d(V_{\rm max}^{-1})$, and $d(\ln V_{\rm max})$. The right panel shows the mass within $300$ pc using the same respective priors. The prior choice has little effect on $M(30$ pc$)$ since this value is well constrained by the line-of-sight velocity data. In contrast, the $M(300$ pc$)$ posterior is dominated by the prior choice, as the radius of $300$ pc lies outside the measured stellar distribution. For Draco, whose stellar distribution extends beyond $300$ pc, $M(300$ pc$)$ is well constrained. We note that the prior behavior in \Fref{fig:prioreffect_seg} and~\Fref{fig:prioreffect_dra} does not contradict the results presented in~\citet{Strigari:2008ib}, where a uniform prior in $\ln [M(300 {\rm pc})]$ was taken for the entire satellite population, and no CDM-motivated priors were considered. Rather, \Fref{fig:prioreffect_seg} and~\Fref{fig:prioreffect_dra} pertain to the specific case of the prior assumed in $V_{\rm max}$. The above situation does, however, present a dilemma when considering priors in the quantity $V_{\rm max}$: the actual $V_{\rm max}$ prior for an observable galaxy is likely to be shallower than the predicted CDM $V_{\rm max}$ prior for all substructure (uniform in $V_{\rm max}^{-3}$), but shallower priors will give more statistical weight to parts of $V_{\rm max}$ parameter space not well constrained by the data. Because of these factors, the best that can be achieved with line-of-sight data is a lower limit on the model parameters. Thus, in this paper we use the CDM prior $P(V_{\rm max}) \propto V_{\rm max}^{-4} dV_{\rm max}$ with an imposed cut-off of 3 km s$^{-1}$.
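The $V_{\rm max}$ prior with its cut-off, and the conditional log-normal $r_{\rm max}$ prior of~\Eref{eq:CDMprior}, are easy to sample directly; the sketch below assumes base-10 logarithms, with $V_{\rm max}$ in km s$^{-1}$ and $r_{\rm max}$ in kpc (our reading of~\Eref{eq:CDMprior}):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_vmax(n, v_cut=3.0):
    """P(V) dV ∝ V^-4 dV for V > v_cut (uniform in V^-3, cut at 3 km/s).
    The CDF is F(V) = 1 - (v_cut / V)^3, inverted by V = v_cut (1-u)^(-1/3)."""
    return v_cut * (1.0 - rng.random(n)) ** (-1.0 / 3.0)

def sample_rmax(vmax, scatter=0.22):
    """Log-normal conditional prior: <log10 r_max> = 1.35 log10 V_max - 1.75
    with 0.22 dex scatter (base-10 logs, km/s and kpc assumed)."""
    mu = 1.35 * np.log10(vmax) - 1.75
    return 10.0 ** rng.normal(mu, scatter)
```

For this truncated power law the prior mean is $\langle V_{\rm max}\rangle = 1.5\, V_{\rm cut} = 4.5$ km s$^{-1}$, showing how strongly the prior favors the smallest allowed halos.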
This cut-off seems reasonable, as $\sim 3$ km s$^{-1}$ is expected to be a conservative lower bound to the $V_{\rm max}$ values below which gas is able to condense into halos~\cite{Thoul:1994ir}. As we show, the imposed cut-off of 3 km s$^{-1}$ does not affect the predicted flux from Draco, for which the posterior is dominated by the line-of-sight data. However, for Segue 1 (with only 24 stars), the issue is more subtle, as the CDM prior becomes increasingly dominant with decreasing values of $V_{\rm max}$. This would be particularly true if the $V_{\rm max}$ cut-off extended down to arbitrarily low values, though our choice of a (physically-motivated) cut-off somewhat reduces the effect of the prior, even for Segue 1. In summary, our prior for the CDM halo parameters is given by \begin{equation} P({\mathscr{H}}) = P(r_{\rm max}|V_{\rm max})P(V_{\rm max})P(1/n) \end{equation} with $P(r_{\rm max}|V_{\rm max})$ as in~\Eref{eq:conditionalprior} and $P(V_{\rm max}), P(1/n)$ according to the prescriptions given above. Finally, we have little physical basis to choose the prior for the anisotropy parameters ${\mathscr{V}}$. Additionally, these parameters are not well constrained by the data. Fortunately, these parameters, unlike the halo parameters, do not have a direct effect on the derived flux or mass (only an indirect one through the Jeans equation). Thus, we expect that the choice of these priors will not have as large an impact on the result as the halo priors. \Fref{fig:prioreffect_beta} shows the difference between the Segue 1 mass within $300$ pc using our prior assumption and the mass assuming an isotropic velocity distribution. Although our prior assumption biases the probability distribution to lower mass values, the effect is not as extreme as with the $V_{\rm max}$ prior.
\begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/seguemassiso.eps}} \caption{\footnotesize Posterior probability distribution for the mass within 300 pc for Segue 1, assuming an isotropic velocity distribution (the magenta dot-dot-dot-dashed line) and the model in~\Eref{eq:beta} (black solid line). Both curves assume a uniform prior in $V_{\rm max}^{-3}$ with an imposed cutoff below $V_{\rm max} = 3$~km s$^{-1}$. \label{fig:prioreffect_beta} } \end{center} \end{figure} The prior choices for all of the parameters (including particle physics quantities) considered in this paper are shown in~Table~\ref{tab:limits}. \begin{table} \caption{\label{tab:limits}Summary of model parameters and the priors imposed on them. Unless otherwise stated, the prior pdf is flat within the given range.} \begin{indented} \item[]\begin{tabular}{@{}ccl} \br CMSSM Parameters, ${\mathscr{C}}$ & Priors Assumed & Notes\\ \mr $m_0$ & $50$ GeV $ < m_0 < 4$ TeV & CMSSM scalar mass\\ $m_{1/2}$ & $50$ GeV $< m_{1/2} < 4$ TeV & CMSSM gaugino mass\\ $A_0$ & $\vert A_0\vert < 7$ TeV & Common trilinear coupling\\ $\tan\beta$ &$2 < \tan\beta < 62$ & Ratio of Higgs vevs\\ \br \br SM Parameters, ${\mathscr{N}}$ & Priors Assumed & Notes\\ \mr $M_t$ & $160$ GeV $ < M_t < 190$ GeV & Top quark mass\\ $m_b$ & $4$ GeV $ < m_b < 5$ GeV & Bottom quark mass\\ $\alpha_{em}$ & $127.5 < 1/\alpha_{em} < 128.5$ & EM coupling const.\\ $\alpha_s$ & $0.10 < \alpha_s < 0.13$ & Strong coupling const.\\ \br \br DM Halo Parameters, ${\mathscr{H}}$ & Priors Assumed & Notes\\ \mr $n$ & $0.1 < 1/n < 0.5$ & Einasto index, see~\Eref{eq:einasto}\\ $r_{\rm max}$ & see~\Eref{eq:CDMprior} & Used to derive $r_{s}$ \\ $V_{\rm max}$ & flat in $V_{\rm max}^{-3}$; $V_{\rm max} > 3$ km/s & Used to derive $\rho_{s}$ \\ \br \br Anisotropy Parameters, ${\mathscr{V}}$ & Priors Assumed & Notes\\ \mr $\eta$ & $0 < \eta < 3$ & see~\Eref{eq:beta}\\
$r_{\beta}$ & $0.01$ kpc $< r_{\beta} < 100$ kpc & Anisotropy scale length\\ $\beta_0$ & $-2 < \beta_0 < 0$ & Central anisotropy\\ $\beta_{\infty}$ & $-3 < \beta_{\infty} < 1$ & Outer anisotropy\\ \br \end{tabular} \end{indented} \end{table} \subsection{Markov Chain Monte Carlo methodology} Here we review the MCMC formalism necessary for our analysis; we refer to the papers referenced for more details. The goal of an MCMC algorithm is to generate a series of points in parameter space (called a ``chain'') with the property that their density is distributed according to the posterior pdf one wishes to explore. Then, from the chain of ``accepted'' points, the marginal probability distribution for each of the parameters is recovered by simply binning the points in the chain, and ignoring the uninteresting coordinates (two-dimensional distributions are obtained in a similar manner). In our case, we wish to explore the joint parameter space spanned by the particle physics model parameters, ${\mathscr{C}}$ and ${\mathscr{N}}$, and by the dwarf model parameters, ${\mathscr{H}}$ and ${\mathscr{V}}$. So we are dealing with a total of 15 parameters. Because the CMSSM likelihood and the stellar kinematics likelihood are independent, the joint posterior factorizes as (more details are given below) \begin{equation} \label{eq:jointP} P({\mathscr{C}}, {\mathscr{N}}, {\mathscr{H}}, {\mathscr{V}}|D,d) \propto P({\mathscr{C}}, {\mathscr{N}})P({\mathscr{H}}, {\mathscr{V}}) \mathcal{L}({\mathscr{C}}, {\mathscr{N}})\mathcal{L}({\mathscr{H}}, {\mathscr{V}}) \, . \end{equation} A great advantage of the MCMC procedure lies in its efficiency, since the computational effort scales roughly proportionally with the dimensionality of the parameter space being explored, rather than exponentially.
The true power of MCMC methods, which we specifically utilize in this paper, lies in the fact that in addition to obtaining the distributions for the model parameters, the probability distribution for any function of the model parameters is obtained by simply determining the function at each of the accepted points in the chain. In this way, we may easily determine the distribution of any parameter that is derived from our base set of parameters by post-processing the chain of accepted points. Examples of derived parameters that we are interested in for the purposes of this paper include the dark matter mass of the dwarfs or the gamma-ray flux. While the former quantity is just a function of the parameters that describe the dark matter halo, $({\mathscr{H}}, {\mathscr{V}})$, the latter quantity requires us to combine the probability distributions on $({\mathscr{C}}, {\mathscr{N}})$ with the probability distribution from the dwarf kinematics. To understand how these probabilities can be combined, and quantities such as the flux can be robustly calculated, we appeal to the general properties of conditional distribution functions. A Markov chain of the joint posterior distribution $P({\mathscr{C}}, {\mathscr{N}}, {\mathscr{H}}, {\mathscr{V}}|D,d)$,~\Eref{eq:jointP}, can be obtained from the combination of the two posteriors $P({\mathscr{C}}, {\mathscr{N}}|D)$ and $P({\mathscr{H}}, {\mathscr{V}}|d)$ as long as the joint probability distribution factorizes as in~\Eref{eq:jointP} above. This is, in fact, the case here since the likelihood for the particle physics model is unaffected by the stellar kinematic data. It is also the case that the halo density profile and stellar velocity dispersion anisotropy parameters are not affected by the particle physics constraints, hence the two likelihoods are independent.
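Because of this factorization, a joint sample is obtained simply by pairing independent draws from the two chains, and any derived quantity, such as a flux proportional to $\langle\sigma v\rangle N_\gamma/(8\pi m_\chi^2)$ times a halo line-of-sight factor, inherits its posterior automatically. A toy sketch with synthetic stand-ins for both chains (none of these distributions come from the actual analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Synthetic stand-ins for the two independent posterior chains:
m_chi   = rng.lognormal(np.log(200.0), 0.4, n)    # WIMP mass [GeV]
sigmav  = rng.lognormal(np.log(3e-26), 0.5, n)    # <sigma v> [cm^3/s]
n_gamma = rng.lognormal(np.log(10.0), 0.3, n)     # photons above threshold
J       = rng.lognormal(np.log(1e19), 0.6, n)     # halo line-of-sight factor

# Since the joint posterior factorizes, randomly paired draws from the two
# chains sample the joint pdf, and the flux is a derived parameter:
flux = sigmav * n_gamma / (8.0 * np.pi * m_chi**2) * J
```

Binning \texttt{flux} then directly yields its marginal posterior, with no extra likelihood evaluations.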
On the other hand, if we had included, for example, gamma-ray flux upper limits from Imaging Air Cerenkov Telescopes (ACTs)~\cite{Buckley:2008zc} in the particle physics likelihood, this would couple the two separate parameter spaces and invalidate the above decomposition. At present, including such data is not necessary because the upper limits are well above the CMSSM parameter space; however, it will be desirable and indeed necessary in the future. To obtain a Markov chain on the dSph model parameters, we opted for a combination of the slice sampling \citep{Neal00} and the Metropolis-Hastings \citep{Hastings70} algorithms. The advantage of slice sampling is that, unlike the Metropolis-Hastings algorithm, its efficiency is not strongly linked to the proposal pdf. With a good proposal pdf, however, the Metropolis-Hastings algorithm converges faster to the desired distribution. Thus, we obtain an initial proposal pdf using slice sampling and then employ it in the Metropolis-Hastings algorithm to derive our final posterior. (For the actual slice sampling methodology, see \citet{Neal00} and \citet{LewisBridle06-notes}. For the Metropolis-Hastings methodology, see \citet{Christensen01, LewisBridle02, Baltz06}, and \citet{LewisBridle06-notes} for details.) In practice, we found that our slice sampling run took 3-4 likelihood evaluations per point and offered fairly good convergence, whereas our subsequent Metropolis-Hastings run had an acceptance rate of 30\%-50\% with excellent convergence. Nine chains of 30,000 points were obtained, thinned, and then combined with the~\texttt{SuperBayeS} chains (as per the method outlined in \ref{appendix:combine}). We refer to~\ref{appendix:convergence} for a more detailed discussion of the convergence criteria we use for our chains.
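A minimal random-walk Metropolis-Hastings sampler, shown here on a toy two-dimensional Gaussian target rather than the actual 15-parameter posterior, illustrates the accept/reject step and the acceptance-rate diagnostic quoted above:

```python
import numpy as np

def metropolis_hastings(log_post, x0, prop_cov, n_steps, rng):
    """Random-walk Metropolis-Hastings with a fixed Gaussian proposal.
    Returns the chain and the acceptance rate."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chol = np.linalg.cholesky(np.asarray(prop_cov, dtype=float))
    chain = np.empty((n_steps, x.size))
    accepts = 0
    for i in range(n_steps):
        y = x + chol @ rng.standard_normal(x.size)
        lp_y = log_post(y)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
            accepts += 1
        chain[i] = x
    return chain, accepts / n_steps

# Toy target: a correlated two-dimensional Gaussian posterior.
rng = np.random.default_rng(42)
log_post = lambda x: -0.5 * (x[0] ** 2 + (x[1] - 0.9 * x[0]) ** 2)
chain, acc_rate = metropolis_hastings(log_post, [0.0, 0.0],
                                      0.8 * np.eye(2), 20_000, rng)
```

Tuning the proposal covariance (here a guess, $0.8\,\mathbb{1}$) is what the initial slice-sampling stage automates in our actual runs.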
\section{Boost from Halo Substructure \label{section:boost}} In the previous sections, we have outlined our modeling of the dwarf halos from stellar kinematics and our method for scanning the CMSSM parameter space, and we have used this modeling to predict the flux under the assumption of a smooth halo. The final ingredient we must add to the flux predictions is the boost from halo substructure. The goal of this section is to derive the probability distribution for the boost factor, accounting in the most reasonable manner possible for the astrophysical and particle dark matter uncertainties that enter the calculation. \subsection{Defining Boost} Dark matter halos form hierarchically, and this results in a population of surviving gravitationally-bound substructure. High resolution dissipationless numerical simulations reveal substructures in $z=0$ Milky Way-size halos with a mass spectrum that rises towards smaller masses with $N(>m) \propto m^{-\alpha}$ and $\alpha \sim 0.9$, down to the smallest masses that can be resolved, currently $m \gtrsim 10^{-6} M_{\rm host} \sim 10^6 M_\odot$~\citep{Diemand:2008in,Springel:2008cc}. However, as we discuss below, this mass is some ten orders of magnitude larger than the minimum mass expected for CDM structure, $m_{\rm min}$. While numerical simulations that focus on nested regions within very high redshift halos have demonstrated that halos with masses close to the filtering mass do survive the initial process of halo formation~\citep{Diemand:2005vz,Diemand:2006ey}, more modeling will be required to better understand the survival probability, density structure, and precise mass spectrum of the smallest CDM substructures at $z=0$. Because substructures themselves were assembled from smaller units prior to infall into their host, we expect a hierarchy of substructures within substructures that extends down to $m_{\rm min}$.
The first explorations of this hierarchy of mass functions and substructure distributions are just now becoming viable in state-of-the-art simulations~\cite{Springel:2008cc}. The dark matter halos of dSphs are substructures of the Milky Way halo; therefore their substructure fractions and mass functions are less well explored, but there are clear expectations. Relative to substructures within the Milky Way halo, substructures within dSph halos are expected to be older, and this gives them more time to be assimilated into the smooth component of their hosts. Moreover, the halos of dwarf satellites do not get replenished by accreted field halos to the extent that an isolated field halo would. Both of these effects will act to reduce substructure fractions in dSph halos relative to isolated galaxy halos. Following previous treatments, we define the boost $B$ such that the total gamma-ray flux $\Phi$ from a halo of mass $M$ is $\Phi(M) = \left[1 + B(M, m_{\rm min})\right] \tilde{\Phi}(M) $. Here $\tilde{\Phi}$ is the flux that comes from a {\em smooth} halo of mass $M$, and the boost includes a contribution from all subhalos larger than $m_{\rm min}$, which is set by particle physics. We begin with the formulation in \citet{Strigari:2006rd}, but note that the formalism of \citet{PieriBertone08} provides similar results. Adopting the above definition, we may write the boost as an integral that accounts for substructures going down the CDM mass hierarchy~\citep{Strigari:2006rd}: \begin{eqnarray} {\tilde{\Phi}(M)} B(M, m_{\rm min}) = A M^\alpha \int_{m_{\rm min}}^{qM} \left[1+B(m, m_{\rm min})\right]\tilde{\Phi}(m) m^{-1-\alpha} dm. \label{eq:boost} \end{eqnarray} Here we have assumed that halos of mass $M$ host substructures of mass distribution $dN/d\ln m = A(M/m)^{\alpha}$ for $m<qM$ and (for now) we assume a self-similar substructure hierarchy.
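\Eref{eq:boost} can also be solved by direct iteration on a logarithmic mass grid, adding one level of the substructure hierarchy per pass. The sketch below works in units of $m_{\rm min}$ and uses purely illustrative parameter values for $A$, $\alpha$, $\xi$ (the flux power law introduced below), and $q$:

```python
import numpy as np

def boost_numeric(M_over_mmin, A=0.03, alpha=0.9, xi=0.8, q=0.1,
                  levels=4, n_grid=400):
    """Iterate the boost integral with Phi(m) ∝ m^xi on a log grid in units
    of m_min, adding one level of the substructure hierarchy per pass
    (B = 0 initially). All parameter values are illustrative."""
    lnm = np.linspace(0.0, np.log(M_over_mmin), n_grid)
    m = np.exp(lnm)
    dl = lnm[1] - lnm[0]
    B = np.zeros(n_grid)
    for _ in range(levels):
        B_new = np.zeros(n_grid)
        for j in range(n_grid):
            mask = m <= q * m[j]          # subhalos occupy m_min < m < q M
            if mask.sum() < 2:
                continue                  # below m_min/q: no substructure
            # Integrand in d(ln m): [1 + B(m)] m^(xi - alpha).
            f = (1.0 + B[mask]) * m[mask] ** (xi - alpha)
            integral = dl * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule
            B_new[j] = A * m[j] ** (alpha - xi) * integral
        B = B_new
    return B[-1]                          # boost of the host, B(M)
```

With these made-up numbers and $M/m_{\rm min} = 10^{12}$, the host boost settles at a value of order ten after only a few levels of the hierarchy.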
Written in this way, we can see that the total boost in gamma-ray luminosity depends sensitively on a competition between the smooth flux $\tilde{\Phi}(m)$, which tends to decrease towards smaller masses, and $dN/d\ln m$, which rises to small masses. For halos described by NFW density profiles \cite[e.g.][]{Navarro04-cdm.halo.3,Bullock2001} with a concentration-mass relation that follows $c(m) \propto m^{-\mu}$ with $\mu \sim 0.1$, we expect $\tilde{\Phi}(m) \propto m^{\xi}$ with $\xi \simeq 1- 2.2 \mu$~\cite{Strigari:2006rd}. Setting $q=1$, we find that the boost $B(M,m_{\rm min})$ is approximately proportional to $(M/m_{\rm min})^{\alpha-\xi}$. Since the ratio $(M/m_{\rm min}) \sim 10^{15}$ for host halos of relevance, the precise value of the quantity $\alpha - \xi$ ($\sim 2.2 \mu$) at the smallest mass scales becomes crucial in determining whether the boost is significant or negligible. By making the most optimistic assumptions possible ($q=1$, self-similar substructure hierarchies, and optimistic assumptions for the density structures of small halos with $\mu \sim 0.1$),~\citet{Strigari:2006rd} and~\citet{Kuhlen:2008aw} have shown that the boost should be no larger than about 100, and more typically $\lesssim 10$ for $m_{\rm min} \sim 10^{-6} M_\odot$. \subsection{Analytic Solution for Boost} The above solutions to~\Eref{eq:boost}~\citep{Strigari:2006rd,Kuhlen:2008aw} were obtained by numerical integration. Here, we present an analytic solution to~\Eref{eq:boost}, and discuss its utility. We begin by assuming that the flux at a fixed host halo mass scales roughly as a power law, $\tilde{\Phi}(M) \propto M^{\xi}$. With this, we may rewrite~\Eref{eq:boost} as \begin{equation} \label{eq:dboost} D'(b) = A\left[q^{\xi - \alpha}\exp(b) + D(\ln q + b)\right] \end{equation} where $b = \ln q + \ln M - \ln m_{\rm min}$ and $D(M) = M^{\xi - \alpha} B(M)$. Note that $B(m_{\rm min}/q) = D(0) = 0$.
It should be noted that the appropriate boundary condition is $B(m_{\rm min}/q) = 0$, and not $B(m_{\rm min}) = 0$ (as was assumed in \citet{Strigari:2006rd}). Using this boundary condition,~\Eref{eq:dboost} can be solved recursively as: \begin{equation} \label{eq:bnrecur} B(M, m_{\rm min}) = \cases{ 0 & $M \le m_{\rm min}/q$ \\ A q^{\xi - \alpha} \frac{\left(\frac{q M}{m_{\rm min}}\right)^{\alpha-\xi}-1}{\alpha-\xi} & $m_{\rm min}/q < M \le m_{\rm min}/q^2$\\ \quad \vdots & $\quad \vdots$\\ } \end{equation} This forms a set of functions, $B(M, m_{\rm min}) = \{B_0(M, m_{\rm min}), B_1(M, m_{\rm min}), \dots, B_n(M, m_{\rm min})\}$, where each function $B_n(M, m_{\rm min})$ is only valid in the interval $m_{\rm min}/q^{n} < M \le m_{\rm min}/q^{n+1}$. Conceptually, the $B_n$'s represent the amount of substructure included in the calculation of the boost. For example, $B_0(M, m_{\rm min})$ represents the boost of a halo with no substructure, and thus we set $B_0(M, m_{\rm min}) = 0$. $B_1(M, m_{\rm min})$ represents the boost with the inclusion of only the subhalos, whereas $B_2(M, m_{\rm min})$ includes only the subhalos and sub-subhalos. The $B_n(M, m_{\rm min})$ can now be related through~\Eref{eq:bnrecur} by \begin{equation} \label{eq:dnboost} D'_{n}(b) = A\left[q^{\xi - \alpha}\exp(b) + D_{n-1}(\ln q + b)\right]. \end{equation} We may now solve for $B_n(M, m_{\rm min})$ by taking the Laplace transform of~\Eref{eq:dnboost} (see \ref{appendix:boost}). After inversion, this yields (for $n > 0$) \begin{equation} \label{eq:boostans} B_n(M, m_{\rm min}) = \sum_{i=1}^{n} \frac{1}{(i-1)!} \left(\frac{A q^{\xi - \alpha}}{\xi - \alpha}\right)^{i}\gamma \left(i, (\xi-\alpha)\ln\left(\frac{q^{i} M}{m_{\rm min}}\right)\right) \end{equation} where $\gamma(a, x)$ is the lower incomplete gamma function defined by \begin{equation} \gamma(a, x) = \int_0^x t^{a-1} \exp(-t) d t.
\end{equation} In the above analysis, we have assumed that the mass function is self-similar for all levels of substructure. However, this is unlikely to be the case: simulations~\citep{Diemand:2008in,Springel:2008cc} find less substructure in subhalos than in host halos. To account for the fact that the mass function may differ at each level of substructure, we repeat the analysis with the mass function allowed to vary independently at each level. We define the mass function to be \begin{equation} \label{eq:subhalomassfuni} \frac{d N}{d \ln m} = A_i \left(\frac{M}{m}\right)^{\alpha_i} \end{equation} at level $i$. Here, $i=0$ would apply to the parent halo whereas $i=1$ would apply to the subhalos and so on. Also, we let $q \rightarrow q_i$ following the same notation. Using the same analysis as above, we can rewrite~\Eref{eq:boostans} as \begin{equation} B_n(M, m_{\rm min}) = \sum_{i=1}^{n} \frac{1}{(i-1)!} \left(\frac{\tilde{A}_i \tilde{q}_i^{\xi - \alpha_i}}{(\xi - \alpha_i)^i}\right)\gamma \left(i, (\xi-\alpha_i)\ln\left(\frac{\tilde{q}_{i} M}{m_{\rm min}}\right)\right) \end{equation} where $\tilde{A}_i \equiv \prod_{j=1}^i A_j$ and $\tilde{q}_i \equiv \prod_{j=1}^i q_j$. For completeness, we present the $\alpha_i=\xi$ solution: \begin{equation} B_n(M, m_{\rm min}) = \sum_{i=1}^{n} \frac{\tilde{A}_i\left(\ln\left(\frac{\tilde{q}_{i} M}{m_{\rm min}}\right)\right)^i}{i!} \end{equation} The utility of the analytic solution derived above stems from the fact that it lets us account for each level of substructure separately. We find that in the most extreme circumstance of $\xi - \alpha = -0.2$ and $m_{\rm min} = 10^{-10} M_{\odot}$, inclusion of only the subhalos and sub-subhalos leads to an accuracy of $98.4\%$. Thus, it is not necessary to go beyond $n=2$ for accurate boost predictions -- for the most part it is sufficient to resolve just the subhalos to estimate the boost.
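As a concrete numerical check (a sketch with illustrative placeholder parameters, not part of the published analysis), the self-similar solution~\Eref{eq:boostans} can be evaluated using the closed form of the lower incomplete gamma function at integer order, $\gamma(i,x) = (i-1)!\,[1 - e^{-x}\sum_{k=0}^{i-1} x^k/k!]$, which remains valid for negative arguments. The values of $A$, $\alpha$, $\xi$, and $q$ below are arbitrary choices; the check confirms that the $n=1$ term reproduces the closed form in~\Eref{eq:bnrecur}:

```python
from math import exp, factorial, log

def lower_inc_gamma(i, x):
    # Lower incomplete gamma function gamma(i, x) for integer order i >= 1,
    # via gamma(i, x) = (i-1)! * (1 - e^{-x} * sum_{k=0}^{i-1} x^k / k!),
    # which holds for negative x as well.
    return factorial(i - 1) * (1.0 - exp(-x) * sum(x**k / factorial(k) for k in range(i)))

def boost_Bn(n, M, m_min, A, alpha, xi, q):
    # Self-similar analytic boost of eq. (boostans): sum over substructure
    # levels i = 1..n.  All parameter values passed in are illustrative only.
    total = 0.0
    for i in range(1, n + 1):
        prefactor = (A * q**(xi - alpha) / (xi - alpha))**i / factorial(i - 1)
        x = (xi - alpha) * log(q**i * M / m_min)
        total += prefactor * lower_inc_gamma(i, x)
    return total

# The n = 1 term must reproduce the closed form of eq. (bnrecur):
#   A q^{xi - alpha} [ (q M / m_min)^{alpha - xi} - 1 ] / (alpha - xi)
A, alpha, xi, q = 0.01, 1.0, 0.9, 0.1   # placeholder values
M, m_min = 1e8, 1e-6
b1 = boost_Bn(1, M, m_min, A, alpha, xi, q)
b1_closed = A * q**(xi - alpha) * ((q * M / m_min)**(alpha - xi) - 1) / (alpha - xi)
```

Successive terms of the sum shrink rapidly for these parameters, consistent with the statement above that truncating at $n=2$ already captures nearly all of the boost.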
\subsection{Annihilation luminosity - mass relation} \label{sec:lumfun} In order to compute the boost, we need to estimate the gamma-ray luminosity from a subhalo of the halo under consideration. The luminosity depends strongly on the log-slope of the inner density profile of the subhalo and on the tidal forces of the parent halo. In order to estimate these effects, we consider a simplified model of the subhalo distribution and tidal effects. We consider two possibilities for the spatial distribution of the subhalos. One is motivated by the current high-resolution simulations of Milky Way sized halos ~\citep{Diemand04,Gao07,Madau08,Springel08}, where it was found that the number density of subhalos with masses greater than $\sim 10^{-6}$ the mass of the parent halo falls off as $1/r^2$ at large radii. At small radii, tidal forces of the parent halo will destroy the subhalo and hence reduce the population in the innermost regions. However, it is not clear that the smallest subhalos, extending down to earth-mass subhalos, should share this more diffuse spatial distribution. We therefore also consider the other extreme, where the spatial distribution of the subhalos follows the smooth distribution. We choose the second possibility for the final computations because the boost is primarily affected by the smallest mass halos. We randomly generate subhalos according to the chosen spatial distributions. We allow the log-slope of the subhalo inner density profile, before it fell into the host halo, to range between $a=0.8$ and $a=1.4$, motivated by~\citet{Diemand04-cusps}. We note, however, that more recent simulations indicate that there is no asymptotic inner slope~\citep{Navarro04-cdm.halo.3, Springel:2008cc,Navarro:2008kc}.
The density profile assumed for the subhalo when it was outside the host halo is \begin{equation}\label{eq:boost-calc-density} \rho_a (r) = \rho_0 (r/r_0)^{-a} (1+r/r_0)^{-3+a}\,, \end{equation} where $a=1$ corresponds to the NFW profile and the concentration is defined by the ratio of the virial radius and the scale radius as $c = R_{\rm vir}/r_{0}$. We assume a power-law virial mass function for the subhalo before it fell into the host halo and generate masses randomly for the subhalos. Since we start by picking the virial mass of the halos, we use the observed mass function for field halos. However, there is a huge extrapolation involved -- we assume that the power law extends all the way down to the minimum halo mass, where the mass function gets truncated. Given the virial mass, we find the concentration of the halo using the field concentration--mass relation, including the scatter. While these relations were derived for the NFW profile, we adapt them for the profile in ~\Eref{eq:boost-calc-density}. Assigning a concentration is the most uncertain part of the calculation; in fact, as we will see, {\em the flux depends sensitively on the concentration of the smallest halos.} Following the above procedure, we are able to set $r_0$ and $\rho_0$, and from this we then find the corresponding ergodic distribution function, $f_a(E)$. We adopt the following simple model for tidal effects: a given tidal radius defines a relative energy $E_0$ below which the distribution function of the subhalo drops to zero, analogous to King models for stellar systems. That is, \begin{eqnarray} f(E) & = & f_a(E) \, \forall \, E>E_0 \,, \label{eq:tidalfe} \\ & = & 0 \, \forall \, E < E_0 \nonumber \,.
\end{eqnarray} Two points are worth noting: (1) $E_0$ is fully specified by the ratio of the tidal radius to $r_0$, and (2) the product $\rho_0 r_0^a$ has to be unchanged because the tidal effects do not change the innermost regions of the subhalo. We find that the density profiles resulting from the distribution function in ~\Eref{eq:tidalfe} are well fit by \begin{equation} \rho_{\rm sub} (r) = \rho_a (r) \exp(-(r/r_t)^n)\,, \end{equation} where $r_t$ is defined here as the tidal radius and $n$ is a function of $E_0$ (or equivalently $r_t/r_0$). For any given $\rho_0$ and $r_0$, this model then provides a relation between $r_t$ and the tidally truncated mass of the subhalo. If $r_t$ is smaller than $r_0$, we consider the subhalo destroyed. To fix the mass and tidal radius for a given $\rho_0$, $r_0$ and distance from the center of the host halo, we iteratively solve the Jacobi relation between $r_t$ and mass, under the approximation that both the host halo and the subhalo are isothermal. We allow for the fact that the satellite could have been closer to the center along its orbit by taking the closest orbital distance to be the current distance times a factor chosen randomly between 0 and 1. This is now a fully specified model, and we may predict the gamma-ray annihilation luminosity as a function of the tidally truncated mass of the subhalo. At the high-mass end of the subhalo mass function, the predictions will have large scatter because of sample variance. At smaller masses, where most of the flux comes from, the large number of subhalos contributing to the flux means that the predictions essentially have no scatter. As a means of illustrating the important uncertainties associated with subhalo density structures, we explore two simple models for the concentration-mass relation.
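Before turning to these models, the annihilation luminosity implied by the truncation model above can be sketched numerically. The snippet below is illustrative only (unit $\rho_0$ and $r_0$, an arbitrary truncation index): for the untruncated $a=1$ (NFW) case of~\Eref{eq:boost-calc-density} the volume integral of $\rho^2$ is analytic, $\int_0^\infty 4\pi r^2 \rho^2\,dr = (4\pi/3)\,\rho_0^2 r_0^3$, which provides a check on the quadrature, and any finite tidal radius can only reduce the luminosity:

```python
from math import pi, exp

def rho(r, rho0=1.0, r0=1.0, a=1.0):
    # Density profile of eq. (boost-calc-density):
    # rho0 (r/r0)^(-a) (1 + r/r0)^(-3+a); a = 1 is NFW.
    x = r / r0
    return rho0 * x**(-a) * (1.0 + x)**(a - 3.0)

def luminosity(r_t=None, n_trunc=3.0, rho0=1.0, r0=1.0, a=1.0,
               rmax=200.0, steps=200000):
    # L ∝ ∫ rho_sub^2 4 pi r^2 dr with rho_sub = rho * exp(-(r/r_t)^n)
    # when a tidal radius r_t is supplied (simple Riemann sum; the
    # integrand is finite as r -> 0 for a < 1.5, so skipping r = 0 is safe).
    h = rmax / steps
    total = 0.0
    for i in range(1, steps + 1):
        r = i * h
        d = rho(r, rho0, r0, a)
        if r_t is not None:
            d *= exp(-(r / r_t) ** n_trunc)   # tidal truncation factor
        total += 4.0 * pi * r * r * d * d * h
    return total
```

With $\rho_0 = r_0 = 1$ the untruncated sum converges to $4\pi/3 \approx 4.19$, and any finite $r_t$ yields a strictly smaller value, illustrating how tidal stripping suppresses the subhalo luminosity.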
The first is a power-law (PL) model $c(m) \propto m^{-0.1}$ that is normalized to match the results of N-body simulations over the mass ranges that have been probed by direct simulations ($m \gtrsim 10^{8} M_\odot$). The second is the simple analytic model of~\citet{Bullock2001}, with the specific implementation of~\citet{Maccio:2008xb}. This model (denoted B01 below) also matches simulations down to the mass scales explored by simulations, but links the value of $c$ to the spectrum of CDM density fluctuations via estimated collapse times~\footnote{The choice of B01 is representative of many proposed analytic models that link $c$ to the power spectrum. Among these it has the {\em steepest} faint-end slope.}. While the power-law extrapolation leads to concentrations of order 500 for earth-mass halos, the B01 model concentrations are much lower, and plateau at small masses because the slope of the power spectrum of density fluctuations is shallower on small scales. We note that the flux depends primarily on the profile at radii around $r_0$ and smaller. The concentration sets $\rho_0$, since $\rho_0 \propto c^3$, and the flux is proportional to $\rho_0^2 r_0^3$. We find that the scaling formula suggested by~\citet{Strigari:2006rd}, $\tilde{\Phi}(M) \propto M\,c(M)^{2.2}$, works well (here $M$ is the subhalo tidally-truncated mass) for power-law $c(M)$ functions. We did not find any systematic differences in this scaling for the two assumptions about the spatial distribution of the subhalos. However, we do find that the more concentrated spatial distribution (NFW) implies systematically larger fluxes (by a factor of about 2). In the self-similar calculation below we only include the scaling information. We do not consider the effect of the assumptions about the spatial distribution of subhalos on the signal-to-noise.
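The $c(M)^{2.2}$ scaling can be motivated directly: for an NFW halo at fixed virial mass, $r_0 \propto 1/c$ and $\rho_0 \propto c^3/f(c)$ with $f(c) = \ln(1+c) - c/(1+c)$, so $L \propto \rho_0^2 r_0^3 \propto c^3/f(c)^2$. The sketch below (our own consistency check, not drawn from the cited works) shows that the local log-slope of this expression is close to 2.2 for concentrations of a few tens:

```python
from math import log

def f_nfw(c):
    # NFW enclosed-mass factor: f(c) = ln(1 + c) - c / (1 + c).
    return log(1.0 + c) - c / (1.0 + c)

def log_slope(c, eps=1e-5):
    # d ln L / d ln c for L ∝ c^3 / f(c)^2 (NFW at fixed virial mass),
    # evaluated by symmetric finite difference in ln c.
    def lnL(cc):
        return 3.0 * log(cc) - 2.0 * log(f_nfw(cc))
    return (lnL(c * (1.0 + eps)) - lnL(c / (1.0 + eps))) / (2.0 * log(1.0 + eps))
```

The slope rises slowly with $c$ (from roughly 1.9 at $c=10$ to roughly 2.5 at $c=100$), so the single exponent 2.2 is best viewed as an effective average over the relevant concentration range.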
In particular, if the distribution of the subhalos is as shallow as $1/(r_s+r)^2$, then much of the signal (even for more moderate boosts of order 1) will come from the ``outer'' regions. As we integrate the signal outwards from the stellar core, two effects must be considered. One, the signal-to-noise depends sensitively on the angular acceptance region around the satellite. If the background is mainly extragalactic (or at least constant across the angular region of the galaxy), it will scale linearly with the angular region covered. The scaling of the signal with the angular acceptance can only be estimated if the spatial distribution of the subhalos of a satellite is known. Two, we cannot accurately predict the tidal radius beyond which the dark matter density profile of the satellite galaxy is cut off. For the boost calculation in the MCMC exploration, we assume that the spatial distribution of the subhalos follows that of the smooth halo. We reiterate that the relevant quantity is the distribution of small subhalos that cannot yet be resolved by simulations, and those subhalos are the ones that contribute dominantly to the gamma-ray flux. \subsection{Minimum Mass CDM Halo} \label{sec:mmin} We now have an expression for the boost as a function of (1) the host halo mass, (2) the mass function of CDM subhalos, (3) the concentration-mass relationship for CDM subhalos, and (4) the minimum mass CDM halo, $m_{\rm min}$. The uncertainty in the fourth item arises from the unknown cut-off scale in the mass function of CDM substructure. As mentioned above, this cut-off scale is well below the resolution of modern-day CDM numerical simulations of Milky Way like galaxies at $z=0$. The smallest halo size is set by the horizon at kinetic decoupling in the early universe.
The kinetic decoupling in turn depends on the scattering interactions of dark matter with standard model fermions, as well as the free-streaming after decoupling ~\citep{Loeb:2005pm,Profumo:2006bv,Bertschinger06,Bringmann:2006mu}, \begin{equation} m_{\rm min} = 5.72\times10^{-2} \, \Omega_M h^2 \, C^{3/4} \left(\frac{m_{\chi}\sqrt{g_{eff}}}{100~{\rm GeV}}\right)^{-15/4} M_{\odot}. \label{eq:mcdm} \end{equation} Here $g_{eff} = 10.75$ is the effective number of degrees of freedom when the CDM particle freezes out, and \begin{equation} C \equiv \frac{m_{\chi}^2}{p_{cm}^2}\left|\mathcal{M}(t \rightarrow 0)\right|^2, \label{eq:scatteringmatrix} \end{equation} where $\left|\mathcal{M}(t \rightarrow 0)\right|$ is the matrix element for elastic scattering in the limit of no momentum transfer ($t \rightarrow 0$). \citet{Bertschinger06} has calculated $m_{\rm min}$ assuming a Bino-like neutralino, finding typical values $m_{\rm min} \sim 10^{-4}$ M$_\odot$ for typical WIMP masses. We follow the kinetic decoupling calculation of~\citet{Bertschinger06}, but generalize it to include Wino- and Higgsino-type neutralinos by determining $C$ at each point in the~\texttt{SuperBayeS}~ chain. In~\ref{appendix:scat}, we present the relevant Feynman rules and matrix elements that enter into calculating the scattering matrix in~\Eref{eq:scatteringmatrix}. It is important to note that in our analysis of the scattering matrix element, we assume that before kinetic decoupling occurs, the WIMP interacts solely with electrons and neutrinos. In doing so, we have neglected the effects of muon scattering, which will be relevant if kinetic decoupling occurs at temperatures comparable to the muon mass. Including the non-negligible muon abundance would in turn modify~\Eref{eq:mcdm}, which is beyond the scope of our present analysis. In \Fref{fig:m0}, we show the resulting posterior distribution of $m_{\rm min}$, accounting for the entire presently viable parameter space of the CMSSM. We find $\sim 95\%$ c.l.
values for the minimum halo mass within the range $\sim10^{-9} - 10^{-6}$ M$_\odot$. The features in the likelihood for $m_{\rm min}$ arise from the probability distributions in the CMSSM parameters; due to the non-linear transformation between these parameters and the minimum mass halo in~\Eref{eq:mcdm}, any small features in the CMSSM parameters are strongly amplified in the $m_{\rm min}$ likelihood. \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.482\hsize]{pics/m0.eps}} \caption{\label{fig:m0}\footnotesize Posterior probability for $m_{\rm min}$ in the CMSSM, assuming flat priors and employing all available constraints. } \end{center} \end{figure} \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.482\hsize]{pics/cvsm.eps}} \rotatebox{270}{\includegraphics[height=0.482\hsize]{pics/boost.eps}} \caption{\label{fig:cm0}\footnotesize {\it Left}: Halo concentration versus halo mass for our PL model (solid black line) and the B01 model (dashed red line). {\it Right}: The boost for an $M = 10^8 M_\odot$ halo that results from the PL model (solid black) (using~\Eref{eq:boostans}) and the B01 (dashed red) concentration models. Note that both of these concentration-mass models are consistent with current simulations. } \end{center} \end{figure} \subsection{Boost Predictions: Two models \label{subsec:boostmodels}} \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.482\hsize]{pics/segueboostjames.eps}} \rotatebox{270}{\includegraphics[height=0.482\hsize]{pics/segueboostaquar.eps}} \caption{\label{fig:boost}\footnotesize {\it Left}: Posterior distribution for the boost factor, assuming the concentration-mass model of B01. {\it Right}: Posterior distribution for the boost factor, assuming the PL concentration-mass model.
} \end{center} \end{figure} Running Monte Carlo simulations (see~\sref{sec:lumfun}), we find that the luminosity $\Phi(M)$ has a power-law behavior for both the PL and B01 models. In terms of the first-order estimates of the boost discussed in association with ~\Eref{eq:boost} above, the relevant combination of the mass-function log-slope ($-\alpha$) and the luminosity-function log-slope ($\xi$) takes the value $\alpha -\xi \simeq 0.2$ and $0.1$ for the PL and B01 models, respectively. \Fref{fig:cm0} shows the boost for a $10^8$ M$_{\odot}$ halo across its relevant range for both models. The ensuing posterior pdf of the boost for Segue 1 for these two models is shown in \Fref{fig:boost}. The difference in boost between these two models is an order of magnitude, which underscores the large effect that the (uncertain) concentration of the lowest mass halos has on the overall boost. This leads to a natural uncertainty for any flux prediction and emphasizes the need to measure the concentration and mass function well for halos of the smallest size. In order to be conservative, for the remainder of this paper we assume $\alpha - \xi = 0.1$. \section{Flux Predictions and Detection Prospects} \label{section:results} In the analysis above we have assembled the necessary ingredients to make gamma-ray flux predictions given constraints from the halo kinematics and the CMSSM. The corresponding kinematic constraints on the (smooth) dark matter halos of Draco and Segue 1 are illustrated in ~\Fref{fig:rhosrs}. Now, as a final step in the process of making the flux predictions, we must specify the angular region around the center of the dSph within which the flux is calculated, i.e.\ $\theta_{\rm max}$ in~\Eref{eq:flux}. The optimal value of $\theta_{\rm max}$, which maximizes the signal and minimizes the background contribution, depends on the experiment and backgrounds under consideration.
Satellite experiments such as Fermi~\cite{Atwood:2009ez} and ACTs such as HESS~\cite{Aharonian:2007km} and MAGIC~\citep{SanchezConde:2009ms} are expected to have angular resolutions of $\sim 0.2^\circ$ and $0.1^\circ$, respectively. If the extent of the gamma-ray emission region of a dSph is larger than these point-spread functions, the dSph would be resolved as an extended source rather than a point source. \begin{figure} \begin{center} \rotatebox{360}{\includegraphics[height=0.45\hsize]{pics/seguerhosrsext.eps}} \rotatebox{360}{\includegraphics[height=0.45\hsize]{pics/dracorhosrsext.eps}} \caption{\label{fig:rhosrs}\footnotesize Allowed parameter space for the scale radius and scale density for the Einasto profile in~\Eref{eq:einasto} for Segue I ({\em Left}) and Draco ({\em Right}). Inner and outer contours represent the 68\% and 95\% c.l. regions respectively. } \end{center} \end{figure} Our approach in this paper is to provide an algorithm for determining the optimal value of $\theta_{\rm max}$, given the best-fitting halo parameters and the measured background spectra. More specifically, we determine the optimal value of $\theta_{\rm max}$ from our Markov chains, along with the estimation of the detection significance, $\sigma = N_s/\sqrt{N_s+N_b}$, where $N_b$ is the number of background photons and $N_s$ is the number of signal photons within a given angular acceptance. We note that the definition of detection significance here is only meant to be an approximation; more realistically, the maximum signal-to-noise will depend on the angular distribution of the signal and background, and on detailed detector specifications. Nonetheless, proceeding with the above definition, at each point in our chains we calculate the value of $\theta_{\rm max}$ for which the significance $\sigma$ is maximized, and then construct the pdf of $\theta_{\rm max}$.
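The maximization just described can be illustrated with a toy calculation. The count models below are hypothetical placeholders (a saturating enclosed-signal curve with arbitrary normalization $S$ and angular scale $\theta_s$, and a flat background with arbitrary normalization $b$ whose counts scale with solid angle); the actual analysis uses the halo posterior and the measured background spectrum instead:

```python
from math import sqrt

PSF = 0.2  # deg; assumed instrumental floor on the acceptance radius

def n_signal(theta, S=100.0, theta_s=0.3):
    # Hypothetical enclosed-signal curve: rises as theta^2 at small angles
    # and saturates at S counts once the emission region is contained
    # (S and theta_s are placeholder values, not fits).
    return S * theta**2 / (theta**2 + theta_s**2)

def n_background(theta, b=500.0):
    # Flat (extragalactic) background: counts scale with solid angle,
    # i.e. ~ theta^2 for small angles; b is a placeholder normalization.
    return b * theta**2

def significance(theta):
    # sigma = N_s / sqrt(N_s + N_b), as defined in the text.
    ns, nb = n_signal(theta), n_background(theta)
    return ns / sqrt(ns + nb)

def best_theta():
    # Grid search for the acceptance radius maximizing sigma, floored at the PSF.
    grid = [PSF + 0.005 * i for i in range(400)]   # 0.2 .. ~2.2 deg
    return max(grid, key=significance)
```

For these placeholder numbers the optimum falls a few tenths of a degree above the PSF; a brighter background pulls $\theta_{\rm max}$ inward, while a flatter signal profile pushes it outward.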
In order to convert the flux to a detected number of photons, we take the total exposure, defined as the orbit-averaged effective area times the observation time, to be $3 \times 10^{11} {\rm cm}^2 {\rm s} \simeq 2000 \, {\rm cm}^2 \times 5 \, {\rm years}$~\cite{Atwood:2009ez}. For our input background spectrum we perform a standard analysis utilizing the EGRET diffuse background measurements at high Galactic latitude. We consider just the diffuse extragalactic background, though we note that including contributions from the residual Galactic component may increase the total background flux by about an order of magnitude. The diffuse backgrounds over the energy range of $\sim 1-100$ GeV relevant for WIMP annihilation will of course be mapped with even greater precision in the very near future with Fermi. The diffuse extragalactic background is seen to fall off according to the power law $dN/dE \sim E^{-2.1}$~\cite{Hunter1997}; we integrate this background spectrum for energies $> 1$ GeV, and compare to the number of photons produced by dark matter over the same energy range. The resulting pdf for $\theta_{\rm max}$ is shown in~\Fref{fig:thetamax}. As~\Fref{fig:thetamax} shows, accounting for the diffuse backgrounds, the optimal angular extent for each galaxy is $\sim 0.2^\circ$ (the PSF of Fermi), with a gradual tail that extends to larger angles. \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/seguethetamax.eps}} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/dracothetamax.eps}} \caption{\label{fig:thetamax}\footnotesize The posterior probability distribution for the angular region within which the signal-to-noise ratio is maximized. {\em Left} panel is for Segue 1, and {\em Right} panel is for Draco. The sharp cut-off at low $\theta_{\rm max}$ is a result of the assumed point-spread function.
} \end{center} \end{figure} Given the distribution of $\theta_{\rm max}$ for each galaxy, in~\Fref{fig:flux_mx} and~\Fref{fig:flux_SI} we present our results for the flux of gamma-rays from each galaxy with energies greater than $1$ GeV. In these figures, we show the gamma-ray flux versus two CMSSM parameters, the neutralino mass and the spin-independent cross section. For these figures we assume the ``B01'' boost model, where the minimum value for the halo mass is self-consistently calculated from the CMSSM parameter space. Because Draco has more line-of-sight velocities, its flux distribution is more strongly constrained. As is seen, the predicted fluxes are similar for both galaxies, despite the fact that Segue 1 is more than a factor of three closer than Draco. The main reason that these fluxes are similar traces back to the prior used in ~\Eref{eq:CDMprior}; given the velocity dispersion and half-light radius, Segue 1 prefers to reside in a halo with lower $V_{\rm max}$ relative to Draco. The corresponding one-dimensional flux distributions are shown in~\Fref{fig:1Dfluxes}. \begin{figure} \begin{center} \rotatebox{0}{\includegraphics[height=0.45\hsize]{pics/seguefluxmassext.eps}} \rotatebox{0}{\includegraphics[height=0.45\hsize]{pics/dracofluxmassext.eps}} \caption{\label{fig:flux_mx}\footnotesize The $E > 1$ GeV gamma-ray flux versus the neutralino mass for Segue 1 ({\em Left}) and Draco ({\em Right}). Both figures include the conservative boost model corresponding to the~\citet{Bullock2001} halo concentration model (discussed in~\Sref{subsec:boostmodels}) and CDM prior (discussed in~\Sref{subsec:priors}). Inner and outer contours represent the 68\% and 95\% c.l. regions respectively.
} \end{center} \end{figure} \begin{figure} \begin{center} \rotatebox{0}{\includegraphics[height=0.45\hsize]{pics/seguefluxnucext.eps}} \rotatebox{0}{\includegraphics[height=0.45\hsize]{pics/dracofluxnucext.eps}} \caption{\label{fig:flux_SI}\footnotesize The $E>1$ GeV gamma-ray flux versus the spin-independent cross section for Segue 1 ({\em Left}) and Draco ({\em Right}). Both figures include the conservative boost model corresponding to the~\citet{Bullock2001} halo concentration model (discussed in~\Sref{subsec:boostmodels}) and CDM prior (discussed in~\Sref{subsec:priors}). Inner and outer contours represent the 68\% and 95\% c.l. regions respectively. } \end{center} \end{figure} \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/seguefluxext.eps}} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/dracofluxext.eps}} \caption{\label{fig:1Dfluxes}\footnotesize Probability density for the flux of $E>1$ GeV gamma-rays from Segue I ({\em Left}) and Draco ({\em Right}), after marginalizing over all parameters of the CMSSM and the dark matter halos. The solid and dashed curves correspond to the boost model with the~\citet{Bullock2001} and power-law halo concentration models (discussed in~\Sref{subsec:boostmodels}) respectively. The dashed-dotted line denotes the flux above which the signal-to-noise is about 3, using the definition of signal-to-noise estimated in section~\ref{section:results}. Notice the double-peaked feature in the Draco posterior. This is due to the fact that the astrophysical component of the flux is very well constrained by the data, so the uncertainty in the flux is dominated by the non-Gaussian CMSSM parameter space. The uncertainty in the flux for Segue 1, though, is dominated by the astrophysics and accordingly is very Gaussian in shape.
} \end{center} \end{figure} Using the definition of significance, $\sigma$ (defined above), and an assumed exposure, we can obtain a rough estimate of detection prospects. As an example, we consider the detection prospects at a signal-to-noise greater than 3 ($\sigma = 3$) in five years (same exposure as given previously). Comparing to our flux predictions in~\Fref{fig:1Dfluxes}, we see that with the standard B01 boost model, the parameter space spanned by the flux posterior of Draco lies below this estimated flux limit (vertical dot-dash line) and only a small fraction of the parameter space reaches this limit for Segue 1. On the other hand, taking the optimistic PL boost model, in which the mean boost is $\sim 20$, both Segue 1 and Draco exhibit significant regions of their parameter space that are detectable given our estimated Fermi sensitivity. Specifically, for the PL boost model, we find $\sim 20\%$ of the Draco flux parameter space is detectable with signal-to-noise greater than 3 with five years of Fermi observation (and similarly $\sim 13\%$ of the Segue 1 parameter space at the same level). We note again that the assumed priors (see below) have a large effect on the predicted flux and our general strategy in this work has been to quantify the {\em minimum} expected flux. We also note that for our PL model we assumed a spatial distribution for the subhalos that tracks the underlying host halo mass distribution. Making this spatial distribution more extended will lower the flux from the central region. Our results for the B01 concentration boost model are broadly consistent with the results of \citet{Pieri08}, who also considered annihilation flux from Draco but with an energy threshold of 100 MeV. For the PL boost model, it would be interesting to also consider the flux of gamma-rays from annihilations in the unresolved substructure of the Milky Way and ensure that this is not in conflict with the measured EGRET background \cite{PieriBertone08}. 
However, such an exercise is hampered by the fact that the Galactic disk could significantly change the substructure distribution as well as its abundance, and that there is no systematic way of taking this into account at the present time. Of course, to extract a dark matter annihilation signal above background, a detailed comparison between all of the input spectra is required, rather than simply counting photons above some energy threshold. In order to make a detailed comparison between spectra, it will be important to determine the characteristic energy for photons from dark matter annihilation. We define the characteristic energy as $E_{\rm max}$, the energy at which the quantity $E^2 dN/dE$ peaks. For each point in our Markov chain we determine the characteristic energy, and the resulting pdf for $E_{\rm max}$ is given in~\Fref{fig:energymax}. \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/energy.eps}} \caption{\label{fig:energymax}\footnotesize The posterior probability density for the peak of the gamma-ray energy spectrum, where the energy spectrum is defined in terms of $E^2 dN/dE$. } \end{center} \end{figure} Finally, we revisit our earlier discussion of priors in the context of flux predictions. We see from \Fref{fig:1Dfluxes} that when the astrophysical data are highly constraining, the majority of the flux uncertainty comes from the particle physics parameter space. This severely reduces the impact of the astrophysical priors on the overall uncertainty of the flux. But for galaxies like Segue 1, with only 24 line-of-sight velocities, the likelihood for its parameters is much less constraining, and thus astrophysical priors have a large impact on the flux posterior. In~\Fref{fig:fluxprioreffect}, we show how the assumed prior affects the resulting calculation of the flux from Segue 1 and Draco. The curves in~\Fref{fig:fluxprioreffect} are for the same priors as in~\Fref{fig:prioreffect_seg}.
It is thus clear that, given the small number of stars from Segue 1, the assumed prior has a significant effect on the flux calculation. The CDM $V_{\rm max}$ prior forces Segue 1 to reside in halos with smaller $V_{\rm max}$, and this results in smaller predicted fluxes. We note that the physics of faint-end galaxy formation should change this prior significantly and would generally push the prior towards favoring larger $V_{\rm max}$ values. Thus our chosen priors have the effect of providing the minimum expected flux from Segue 1. Future data sets for this object, and all other ultra-faint satellites, will be crucial for constraining the dark matter mass and estimating the gamma-ray fluxes. The priors on the CMSSM parameter space may also play an important role in determining the flux expectations from the dwarfs~\cite{Trotta08}. We have not undertaken a systematic study of the effect of these priors on the fluxes here. We chose a uniform prior in all the CMSSM parameters, including the masses. \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/segueflux4.eps}} \rotatebox{270}{\includegraphics[height=0.45\hsize]{pics/dracoflux4ext.eps}} \caption{\footnotesize Effect of the prior on the predicted flux for Segue 1 ({\em Left}) and Draco ({\em Right}). Again, a uniform prior in $V_{\rm max}^{-3}$ (black, solid), $V_{\rm max}^{-2}$ (red, dashed), $V_{\rm max}^{-1}$ (green, dot-dashed), and $\ln(V_{\rm max})$ (blue, dotted) is assumed. As in the case of the mass (see~\Fref{fig:prioreffect_seg}), flat priors in increasing negative powers of $V_{\rm max}$ cause the Segue 1 posterior to be more biased than the kinematically well-constrained case of Draco. For these figures, we have set the boost factor equal to zero.
\label{fig:fluxprioreffect} }\end{center} \end{figure} \section{Discussion and Conclusions \label{section:conclude}} Constraining the properties of the dark matter particle in indirect detection experiments will require a firm understanding of the astrophysical uncertainties that contribute to the flux. Dwarf satellites of the Milky Way are particularly interesting targets in this regard because they are the most dark matter dominated objects known and they are largely free from the astrophysical uncertainties that result from baryonic physics. In this paper, we have taken a step towards quantifying uncertainties in flux predictions from dSphs by providing a framework within which both particle physics and astrophysics uncertainties can be included at once. We have combined our MCMC method for determining the dark matter distributions of dwarf satellites using stellar kinematics with the~\texttt{SuperBayeS}~MCMC package, which determines the preferred ranges for parameters of the CMSSM. We have focused on two specific dSphs, Segue 1 and Draco, as example cases. Our methods allow us to provide a broad outline for the prospects of detection of these satellites with gamma-ray experiments such as Fermi. The main results of our paper can be summarized as follows: \begin{itemize} \item We find that both Draco (at 80 kpc) and Segue 1 (at 23 kpc) are expected to have grossly similar fluxes, though the flux from Segue 1 is subject to larger uncertainties because of its relative lack of kinematic data and smaller stellar extent. For the most conservative assumptions, Segue 1 prefers to reside in a halo with lower $V_{\rm max}$ relative to Draco, and therefore it has a similar overall flux despite its relative proximity. However, for a flat prior in log$(V_{\rm max})$, the flux from Segue 1 can be much larger than that from Draco (see Figure~\ref{fig:fluxprioreffect}). This result motivates future observations of stellar kinematic data for Segue 1.
We note that our results for the flux, unless otherwise stated, are based on the more conservative prior and the conservative boost model, and hence they quantify the minimum expected flux for the assumed CMSSM priors. \item We have provided the first self-consistent calculation of the boost in the flux signal from halo substructure, taking into account both the CMSSM model and the recent results from high-resolution numerical simulations. We show that the dominant uncertainty in the boost calculation comes from the assumed halo concentration versus mass relation for halo substructure on mass scales down to the scale of the minimum mass halo. If we assume a model that links halo concentrations to the power spectrum, then we obtain typical boost factors of order unity. If we instead assume a power-law continuation of the concentration-mass relation down to the minimum mass halo, the average boost factor increases by an order of magnitude to $\sim 20$ (see Figure~\ref{fig:boost}). This boost would be reduced if the spatial distribution of the smallest subhalos is more diffuse than that of the smooth dark matter halo component (the distribution we have assumed in this paper). We also note that our analytic solution for the boost shows that resolving subhalos, and in some cases sub-subhalos, is sufficient to get an accurate estimate of the boost. These facts motivate future high-resolution simulations of halo substructure that can measure the concentrations of small dark matter halos directly, as well as map their spatial distribution within the host. \item We have provided a broad outline of the prospects for detection of these satellites with gamma-ray experiments, focusing specifically on Fermi. We find that, given the diffuse backgrounds, the optimal solid angle within which to view these galaxies is $\sim 0.2-0.3^\circ$ (see Figure~\ref{fig:thetamax}). Optimistic fluxes for these galaxies are approximately a few times $10^{-11}$ cm$^{-2}$ s$^{-1}$.
For the boost model with the power-law extrapolation of the concentration-mass relation and a subhalo spatial distribution that tracks the underlying host mass distribution, we estimate a $\sim 20\%$ chance of a $\sim 3\sigma$ dark matter gamma-ray signal from Draco after 5 years of observation. This expectation is highlighted in Figure~\ref{fig:1Dfluxes}. We note that given the observed uniformity in the central density of the dark matter halos of the dwarfs \cite{Strigari:2008ib}, it should be possible to stack them and increase the signal-to-noise. \item We have provided an updated calculation of the minimum halo mass in the CMSSM. Our results broadly agree with those of~\citet{Profumo:2006bv}, and are typically lower than the masses in~\citet{Bertschinger06}, who only considered bino-like neutralinos. We find that the minimum mass CDM halo lies in the range $10^{-9}-10^{-6}$ M$_\odot$ (see Figure~\ref{fig:m0}). The inclusion of this effect results in slightly larger boost estimates. \end{itemize} The methods presented here provide a concrete methodology for addressing the uncertainties inherent in dark matter indirect detection from both the particle physics and astrophysics perspectives. The focus on Segue 1 and Draco was meant to illustrate how two vastly different kinematic data sets affect the predictions for the gamma-ray flux from dark matter annihilations. It is very interesting to note that Fermi observations of the dwarf satellites of the Milky Way could be relevant for constraining the supersymmetric parameter space. \ack We acknowledge support from the NSF for this work through grants AST-0607746 and PHY-0555689. LES is supported by NASA through Hubble Fellowship grant $\#$HF-01225.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555.